Comment on GC Votes to Revise SDA Fundamental #6 on Creation by Sean Pitman.


Yes, this paper was recommended earlier in this same thread. – Sean Pitman

You must not actually be looking at these papers. This is a later review paper, from a recent book, that summarises the state of the art on flagellar evolution.

Indeed, but by the same author as the 2007 paper and with the same basic ideas and conclusions…

What we have here is enormous momentum in reconstructing the actual evolutionary trajectory. This provides an increasing volume of evidence that the flagellum was constructed piece by piece from materials that were already present in earlier bacterial species, involving processes that include gene duplication, horizontal gene transfer, and gene loss.

I have no problem with the arguments for horizontal transfer or gene loss. The mechanisms for such events are well within the powers of known mindless biological processes. The only problem I have here is with the concept that gene duplications combined with other types of mutations can produce, from a single ancestral gene mind you, such high levels of functional complexity without the direction of deliberate design.

Phylogenetic arguments based on nested hierarchical patterns do not solve this problem nor do they favor the common descent hypothesis over common design given that NHPs are produced by humans all the time for various reasons.

You evidently do not understand that an argument against the possibility of natural selection constructing a certain kind of system is not an argument for design.

Yes, it is. The demonstration that no known mindless force of nature can produce a phenomenon that is known to be within the creative potential of intelligent minds is a very good argument in favor of ID as the most likely explanation of origin.

Yet again I ask you to tell me the basis for the hypotheses of design behind the sciences of anthropology, forensics, and even SETI. Can you tell me the scientific basis for the ID arguments used in these scientific disciplines?

A paper delimiting the bounds of what selection in a certain environment can achieve in a certain amount of time falls squarely within the tradition of population genetics and ought to be publishable if it is sound. Indeed, your arguments, if good ones, ought to be capable of being formulated in the mathematical framework of population genetics. So, I invite you to do so.

I have formulated the argument in a mathematical framework with the assumption of very large populations, high reproductive rates, and maximum viable mutation rates. The problem is that any argument that is seen to be challenging the fundamental dogma of Darwinism, the mechanism of RM/NS, is not going to be received favorably by any mainstream publication.

Consider the comments of John C. Sanford of Cornell University in this regard. Sanford is a well-known geneticist and inventor (of the gene gun) – a former atheist who moved from theistic evolution to creationism – who wrote a controversial book, Genetic Entropy and the Mystery of the Genome (2005). In this book he writes:

“Late in my career, I did something which for a Cornell professor would seem unthinkable. I began to question the Primary Axiom [random mutation/natural selection]. I did this with great fear and trepidation. By doing this, I knew I would be at odds with the most “sacred cow” of modern academia. Among other things, it might even result in my expulsion from the academic world…
To my own amazement, I gradually realized that the seemingly “great and unassailable fortress” which has been built up around the primary axiom is really a house of cards. The Primary Axiom is actually an extremely vulnerable theory, in fact it is essentially indefensible. Its apparent invincibility derives mostly from bluster, smoke, and mirrors. A large part of what keeps the Axiom standing is an almost mystical faith, which the true-believers have in the omnipotence of natural selection…
Furthermore, I began to see that this deep-seated faith in natural selection was typically coupled with a degree of ideological commitment which can only be described as religious. I started to realize (again with trepidation) that I might be offending a lot of people’s religion!…
If the Primary Axiom is wrong, then there is a surprising and very practical consequence. When subjected only to natural forces, the human genome must irrevocably degenerate over time. Such a sober realization should have more than just intellectual or historical significance. It should rightfully cause us to personally reconsider where we should rationally be placing our hope for the future.”

– John C. Sanford

Sanford is right you know. There is a great deal of passion generated in the scientific community when anyone tries to challenge the sacred cow of mindless natural selection combined with random mutations as the amazingly creative force that it has been trumped up to be.

The situation is this. Our evidence is that the phylogenetic history of the flagellar system maps smoothly onto the physical structure of the system. The evolutionary hypothesis is that the flagellar structure evolved, which neatly explains this congruence. You have admitted that there is no design hypothesis capable of explaining this evidence. Therefore, this particular piece of evidence supports evolution over design.

Where have I “admitted that there is no design hypothesis capable of explaining [NHPs]”? Have I not explained over and over again that such patterns are well within the production capabilities of ID? – and are in fact produced by human designers? Did I not refer you to Object Oriented Computer Programming? – which does in fact produce NHPs? – without the use of common descent?

There are reasons to produce NHPs in functionally integrated designs. Such patterns aid in the conservation of design and reduce the overall size of the program or genome content needed to code for the functional systems in question. They also have their own symmetry or beauty which some designers find inherently logical, predictable, and even structurally attractive.
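To make the Object Oriented Programming point concrete, here is a minimal, purely illustrative sketch (the class names are invented for this example) showing how designed inheritance produces a nested hierarchical pattern that can be recovered from shared characters – no common descent involved:

```python
# A designed class hierarchy: each subclass inherits shared code from its
# parent, producing a nested hierarchical pattern (NHP) purely by design.
class Vehicle:
    wheels = 0
    def describe(self):
        return f"{type(self).__name__}: {self.wheels} wheels"

class MotorVehicle(Vehicle):
    engine = True

class Car(MotorVehicle):
    wheels = 4

class Motorcycle(MotorVehicle):
    wheels = 2

class Bicycle(Vehicle):
    wheels = 2

def lineage(cls):
    """Return the chain of ancestors, e.g. Car -> MotorVehicle -> Vehicle."""
    return [c.__name__ for c in cls.__mro__ if c is not object]

print(lineage(Car))      # ['Car', 'MotorVehicle', 'Vehicle']
print(lineage(Bicycle))  # ['Bicycle', 'Vehicle']
```

The nesting here is recoverable from the classes themselves (shared attributes and methods), just as biological nesting is recovered from shared characters – which is the sense in which designers routinely produce NHPs.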

Beyond this, phylogenetic trees built for bacteria and other single-celled organisms are notoriously contradictory – dependent upon the sequence chosen for analysis.

“When full DNA sequences opened the way to comparing other kinds of genes, researchers expected that they would simply add detail to this tree. But “nothing could be further from the truth,” says Claire Fraser, head of The Institute for Genomic Research (TIGR) in Rockville, Maryland. Instead, the comparisons have yielded many versions of the tree of life that differ from the rRNA tree and conflict with each other as well . . . ”

Elizabeth Pennisi, Is It Time to Uproot the Tree of Life? Science, vol. 284, no. 5418, 21 May 1999, p. 1305

In 1998 biologist Carl Woese, an early pioneer in constructing rRNA-based phylogenetic trees, lamented the problem by writing:

“No consistent organismal phylogeny has emerged from the many individual protein phylogenies so far produced. Phylogenetic incongruities can be seen everywhere in the universal tree, from its root to the major branchings within and among the various taxa to the makeup of the primary groupings themselves. . . Clarification of the phylogenetic relationships of the major animal phyla has been an elusive problem, with analyses based on different genes and even different analyses based on the same genes yielding a diversity of phylogenetic trees.”

In 1999 Philippe and Forterre wrote an article entitled, “The rooting of the universal tree of life is not reliable” in which they made the following comments:

“The addition of new sequences to data sets has often turned apparently reasonable phylogenies into confused ones. . . In general, the two prokaryotic domains were not monophyletic with several aberrant groupings at different levels of the tree. Furthermore, the respective phylogenies contradicted each others, so that various ad hoc scenarios (paralogy or lateral gene transfer) must be proposed in order to obtain the traditional Archaebacteria-Eukaryota sisterhood.”

In other words, the phylogenetic trees for various aspects of the supposed “roots” of the “Tree of Life” are not rooted in the same place and produce different contradictory “relationships” that do not form nice NHPs. Therefore, the argument that the timing of the evolution of various component parts of various bacterial systems can be reliably determined based on phylogenetic analysis is called into question.

To put it another way, the basic problem is that evolutionary mechanisms are used to explain both hierarchical and non-hierarchical patterns. No matter how high up the tree this lack of hierarchy goes, the theory of common descent via RM/NS would still be used to explain the origin of such patterns. For focal problems between branches at higher levels of the tree, a change in mutation rate or notions like convergence, divergence, or lateral gene transfer are invoked.

The fact is that the theory of evolution cannot be falsified by either a universal or a focal lack of nested hierarchy. Beyond this, the hierarchical classification method was first introduced by creationists, not evolutionists. So, to say that hierarchical patterns, when present, definitely support the theory of evolution over intelligent design theory is erroneous. The theory of evolution does not predict hierarchical patterns any more than does intelligent design theory.

Again, nested hierarchies can be found all the time in human designs.

However, the death knell to this whole thing is the fact that most of these phylogenetic trees are based on functional genetic sequences. That messes everything up. Evolutionists would have a much stronger case if the sequences in question were actually neutral with regard to phenotypic function, but they aren’t. That is why the notion of “pseudogenes” was so popular for such a long time – until recently when pseudogenes were actually found to be functional. What this means is that the differences are clustered or nested because of the different functional needs of different organisms in different environments.

Many types of functional proteins shared between very different creatures, like cytochrome c, are quite similar overall. In fact, certain key positions are highly conserved. The differences are also quite interesting in that they are maintained over thousands and even millions of generations. This means that most of the differences for such sequences are not neutral, but are indeed functional. In a protein that is otherwise so similar, it wouldn’t take much change to arrive at a new sequence if the new sequence were more functionally beneficial or “optimal”.

Biologist Leonard Brand makes this point quite eloquently in the following excerpt:

“Anatomy is not independent of biochemistry. Creatures similar anatomically are likely to be similar physiologically. Those similar in physiology are, in general, likely to be similar in biochemistry, whether they evolved or were designed. An alternate, interventionist hypothesis is that the cytochrome c molecules in various groups of organisms are different (and always have been different) for functional reasons. Not enough mutations have occurred in these molecules to blur the distinct grouping evident. If we do not base our conclusions on the a priori assumption of megaevolution, all the data really tell us is that the organisms fall into nested groups without any indication of intermediates or overlapping of groups, and without indicating ancestor/descendant relationships.”

Brand, Leonard. 1997. Faith, Reason, and Earth History. Andrews University Press, Berrien Springs, MI.

So, classification models of living things that are based on molecular similarities and differences are quite limited as far as their use as evidence of common ancestry beyond very recent times. Many differences that are maintained seem to be function based. Because of this, certain differences in sequences cannot be used as a “molecular clock” since natural selection fixes certain sequences based on functional needs so that random drift is not allowed. Beyond this, very different phylogenetic relationships can be hypothesized depending upon which sequence is subjectively chosen for analysis. These different trees are often outright incompatible with each other or, at best, inconclusive.

I am interested in your claim that the design hypothesis makes predictions. Can you specify some? Note that if they are predictions that are supposed to tell against evolutionary theory, they must be predictions about which evolutionary theory would disagree.

What predictions does the SETI hypothesis for ID make? What predictions do forensic hypotheses for ID make? What about anthropology predictions for the recognition of ID?

The common “prediction” is that a particular phenomenon that is hypothesized to be a true “artifact” will not be shown by later discoveries to have a non-artificial origin via some as yet unknown non-deliberate force of nature…

For example, what “prediction” can be made about the most likely origin of a highly symmetrical polished granite cube that measures one meter on each side?

Sean Pitman

Sean Pitman Also Commented

GC Votes to Revise SDA Fundamental #6 on Creation

I am probably going to write far too much but if you want the conclusion, it is that Sean Pitman is completely and utterly wrong in everything he says in his comments and displays a great ignorance of proteins and their structure and function.


I hope the above short essay on protein structure and function is useful even to Sean Pitman who needs to stop being obsessed with computer-based numerology and do some reading and talk to some practical protein scientists.

From David Dryden of the University of Edinburgh. See: http://groups.google.com/group/talk.origins/msg/a7f670c859772a9b

Ah, so you’ve read Dryden’s arguments…

Where did Dryden point out my ignorance of protein structure and function? I am, after all, a pathologist with a subspecialty in hematopathology – a field of medicine that depends quite heavily on at least some understanding of protein structure and function. Yet Dryden says that I’m completely and utterly wrong in everything I say on this topic? Sounds just a bit overwrought – don’t you think?

In any case, where did Dryden substantively address my argument for an exponential decline of evolutionary potential with increasing minimum structural threshold requirements? Dryden himself only deals with very low level examples of evolution in action. He doesn’t even consider the concept of higher levels of functional complexity and the changes in the ratios of beneficial vs. non-beneficial sequences that would be realized in sequence space.

Dryden also completely misunderstands the challenge of the structural cutoff of systems that require a minimum of at least 1000 specifically arranged amino acid residues to work to do a particular function. He also flatly contradicts Axe’s work, which suggests that it is not an easy thing to alter too many amino acid residue positions at the same time and still have the system in question work to do its original function. There is some flexibility to be sure, but there is a limit beyond which this flexibility cannot be crossed for protein-based systems. And, as this minimum limit increases for higher level systems, the ratio of beneficial vs. non-beneficial does in fact decrease exponentially. Dryden seems completely clueless on this particular all-important point.

This cluelessness is especially highlighted by Dryden’s comment that the bacterial rotary flagellum isn’t very complex at all:

“These increasing degrees of functional complexity are a mirage. Just because a flagellum spins and looks fancy does not mean it is more complex than something smaller. The much smaller wonderful machines involved in manipulating DNA, making cell walls or cytoskeletons during the cell’s lifecycle do far more complex and varied things including switching between functions. Even a small serine protease has a much harder job than the flagellum. The flagellum just spins and spins and yawn…”

I really couldn’t believe that Dryden actually said this when I first read it. Dryden actually suggests that a small serine protease is more functionally complex than a bacterial flagellum?! – just because it is used more commonly in various metabolic pathways? – or more interesting to Dryden? He completely misses the point that the bacterial flagellum requires, at minimum, a far far greater number of specifically arranged amino acid “parts” than does a serine protease – thousands more.

And Dryden is your “expert” regarding the potential of RM/NS to create protein-based systems beyond very low levels of functional complexity? Why not find somebody who actually seems to understand the basic concept?

Here’s another gem from Dryden. In response to my comment that, “The evidence shows that the distances [in sequence space] between higher and higher level beneficial sequences with novel functions increases in a linear manner,” Dryden wrote:

“Reply: What evidence? And if importance of function scales with sequence length and the scaling is linear then I am afraid that 20^100 is essentially identical to 2 x 20^100. Also a novel function is not a new function but just one we stumble upon in doing the hard work in the lab. It’s been there a long time…”

Dryden doesn’t grasp that in the debate over the creative potential of RM/NS that a novel functional system is one that the evolving population is looking for – not some lab scientists. It is only there in the potential of sequence space. It is not found until random mutations within the gene pool discover it by pure luck.

Dryden also doesn’t understand that this discussion isn’t over the “importance of function” but over levels of beneficial functionality – regardless of their “importance”. He also doesn’t understand that if a system requires a minimum sequence length or size (to include multiprotein systems), along with a minimum degree of specific arrangement of amino acid residues within that minimum size, then a linear increase in this minimum structural threshold requirement does not produce a merely linear increase in the average number of random mutations needed to achieve success. The linear increase in structural threshold results in an exponential decrease in the ratio of potentially beneficial vs. non-beneficial sequences. This, obviously (to the candid mind anyway), results in an exponential increase in the average number of random mutations needed to achieve success at the higher level.
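The arithmetic behind this linear-vs-exponential point can be sketched with a deliberately simplified toy model. Assume, purely for illustration, that at each required position some fixed average fraction f of the 20 amino acids is compatible with function (the value 0.25 below is a hypothetical placeholder, not a measured quantity):

```python
# Toy model (illustrative only): if an average fraction f of amino acids is
# acceptable at each required position, the fraction of length-L sequences
# that work is roughly f**L.
f = 0.25  # hypothetical per-position tolerance; the true value is unknown

def beneficial_ratio(L, f=f):
    """Rough fraction of sequence space of length L that is functional."""
    return f ** L

# A *linear* increase in the minimum size L produces an *exponential*
# decrease in the ratio of beneficial to non-beneficial sequences:
for L in (100, 200, 300):
    print(L, beneficial_ratio(L))

# Each additional 100 residues multiplies the ratio by f**100 (~6e-61 with
# f = 0.25): the decline is geometric, not linear.
```

Whatever the true value of f, so long as it is below 1 the same structure holds: adding a constant amount to L multiplies the ratio by a constant factor less than 1, which is exactly what exponential decay means.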

Really, I would love to hear your take on Dryden’s paper in the light of a complete lack of evolution in action beyond very very low levels of functional complexity – i.e., minimum structural threshold requirements. I’m sure you could do a better job than he did…

Sean Pitman

GC Votes to Revise SDA Fundamental #6 on Creation

I’ll reply to your comments over on the special thread I created for this particular discussion regarding the anti-ID arguments of Elliot Sober:


Sean Pitman

GC Votes to Revise SDA Fundamental #6 on Creation

So, do you or do you not accept that, regarding this specific question, the design hypothesis predicts that we will not see congruence between the phylogenies (conditional on the two testable possibilities you provided having low probability)? If you do not, you owe us an explanation of why not, given your claim that the hypothesis is testable.

The “prediction” of ID is that only ID-driven mechanisms will be found to produce the phenomenon in question – that no non-intelligent mechanism will come remotely close to doing the job.

As I’ve mentioned to you before, you cannot “predict” any particular features of what a designer will do or would have done without direct knowledge of the designer in question. However, a lack of such direct knowledge does not remove the scientific ability to detect a true artifact when you see one with high predictive value.

This is the reason I’ve asked you to discuss the granite NHP problem I’ve presented. Instead, you’ve referred me, yet again, to the arguments of another without presenting any argument of your own or even commenting on those ideas that you consider to be most personally convincing to you.

My interest is in forcing you to make a prediction. You claimed you have one; we are all still waiting.

My claim was that evolutionists would have an easier time of things if functionality wasn’t involved in the ToL. The reason for this is that mindless mechanisms can produce NHPs – and do so all the time. However, mindless mechanisms are extremely unlikely to produce high levels of functional complexity in a reasonable amount of time and have never been observed to do so.

In short, some things you can’t predict; some things you can – – with regard to the ID-only hypothesis. You are asking me to predict those things that are not predictable from an ID perspective. You are then arguing that because such things are not predictable that ID cannot be scientifically detectable. This assumption of yours simply doesn’t follow for me…

Therefore, I’m interested in hearing you explain the logical basis behind various fields of science which invoke ID (such as anthropology, forensics, and SETI). What “predictions” are needed to support the ID hypothesis in those sciences? You don’t seem to want to personally address this question for some reason. Why not?

Regarding your reference to Elliot Sober, it would be more interesting for me if you would present your personal take on his arguments rather than simply referencing him without presenting any argument of your own.

But anyway, to get you started, I suggest that there are a number of logical flaws in Elliott Sober’s paper:

The anti-ID Arguments of Elliot Sober


For example, Sober presents the “inverse gambler’s fallacy”, noting that the fact that a pair of dice landed on double sixes the first few times they were observed to be rolled does not mean that a roll of double sixes is any more likely on the next throw. After all, Sober argues, the odds of rolling double sixes are 1/36 regardless of how many times double sixes are initially observed to be rolled in a row.

The problem here is that Sober assumes, a priori, that the dice are actually “fair” dice that haven’t been loaded or biased in any way. The assumption of fair dice is a hypothesis that can be subjected to testing and potential statistical falsification simply by observing the outcome of a number of rolls – without ever knowing, for sure, whether the dice are loaded. Based on the statistical pattern alone one can gain very high predictive value for the hypothesis that the dice are in fact loaded or biased vs. the alternate hypothesis that they are actually fair. Observations of this kind have been used very successfully by careful gamblers to detect and exploit subtle biases in roulette wheels, dice, and other games of chance whose house odds depend on the assumption of true randomness.

Can such biases be determined with absolute certainty? – based only on the patterns produced and nothing else? Of course not! But science isn’t about perfection; it is about determining useful degrees of predictive value that are always open to additional testing and potential falsification by future information.
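Here is a minimal sketch of how the “fair dice” hypothesis becomes statistically testable in practice: compare observed roll counts against the uniform expectation with a Pearson chi-square statistic (the counts below are made-up examples; the critical value for 5 degrees of freedom at p = 0.05 is about 11.07):

```python
# Test the "fair die" hypothesis from observed counts alone.
def chi_square(observed_counts):
    """Pearson chi-square statistic against a uniform expectation."""
    n = sum(observed_counts)
    expected = n / len(observed_counts)
    return sum((o - expected) ** 2 / expected for o in observed_counts)

CRITICAL_5DF_P05 = 11.07  # chi-square critical value, 5 df, p = 0.05

fair_looking = [98, 105, 102, 95, 99, 101]  # counts of faces 1-6
loaded_looking = [80, 85, 90, 85, 80, 180]  # face 6 comes up far too often

for counts in (fair_looking, loaded_looking):
    stat = chi_square(counts)
    verdict = "reject 'fair'" if stat > CRITICAL_5DF_P05 else "consistent with 'fair'"
    print(f"chi^2 = {stat:.1f}: {verdict}")
```

Nothing about the dice’s internals is inspected; the inference of bias comes entirely from the pattern of outcomes, with a quantifiable (never absolute) degree of confidence.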

This addresses yet another flaw in Sober’s paper. Sober accuses IDists of appealing to the concept of “modus tollens”, or the absolute perfection of the ID hypothesis. He uses the illustration of a million monkeys randomly typing on typewriters eventually producing all of the works of Shakespeare. He argues that while such a scenario is extremely unlikely, it isn’t statistically impossible. There is still a finite probability of success.

While this is true, science doesn’t go with what is merely possible, but what is probable given the available evidence at hand. This is the reason why nobody reading a Shakespearean sonnet would think that it was the product of any kind of mindless random production. The same would be true if you were to walk out of your house and see that the pansies in your front yard had spelled out the phrase, “Good Morning. We hope you have a great day!”

Given such a situation you would never think that such a situation occurred by any non-deliberate mindless process of nature. You would automatically assume deliberate design. Why? Do you know?

Sober argues that if a known designer is not readily available to explain a given phenomenon, then the likelihood that a designer was responsible is just as remote as the notion that a mindless process was responsible for such an unlikely event. Therefore, there is essentially no rational basis to assume intelligent design. However, by the same argument, there would be no rational basis to assume non-intelligent design either.

The detail that Sober seems to selectively overlook is that if certain features fall within the known creative potential of known intelligent agents (i.e., humans) while being well outside of the realm of all known non-deliberate forces of nature, the most rational conclusion is that of ID.

Essentially, Sober does away with all bases for hypothesizing ID behind anything for which an intelligent agent is not directly known – which covers essentially all of modern science that deals with ID, including anthropology, forensic science, and especially SETI. Yet, amazingly, he goes on to use this very same argument in support of the ID-detecting abilities of those same fields.

In the end, it seems like Sober is more concerned that the specific identity of the designer not be “God” than with the idea that the scientific inference of some kind of intelligent designer, to explain certain kinds of phenomena, is in fact overwhelmingly reasonable – scientifically.

Ultimately, it seems to me like Sober’s arguments are really directed against the detection of God, not intelligent design…

In this line Sober writes:

The upshot of this point for Paley’s design argument is this: Design arguments for the existence of human (and human-like) watchmakers are often unproblematic; it is design arguments for the existence of God that leave us at sea.

– Elliot Sober

Of course, my ID-only hypothesis does not try to demonstrate the need for God. Rather it suggests that at least human-level intelligence had to have been involved to explain certain features of the universe and of life on this planet. It doesn’t attempt to argue that a God or God-like intelligence had to have been involved. In fact, it is impossible for the finite to prove the need for the infinite. However, one may argue that from a given finite perspective a particular phenomenon would require the input of a creative intelligence that would be indistinguishable from a God or God-like creative power.

At this point, a belief that such a God-like creator is in fact omnipotent is not unreasonable, but must be based, not on demonstration, but on trust in the testimony of this Creative Power. If a God-like creative power personally claims to be “The” God of all, Omnipotent in every way, it would be very hard for someone from my perspective to reasonably argue otherwise…

Anyway, your thoughts regarding what seems so convincing to you about Sober’s “arguments” would be most interesting – especially as they apply to granite NHPs or other such “artifacts”…

Sean Pitman

Recent Comments by Sean Pitman

Updating the SDA Position on Abortion
We are talking about something a bit more subtle here than the question as to what is merely “alive” and what is not “alive” – in the most basic sense of the term (as in the skin cell on my earlobe is “alive” and “human”). What we are talking about here are the qualitative differences between a single cell or a small cluster of unformed cells and a human being that can think and feel and appreciate sensory input. Now, someone might have their own personal ideas as to when, exactly, the human soul is achieved during embryogenesis, but the fact remains that the Bible is not clear on this particular question – and even includes passages suggesting that there is a spectrum of moral value to the human embryo/fetus. Consider the passage in Exodus 21:22-25, for example, which seems to many to suggest such a spectrum of value where the unformed fetus is not given the same value as a fully formed baby or the life of the mother (especially if read from the ancient LXX Greek translation which appears to be the most accurate translation of the original Hebrew text).

Because of this, there actually appears to me to be a great deal of disagreement among honest and sincere Bible-believing Christian medical professionals, embryologists, and theologians (modern and ancient) over when, exactly, during embryogenesis, humanity becomes fully realized. Given the information currently in hand, I certainly could not, in good conscience, accuse a woman of “murder” for using various forms of birth control that end pregnancy within the first few days after conception (such as intrauterine devices or birth control pills).

Would you actually be willing to take action on this as you would for any other cold-blooded murderer? Would you be willing to accuse such a woman of wanton murder with all the guilt that is involved with such an accusation? or put her in prison for life for such an intentional act? Could you really do this? I certainly could not for the use of such standard forms of birth control during the earliest days following conception. Yet, that is what the current language of the church’s guidelines on abortion suggest… that these women who take such forms of birth control are in fact guilty of a heinous act of cold-blooded murder.

Updating the SDA Position on Abortion
“These latest guidelines appear to me to put the SDA Church in the same position as the Catholic Church on this topic – with human life beginning at the moment of conception. That doesn’t seem like a reasonable position.” – Sean Pitman (From an Adventist Review article: https://www.adventistreview.org/church-news/story14133-statement-on-the-biblical-view-of-unborn-life-and-its-implications-for-abortion)

Yes, it is also in harmony with where every SDA pioneer, including the church’s founder, put the beginning of human life. It is also the same point at which virtually every secular, evolutionary embryologist puts the beginning of human life. In fact, it’s the unanimous point of agreement among biology textbooks. If that doesn’t sound like a reasonable position based on empirical evidence, then factual reasoning has escaped those who try to equate preventing the conception of a human being with intentional killing of one that is already biologically and detectably in existence.

Brilliant and Beautiful, but Wrong
Thank you Wes. Really appreciate your note and being able to see you again!

Complex Organisms are Degenerating – Rapidly
As far as the current article is concerned, I know of no “outdated” information. The information is current as far as I’m aware. The detrimental mutation rate is far too high for complex organisms to avoid an inevitable downhill devolutionary path. There is simply no way to rationally avoid this conclusion as far as I’m aware.

So, perhaps your friend could be more specific regarding his particular objections to the information presented?

Complex Organisms are Degenerating – Rapidly
Look again. I did reference the 2018 paper of Basener and Sanford (which was the motivation for me writing this particular article). Of course, as you’ve mentioned, Sanford has also written an interesting book on this topic entitled, “Genetic Entropy” – which I’ve previously referenced before in this blog (along with a YouTube video of a lecture he gave on the topic at Loma Linda University: (Link). For those who haven’t read it or seen Sanford’s lecture on this topic, it’s certainly worth your time…