Comment on GC Votes to Revise SDA Fundamental #6 on Creation by Sean Pitman.

@Brad:

Before I do so, note that I am leaving aside two topics that were generated by our exchange on Liu and Ochman. On the whole core flagellum gene complex evolving by gene duplication I agree with Sean that the evidence is far from convincing, and I thank him for referring me to some of the more recent literature on this.

No problem…

I stand by my earlier claim that there is wide-ranging evidence that gene duplication plays an important role in evolutionary change (and indeed, even the critics of Liu and Ochman think that some of the core flagellar genes clearly evolved by multiple rounds of duplication).

The “wide-ranging evidence” for the role of gene duplication beyond very low levels of functional complexity is based entirely on the same assumption used by Liu and Ochman – that sequence similarities necessitate a common evolutionary origin, without consideration of other means of common origin. We all agree that the evidence supports a common origin of some kind. The disagreement here is over what kind of common origin was most likely.

On the reliability of phylogenetic methods at the root of the tree of life, I now think we were talking past one another. I took Sean to be arguing that the difficulties of inference at the root infected the whole tree, but I now think he was only arguing that it made inference regarding the flagellum difficult. I agree here. I disagree that the problems are intractable, but we can leave that for another time.

Indeed, this was my argument…

Now, on to where I think the more interesting questions lie.

ONE
I would like a commitment from Sean to send his in-principle argument about the limits of natural selection to a peer reviewed scientific journal. I have argued repeatedly that there is no reason to think that such arguments are not given a fair hearing by the scientific community (indeed, Axe’s work itself shows this), and I think that the best way to improve scientific arguments is to expose them to the experts.

You’ve mentioned this several times before, and I do note your advice here, though I do not share your optimism regarding an unbiased vetting process or the lack of personal passion when it comes to what is and what is not published on hot-button topics. The fairly recent global warming e-mail scandal should be enough to open your eyes to this whole problem with publishing in mainstream journals which are clearly hostile to non-party-line ideas. But, perhaps, a more benignly worded paper with more limited conclusions as to the meaning of the data could yet be accepted for publication along these lines.

Until then, however, let’s deal with your own personal counterarguments…

TWO
“…What this means is that the differences are clustered or nested because of the different functional needs of different organisms in different environments.” – Sean Pitman

This suggests the following prediction. Suppose that there is a group of species which are discovered to have enough non-functional sequences to ground a phylogenetic inference made purely on that basis. Sean, do you predict that this phylogeny is no more likely than chance to agree with the phylogeny produced by whole genome analysis? The quoted remark suggests that you will. As you know, evolutionary theory predicts agreement. You should be especially willing to make the prediction given your scepticism about the existence of non-functional sequences. I on the other hand regard the existence of many such sequences as established to a reasonably high degree of confidence. Of course, we would have to agree on criteria for determining that a particular sequence is non-functional, but there are good ways of doing so that I think we can ultimately agree on. For now, I just want you to accept this prediction, or to reject it and propose another instead. I remind you that you were the one who claimed that the design hypothesis makes predictions.
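As an aside on the statistics behind this congruence test (a sketch of my own using the standard combinatorial result, with a helper function named purely for illustration): the number of distinct unrooted binary trees on n labeled taxa is (2n−5)!!, the product of the odd numbers up to 2n−5, which is why two independently derived phylogenies agreeing by chance rapidly becomes astronomically unlikely as taxa are added.

```python
# Count unrooted binary trees on n labeled taxa: (2n-5)!!.
# Chance agreement between two independently derived trees is roughly
# one in this number, so congruence is a demanding statistical test.

def num_unrooted_trees(n):
    """Number of unrooted binary trees on n labeled taxa (n >= 3)."""
    result = 1
    for k in range(3, 2 * n - 4, 2):  # product of odd numbers up to 2n-5
        result *= k
    return result

print(num_unrooted_trees(4))   # 3 possible trees for 4 taxa
print(num_unrooted_trees(10))  # 2,027,025 for just 10 taxa
```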

Your suggestion that neutral DNA will show the same phylogenetic “relationships” as functional DNA is problematic because many if not most sequences previously thought to be neutral, or nearly so, are showing evidence of functionality and at least some influence by NS. Please refer to the following commentary in this regard:

In fact, the most detailed probe yet into the workings of the human genome has led scientists to conclude [as of June 14, 2007] that a cornerstone concept about the chemical code for life is badly flawed. Reporting in the British journal Nature and the US journal Genome Research on Thursday [June 14, 2007], they suggest that an established theory about the genome should be consigned to history.
In between the genes and the sequences known to regulate their activity are long, tedious stretches that appear to do nothing. The term for them is “junk” DNA, reflecting the presumption that they are merely driftwood from our evolutionary past and have no biological function. But the work by the ENCODE (ENCyclopaedia of DNA Elements) consortium implies that this nuggets-and-dross concept of DNA should be, well, junked.
The genome turns out to be a highly complex, interwoven machine with very few inactive stretches, the researchers report. Genes, it transpires, are just one of many types of DNA sequences that have a functional role. And “junk” DNA turns out to have an essential role in regulating the protein-making business. Previously written off as silent, it emerges as a singer with its own discreet voice, part of a vast, interacting molecular choir.
“The majority of the genome is copied, or transcribed, into RNA, which is the active molecule in our cells, relaying information from the archival DNA to the cellular machinery,” said Tim Hubbard of the Wellcome Trust Sanger Institute, a British research group that was part of the team. “This is a remarkable finding, since most prior research suggested only a fraction of the genome was transcribed.”
Francis Collins, director of the US National Human Genome Research Institute (NHGRI), which corralled 35 scientific groups from around the world into the ENCODE project, said the scientific community “will need to rethink some long-held views about what genes are and what they do.”

ENCODE Project Consortium et al., “Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project,” Nature 447, 799-816 (14 June 2007); Richard Ingham, “Landmark study prompts rethink of genetic code,” Yahoo News, accessed June 15, 2007

“We fooled ourselves into thinking the genome was going to be a transparent blueprint, but it’s not,” says Mel Greaves, a cell biologist at the Institute of Cancer Research in Sutton, UK. Instead, as sequencing and other new technologies spew forth data, the complexity of biology has seemed to grow by orders of magnitude. Delving into it has been like zooming into a Mandelbrot set — a space that is determined by a simple equation, but that reveals ever more intricate patterns as one peers closer at its boundary….
“It seems like we’re climbing a mountain that keeps getting higher and higher,” says Jennifer Doudna, a biochemist at the University of California, Berkeley. “The more we know, the more we realize there is to know.”…
Researchers from an international collaborative project called the Encyclopedia of DNA Elements (ENCODE) showed that in a selected portion of the genome containing just a few per cent of protein-coding sequence, between 74% and 93% of DNA was transcribed into RNA. Much non-coding DNA has a regulatory role; small RNAs of different varieties seem to control gene expression at the level of both DNA and RNA transcripts in ways that are still only beginning to become clear. “Just the sheer existence of these exotic regulators suggests that our understanding about the most basic things — such as how a cell turns on and off — is incredibly naive,” says Joshua Plotkin, a mathematical biologist at the University of Pennsylvania in Philadelphia.

Erika Check Hayden, “Human genome at ten: Life is complicated,” Nature 464, 664-667, published online 31 March 2010

So, the notion that it is easy to determine which sequences are actually “neutral” with respect to functionality is just a bit premature, I would think. Beyond this, truly neutral sequences should not be maintained very long within a genome. They should be fairly rapidly scrambled by random mutations over time. The existence of non-scrambled sequences over vast periods of time would be evidence of beneficial functionality, not neutrality.

Also, if all living things were in fact produced within recent history – within the last 10,000 years or so – then neutral sequences would show much the same nested hierarchy as functional sequences, since there would not have been enough time for random mutations to really scramble the neutral sequences…
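The timescale point can be put in rough numbers (an illustration of my own, using a standard neutral-drift approximation rather than a figure from either side of this exchange): the expected per-site divergence between two lineages separated for t generations, with per-site mutation rate mu, is about 1 − exp(−2·mu·t), ignoring corrections for multiple hits and back-mutation.

```python
from math import exp

def expected_divergence(mu, generations):
    """Expected fraction of neutral sites differing between two lineages
    separated for the given number of generations (toy approximation)."""
    return 1 - exp(-2 * mu * generations)

# Assuming a rate on the order of 1e-8 per site per generation:
print(expected_divergence(1e-8, 1e4))  # ~0.0002 after 10,000 generations
print(expected_divergence(1e-8, 1e8))  # ~0.86 after 100 million
```

On this toy model, a few thousand generations barely scrambles a neutral sequence, while deep-time separations scramble it almost completely – which is the contrast the paragraph above turns on.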

THREE
In general I am interested in your response to my basic argument that you quite nicely summarised in an earlier post. I will attempt to re-state that argument here, correcting some points on which I would state things a little differently.

I claim it is reasonable to believe that all biological systems have evolved by natural mechanisms from a single common ancestor. Regarding the relative importance of natural selection, I remain neutral. The evidence for this claim is overwhelming, and has been steadily increasing since Darwin. The most powerful recent evidence comes from the molecular structure of the genome, and some quite detailed possible sequences for the evolution of complex structures have been proposed, including for the flagellum. Moreover, the prevalence of homologies throughout the genome suggests that such sequences can be completed. Nevertheless, these sequences are not yet complete, in the sense that it is not the case that every step has been detailed. In light of all this gathering evidence, any argument purporting to provide in-principle reasons to doubt that any natural mechanism could have produced this pattern of evolution is highly unlikely to be sound.

It is interesting to me that you claim to believe in mindless natural mechanisms as the source of all, or at least most, of the diversity of life while at the same time remaining “neutral” regarding the involvement of natural selection as the driving natural force. In other words, you believe that the mechanism was some mindless natural process even though you are, at best, ambivalent as to which mindless natural mechanism might be viable?

This is interesting because this was Darwin’s real claim to fame – the proposal of what many believed at the time, and still believe today, to be a viable naturalistic mechanism to explain the origin of the diversity of living things. Without the evident viability of this mechanism, it is arguable that Darwin would never have become famous and that naturalism, with its emphasis on mindless natural mechanisms being responsible for all that we see around and within us, would never have taken hold of the scientific community as it has today. It is, after all, the Darwinian mechanism that allowed for, as Dawkins put it, “intellectually fulfilled” atheism…

Yet, you consider the evidence of the patterns themselves, without regard to associated functionality, to support your conclusion of a mindless natural mechanism. In this line, consider the following hypothetical situation. Let’s say that one of our Mars rovers happens across a collection of granite rocks, and that these rocks come in a variety of interesting shapes and sizes. All are highly symmetrical and polished to a high level of precision. Some are highly symmetrical cubes. Others form highly symmetrical tetrahedrons. Yet others come in a large variety of other highly symmetrical geometric shapes of various sizes. Let’s say that these granite objects could be classified very nicely into nested hierarchical patterns (NHPs) very similar to how living things are classified. What would this tell us about their origin? Would the pattern clearly indicate a mindless natural origin?

Now, you may ask why any intelligent designer would have produced such a “natural” pattern using the medium of granite, but does one really need to know why an intelligent designer would do anything before high-level intelligence can be invoked to explain the origin of certain phenomena?

Again I ask you, what is the basis of anthropology, forensic science, or even SETI when it comes to ID hypotheses?

If there is no known mindless naturalistic mechanism to explain a highly symmetrical granite cube, while there are many known intelligence-driven mechanisms to explain such a phenomenon, what is the most reasonable “scientific” conclusion regarding the origin of this granite cube – regardless of the NHP pattern of granite objects in the vicinity?

Sean Pitman
www.DetectingDesign.com

Sean Pitman Also Commented

GC Votes to Revise SDA Fundamental #6 on Creation
@Brad:

I am probably going to write far too much but if you want the conclusion, it is that Sean Pitman is completely and utterly wrong in everything he says in his comments and displays a great ignorance of proteins and their structure and function.

And:

I hope the above short essay on protein structure and function is useful even to Sean Pitman who needs to stop being obsessed with computer-based numerology and do some reading and talk to some practical protein scientists.

From David Dryden of the University of Edinburgh. See: http://groups.google.com/group/talk.origins/msg/a7f670c859772a9b

Ah, so you’ve read Dryden’s arguments…

Where did Dryden point out my ignorance of protein structure and function? I am, after all, a pathologist with a subspecialty in hematopathology – a field of medicine that depends quite heavily on at least some understanding of protein structure and function. Yet Dryden says that I’m completely and utterly wrong in everything I say on this topic? Sounds just a bit overwrought – don’t you think?

In any case, where did Dryden substantively address my argument for an exponential decline of evolutionary potential with increasing minimum structural threshold requirements? Dryden himself only deals with very low level examples of evolution in action. He doesn’t even consider the concept of higher levels of functional complexity and the changes in the ratios of beneficial vs. non-beneficial sequences that would be realized in sequence space.

Dryden also completely misunderstands the challenge posed by systems that require a minimum of at least 1000 specifically arranged amino acid residues to perform a particular function. He also flatly contradicts Axe’s work, which suggests that it is not an easy thing to alter too many amino acid residue positions at the same time and still have the system in question perform its original function. There is some flexibility to be sure, but there is a limit beyond which this flexibility cannot be crossed for protein-based systems. And, as this minimum limit increases for higher-level systems, the ratio of beneficial vs. non-beneficial sequences does in fact decrease exponentially. Dryden seems completely clueless on this particular all-important point.

This cluelessness is especially highlighted by Dryden’s comment that the bacterial rotary flagellum isn’t very complex at all:

These increasing degrees of functional complexity are a mirage. Just because a flagellum spins and looks fancy does not mean it is more complex than something smaller. The much smaller wonderful machines involved in manipulating DNA, making cell walls or cytoskeletons during the cell’s lifecycle do far more complex and varied things including switching between functions. Even a small serine protease has a much harder job than the flagellum. The flagellum just spins and spins and yawn…

I really couldn’t believe that Dryden actually said this when I first read it. Dryden actually suggests that a small serine protease is more functionally complex than a bacterial flagellum?! – just because it is used more commonly in various metabolic pathways? – or is more interesting to Dryden? He completely misses the point that the bacterial flagellum requires, at minimum, a far, far greater number of specifically arranged amino acid “parts” than does a serine protease – thousands more.

And Dryden is your “expert” regarding the potential of RM/NS to create protein-based systems beyond very low levels of functional complexity? Why not find somebody who actually seems to understand the basic concept?

Here’s another gem from Dryden. In response to my comment that, “The evidence shows that the distances [in sequence space] between higher and higher level beneficial sequences with novel functions increases in a linear manner,” Dryden wrote:

Reply: What evidence? And if importance of function scales with sequence length and the scaling is linear then I am afraid that 20^100 is essentially identical to 2 x 20^100. Also a novel function is not a new function but just one we stumble upon in doing the hard work in the lab. It’s been there a long time…

Dryden doesn’t grasp that, in the debate over the creative potential of RM/NS, a novel functional system is one that the evolving population is searching for – not some lab scientists. It exists only as a potential within sequence space. It is not found until random mutations within the gene pool discover it by pure luck.

Dryden also doesn’t understand that this discussion isn’t over the “importance of function” but over levels of beneficial functionality – regardless of their “importance.” He also doesn’t understand that if a system requires a minimum sequence length or size (to include multiprotein systems), and a minimum degree of specific arrangement of amino acid residues within that minimum size, then a linear increase in this minimum structural threshold requirement does not result in a merely linear increase in the average number of random mutations needed to achieve success. The linear increase in structural threshold results in an exponential decrease in the ratio of potentially beneficial vs. non-beneficial sequences. This, obviously (to the candid mind anyway), will result in an exponential increase in the average number of random mutations needed to achieve success at the higher level.
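The linear-to-exponential relationship being argued here can be put in a minimal numerical sketch (my own toy model with an assumed per-position tolerance, not measured data): if each of L required residue positions independently tolerated some fraction f of the 20 amino acids, the fraction of sequence space that works would be f^L, so a linear increase in L yields an exponential decrease in the beneficial vs. non-beneficial ratio.

```python
# Toy model: each of `length` positions independently accepts a fraction
# `tolerance` of possible residues, so the functional fraction of
# sequence space is tolerance**length - exponential decay in length.

def functional_fraction(length, tolerance=0.5):
    """Fraction of length-L sequences functional under the toy model."""
    return tolerance ** length

for L in (100, 300, 1000):
    print(L, functional_fraction(L))
```

Whatever the true per-position tolerance is, so long as it is less than 1 the same shape follows: doubling the minimum size requirement squares the rarity rather than doubling it, which is the point 20^100 vs. 2 × 20^100 misses.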

Really, I would love to hear your take on Dryden’s paper in light of the complete lack of observed evolution in action beyond very, very low levels of functional complexity – i.e., beyond minimal structural threshold requirements. I’m sure you could do a better job than he did…

Sean Pitman
www.DetectingDesign.com


GC Votes to Revise SDA Fundamental #6 on Creation
@Brad:

I’ll reply to your comments over on the special thread I created for this particular discussion regarding the anti-ID arguments of Elliot Sober:

http://www.educatetruth.com/la-sierra-evidence/elliot-sober-just-dont-call-the-designer-god/

Sean Pitman
www.DetectingDesign.com


GC Votes to Revise SDA Fundamental #6 on Creation
@Brad:

So, do you or do you not accept that, regarding this specific question, the design hypothesis predicts that we will not see congruence between the phylogenies (conditional on the two testable possibilities you provided having low probability)? If you do not, you owe us an explanation of why not, given your claim that the hypothesis is testable.

The “prediction” of ID is that only ID-driven mechanisms will be found to produce the phenomenon in question – that no non-intelligent mechanism will come remotely close to doing the job.

As I’ve mentioned to you before, you cannot “predict” any particular features of what a designer will do or would have done without direct knowledge of the designer in question. However, a lack of such direct knowledge does not remove the scientific ability to detect a true artifact when you see one with high predictive value.

This is the reason I’ve asked you to discuss the granite NHP problem I’ve presented. Instead, you’ve referred me, yet again, to the arguments of another without presenting any argument of your own or even commenting on the ideas that you personally find most convincing.

My interest is in forcing you to make a prediction. You claimed you have one; we are all still waiting.

My claim was that evolutionists would have an easier time of things if functionality wasn’t involved in the ToL. The reason for this is that mindless mechanisms can produce NHPs – and do so all the time. However, mindless mechanisms are extremely unlikely to produce high levels of functional complexity in a reasonable amount of time and have never been observed to do so.

In short, some things you can’t predict and some things you can – with regard to the ID-only hypothesis. You are asking me to predict those things that are not predictable from an ID perspective. You are then arguing that, because such things are not predictable, ID cannot be scientifically detectable. This assumption of yours simply doesn’t follow for me…

Therefore, I’m interested in hearing you explain the logical basis behind various fields of science which invoke ID (such as anthropology, forensics, and SETI). What “predictions” are needed to support the ID hypothesis in those sciences? You don’t seem to want to personally address this question for some reason. Why not?

Regarding your reference to Elliot Sober, it would be more interesting for me if you would present your personal take on his arguments rather than simply referencing him without presenting any argument of your own.

But anyway, to get you started, I suggest that there are a number of logical flaws in Elliott Sober’s paper:

The anti-ID Arguments of Elliot Sober

http://philosophy.wisc.edu/sober/design%20argument%2011%202004.pdf

For example, Sober presents the “inverse gambler’s fallacy,” noting that just because a pair of dice landed on double sixes the first few times they were observed to be rolled does not mean that a roll of double sixes is any more likely. After all, Sober argues, the odds of rolling double sixes are 1/36 regardless of how many times double sixes are initially observed to be rolled in a row.

The problem here is that Sober assumes, a priori, that the dice are actually “fair” dice that haven’t been loaded or biased in any way. Yet the assumption of fair dice is itself a hypothesis that can be subjected to testing and potential statistical falsification simply by observing the outcome of a number of rolls – without ever knowing, for sure, whether the dice are loaded. Based on the statistical pattern alone, one can gain very high predictive value regarding the hypothesis that the dice are in fact loaded or biased vs. the alternate hypothesis that they are actually fair. Such observations have been used very successfully by carefully observant gamblers to exploit subtle biases in roulette wheels, dice, and other games of chance – betting on a detected bias against the pattern of randomness that the house is counting on…
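The dice-testing point can be made concrete with a short sketch (an illustration of mine, not anything from Sober’s paper): under the fair-dice hypothesis, P(double six) = 1/36 per roll, and an exact binomial tail gives the probability of seeing a result at least as extreme as the one observed.

```python
from math import comb

def p_value_at_least(k, n, p=1/36):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    double sixes in n rolls if the dice are actually fair."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Ten double sixes in 36 rolls would be essentially impossible for fair
# dice (fair dice average about one), so fairness is falsified with high
# confidence from the pattern alone - without ever opening the dice.
print(p_value_at_least(10, 36))
```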

Can such biases be determined with absolute certainty – based only on the patterns produced and nothing else? Of course not! But science isn’t about perfection; it’s about determining useful degrees of predictive value that are always open to additional testing and potential falsification by future information.

This addresses yet another flaw in Sober’s paper. Sober accuses IDists of appealing to a probabilistic form of “modus tollens” – as if the ID inference required ruling out chance with absolute certainty. He uses the illustration of a million monkeys randomly typing on typewriters producing all of the works of Shakespeare. He argues that while such a scenario is extremely unlikely, it isn’t statistically impossible. There is still a finite probability of success.

While this is true, science doesn’t go with what is merely possible, but what is probable given the available evidence at hand. This is the reason why nobody reading a Shakespearean sonnet would think that it was the product of any kind of mindless random production. The same would be true if you were to walk out of your house and see that the pansies in your front yard had spelled out the phrase, “Good Morning. We hope you have a great day!”

Given such a situation, you would never think that it occurred by any non-deliberate, mindless process of nature. You would automatically assume deliberate design. Why? Do you know?

Sober argues that if a known designer is not readily available to explain a given phenomenon, that the likelihood that a designer was responsible is just as remotely unlikely as is the notion that a mindless process was responsible for such an unlikely event. Therefore, there is essentially no rational basis to assume intelligent design. However, by the same argument, there would be no rational basis to assume non-intelligent design either.

The detail that Sober seems to selectively overlook is that if certain features fall within the known creative potential of known intelligent agents (i.e., humans) while being well outside of the realm of all known non-deliberate forces of nature, the most rational conclusion is that of ID.

Essentially, Sober does away with all bases for hypothesizing ID behind anything for which an intelligent agent is not directly known. This essentially covers all of modern science that deals with ID – to include anthropology, forensic science, and especially SETI. Yet, amazingly, he goes on to use this very same argument in support of the ID-detecting abilities of these very sciences.

In the end, it seems that Sober is more concerned that the specific identity of the designer not be “God” than with the idea that the scientific inference of some kind of intelligent designer behind certain kinds of phenomena is in fact overwhelmingly reasonable – scientifically.

Ultimately, it seems to me like Sober’s arguments are really directed against the detection of God, not intelligent design…

In this line Sober writes:

The upshot of this point for Paley’s design argument is this: Design arguments for the existence of human (and human-like) watchmakers are often unproblematic; it is design arguments for the existence of God that leave us at sea.

– Elliot Sober

Of course, my ID-only hypothesis does not try to demonstrate the need for God. Rather, it suggests that at least human-level intelligence had to have been involved to explain certain features of the universe and of life on this planet. It doesn’t attempt to argue that a God or God-like intelligence had to have been involved. In fact, it is impossible for the finite to prove the need for the infinite. However, one may argue that, from a given finite perspective, a particular phenomenon would require the input of a creative intelligence that would be indistinguishable from a God or God-like creative power.

At this point, a belief that such a God-like creator is in fact omnipotent is not unreasonable, but must be based, not on demonstration, but on trust in the testimony of this Creative Power. If a God-like creative power personally claims to be “The” God of all, Omnipotent in every way, it would be very hard for someone from my perspective to reasonably argue otherwise…

Anyway, your thoughts regarding what seems so convincing to you about Sober’s “arguments” would be most interesting – especially as they apply to granite NHPs or other such “artifacts”…

Sean Pitman
www.DetectingDesign.com


Recent Comments by Sean Pitman

After the Flood
Thank you Ariel. Hope you are doing well these days. Miss seeing you down at Loma Linda. Hope you had a Great Thanksgiving!


The Flood
Thank you Colin. Just trying to save lives any way I can. Not everything that the government does or leaders do is “evil” BTW…


The Flood
Only someone who knows the future can make such decisions without being a monster…


Pacific Union College Encouraging Homosexual Marriage?
Where did I “gloss over it”?


Review of “The Naked Emperor” by Pastor Conrad Vine
I fail to see where you have convincingly supported your claim that the GC leadership contributed to the harm of anyone’s personal religious liberties – given that the GC leadership does not and could not override personal religious liberties in this country, nor substantively change the outcome for those who lost their jobs over various vaccine mandates. That’s just not how it works in this country. Religious liberties are personally derived. They simply are not based on a corporate or church position, but rely solely upon individual convictions – regardless of what the church may or may not say or do.

Yet you say, “Who cares if it is written into law?” You should care. Everyone should care. It’s a very important law in this country. The idea that the organized church could have changed vaccine mandates simply isn’t true – particularly given the nature of certain types of jobs dealing with the most vulnerable in society (health care workers, for example).

Beyond this, the GC Leadership did, in fact, write in support of personal religious convictions on this topic – and there are GC lawyers who have and continue to write personal letters in support of personal religious convictions (even if these personal convictions are at odds with the position of the church on a given topic). Just because the GC leadership also supports the advances of modern medicine doesn’t mean that the GC leadership cannot support individual convictions at the same time. Both are possible. This is not an inconsistency.