Comment on GC Votes to Revise SDA Fundamental #6 on Creation by Sean Pitman.

@Brad:

@Sean Pitman:

Regarding transitions, the key statement in your reply is this: “The actual series of transitions is not needed before the problem can be known to be insurmountable”.

Let’s be clear. You acknowledge that you are not in a position to exhaustively enumerate the possible transitions between bacterial species, and thereby demonstrate that there is no path through which natural selection could travel with reasonable probability. Nevertheless, you claim to know that there exists no such path. That is, it’s not just that you believe there exists no such path, or have some evidence that there exists no such path; rather you claim to know that there exists no such path. Well, as I said before, I invite you to submit your reasons to a peer reviewed scientific journal, and if you do I would very much like to see the reports of the referees.

I invite you to explain the statistical odds that such a path really exists, a path that is actually crossable via RM/NS this side of a practical eternity of time, beyond very low levels of functional complexity.

You are the one arguing that the existence of such a path is scientifically supported. Where? Where is the existence of such a path supported as being statistically likely beyond very low levels of functional complexity? The evolution scenarios you referenced for the assumed flagellar evolution pathways suggest steppingstones that are not remotely close enough together for RMs to step across in a reasonable amount of time (i.e., this side of trillions of years of time). Nowhere in these papers are any statistical analyses given for the time it would likely take to produce the functional changes needed to get from one steppingstone to the next in their proposed pathway.

Oh, but perhaps there are more steppingstones within the pathway that are yet to be discovered? – undiscovered steppingstones within sequence space that are in fact much closer together? This is a possibility, but the odds of such a situation are extremely poor given the vanishingly small ratio of viable vs. non-viable sequences at these higher levels of functional complexity.

Science isn’t based on what might be discovered in the future. Science is based on what is known right now. And, what is known right now is that beneficial sequences are distributed in an essentially uniform manner throughout sequence space and that they experience an exponential decline in relative numbers compared to non-viable sequences with each increase in minimum structural threshold requirements.
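
To make the arithmetic of this claim concrete, here is a toy sketch (the parameters are made up purely for illustration, not measured values) of how a geometrically shrinking ratio of beneficial to total sequences translates into an exponentially growing random search:

```python
# Toy model: if the fraction of beneficial sequences shrinks by a
# constant factor with each step up in the minimum structural
# threshold, the expected number of random trials needed to find
# one beneficial sequence grows exponentially.

BASE_RATIO = 1e-3  # assumed beneficial fraction at the lowest level (illustrative)
DECAY = 1e-3       # assumed multiplicative penalty per threshold level (illustrative)

for level in range(1, 6):
    ratio = BASE_RATIO * DECAY ** (level - 1)
    expected_trials = 1 / ratio  # mean trials for a blind uniform search
    print(f"level {level}: beneficial ratio = {ratio:.0e}, "
          f"expected random trials = {expected_trials:.0e}")
```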

This problem, though quite real and seemingly obvious, is the reason why there have been no observed examples of evolution in action at such levels of functional complexity. None of the proposed steppingstones for flagellar evolution, for example, have been crossed in real time. Why not? Because the gap distances between these steppingstones are statistically huge – very unlikely to be crossed, even with very large population numbers, this side of trillions upon trillions of years of time.

It is somewhat of a mystery, therefore, that this exponential non-beneficial gap problem is not discussed in the mainstream literature.

Regarding design rationale, you wave your hands at design reasons and aesthetic reasons. But I want you to provide a specific hypothesis that you think is credible. Why would the designer have made the flagellar system so that it looks for all the world like it was assembled piece by piece in evolutionary sequence? Did they want to fool us?

The “evolutionary sequence” is a nested hierarchical pattern (NHP). Common descent is not the only reason for the existence of such patterns. As previously noted, such patterns are also deliberately produced by design – including the design of interacting functional elements, as in object-oriented computer programming. I fail to see how I’m just “waving my hands” by referring to such examples of NHPs.
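
To make the programming analogy concrete, here is a minimal sketch (in Python, with hypothetical class names, not drawn from any real codebase) of how deliberately designed, reused components automatically yield a nested hierarchical pattern:

```python
# A deliberately designed class hierarchy: each subclass inherits the
# "traits" of its parent, so grouping objects by shared traits
# recovers a nested hierarchical pattern (NHP) – by design, not descent.

class Vehicle:
    def traits(self):
        return {"wheels"}

class MotorVehicle(Vehicle):
    def traits(self):
        return super().traits() | {"engine"}

class Car(MotorVehicle):
    def traits(self):
        return super().traits() | {"four_doors"}

class Motorcycle(MotorVehicle):
    def traits(self):
        return super().traits() | {"handlebars"}

# Every Car trait set contains every MotorVehicle trait, which in turn
# contains every Vehicle trait – a strictly nested pattern.
print(Car().traits())         # {'wheels', 'engine', 'four_doors'}
print(Motorcycle().traits())  # {'wheels', 'engine', 'handlebars'}
```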

And, yet again, the fact that a particular mechanism can explain the existence of a pattern does not demonstrate that this same mechanism produced all aspects of the phenomenon associated with that pattern.

Again, both intelligent design and common descent can explain NHPs. In determining the need to invoke an ID-only hypothesis, one does not need to know anything about the actual motive of the designer. All one needs to know is that some aspect of the phenomenon in question is far beyond any known non-deliberate force of nature while being well within the creative powers of intelligent design.

For example, say that a Mars rover happened to come across a highly symmetrical polished granite cube measuring 10 x 10 x 10 meters. Such a cube, if actually discovered, would be hailed by scientists themselves as clear evidence of a deliberately produced artifact – clearly artifactual even if the actual process by which it was made, and the motives of the intelligent agent(s) who made it, remained unknown.

How can I be so sure? Because such a cube is well beyond the known creative processes of the non-deliberate forces of nature, yet well within the known creative potential of intelligent designers.

The very same thing is true of biosystem complexity. Beyond very low levels of functional complexity there is no force of mindless nature that can be used as a rational explanation. RM/NS has simply proven untenable and there is no other known mindless force to take its place. All that is left to explain higher levels of functional complexity is very high level ID.

PS. Just curious. Have you really read the Rudwick book?

I have only read excerpts and reviews of this particular book, but I would like to read the whole thing eventually.

From what I understand of Rudwick’s main arguments so far, he sees the geologic and fossil evidence as clearly indicating that “species had died out in piecemeal fashion and not as the result of one main catastrophe.” He also discusses Charles Lyell’s approach to geology at length. Of course, Lyell viewed the evolution of the Earth as a slow and continuous/uniform process. Also, as Rudwick points out, Lyell’s uniformitarian views and “actual causes” heavily influenced Darwin’s own thinking. And Rudwick suggests that William Buckland’s “diluvianism” notions and the concept of a worldwide biblical flood were clearly falsified by mainstream science through the discoveries of the 1800s and 1900s.

As far as I am aware, Rudwick does not discuss the many problems with uniformitarian thinking, or the massive problems with the current mainstream model of the geologic column and fossil record as representing hundreds of millions of years of time. While catastrophic concepts are finally beginning to reemerge within mainstream thinking, vast periods of time are still thought to exist between those layers that were clearly deposited in a catastrophic manner. Such modern theories also seem to me to have enormous problems when it comes to explaining many features of the geologic column and fossil record – and even certain genetic features.

So, between now and the time that I actually get to read Rudwick, why don’t you explain to me at least some of the various features I have listed on my website that seem to challenge modern interpretations of the geologic column and fossil record? – and strongly favor a catastrophic model of origins?

http://www.detectingdesign.com/geologiccolumn.html#Counter
http://www.detectingdesign.com/fossilrecord.html

Sean Pitman
www.DetectingDesign.com

Sean Pitman Also Commented

GC Votes to Revise SDA Fundamental #6 on Creation
@Brad:

I am probably going to write far too much but if you want the conclusion, it is that Sean Pitman is completely and utterly wrong in everything he says in his comments and displays a great ignorance of proteins and their structure and function.

And:

I hope the above short essay on protein structure and function is useful even to Sean Pitman who needs to stop being obsessed with computer-based numerology and do some reading and talk to some practical protein scientists.

From David Dryden of the University of Edinburgh. See: http://groups.google.com/group/talk.origins/msg/a7f670c859772a9b

Ah, so you’ve read Dryden’s arguments…

Where did Dryden point out my ignorance of protein structure and function? I am, after all, a pathologist with a subspecialty in hematopathology – a field of medicine that depends quite heavily on at least some understanding of protein structure and function. Yet Dryden says that I’m completely and utterly wrong in everything I say on this topic? Sounds just a bit overwrought – don’t you think?

In any case, where did Dryden substantively address my argument for an exponential decline of evolutionary potential with increasing minimum structural threshold requirements? Dryden himself only deals with very low level examples of evolution in action. He doesn’t even consider the concept of higher levels of functional complexity and the changes in the ratios of beneficial vs. non-beneficial sequences that would be realized in sequence space.

Dryden also completely misunderstands the challenge posed by the structural cutoff of systems that require a minimum of at least 1000 specifically arranged amino acid residues to perform a particular function. He also flatly contradicts Axe’s work, which suggests that it is not an easy thing to alter too many amino acid residue positions at the same time and still have the system in question perform its original function. There is some flexibility to be sure, but there is a limit beyond which this flexibility cannot be crossed for protein-based systems. And, as this minimum limit increases for higher-level systems, the ratio of beneficial vs. non-beneficial sequences does in fact decrease exponentially. Dryden seems completely clueless on this particular all-important point.

This cluelessness is especially highlighted by Dryden’s comment that the bacterial rotary flagellum isn’t very complex at all:

These increasing degrees of functional complexity are a mirage. Just because a flagellum spins and looks fancy does not mean it is more complex than something smaller. The much smaller wonderful machines involved in manipulating DNA, making cell walls or cytoskeletons during the cell’s lifecycle do far more complex and varied things including switching between functions. Even a small serine protease has a much harder job than the flagellum. The flagellum just spins and spins and yawn…

I really couldn’t believe that Dryden actually said this when I first read it. Dryden actually suggests that a small serine protease is more functionally complex than a bacterial flagellum?! – just because it is used more commonly in various metabolic pathways? or because it is more interesting to him? He completely misses the point that the bacterial flagellum requires, at minimum, a far, far greater number of specifically arranged amino acid “parts” than does a serine protease – thousands more.

And Dryden is your “expert” regarding the potential of RM/NS to create protein-based systems beyond very low levels of functional complexity? Why not find somebody who actually seems to understand the basic concept?

Here’s another gem from Dryden. In response to my comment that “The evidence shows that the distances [in sequence space] between higher and higher level beneficial sequences with novel functions increases in a linear manner,” Dryden wrote:

Reply: What evidence? And if importance of function scales with sequence length and the scaling is linear then I am afraid that 20^100 is essentially identical to 2 x 20^100. Also a novel function is not a new function but just one we stumble upon in doing the hard work in the lab. It’s been there a long time…

Dryden doesn’t grasp that, in the debate over the creative potential of RM/NS, a novel functional system is one that the evolving population is searching for – not some lab scientists. It exists only as a potential within sequence space. It is not found until random mutations within the gene pool discover it by pure luck.

Dryden also doesn’t understand that this discussion isn’t over the “importance of function” but over levels of beneficial functionality – regardless of their “importance”. He also doesn’t understand that if a system requires a minimum sequence length or size (to include multiprotein systems) and a minimum degree of specific arrangement of amino acid residues within that minimum size, then a linear increase in this minimum structural threshold requirement does not produce a merely linear increase in the average number of random mutations needed to achieve success. The linear increase in structural threshold results in an exponential decrease in the ratio of potentially beneficial vs. non-beneficial sequences. This, obviously (to the candid mind anyway), results in an exponential increase in the average number of random mutations needed to achieve success at the higher level.
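
A quick back-of-the-envelope sketch (assuming, purely for illustration, that each specifically constrained residue position tolerates a few of the 20 possible amino acids) shows why linear growth in the structural threshold produces exponential growth in the expected search:

```python
# Assume each specifically required residue position tolerates t of
# the 20 possible amino acids. The fraction of sequences satisfying
# k required positions is then (t/20)**k: k grows linearly, but the
# fraction decays exponentially, so the expected number of random
# trials (~ the reciprocal) grows exponentially.

t = 4  # assumed per-position tolerance (illustrative only)

for k in (10, 20, 50, 100):
    fraction = (t / 20) ** k
    print(f"k = {k:3d} constrained positions: "
          f"fraction = {fraction:.1e}, expected trials ~ {1 / fraction:.1e}")
```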

Really, I would love to hear your take on Dryden’s paper in light of the complete lack of observed evolution in action beyond very, very low levels of functional complexity – i.e., minimum structural threshold requirements. I’m sure you could do a better job than he did…

Sean Pitman
www.DetectingDesign.com


GC Votes to Revise SDA Fundamental #6 on Creation
@Brad:

I’ll reply to your comments over on the special thread I created for this particular discussion regarding the anti-ID arguments of Elliott Sober:

http://www.educatetruth.com/la-sierra-evidence/elliot-sober-just-dont-call-the-designer-god/

Sean Pitman
www.DetectingDesign.com


GC Votes to Revise SDA Fundamental #6 on Creation
@Brad:

So, do you or do you not accept that, regarding this specific question, the design hypothesis predicts that we will not see congruence between the phylogenies (conditional on the two testable possibilities you provided having low probability)? If you do not, you owe us an explanation of why not, given your claim that the hypothesis is testable.

The “prediction” of ID is that only ID-driven mechanisms will be found to produce the phenomenon in question – that no non-intelligent mechanism will come remotely close to doing the job.

As I’ve mentioned to you before, you cannot “predict” any particular features of what a designer will do or would have done without direct knowledge of the designer in question. However, a lack of such direct knowledge does not remove the scientific ability to detect, with high predictive value, a true artifact when you see one.

This is the reason I’ve asked you to discuss the granite NHP problem I’ve presented. Instead, you’ve referred me, yet again, to the arguments of another without presenting any argument of your own, or even commenting on the ideas you personally find most convincing.

My interest is in forcing you to make a prediction. You claimed you have one; we are all still waiting.

My claim was that evolutionists would have an easier time of things if functionality wasn’t involved in the ToL. The reason for this is that mindless mechanisms can produce NHPs – and do so all the time. However, mindless mechanisms are extremely unlikely to produce high levels of functional complexity in a reasonable amount of time and have never been observed to do so.

In short, with regard to the ID-only hypothesis, some things you can’t predict and some things you can. You are asking me to predict those things that are not predictable from an ID perspective. You are then arguing that, because such things are not predictable, ID cannot be scientifically detectable. This assumption of yours simply doesn’t follow for me…

Therefore, I’m interested in hearing you explain the logical basis behind various fields of science which invoke ID (such as anthropology, forensics, and SETI). What “predictions” are needed to support the ID hypothesis in those sciences? You don’t seem to want to personally address this question for some reason. Why not?

Regarding your reference to Elliot Sober, it would be more interesting for me if you would present your personal take on his arguments rather than simply referencing him without presenting any argument of your own.

But anyway, to get you started, I suggest that there are a number of logical flaws in Elliott Sober’s paper:

The anti-ID Arguments of Elliott Sober

http://philosophy.wisc.edu/sober/design%20argument%2011%202004.pdf

For example, Sober presents the “inverse gambler’s fallacy”, noting that it would be a logical error to assume that, just because a pair of dice landed on double sixes the first few times they were observed to be rolled, a roll of double sixes is therefore more likely. After all, Sober argues, the odds of rolling double sixes are 1/36 regardless of how many times double sixes are initially observed to be rolled in a row.

The problem here is that Sober assumes, a priori, that the dice are actually “fair” dice that haven’t been loaded or biased in any way. The assumption of fair dice is a hypothesis that can be subjected to testing and potential statistical falsification simply by observing the outcome of a number of rolls – without knowing for sure whether the dice are or are not loaded. Based on the statistical pattern alone, one can gain very high predictive value for the hypothesis that the dice are in fact loaded or biased vs. the alternate hypothesis that they are actually fair. Carefully observant gamblers have used such observations very successfully to exploit subtle biases in roulette wheels, dice, and other games of chance – betting on the subtly biased pattern rather than on the random pattern the house assumes…
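
Here is a minimal sketch of such a test (with hypothetical counts, checking only for an excess of sixes): the “fair dice” assumption can be statistically rejected from the observed pattern alone, with no knowledge of how the dice were made.

```python
from math import comb

def binomial_upper_tail(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical observation: 60 sixes in 180 rolls of a single die.
rolls, sixes = 180, 60
p_fair = 1 / 6  # chance of a six under the fair-die hypothesis

p_value = binomial_upper_tail(sixes, rolls, p_fair)
print(f"Expected sixes if fair: {rolls * p_fair:.0f}; observed: {sixes}")
print(f"P(>= {sixes} sixes | fair die) = {p_value:.1e}")
# A vanishingly small p-value warrants rejecting the fair-die
# hypothesis based on the statistical pattern alone.
```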

Can such biases be determined with absolute certainty? – based only on the patterns produced and nothing else? Of course not! But science isn’t about perfection; it is about determining useful degrees of predictive value that are always open to additional testing and potential falsification by future information.

This addresses yet another flaw in Sober’s paper. Sober accuses IDists of relying on a probabilistic form of “modus tollens” – rejecting the chance hypothesis outright merely because the observed outcome would be extremely improbable under it. He uses the illustration of a million monkeys randomly typing on typewriters eventually producing all of the works of Shakespeare. He argues that while such a scenario is extremely unlikely, it isn’t statistically impossible. There is still a finite probability of success.

While this is true, science doesn’t go with what is merely possible, but with what is probable given the available evidence at hand. This is the reason why nobody reading a Shakespearean sonnet would think that it was the product of any kind of mindless random production. The same would be true if you were to walk out of your house and see that the pansies in your front yard had spelled out the phrase, “Good Morning. We hope you have a great day!”

Given such a situation, you would never think that the arrangement had occurred by any non-deliberate mindless process of nature. You would automatically assume deliberate design. Why? Do you know?
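
To put rough numbers on the difference between “merely possible” and “probable” (a toy calculation assuming a simplified 27-key typewriter – 26 letters plus a space – and ignoring punctuation and case):

```python
# Probability that a single random keystroke sequence reproduces one
# short Shakespearean line, under the simplified 27-key assumption.

line = "shall i compare thee to a summers day"
keys = 27
p_match = (1 / keys) ** len(line)

print(f"Characters to match: {len(line)}")
print(f"P(one random attempt matches) = {p_match:.1e}")
print(f"Expected attempts needed ~ {1 / p_match:.1e}")
# Finite, so "possible" – but so improbable that no one reading an
# actual sonnet would attribute it to random typing.
```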

Sober argues that if a known designer is not readily available to explain a given phenomenon, then the likelihood that a designer was responsible is just as remote as the notion that a mindless process was responsible for such an unlikely event – and that there is therefore essentially no rational basis to assume intelligent design. However, by the same argument, there would be no rational basis to assume non-intelligent design either.

The detail that Sober seems to selectively overlook is that if certain features fall within the known creative potential of known intelligent agents (i.e., humans) while being well outside of the realm of all known non-deliberate forces of nature, the most rational conclusion is that of ID.

Essentially, Sober does away with all bases for hypothesizing ID behind anything for which an intelligent agent is not directly known. This covers essentially all of modern science that deals with ID – including anthropology, forensic science, and especially SETI. Yet, amazingly, he goes on to use this very same argument in support of the ID-detecting abilities of these same sciences.

In the end, it seems that Sober is more concerned with ensuring that the specific identity of the designer is not “God” than with the question of whether the scientific inference that some kind of intelligent designer is needed to explain certain kinds of phenomena is in fact overwhelmingly reasonable – scientifically.

Ultimately, it seems to me like Sober’s arguments are really directed against the detection of God, not intelligent design…

Along these lines, Sober writes:

The upshot of this point for Paley’s design argument is this: Design arguments for the existence of human (and human-like) watchmakers are often unproblematic; it is design arguments for the existence of God that leave us at sea.

– Elliott Sober

Of course, my ID-only hypothesis does not try to demonstrate the need for God. Rather, it suggests that at least human-level intelligence had to have been involved to explain certain features of the universe and of life on this planet. It doesn’t attempt to argue that a God or God-like intelligence had to have been involved. In fact, it is impossible for the finite to prove the need for the infinite. However, one may argue that, from a given finite perspective, a particular phenomenon would require the input of a creative intelligence that would be indistinguishable from a God or God-like creative power.

At this point, a belief that such a God-like creator is in fact omnipotent is not unreasonable, but must be based, not on demonstration, but on trust in the testimony of this Creative Power. If a God-like creative power personally claims to be “The” God of all, Omnipotent in every way, it would be very hard for someone from my perspective to reasonably argue otherwise…

Anyway, your thoughts regarding what seems so convincing to you about Sober’s “arguments” would be most interesting – especially as they apply to granite NHPs or other such “artifacts”…

Sean Pitman
www.DetectingDesign.com


Recent Comments by Sean Pitman

Updating the SDA Position on Abortion
We are talking about something a bit more subtle here than the question as to what is merely “alive” and what is not “alive” – in the most basic sense of the term (as in the skin cell on my earlobe is “alive” and “human”). What we are talking about here are the qualitative differences between a single cell or a small cluster of unformed cells and a human being that can think and feel and appreciate sensory input. Now, someone might have their own personal ideas as to when, exactly, the human soul is acquired during embryogenesis, but the fact remains that the Bible is not clear on this particular question – and even includes passages suggesting that there is a spectrum of moral value to the human embryo/fetus. Consider the passage in Exodus 21:22-25, for example, which seems to many to suggest such a spectrum of value, where the unformed fetus is not given the same value as a fully formed baby or the life of the mother (especially if read from the ancient LXX Greek translation, which appears to be the most accurate translation of the original Hebrew text).

Because of this, there actually appears to me to be a great deal of disagreement among honest and sincere Bible-believing Christian medical professionals, embryologists, and theologians (modern and ancient) over when, exactly, during embryogenesis humanity becomes fully realized. Given the information currently in hand, I certainly could not, in good conscience, accuse a woman of “murder” for using various forms of birth control that end pregnancy within the first few days after conception (such as intrauterine devices or birth control pills).

Would you actually be willing to take action on this as you would for any other cold-blooded murderer? Would you be willing to accuse such a woman of wanton murder, with all the guilt that is involved in such an accusation? Or put her in prison for life for such an intentional act? Could you really do this? I certainly could not for the use of such standard forms of birth control during the earliest days following conception. Yet, that is what the current language of the church’s guidelines on abortion suggests… that women who use such forms of birth control are in fact guilty of a heinous act of cold-blooded murder.


Updating the SDA Position on Abortion
“These latest guidelines appear to me to put the SDA Church in the same position as the Catholic Church on this topic – with human life beginning at the moment of conception. That doesn’t seem like a reasonable position.” – Sean Pitman (from an Adventist Review article: https://www.adventistreview.org/church-news/story14133-statement-on-the-biblical-view-of-unborn-life-and-its-implications-for-abortion)

Yes, it is also in harmony with where every SDA pioneer, including the church’s founder, placed the beginning of human life. It is also the same point at which virtually every secular, evolutionary embryologist puts the beginning of human life. In fact, it’s the unanimous point of agreement in every biology textbook. If that doesn’t sound like a reasonable position based on empirical evidence, then factual reasoning has escaped those who try to equate preventing the conception of a human being with the intentional killing of one that is already biologically and detectably in existence.


Brilliant and Beautiful, but Wrong
Thank you Wes. Really appreciate your note and being able to see you again!


Complex Organisms are Degenerating – Rapidly
As far as the current article is concerned, I know of no “outdated” information. The information is current as far as I’m aware. The detrimental mutation rate is far too high for complex organisms to avoid an inevitable downhill devolutionary path. There is simply no way to rationally avoid this conclusion.

So, perhaps your friend could be more specific regarding his particular objections to the information presented?


Complex Organisms are Degenerating – Rapidly
Look again. I did reference the 2018 paper of Basener and Sanford (which was the motivation for me writing this particular article). Of course, as you’ve mentioned, Sanford has also written an interesting book on this topic entitled “Genetic Entropy” – which I’ve referenced before in this blog (along with a YouTube video of a lecture he gave on the topic at Loma Linda University: (Link)). For those who haven’t read it or seen Sanford’s lecture on this topic, it’s certainly worth your time…