Comment on GC Votes to Revise SDA Fundamental #6 on Creation by Sean Pitman.

@Geanna Dane:

Here is something else I do not understand. You wrote this at Spectrummagazine.com:

It isn’t a change of 1000aa, it is any change, even a single amino acid change in any pre-existing sequence, that ends up hitting upon a new 1000aa system that has an attached function which itself requires at least 1000 fsaars to work. – Sean Pitman

Are you saying that a single amino acid change in a pre-existing sequence could lead to macroevolution? And that this single amino acid change would require “trillions upon trillions” of years to occur?

Yes, it is possible that a single amino acid change could produce a “macro” evolutionary change, since such a change could give rise to a qualitatively novel functional system that requires a minimum of more than 1000 specifically arranged amino acid residues to work. It is just very, very unlikely that such a thing would happen this side of trillions upon trillions of years.

For example, let’s just say that a 1000aa system already existed within the gene pool doing one particular type of function. Now, let’s say that a single point mutation happens to come along and change the system so that it gains the ability to do a completely different type of function. Let’s also say that this qualitatively unique function just so happened to require, at minimum, 1000 specifically arranged amino acid residues. Such a scenario would indeed qualify as “macroevolution” in my book.

The problem is, of course, that such an observation has never been made and is, statistically, very unlikely to be made this side of a practical eternity of time (outside of deliberate design that is).

Such events have been assumed by mainstream scientists based on phylogenetic similarities, but they have never been observed in real time. Lower-level evolution is commonly observed below the 300-400 fsaar level, but nothing even close to the 1000 fsaar level of functional complexity has been observed. The reason for this stalling-out effect of observable evolution in action is the exponential decline in the ratio of beneficial vs. non-beneficial sequences in sequence space at higher and higher levels of functional complexity…
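To state this claimed relationship in simple mathematical terms (the symbols here are purely illustrative placeholders, not measured values): let \(n\) be the minimum number of specifically arranged amino acid residues a system requires, and suppose, per the argument above, that the fraction of sequences at level \(n\) that are beneficial declines exponentially:

\[
f(n) \approx c \, b^{-n}, \qquad b > 1 .
\]

The average number of random mutations needed to stumble onto a beneficial sequence then scales as the reciprocal,

\[
T(n) \approx \frac{1}{f(n)} = \frac{b^{\,n}}{c},
\]

so a linear increase in \(n\) implies an exponential increase in the expected waiting time – the stalling-out effect just described.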

You also misquote me in your discussion of the evolution of fangs and venom. I never said that teeth or heat-sensing pits require less than 1000 fsaars at minimum. They require more. However, getting bigger or sharper teeth is not a qualitative change, but a quantitative informational change. It is like modifying an enzyme, such as lactase, so that it has greater activity. Such a modification is a quantitative change, not a qualitative change – as in going from a lactase enzyme to a nylonase enzyme.

Quantitative modifications are relatively easy to achieve via the mechanism of RM/NS in very short order. The same is not true of qualitatively unique functional changes and the difficulty increases exponentially with each step up the ladder of functional complexity.

If you had actually viewed the lecture video I gave on the origin of carnivores and parasites, you would have noted that such highly complex structures were based on front-loaded information. The qualitative informational complexity needed to produce these structures was already there, pre-loaded, in the original gene pool of options.

The sticking point that I noted in Spectrum, AToday, and here in this forum was over the origin of venom. Venom isn’t very functionally complex – not nearly as functionally complex as heat-sensing pits. The enzymatic activity needed for venom production is based on enzymes that require no more than a few hundred averagely specified amino acids at minimum. Such enzymatic activities evolve all the time in real time because, statistically, this level of evolutionary progress is very likely to be achieved in short order due to the relatively high ratio of potentially beneficial vs. non-beneficial sequences in sequence space at such low levels of functional complexity.

So, in the future, please try not to confuse the relatively simple venom of snakes with the far greater structural complexity of heat-sensing pits, etc.

Regarding the mainstream notion that the far more complex heat-sensing pits evolved in snakes after venom evolved, such notions are not based on a statistical understanding of the functional changes that would have to be realized over time. They are based on the mere assumption that RM/NS would be able to do the job, since, in their minds, the phylogenetic data cannot be interpreted in any other way besides common evolutionary descent over time.

No one even considers the idea that perhaps original intelligent design had to have been involved not only with the high-level functional system, but with the phylogenetic patterns as well…

Remember also that God is not the only creative intelligence in this universe that had the potential to manipulate gene pools on this planet…

Sean Pitman
www.DetectingDesign.com

Sean Pitman Also Commented

GC Votes to Revise SDA Fundamental #6 on Creation
@Brad:

I am probably going to write far too much but if you want the conclusion, it is that Sean Pitman is completely and utterly wrong in everything he says in his comments and displays a great ignorance of proteins and their structure and function.

And:

I hope the above short essay on protein structure and function is useful even to Sean Pitman who needs to stop being obsessed with computer-based numerology and do some reading and talk to some practical protein scientists.

From David Dryden of the University of Edinburgh. See: http://groups.google.com/group/talk.origins/msg/a7f670c859772a9b

Ah, so you’ve read Dryden’s arguments…

Where did Dryden point out my ignorance of protein structure and function? I am, after all, a pathologist with a subspecialty in hematopathology – a field of medicine that depends quite heavily on at least some understanding of protein structure and function. Yet Dryden says that I’m completely and utterly wrong in everything I say on this topic? Sounds just a bit overwrought – don’t you think?

In any case, where did Dryden substantively address my argument for an exponential decline of evolutionary potential with increasing minimum structural threshold requirements? Dryden himself only deals with very low-level examples of evolution in action. He doesn’t even consider the concept of higher levels of functional complexity or the changes in the ratio of beneficial vs. non-beneficial sequences that would be realized in sequence space.

Dryden also completely misunderstands the challenge posed by the structural cutoff of systems that require a minimum of at least 1000 specifically arranged amino acid residues to perform a particular function. He also flatly contradicts Axe’s work, which suggests that it is not an easy thing to alter too many amino acid residue positions at the same time and still have the system in question perform its original function. There is some flexibility to be sure, but there is a limit beyond which this flexibility cannot be pushed for protein-based systems. And, as this minimum limit increases for higher-level systems, the ratio of beneficial vs. non-beneficial sequences does in fact decrease exponentially. Dryden seems completely clueless on this particular all-important point.

This cluelessness is especially highlighted by Dryden’s comment that the bacterial rotary flagellum isn’t very complex at all:

These increasing degrees of functional complexity are a mirage. Just because a flagellum spins and looks fancy does not mean it is more complex than something smaller. The much smaller wonderful machines involved in manipulating DNA, making cell walls or cytoskeletons during the cell’s lifecycle do far more complex and varied things including switching between functions. Even a small serine protease has a much harder job than the flagellum. The flagellum just spins and spins and yawn…

I really couldn’t believe that Dryden actually said this when I first read it. Dryden actually suggests that a small serine protease is more functionally complex than a bacterial flagellum?! – just because it is used more commonly in various metabolic pathways? – or is more interesting to Dryden? He completely misses the point that the bacterial flagellum requires, at minimum, a far, far greater number of specifically arranged amino acid “parts” than does a serine protease – thousands more.

And Dryden is your “expert” regarding the potential of RM/NS to create protein-based systems beyond very low levels of functional complexity? Why not find somebody who actually seems to understand the basic concept?

Here’s another gem from Dryden. In response to my comment that “The evidence shows that the distances [in sequence space] between higher and higher level beneficial sequences with novel functions increases in a linear manner,” Dryden wrote:

Reply: What evidence? And if importance of function scales with sequence length and the scaling is linear then I am afraid that 20^100 is essentially identical to 2 x 20^100. Also a novel function is not a new function but just one we stumble upon in doing the hard work in the lab. It’s been there a long time…

Dryden doesn’t grasp that, in the debate over the creative potential of RM/NS, a novel functional system is one that the evolving population is looking for – not some lab scientists. It is only there in the potential of sequence space. It is not found until random mutations within the gene pool discover it by pure luck.

Dryden also doesn’t understand that this discussion isn’t over the “importance of function” but over levels of beneficial functionality – regardless of their “importance”. He also doesn’t understand that if a system requires a minimum sequence length or size (to include multiprotein systems) and a minimum degree of specific arrangement of amino acid residues within that minimum size, then a linear increase in this minimum structural threshold requirement does not result in a linear increase in the average number of random mutations needed to achieve success. The linear increase in structural threshold results in an exponential decrease in the ratio of potentially beneficial vs. non-beneficial sequences. This, obviously (to the candid mind anyway), results in an exponential increase in the average number of random mutations needed to achieve success at the higher level.
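To illustrate the shape of this claimed relationship numerically, here is a minimal sketch (the base and scale constants are hypothetical placeholders chosen only to show the trend, not empirically derived values):

```python
# Illustrative sketch: if the ratio of beneficial vs. non-beneficial
# sequences shrinks exponentially with the minimum structural threshold n,
# then the average number of random trials needed to find a beneficial
# sequence grows exponentially with n. The constants are hypothetical.

def beneficial_ratio(n: int, base: float = 1.05, scale: float = 1.0) -> float:
    """Assumed ratio of beneficial vs. non-beneficial sequences at threshold n."""
    return scale * base ** (-n)

def expected_trials(n: int) -> float:
    """Average number of random trials needed to hit a beneficial sequence."""
    return 1.0 / beneficial_ratio(n)

for n in (100, 300, 500, 1000):
    print(f"n = {n:4d}  ratio = {beneficial_ratio(n):.2e}  trials = {expected_trials(n):.2e}")
```

Each linear step up in n multiplies the expected number of trials by a constant factor – exactly the exponential behavior described above.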

Really, I would love to hear your take on Dryden’s paper in light of the complete lack of observed evolution in action beyond very, very low levels of functional complexity – i.e., minimum structural threshold requirements. I’m sure you could do a better job than he did…

Sean Pitman
www.DetectingDesign.com


GC Votes to Revise SDA Fundamental #6 on Creation
@Brad:

I’ll reply to your comments over on the special thread I created for this particular discussion regarding the anti-ID arguments of Elliott Sober:

http://www.educatetruth.com/la-sierra-evidence/elliot-sober-just-dont-call-the-designer-god/

Sean Pitman
www.DetectingDesign.com


GC Votes to Revise SDA Fundamental #6 on Creation
@Brad:

So, do you or do you not accept that, regarding this specific question, the design hypothesis predicts that we will not see congruence between the phylogenies (conditional on the two testable possibilities you provided having low probability)? If you do not, you owe us an explanation of why not, given your claim that the hypothesis is testable.

The “prediction” of ID is that only ID-driven mechanisms will be found to produce the phenomenon in question – that no non-intelligent mechanism will come remotely close to doing the job.

As I’ve mentioned to you before, you cannot “predict” the particular features of what a designer will do or would have done without direct knowledge of the designer in question. However, a lack of such direct knowledge does not remove the scientific ability to detect, with high predictive value, a true artifact when you see one.

This is the reason I’ve asked you to discuss the granite NHP problem I’ve presented. Instead, you’ve referred me, yet again, to the arguments of another without presenting any argument of your own or even commenting on the ideas you personally find most convincing.

My interest is in forcing you to make a prediction. You claimed you have one; we are all still waiting.

My claim was that evolutionists would have an easier time of things if functionality wasn’t involved in the ToL. The reason for this is that mindless mechanisms can produce NHPs – and do so all the time. However, mindless mechanisms are extremely unlikely to produce high levels of functional complexity in a reasonable amount of time and have never been observed to do so.

In short, some things you can’t predict and some things you can – with regard to the ID-only hypothesis. You are asking me to predict those things that are not predictable from an ID perspective. You are then arguing that because such things are not predictable, ID cannot be scientifically detectable. This assumption of yours simply doesn’t follow for me…

Therefore, I’m interested in hearing you explain the logical basis behind various fields of science which invoke ID (such as anthropology, forensics, and SETI). What “predictions” are needed to support the ID hypothesis in those sciences? You don’t seem to want to personally address this question for some reason. Why not?

Regarding your reference to Elliott Sober, it would be more interesting for me if you would present your personal take on his arguments rather than simply referencing him without presenting any argument of your own.

But anyway, to get you started, I suggest that there are a number of logical flaws in Elliott Sober’s paper:

The anti-ID Arguments of Elliott Sober

http://philosophy.wisc.edu/sober/design%20argument%2011%202004.pdf

For example, Sober presents the “inverse gambler’s fallacy,” noting that just because a pair of dice is observed to land on double sixes the first few times it is rolled does not mean that a roll of double sixes is more likely. After all, Sober argues, the odds of rolling double sixes are 1/36 regardless of how many times double sixes are initially observed to be rolled in a row.

The problem here is that Sober assumes, a priori, that the dice are actually “fair” dice that haven’t been loaded or biased in any way. The assumption of fair dice is a hypothesis that can be subjected to testing and potential statistical falsification simply by observing the outcome of a number of rolls – without ever knowing, for sure, whether the dice are loaded. Based on the statistical pattern alone, one can gain very high predictive value for the hypothesis that the dice are in fact loaded or biased vs. the alternate hypothesis that they are fair. Carefully observant gamblers have used exactly such observations to exploit subtle biases in roulette wheels, dice, and other games of chance – betting on the biased pattern while the house bets on apparent randomness…
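As a concrete illustration of how such a test might work, here is a minimal sketch (the roll counts are invented purely for the example):

```python
# Minimal sketch: testing the "fair dice" hypothesis from outcomes alone.
# Under fairness, double sixes appear with probability p = 1/36 per roll.
# If they appear far more often than chance plausibly allows, the fairness
# hypothesis can be rejected with high confidence.
from math import comb

def p_at_least(k: int, n: int, p: float = 1 / 36) -> float:
    """Binomial tail: probability of k or more double sixes in n fair rolls."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical observation: 20 double sixes in 100 rolls (~2.8 expected if fair).
print(f"P(>= 20 double sixes in 100 fair rolls) = {p_at_least(20, 100):.1e}")
```

A result this deep in the tail would justify betting heavily that the dice are loaded.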

Can such biases be determined with absolute certainty – based only on the patterns produced and nothing else? Of course not! But science isn’t about perfection; it is about determining useful degrees of predictive value that are always open to additional testing and potential falsification by future information.

This addresses yet another flaw in Sober’s paper. Sober accuses IDists of appealing to the concept of “modus tollens“, or the absolute perfection of the ID hypothesis. He uses the illustration of a million monkeys randomly typing on typewriters producing all of the works of Shakespeare. He argues that while such a scenario is extremely unlikely, it isn’t statistically impossible. There is still a finite probability of success.

While this is true, science doesn’t go with what is merely possible, but with what is probable given the available evidence at hand. This is why nobody reading a Shakespearean sonnet would think it was the product of any kind of mindless, random production. The same would be true if you were to walk out of your house and see that the pansies in your front yard had spelled out the phrase, “Good Morning. We hope you have a great day!”

Given such a scenario, you would never think that it had occurred by any non-deliberate, mindless process of nature. You would automatically assume deliberate design. Why? Do you know?
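The numbers make the intuition easy to justify. As a back-of-the-envelope illustration (assuming a 27-character alphabet of letters plus space and a purely random process), the probability of randomly producing even one specific 60-character message is

\[
\left(\frac{1}{27}\right)^{60} \approx 10^{-86},
\]

which is why “merely possible” carries no practical weight against a design inference in such cases.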

Sober argues that if a known designer is not readily available to explain a given phenomenon, then the likelihood that a designer was responsible is just as remote as the notion that a mindless process was responsible for such an unlikely event. Therefore, there is essentially no rational basis to assume intelligent design. However, by the same argument, there would be no rational basis to assume non-intelligent design either.

The detail that Sober seems to selectively overlook is that if certain features fall within the known creative potential of known intelligent agents (i.e., humans) while being well outside of the realm of all known non-deliberate forces of nature, the most rational conclusion is that of ID.

Essentially, Sober does away with any basis for hypothesizing ID behind anything for which an intelligent agent is not directly known. This would undermine all of the modern sciences that deal with ID – including anthropology, forensic science, and especially SETI. Yet, amazingly, he goes on to use this very same argument in support of the ID-detecting abilities of those same fields.

In the end, it seems that Sober is more concerned about the specific identity of the designer not being “God” than about the idea that the scientific inference of a need for some kind of intelligent designer to explain certain kinds of phenomena is in fact overwhelmingly reasonable – scientifically.

Ultimately, it seems to me like Sober’s arguments are really directed against the detection of God, not intelligent design…

In this vein, Sober writes:

The upshot of this point for Paley’s design argument is this: Design arguments for the existence of human (and human-like) watchmakers are often unproblematic; it is design arguments for the existence of God that leave us at sea.

– Elliott Sober

Of course, my ID-only hypothesis does not try to demonstrate the need for God. Rather, it suggests that at least human-level intelligence had to have been involved to explain certain features of the universe and of life on this planet. It doesn’t attempt to argue that a God or God-like intelligence had to have been involved. In fact, it is impossible for the finite to prove the need for the infinite. However, one may argue that, from a given finite perspective, a particular phenomenon would require the input of a creative intelligence that is indistinguishable from a God or God-like creative power.

At this point, a belief that such a God-like creator is in fact omnipotent is not unreasonable, but must be based, not on demonstration, but on trust in the testimony of this Creative Power. If a God-like creative power personally claims to be “The” God of all, Omnipotent in every way, it would be very hard for someone from my perspective to reasonably argue otherwise…

Anyway, your thoughts regarding what seems so convincing to you about Sober’s “arguments” would be most interesting – especially as they apply to granite NHPs or other such “artifacts”…

Sean Pitman
www.DetectingDesign.com


Recent Comments by Sean Pitman

After the Flood
Thank you Ariel. Hope you are doing well these days. Miss seeing you down at Loma Linda. Hope you had a Great Thanksgiving!


The Flood
Thank you Colin. Just trying to save lives any way I can. Not everything that the government does or leaders do is “evil” BTW…


The Flood
Only someone who knows the future can make such decisions without being a monster…


Pacific Union College Encouraging Homosexual Marriage?
Where did I “gloss over it”?


Review of “The Naked Emperor” by Pastor Conrad Vine
I fail to see where you have convincingly supported your claim that the GC leadership contributed to the harm of anyone’s personal religious liberties – given that the GC leadership does not and could not override personal religious liberties in this country, nor substantively change the outcome for those who lost their jobs over various vaccine mandates. That’s just not how it works in this country. Religious liberties are personally derived. They simply are not based on a corporate or church position, but rely solely upon individual convictions – regardless of what the church may or may not say or do.

Yet you say, “Who cares if it is written into law?” You should care. Everyone should care. It’s a very important law in this country. The idea that the organized church could have changed vaccine mandates simply isn’t true – particularly given the nature of certain types of jobs dealing with the most vulnerable in society (health care workers, for example).

Beyond this, the GC Leadership did, in fact, write in support of personal religious convictions on this topic – and there are GC lawyers who have and continue to write personal letters in support of personal religious convictions (even if these personal convictions are at odds with the position of the church on a given topic). Just because the GC leadership also supports the advances of modern medicine doesn’t mean that the GC leadership cannot support individual convictions at the same time. Both are possible. This is not an inconsistency.