Comment on Dr. Nick Matzke Explains the Evolution of Complexity by Sean Pitman.
3D molecular evolution is based on the proper arrangement of characters in 2D sequences, just as in any human language system – English, French, etc. And the appearance of sequence space at various levels of functional complexity is also essentially the same.
Sean Pitman Also Commented
This is not a “debate”, it’s some comments that I randomly posted on your blog, which you have fairly weirdly strung together into a single piece and then called a “debate”. That’s all you.
Whatever you want to call it…
You still aren’t getting the difference between a “functional island” as measured by blasting a protein with multiple simultaneous mutations, and the reality, which is a web (even within a single function), the strands of which are gradually explored by a step-by-step process of various sorts of substitutions, including compensatory ones, and which would essentially never return to the starting point, or even be constrained within the region of sequences-similar-enough-to-be-identified-by-BLAST. The web covers a far, far greater area of the sequence landscape than your little island. There’s not much point in discussing more complex issues if we can’t resolve simple points like this.
I’m sorry, but I do not see how compensatory mutations 1) significantly increase the size of the islands within sequence space (as measured by the absolute number of protein-based sequences in sequence space that can produce a given type of function), 2) create narrow paths between qualitatively different islands at higher levels of functional complexity that are significantly more likely to be traversed in a given amount of time, or 3) significantly narrow the non-beneficial gap distances between different islands within sequence space.
As far as I can tell, compensatory mutations are simply a way of compensating for detrimental mutations by maintaining the same basic structure and function of the system (within the overall flexibility limitations of the minimum structural threshold requirements of the system in question). I just don’t see how this significantly improves the odds of finding novel islands with qualitatively novel functionality. Such a thing has not been demonstrated anywhere in the literature that I’m aware of, and I personally don’t see how the odds of success could be improved by invoking compensatory mutations.
The reason for this, as far as I can tell, is that compensatory mutations are limited to producing the same basic type of structure that characterizes a particular island within sequence space. For example, “It is well known that the folding of RNA molecules into the stem-loop structure requires base pair matching in the stem part of the molecule, and mutations occurring to one segment of the stem part will disrupt the matching, and therefore, have a deleterious effect on the folding and stability of the molecule. It has been observed that mutations in the complementary segment can rescind the deleterious effect by mutating into a base pair that matches the already mutated base, thus recovering the fitness of the original molecule (Kelley et al., 2000; Wilke et al., 2003). (Link)” Of course, in this particular situation, there are very limited choices for workable compensatory mutations given the high degree of required sequence specificity of the structure.
So, as far as I can tell, this nicely illustrates my observation that compensatory mutations simply don’t produce novel structures with novel functional complexity. They simply compensate for a loss of structure/function that results from detrimental mutations by trying to get back to the original state to one degree or another. This is why compensatory mutations are so constrained. Depending upon the detrimental mutation, only a limited number of compensatory mutations are possible – and most of these do not provide full functional recovery of what was lost. In other words, the original level of functionality is not entirely reproduced by most compensatory mutations.

In fact, populations tend to fix compensatory mutations only when the rate of compensatory mutations exceeds the rate of reversion or back mutations by at least an order of magnitude (Levine et al., 2000). This means, of course, that back or reversion mutations are usually the ideal solution for resolving detrimental mutations, but they are not always the first to be realized by random mutations. And compensatory mutations are usually detrimental by themselves. That means, once a compensatory mutation occurs, it is no longer beneficial to revert the original detrimental mutation (since one would then also have to revert the compensatory mutation). This is somewhat of a problem because, since compensatory options are more common, a compensatory mutation is usually realized before a reversion mutation – up to 70% of the time (Link). However, because compensatory mutations are not generally as good as back mutations at completely restoring the original level of functionality, they are less likely to be fixed in a population – especially in larger populations.
In any case, your argument that there are a number of potential compensatory mutations for most detrimental mutations (an average of 9 or so) is clearly correct. So then, doesn’t it therefore follow that these compensatory options do in fact expand the size of the island and create a net-like appearance across vast regions of sequence space? – as you originally claimed?
Consider that the overall shape of the island remains the same – with sharp peaks and steeply sloping sides. I do not see how compensatory mutational options change this basic appearance of the island. They simply make the island 10 times larger, and much more spread out, than it would be if there were no compensatory options (as in a case of increased specificity requirements) – which isn’t really relevant given the overall size of sequence space and the ratio of beneficial vs. non-beneficial sequences within it.
For example, say that a protein sequence experiences a point mutation that happens to be detrimental. Say there are 10 potentially compensating mutational options to “fix” this protein to some useful degree, one of which is realized. Now, let’s say that this protein experiences a second detrimental mutation in a different location. Again, there are 10 more potentially compensating mutational options, one of which is realized. What happens with each additional detrimental mutation to the sequence? Is the observation of Bloom et al., that proteins suffer an exponential decline in functionality with each additional detrimental mutation, negated by the compensatory mutation options? Not at all. The compensatory mutations simply expand the size of the island to the extent allowed by the specificity requirements of the system, but they do not make it possible for the island to stretch out indefinitely over all of sequence space. The minimum structural threshold requirements simply will not allow this. The same basic structure with the same basic function and the same minimum number of required building blocks must be maintained. And that puts a very restrictive limit on the overall size of the island with this type of function within sequence space (size being defined as the absolute number of protein sequences that can produce a given structure with a particular type of function). In other words, the actual maximum number of protein sequences that comprise the island is very, very limited.
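The island-size arithmetic in this paragraph can be sketched as a toy calculation. Note that the numbers here – a 100-residue protein and a tenfold island expansion per tolerated detrimental mutation – are my own illustrative assumptions, not figures from any measurement:

```python
# Toy sketch: even if each tolerated detrimental mutation multiplies the
# island's sequence count by ~10, the island remains a vanishing fraction
# of the full sequence space. All numbers are illustrative assumptions.
L = 100               # assumed protein length in residues
TOTAL = 20 ** L       # size of the full 20-letter sequence space

for k in (10, 20, 30):
    island = 10 ** k               # island after k tenfold expansions
    frac = island / TOTAL          # fraction of sequence space covered
    print(f"after {k} expansions: island = 10^{k}, fraction ~ {frac:.1e}")
```

Even after 30 such expansions (an island of 10^30 sequences), the island covers less than 10^-100 of a 20^100 space, which is the sense in which the expansion “isn’t really relevant” to the overall ratio.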
But, you argue, compensatory mutations may allow for narrow arms or branches to extend long distances (Hamming or Levenshtein distances) within sequence space. And, this is true. However, remember that sequence space is hyperdimensional. Changing the shape of an island composed of a limited number of grains of sand doesn’t significantly change the odds of putting it within closer range of the next closest island within sequence space. After all, the shape of the island has a random appearance that is not biased toward other surrounding islands within sequence space. Therefore, the odds of successfully locating a different island with qualitatively novel functionality remain essentially the same as far as I can tell. There is no significant change in the minimum likely gap distances between any part of the starting island, regardless of its shape, and any other island within sequence space at higher levels of functional complexity. Other islands with other types of functions still have to be found by getting off of the original island and crossing a non-selectable gap distance – and I don’t see how compensatory mutations improve these odds.
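The hyperdimensionality point can be made concrete with a standard combinatorial count: the number of sequences within Hamming distance r of a reference sequence of length L over a 20-letter alphabet is the sum over k ≤ r of C(L, k)·19^k. This is a minimal sketch with an assumed L of 100; it is not taken from the comment itself:

```python
from math import comb

L = 100   # assumed sequence length (illustrative)
A = 20    # amino-acid alphabet size

def ball_size(r):
    """Count sequences within Hamming distance r of a fixed reference."""
    return sum(comb(L, k) * (A - 1) ** k for k in range(r + 1))

total = A ** L
for r in (5, 10, 20):
    print(f"r = {r:2d}: ball = {ball_size(r):.3e}, fraction ~ {ball_size(r) / total:.1e}")
```

Even a generous radius of 20 substitutions around a 100-residue reference covers under 10^-80 of the full space, which is the sense in which a long, web-like arm still occupies a negligible region of a hyperdimensional space.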
And, that, in a nutshell, is why your proposed steppingstones in your flagellar evolution pathway simply require too many non-selectable differences to get from one to the other in a reasonable amount of time.
Re: flagellum — the word “Pallen” does not appear in your webpage, and the homology-and-unessentiality table from that paper is not discussed.
I do discuss the homologies that you proposed in your 2003 paper, “Evolution in (Brownian) space: a model for the origin of the bacterial flagellum”.
It seems to me that the key difference you see between your 2003 and your 2006 papers is the discovery of more homologies for vital structural flagellar proteins. You write:
The important overall point, as discussed in my blog post, is that of the 42 proteins in Table 1 of Pallen and Matzke, only two proteins, FliE and FlgD, are both essential and have no identified homologous proteins. This is substantially more impressive than the situation in 2003, and means that the evidence for the evolutionary origin of the flagellum by standard gene duplication and cooption processes is even stronger than in 2003. (Link)
You see, I really don’t care if every single one of the individual protein parts within the flagellum is homologous to proteins within other systems (even though a couple of them are not currently known to be homologous to anything else). This is completely irrelevant to the plausibility of the evolution of higher-level systems based on pre-existing subsystems. You see, the problem is that having all the required parts to make a new type of complex system isn’t enough. Why not? Because these parts must be modified in very specific ways before they can work together in a new specified arrangement as parts of a different type of complex system – like a flagellar motility system. And the number of required modifications to get the parts in your proposed pathway to work together, to any selectable advantage at a higher level of functional complexity, is simply too great to be realized this side of a practical eternity of time.
How is that? After all, is it not possible for the required parts to be perfectly homologous between different systems? Well no, it’s not. If this were the case, all kinds of very different complex systems could be produced using identical subparts. The reason why this doesn’t happen is that qualitatively different complex systems require key modifications before the otherwise homologous parts can work together in different ways. And, the greater the number of these required non-selectable modifications to otherwise homologous parts, the exponentially greater the average random walk time. So, by the time you’re at a level where the minimum number of required non-selectable modifications is a couple dozen or so, the average time to randomly produce all of these required modifications, which cannot be guided by natural selection, is trillions upon trillions of years. And I fail to see how compensatory mutations help to solve this problem. As far as I can tell, they do nothing to solve it for the ToE.
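A back-of-envelope version of this waiting-time claim can be sketched as follows. The trial rate and the number of required changes are my own illustrative assumptions (the comment itself gives no rates), and the model is deliberately crude: it treats the k required changes as a single blind lottery over a 20-letter alphabet.

```python
# Crude sketch: if k specific amino-acid changes must all be in place
# before selection can act, a blind search needs on the order of 20**k
# trials. The trial rate below is an illustrative assumption only.
TRIALS_PER_SECOND = 1e9          # assumed population-wide trial rate
SECONDS_PER_YEAR = 3.15e7

for k in (6, 12, 24):
    trials = 20.0 ** k
    years = trials / TRIALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"k = {k:2d}: ~{trials:.1e} trials ~ {years:.1e} years")
```

Under these assumptions, two dozen required non-selectable changes already push the expected waiting time past 10^14 years, which is the shape of the “practical eternity” argument above.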
In short, homologies simply don’t cut it, because it isn’t the similarities that matter; it is the number of required non-selectable modifications that completely kills off evolutionary potential, in an exponential manner, beyond very low levels of functional complexity.
“You mistakenly assume that human languages evolve via the same mechanism as Darwinian evolution. They do not.”
Dr. Pitman, you are the one that introduced the language metaphor to compare probabilities of genetic sequence space for new functional complexity. You are hoisting yourself on your own petard my friend!
You confuse the look of sequence space with various methods of moving about within sequence space.
All meaningful language/information systems have the same basic look of sequence space at various levels of functional complexity, including an essentially uniform, fairly random distribution of potentially beneficial islands within sequence space at higher levels of functional complexity. However, the mechanism by which one gets around in that space may be different. The various words that are used for similar purposes within various human languages “evolve” over time via the involvement of intelligent design (an intelligent environment, if you will). The Darwinian mechanism of RM/NS does not have this option since the “environment” within which protein-based systems may evolve is not intelligent. Movement through protein-based sequence space via this method is therefore much more limited, even at the level of small individual sequences that are equivalent to most individual words in various language systems. When one starts talking about moving beyond the mere definition of small individual words, to sentences, paragraphs, and entire essays or books (i.e., higher and higher levels of functional complexity), the nature of sequence space changes exponentially with regard to the ratio of potentially beneficial vs. non-beneficial – regardless of the language/information system. With each increase in the minimum structural threshold requirement of the system (in any language/information system), the minimum likely distance between a given island and the next closest island in sequence space increases in a linear manner.
This non-beneficial gap distance can be crossed with or without intelligence. The difference is simply in the average time required to achieve success.
If one chooses to move around in higher-level sequence spaces via random mutations and function-based selection, without including intelligent manipulation of the mutations, the average random walk time required to get from one island to the next closest island within sequence space increases exponentially as the minimum gap distances increase in a linear manner. That is why the “evolution” of sentences, paragraphs, and books (and novel individual words as well) in English or Latin or Greek or Russian, etc., happens so fast – because intelligence is involved in this kind of intelligently directed “evolution”. This is not the case, however, for a mindless mechanism like RM/NS, where exponentially greater and greater amounts of time are required.
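The “exponential time for linear distance” relationship described above can be illustrated with a toy blind-search model of my own devising (the 4-letter alphabet and gap sizes are illustrative assumptions): if a gap of d non-selectable positions must be matched by chance before selection can see anything, the expected number of random trials is alphabet^d.

```python
import random

random.seed(0)
ALPHABET = 4   # e.g., a nucleotide alphabet; an illustrative assumption

def trials_to_match(d, alphabet=ALPHABET):
    """Count random trials until a blind guess matches a fixed target of length d."""
    target = [random.randrange(alphabet) for _ in range(d)]
    n = 0
    while True:
        n += 1
        if [random.randrange(alphabet) for _ in range(d)] == target:
            return n

# Analytically, the expected number of trials is ALPHABET ** d:
# the gap grows linearly in d, the waiting time exponentially.
for d in range(2, 11, 2):
    print(f"gap = {d:2d} -> expected trials = {ALPHABET ** d:,}")

# Quick empirical check for a small gap (d = 3, expectation 4**3 = 64):
mean = sum(trials_to_match(3) for _ in range(300)) / 300
print(f"simulated mean for d = 3: {mean:.0f}")
```

The simulation only checks the toy model, of course; it says nothing about where functional islands actually sit in biological sequence space, which is the substantive point under dispute.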
I hope this makes things a bit clearer for you…
Recent Comments by Sean Pitman
Science and Methodological Naturalism
Very interesting passage. After all, if scientists are honest with themselves, scientific methodologies are well able to detect the existence of intelligent design behind various artifacts found in nature. It’s just the personal philosophy of scientists that makes them put living things and the origin of the fine-tuned universe “out of bounds” when it comes to the detection of intelligent design. This conclusion simply isn’t dictated by science itself, but by a philosophical position, a type of religion actually, that strives to block the Divine Foot from getting into the door…
Why is it that creationists are afraid to acknowledge the validity of Darwinism in these settings? I don’t see that these threaten a belief in God in any way whatsoever.
The threat is when you see no limitations to natural mindless mechanisms – where you attribute everything to the creative power of nature instead of to the God of nature.
God has created natural laws that can do some pretty amazing things. However, these natural laws are not infinite in creative potential. Their abilities are finite while only God is truly infinite.
The detection of these limitations allows us to recognize the need for the input of higher-level intelligence and creative power that goes well beyond what nature alone can achieve. It is here that the Signature of God is detectable.
For those who only hold a naturalistic view of the universe, everything is attributed to the mindless laws of nature… so that the Signature of God is obscured. Nothing is left that tells them, “Only God or some God-like intelligent mind could have done this.”
That’s the problem when you do not recognize any specific limitations to the tools that God has created – when you do not recognize the limits of nature and what natural laws can achieve all by themselves.
Since the fall of Adam, Sean, all babies are born in sin and they are sinners. God created them. Even if it was by way of cooperation of natural law as human beings also participated in the creation process.
God did not create the broken condition of any human baby – neither the physical or moral brokenness of any human being. God is responsible for every good thing, to include the spark or breath of life within each one of us. However, He did not and does not create those things within us that are broken or bad.
“The owner’s servants came to him and said, ‘Sir, didn’t you sow good seed in your field? Where then did the weeds come from?’ ‘An enemy did this,’ he replied. “The servants asked him, ‘Do you want us to go and pull them up?'” Matthew 13:27-28
Of course, all humans are indeed born broken and are in a natural state of rebellion against God. However, God is not the one who created this condition, nor is God responsible for any baby being born with any kind of defect in character, personality, moral tendency, or physical or genetic abnormality. God did not create anyone with such brokenness. Such brokenness is the natural result of rebellion against God and of heeding the temptations of the “enemy”… the natural result of a separation from God, with the inevitable decay in physical, mental, and moral strength.
Of course, the ones who are born broken are not responsible for their broken condition either. However, all of us are morally responsible for choosing to reject the gift of Divine Grace once it is appreciated… and for choosing to go against what we all have been given to know, internally, of moral truth. In other words, we are responsible for rebelling against the Royal Law written on the hearts of all mankind.
This is because God has maintained in us the power to be truly free moral agents in that we maintain the Power to choose, as a gift of God (Genesis 3:15). We can choose to accept or reject the call of the Royal Law, as the Holy Spirit speaks to all of our hearts…
Remember the statement by Mrs. White that God is in no wise responsible for sin in anyone at any time. God is working to fix our broken condition. He did not and does not create our broken condition. Just as He does not cause babies to be born with painful and lethal genetic defects, such as those that result in childhood leukemia, He does not cause babies to be born with defects of moral character either. God is only directly responsible for the good, never the evil, of this life.
Again, your all-or-nothing approach to the claims of scientists isn’t very scientific. Even the best and most famous of scientists have had numerous harebrained ideas that were completely off base. This fact does not undermine the good discoveries and inventions that were produced.
Scientific credibility isn’t based on the person making the argument, but upon the merits of the argument itself – the ability of the hypothesis to gain predictive value when tested. That’s it.
Gary Gilbert, Spectrum, and Pseudogenes
Don’t be so obtuse here. We’re not talking about publishing just anything in mainstream journals. I’ve published several articles myself. We’re talking about publishing the conclusion that intelligent design was clearly involved with the origin of various artifactual features of living things on this planet. Try getting a paper that mentions such a conclusion published…