Comment on Dr. Nick Matzke Explains the Evolution of Complexity by Sean Pitman.

3D molecular evolution is based on the proper arrangement of characters in 2D sequences, the same as in any human language system – like English or French, etc. And, the appearance of sequence space at various levels of functional complexity is also essentially the same.

Sean Pitman Also Commented

Dr. Nick Matzke Explains the Evolution of Complexity

This is not a “debate”, it’s some comments that I randomly posted on your blog, which you have fairly weirdly strung together into a single piece and then called a “debate”. That’s all you.

Whatever you want to call it…

You still aren’t getting the difference between a “functional island” as measured by blasting a protein with multiple simultaneous mutations, and the reality, which is a web (even within a single function), the strands of which are gradually explored by a step-by-step process of various sorts of substitutions, including compensatory ones, and which would essentially never return to the starting point, or even be constrained within the region of sequences-similar-enough-to-be-identified-by-BLAST. The web covers a far, far greater area of the sequence landscape than your little island. There’s not much point in discussing more complex issues if we can’t resolve simple points like this.

I’m sorry, but I do not see how compensatory mutations 1) significantly increase the size of the islands within sequence space (as far as the absolute number of protein-based sequences in sequence space that can produce a given type of function), 2) create narrow paths between qualitatively different islands at higher levels of functional complexity that are significantly more likely to be traversed in a given amount of time, or 3) significantly narrow the non-beneficial gap distances between different islands within sequence space.

As far as I can tell, compensatory mutations are simply a way of compensating for detrimental mutations by maintaining the same basic structure and function of the system (within the overall flexibility limitations of the minimum structural threshold requirements of the system in question). I guess I just don’t see how this significantly improves the odds of finding novel islands with qualitatively novel functionality. Such has not been demonstrated anywhere in the literature that I’m aware of, and I personally don’t see how the odds of success could be improved by invoking compensatory mutations.

The reason for this, as far as I can tell, is that compensatory mutations are limited to producing the same basic type of structure that characterizes a particular island within sequence space. For example, “It is well known that the folding of RNA molecules into the stem-loop structure requires base pair matching in the stem part of the molecule, and mutations occurring to one segment of the stem part will disrupt the matching, and therefore, have a deleterious effect on the folding and stability of the molecule. It has been observed that mutations in the complementary segment can rescind the deleterious effect by mutating into a base pair that matches the already mutated base, thus recovering the fitness of the original molecule (Kelley et al., 2000; Wilke et al., 2003). (Link)” Of course, in this particular situation, there are very limited choices for workable compensatory mutations given the high degree of required sequence specificity of the structure.

So, as far as I can tell, this nicely illustrates my observation that compensatory mutations simply don’t produce novel structures with novel functional complexity. They simply compensate for a loss of structure/function that results from detrimental mutations by trying to get back to the original to one degree or another. This is why compensatory mutations are so constrained. Depending upon the detrimental mutation, only a limited number of compensatory mutations are possible – and most of these do not provide full functional recovery from what was lost. In other words, the original level of functionality is not entirely reproduced by most compensatory mutations. In fact, populations tend to fix compensatory mutations only when the rate of compensatory mutations exceeds the rate of reversion or back mutations by at least an order of magnitude (Levine et al., 2000). This means, of course, that back or reversion mutations are usually the ideal solution for resolving detrimental mutations, but are not always the first to be realized by random mutations. And, compensatory mutations are usually detrimental by themselves. That means that once a compensatory mutation occurs, it is no longer beneficial to revert the original detrimental mutation (since one would also have to revert the compensatory mutation as well). This is somewhat of a problem since, because compensatory options are more common, a compensatory mutation is usually realized before a reversion mutation – up to 70% of the time (Link). However, because compensatory mutations are not generally as good as back mutations at completely restoring the original level of functionality, they are less likely to be fixed in a population – especially in larger populations.

In any case, your argument that there are a number of potential compensatory mutations for most detrimental mutations (an average of 9 or so) is clearly correct. So, doesn’t it follow that these compensatory options do in fact expand the size of the island and create a net-like appearance across vast regions of sequence space – as you originally claimed?

Consider that the overall shape of the island remains the same – with sharp peaks and steeply sloping sides. I do not see how compensatory mutational options change this basic appearance of the island. They simply make the island 10 times larger, and much more spread out, than if there were no compensatory options (as in a case of increased specificity requirements) – which isn’t really relevant given the overall size of sequence space and the ratio of beneficial vs. non-beneficial sequences within that space.

For example, say that a protein sequence experiences a point mutation that happens to be detrimental. Say there are 10 potentially compensating mutational options to “fix” this protein to some useful degree, one of which is realized. Now, let’s say that this protein experiences a second detrimental mutation in a different location. Now, there are 10 more potentially compensating mutational options, one of which is realized. What happens with each additional detrimental mutation to the sequence? Is the observation of Bloom et al., that proteins suffer an exponential decline in functionality with each additional detrimental mutation, negated by the compensatory mutation options? Not at all. The compensatory mutations simply expand the size of the island to the extent allowed by the specificity requirements of the system, but they do not make it possible for the island to stretch out indefinitely over all of sequence space. The minimum structural threshold requirements simply will not allow this. The same basic structure with the same basic function and the same minimum number of required building blocks must be maintained. And, that puts a very restrictive limit on the overall size of the island with this type of function within sequence space (size being defined by the absolute number of protein sequences that can produce a given structure with a particular type of function). In other words, the actual maximum number of protein sequences that comprise the island is very, very limited.
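To put rough numbers on this argument – and the figures below are purely illustrative assumptions, not measured values – a quick log-scale sketch shows why even a millionfold expansion of an island barely moves its share of total sequence space:

```python
import math

# Back-of-envelope sketch with illustrative (hypothetical) numbers:
# does a 10x-per-site expansion of a functional "island" meaningfully
# change its share of total protein sequence space?

L = 300                          # protein length in residues (assumed)
log_space = L * math.log10(20)   # log10 of 20^300: ~390 orders of magnitude

log_island = 40.0                # assume an island of 10^40 sequences
log_expanded = log_island + 6    # ten compensatory options at six sites: x10^6

print(round(log_space, 1))                 # 390.3
print(round(log_island - log_space, 1))    # -350.3 (island's fraction of space)
print(round(log_expanded - log_space, 1))  # -344.3 (still vanishingly small)
```

On these assumed numbers, the expanded island occupies roughly 10^-344 of the space instead of 10^-350 – larger, but nowhere near enough to matter.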

But, you argue, compensatory mutations may allow for narrow arms or branches to extend long distances (Hamming or Levenshtein distances) within sequence space. And, this is true. However, remember that sequence space is hyperdimensional. Changing the shape of an island comprised of a limited number of grains of sand doesn’t significantly change the odds of putting it within closer range of the next closest island within sequence space. After all, the shape of the island has a random appearance that is not biased toward other surrounding islands within sequence space. Therefore, the odds of successfully locating a different island with qualitatively novel functionality remain essentially the same as far as I can tell. There is no significant change in the minimum likely gap distances between any part of the starting island, regardless of its shape, and any other island within sequence space at higher levels of functional complexity. Other islands with other types of functions still have to be found by getting off of the original island and crossing a non-selectable gap distance – and I don’t see how compensatory mutations improve these odds.
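A tiny toy simulation can illustrate the intuition (the alphabet of 4 symbols, the length of 100, and the island sizes are all arbitrary assumptions chosen for speed, not biological parameters): whether an island is kept compact or stretched into a long random-walk “arm”, its closest point to an unrelated target sequence stays far away in a high-dimensional space.

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

L = 100  # sequence length (number of dimensions); an assumption

def hamming(a, b):
    """Number of positions at which two sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def rand_seq():
    return [random.randrange(4) for _ in range(L)]

target = rand_seq()  # an unrelated "island" somewhere else in the space
center = rand_seq()  # the center of our starting island

# Compact island: 200 sequences, each within 3 mutations of the center
compact = []
for _ in range(200):
    s = center[:]
    for pos in random.sample(range(L), 3):
        s[pos] = random.randrange(4)
    compact.append(s)

# "Arm" island: the trail left by a 200-step random walk from the same center
arm, s = [], center[:]
for _ in range(200):
    s = s[:]
    s[random.randrange(L)] = random.randrange(4)
    arm.append(s)

min_compact = min(hamming(q, target) for q in compact)
min_arm = min(hamming(q, target) for q in arm)
print(min_compact, min_arm)  # both minimum distances remain large
```

With these toy settings the expected distance between unrelated sequences is about 75 of 100 positions, and neither the compact island nor the stretched arm gets anywhere close to the target – reshaping the island does not aim it at anything.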

And, that, in a nutshell, is why your proposed steppingstones in your flagellar evolution pathway simply require too many non-selectable differences to get from one to the other in a reasonable amount of time.

Re: flagellum — the word “Pallen” does not appear in your webpage, and the homology-and-unessentiality table from that paper is not discussed.

I do discuss the homologies that you proposed in your 2003 paper, “Evolution in (Brownian) space: a model for the origin of the bacterial flagellum”.

It seems to me that the key difference you see between your 2003 and your 2006 papers is the discovery of more homologies for vital structural flagellar proteins. You write:

The important overall point, as discussed in my blog post, is that of the 42 proteins in Table 1 of Pallen and Matzke, only two proteins, FliE and FlgD, are both essential and have no identified homologous proteins. This is substantially more impressive than the situation in 2003, and means that the evidence for the evolutionary origin of the flagellum by standard gene duplication and cooption processes is even stronger than in 2003. (Link)

You see, I really don’t care if every single one of the individual protein parts within the flagellum is homologous to proteins within other systems (even though a couple of them are not currently known to be homologous to anything else). This is completely irrelevant to the plausibility of the evolution of higher-level systems based on pre-existing subsystems. The problem is that having all the required parts to make a new type of complex system isn’t enough. Why not? Because these parts must be modified in very specific ways before they can work together in a new specified arrangement as parts of a different type of complex system – like a flagellar motility system. And, the number of required modifications to get the parts in your proposed pathway to work together, to any selectable advantage at a higher level of functional complexity, is simply too great to be realized this side of a practical eternity of time.

How is that? After all, is it not possible for the required parts to be perfectly homologous between different systems? Well no, it’s not. If this were the case, all kinds of very different complex systems could be produced using identical subparts. The reason why this doesn’t happen is that qualitatively different complex systems require key modifications for the otherwise homologous parts to work together in different ways. And, the greater the number of these required non-selectable modifications to otherwise homologous parts, the exponentially greater the average random walk time. So, by the time you’re at a level where the minimum number of required non-selectable modifications is a couple dozen or so, the average time to randomly produce all of these required modifications, which cannot be guided by natural selection, is trillions upon trillions of years. And, I fail to see how compensatory mutations help to solve this problem. As far as I can tell, they do nothing to solve this problem for the ToE.

In short, homologies simply don’t cut it, because it isn’t the similarities that matter; it is the number of required non-selectable modifications that completely kills off evolutionary potential, in an exponential manner, beyond very low levels of functional complexity.

Dr. Nick Matzke Explains the Evolution of Complexity

“You mistakenly assume that human languages evolve via the same mechanism as Darwinian evolution. They do not.”

Dr. Pitman, you are the one that introduced the language metaphor to compare probabilities of genetic sequence space for new functional complexity. You are hoisting yourself on your own petard, my friend!

You confuse the look of sequence space with various methods of moving about within sequence space.

All meaningful language/information systems have the same basic look of sequence spaces at various levels of functional complexity, to include an essentially uniform, fairly random distribution of potentially beneficial islands within sequence space at higher levels of functional complexity. However, the mechanism by which one gets around in that space may be different. The various words that are used for similar purposes within various human languages “evolve” over time via the involvement of intelligent design (an intelligent environment if you will). The Darwinian mechanism of RM/NS does not have this option since the “environment” within which protein-based systems may evolve is not intelligent. Movement through protein-based sequence space via this method is therefore much more limited, even at the level of small individual sequences that are equivalent to most individual words in various language systems. When one starts talking about moving beyond the mere definition of small individual words, to sentences, paragraphs, and entire essays or books (i.e., higher and higher levels of functional complexity), the nature of sequence space changes exponentially with regard to the ratio of potentially beneficial vs. non-beneficial – regardless of the language/information system. With each increase in the minimum structural threshold requirement of the system (in any language/information system), the minimum likely distance between a given island and the next closest island in sequence space increases in a linear manner.

How this non-beneficial gap distance is crossed can be done with or without intelligence. It is just a matter of a significant difference in the average time required to achieve success.

If one chooses to move around in higher level sequence spaces via random mutations and function-based selection, without including intelligent manipulation of the mutations, the average random walk time required to get from one island to the next closest island within sequence space increases exponentially as the minimum gap distances increase in a linear manner. That is why the “evolution” of sentences, paragraphs, and books (and novel individual words as well) in English or Latin or Greek or Russian, etc., happens so fast – because intelligence is involved in this kind of intelligently directed “evolution”. This is not the case, however, for a mindless mechanism like RM/NS, where exponentially greater and greater amounts of time are required.
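The claimed relationship – linear growth in gap distance, exponential growth in average waiting time – can be sketched with a toy waiting-time model (the mutation probability p below is an assumed placeholder, not a measured biological rate):

```python
# Toy waiting-time model (p is an assumed per-site probability, not a
# measured biological value): if k specific non-selectable changes must
# all be present together before selection can act, the per-trial
# success chance is p**k, so the expected waiting time 1/p**k grows
# exponentially even though k itself grows only linearly.

p = 1e-8  # assumed per-site, per-replication mutation probability

expected_trials = {k: (1 / p) ** k for k in range(1, 6)}
for k, t in expected_trials.items():
    print(k, f"{t:.0e}")  # 1 -> 1e+08, 2 -> 1e+16, ... 5 -> 1e+40
```

Each additional required change multiplies the expected waiting time by another factor of 1/p in this simplified model; whether real gap distances are this unforgiving is, of course, the point under dispute.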

I hope this makes things a bit clearer for you…

Recent Comments by Sean Pitman

Mandates vs. Religious Exemptions
Come on now. The antigens were detected in very small amounts due to the “ultralow detection limits of the Simoa assays” that were used. Just because very small amounts of spike protein antigens end up in the plasma does not discount the “basic science” that the spike protein produced by the vaccines does in fact anchor itself, generally speaking, to the surfaces of the cells that produce it following vaccination.

Mandates vs. Religious Exemptions
Again, this paper doesn’t present an actual mechanism for harming the human immune system as already explained to you. Let me know what the authors say – if they ever respond.

Mandates vs. Religious Exemptions
I thought you’d appreciate it – given the irony of it. After all, this is just basic science here. The authors here are not claiming something novel that has no mechanistic basis. There are many other places where you can read up on the mechanism of how the spike proteins are presented on the surfaces of the muscle cells where they are produced (Link, Link, Link, Link).

Pastor Ivor Myers and Medical Panel Discuss COVID-19 and Vaccines
The mRNA vaccines are now fully approved by the FDA (no longer under EUA). The technology itself is not “experimental” in any meaningful sense of the word since it has been around for over 30 years with extensive use in other applications. The current use to produce a small part of the COVID-19 virus, the spike protein, to teach the human immune system how to fight the live virus better when exposed, functions in the very same way as traditional vaccines – and is highly effective as well as having very rare serious side effects. Those who cite VAERS don’t generally understand the purpose of the VAERS data system that is maintained by the CDC and the FDA. The VAERS system is not meant to establish causation, but rather to detect unusual patterns of correlation. This is a key misunderstanding for many people. As far as the human immune system is concerned, the fact is that the human immune system, while certainly amazing, isn’t perfect in this world and tends to degrade over time as we age. That is why vaccines have turned out to be such a God-given gift to humanity, having saved millions upon millions of lives. Also, historically, vaccine mandates are nothing new. Vaccines have long been required to work in various jobs, particularly as medical providers, and to attend schools around the country.

All that being said, I do agree that the current general mandates for the mRNA vaccines against COVID-19 will tend to be less effective compared to other methods… with the exception of those working in places like hospitals or nursing homes. Such medical providers working with the most vulnerable should be required to be vaccinated. For most other people, medical exceptions and even religious exemptions are still recognized and honored in this country.

Mandates vs. Religious Exemptions
If the DNA of a person does not get altered by the mRNA vaccines, then, by definition, these vaccines are not “gene therapy”. This is what was noted by Bayer itself in their response to the comments of Oelrich:

The Bayer group tells 20 Minutes that this is “an obvious slip.” “At Bayer, [the] mRNA [vaccines] do not come under gene therapy in the sense that is commonly attributed to this expression,” adds the company. (Link)