Comment on Dr. Nick Matzke Explains the Evolution of Complexity by Sean Pitman.
“You mistakenly assume that human languages evolve via the same mechanism as Darwinian evolution. They do not.”
Dr. Pitman, you are the one who introduced the language metaphor to compare probabilities of genetic sequence space for new functional complexity. You are hoist by your own petard, my friend!
You confuse the look of sequence space with various methods of moving about within sequence space.
All meaningful language/information systems have the same basic look of sequence space at various levels of functional complexity, including an essentially uniform, fairly random distribution of potentially beneficial islands within sequence space at higher levels of functional complexity. However, the mechanism by which one gets around in that space may differ. The various words that are used for similar purposes within different human languages “evolve” over time via the involvement of intelligent design (an intelligent environment, if you will). The Darwinian mechanism of RM/NS does not have this option, since the “environment” within which protein-based systems may evolve is not intelligent. Movement through protein-based sequence space via this method is therefore much more limited, even at the level of small individual sequences that are equivalent to most individual words in various language systems.

When one moves beyond the mere definition of small individual words to sentences, paragraphs, and entire essays or books (i.e., higher and higher levels of functional complexity), the ratio of potentially beneficial vs. non-beneficial sequences changes exponentially – regardless of the language/information system. With each increase in the minimum structural threshold requirement of the system (in any language/information system), the minimum likely distance between a given island and the next closest island in sequence space increases in a linear manner.
This non-beneficial gap distance can be crossed either with or without intelligence. The difference is simply in the average time required to achieve success.
If one chooses to move around in higher level sequence spaces via random mutations and function-based selection, without including intelligent manipulation of the mutations, the average random walk time required to get from one island to the next closest island within sequence space increases exponentially as the minimum gap distances increase in a linear manner. That is why the “evolution” of sentences, paragraphs, and books (and novel individual words as well) in English or Latin or Greek or Russian, etc., happens so fast – because intelligence is involved in this kind of intelligently directed “evolution”. This is not the case, however, for a mindless mechanism like RM/NS, where exponentially greater and greater amounts of time are required.
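The linear-gap/exponential-time relationship claimed above can be made concrete with a toy calculation. This is my own illustrative sketch, under simplifying assumptions not taken from any cited source: a 20-letter amino-acid alphabet, and purely blind sampling of the positions that must change, with no selectable intermediates along the way.

```python
def expected_blind_draws(gap, alphabet=20):
    """Expected number of uniform random draws before one specific
    combination of `gap` required residues appears. Each draw succeeds
    with probability (1/alphabet)**gap, so the mean of the resulting
    geometric distribution is alphabet**gap."""
    return alphabet ** gap

# A linear increase in gap distance produces an exponential
# increase in expected search time:
for gap in range(1, 7):
    print(gap, expected_blind_draws(gap))
```

Under these toy assumptions, each unit added to the gap multiplies the expected waiting time by 20 – which is the sense in which a linearly growing non-selectable gap implies an exponentially growing average random-walk time.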
I hope this makes things a bit clearer for you…
Sean Pitman Also Commented
This is not a “debate”, it’s some comments that I randomly posted on your blog, which you have fairly weirdly strung together into a single piece and then called a “debate”. That’s all you.
Whatever you want to call it…
You still aren’t getting the difference between a “functional island” as measured by blasting a protein with multiple simultaneous mutations, and the reality, which is a web (even within a single function), the strands of which are gradually explored by a step-by-step process of various sorts of substitutions, including compensatory ones, and which would essentially never return to the starting point, or even be constrained within the region of sequences-similar-enough-to-be-identified-by-BLAST. The web covers a far, far greater area of the sequence landscape than your little island. There’s not much point in discussing more complex issues if we can’t resolve simple points like this.
I’m sorry, but I do not see how compensatory mutations 1) significantly increase the size of the islands within sequence space (as far as the absolute number of protein-based sequences in sequence space that can produce a given type of function), 2) create narrow paths between qualitatively different islands at higher levels of functional complexity that are significantly more likely to be traversed in a given amount of time, or 3) significantly narrow the non-beneficial gap distances between different islands within sequence space.
As far as I can tell, compensatory mutations are simply a way of compensating for detrimental mutations by maintaining the same basic structure and function of the system (within the overall flexibility limitations of the minimum structural threshold requirements of the system in question). I guess I just don’t see how this significantly improves the odds of finding novel islands with qualitatively novel functionality. Such has not been demonstrated anywhere in the literature that I’m aware of, and I personally don’t see how the odds of success could be improved by invoking compensatory mutations.
The reason for this, as far as I can tell, is that compensatory mutations are limited to producing the same basic type of structure that characterizes a particular island within sequence space. For example, “It is well known that the folding of RNA molecules into the stem-loop structure requires base pair matching in the stem part of the molecule, and mutations occurring to one segment of the stem part will disrupt the matching, and therefore, have a deleterious effect on the folding and stability of the molecule. It has been observed that mutations in the complementary segment can rescind the deleterious effect by mutating into a base pair that matches the already mutated base, thus recovering the fitness of the original molecule (Kelley et al., 2000; Wilke et al., 2003). (Link)” Of course, in this particular situation, there are very limited choices for workable compensatory mutations given the high degree of required sequence specificity of the structure.
So, as far as I can tell, this nicely illustrates my observation that compensatory mutations simply don’t produce novel structures with novel functional complexity. They simply compensate for a loss of structure/function that results from detrimental mutations by trying to get back to the original to one degree or another. This is why compensatory mutations are so constrained. Depending upon the detrimental mutation, only a limited number of compensatory mutations are possible – and most of these do not provide full functional recovery from what was lost. In other words, the original level of functionality is not entirely reproduced by most compensatory mutations. In fact, populations tend to fix compensatory mutations only when the rate of compensatory mutations exceeds the rate of reversion or back mutations by at least an order of magnitude (Levine et al., 2000). This means, of course, that back or reversion mutations are usually the most ideal solution for resolving detrimental mutations, but they are not always the first to be realized by random mutations. And, compensatory mutations are usually detrimental by themselves. That means that once a compensatory mutation occurs, it is no longer beneficial to revert the original detrimental mutation (since one would also have to revert the compensatory mutation as well). This is somewhat of a problem because, since compensatory options are more common, a compensatory mutation is usually realized before a reversion mutation – up to 70% of the time (Link). However, because compensatory mutations are not generally as good as back mutations at completely restoring the original level of functionality, they are less likely to be fixed in a population – especially in larger populations.
In any case, your argument that there are a number of potential compensatory mutations for most detrimental mutations (an average of 9 or so) is clearly correct. So then, doesn’t it therefore follow that these compensatory options do in fact expand the size of the island and create a net-like appearance across vast regions of sequence space? – as you originally claimed?
Consider that the overall shape of the island remains the same – with sharp peaks and steeply sloping sides. I do not see how compensatory mutational options change this basic appearance of the island? They simply make the island 10 times larger, and much more spread out, than if there were no compensatory options (as in a case of increased specificity requirements) – which isn’t really relevant given the overall size of sequence space and the ratio of beneficial vs. non-beneficial within sequence space.
For example, say that a protein sequence experiences a point mutation that happens to be detrimental. Say there are 10 potentially compensating mutational options to “fix” this protein to some useful degree, one of which is realized. Now, let’s say that this protein experiences a second detrimental mutation in a different location. Again, there are 10 more potentially compensating mutational options, one of which is realized. What happens with each additional detrimental mutation to the sequence? Is the observation of Bloom et al., that proteins suffer an exponential decline in functionality with each additional detrimental mutation, negated by the compensatory mutation options? Not at all. The compensatory mutations simply expand the size of the island to the extent allowed by the specificity requirements of the system, but they do not make it possible for the island to stretch out indefinitely over all of sequence space. The minimum structural threshold requirements simply will not allow this. The same basic structure with the same basic function and the same minimum number of required building blocks must be maintained. And, that puts a very restrictive limit on the overall size of the island with this type of function within sequence space (size being defined by the absolute number of protein sequences that can produce a given structure with a particular type of function). In other words, the actual maximum number of protein sequences that comprise the island is very, very limited.
But, you argue, compensatory mutations may allow for narrow arms or branches to extend long distances (Hamming or Levenshtein distances) within sequence space. And, this is true. However, remember that sequence space is hyperdimensional. Changing the shape of an island comprised of a limited number of grains of sand doesn’t significantly change the odds of putting it within closer range of the next closest island within sequence space. After all, the shape of the island has a random appearance that is not biased toward other surrounding islands within sequence space. Therefore, the odds of successfully locating a different island with qualitatively novel functionality remain essentially the same as far as I can tell. There is no significant change in the minimum likely gap distances between any part of the starting island, regardless of its shape, and any other island within sequence space at higher levels of functional complexity. Other islands with other types of functions still have to be found by getting off of the original island and crossing a non-selectable gap distance – and I don’t see how compensatory mutations improve these odds?
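To give a feel for the scale being argued here, consider a toy combinatorial calculation. The numbers are my own illustrative assumptions, not figures from any cited paper: even if every sequence within 10 substitutions of a 100-residue protein is counted as “on the island”, that island is a vanishing fraction of the full sequence space.

```python
import math

def fraction_within_radius(n, r, alphabet=20):
    """Fraction of all length-n sequences lying within Hamming
    distance r of one reference sequence: the size of the Hamming
    ball divided by the size of the whole space (alphabet**n)."""
    ball = sum(math.comb(n, k) * (alphabet - 1) ** k for k in range(r + 1))
    return ball / alphabet ** n

# Toy numbers: a 100-residue protein, generous radius of 10
# substitutions -- the "island" still covers only ~1e-104 of the space.
print(fraction_within_radius(100, 10))
```

Note that this fraction depends only on how many sequences the island contains, not on the island’s shape – which is the point being made above about reshaping a fixed number of “grains of sand”.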
And, that, in a nutshell, is why your proposed steppingstones in your flagellar evolution pathway simply require too many non-selectable differences to get from one to the other in a reasonable amount of time.
Re: flagellum — the word “Pallen” does not appear in your webpage, and the homology-and-unessentiality table from that paper is not discussed.
I do discuss the homologies that you proposed in your 2003 paper, “Evolution in (Brownian) space: a model for the origin of the bacterial flagellum”.
It seems to me that the key difference you see between your 2003 and your 2006 papers is the discovery of more homologies for vital structural flagellar proteins. You write:
The important overall point, as discussed in my blog post, is that of the 42 proteins in Table 1 of Pallen and Matzke, only two proteins, FliE and FlgD, are both essential and have no identified homologous proteins. This is substantially more impressive than the situation in 2003, and means that the evidence for the evolutionary origin of the flagellum by standard gene duplication and cooption processes is even stronger than in 2003. (Link)
You see, I really don’t care if every single one of the individual protein parts within the flagellum is homologous to proteins within other systems (even though a couple of them are not currently known to be homologous to anything else). This is completely irrelevant to the plausibility of the evolution of higher-level systems based on pre-existing subsystems. You see, the problem is that having all the required parts to make a new type of complex system isn’t enough. Why not? Because these parts must be modified in very specific ways before they can work together in a new specified arrangement as parts of a different type of complex system – like a flagellar motility system. And, the number of required modifications to get the parts in your proposed pathway to work together, to any selectable advantage at a higher level of functional complexity, is simply too great to be realized this side of a practical eternity of time.
How is that? After all, is it not possible for the required parts to be perfectly homologous between different systems? Well no, it’s not. If this were the case, all kinds of very different complex systems could be produced using identical subparts. The reason why this doesn’t happen is that qualitatively different complex systems require key modifications before the otherwise homologous parts can work together in different ways. And, the greater the number of these required non-selectable modifications to otherwise homologous parts, the exponentially greater the average random walk time. So, by the time you’re at a level where the minimum number of required non-selectable modifications is a couple dozen or so, the average time to randomly produce all of these required modifications, which cannot be guided by natural selection, is trillions upon trillions of years. And, I fail to see how compensatory mutations help to solve this problem. As far as I can tell, they do nothing to solve this problem for the ToE.
In short, homologies simply don’t cut it because it isn’t the similarities that are important, but the number of required non-selectable modifications that completely kills off evolutionary potential, in an exponential manner, beyond very low levels of functional complexity.
Dr. Nick Matzke Explains the Evolution of Complexity
3D molecular evolution is based on the proper arrangement of characters in 2D sequences, the same as in any human language system – like English or French, etc. And, the appearance of sequence space at various levels of functional complexity is also essentially the same.
Recent Comments by Sean Pitman
The Arguments of Adventists Opposed to Vaccines
The LORD does not suffer fools who deliberately put themselves in paths of known dangers. If you deliberately jump off a cliff, putting the LORD to the test, this is not virtuous faith, but presumption – a sin against God.
The Arguments of Adventists Opposed to Vaccines
After extensive review of the available data, the FDA issued “emergency use authorization” for the Pfizer and Moderna mRNA vaccines. Pfizer, in particular, is planning on applying for full FDA approval as early as the middle of this month (April 2021).
As far as the length of immunity, it is currently known that robust immunity following mRNA vaccination lasts “at least” six months, and probably years (Link). However, if additional variants arise that aren’t effectively covered by the current vaccines, additional booster shots would be needed.
1. I assume some defective mRNA strands and lipid layers can be generated during the myriad of involved complex chemical processes. Do we understand the percentage of defective nanoparticles / mRNA strands? Does the process include QA that somehow reduces or eliminates potentially harmful defects? What is the risk of defective mRNA strands that could encode for harmful proteins? Any other associated risks here that I am not addressing?
Given that the mRNA sequences in the Pfizer and Moderna vaccines are synthetically produced, I would say that there are very few defective mRNA sequences. And, when it comes to producing proteins based on these few defective sequences, the additional risk from such defective sequences for the human body would be, effectively, zero. In fact, a few slight variations in the sequence of the spike protein would only result in slight variations in the immune system response. And, producing such slight variations is already part of how our human immune system is programmed to work – automatically producing slight variations in the antibodies produced against a particular type of foreign antigen, for example.
2. How much independent review occurred with these vaccines? Is the Global Advisory Committee on Vaccine Safety the only body that reviewed them? Do scientists get hands-on and eyes-on access to the actual chemical processes to verify what is happening (in vitro and in vivo), or are they just provided with white papers and reports for review?
A great many scientists were involved in the production and review of the mRNA vaccines. These vaccines, how they work, and their effects on human biochemistry are very well known by a great many scientists who work in this field of immunochemistry. There are no fundamental secrets here.
3. Some papers and FAQs claim the generated viral “spike protein” is presented on the cell surface. Some of your dialogue here seems to indicate that this is not the case. Which is it? How is it presented? Is it presented in a variety of ways?
Here are a few diagrams that illustrate what’s happening within different cells of the body where the mRNA sequences are decoded and presented:
Mechanism of action of mRNA vaccines. 1. The mRNA is in vitro transcribed (IVT) from a DNA template in a cell-free system. 2. IVT mRNA is subsequently transfected into dendritic cells (DCs) via (3) endocytosis. 4. Entrapped mRNA undergoes endosomal escape and is released into the cytosol. 5. Using the translational machinery of host cells (ribosomes), the mRNA is translated into antigenic proteins. The translated antigenic protein undergoes post-translational modification and can act in the cell where it is generated. 6. Alternatively, the protein is secreted from the host cell. 7. Antigen protein is degraded by the proteasome in the cytoplasm. The generated antigenic peptide epitopes are transported into the endoplasmic reticulum and loaded onto major histocompatibility complex (MHC) class I molecules (MHC I). 8. The loaded MHC I-peptide epitope complexes are presented on the surface of cells, eventually leading to the induction of antigen-specific CD8+ T cell responses after T-cell receptor recognition and appropriate co-stimulation. 9. Exogenous proteins are taken up by DCs. 10. They are degraded in endosomes and presented via the MHC II pathway. Moreover, to obtain cognate T-cell help in antigen-presenting cells, the protein should be routed through the MHC II pathway. 11. The generated antigenic peptide epitopes are subsequently loaded onto MHC II molecules. 12. The loaded MHC II-peptide epitope complexes are presented on the surface of cells, leading to the induction of antigen-specific CD4+ T cell responses. Exogenous antigens can also be processed and loaded onto MHC class I molecules via a mechanism known as cross-presentation. (Link)
Now, the mRNA-1273-encoded prefusion-stabilized S protein (Moderna vaccine) consists of the SARS-CoV-2 glycoprotein with a transmembrane anchor and an intact S1–S2 cleavage site. The presence of the transmembrane anchor would seem to enable some of the spike proteins to remain attached to the surface of the cell that produced them, such as a muscle cell, while still being recognized as “foreign” by the immune system. (Link)
See also: Link
Are mRNA Vaccines for COVID-19 helpful or harmful?
The following commentary by organic chemist Derek Lowe is also helpful in understanding this question (December 4, 2020):
Bob Wachter of UCSF had a very good thread on Twitter about vaccine rollouts the other day, and one of the good points he made was this one. We’re talking about treating very, very large populations, which means that you’re going to see the usual run of mortality and morbidity that you see across large samples. Specifically, if you take 10 million people and just wave your hand back and forth over their upper arms, in the next two months you would expect to see about 4,000 heart attacks. About 4,000 strokes. Over 9,000 new diagnoses of cancer. And about 14,000 of that ten million will die, out of usual all-causes mortality. No one would notice. That’s how many people die and get sick anyway.
But if you took those ten million people and gave them a new vaccine instead, there’s a real danger that those heart attacks, cancer diagnoses, and deaths will be attributed to the vaccine. I mean, if you reach a large enough population, you are literally going to have cases where someone gets the vaccine and drops dead the next day (just as they would have if they *didn’t* get the vaccine). It could prove difficult to convince that person’s friends and relatives of that lack of connection, though. Post hoc ergo propter hoc is one of the most powerful fallacies of human logic, and we’re not going to get rid of it any time soon. Especially when it comes to vaccines. The best we can do, I think, is to try to get the word out in advance. Let people know that such things are going to happen, because people get sick and die constantly in this world. The key will be whether they are getting sick or dying at a noticeably higher rate once they have been vaccinated.
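Lowe’s back-of-the-envelope figures can be reproduced with simple base-rate arithmetic. The annual per-person rates below are rough illustrative assumptions of my own (approximating published US figures), not values taken from his post:

```python
def expected_background_events(population, annual_rate, months):
    """Expected number of events in a cohort over a time window,
    assuming the annual per-person rate applies uniformly in time."""
    return population * annual_rate * months / 12

pop = 10_000_000  # the ten-million-person cohort in Lowe's example

# Assumed ~0.24%/yr heart-attack incidence -> ~4,000 in two months
print(round(expected_background_events(pop, 0.0024, 2)))
# Assumed ~0.85%/yr all-cause mortality -> ~14,000 deaths in two months
print(round(expected_background_events(pop, 0.0085, 2)))
```

The point of the arithmetic is that these thousands of events occur in any two-month window with or without vaccination – so they will inevitably occur, coincidentally, shortly after vaccination in a rollout of this size.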
No such safety signals have appeared for the first vaccines to roll out (Moderna and Pfizer/BioNTech). In fact, we should be seeing the exact opposite effects on mortality and morbidity as more and more people get vaccinated. The excess-death figures so far in the coronavirus pandemic have been appalling (well over 300,000 in the US), and I certainly think mass vaccination is the most powerful method we have to knock that back down to normal.
That’s going to be harder to do, though, if we get screaming headlines about people falling over due to heart attacks after getting their vaccine shots. Be braced.
Are mRNA Vaccines for COVID-19 helpful or harmful?
I know that various European countries, including the Netherlands, Denmark, and Spain, have reported outbreaks of COVID-19 on mink pelt farms – leading to the culling of more than a million animals. From laboratory experiments, it’s also clear that ferrets (a relative of the mink) are readily infected with the “novel coronavirus”. Aside from this, however, I’m not aware of any “issues” with animal experiments regarding COVID-19 in particular. However, in 2008 there was an interesting experiment involving ferrets that were given the flu vaccine against the H1N1 virus – which then became sicker once exposed to the live virus, as compared to those ferrets that weren’t vaccinated. The reason for the effect was unclear, and Skowronski, the lead author, urged other research groups to take up the question.
“Skowronski likened the mechanism to what happens with dengue viruses. People who have been infected with one subtype of dengue don’t develop immunity to the other three. In fact, they are more at risk of developing a life-threatening form of dengue if they are infected with one of the other strains.”
“Skowronski called the second theory the infection block hypothesis. Having a bout of the flu gives the infected person antibodies that may be able, for a time, to fend off other strains; flu shots only protect against the strains they contain. So under this theory, people who didn’t have flu in 2008 because they got a flu shot may have been less well armed against the pandemic virus.”
While interesting, such an effect has not been identified in the animal or human trials for the mRNA vaccines against COVID-19. Also, subsequently updated flu vaccines to the H1N1 strain haven’t had this problem either (Link).