The Darwinian mechanism depends upon random genetic mutations to produce variability within gene pools, upon which natural selection can then act to preferentially pass the most beneficial mutations on to future generations. One of the serious problems here (and there are more) is that beneficial mutations are very rare within the gene pools of complex multicellular organisms – all mammals, for instance. In fact, the ratio of detrimental to beneficial mutations is on the order of a million to one (Gerrish and Lenski, 1998). This is a real problem even if mutation rates seem relatively mild.
Table of Contents
- 1 High mutation rates:
- 2 Claims of high genome functionality:
- 3 Claims of low genome functionality:
- 4 Function beyond sequence constraint between species:
- 5 The debate continues:
- 6 The problem is exponentially worse for Darwin:
- 7 Current state of the situation:
- 8 Functional genomic redundancy:
- 9 Turtles all the way down:
High mutation rates:
Take humans, for instance. A newly born human baby carries, on average, around 100 new point mutations (Lynch, 2016). Some other mammals, like mice, sustain a significantly higher mutation rate over a given span of time: “While the generational mutation rate in mice is 40% of that in humans, the annual mutation rate is 16 times higher, and the mutation rate per cell division is two-fold higher.” (Lindsay et al., 2016; see also Milholland et al., 2017).
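As a quick check of how these per-generation and per-year figures fit together, here is a minimal sketch (only the two quoted ratios are used; the 30-year human generation time at the end is an illustrative assumption, not a number from the cited papers):

```python
# Relationship between per-generation and per-year mutation rates:
#   annual_rate = generational_rate / generation_time
# Using only the two ratios quoted above from Lindsay et al.:
gen_rate_ratio = 0.40      # mouse per-generation rate / human per-generation rate
annual_rate_ratio = 16.0   # mouse per-year rate / human per-year rate

# annual_ratio = gen_ratio * (T_human / T_mouse), so:
generation_time_ratio = annual_rate_ratio / gen_rate_ratio
print(f"Implied human-to-mouse generation-time ratio: {generation_time_ratio:.0f}")  # ~40

# Purely illustrative: a ~30-year human generation would then correspond to a
# mouse generation of roughly 30/40 of a year, i.e. about 9 months.
print(f"Implied mouse generation time: {30 / generation_time_ratio * 12:.0f} months")
```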
Claims of high genome functionality:
So, how many of these 100 novel mutations are functional? For a long time it was thought that only around 1-2% of the human genome was functional – the rest being nonfunctional “junk DNA”. Since then, the entire human genome has been analyzed for functional elements by the ENCODE project, a large international research consortium. When the consortium published its results in 2012, it made a shocking claim – that while only about 1% of the human genome codes for proteins, a whopping 80% of the genome is transcribed (into RNA molecules) or has some other kind of functional role.
In the paper’s abstract the team interpreted their observations as follows:
These data enabled us to assign biochemical functions for 80% of the genome, in particular outside of the well-studied protein-coding regions. Many discovered candidate regulatory elements are physically associated with one another and with expressed genes, providing new insights into the mechanisms of gene regulation. The newly identified elements also show a statistical correspondence to sequence variants linked to human disease, and can thereby guide interpretation of this variation.
Similar observations have been made for the mouse genome, where some 300,000 regulatory regions comprise about 11% of the total sequence (Link). In fact, there was also a “mouse ENCODE project” that produced results similar to those of the human ENCODE project. Piero Carninci wrote a summary article, published in Nature, entitled “Genomics: Mice in the ENCODE spotlight,” which highlighted the following:
Surprisingly, the Mouse ENCODE Consortium … finds that sequences commonly considered useless or harmful, such as retrotransposon elements (stretches of DNA that have been incorporated into chromosomal sequences following reverse transcription from RNA), have species-specific regulatory activity. Because retrotransposon elements can contain embedded transcription-factor binding sites, this may provide unexpected regulatory plasticity… Evolutionary conservation of primary sequence is typically considered synonymous with conserved function, but this finding suggests that this concept should be reinterpreted, because insertions of retrotransposon elements in new genomic regions are not conserved between species…
the existing findings are already thought-provoking. For example, they suggest that we should rethink the relationship between genomic function and evolutionary conservation. Regulatory regions and long non-coding RNAs (lncRNAs) are not subject to the evolutionary constraints of protein-coding genes, which may help to explain the sequence drifts reported in these papers. However, it is striking that transcription-factor networks are conserved despite low conservation of their binding positions in the genome.
However, the claim of 80% functionality for the human genome was met, of course, with very strong resistance from the scientific community – many of whose members immediately saw the fundamental problem that this would create for Darwinism. At the 2013 meeting of the Society for Molecular Biology and Evolution in Chicago, Dan Graur, one of the strongest opponents of the ENCODE team’s conclusions, argued the following:
If the human genome is indeed devoid of junk DNA as implied by the ENCODE project, then a long, undirected evolutionary process cannot explain the human genome… If ENCODE is right, then evolution is wrong.
Dan Graur, 2013
See also a couple interesting discussions of this talk by Dr. Larry Moran (Link) and Dr. Jay Wile (Link).
Seems to be a rather obvious truism I would think…
Claims of low genome functionality:
It’s no surprise, then, that one of the first published criticisms was an article entitled, “On the Immortality of Television Sets: ‘Function’ in the Human Genome According to the Evolution-Free Gospel of ENCODE” (Graur et al., Genome Biology and Evolution, 2013). In the abstract the authors reject the ENCODE thesis, claiming that the evolutionarily constrained regions of the genome make up less than 10% of its total DNA. This work was followed by a more detailed analysis of the fraction of the genome that is evolutionarily constrained in mammals, which provided an extrapolated estimate that 8.2% of the human genome is constrained (Rands et al., PLoS Genetics, 2014).
However, the problem with using sequence conservation between species as the rule for defining genomic function is that this argument does not accommodate continual evolution very well – which would seem to defeat the whole point of the argument. After all, we humans are not mice. Indeed, the senior author of the PLoS paper, Chris Ponting, Ph.D., made the following interesting observation:
This is in large part a matter of different definitions of what is “functional” DNA. We don’t think our figure is actually too different from what you would get looking at ENCODE’s bank of data using the same definition for functional DNA. (Link)
So, it’s a matter of definition – and interpretation. The idea that a sequence cannot be functional unless it is conserved between different species is based entirely on evolutionary assumptions rather than on any direct determination of whether a particular sequence actually is functional within a particular organism. And these assumptions haven’t turned out to be true. Many regulatory sequences and known functional non-coding RNAs, including many microRNAs, are not conserved over significant evolutionary distances, and recent evidence from the ENCODE project suggests that many functional elements show no detectable level of sequence constraint. Also, a 2010 report on research by Kunarso et al. in Nature suggests:
Although sequence conservation has proven useful as a predictor of functional regulatory elements in the genome, the observations by Kunarso et al. are a reminder that it is not justified to assume in turn that all functional regulatory elements show evidence of sequence constraint. (Link)
Function beyond sequence constraint between species:
Shape-based constraint, evolutionary turnover, and lineage-specific constraint:
This is in line with even the more conservative estimates published in the literature since the ENCODE project’s 2012 paper. For example, Kellis et al. (2014) argue that:
The lower bound estimate that 5% of the human genome has been under evolutionary constraint was based on the excess conservation observed in mammalian alignments relative to a neutral reference (typically ancestral repeats, small introns, or fourfold degenerate codon positions). However, estimates that incorporate alternate references, shape-based constraint, evolutionary turnover, or lineage-specific constraint each suggests roughly two to three times more constraint than previously (12-15%), and their union might be even larger as they each correct different aspects of alignment-based excess constraint…. Although still weakly powered, human population studies suggest that an additional 4-11% of the genome may be under lineage-specific constraint after specifically excluding protein-coding regions…
Both scientists and nonscientists have an intuitive definition of function, but each scientific discipline relies primarily on different lines of evidence indicative of function. Geneticists, evolutionary biologists, and molecular biologists apply distinct approaches, evaluating different and complementary lines of evidence. The genetic approach evaluates the phenotypic consequences of perturbations, the evolutionary approach quantifies selective constraint, and the biochemical approach measures evidence of molecular activity. All three approaches can be highly informative of the biological relevance of a genomic segment and groups of elements identified by each approach are often quantitatively enriched for each other. However, the methods vary considerably with respect to the specific elements they predict and the extent of the human genome annotated by each…
The proportion of the human genome assigned to candidate functions varies markedly among the different approaches, with estimates from biochemical approaches being considerably larger than those of genetic and evolutionary approaches. These differences have stimulated scientific debate regarding the interpretation and relative merits of the various approaches.
This means that, at minimum, between 16% and 26% of the genome is likely to be functionally constrained to one degree or another (the 12-15% of constraint detected across mammals plus the additional 4-11% of lineage-specific constraint) – based only on the amount of sequence constraint, imposed by natural selection, within the genome. This doesn’t even include those functional elements that are not significantly constrained within a lineage, but are detected via the “biochemical approach” to the data.
Conserved RNA structures:
For example, there is also evidence that up to 30% of the genome in various creatures produces RNA transcripts whose structure is “conserved” even when the underlying sequence is not. Martin Smith et al. (2013) reported:
When applied to consistency-based multiple genome alignments of 35 [placental and marsupial, including bats, mice, pigs, cows, dolphins and humans] mammals, our approach confidently identifies >4 million evolutionarily constrained RNA structures using a conservative sensitivity threshold… These predictions comprise 13.6% of the human genome, 88% of which fall outside any known sequence-constrained element, suggesting that a large proportion of the mammalian genome is functional…
Our findings provide an additional layer of support for previous reports advancing that >20% of the human genome is subjected to evolutionary selection while suggesting that additional evidence for function can be uncovered through careful investigation of analytically involute higher-order RNA structures.
The RNA structure predictions we report using conservative thresholds are likely to span >13.6% of the human genome. This number is probably a substantial underestimate of the true proportion given the conservative scoring thresholds employed, the neglect of pseudoknots, the liberal distance between overlapping windows and the incapacity of the sliding-window approach to detect base-pair interactions outside the fixed window length. A less conservative estimate would place this ratio somewhere above 20% from the reported sensitivities measured from native RFAM alignments and over 30% from the observed sensitivities derived from sequence-based realignment of RFAM data. (see also: Link)
But how could RNA secondary structure be conserved if the DNA encoding it is not? An analysis of the fitness effects of compensatory mutations (HFSP Journal, 2009) may explain:
It is well known that the folding of RNA molecules into the stem-loop structure requires base pair matching in the stem part of the molecule, and mutations occurring to one segment of the stem part will disrupt the matching, and therefore, have a deleterious effect on the folding and stability of the molecule. It has been observed that mutations in the complementary segment can rescind the deleterious effect by mutating into a base pair that matches the already mutated base, thus recovering the fitness of the original molecule (Kelley et al., 2000; Wilke et al., 2003).
In other words, if a base on one side of the stem mutates, the base it pairs with on the other side can also mutate to match it again, which maintains or “conserves” the same secondary structure – even though the sequence itself is no longer conserved.
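Here is a minimal sketch of that idea (the stem sequences and the pairing check below are purely illustrative and hypothetical, not taken from any of the cited studies):

```python
# Toy illustration of a compensatory mutation in an RNA stem (hypothetical sequences).
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}  # Watson-Crick + wobble

def stem_intact(strand5, strand3):
    """True if each base of the 5' strand pairs with the antiparallel 3' strand."""
    return all((a, b) in PAIRS for a, b in zip(strand5, reversed(strand3)))

stem5, stem3 = "GCAUG", "CAUGC"             # original stem: fully base-paired
print(stem_intact(stem5, stem3))            # True  - structure intact

mutated5 = "GCAAG"                          # a U -> A mutation on one strand breaks a pair
print(stem_intact(mutated5, stem3))         # False - structure disrupted

compensated3 = "CUUGC"                      # an A -> U compensatory mutation on the other strand
print(stem_intact(mutated5, compensated3))  # True  - structure restored, sequence changed
```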
Considering such discoveries, it is very likely that the functionality of the human genome is well over 8.2% – perhaps to the point where the ENCODE scientists weren’t entirely crazy after all?
The debate continues:
Upper limit of 25% functionality:
Yet, the debate continues – for obvious reasons. In a 2017 paper, Dan Graur, a strong opponent of the ENCODE conclusions who had originally argued that less than 10% of the human genome was functional, modified this a bit, arguing that up to 25% of the genome could be functional to one degree or another (Graur, 2017). Graur went on to explain that if any more of the human genome were functional (i.e., not junk DNA), then the evolution of mankind would not be possible (due to genetic degeneration). More specifically, Graur states that human evolution would be very problematic even if the genome were 10% functional, and would be completely impossible if 25% or more were functional.
Evolutionists overstepping themselves:
Again, we see that the determination of functionality here is not based on direct analysis of the genetic sequence in question, but on a determination of what Darwinian evolution might allow. Even here, however, Graur oversteps himself. How so? Well, consider that in 2012, before the findings of the ENCODE project, Peter D. Keightley argued that only “5% of the mammalian genome is under selection” and that, based on this assumption, the expected deleterious mutation rate would only be around “2.2 for the whole diploid genome per generation” – given an overall mutation rate of 70 per person per generation (Link). Even this detrimental mutation rate (U) would demand a reproductive rate of “only” about 18 children per woman per generation (based on the genetic load formula 1 - e^(-U) of Kimura and Maruyama, 1966: mean fitness is e^(-2.2) ≈ 0.11, so roughly 2/0.11 ≈ 18 offspring are needed for two to survive unaffected). Even this reproductive rate is problematic. After all, how many women in first-world countries experience pregnancy 18 times in their lifetimes? And where has there ever been a death (or reproductive failure) rate of nearly 89%, per generation, throughout all of human history?
The problem is exponentially worse for Darwin:
Far too many detrimental mutations:
Of course, we now know that the situation is much worse – exponentially worse – since a good deal more than 5% of the human genome is known to be functional to one degree or another (at least 16-26%, as noted above), and the mutation rate is known to be a bit higher than 70, at around 100 mutations per person per generation. Consider the situation we humans would be in if only 10% of our genome were functional. That would imply a detrimental mutation rate of around 10 per person per generation, which would require a reproductive rate of around 44,000 children per woman for two to survive without detrimental mutations (a pre-reproductive death rate of over 99.99% of offspring per generation). If the human genome were 20% functional, the detrimental mutation rate would be around 20 per person per generation and the required reproductive rate would be on the order of 970 million children per woman for just two to survive without any detrimental mutations. Obviously, such reproductive rates are impossible for human beings. This is true even for the low end of Graur’s suggested range for the functional percentage of the human genome.
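For those who wish to check the arithmetic, here is a minimal sketch using the 1 - e^(-U) load model referenced above (the U values are simply the scenarios discussed in this article):

```python
import math

# Mean fitness under multiplicative deleterious effects (Kimura and Maruyama, 1966):
#   w = exp(-U), where U = new detrimental mutations per person per generation.
# For 2 offspring to escape new detrimental mutations, roughly 2/w births per woman are needed.
def required_offspring(U):
    return 2 / math.exp(-U)

for U in (2.2, 3, 10, 20):
    n = required_offspring(U)
    print(f"U = {U:>4}: ~{n:,.0f} offspring per woman; unaffected fraction ~{2 / n:.2e}")

# U = 2.2 -> ~18 (Keightley's 5% scenario); U = 3 -> ~40 (Nachman and Crowell, below);
# U = 10 -> ~44,000; U = 20 -> ~9.7e8 (the 10% and 20% functionality scenarios above).
```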
Nachman and Crowell:
Nachman and Crowell detailed this perplexing situation for the Darwinian position back in 2000 in the following conclusion from their paper on human mutation rates – and it’s only gotten exponentially worse for Darwinists since then:
The high deleterious mutation rate in humans presents a paradox. If mutations interact multiplicatively, the genetic load associated with such a high U [detrimental mutation rate] would be intolerable in species with a low rate of reproduction [like humans and apes and pretty much all complex organisms like birds, mammals, reptiles, etc] . . .
The reduction in fitness (i.e., the genetic load) due to deleterious mutations with multiplicative effects is given by 1 - e^(-U) (Kimura and Maruyama 1966). For U = 3, the average fitness is reduced to 0.05, or put differently, each female would need to produce 40 offspring for 2 to survive and maintain the population at constant size. This assumes that all mortality is due to selection and so the actual number of offspring required to maintain a constant population size is probably higher.
Michael W. Nachman and Susan L. Crowell, “Estimate of the Mutation Rate per Nucleotide in Humans,” Genetics, September 2000, 156: 297-304 (Link)
Hermann Muller:
Consider also that Hermann Joseph Muller, a famous pioneer in the field of genetics, argued that a detrimental mutation rate of just 0.5 per person per generation (corresponding to a required reproductive rate of about 3 children per woman) would doom the human population to eventual extinction (H. J. Muller, 1950). After all, it was Muller who realized that, in effect, each detrimental mutation ultimately leads to one “genetic death,” since each mutation can be eliminated only by death or failure to reproduce. Sexual recombination softens this conclusion somewhat (by about half), but does not really solve the problem (Link). Nor do various forms of truncation selection and quasi-truncation selection (Link) or positive epistasis (Link) solve a problem of this magnitude. Rather, if anything, “on the balance it seems that beneficial mutation assemblage, rather than detrimental mutation removal, is the stronger evolutionary mechanism underlying the advantage of sex” (Gray and Goddard, 2012). The problem, of course, is the extreme rarity of beneficial mutations relative to detrimental ones (on the order of one in a million).
Mutations Galore:
Consider also an excerpt from a Scientific American article published in 1999 entitled, “Mutations Galore”:
According to standard population genetics theory, the figure of three harmful mutations per person per generation implies that three people would have to die prematurely in each generation (or fail to reproduce) for each person who reproduced in order to eliminate the now absent deleterious mutations [75% death rate]. Humans do not reproduce fast enough to support such a huge death toll. As James F. Crow of the University of Wisconsin asked rhetorically, in a commentary in Nature on Eyre-Walker and Keightley’s analysis: “Why aren’t we extinct?”
Crow’s answer is that sex, which shuffles genes around, allows detrimental mutations to be eliminated in bunches. The new findings thus support the idea that sex evolved because individuals who (thanks to sex) inherited several bad mutations rid the gene pool of all of them at once, by failing to survive or reproduce.
Tim Beardsley, Mutations Galore, Scientific American, Apr 1999, Vol. 280 Issue 4, p32, 2p
Again, however, sexual recombination only solves about half the problem relative to asexual reproduction – which isn’t remotely enough to deal with the severe magnitude of the problem at hand.
As a relevant aside, consider that the human Y-chromosome is passed on without recombination (effectively asexually), and is therefore degenerating at about twice the rate of the other chromosomes (Link, Link).
Current state of the situation:
As it currently stands, humans are devolving rather rapidly – as are other complex organisms. Studies across different species estimate that, apart from selection, the decrease in fitness from mutations is 0.2–5% per generation; for humans the decline has been estimated at up to 5% per generation, though most estimates support a long-term decline of around 1% (Lynch, 2016). An earlier paper published in 2012 suggested a detrimental mutation rate for humans of 6 per child (Kong, 2012).
In any case, these published rates of decline appear to be quite conservative, since even the most conservative current estimates of genome functionality are higher than they imply. Given an average of 100 mutations per child per generation and a detrimental-to-beneficial ratio of around a million to one, the actual detrimental mutation rate should roughly track the fraction of functional DNA in the genome (on the order of 100 times the functional fraction). And, as noted above, even with the most conservative estimates of genome functionality proposed by evolutionary scientists themselves (8-10% or so), the resulting detrimental mutation rate (8-10 per individual per generation) would inevitably lead to a steady and fairly rapid decline in the informational quality of the human genome – with eventual genetic meltdown and extinction. Natural selection simply cannot overcome this degree of decline, given the low reproductive rate of humans and other slowly reproducing organisms on this planet – which strongly counters the notion that humans and apes and mammals in general have existed on this planet for more than a few thousand years, much less millions of years.
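To see how quickly even the low end of these published decline rates compounds, here is a small illustrative sketch (the generation counts and the assumption of a constant, unopposed decline are simplifications for illustration only):

```python
# Compounding of a constant per-generation fitness decline (illustrative only).
def fitness_after(generations, decline_per_generation):
    return (1.0 - decline_per_generation) ** generations

for decline in (0.01, 0.05):               # the 1% and 5% per-generation figures cited above
    for gens in (100, 500, 1000):          # assumed generation counts (~20-30 years each)
        w = fitness_after(gens, decline)
        print(f"{decline:.0%} decline, {gens:4d} generations: "
              f"fitness falls to {w:.3%} of its starting value")

# e.g. a 1% per-generation decline leaves ~37% of the original fitness after 100 generations
# and ~0.004% after 1,000 generations (roughly 20,000-30,000 years at 20-30 years per generation).
```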
See also Basener and Sanford, “The fundamental theorem of natural selection with mutations,” Journal of Mathematical Biology, 2018; Sanford’s book and lecture on “Genetic Entropy”; and Basener’s YouTube lecture (Link).
Functional genomic redundancy:
This is especially true when one considers the degree of redundancy within the human genome and the genomes of other complex, slowly reproducing organisms. Consider, for instance, a situation where a functional genetic element has a spare copy of itself within the same genome. Such redundancy can absorb detrimental mutations for longer without any observable functional effect on the organism. Yet the loss of this redundancy through detrimental mutations is still damaging to the gene pool, since it pushes the gene pool closer to eventual genetic meltdown and extinction.
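As a toy illustration of this point (the copy number and per-copy knockout probability below are hypothetical, chosen only for illustration), with a spare copy in place the observable phenotype is unaffected until every copy is lost, yet the underlying redundancy is eroding the whole time:

```python
import random

random.seed(1)

def generations_until_phenotype_loss(copies=2, p_knockout_per_copy=0.01):
    """Toy model: each generation, every still-intact copy of a functional element is
    independently knocked out with a small probability. The phenotype is only lost once
    *all* copies are gone, so redundancy hides the accumulating damage until the end."""
    intact, generation = copies, 0
    while intact > 0:
        generation += 1
        intact -= sum(random.random() < p_knockout_per_copy for _ in range(intact))
    return generation

trials = [generations_until_phenotype_loss() for _ in range(10_000)]
print(f"Mean generations before the buffered function is finally lost: {sum(trials) / len(trials):.0f}")
# The phenotype looks "fine" for a long time even though the redundancy (and the robustness
# it provides) is being lost - which is the sense in which such mutations still cost the gene pool.
```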
Of course, this redundancy observation isn’t new. In fact, this very same observation was presented by Kellis et al. in 2014:
The approach [i.e., basing functionality only on homologous or “constrained” sequences between various species, or on immediate “loss of function” tests – as, for example, in “knockout mice,” where various genetic segments are deleted from the mouse genome] may also miss elements whose phenotypes occur only in rare cells or specific environmental contexts, or whose effects are too subtle to detect with current assays. Loss-of-function tests can also be buffered by functional redundancy, such that double or triple disruptions are required for a phenotypic consequence. Consistent with redundant, contextual, or subtle functions, the deletion of large and highly conserved genomic segments sometimes has no discernible organismal phenotype and seemingly debilitating mutations in genes thought to be indispensable have been found in the human population.
Clearly, then, this observation significantly undermines the conclusions of Rands et al., because Rands’ figure of just 8.2% overall human genomic functionality is based entirely on “constrained” sequence homologies between different mammalian species. It does not take into account the possibility of functional DNA that would not be significantly constrained between, or even within, various species. In fact, it has been known in a general way for some time that there is a great deal of redundancy in the human genome, since most genes and other functional genetic elements have at least two copies within the genome – with some having several dozen or even several hundred copies. The human genome is in fact a very “repetitive landscape.” Of course, some biologists considered the repetition either superfluous or a sort of “backup supply” of DNA. While it is true that some of the repetition within the human genome could simply be ‘extra DNA’, new research also suggests that such redundant sequences may have a variety of more direct functional roles (Link, Link). Genetic redundancy is key to the robustness of organisms – their built-in flexibility to rapidly adapt to different environments. It is also right in line with very good design. Consider, for example, the arguments of David Stern (an HHMI investigator) in this regard:
Over the past 10 to 20 years, research has shown that instructional regions outside the protein-coding region are important for regulating when genes are turned on and off. Now we’re finding that additional copies of these genetic instructions are important for maintaining stable gene function even in a variable environment, so that genes produce the right output for organisms to develop normally. (Frankel et al., 2010)
For example, in 2008, Michael Levine of the University of California, Berkeley reported the discovery of secondary enhancers for a particular fruit fly gene – enhancers located much farther away from the target gene than the previously discovered enhancers sitting adjacent to it. “Levine’s team called the apparently redundant copies in distant genetic realms “shadow enhancers” and hypothesized that they might serve to make sure that genes are expressed normally, even if development is disturbed. Factors that might induce developmental disturbances include environmental conditions, such as extreme temperatures, and internal factors, such as mutations in other genes.”
So, Stern and his team put Levine’s hypothesis to the test by studying a fruit fly gene that codes for the production of tiny hair-like projections on the insect’s body, which are called trichomes. “The gene, known as shavenbaby, takes its name from the fact that flies with a mutated copy of the gene are nearly hairless. Stern previously led a research effort that identified three primary enhancers for shavenbaby. In the new research, his team discovered two shadow enhancers for shavenbaby, located more than 50,000 base pairs away from the gene.
In their experiments, the researchers deleted these two shadow enhancers, leaving the primary enhancers in place, and observed developing fly embryos under a range of temperature conditions. At optimal temperatures for fruit fly development — around 25 degrees Celsius, or a comfortable 77 degrees Fahrenheit — the embryos without shadow enhancers had only very slight defects in their trichomes. But the results were very different when the researchers observed embryos that developed at temperatures close to the extremes at which developing fruit flies can survive — 17 degrees Celsius, or 63 degrees Fahrenheit, on the low end and 32 degrees Celsius, or 90 degrees Fahrenheit, at the upper limit. These flies without shadow enhancers developed with severe deficiencies in the number of trichomes produced.” (Link)
“These results indicate that the genetic instructions that seemed dependable at optimal temperatures were just not up to the task in other conditions,” Stern said. (Link)
“Backup regulatory DNAs, also called shadow enhancers, ensure the reliable activities of essential genes such as shavenbaby even under adverse conditions, such as increases in temperature,” Levine said. “If Dr. Stern and his associates had not examined the activities of shavenbaby under such conditions, then the shadow enhancers might have been missed since they are not needed when fruit flies are grown at optimal culturing conditions in the laboratory.” (Link)
Consider also the explanation of Denis Noble, a physiologist, regarding this same phenomenon (November, 2013):
Simply by knocking genes out we don’t necessarily reveal function, because the network may buffer what is happening. So you may need to do two knockouts or even three before you finally get through to the phenotype. … If one network doesn’t succeed in producing a component necessary to the functioning of the cell and the organism, then another network is used instead. So most knockouts and mutations are buffered by the network… Now that doesn’t mean to say that these proteins that are made as a consequence of gene templates for them don’t have a function. Of course they do. If you stress the organism you can reveal the function. .. If the organism can’t make product X by mechanism A, it makes it by mechanism B. (Link)
Of course, the very existence of genetic buffering, and the functional redundancies required for it, presents a paradox for evolutionary theory. On one hand, genetic buffering requires redundancy of gene function. On the other hand, such redundancies are unstable in the face of natural selection and are therefore unlikely to be found in extensively evolved genomes (Link). Why, then, does so much genetic buffering continue to exist within the human genome if natural selection does in fact strip away such buffering redundancy over relatively short periods of time? And this is only the tip of the iceberg. “The study of DNA and genetics is beginning to resemble particle physics. Scientists continually find new layers of organization and ever more detailed relationships.” (Link)
Turtles all the way down:
What is left, then, given the weight of evidence currently in hand, is that humans and other complex organisms are degenerating over time – and always have been. This strongly supports the conclusion that things actually were better in the past – that we were better off when we first came into existence than we are now. And, of course, this implies our origin at the hands of a very intelligent Creator who would be very hard to distinguish from a God, or someone with God-like abilities. It also strongly suggests that the only real hope for humanity, and for most other creatures on this planet, is likewise in the hands of our original Creator.
Sean, Dr. John Sanford, who was an important contributor to the development of GMOs, has written a book on this issue entitled, “Genetic Entropy.” I don’t see him quoted anywhere in your article, and I’m wondering if you are familiar with his work. It is noteworthy that Dr. Sanford has abandoned Darwinism and adopted creationism/intelligent design, not originally for religious reasons, but because of this problem.
Bob Helm
Look again. I did reference the 2018 paper of Basener and Sanford (which was the motivation for me writing this particular article). Of course, as you’ve mentioned, Sanford has also written an interesting book on this topic entitled, “Genetic Entropy” – which I’ve previously referenced on this blog (along with a YouTube video of a lecture he gave on the topic at Loma Linda University – Link). For those who haven’t read it or seen Sanford’s lecture on this topic, it’s certainly worth your time…
Sean Pitman
@Sean Pitman: OK, I see it now. Sorry – I missed it earlier.
Bob Helm
@Bob Helm: Dr. Sanford is very familiar to most of us. He was invited to speak at LLU several years ago and I and a great many were privileged to hear him.
kime wesley
Great review. The redundancy factor helps explain why humanity has endured as long as the Bible indicates!
Ariel A. Roth
Hey, I have a concern. An acquaintance of mine has brought some of this to my attention – are some of the things you said outdated, or are they valid? Also, he said scientists don’t use Darwin’s model for evolution – like we don’t use Newton’s model for gravity… basically, scientists have at least claimed that they replaced these models with new ones. Can you help me with these objections? They seem valid. Thank you for your response!
Carlos
As far as the current article is concerned, I know of no “outdated” information. The information is current as far as I’m aware. The detrimental mutation rate is far too high for complex organisms to avoid an inevitable downhill devolutionary path. There is simply no way to rationally avoid this conclusion as far as I’m aware.
So, perhaps your friend could be more specific regarding his particular objections to the information presented?
Sean Pitman
@Carlos: Far from being outdated, I would say that Sean’s arguments are cutting edge. As for the assertion that scientists don’t use Darwin’s model for evolution, that is correct – because Darwin had no knowledge of Mendelian genetics. The original Darwinian model was replaced by the Neo-darwinian Synthesis about 1940, which claims that evolution takes place as natural selection acts on random mutations. Although this model still dominates biology today, it is facing increasingly serious problems, which Sean has touched on.
Bob Helm