Comment on Dr. Jason Rosenhouse “Among the Creationists” by Sean Pitman.

Jason Rosenhouse wrote (on his own Blog):

I am not a biochemist, but it seems pretty obvious that you cannot possibly make a good argument for your claims here [regarding the limits of RM/NS with regard to levels of functional complexity]. We can do no more than study minuscule portions of protein space, and that only in modern organisms. The precise nature of protein space is itself something that evolves with time, which complicates things considerably. The fitness of a gene that codes for a given protein is often dependent on the environment in which it finds itself. The reachability of a given gene likewise depends on what happened previously in natural history. Furthermore, the overall size of the space is simply irrelevant, as I explained during the radio show. Natural selection guarantees that most of the space will never be explored in natural history, while guiding organisms to the functional genes. The result is a vast space where we have no good way of assigning a probability distribution, precisely as I said in my post.

You claim that natural selection guides organisms to functional genes. The problem with this notion is, of course, that natural selection cannot guide, in a positive manner, the random mutations of a mutating sequence at all, not even a little bit, until the mutations happen to hit upon a novel sequence that is actually functionally beneficial over what already exists within the gene pool. Until this happens, natural selection is completely helpless in the process of searching for the edges of novel beneficial islands in sequence space. It simply has no part to play aside from preserving what already exists. It is therefore a preserving force rather than a creative force of nature. RM/NS simply stalls out, in an exponential manner, with each step up the ladder of functional complexity. That is why there are no examples of evolution in action beyond a relatively low level of functional complexity. We aren’t talking about the evolution of highly complex machines here – like the human eye or the mammalian ear. We are talking about the lack of evolution of any qualitatively novel system that requires more than just 1000 specifically arranged residues. That’s not a very high level of functional complexity; it is a rather low level beyond which evolution simply stalls out – despite huge population sizes in bacteria, very rapid generation times, and very high selection pressures. Despite all of these things favoring evolutionary progress at higher levels, the mechanism of RM/NS completely stalls out on a rather low rung of the ladder of functional complexity. Why do you think that might be, if your vision of closely spaced steppingstones were actually correct at higher levels of functional complexity?

Also, the overall nature of protein sequence space simply does not evolve with time in a manner which would actually favor evolutionary discoveries at higher levels of complexity. Most of the problem with protein sequence space is that the vast majority of sequence options within the space are simply not structurally stable and could not form useful proteins of any kind. Beyond this, say the environment changes so that new protein sequences become beneficial within sequence space (which does of course happen). Why does this not provide an evolutionary advantage? Because such changes to the potential targets in sequence space do nothing to the overall ratio of beneficial vs. non-beneficial sequences, since new islands appear while others disappear with such environmental changes. Nor do they do anything to line up the steppingstones in the nice row of very closely spaced steppingstones that you originally claimed.

In short, this key argument, upon which your entire theory depends, is simply mistaken and does not solve the problem of the exponential decline in potentially beneficial islands within sequence space with each step up the ladder of minimum structural threshold requirements.

That’s an in principle argument for being highly skeptical of big bold claims about the nature of sequence space. When you then factor in the myriad practical successes in the field of molecular evolution, and the fact that not many biochemists seem to share your view, it looks like you are once again just waving your hands.

Where are these “practical successes” in the field of molecular evolution? – beyond very low levels of functional complexity? Where is there a single example of evolution in action that produced any qualitatively novel system of function requiring a minimum of more than 1000 specifically arranged residues? As far as I’m aware, there are no such examples in the literature – not a single one.

As I pointed out during our debate, all the practical successes of the mechanism of RM/NS are based on low levels of functional complexity – including antibiotic resistance, novel single-protein enzymes, antifreeze proteins, and all of the other examples that you listed in your book.

The size of the space is irrelevant, as I have already explained. Your other claims are nonsense. At most we can make some judgments about small, local areas of sequence space as we see them in modern organisms and modern environments. That’s plainly insufficient for drawing grand conclusions about the viability of evolution.

The size of sequence space is not irrelevant, because it demonstrates the exponential increase in the overall size of sequence space with each increase in the minimum structural threshold requirements of systems at higher and higher levels of functional complexity. This observation would only be irrelevant if it could be shown that potentially beneficial sequences increase at the same rate. The problem, of course, is that the increase in potentially beneficial sequences is dwarfed by the increase in non-beneficial sequences – which in turn creates the exponentially declining ratio problem.

As far as your claim that we can only make judgments about small local areas of sequence space, that’s true. As you point out, it is completely impossible to explore all of sequence space at higher levels of complexity – since the size of sequence space beyond the level of 1000 specifically arranged residues is beyond imagination, being larger than universes upon universes. However, science, by definition, is about extrapolating the information currently in hand to make predictions about things which cannot be definitively known. And, given the information that is in fact currently in hand, we can gain a very good idea as to the nature of all of sequence space. The same can be said of the universal law of gravity, for example. This law of nature is thought to be true everywhere in the universe even though we haven’t actually tested it in all parts of the universe. It’s a scientific prediction based on the relatively little evidence that we have in hand. The very same thing is true of protein sequence space – or any other form of sequence space that is based on information coded into character sequences (i.e., English, French, Russian, computer code, Morse code, etc.). All of these language/information systems share the same basic feature of sequence space: meaningful/beneficial sequences are randomly distributed throughout the space at various levels of functional complexity.

And, for all of these language/information systems, we can know, with very high confidence and high predictive value, that the ratio of potentially beneficial vs. non-beneficial sequences does in fact decrease, in an exponential manner, with each increase in the minimum size and/or specificity requirement of a sequence.
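To make this concrete, here is a toy numerical illustration (my own simplification for the sake of argument, not a biochemical model): if each of n required positions in a sequence independently tolerates some fraction p of the available characters, then the fraction of all length-n sequences that satisfy every constraint is p^n – a ratio that collapses exponentially as the minimum size requirement n grows linearly.

```python
# Toy illustration (a deliberate simplification, not a biochemical model):
# assume each of n required positions independently tolerates a fraction p
# of the available characters. The fraction of all length-n sequences that
# meet every constraint is then p**n, which declines exponentially as the
# minimum size requirement n grows linearly.

def functional_fraction(n: int, p: float = 0.5) -> float:
    """Fraction of length-n sequences satisfying all n per-position
    constraints, under the independence assumption above."""
    return p ** n

for n in (10, 100, 1000):
    print(f"n = {n:>4}: functional fraction ~ {functional_fraction(n):.2e}")
```

Real sequences are not this simple, of course – positions interact and tolerances vary – but any fixed average per-position tolerance below 100% produces the same exponential decline.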

Along these lines, consider an argument from a paper published in 2000 by Thirumalai and Klimov:

The minimum energy compact structures (MECSs), which have protein-like properties, require that the ground states have H residues surrounded by a large number of hydrophobic residues as is topologically allowed. . . There are implications of the spectacular finding that the number of MECSs, which have protein-like characteristics, is very small and does not grow significantly with the size of the polypeptide chain.

The number of possible sequences for a protein with N amino acids is 20^N which, for N = 100, is approximately 10^130. The number of folds in natural proteins, which are low free energy compact structures, is clearly far less than the number of possible sequences. . .

By imposing simple generic features of proteins (low energy and compaction) on all possible sequences we show that the structure space is sparse compared to the sequence space. Even though the sequence space grows exponentially with N (the number of amino acid residues [by 20^N]) we conjecture that the number of low energy compact structures only scales as lnN [The natural logarithm or the power to which e (2.718 . . . ) would have to be raised to reach N] . . . The number of sequences for which a given fold emerges as a native structure is further reduced by the dual requirements of stability and kinetic accessibility. . . We also suggest that the functional requirement may further reduce the number of sequences that are biologically competent.

So if, as the size of sequence space grows as 20^N, the number of even theoretically useful protein systems only scales as the natural log of N, this differential rapidly produces an unimaginably huge discrepancy between potential target and non-target systems (given that the structures themselves require a certain degree of specificity). For example, the size of 1000aa sequence space is 20^1000 = ~1e1301. According to these authors, what is the number of potentially useful protein structures contained within this space? It is 20^ln(1000) = ~1e9.
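For what it’s worth, this arithmetic is easy to check. The sketch below simply re-derives the quoted numbers from the two scalings used in this argument (20^N for the space, 20^ln(N) for the structures – the latter being my reading of the lnN conjecture quoted above), working in log10 so the enormous exponents never overflow:

```python
import math

# Re-deriving the numbers quoted above from the two scalings used in this
# argument: sequence space grows as 20**N, while useful structures scale
# as 20**ln(N) (my reading of the lnN conjecture quoted above).
# Work in log10 so the enormous exponents never overflow.

N = 1000                                         # residues
log10_space = N * math.log10(20)                 # log10 of 20**N
log10_structures = math.log(N) * math.log10(20)  # log10 of 20**ln(N)

print(f"sequence space size: ~1e{log10_space:.0f}")       # ~1e1301
print(f"useful structures:   ~1e{log10_structures:.0f}")  # ~1e9
print(f"structure-to-space ratio: ~1e-{log10_space - log10_structures:.0f}")
```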

All we really have left then is your argument that these exponentially rarer and rarer beneficial sequences are somehow all lined up in a nice neat little row. Is this a testable claim or not? If not, your claim simply isn’t scientific. If it is testable, what are the results of the tests? What is the best evidence that pertains to this hypothesis of yours? Is it likely to be truly representative of any aspect of sequence space? – or not?

In answer to this question, consider the work of Babajide et al. (1997), where the following observation is made regarding stable protein and RNA sequences:

“The sequences folding into a common structure are distributed randomly in sequence space. No clustering is visible.”

While this paper was admittedly based on very short, low-level sequences, it provides a good idea as to what higher levels of sequence space look like. The extrapolation is a very reasonable one that can be tested in a potentially falsifiable manner – a testable position which has been continually verified and has yet to be falsified (increasing its predictive value). Upon what, then, do you base your hypothesis that the line-up of closely spaced steppingstones you envision remotely represents reality anywhere in sequence space, at any level of functional complexity – past, present, or future? It seems to me that you’re hiding behind the unknown, hopeful that someday someone will find some evidence to support your vision of what reality needs to be in order for your hypothesis to be true. I’m sorry, but that’s just wishful thinking, not science. The evidence currently in hand strongly counters your imagined scenario.

You absolutely insist on discussing this at a highly abstract level. But for actual biologists this is not an abstract question. They do not apply natural selection as some vague principle in their work. Instead they do the hard work of studying actual complex systems, and in every case their findings are the same. They find that once the systems are well understood, and once similar systems in other organisms are studied and understood, plausible gradualistic scenarios inevitably appear. Those scenarios are hardly the end of the story, however, as I explained to you during our radio debate. Once you think you have a good scenario for how something evolved, that scenario can be used to generate testable hypotheses. Subsequent testing of these hypotheses then leads to new knowledge. This type of reasoning has been applied so frequently and so successfully that, if it is fundamentally flawed, we must conclude scientists are getting mighty lucky.

Do you have even one example of what you call a “plausible gradualistic scenario” beyond very low levels of functional complexity? Take, for example, the scenario proposed for flagellar evolution by Nicholas J. Matzke in his 2003 paper, “Evolution in (Brownian) space: a model for the origin of the bacterial flagellum.” In this paper Matzke lists what appear to him to be fairly closely spaced steppingstones, each of which would be beneficially selectable in most environments, along a pathway from simple to much more complex – i.e., the fully functional flagellar motility system. It looks great on paper! The steps certainly seem “plausible”. The only problem, of course, is that none of Matzke’s proposed steppingstones are actually close enough to each other, in sequence space, for random mutations to get across what initially seems like a fairly small gap (i.e., a series of non-selectable required mutational changes to get from one steppingstone to the next) this side of a practical eternity of time. And, in fact, there are no laboratory demonstrations of the crossing of any of Matzke’s proposed gaps – not a single one. You’d think that if Matzke’s proposed steppingstones were really as close together in sequence space as he suggests, a real-life demonstration of the crossing of at least one of his proposed gaps wouldn’t be much of a problem. The problem, of course, is that there is no such demonstration, because his proposed steppingstones are far too far apart in sequence space to be reached from a lower-level steppingstone this side of a practical eternity of time. Yet this is about as good as it gets in the literature as far as any attempt to produce a truly plausible story of evolvability.

http://www.detectingdesign.com/flagellum.html

So, if you have anything better, I’d love to see it…

The reason scientists routinely find plausible gradualistic scenarios is that these complex systems all carry clear evidence of their evolutionary past. We are not talking about “design flaws” in some abstract sense and we are not trying to psychoanalyze some hypothesized creative supermind. Instead we are talking about structures that are hard to understand from the standpoint of what a human engineer would do, but are easy to understand once the history of the structure is taken into consideration. Not one or two examples, but every complex structure studied to date. Apparently it amused the designer to create in a way that perfectly mimics what we would expect if these systems were actually produced gradually by natural selection.

Again, one does not judge design or non-design based on supposed design flaws or a nested hierarchical pattern or any other such pattern or sequence that supposedly can only be produced by mindless mechanisms. All of these features can be and have been produced by human designers for various systems and for various reasons. I’m sorry, but appeals to design flaws and other such patterns simply don’t explain how your proposed mechanism could reasonably have done the job – especially in the light of very clear factors that strongly suggest that it simply cannot move around in sequence space as you imagine.

In short, despite your claims to the contrary, your entire theory is based on what you think an intelligent designer would or would not do (which is very subjective since intelligent designers can do and often do all kinds of things for all kinds of reasons). Your position simply is not based upon evidence for what your mechanism can actually do. I’m sorry, but arguing what an intelligent designer wouldn’t do, in your estimation, is just not a scientific argument when it comes to determining the creative potential and/or limitations of your proposed mechanism.

There’s so much more, of course. In some cases, like the mammalian inner ear, we have strong evidence from paleontology and embryology to show how a complex structure evolved gradually. Likewise for molecular evolution where, in cases like anti-freeze proteins in fish, we have strong evidence for how the proteins evolved from simpler precursors. I could point also to the success of game theoretic models in ethology. In every case scientists are approaching their work with theoretical models based on an assumption of natural selection, and they get results. This consistent success, again, is mighty coincidental if the theory is just fundamentally flawed.

Again, antifreeze proteins are not very complex. They are very simple, requiring a minimum of no more than a few dozen specifically arranged residues – the same as the similar examples in your book.

As far as your story of the evolution of the mammalian inner ear, it is indeed a lovely story, but it says nothing as far as how your proposed mechanism could have done the job. It just shows a series of what appear to you to be gradual enough modifications, and you simply wave your hand and claim, without any other support, that your mechanism could easily produce such changes. Really? Where is your description of the mutations that would be required, on a genetic level, to get from one selectable steppingstone to the next in your proposed pathway?

Again, what seems to you to be easily explained at an anatomical level is not so easily explained once you actually start to look at the number of non-selectable genetic changes that would be required. It’s much like computer programming, where apparently simple changes to the function and/or appearance of a program can require fairly significant changes to the underlying code.

Yes, the evidence is circumstantial, since this process plainly takes too long to be observed in toto. That’s not biologists’ fault. Nor is it their fault that the metaphor of sequence space, useful in many contexts, is not so useful for drawing grand conclusions about the viability of evolution.

You yourself draw grand conclusions about the viability of evolution given a little bit of circumstantial evidence that is almost entirely based on what you think an intelligent designer would or would not do. Based on these assumptions, you make grand conclusions about the potential for evolutionary progress via a mechanism that you really don’t understand beyond what you know must be true if your theory is to remain viable. Therefore, you dream up a picture of sequence space which is completely contrary to what is currently known about sequence space. You’re willing to argue that sequence space must somehow have these very closely spaced steppingstones all lined up in nice little rows, and that these neat little rows are not at all affected by the exponential decline in the ratio of beneficial vs. non-beneficial sequences. You make these claims, not based on some superior understanding of the nature of sequence space, but based on your claimed ignorance of the actual nature of sequence space.

I’m sorry, but that isn’t a scientific position when it comes to a useful understanding of the evolutionary mechanism. Science isn’t about what is possible, but what is probable. If you don’t understand sequence space, you don’t understand your mechanism. And, if you don’t understand your mechanism, you really don’t have a scientific basis for the creative potential you ascribe to it.

I’m genuinely surprised to see a mathematician with an interest in biological evolution produce this common, but mistaken, argument. It’s like saying that one shouldn’t be surprised if Arnold Schwarzenegger happens to win the California Lottery 10 times in a row. After all, unlikely events happen all the time! – Sean Pitman

Oh please. Since I plainly discuss this point in my next paragraph, you have a lot of nerve cutting me off where you did. The whole question at issue is whether the endpoints of evolution are like getting 500 heads on 500 tosses of a coin, or whether they are more like firing an arrow into a wall and then painting a target wherever it lands. You claim it is the former; more sensible people claim it is the latter. And that is why the end result of any probability calculation you carried out would be irrelevant.

I’m sorry, I must have misunderstood your argument. Even after reading your entire argument several times, it seemed to me like you were trying to argue that rare events happen all the time, so it doesn’t matter if the odds are not favorable to your position.

In any case, your scenario is still very much misguided. You do realize that the sequences in sequence space are pre-defined as being beneficial or non-beneficial? Beneficial “targets” cannot be “painted later”, after the randomly shot arrow hits the wall in just any location. If the arrow lands on a non-beneficial sequence, no one can claim that the sequence is in fact beneficial. The sequence is what it is. Therefore, it is perfectly reasonable to argue that the odds of actually hitting a novel beneficial target are extremely low and get exponentially worse at higher and higher levels of functional complexity – worse than the odds of getting 500 heads in a row at relatively low levels of functional complexity (still at the level of small subcellular machines). Yet you reject the implications of this statistical problem and argue that I’m painting targets after the arrow hits the wall? How can you possibly suggest such a thing when nature defines the targets ahead of time, not me?

But the issue wasn’t merely coming up with a definition of functional complexity. It was doing so in a manner that is in any way relevant for determining what natural selection can do with eons in which to work. Show me in concrete terms how the definitions you’ve produced here permit a calculation that shows natural selection to be ineffective, and then I will be impressed. This is precisely what William Dembski attempted to do, but his work was so shot through with false assumptions and vague premises that it did not amount to much.

It’s not just calculations; it is observations and statistical extrapolations based on those empirical observations of the real world. This isn’t just a mathematical world we’re talking about here. These are real-world observations and mathematical extrapolations based on those observations – i.e., a real scientific theory.

In any case, as already noted, defining levels of functional complexity is easy. It’s been published as well. For example, Hazen et al. (2007) define functional complexity as follows:

1. n, the number of letters in the sequence.
2. Ex, the degree of function x of that sequence. In the case of the fire example cited above, Ex might represent the probability that a local fire department will understand and respond to the message (a value that might, in principle, be measured through statistical studies of the responses of many fire departments). Therefore, Ex is a measure (in this case from 0 to 1) of the effectiveness of the message in invoking a response.
3. M(Ex), the total number of different letter sequences that will achieve the desired function – in this case, the threshold degree of response, Ex. The functional information, I(Ex), for a system that achieves a degree of function ≥ Ex for sequences of exactly n letters is therefore

I(Ex) = −log2[ M(Ex) / C^n ]   (where C = the number of possible characters per position)

What is also interesting is that Hazen et al. go on to note that, “In every system, the fraction of configurations, F(Ex), capable of achieving a specified degree of function will generally decrease with increasing Ex.” And, according to their own formulas, this decrease is exponential with each linear increase in n – the minimum number of “letters” or characters (in this case amino acid residues) required by the system to achieve the beneficial function in question.
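As a quick sanity check on this point, the formula can be evaluated directly. The sketch below uses hypothetical numbers, chosen only to show the trend: holding M(Ex) fixed while n grows, I(Ex) rises linearly – which is just another way of saying that the functional fraction F(Ex) = M(Ex)/C^n shrinks exponentially.

```python
import math

# Direct evaluation of the Hazen et al. (2007) measure quoted above:
#   I(Ex) = -log2( M(Ex) / C**n )
# computed in log space so C**n never overflows. The value of M(Ex)
# below is hypothetical, chosen only to illustrate the trend.

def functional_information(M_Ex: float, C: int, n: int) -> float:
    """Bits of functional information for M(Ex) functional sequences
    out of C**n possible length-n sequences over a C-character alphabet."""
    return n * math.log2(C) - math.log2(M_Ex)

# Holding M(Ex) fixed at 1e9 while n grows: I(Ex) rises linearly,
# i.e. the functional fraction F(Ex) = M(Ex)/C**n falls exponentially.
for n in (100, 500, 1000):
    print(f"n = {n:>4}: I(Ex) ~ {functional_information(1e9, 20, n):,.0f} bits")
```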

So, yet again, the basic concept of levels of functional complexity is well defined in the literature. It isn’t that science can’t define the concept or that the basic concept is difficult to understand, contrary to what you seemed to suggest in your book and during our debate. The only real question is whether the potentially beneficial target islands are closely spaced and lined up in a nice little line across sequence space as you imagine – as must be the case if the claims of evolutionists for the creative power of RM/NS are actually “plausible”.

Given all that is currently known, through empirical observations, about sequence space and how beneficial islands are actually arranged in sequence space, your imagined scenario simply isn’t tenable. And, there is no evidence for why it might ever have been tenable – outside of intelligent design. There simply is no empirical evidence that sequence/structure space remotely resembles your description of it.

Beyond this, the hypothesis that sequence space at various levels of functional complexity has beneficial islands scattered about in a random, roughly uniform manner is a testable hypothesis with predictive value. This hypothesis can therefore be compared to your hypothesis to see which one produces the better results. The scenario I describe can be used to predict an exponential decline in evolution with each linear increase in the level of functional complexity under consideration. Your hypothesis, in comparison, predicts no such decline in evolutionary potential whatsoever. In fact, according to your view of sequence space, evolution should proceed at higher levels of functional complexity at pretty much the same rate as it does at lower levels. Of course, this simply isn’t what happens. Lower-level functions that require no more than a few hundred specifically arranged characters (or amino acid residues for protein-based sequence space) evolve commonly and rapidly in fairly small populations with fairly slow reproductive rates. However, even given very large populations with very rapid reproductive rates, short generation times, and high mutation rates, nothing evolves beyond the level of systems that require a minimum of at least 1000 specifically arranged amino acid residues. It just doesn’t happen – as predicted by my view of sequence space, not yours.

Therefore, your view of sequence space has effectively no predictive power. You cannot predict how often your mechanism will succeed in finding something qualitatively new at a given level of functional complexity within a given span of time. My model, on the other hand, can predict how often success can be expected at a given level of functional complexity within a given span of time. That is why my view of sequence space carries far more scientific predictive value compared to your view.

Your further remarks in this paragraph strike me as very confused. Science makes rather a lot of claims that do not depend on probability in any way, so I don’t know where you came up with this idea that probability is the most important thing there is. And since unlikely things occur all the time, I don’t see why I have to show that an event was likely to occur before I can conclude that it happened. Moreover, showing that something is likely or unlikely rarely involves performing an actual probability calculation. Usually you just follow the evidence where it leads, and if it points strongly to the conclusion that something happened then that’s good enough. Abstract probability calculations are irrelevant in most cases.

Science is dependent upon predictive value among competing hypotheses. One must therefore be able to demonstrate that the favored hypothesis actually has greater predictive value than the competing or opposing hypothesis, and that the predictions of the hypothesis have not been effectively falsified by various potentially falsifying tests. In other words, it must be shown that a given hypothesis has a greater probability of predicting future observations than the opposing hypothesis when put to the test. If you cannot do this – if you cannot quantify the degree to which your hypothesis has greater predictive value than the opposing hypothesis, if your hypothesis cannot be effectively falsified, even in theory – then you simply don’t have a scientific position. What you have is a just-so story.

Anyway, I do appreciate that you took the time to respond to my comments. Believe me, I know the time it takes, as I have very little time for such things myself. All the best to you and I hope to hear from you again in the future…

Sean

Sean Pitman Also Commented

Dr. Jason Rosenhouse “Among the Creationists”
I have no fear, thanks to God and His mercy, and no one is free of bias – not even you. You’ve simply traded one religion for another. It is still possible that your current bias blinds you to what would otherwise be obvious.


Dr. Jason Rosenhouse “Among the Creationists”

No, I think science would have discredited them if their ideas were not supported by observation and experimentation.

Exactly, so why not at least try to do the same for my ideas, which are quite easily falsifiable?

I know, you can’t do it yourself, but you’re quite sure that if I publish my ideas in a mainstream science journal, someone out there will know how to shoot my theory all to shreds. Right? This sounds like a no-brainer! Why not just publish my ideas and test them against the big boys? It must be that I’m afraid of getting shot down, and that’s why I don’t publish… Don’t you think?

I guess that’s why I went on live radio to debate Jason Rosenhouse? – because I was afraid that he’d show me how silly my ideas are on public radio? – how the Darwinian mechanism is so clearly capable of creating all kinds of things regardless of their level of functional complexity? If I were so afraid of getting smashed to pieces by some of these Darwinian big shots, why take such public risks – even in their own blogs and public forums? Why not just hide out in my own little ghetto?

Come on now. You have to know that I’d love to be able to publish my ideas on the statistical limits to the Darwinian mechanism in a science journal like Nature or Science or any mainstream science journal. I really would. The problem, as I’ve already explained, is that no one is going to publish, in any mainstream science journal, any argument for intelligent design or creative intelligence (even if the intelligence were a “natural” intelligence like some kind of intelligent alien life form) as the origin of various kinds of biological machines. It just doesn’t happen these days without someone getting fired over it. So, the next best thing is to take the argument directly to them and challenge them in their own blogs, on the radio, and on television, etc. There’s nothing else I can do. My hands are tied.

In any case, do let me know when you’re willing to reasonably define what it would take for you to recognize a phenomenon as a true “miracle” – or when you’re able to present something, anything, that explains how the Darwinian mechanism of RM/NS can actually work beyond very low levels of functional complexity.

Until then, what are you really contributing here? What are you trying to say? – that you don’t know but someone else probably does? That you’re skeptical about everything and nothing could possibly convince you of the existence of God or any other designer of life? – not even if you were to personally witness some of the most fantastic miracles described in the Bible? Good luck with that… but you’re just fooling yourself in your efforts never to be tricked by anything. You’re missing out on a great deal that life has to offer.

Still, I wish you all the best.


Dr. Jason Rosenhouse “Among the Creationists”
All the best to you… yet again 😉

