Reviewing Darwin’s Doubt Chapter 9

© Can Stock Photo Inc. / kgtoh

I am deep into the book Darwin’s Doubt by Stephen C. Meyer and chronicling my way through it. The title of the book comes from the problem that the Cambrian explosion posed, and still poses, to evolutionary theory. The first article explained the problem the fossil record presents. The second article explored, and discarded, some possible solutions to that problem. The third article began to look to genes for possible solutions, and that sets the stage for this article.

The origin of the animals that appeared suddenly in the Cambrian period necessarily required vast amounts of new functional information. Where did it come from, and how did it arise? The discovery that DNA is an information-storing and information-building mechanism seemed to offer great hope for a solution, but that is not the story the history of exploring this solution tells. In fact, the study of DNA has only accentuated the problem.

If we consider DNA to be like books or computers, it is easy to understand the problem. Meyer does an exceptional job unraveling the problem in detail using words and word sequences to show just how difficult it would be for random changes to produce meaningful (functional) results.

Keep in mind that mutations work on already functional material. If a mutation produces nonfunctional material, natural selection works to eliminate it (weed it out). Thus, only mutations that produce functional material survive, and a chain of mutations must go from function to function to produce any significant change. That is a tall order for explaining evolutionary change of any kind, let alone the prolific and sudden change we see in the Cambrian explosion, or the vast “evolutionary tree” that Darwin envisioned.

Most of the postulation and analysis has been done on a very basic, simple level, but Meyer focuses on the forest rather than the trees. Murray Eden’s observation of “combinatorial inflation” highlights the issue. To understand combinatorial inflation, we must consider what DNA is. DNA, simply put, is detailed information appearing in strands, and the information in the DNA affects function. Prior analysis focused solely on the information, ignoring its sequence, but Eden observed that the sequence is just as important to function as the information itself.

As with any computer program, the characters that comprise the coding and the sequence of the characters must all align to create a functional program. Both the characters and the sequence of the characters combine to create function. If any of the characters are missing or any characters are in the wrong sequence, the function is lost.  DNA operates the same way.

It is not enough, then, simply to get all the characters “right”; all the characters must be present and in the right sequence to create function. Natural selection would weed out any change that loses function, leading to a dead end. The greater the number of character combinations required to create function, the more likely a change in the characters will result in a loss of function. Add to that equation the necessity of correct sequence (in addition to the correct characters), and the combinations grow exponentially. This is the problem of combinatorial inflation.
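To get a feel for how fast these combinations grow, here is a minimal Python sketch using the 20-letter amino-acid alphabet discussed in the text. The sequence lengths chosen are illustrative, not figures from the book:

```python
# Illustrating "combinatorial inflation": the number of possible
# sequences over a 20-character alphabet (the 20 standard amino acids)
# grows exponentially with sequence length.
for length in (10, 50, 100, 300):
    combos = 20 ** length
    digits = len(str(combos))  # rough order of magnitude
    print(f"length {length:3d}: 20^{length} has {digits} digits")
```

Even at a length of 10, the count already has 14 digits; by length 300 it has nearly 400.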

Eden and others have applied mathematics to demonstrate the effect of combinatorial inflation. At a 1966 conference at the Wistar Institute in Philadelphia, Eden told his fellow participants that “random changes to written texts or selections of digital code would inevitably degrade the function of the information-bearing sequences, particularly when allowed to accumulate.” (p. 171) If someone makes even a few random changes in the arrangement of the digital characters in a computer program, “we find we have no chance (i.e., less than 1/10^1000) even to see what the modified program would compute: it just jams.” (quoted on pp. 171-172) He observed that evolutionary theory presents the same problem.

“And that was the problem…: random mutations must do the work of composing new genetic information, yet the sheer number of possible nucleotide base or amino-acid combinations … associated with a single gene or protein of even modest length rendered the possibility of random assembly prohibitively small. For every sequence of amino acids that generates a functional protein, there are a myriad of other combinations that don’t. As the length of the protein grows, the number of possible amino-acid combinations mushrooms exponentially. As this happens, the probability of ever stumbling by random mutation onto a functional sequence rapidly diminishes.” (p. 173)

Meyer details the problem the Wistar participants reviewed at page 175 of the book and summarizes as follows:

“[A]n average-length protein represents just one possible sequence among an astronomically large number – 20^300, or 10^390 – of possible amino-acid sequences of that length. Putting these numbers in perspective, there are only 10^65 atoms in our Milky Way and 10^80 elementary particles in the known universe.”

These are the possible sequences of just one average-length protein, the vast majority of which are nonfunctional and, thus, dead ends. The more we learn about DNA, the more complex we see it to be, and as our understanding of that complexity grows, evolutionary change becomes ever more unlikely. The more complex DNA is, the more combinations are possible; the more combinations are possible, the greater the likelihood that changes will result in a loss of function. Functional loss gets weeded out by natural selection. The greater the number of combinations, the more functional loss results and the fewer gains in function are predicted. Combinatorial inflation, in fact, predicts a degradation of function over time.
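The numbers Meyer quotes can be checked with a few lines of Python. This is a sketch of the arithmetic only; the protein length of 300 and the scale comparisons come from the text:

```python
import math

# Checking the arithmetic quoted from Meyer: an average-length protein
# of ~300 amino acids defines a sequence space of 20^300 possibilities.
protein_length = 300
alphabet_size = 20  # the standard amino acids

# Express 20^300 as a power of ten: 300 * log10(20) ≈ 390
exponent = protein_length * math.log10(alphabet_size)
print(f"20^300 ≈ 10^{exponent:.0f}")  # ≈ 10^390, as the text states

# The scale comparisons quoted in the text:
atoms_in_milky_way = 10 ** 65
particles_in_universe = 10 ** 80
print(20 ** protein_length > particles_in_universe)  # True
```

The sequence space for a single average-length protein dwarfs both comparison figures.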

Even if degradation of function is not the predicted outcome, Eden concluded that there is not enough time available for the evolutionary process to work through the possible combinations for just one protein, and there is, therefore, virtually no chance of mutations creating new genetic information. Eden likened the possibility to creating a library of a thousand volumes by making random changes to a single phrase with the following instructions: “Begin with a meaningful phrase [without questioning where it came from], retype it with a few mistakes, make it longer by adding letters [at random], and rearrange subsequences in the string of letters; then examine the result to see if the new phrase is meaningful. [Toss it out if it is not meaningful.] Repeat the process until the library is complete.” (Quoted at p. 176)

Marcel-Paul Schutzenberger, another attendee at the Wistar conference, pictured “a computer ‘mutating’ at random the text of Hamlet either by individual-letter substitutions or by duplicating, swapping, inverting, or recombining whole sections of text” and asked the question: “Would such a computer simulation have a realistic chance of generating a completely different and equally informative text such as, say, The Blind Watchmaker by Richard Dawkins, even granting multiple millions of undirected mutational iterations?” Schutzenberger did not think so.

Stanislaw Ulam, a physicist who attended the conference, observed: “[The evolutionary process] seems to require many thousands, perhaps millions, of successive mutations to produce even the easiest complexities we see in life now. It appears, naively at least, that no matter how large the probability of a single mutation is, should it be even as great as one-half, you would get this probability raised to a millionth power, which is so very close to zero that the chances of such a chain seem to be practically non-existent.” (Quoted at p. 177)

Meyer adds, “Geneticist Michael Denton has shown that in English meaningful words and sentences are extremely rare among the set of possible combinations of letters of a given length, and they become proportionally rarer as sequence length grows. The ratio of meaningful 12-letter words to 12-letter sequences is 1/10^14; the ratio of meaningful 100-letter sentences to possible 100-letter strings has been estimated as 1/10^100.” Denton’s analysis explains why “random letter substitutions invariably degrade meaning in English text after only a few changes and why the same thing might be true in the genetic text.” (p. 178)
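Denton’s 12-letter ratio can be set against the size of the underlying letter space with a short sketch. The implied count of meaningful words is my own back-of-the-envelope inference from the quoted ratio, not a figure from the book:

```python
import math

# Total 12-letter strings over the 26-letter English alphabet:
total = 26 ** 12
print(f"26^12 ≈ 10^{math.log10(total):.1f}")  # ≈ 10^17

# Denton's quoted ratio of meaningful 12-letter words to all
# 12-letter strings is 1/10^14, which would imply roughly
# 10^17 / 10^14 = 10^3 meaningful words of that length.
implied_meaningful = total // 10 ** 14
print(implied_meaningful)  # on the order of a thousand
```

A ratio of one in 10^14 is consistent with English having on the order of a thousand meaningful 12-letter words among roughly 10^17 possible strings.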

Robert Sauer, an MIT biologist, determined the ratio of functional to nonfunctional amino-acid sequences to be about 1 to 10^63 for a relatively short protein 92 amino acids in length. Hubert Yockey, in earlier experiments, had similarly determined the ratio of functional to nonfunctional sequences in certain proteins to be 1 to 10^90. Meyer described the probability of attaining a correct sequence by random search as “roughly equal [to] the probability of a blind spaceman finding a single marked atom by chance among all the atoms in the Milky Way – on its face clearly not a likely outcome.” (p. 183)

While much of the scientific analysis and discovery in the Neo-Darwinian era focuses on the trees, Meyer observes that mathematical analysis of the forest exposes the seeming implausibility of evolutionary theory as an explanation for the origins of life. In subsequent articles we will follow the journey deeper into the forest, into the trees, where we find more of the same problems encountered from a bird’s-eye view.
