Mathematics and the origin-of-life problem

Introduction

 

What does mathematics have to do with Darwinian macroevolution and, more generally, with the origin of biological complexity? According to the theory of evolution, macroevolution is an unguided, unintelligent, natural process by which a single common ancestor gave rise to all plant and animal species, living or extinct. Darwinists believe that macroevolution is possible through random mutations and natural selection alone. Biological organisms are highly complex, organized and ordered systems, and complexity, organization and order require information. In an unguided, unintelligent, natural process, information does not increase by itself. Information is not free: it needs an intelligent source, and Darwinian macroevolution lacks any such source. IDers (supporters of Intelligent Design, ID) and creationists claim that the origin of life and the origin of species require an information source. The former call this source the "Designer", the latter call it "God", but both share the conviction that an information source is necessary. They reject Darwinian macroevolution because it does not recognize this logical and physical necessity.

 

Why are the works of great mathematicians such as K. Gödel, A. Turing, G. Chaitin and J. von Neumann, in a sense, "friends" of the Intelligent Design movement and at the same time "enemies" of Darwinism, the biological theory according to which life and species arose without any need for intelligence? As stated above, biological complexity, organization and order require information of the highest degree. The works of Gödel, Turing, Chaitin and von Neumann all deal, from one angle or another, with the mathematical theory of information. Information is therefore the link between mathematics and biology, and certain mathematical truths and results can illuminate biology, in particular the origin of biological complexity and, specifically, the origin-of-life problem. Roughly speaking, biologists divide into two groups: design skeptics (Darwinists) and intelligent design theorists (ID supporters and creationists). The former claim that life arose without any intelligent agency; the latter claim that life arose thanks to intelligent agency.

 

Gödel's work in metamathematics, Turing's ideas in computability theory, Chaitin's results in algorithmic information theory (AIT) and von Neumann's research in computer science are friendly to ID because they all express a universal truth: more does not come from less; a lower thing cannot cause a higher thing; causes exceed their effects; intelligence stands above, and its results below.

 

Gödel proved that, in general, a complete mathematical theory cannot be derived entirely from a finite number of axioms. Mathematics is too rich to be derived from a limited number of propositions (what mathematicians call a "formal system"). In particular, even arithmetic is too rich to be reducible to a finite set of axioms: whatever we can derive from a finite formal system is necessarily incomplete.

 

Turing proved that, in general, there are functions that cannot be computed by any algorithm. In other words, there are problems that cannot be solved simply by means of a set of instructions. The "halting problem", for example, is incomputable: no mechanical procedure can exist that tells us, for every computer program, whether it will halt after a finite number of steps. In general, information is too rich to be derived from a limited number of instructions.

 

Chaitin saw the relation between Gödel's results and Turing's: Gödel's incompleteness and Turing's incomputability are two aspects of the same problem, and Chaitin expressed that problem in yet another way. In algorithmic information theory one defines the algorithmic complexity H(x) of a bit string x as the length, in bits, of the shortest computer program able to output it. When H(x) is nearly equal to the length of x, the string x is said to be "incompressible" or "irreducibly complex" (IC); in other words, it contains non-minimizable information. Expressed in this terminology, Gödel's and Turing's theorems prove that information is in general incompressible; in particular, a Turing machine (a specialized computer) is an incompressible system. The AIT definition of complexity can be related to the concepts of ID theory: the algorithmic information content H(x) is related to "complex specified information" (CSI), and the incompressibility of algorithmic information theory is related to the concept of "irreducible complexity".

 

Von Neumann, in turn, developed the foundations of computer science and studied mathematically the architectures capable of processing information. His computational model is still the main model used by our computers. What is more, he studied the problem of self-reproduction and developed formal mathematical models of self-reproducing automata, and he did so before biologists had investigated the molecular structure of living automata (i.e. cells).

 

In the origin-of-life problem the inputs are matter, energy, natural laws and randomness. Evolutionists believe these inputs are sufficient to obtain a living cell without any need for intelligence. Natural laws are a set of rules; ID theorists believe these laws are intelligently designed, and moreover that the universe is fine-tuned for life. Randomness is the simplest rule of all: a blind choice among atoms. If the evolutionists were right then, in AIT terminology, the algorithmic complexity of the cell would be compressible: life would have reducible information content.

 

Why is such a hypothesis absurd? A cell is a hierarchical architecture in which a controller manages symbolic data to carry out its self-surviving, self-reproducing and self-repairing biological functions. An organism is a hierarchically organized system composed of many cells: tissues are organized sets of cells, organs are organized sets of tissues, and organ systems are organized sets of organs. This giant hierarchy of biological architecture involves, beyond the self-operations just listed, the ability to grow. Notice that these four self-operations sharply distinguish biological systems from artificial ones. The presence of molecular machines inside the cell (which implement some sort of computational model) processing the DNA digital code is sufficient to prove that the cell's information content is not compressible. As such, the cell cannot derive simply from unguided natural laws and randomness acting on atoms. A spontaneous origin of life from the aforesaid inputs alone is therefore impossible: intelligence is needed.

 

At the end of his interesting article "Biological function and the genetic code are interdependent", A. Voie[1] rightly and definitively writes: "The structure of life has probability zero". That is to say, a spontaneous origin of life is an impossibility. The Gödel of biology has not yet arrived: one of the goals of the ID movement in this century is to develop, starting from an abstract physical-chemical model of the problem, a formal proof that an unintelligent origin of life is mathematically impossible.

 

The origin-of-life problem

 

But let us start from the beginning and try to explain the issues more simply. The origin-of-life problem has fascinated people since ancient times. Before modern evolutionism few thought that life arose by means of undirected, random natural processes; they reasoned that only a powerful cause could have produced life.

 

The origin-of-life problem can be examined from different points of view, yet from all these perspectives, without exception, we reach the same conclusion: evolutionary explanations of the origin of life and the origin of species are unsound. That conclusion is logical and coherent: if a thing is false, it is false from every point of view. Life cannot be born by chance, by simple natural laws, or by a combination of the two. The complexity of the living world is so huge that a simple natural explanation based on unguided spontaneous forces is impossible. But how can we make clear to evolutionists that, as the philosopher and mathematician William Dembski says, there is "no free lunch, specified complexity cannot be purchased without intelligence"[2]?

 

Here we will try to tackle the problem of proving, logically and mathematically, that the origin of life cannot be explained without recourse to an intelligent design paradigm. To do this we will have to start from a point very far from biology; nevertheless, the argument will eventually lead to a biological conclusion. We will have to deal with some fundamental scientific results obtained during the twentieth century, and with certain strange, apparently paradoxical mathematical discoveries that are milestones in the contemporary understanding of the origin-of-life problem.

 

Gödel and metamathematics

 

In 1900 the German mathematician D. Hilbert proposed the so-called total axiomatization of mathematics by means of a finite number of formal systems. At that time virtually all mathematicians agreed this task was possible. The branch of logic that studies the possibility or impossibility of such mathematical results was named "metamathematics". Hilbert's goal was to derive all mathematical truths from a few simple axioms and a few simple inference rules, and neither he nor, in practice, any other mathematician in the world doubted that such a program could be achieved.

 

There was enormous astonishment in the community of mathematicians when, in 1931, K. Gödel demonstrated his famous "incompleteness theorem". The Gödel proof states that a consistent and complete axiomatic formal system for arithmetic (and a fortiori for all of mathematics) cannot exist; nor can there exist an algorithm for determining whether an arbitrary mathematical assertion is true or false. In short, Gödel demonstrated that mathematics has irreducible complexity: mathematics is indefinitely rich. To extend mathematical knowledge one has to add new axioms again and again, because no theory derived from a finite number of axioms can be complete. A complete theory of mathematics cannot be derived entirely from a few simple axioms. "Incompleteness" means that the formal system contains unprovable propositions. This is a very paradoxical result, or better, it is paradoxical for rationalistic-enlightened thought based on positivism and scientism. If, unlike the reductionists, we understand that at the Beginning there is Infinity (containing all possibilities), then Gödel's results are not paradoxical, because we have correctly put more before less. Denying Infinity and pretending that more derives from less leads to absurdities.

 

Turing and computability theory

 

A few years after Gödel, Turing's research in computer science produced results confirming Gödel's "revolution". Turing studied computability theory and asked whether every problem can be solved by an algorithm. If the answer were affirmative, a universal Turing machine (UTM), the conceptual prototype of a computer, could find the solution and any problem could be mechanically resolved. But Turing showed that non-computable functions do exist. The "halting problem", for example, cannot be computed: there cannot exist a mechanical procedure able to tell us whether an arbitrary computer program will halt after a finite number of steps. Thus computer science reflects what happens in metamathematics; in short, informatics contains irreducible complexity. The theoretical limits of computability entail fundamental limitations on so-called "Artificial Intelligence" and set ultimate limits on any type of computation. Of course there are even deeper objections to Artificial Intelligence stemming from traditional metaphysics.
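
To see Turing's argument concretely, here is a minimal Python sketch of the classical diagonal construction; the function halts() is purely hypothetical (the whole point is that it cannot exist), and the names are our own illustration, not part of the original article.

def halts(program, data):
    # Hypothetical halting oracle: assumed for the sake of argument only.
    # Turing's theorem says no such function can actually be written.
    raise NotImplementedError("no general halting decider exists")

def paradox(program):
    # Feed a program its own text and do the opposite of what the oracle predicts.
    if halts(program, program):
        while True:      # oracle says "halts" -> loop forever
            pass
    return "halted"      # oracle says "loops" -> halt immediately

# Does paradox(paradox) halt? If halts(paradox, paradox) returned True,
# paradox would loop forever; if it returned False, paradox would halt.
# Either answer contradicts the oracle, so halts() is impossible.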

 

Chaitin and algorithmic information theory (AIT)

 

In the 1960s G. Chaitin (and, independently of him, the Russian mathematician A. Kolmogorov) developed a new branch of computer science dealing with mathematical complexity. His algorithmic information theory deals with concepts and methods relevant to what we are discussing here. From the beginning Chaitin perceived a strong correspondence between the findings of Gödel and Turing: he saw that Gödel's incompleteness and Turing's incomputability are two tips of the same iceberg. In algorithmic information theory one defines the algorithmic complexity H(x) of a bit string x as the length of the shortest computer program able to output it; the program size is also measured in bits, because we can always consider programs as written in machine code. When H(x) is nearly equal to the length of x, the string x is said to be "incompressible" or "irreducibly complex" (IC); in other words, it contains non-minimizable information. In AIT terminology, then, Gödel's theorem proves that mathematical information is in general incompressible. Analogously, the information needed to solve Turing's "halting problem" is incompressible by any algorithm, because there is no general computable solution to the problem: no computability implies no compressibility of information.

 

Let's look at some simple examples. According to algorithmic information theory the irrational number pi (3.14...) is not irreducibly complex, because there exists a simple algorithm for producing all its digits. Analogously, the string "010101..." (containing "01" repeated n times) is compressible, because a simple algorithm such as "print 01 n times" can output it for any number n. On the other hand, a bit string obtained by flipping a fair coin is irreducibly complex, because no program shorter than the string itself can output it. Algorithmic information theory is thus closely related to the concepts of Intelligent Design Theory (IDT): the algorithmic information content H(x) relates to Dembski's "complex specified information", and the incompressibility of algorithmic information theory is tied to Behe's "irreducible complexity" concept.
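
As a rough illustration (not a computation of H(x), which is uncomputable), a general-purpose compressor such as Python's zlib gives an upper bound on how far a string can be squeezed; the patterned string collapses, while a coin-flip-like string does not. This is only a sketch under that assumption.

import random
import zlib

regular = b"01" * 10000                                      # "0101...", highly patterned
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(20000))   # stands in for fair coin flips

for name, s in (("patterned", regular), ("random", noisy)):
    shrunk = zlib.compress(s, 9)
    print(name, len(s), "->", len(shrunk), "bytes")

# Typically the patterned string compresses to a few dozen bytes, while the
# random string stays close to its original 20000 bytes: it is incompressible
# in practice, just as AIT says it should be.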

 

Following earlier work by R. J. Solomonoff, Chaitin noted that an axiomatic theory or formal system (like those considered by Hilbert) can be thought of as a computer program. To achieve its goal this program must contain the axioms, the grammar, the inference rules and an algorithm for generating all the theorems of the theory; in a finite number of steps this algorithm should be able to prove whether any proposition of the formal system is true or false. But from the unsolvability of the "halting problem" proved by Turing we know that such an algorithm cannot exist, which proves that our axiomatic theory is not complete. Here Turing's results perfectly confirm Gödel's incompleteness theorem.

 

Algorithmic information theory has some remarkable epistemological consequences. According to algorithmic information theory, any scientific theory can be considered a computer program. Chaitin said his theory is capable of interesting applications in the field of physics too: "As physicists have become more and more interested in complex systems, the notion of algorithm has become increasingly important, together with the idea that what physical systems actually do is computation. In other words, due to complex systems, physicists have begun to consider the notion of algorithm as physics. And the Universal Turing Machine now begins to emerge as a fundamental physical concept, not just a mathematical concept. [...] It is sometimes useful to think of physical systems as performing algorithms and the entire universe as a single giant computer"[3].

 

In a scientific theory modeled as a computer program, the inputs I are the observations of past events and the outputs O are the predictions of future events. Outputs are derived from the inputs by means of natural laws. Natural laws are simply the instructions of something we might call "natural software", or the "software of the universe", i.e. the above computer program; laws, rules, specifications and instructions are all quite similar from this point of view. We can say that the simpler the computer program, the better the theory. For this theory-program p one can likewise define the algorithmic complexity H(p). If H(p) is almost equal to the size of I, we are faced with a useless theory: in practice it tells us nothing more than the input observation data already tell us.
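
As a toy contrast (the free-fall numbers below are invented purely for illustration), a "law-like" theory is a short program that both reproduces the observations and extrapolates beyond them, whereas a lookup table the size of the data predicts nothing new; this is a sketch of the idea, not a formal measure of H(p).

# Invented observations: distance fallen after t seconds (idealized free fall).
observations = {t: 9.81 * t * t / 2 for t in range(100)}

def law_theory(t):
    # A short rule that compresses all the observations into one line.
    return 9.81 * t * t / 2

lookup_theory = dict(observations)   # a "theory" as large as the data itself

# Both account for the recorded past equally well...
assert all(abs(law_theory(t) - d) < 1e-9 for t, d in observations.items())

# ...but only the short program says anything about an unobserved event.
print(law_theory(250))               # a genuine prediction
print(lookup_theory.get(250))        # None: the table is as silent as the raw data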

 

This epistemological application of AIT is important for the biological questions we wish to examine. Think, for example, of the efforts of contemporary physicists to discover a so-called Theory Of Everything (TOE), able to explain all natural phenomena starting from a finite number of mathematically described physical-chemical laws. A scientific theory is in general a mathematical theory, an axiomatic formal system. For this reason the applications of Gödel's incompleteness theorem and Turing's limits of computability affect all scientific theories too, and as a consequence the "kernel" of the physical world has a fundamental irreducible complexity.

 

The works of Gödel, Turing, Chaitin and von Neumann reveal the fundamental limits of reasoning. Some have said that these results represent the fall of Platonistic conceptions of mathematics. This conclusion is entirely wrong, because it misunderstands the real meaning of Platonism; the truth is exactly the reverse. Platonism claims that men can discover only what already belongs to an infinite supernatural realm. Plato never said that all truth could be derived from a few simple truths, or that Infinity can derive from the finite. What was destroyed in 1931 was rather the positivistic, utopian and reductionist conviction that reason may get more from less. Reason cannot derive Infinity from the finite: that is the moral we can draw from the theorems about information irreducibility. These are only particular cases of a more general conclusion: total truth cannot be axiomatized. We cannot close the Unlimited into a limited system; the Total Possibility, i.e. the Infinite, is not reducible to a system. In every field one can find the effects of this universal truth: Gödel found them in metamathematics, Turing in computability, Chaitin in algorithmic information theory. In the following sections we will see how these insights apply to the fundamental problem of biology, the origin of life.

 

Von Neumann and the concept of software

 

In the 1950s von Neumann worked on the development of computer science, using the concept of software and its applications. Thanks to him we have a very general model that can represent a wide range of phenomena. This model is based entirely on the notion of software, a notion fundamental to informatics and very useful in complexity theory as well. "Software" is a word able to represent, in general, anything that processes information. Information processing is a process in which some information, called the "input", enters and some information, called the "output", is generated; in the middle, what does the job is called the "software". Schematically we can represent it this simple way: input -> software -> output. Because the concept of software is so general, the software model can represent many phenomena at different scales, from a simple mathematical calculation to the entire universe.

 

Here are some very elementary examples. Arithmetical example: the inputs are the numbers two and three; the software is the addition operation; the output is the number five. Physical example: the inputs are a body of mass M and a force of value F acting on it; the software is Newton's law F = M*A; the output is an acceleration of value A = F/M. Chemical example: the inputs are two molecules of hydrogen and one molecule of oxygen; the software is the law of chemical combination; the output is two molecules of water (H2O).
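
For what it is worth, the three elementary examples can be written directly in the input -> software -> output scheme; a minimal Python sketch in which each piece of "software" is just a function (the function names are ours, chosen for illustration).

def addition(a, b):                    # arithmetical software
    return a + b

def newton_second_law(force, mass):    # physical software: A = F / M
    return force / mass

def make_water(h2_molecules, o2_molecules):
    # chemical software: 2 H2 + O2 -> 2 H2O
    reactions = min(h2_molecules // 2, o2_molecules)
    return 2 * reactions               # molecules of water produced

print(addition(2, 3))                  # -> 5
print(newton_second_law(10.0, 2.0))    # -> 5.0, the acceleration A
print(make_water(2, 1))                # -> 2 molecules of H2O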

 

Here are some more complex examples. Metamathematical example: the inputs are a finite set of axioms of a mathematical theory; the software is a set of inference rules; the output is all the theorems derivable from the axioms. (By the way, Gödel's results state that these derivable theorems are not all the theorems of the theory.) Cosmological example: the inputs are a set C of cosmological events and bodies; the software contains all the natural laws and constants; the outputs are all events and consequences derived from C. (If the set C contains all the events and bodies of the universe, then the model might represent the entire universe.) Epistemological example: the inputs are a set D of experimental data (observations of past events); the software is a scientific theory for D; the outputs are predictions of future events.

 

Assuming this general software model, the origin-of-life problem can be expressed as follows: the inputs are atoms and energy (the so-called primordial "soup"); the software is all the physical and chemical natural laws plus randomness; the output is life. Let's express it this way: (atoms + energy) -> (natural laws + randomness) -> life [yes/no?]. Materialistic evolutionary theory answers "yes" to this question; intelligent design theory answers "no". Those who deny an unguided origin of life say the software must contain much more information than natural laws can supply. Moreover, they say this additional information is not reducible, i.e. there is no algorithm smaller than the information itself for generating it; such information can derive only from an intelligent agent. According to IDT the formula becomes (atoms + energy) -> (intelligence + natural laws) -> life. Intelligence is the source of the information missing from the materialistic evolutionary model; in other words, intelligence provides the incompressible software needed to do the job of making living cells.

 

Cellular complexity

 

The origin-of-life problem is related to the rise of complex living beings. But what type of complexity is involved? There are many sorts of complexity. From the amoeba to man, organisms are made of cells; the cell is the very basis of life, so a hypothetical common ancestor must have had at least one cell. The cell contains hereditary and developmental information stored in genes, which are contained in DNA macromolecules. Some design skeptics claim that the patterns in DNA do not constitute information per se. They would be right if DNA were only a stand-alone string of nucleotides (a pattern of ATCG). But DNA is only a part of a whole, of a greater information-processing system. As the biologists J. T. Trevors and D. L. Abel rightly say: "Genes represent programming. These algorithms are written in a pre-existent operating system environment. [...] We must not only find models for specific genetic programming, but for the genetic operating system"[4]. It is the whole system (storage + processor + operating system + code) that processes information (programs). DNA is only a pattern, but when that pattern is fed into the cell machinery (which decodes and runs it) we get a functioning cell that carries out all its biological functions. The software-model diagram of the cell could be something like this: (matter + energy) -> cellular software -> life. Here the term "life" is shorthand for all the biological functions carried out by the cell. Of course the cellular software, like the artificial kind, is designed.

 

For some design skeptics even DNA carries no information at all; DNA has no meaning for them. They claim DNA is only a random pattern of ATCG symbols generated by chance. Just as a mouse walking across a written document might think "no information here, only ink upon paper all around", so the design skeptic looking into the cell says "no information here, only molecules interacting with one another". Let's consider a simple example. Consider this sequence of hexadecimal numbers:

 

77 68 69 6C 65 20 28 29 20 7B 0D 0A 09 73 79 73 74 65 6D 20 28 22 70 69 6E 67 20 31 30 2E 31 30 2E 33 31 2E 32 35 30 22 29 3B 09 23 20 6C 61 62 36 0D 0A 09 73 6C 65 70 20 31 35 3B 09 0D 0A 7D 0D 0A 0D 0A 31 3B

 

Certainly it has no meaning for you. But fed into a specific computer, that machine decodes the string and runs its instructions, because the string really is a program written for that computer. So we can say it does have meaning and does convey information. The meaning and information content of any pattern depend on the context and on the system into which it is introduced. For example, DNA strings (converted into binary) do not function on an artificial computer, nor does binary code taken from a computer function when converted into DNA code. Why? Because the processors, operating systems, instruction sets and codes are different. To sum up, when the design skeptic says "there is no information in the DNA; DNA is only a random pattern", he is wrong because his reductionism does not allow him to consider the whole system.
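
The claim is easy to check: decoding the hexadecimal bytes above as ASCII (a small Python sketch; one should of course not actually run unknown code) yields a few lines of Perl-style source, a loop that calls system("ping 10.10.31.250") and then pauses, followed by "1;". Meaningless as a bare pattern, it is an executable program in the right environment.

hex_string = (
    "77 68 69 6C 65 20 28 29 20 7B 0D 0A 09 73 79 73 74 65 6D 20 28 22 70 "
    "69 6E 67 20 31 30 2E 31 30 2E 33 31 2E 32 35 30 22 29 3B 09 23 20 6C "
    "61 62 36 0D 0A 09 73 6C 65 70 20 31 35 3B 09 0D 0A 7D 0D 0A 0D 0A 31 3B"
)

# Turn each two-digit hex value into a byte and read the result as ASCII text.
text = bytes(int(byte, 16) for byte in hex_string.split()).decode("ascii")
print(text)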

Moreover, when design skeptics say that "DNA is only a random pattern" they are forgetting that DNA, and in general all the information contained in the cell, "works", "organizes" and "orders", i.e. drives and controls complex cellular functions. A random pattern cannot contribute to the organization and order of a system; conversely, the organizing and ordering effects of DNA manifest its information content.

 

A system that processes information (as computers and, a fortiori, cells do) is not reducible to a mere pattern, as we have shown above. A cell is not merely DNA: a cell is an integrated system that oversees and manages DNA patterns. Low-probability methods can be applied only to patterns, since a pattern can be generated by natural laws and by chance. We cannot apply low-probability, reductionist methods to conceptual, abstract, integrated systems such as computers and cells, because these systems are able to process patterns and give them meaning; they possess a higher level of abstraction unreachable by chance and natural laws. For these reasons of principle the probability that unguided evolution could create a living cell from atoms is not merely low but zero.

 

The scientific trend is to discover ever more information in cells. One of the reasons for the rise of evolutionism is that evolutionists were not interested in information theory and information theorists were not interested in biology. The great mathematician J. von Neumann explained how a functioning computer architecture should be organized; engineers followed his suggestions and created the ancestors of our modern computers. Moreover, von Neumann explained how self-reproducing automata should be built, proving mathematically that self-reproduction requires stored instructions (i.e. software). Within a few years biologists had "opened" the cell and found the cell machinery functioning exactly that way. Von Neumann, the engineers and the biologists were, in a sense, all ID oriented.

Suppose, as a limiting hypothesis, that molecular biology had shown in detail that the cell is a specialized, super-advanced bio-computer (that is one of the goals of molecular biology in the twenty-first century). The design skeptic would say: "artificial computers are designed but bio-computers aren't". Why would he be wrong? Why can't a computer be produced by natural causes? Because a computer is not merely a complex pattern; it is an agent that uses ordered patterns (containing many instructions). If a computer were only a complex pattern, the probability of creating it by natural causes would be vanishingly low; but the odds of natural causes creating a device that produces complex patterns are zero.

A computer has a hierarchical architecture: the central processing unit with the operating system (CPU + OS) controls everything around it - memory, inputs and outputs, events, processes, devices, services, applications, interrupts and much more. Natural causes and randomness cannot produce such a "controller" because they cannot create an integrated hierarchy (composed of a controller and the things it controls). In order to create a hierarchy you have to overarch it: you have to stand at a higher level than what you dominate. In Scholasticism there was the motto "nihil agit se ipsum". Natural causes and randomness cannot create hierarchical architectures (like computers and cells) because they stand at the very bottom level. Only a designer can design a computer, because he stands at a level above the controller and at a yet higher level with respect to what the controller controls. Since a cell is an agent controlling and using stored patterns/instructions (DNA), and since the cell self-reproduces, self-repairs and self-survives, a cell cannot be produced by "chance and necessity". That is not a matter of low probability; it is a matter of impossibility.

 

Design skeptics believe that a "self-propagating protein population, a population perhaps subject to some kind of selection pressure, may be within the reach of nature". But a "self-propagating protein population" is far from being a "self-reproducing automaton" such as a biological cell. Evolutionists cannot explain the origin of life with self-propagating proteins. Von Neumann was the first mathematician to supply an algorithmic model of a self-reproducing automaton (see "Theory of Self-Reproducing Automata"). Roughly speaking, according to von Neumann, to achieve this goal it is necessary to solve four problems: (1) to collect information in a memory; (2) to duplicate that information; (3) to implement an automatic factory (a "universal constructor") which, following the memory's instructions, can construct the other components and then duplicate itself; (4) to manage all these functions by means of a control unit. No single protein can meet these requirements; only a complex integrated system such as the cell can do so.
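
Von Neumann's central insight, that a self-reproducer needs a passive stored description plus machinery that both copies and interprets it, can be glimpsed in miniature in a quine, a program that prints its own source. The two lines of Python below are only a toy illustration of that "description + constructor" idea, vastly simpler than his universal constructor (and than any cell).

# The string s plays the role of the stored description; the print statement
# plays the role of the constructor that interprets and copies it. Run on
# their own, these two lines output an exact copy of themselves.
s = 's = %r\nprint(s %% s)'
print(s % s)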

 

The remarkable thing is that von Neumann was able to construct a successful mathematical model of a self-reproducing automaton without knowing that cells do the very same thing in nature. Moreover, he told biologists that a cell had to implement a system isomorphic to such an automaton before biologists themselves discovered such mechanisms in the cell. His far-seeing forecast was realized a few years later with the discovery of the structure of DNA and of the extremely complex molecular processes of DNA transcription and translation. Thus the Darwinists' claim that IDT is a "science stopper" is dead wrong. In fact von Neumann's work suggests that: (a) von Neumann's self-reproducing architecture is an intelligent design; (b) his forecast can therefore be considered an ante litteram example of an ID-theory prediction; (c) on that occasion IDT was indeed the opposite of a science stopper.

 

Computer science considers many computational models; von Neumann's is only one of many. How many computational models are used in the information processing we find in biology? That is a difficult question, but alternative models of computation exist. Whatever computational model they use, biological systems necessarily process information, the sort of thing that must be designed. They are so huge and complex that we have to consider them at different hierarchical levels, and it is likely that biological systems use several models of computation depending on the level considered. For example, at the global architectural level we know the brain uses the neural-network model of computation; it is likely that functional programming (another computational paradigm) is used at some other level; and we would not exclude that the von Neumann model of computation is used at some lower level by the machinery of the cell. Perhaps other computational paradigms are used elsewhere in biological systems. But all models of computation entail information processing and computers to run them. Perhaps those who doubt that a von Neumann pattern of computation exists in the cell wish to prohibit an informatics viewpoint in biology. Unfortunately for evolutionary biologists, information processing in biological systems is a reality, and this undeniable reality points towards intelligent design. (By the way, this also explains why many software programmers who learned the "fact" of evolution at school are nonetheless likely to be sympathetic to Intelligent Design.)

 

Life is irreducibly complex

 

What do evolutionists really assert when they affirm that life arose by means of chance and necessity only? They mean that life has a natural source only, i.e. that organic structures derive spontaneously from inorganic matter. In short, they assert that physical-chemical laws are sufficient to obtain a first living ancestor of all living forms, and that from sparse atoms the complexity of biological structures gradually increased until it reached the present complexity of higher organisms. But in AIT terminology that means the algorithmic complexity of an organism is compressible, and indeed very small: in other words, life would have a highly reducible information content.

 

Regarding the existence of information in the universe, the design skeptic has only two choices: (1) the information is built into the structure of the universe and has always existed; (2) the information does not exist as such - for example, the DNA sequence is patterned, but there is no information there. So the design skeptic chooses either an "eternal information" hypothesis or an information-does-not-exist hypothesis. The former, information hardwired into the universe, would entail a designer, because only a mind can hardwire information into a system; this would turn the design skeptic into a non-materialist. Moreover, by saying that "all is information" or "information is embedded in the universe", the design skeptic is saying "all is design": to rebut design he advances a hypothesis that entails design, and so, much to his chagrin, he ultimately becomes a supporter of Intelligent Design. The latter choice (2) is contrary to the scientific evidence: DNA contains the instructions for all the functions of the living cell, and instructions are information. Moreover the design skeptic contradicts himself when he says "information does not exist", because his statement is itself information. To sum up, both of the design skeptic's arguments are untenable.

 

Evolutionary biologists, trying to prove the spontaneous rise of life from inorganic matter, construct models of the primordial environment in which they assume life arose. However, their simulations are typically overly reductionist and so constrained by deliberate design that they do not represent the actual randomness of a true natural environment. A model that accurately represented the multitude of chaotic factors present in any primordial environment is not merely beyond the abilities of present computer technology; it is beyond any conceivable system for modeling reality. Because models are necessarily finite, they run up against the limitations of any formal system and collide with the boundary of computability. Reality and the cosmos (which includes life) are irreducibly complex, because they cannot be derived or calculated from a few rules or laws. If life could be derived from a set of natural laws - the axioms of a formal system - then it would not be irreducibly complex. But rules and laws alone are insufficient to generate the complex specified information that characterizes life; only the intelligent use of rules and laws, together with an immense injection of complex specified information, can create life. Thus, by its obstinate irreducibility, life subjects itself to the mathematical limits established by Gödel, Turing and Chaitin.

 

It might seem odd that the origin-of-life problem is so closely related to basic mathematical and informational propositions. When materialist biologists speak of a "natural origin" of life, they are perhaps unaware that their claim contains the germ of its own rebuttal. Laws mean instructions, models and mathematical systems, and systems carry within themselves the limits of axiomatization: incompleteness, incomputability, irreducible complexity.

 

By showing that it is impossible to axiomatize all of mathematics, Gödel proved that mathematics contains irreducible information (information not compressible into a finite set of axioms). By proving that not every problem can be computed in a finite number of steps, Turing showed that his models of information likewise contain irreducible information. When organisms are modeled mathematically, it becomes clear that they too contain irreducible information.

 

In this document we have tried to approach the origin-of-life problem from a perspective other than the usual one based on the unlikelihood that randomness would provide the necessary events. In a sense we have attempted to clarify the problem from a higher point of view, one based on logical principles and mathematical truths. Perhaps this is a less intuitive approach, but the results obtained are more certain than those obtained by low-probability approaches, which never grant absolute certainty about their conclusions; the "principles approach" does.

 

The natural-origins approach implies the reducibility and compressibility of the information contained in biological organisms. But, as we have seen, that is mathematically impossible. By this reasoning we have proven the impossibility of a spontaneous origin of life: life cannot be derived from chance and necessity alone, because the complex specified information and the irreducible complexity that suffuse life reveal intentional design.

 

Intelligent Design Theory, in denying the materialistic natural explanation of life's origin, agrees perfectly with algorithmic information theory and with the important results of Gödel, Turing, Chaitin and von Neumann. Like almost all mathematicians, these men had a Platonistic conception of mathematics, and like all Platonists they held that the fine tuning of the universe and the huge richness of life (which have more complexity and more information) could not derive from simple laws and chance (which have less complexity and less information). Hence mathematics, applied to biology, proves the impossibility of a spontaneous origin of life. We can conclude that IDT is perfectly coherent with logic, mathematics and information theory; Darwinism, on the contrary, conflicts with them all.

 



[1] A. Voie, Biological function and the genetic code are interdependent.

[2] William A. Dembski, No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence.

[3] Gregory J. Chaitin, Meta-Mathematics and the foundations of Mathematics, http://www.cs.auckland.ac.nz/CDMTCS/chaitin/italy.html

[4] J.T. Trevors, D.L. Abel, Chance and necessity do not explain the origin of life, http://progettocosmo.altervista.org/index.php?option=content&task=view&id=51&catid=34&Itemid=53