What did the first organism eat?

Ummm, the prevailing opinion is that RNA came before DNA.

What experiment? I hope you do not reference the imperfect Miller–Urey.

I’m extremely skeptical of this claim. And it seems the question is predicated on it.


I mean… let’s figure this out right now. Name one thing alive that doesn’t eat. …uh…what…uh…

Please define “eat”.

It’s tough to eat when you don’t have a mouth.

Shut up man… seriously? Eating is the absorbing of a substance, whether that be chemically-based substances broken down by mitochondria or radiation absorbed by chloroplasts and converted into energy by a process called photosynthesis. Both require DNA, which needed to be able to duplicate itself, a process that could only happen if the second law of thermodynamics were read backwards and entropy were inverted. I’ve already said this. Try going to the library. A fifth grader understands these principles or they don’t get to go to the sixth grade. Wooooooowwwww

Well, I happen to be familiar with the 2nd law. Why don’t you show your work and give us a peek at some of your entropy calculations?

It seems you are saying that two copies of an organism (or at least of its DNA) have less entropy than one copy (I think that is what you are saying?). If that is an accurate description, perhaps you could show us how you came to that conclusion.

The entropy argument was addressed long ago. There is no violation of the Second Law of Thermodynamics involved here.

Organic life dissipates heat better than any inorganic matter. Therefore, in order to dissipate heat and reach the lowest level of entropy, an interim step involving more complexity is needed.

The classic example used to explain how entropy passes through complex forms on the way to simpler ones is mixing cream into coffee.

Both ends of that process are very low entropy, but to get from one to the other, it passes through a period of higher complexity.

Nature tends to do whatever it takes to reach lower entropy, even increasing entropy temporarily to lower it in the end. If a planet can keep its entropy lower by harboring organic life, assuming it has the right materials and conditions, it will just make life all by itself. It naturally follows that the meaning of life is to dissipate heat.

We are in a period of complexity, but most physicists agree that the universe will end in low entropy, either via expanding to the point of “Heat Death” or gravity winning and it all just contracting back to a singularity.

The complexity will not last, so enjoy life while it’s around, because nothing material is eternal.

That only goes for isolated systems. A living organism is not an isolated system, since it receives energy from the outside (sunlight, heat, chemicals), converts these into energy it can use to do whatever makes the organism tick, and then gets rid of the waste (heat radiation, chemicals with lower utility as an energy source). In other words, organisms take low-entropy energy as input and expel high-entropy energy as output, so there is an energy flow through the organism.

Likewise, the ecosystem the organism(s) live in is not isolated, as it receives energy input from the sun, from geothermal heat, from an influx of chemicals that can be utilized, and so on, and expels heat radiation and waste chemicals.

And the Earth is not an isolated thermodynamic system, as it receives energy from the sun and emits infrared heat radiation.

So you see, the argument from the second law of thermodynamics that creationists love to use is a distortion of the actual science involved. In other words, it is a straw man argument. And it is wrong. And this has been explained to creationists time and again, but they are seemingly incapable of understanding it.

Oh dear, someone has resurrected yet another version of the tiresome creationist bullshit known as “evolution violates the Second Law of Thermodynamics”.

Er, no it doesn’t. As you would have known if you had paid attention in science class. Indeed, there are scientific papers in circulation that will educate you on this topic, if you bother to read them. This will be a two-part post covering the relevant scientific papers. Welcome to Part 1.

Creationists in particular have a totally fuckwitted understanding of what the Second Law of Thermodynamics actually says. Moreover, when Rudolf Clausius formulated the 2LT in the 19th century, he was careful to be specific about the nature of thermodynamic systems, and classified them into three groups:

[1] Isolated systems are systems that engage in no exchange of matter or energy with their surroundings. Such systems are therefore reliant upon the internal energy that they already possess. However, isolated systems constitute an idealisation that is almost never achieved in practice, and are mostly useful as a starting point for developing thermodynamic theory prior to extending it to the other classes of system.

[2] Closed systems are systems that engage in exchange of energy with the surroundings, but no exchange of matter. A good example of a closed system would be a solar panel, which does not exchange matter with its surroundings, but which, when illuminated, is a net recipient of energy in the form of visible light, which it then converts to electricity, which we can use.

[3] Open systems are systems that engage in exchange of both energy and matter with the surroundings. Living organisms plainly fall into this latter category.

When Rudolf Clausius erected his original statement of the Second Law of Thermodynamics, he stated it thus:

[b]In an isolated system, a process can only occur if it increases the total entropy of the system.[/b]

The trouble with the 2LT is that it applies to all of these systems, but the exact manner in which it applies differs between the three classes of system. Clausius’ original statement about the application of the 2LT to an isolated system does not apply to the other classes of system in anything like the same manner. The trouble is, creationists alight upon the statement about entropy increasing, which Clausius erected to describe isolated systems, and assume it applies to all systems in the same manner, when Clausius himself plainly stated that it doesn’t.

In a non-isolated system, if there is an energy input, that energy input can be harnessed to perform useful work, such as locally decreasing the entropy of entities within the system in exchange for a greater increase in entropy beyond those entities. As long as there exists inhomogeneity within the universe, i.e., there exist regions of differing conditions with respect to material content, energy flux, etc., any net recipient of energy from an outside energy source can harness that energy to perform useful work, including work that results in a temporary local decrease of entropy. The Earth constitutes such a system, because it is engaging in both matter and energy transfer with its surroundings, and is in fact a large net recipient of energy from the surroundings. See that yellow thing in the sky? It’s called The Sun. It’s a vast nuclear fusion reactor 866,000 miles across that is irradiating the Earth with massive amounts of energy as I type this. Energy that can be harnessed to perform useful work, such as constructing living organisms.

Incidentally, as a tangential diversion, the classical formulation has again required revision to take account of more recent developments with respect to observed phenomena, which is why we now have a scientific discipline called Quantum Thermodynamics … a discipline contributed to by, among others, Stephen Hawking, whose landmark paper on the radiative nature of black holes brought them into compliance with the Second Law of Thermodynamics. I don’t recall him ruling out evolution as a result of this.

Another common fallacy is the wholly non-rigorous association of entropy with “disorder”, however this is defined. This has been known by physicists to be non-rigorous for decades, because there exist numerous documented instances of systems whose entropy increases when they spontaneously self-assemble into ordered structures as a result of electrostatic forces. Lipid bilayers, found throughout the biosphere, are an important example of this.

The following scientific paper is apposite here:

Gentle Force Of Entropy Bridges Disciplines by David Kestenbaum, Science, 279: 1849 (20th March 1998)

Normally, entropy is a force of disorder rather than organization. [b]But physicists have recently explored the ways in which an increase in entropy in one part of a system can force another part into greater order. The findings have rekindled speculation that living cells might take advantage of this little-known trick of physics[/b].

Phospholipids being an excellent example thereof. In fact, any chemical system in which electrostatic forces can act upon aggregating or reacting molecules can exhibit this phenomenon. Which is why scientists have long since abandoned the notion that “entropy” equals “disorder”; entropy requires a thorough statistical mechanical treatment in terms of microstates in any case.

This is applied to the physics and physical chemistry of lipid bilayers in the following paper:

Electrostatic Repulsion Of Positively Charged Vesicles And Negatively Charged Objects by Helim Aranda-Espinoza, Yi Chen, Nily Dan, T. C. Lubensky, Philip Nelson, Laurence Ramos and D. A. Weitz, Science, 285: 394-397 (16th July 1999)

in which the authors calculated that the entropy of the lipid bilayer system increased when it arranged itself spontaneously into an ordered structure in accordance with the laws of electrostatics.

Entropy, as rigorously defined, has units of Joules per Kelvin, and is therefore a function of energy versus thermodynamic temperature. The simple fact of the matter is that if the thermodynamic temperature increases, then the total entropy of a given system decreases if no additional energy was input into the system in order to provide the increase in thermodynamic temperature. Star formation is an excellent example of this, because the thermodynamic temperature at the core of a gas cloud increases as the cloud coalesces under gravity. All that is required to increase the core temperature to the point where nuclear fusion is initiated is sufficient mass. No external energy is added to the system. Consequently, the entropy at the core decreases due to the influence of gravity driving up the thermodynamic temperature. Yet the highly compressed gas in the core is hardly “ordered”.
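
To see that arithmetic in action, here is a minimal Python sketch using the ideal-gas entropy change ΔS = nC_V ln(T2/T1) + nR ln(V2/V1). The hundredfold heating and millionfold compression are invented illustrative numbers, not astrophysical data, and a real collapsing cloud also radiates, which this toy ignores:

```python
import math

# Ideal-gas entropy bookkeeping: dS = n*Cv*ln(T2/T1) + n*R*ln(V2/V1).
# Illustrative numbers only; a real collapsing cloud also radiates,
# exporting entropy to its surroundings, which this sketch leaves out.

R_GAS = 8.314       # J/(mol K), gas constant
CV = 1.5 * R_GAS    # molar heat capacity of a monatomic ideal gas

def delta_s(n_mol, t1, t2, v1, v2):
    """Entropy change of n_mol moles taken from (t1, v1) to (t2, v2)."""
    return n_mol * CV * math.log(t2 / t1) + n_mol * R_GAS * math.log(v2 / v1)

# One mole heated 100-fold while compressed a million-fold, roughly the
# fate of gas swallowed by the core of a coalescing cloud:
print(f"dS = {delta_s(1.0, 10.0, 1000.0, 1.0, 1e-6):+.1f} J/K")  # about -57 J/K
```

The parcel’s entropy drops even as its temperature soars, which is exactly the point being made above.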

More to the point, there are scientific papers in existence establishing that evolution is perfectly consistent with the 2LT. Among the important papers:

Entropy And Evolution by Daniel F. Styer, American Journal of Physics, 76(11): 1031-1033 (November 2008) DOI: 10.1119/1.2973046

Natural Selection As A Physical Principle by Alfred J. Lotka, Proceedings of the National Academy of Sciences of the USA, 8: 151-154 (1922)

Evolution Of Biological Complexity by Christoph Adami, Charles Ofria and Travis C. Collier, Proceedings of the National Academy of Sciences of the USA, 97(9): 4463-4468 (25th April 2000)

Order From Disorder: The Thermodynamics Of Complexity In Biology by Eric D. Schneider and James J. Kay, in Michael P. Murphy and Luke A. J. O’Neill (eds), What is Life: The Next Fifty Years. Reflections on the Future of Biology, Cambridge University Press, pp. 161-172

Natural Selection For Least Action by Ville R. I. Kaila and Arto Annila, Proceedings of the Royal Society of London Series A, 464: 3055-3070 (22nd July 2008)

Evolution And The Second Law Of Thermodynamics by Emory F. Bunn, arXiv:0903.4603v1 (26th March 2009)

Let’s take a look at some of these, shall we?

First of all, we have this:

In a paper presented concurrently with this, the principle of natural selection, or of the survival of the fittest (persistence of stable forms), is employed as an instrument for drawing certain conclusions regarding the energetics of a system in evolution.

Aside from such interest as attaches to the conclusions reached, the method itself of the argument presents a feature that deserves special note. The principle of natural selection reveals itself as capable of yielding information which the first and second laws of thermodynamics are not competent to furnish.

The two fundamental laws of thermodynamics are, of course, insufficient to determine the course of events in a physical system. They tell us that certain things cannot happen, but they do not tell us what does happen.

In the freedom which is thus left, certain writers have seen the opportunity for the interference of life and consciousness in the history of a physical system. So W. Ostwald observes that “the organism utilizes, in manyfold ways, the freedom of choice among reaction velocities, through the influence of catalytic substances, to satisfy advantageously its energy requirements.” Sir Oliver Lodge, also, has drawn attention to the guidance exercised by life and mind upon physical events, within the limits imposed by the requirements of available energy. H. Guilleminot sees the influence of life upon physical systems in the substitution of guidance by choice in place of fortuitous happenings, where Carnot’s principle leaves the course of events indeterminate. As to this, it may be objected that the attribute of fortuitousness is not an objective quality of a given event. It is the expression of our subjective ignorance, our lack of complete information, or else our deliberate ignoring of some of the factors that actually do determine the course of events. Admitting, however, broadly, the directing influence of life upon the world’s events, within the limits imposed by the Mayer-Joule and the Carnot-Clausius principles, it would be an error to suppose that the faculty of guidance which the established laws of thermodynamics thus leave open, is a peculiar prerogative of living organisms. If these laws do not fully define the course of events, this does not necessarily mean that this course, in nature, is actually indeterminate, and requires, or even allows, some extra-physical influence to decide happenings. It merely means that the laws, as formulated, take account of certain factors only, leaving others out of consideration; and that the data thus furnished are insufficient to yield an unambiguous answer to our enquiry regarding the course of events in a physical system. Whether life is present or not, something more than the first and second laws of thermodynamics is required to predict the course of events. And, whether life is present or not, something definite does happen, the course of events is determinate, though not in terms of the first and second laws alone. The “freedom” of which living organisms avail themselves under the laws of thermodynamics is not a freedom in fact, but a spurious freedom arising out of an incomplete statement of the physical laws applicable to the case. The strength of Carnot’s principle is also its weakness: it holds true independently of the particular mechanism or configuration of the energy transformer (engine) to which it is applied; but, for that very reason it is also incompetent to yield essential information regarding the influence of mechanism upon the course of events. In the ideal case of a reversible heat engine the efficiency is independent of the mechanism. Real phenomena are irreversible; and, in particular, trigger action, which plays so important a role in life processes, is a typically irreversible process, the release of available energy from a “false” equilibrium. Here mechanism is all-important. To deal with problems presented in these cases requires new methods, requires the introduction, into the argument, of new principles. And a principle competent to extend our systematic knowledge in this field seems to be found in the principle of natural selection, the principle of the survival of the fittest, or, to speak in terms freed from biological implications, the principle of the persistence of stable forms.

For the battle array of organic evolution is presented to our view as an assembly of armies of energy transformers: accumulators (plants) and engines (animals); armies composed of multitudes of similar units, the individual organisms. The similarity of the units invites statistical treatment, the development of a statistical mechanics of which the units shall be, not simple material particles in ordinary reversible collision of the type familiar in the kinetic theory, collisions in which action and reaction were equal; the units in the new statistical mechanics will be energy transformers subject to irreversible collisions of peculiar type: collisions in which trigger action is a dominant feature.

So, even as far back as 1922, scientists were arguing that evolution is not in violation of the Second law of Thermodynamics. Interesting revelation, yes?

Lotka continues with this:

In systems evolving toward a true equilibrium (such as thermally and mechanically isolated systems, or the isothermal systems of physical chemistry), the first and second laws of thermodynamics suffice to determine at any rate the end state; this is, for example, independent of the amount of any purely catalytic substance that may be present. The first and the second law here themselves function as the laws of selection and evolution, as has been recognized by Perrin and others, and exemplified in some detail by the writer, for the case of a monomolecular reversible reaction. [b]But in systems receiving a steady supply of available energy (such as the earth illuminated by the sun), and evolving, not toward a true equilibrium, but (probably) toward a stationary state, the laws of thermodynamics are no longer sufficient to determine the end state; a catalyst, in general, does affect the final steady state. Here selection may operate not only among components taking part in transformations, but also upon catalysts, in particular upon auto-catalytic or auto-catakinetic constituents of the system. Such auto-catakinetic constituents are the living organisms, and to them, therefore, the principles here discussed apply.[/b]

Now this, as I’ve just stated, was written as far back as 1922, which means that scientists have been aware that thermodynamic laws and evolution are not in conflict for eighty-seven years.

Moving on, let’s look at the more recent papers. Let’s look first at the abstract of the Adami et al paper:

To make a case for or against a trend in the evolution of complexity in biological evolution, [b]complexity needs to be both rigorously defined and measurable. A recent information-theoretic (but intuitively evident) definition identifies genomic complexity with the amount of information a sequence stores about its environment. We investigate the evolution of genomic complexity in populations of digital organisms and monitor in detail the evolutionary transitions that increase complexity. We show that, because natural selection forces genomes to behave as a natural ‘‘Maxwell Demon,’’ within a fixed environment, genomic complexity is forced to increase[/b].

Oh look. A point I’ve been arguing for a long time here, namely that a rigorous definition of complexity is needed in order to be able to make precise categorical statements about complexity. I also note with interest that the authors of this paper perform detailed experiments via simulation in order to establish that complexity can arise from simple systems (the behaviour of the Verhulst Equation I’ve mentioned here frequently establishes this, and indeed, the investigation of the Verhulst Equation and similar dynamical systems is now the subject of its own branch of applied mathematics).

The authors open their paper thus:

Darwinian evolution is a simple yet powerful process that requires only a population of reproducing organisms in which each offspring has the potential for a heritable variation from its parent. This principle governs evolution in the natural world, and has gracefully produced organisms of vast complexity. Still, whether or not complexity increases through evolution has become a contentious issue. Gould (1), for example, argues that any recognizable trend can be explained by the ‘‘drunkard’s walk’’ model, where ‘‘progress’’ is due simply to a fixed boundary condition. McShea (2) investigates trends in the evolution of certain types of structural and functional complexity, and finds some evidence of a trend but nothing conclusive. In fact, he concludes that ‘‘something may be increasing. But is it complexity?’’ Bennett (3), on the other hand, resolves the issue by fiat, defining complexity as ‘‘that which increases when self-organizing systems organize themselves.’’ [b]Of course, to address this issue, complexity needs to be both defined and measurable[/b].

In this paper, we skirt the issue of structural and functional complexity by examining genomic complexity. It is tempting to believe that genomic complexity is mirrored in functional complexity and vice versa. Such an hypothesis, however, hinges upon both the aforementioned ambiguous definition of complexity and the obvious difficulty of matching genes with function. Several developments allow us to bring a new perspective to this old problem. On the one hand, genomic complexity can be defined in a consistent information-theoretic manner [the ‘‘physical’’ complexity (4)], which appears to encompass intuitive notions of complexity used in the analysis of genomic structure and organization (5). On the other hand, it has been shown that evolution can be observed in an artificial medium (6, 7), providing a unique glimpse at universal aspects of the evolutionary process in a computational world. In this system, the symbolic sequences subject to evolution are computer programs that have the ability to self-replicate via the execution of their own code. In this respect, they are computational analogs of catalytically active RNA sequences that serve as the templates of their own reproduction. In populations of such sequences that adapt to their world (inside of a computer’s memory), noisy self-replication coupled with finite resources and an information-rich environment leads to a growth in sequence length as the digital organisms incorporate more and more information about their environment into their genome. Evolution in an information-poor landscape, on the contrary, leads to selection for replication only, and a shrinking genome size as in the experiments of Spiegelman and colleagues (8). These populations allow us to observe the growth of physical complexity explicitly, and also to distinguish distinct evolutionary pressures acting on the genome and analyze them in a mathematical framework.

Moving on, the authors directly address a favourite canard of creationists (though they do not state explicitly that they are doing this), namely that information somehow constitutes a “non-physical” entity. Here’s what the authors have to say on this subject:

Information Theory and Complexity. Using information theory to understand evolution and the information content of the sequences it gives rise to is not a new undertaking. Unfortunately, many of the earlier attempts (e.g., refs. 12–14) confuse the picture more than clarifying it, often clouded by misguided notions of the concept of information (15). An (at times amusing) attempt to make sense of these misunderstandings is ref. 16. [b]Perhaps a key aspect of information theory is that information cannot exist in a vacuum; that is, information is physical (17). This statement implies that information must have an instantiation (be it ink on paper, bits in a computer’s memory, or even the neurons in a brain)[/b]. Furthermore, it also implies that information must be about something. Lines on a piece of paper, for example, are not inherently information until it is discovered that they correspond to something, such as (in the case of a map) to the relative location of local streets and buildings. [b]Consequently, any arrangement of symbols might be viewed as potential information (also known as entropy in information theory), but acquires the status of information only when its correspondence, or correlation, to other physical objects is revealed[/b].

Nice. In brief, the authors clearly state that information requires a physical substrate to reside upon, and a mechanism for the residence of that information upon the requisite physical substrate, in such a manner that said information constitutes a mapping from the arrangement of the physical substrate upon which it resides, to whatever other physical system is being represented by that mapping. I remember one creationist claiming that because the mass of a floppy disc doesn’t change when one writes data to it, this somehow “proves” that information is not a physical entity: apparently said creationist didn’t pay attention in the requisite basic physics classes, or else he would have learned that the information stored on a floppy disc is stored by materially altering the physical state of the medium, courtesy of inducing changes in the magnetic orientation of the ferric oxide particles in the disc medium. In other words, a physical process was required to generate that information and store it on the disc. I am indebted to the above authors for casting this basic principle in the appropriate (and succinct) general form.

The authors move on with this:

In biological systems the instantiation of information is DNA, but what is this information about? To some extent, it is the blueprint of an organism and thus information about its own structure. More specifically, it is a blueprint of how to build an organism that can best survive in its native environment, and pass on that information to its progeny. This view corresponds essentially to Dawkins’ view of selfish genes that ‘‘use’’ their environment (including the organism itself), for their own replication (18). Thus, those parts of the genome that do correspond to something (the non-neutral fraction, that is) correspond in fact to the environment the genome lives in. Deutsch (19) referred to this view by saying that ‘‘genes embody knowledge about their niches.’’ This environment is extremely complex itself, and consists of the ribosomes the messages are translated in, other chemicals and the abundance of nutrients inside and outside the cell, and the environment of the organism proper (e.g., the oxygen abundance in the air as well as ambient temperatures), among many others. An organism’s DNA thus is not only a ‘‘book’’ about the organism, but is also a book about the environment it lives in, including the species it co-evolves with. It is well known that not all of the symbols in an organism’s DNA correspond to something. These sections, sometimes referred to as ‘‘junk-DNA,’’ usually consist of portions of the code that are unexpressed or untranslated (i.e., excised from the mRNA). More modern views concede that unexpressed and untranslated regions in the genome can have a multitude of uses, such as for example satellite DNA near the centromere, or the polyC polymerase intron excised from [i]Tetrahymena[/i] rRNA. In the absence of a complete map of the function of each and every base pair in the genome, how can we then decide which stretch of code is ‘‘about something’’ (and thus contributes to the complexity of the code) or else is entropy (i.e., random code without function)?

A true test for whether a sequence is information uses the success (fitness) of its bearer in its environment, which implies that a sequence’s information content is conditional on the environment it is to be interpreted within (4). Accordingly, Mycoplasma mycoides, for example (which causes pneumonia-like respiratory illnesses), has a complexity of somewhat less than one million base pairs in our nasal passages, but close to zero complexity most everywhere else, because it cannot survive in any other environment—meaning its genome does not correspond to anything there. A genetic locus that codes for information essential to an organism’s survival will be fixed in an adapting population because all mutations of the locus result in the organism’s inability to promulgate the tainted genome, whereas inconsequential (neutral) sites will be randomized by the constant mutational load. Examining an ensemble of sequences large enough to obtain statistically significant substitution probabilities would thus be sufficient to separate information from entropy in genetic codes. The neutral sections that contribute only to the entropy turn out to be exceedingly important for evolution to proceed, as has been pointed out, for example, by Maynard Smith (20).

In Shannon’s information theory (22), the quantity entropy (H) represents the expected number of bits required to specify the state of a physical object given a distribution of probabilities; that is, it measures how much information can potentially be stored in it. In a genome, for a site i that can take on four nucleotides with probabilities

{p_C(i), p_G(i), p_A(i), p_T(i)}, [1]

the entropy of this site is

H(i) = -Σ_{j=C,G,A,T} p_j(i) log p_j(i) [2]

The maximal entropy per site (if we agree to take our logarithms to base 4, i.e., the size of the alphabet) is 1, which occurs if all of the probabilities are equal to 1/4. If the entropy is measured in bits (take logarithms to base 2), the maximal entropy per site is two bits, which naturally is also the maximal amount of information that can be stored in a site, as entropy is just potential information. A site stores maximal information if, in DNA, it is perfectly conserved across an equilibrated ensemble. Then, we assign the probability p = 1 to one of the bases and zero to all others, rendering H(i) = 0 for that site according to Eq. 2. The amount of information per site is thus (see, e.g., ref. 23)

I(i) = H_max - H(i) [3]

In the following, we measure the complexity of an organism’s sequence by applying Eq. 3 to each site and summing over the sites. Thus, for an organism of l base pairs the complexity is

C = l - Σ_i H(i) [4]

It should be clear that this value can only be an approximation to the true physical complexity of an organism’s genome. In reality, sites are not independent and the probability to find a certain base at one position may be conditional on the probability to find another base at another position. Such correlations between sites are called epistatic, and they can render the entropy per molecule significantly different from the sum of the per-site entropies (4). This entropy per molecule, which takes into account all epistatic correlations between sites, is defined as

H = -Σ_g p(g|E) log p(g|E) [5]

and involves an average over the logarithm of the conditional probabilities p(g|E) to find genotype g given the current environment E. In every finite population, estimating p(g|E) using the actual frequencies of the genotypes in the population (if those could be obtained) results in corrections to Eq. 5 larger than the quantity itself (24), rendering the estimate useless. Another avenue for estimating the entropy per molecule is the creation of mutational clones at several positions at the same time (7, 25) to measure epistatic effects. The latter approach is feasible within experiments with simple ecosystems of digital organisms that we introduce in the following section, which reveal significant epistatic effects. The technical details of the complexity calculation including these effects are relegated to the Appendix.

Quite a substantial mathematical background, I think everyone will agree. I’ll let everyone have fun reading the rest of the details off-post, as they are substantial, and further elaboration here will not be necessary in the light of my providing a link to the full paper.
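
For anyone who prefers code to notation, here is a minimal Python sketch of Eqs. 2–4 above, applied to a toy aligned population of four two-base sequences rather than the digital organisms the authors actually study:

```python
import math

BASES = "CGAT"

def site_entropy(column, base=4):
    """Eq. 2: H(i) = -sum_j p_j(i) log p_j(i); base 4 keeps H(i) in [0, 1]."""
    h = 0.0
    for b in BASES:
        p = column.count(b) / len(column)
        if p > 0:
            h -= p * math.log(p, base)
    return h

def complexity(sequences):
    """Eq. 4: C = l - sum_i H(i), summing per-site entropies over the genome."""
    length = len(sequences[0])
    columns = ["".join(seq[i] for seq in sequences) for i in range(length)]
    return length - sum(site_entropy(col) for col in columns)

# Site 1 is perfectly conserved (H = 0, so it stores maximal information);
# site 2 is completely randomized (H = 1, pure entropy, no information):
population = ["AC", "AG", "AA", "AT"]
print(complexity(population))  # 2 - (0 + 1) = 1.0
```

This is only the per-site approximation; as the authors note, epistatic correlations (Eq. 5) require the full per-molecule treatment.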

Thus ends Part 1. Part 2 will follow shortly.


Welcome to Part 2.

Moving on to the Kaila and Annila paper, here’s the abstract:

The second law of thermodynamics is a powerful imperative that has acquired several expressions during the past centuries. Connections between two of its most prominent forms, i.e. the evolutionary principle by natural selection and the principle of least action, are examined. Although no fundamentally new findings are provided, it is illuminating to see how the two principles rationalizing natural motions reconcile to one law. The second law, when written as a differential equation of motion, describes evolution along the steepest descents in energy and, when it is given in its integral form, the motion is pictured to take place along the shortest paths in energy. In general, evolution is a non-Euclidean energy density landscape in flattening motion.

Ah, this dovetails nicely with Thomas D. Schneider’s presentation of a form of the Second Law of Thermodynamics applicable to biological systems that I’ve covered in past posts. This can be read in more detail here. Note that Thomas D. Schneider is not connected with Eric D. Schneider whose paper is cited above.

Here’s how Kaila and Annila introduce their work:

[b]1. Introduction[/b]

The principle of least action (de Maupertuis 1744, 1746; Euler 1744; Lagrange 1788) and the evolutionary principle by natural selection (Darwin 1859) account for many motions in nature. The calculus of variation, i.e. ‘take the shortest path’, explains diverse physical phenomena (Feynman & Hibbs 1965; Landau & Lifshitz 1975; Taylor & Wheeler 2000; Hanc & Taylor 2004). Likewise, the theory of evolution by natural selection, i.e. ‘take the fittest unit’, rationalizes various biological courses. Although the two old principles both describe natural motions, they seem to be far apart from each other, not least because still today the formalism of physics and the language of biology differ from each other. However, it is reasonable to suspect that the two principles are in fact one and the same, since for a long time science has failed to recognize any demarcation line between the animate and the inanimate.

In order to reconcile the two principles to one law, the recent formulation of the second law of thermodynamics as an equation of motion (Sharma & Annila 2007) is used. Evolution, when stated in terms of statistical physics, is a probable motion. The natural process directs along the steepest descents of an energy landscape by equalizing differences in energy via various transport and transformation processes, e.g. diffusion, heat flows, electric currents and chemical reactions (Kondepudi & Prigogine 1998). These flows of energy, as they channel down along various paths, propel evolution. In a large and complicated system, the flows are viewed to explore diverse evolutionary paths, e.g. by random variation, and those that lead to a faster entropy increase, equivalent to a more rapid decrease in the free energy, become, in terms of physics, naturally selected (Sharma & Annila 2007). The abstract formalism has been applied to rationalize diverse evolutionary courses as energy transfer processes (Grönholm & Annila 2007; Jaakkola et al. 2008a,b; Karnani & Annila in press).

The theory of evolution by natural selection, when formulated in terms of chemical thermodynamics, is easy to connect with the principle of least action, which also is well established in terms of energy (Maslov 1991). In accordance with Hamilton’s principle (Hamilton 1834, 1835), the equivalence of the differential equation of evolution and the integral equation of dissipative motion is provided here, starting from the second law of thermodynamics (Boltzmann 1905; Stöltzner 2003). In this way, the similarity of the fitness criterion (‘take the steepest gradient in energy’) and the ubiquitous imperative (‘take the shortest path in energy’) becomes evident. The two formulations are equivalent ways of picturing the energy landscape in flattening motion. Thus, there are no fundamentally new results. However, as once pointed out by Feynman (1948), there is a pleasure in recognizing old things from a new point of view.

I advise readers to exercise some caution before diving into this paper in full, as it involves extensive mathematics from the calculus of variations, and a good level of familiarity with Lagrangian and Hamiltonian mechanics is a pre-requisite for understanding the paper in full.
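
That said, the core intuition of “evolution along the steepest descents in energy” can be caricatured in a few lines of Python as ordinary gradient descent on an invented energy landscape. This is a cartoon of the idea only, not the paper’s variational formalism:

```python
# Toy "steepest descent on an energy landscape": follow the negative
# gradient until the system settles into a minimum. The landscape and
# step size are invented purely for illustration.

def steepest_descent(grad, x, y, step=0.1, iters=200):
    for _ in range(iters):
        gx, gy = grad(x, y)
        x, y = x - step * gx, y - step * gy
    return x, y

def grad_e(x, y):
    """Gradient of the invented energy E(x, y) = (x - 1)^2 + 2*(y + 2)^2."""
    return 2 * (x - 1), 4 * (y + 2)

print(steepest_descent(grad_e, 5.0, 5.0))  # settles at the minimum, ~(1.0, -2.0)
```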

In the meantime, let’s take a look at the Schneider & Kay paper. Here’s their introduction:

[b]Introduction[/b]

In the middle of the nineteenth century, two major scientific theories emerged about the evolution of natural systems over time. Thermodynamics, as refined by Boltzmann, viewed nature as decaying toward a certain death of random disorder in accordance with the second law of thermodynamics. This equilibrium seeking, pessimistic view of the evolution of natural systems is contrasted with the paradigm associated with Darwin, of increasing complexity, specialization, and organization of biological systems through time. The phenomenology of many natural systems shows that much of the world is inhabited by nonequilibrium coherent structures, such as convection cells, autocatalytic chemical reactions and life itself. Living systems exhibit a march away from disorder and equilibrium, into highly organized structures that exist some distance from equilibrium.

This dilemma motivated Erwin Schrödinger, and in his seminal book What is Life? (Schrödinger, 1944), he attempted to draw together the fundamental processes of biology and the sciences of physics and chemistry. He noted that life was comprised of two fundamental processes; one “order from order” and the other “order from disorder”. He observed that the gene generated order from order in a species, that is, the progeny inherited the traits of the parent. Over a decade later Watson and Crick (1953) provided biology with a research agenda that has led to some of the most important findings of the last fifty years.

However, Schrödinger’s equally important but less understood observation was his order from disorder premise. This was an effort to link biology with the fundamental theorems of thermodynamics (Schneider, 1987). He noted that living systems seem to defy the second law of thermodynamics which insists that, within closed systems, the entropy of a system should be maximized. Living systems, however, are the antithesis of such disorder. They display marvelous levels of order created from disorder. For instance, plants are highly ordered structures, which are synthesized from disordered atoms and molecules found in atmospheric gases and soils.

Schrödinger solved this dilemma by turning to nonequilibrium thermodynamics. He recognized that living systems exist in a world of energy and material fluxes. An organism stays alive in its highly organized state by taking high quality energy from outside itself and processing it to produce, within itself, a more organized state. Life is a far from equilibrium system that maintains its local level of organization at the expense of the larger global entropy budget. He proposed that the study of living systems from a nonequilibrium perspective would reconcile biological self-organization and thermodynamics. Furthermore he expected that such a study would yield new principles of physics.

This paper examines the order from disorder research program proposed by Schrödinger and expands on his thermodynamic view of life. We explain that the second law of thermodynamics is not an impediment to the understanding of life but rather is necessary for a complete description of living processes. We expand thermodynamics into the causality of the living process and show that the second law underlies processes of self-organization and determines the direction of many of the processes observed in the development of living systems.

Finally, I’ll wind up by introducing Emory F. Bunn’s paper, which is a particular killer for creationist canards, because it involves direct mathematical derivation of the thermodynamic relationships involved in evolutionary processes, and a direct quantitative analysis demonstrating that evolution is perfectly consistent with the Second Law of Thermodynamics. Here’s the abstract:

Skeptics of biological evolution often claim that evolution requires a decrease in entropy, giving rise to a conflict with the second law of thermodynamics. This argument is fallacious because it neglects the large increase in entropy provided by sunlight striking the Earth. A recent article provided a quantitative assessment of the entropies involved and showed explicitly that there is no conflict. That article rests on an unjustified assumption about the amount of entropy reduction involved in evolution. I present a refinement of the argument that does not rely on this assumption.

Here’s the opening gambit:

I. INTRODUCTION

Daniel Styer recently addressed the claim that evolution requires a decrease in entropy and therefore is in conflict with the second law of thermodynamics. He correctly explained that this claim rests on misunderstandings about the nature of entropy and the second law. The second law states that the total entropy of a closed system must never decrease. However, the Earth is not a closed system and is constantly absorbing sunlight, resulting in an enormous increase in entropy, which can counteract the decrease presumed to be required for evolution. This argument is known to those who defend evolution in evolution-creationism debates, but it is usually described in a general, qualitative way. Reference 1 filled this gap with a quantitative argument.

In the following I present a more robust quantitative argument. We begin by identifying the appropriate closed system to which to apply the second law. We find that the second law requires that the rate of entropy increase due to the Earth’s absorption of sunlight, (dS/dt)_sun, must be sufficient to account for the rate of entropy decrease required for the evolution of life, (dS/dt)_life (a negative quantity). As long as

(dS/dt)_sun + (dS/dt)_life ≥ 0,    (1)

there is no conflict between evolution and the second law.

Styer estimated both (dS/dt)_sun and (dS/dt)_life to show that the inequality (1) is satisfied, but his argument rests on an unjustified and probably incorrect assumption about (dS/dt)_life. I will present a modified version of the argument which does not depend on this assumption and which shows that the entropy decrease required for evolution is orders of magnitude too small to conflict with the second law of thermodynamics.
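
To get a feel for the orders of magnitude involved, here is a back-of-envelope Python sketch of inequality (1). The inputs are generic textbook values plus a deliberately extravagant guess for the biosphere; they are my illustrative assumptions, not the figures from Bunn’s paper:

```python
# Back-of-envelope check of inequality (1): (dS/dt)_sun + (dS/dt)_life >= 0.
# Every number below is a rough, illustrative assumption.

P_ABSORBED = 1.2e17    # W, sunlight absorbed by the Earth
T_SUN      = 6000.0    # K, effective temperature of solar radiation
T_EARTH    = 300.0     # K, rough temperature at which it is absorbed
K_B        = 1.38e-23  # J/K, Boltzmann's constant

# Power arriving from a 6000 K source and thermalizing at 300 K produces
# entropy at the rate P * (1/T_earth - 1/T_sun):
ds_dt_sun = P_ABSORBED * (1.0 / T_EARTH - 1.0 / T_SUN)

# Extravagant overestimate of evolution's entropy "cost": suppose assembling
# every molecule in a ~1e15 kg biosphere (~3e40 molecules) required an
# entropy decrease of 10 k_B apiece, spread over ~4 billion years:
ds_dt_life = -(3e40 * 10 * K_B) / (4e9 * 3.15e7)

print(f"(dS/dt)_sun  ~ {ds_dt_sun:.1e} J/K/s")    # ~ +3.8e+14
print(f"(dS/dt)_life ~ {ds_dt_life:.1e} J/K/s")   # ~ -3.3e+01
print("inequality (1) holds:", ds_dt_sun + ds_dt_life >= 0)
```

Even with the biosphere term inflated beyond all reason, the sunlight term wins by roughly thirteen orders of magnitude.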

Once again, I’ll let you all have fun reading the paper in full. :slight_smile:

So, that’s five scientific papers containing detailed rebuttals of creationist canards about the Second Law of Thermodynamics. I think that’s sufficient to establish that the creationist canards ARE canards, don’t you?


My own pathetic little attempt at an explanation is put to shame by this tour de force by @Calilasseia. Bravo! :champagne:


@Get_off_my_lawn I get that feeling too.


I’ll make it simpler. Entropy is expressed in the dimensions of heat/temperature, typically in joules/Kelvin. When heat flows from object A to object B, we:

  1. add this heat flow (from B’s perspective) divided by the temperature of B, to B’s entropy.
  2. add this heat flow (from A’s perspective) divided by the temperature of A, to A’s entropy.

Since most people who talk about entropy don’t ever actually do these calculations, they don’t realize that the value “added” in part 2 is always negative. Of course, the value in part 1 is always positive. And what is not immediately obvious (but isn’t too hard to see with a little work) is that the absolute value of part 1 is larger than the absolute value of part 2. That means their sum, the total entropy change, is always positive: the total entropy will increase, not decrease (this is where we come back to what most people know).

The punchline is: if you want to show that a story violates the 2nd law, you must show that the TOTAL entropy decreased, not that the entropy decreased in some part of the system, since the entropy of some part of the system always decreases in any such interaction!

BTW, this is what a refrigerator does: it lowers the entropy inside the refrigerator at the cost of increasing the entropy outside the refrigerator (by a larger amount).
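
Since the whole point is that people never actually do the arithmetic, here is the calculation as a minimal Python sketch, assuming idealized constant-temperature reservoirs:

```python
def entropy_changes(q, t_hot, t_cold):
    """Heat q (J) flows from a reservoir at t_hot to one at t_cold (K)."""
    ds_hot = -q / t_hot   # part 2: the hot side loses heat, so this is negative
    ds_cold = q / t_cold  # part 1: the cold side gains heat, so this is positive
    return ds_hot, ds_cold, ds_hot + ds_cold

# 100 J flowing from a 400 K object A to a 300 K object B:
ds_a, ds_b, total = entropy_changes(100.0, 400.0, 300.0)
print(f"dS_A  = {ds_a:+.4f} J/K")   # -0.2500 J/K: a local entropy decrease
print(f"dS_B  = {ds_b:+.4f} J/K")   # +0.3333 J/K
print(f"total = {total:+.4f} J/K")  # +0.0833 J/K: positive, as the 2nd law demands
```

Run it with t_hot < t_cold and the total comes out negative, which is the second law’s way of telling you that heat does not spontaneously flow from cold to hot.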

There are four episodes on thermodynamics in Yale’s Physics 1 lecture series, and they are excellent.


And my coffee analogy has a bitter taste now…

An even simpler argument comes from Allen Harvey, an evangelical with a doctorate in chemistry who works at NIST in the field of molecular thermodynamics. His argument is targeted at his fellow believers, trying to get them to stop making wildly false thermodynamic claims about life and evolution:

If the 2nd law has not been violated as the number of humans grew from two to 5 billion, it is ridiculous to assert that it was violated in the comparatively minuscule change from zero to two.


Nice straw man, care to show a single post of mine denying there are single celled organisms?

Biology 101, everything that is alive, eats. Lol metabolism is one of the necessary characteristics of something to be called alive in the first place. So…your an idiot.

Another straw man, can you fucking read? I never claimed otherwise you illiterate buffoon, and this is your first and last warning on the ad hominem, Bullwinkle.

Do you truly have no education whatsoever? We covered both of these things in my high school biology class.

It’s a shame they didn’t cover a basic level of reading comprehension, as I have never claimed otherwise. Wtf are you blathering about?

Here is my post you replied to, I’m quoting it verbatim.

What you’re doing there is irrational, it’s called an argumentum ad ignorantiam or appeal to ignorance fallacy. Not having contrary evidence for a claim doesn’t validate it.

I don’t know how life started on this planet.

I’m an atheist because no one can demonstrate any objective evidence for any deity.

Can you?

Incidentally the word eat, as Nyarl points out, might be a misnomer, as single celled organisms wouldn’t eat, in the sense we understand it.

Maybe if you learn to read, or get a literate adult to read it very slowly to you, you might offer something beyond straw man gibberish, and ad hominem. I have to say judging from your posts here, I’m dubious though. I notice you completely ignored the question, quelle surprise.


What a particularly stupid claim. It’s clear your grasp of evolution is on a par with your mastery of English.

Species evolution through natural selection is an objective scientific fact, only a complete ignoramus would even try to deny it.

“Your” is a possessive pronoun ffs, not a contraction of “you are”. Christ on a bike…

Troll, I’m calling it…


A REAL SIMPLE QUESTION FOR YOU:
"What in the world makes you think that there was ever a “First Organism?” ( **a living thing that has an organized structure, can react to stimuli, reproduce, grow, adapt, and maintain homeostasis)

ANOTHER VERY SIMPLE QUESTION:
"Why in the world would you assume it ‘ATE’ anything?

Your question belies your assumed intelligence in any matter related to evolution.