40. Natural selection: a power measured in watts

Our goal is to confirm Darwin’s contention that:

Natural Selection, we shall hereafter see, is a power incessantly ready for action. (Darwin, 1859, p. 76).

That goal now seems far more attainable. Our investigation into the etymological fallacy, and so into the widespread misuse of some very basic scientific terms within biology and ecology, has brought us a veritable abundance of riches. We have our full complement of laws of biology, maxims of ecology, and some interlinking constraints. The procedure should therefore now be quite straightforward. We should at last be able to demonstrate the true simplicity, scope, and versatility of Darwin’s natural selection. We should even be able to produce our population equation.

If Darwin is correct in his assertion that natural selection is a power incessantly ready for action, then it should abide by the first law of thermodynamics. Since it must demonstrate itself as a material effect, we should be able to measure it in both joules and watts, which are joules per second:

power. Symbol: P. The rate at which energy is expended or work is done. It is measured in watts (Gray & Isaacs, 1991).

Darwin was clear in one of his most important ideas:

This gradual increase in number of the species of a group is strictly conformable with the theory; for the species of the same genus, and the genera of the same family, can increase only slowly and progressively; the process of modification and the production of a number of allied forms necessarily being a slow and gradual process, one species first giving rise to two or three varieties, these being slowly converted into species, which in their turn produce by equally slow steps other varieties and species, and so on, like the branching of a great tree from a single stem, till the group becomes large (Darwin, 1859, p. 335).

One of its best-known modern expressions comes from Darrow:

It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change (Darrow, 1988).

Darrow’s above reference to ‘the one most responsive to change’ can be seen to mean the one that is the best and most energy-efficient at apportioning its energy or power across its mass, its chemical constitution, and its numbers, always granted its specific resources and environment. These are imposed by its equilibrium environment parameter, ψ*.

All these things granted, the solution would seem self-evident. What we are looking for seems to fit the following definition:

Power The time rate of doing work. Like work, power is a scalar quantity, that is, a quantity which has magnitude but no direction. Some units often used for the measurement of power are the watt (1 joule of work per second) and the horsepower (550 foot-pounds of work per second).

Power is a concept which can be used to describe the operation of any system or device in which a flow of energy occurs. … Any device can do a large amount of work by performing for a long time at a low rate of power, that is by doing work slowly. However, if a large amount of work must be done rapidly, a high-power device is needed. High-power machines are usually larger, more complicated, and more expensive than equipment which need operate only at low power (Parker, 1984, p. 1539).

It is surely more than a coincidence that the above definition of power seems to fit natural selection precisely. So … what might be the problem?

Not the least of our difficulties is that some lingering confusions and misuses of terms remain, such as ‘open’ and ‘energy’. These must be resolved before we can put the problem that Darwin posed to rest using our methods of choice. Fortunately, however, the singular advantage of a sound mathematical model is that testable restrictions are easy both to apply and to relax. It can also validate itself through its results.

We must of course use some differential equations, which were first introduced by Newton and Leibniz. The task often requires solving linear first-order equations of the form:

dy/dx + P(x)y = Q(x)

where P and Q are the two scalar orthogonal components, upon the x- and y-axes, of some vector force, F; or likewise of the form:

φ(x, y)dx + σ(x, y)dy = 0;

and where y has a dependency on x; and/or where in the former case P(x) and/or Q(x) can be constants (MacDonald, 2004; Neudert and Wahl, 2000).

The difficulty for natural selection is that even as we measure it, y is often dependent on x. This can make it very difficult to isolate each component and make predictions. Fortunately, however, equations of these general forms can often be solved using an ‘Euler multiplier’ or ‘integrating factor’.

If we symbolize the integrating factor by λ, we can create one out of a relation such as P(x)dx to give:

λ = e∫P(x)dx

or alternatively, and referring to our other equation, we can say that there exists an alternative function, Ψ(x, y), such that when it is differentiated we get:

dΨ(x, y) = λ(x, y)φ(x, y)dx + λ(x, y)σ(x, y)dy

where we have applied our integrating factor, λ, equally to both the original functions.

Our integrating factor now allows us to re-express our first differential equation as:

d(λy)/dx = λQ(x).

If we now integrate both its sides, with respect to x, we get:

λy = ∫λQ(x) dx.

Since y is now being multiplied by our integrating factor or Euler multiplier, we only need to divide both sides by that same multiplier, and we will know y precisely and in terms of x:

y = (1/λ)∫λQ(x) dx.

Our second equation sets things out even more simply. Since we are applying the same multiplier to each side; and since dx and dy change equivalently and oppositely for each term, granted their dependency; then the multiplier always does the same thing no matter how x and y change. We therefore end up with an expression of the form:

Z(x, y) = k,

where Z is some relation between x and y we can determine, and k is now some constant we can measure and apply to Z. This is now easy to measure, and these methods are completely general.
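
The recipe is easy to check symbolically. The following minimal sketch, written in Python with the sympy library and assuming purely for illustration that P(x) = 1 and Q(x) = x, reproduces y = (1/λ)∫λQ(x) dx and confirms it against a general solver:

import sympy as sp

x, C = sp.symbols('x C')

# An illustrative case of dy/dx + P(x)y = Q(x), with P(x) = 1 and Q(x) = x.
P, Q = sp.Integer(1), x

# The integrating factor: lambda = e^(integral of P(x) dx).
lam = sp.exp(sp.integrate(P, x))

# y = (1/lambda) * (integral of lambda*Q(x) dx, plus a constant of integration).
y = (sp.integrate(lam * Q, x) + C) / lam
print(sp.simplify(y))                                # C*exp(-x) + x - 1

# Cross-check against sympy's own ODE solver.
f = sp.Function('f')
print(sp.dsolve(f(x).diff(x) + P*f(x) - Q, f(x)))    # Eq(f(x), C1*exp(-x) + x - 1)

Both routes give the same family of solutions, differing only in the name of the arbitrary constant.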

Given Maxims 3 and 4 of ecology—which are M = m̅/t - n/t and H = m̅/t - n/t - V/t respectively; and also given our integrating factor or Euler multiplier; then all we now need do is isolate all changes caused by n/t, which is the rate of change of numbers with respect to time. It now seems straightforward. All we need do is run an experiment—such as we have already done with Brassica rapa—and we can soon determine if this Darwinian proposal of a biological responsiveness to changes in -n/t is or is not observable, and is or is not measurable. So … what might be the problem?

There is in fact a rather large problem concerning the role and definition of energy in biology. There is also a problem of concepts and their designations. For example, Daniel Brooks and E. O. Wiley declare in their book Evolution As Entropy: Toward a Unified Theory of Biology that:

It is a truism that living organisms must take up, utilize, and dissipate energy or they will die. It is also true that no living system can survive whose minimum needs exceed available energy supplies. From this, one might then assume that energy flowing through the surrounding environment creates the boundary conditions for organisms. But we would assert that the flow of energy cannot explain the structure of living organisms (Brooks and Wiley, 1986, p. 36).

If this is so, then everything we have tried to do is pointless. But since their book’s title is Evolution as Entropy their intent is, in fact, to shift the debate from energy to entropy as they understand it. It is most interesting to see them continue by saying:

Energy is modified or differentially utilized by organisms, and that modification or use is determined by properties intrinsic to organisms. These intrinsic properties characterize an organism. In this regard, we agree with recent workers such as Wicken that energy flow is open and essentially unlimited as far as living organisms are concerned, that is, more energy reaches the earth from the sun than is ever used by living organisms. Energy flows do not provide an explanation for why there are organisms, why organisms vary, or why there are different species (Brooks and Wiley, 1986, p. 36).

Therefore, in order to understand their position, it is important to understand two things:

  • what they mean by ‘open’;
  • what they mean by the second law of thermodynamics (which defines entropy).

Brooks and Wiley state their position regarding the second law as follows:

The second law of thermodynamics may be stated in a variety of ways, some more useful than others. In a general sense, it asserts that the universe as a whole, or any isolated part of the universe is moving towards maximum entropy given the constraints operating on the system. In terms of energy flow, this process is irreversible: the amount of free energy of the whole system decreases with time. Thus the second law is associated with the concept of time and history. On a more concrete level, the second law has a statistical aspect. We can say that a system will become more randomized over time or that the number of states a system may occupy will increase until there is an equal probability that any one part of the system will be in any of the states available to it. When this occurs, the system is in equilibrium. Systems that are not in equilibrium may be moving toward equilibrium, or they may maintain themselves some distance from equilibrium by processing free energy. The “distance” from equilibrium is a manifestation of the order and organization of the system (Brooks and Wiley, 1986, p. 36).

For reasons we shall see shortly, it is highly questionable whether Brooks and Wiley’s understanding and expression of the second law of thermodynamics is “useful” in the way they declare. But there is also the issue of what they mean by “open”, because entropy concerns the interaction between systems. And … very strangely … nowhere in their 320-page book do they offer a definition for open: surely a significant oversight. The closest they get to one is when they say:

The fact that living organisms exchange energy and matter with their surroundings places them in the category of physical systems called open systems (von Bertalanffy 1933, 1952) [italics in original] (Brooks and Wiley, 1986, p. 36).

We should now carefully note two things. Firstly … they have used the word ‘matter’ rather than the more scientifically precise mass. Secondly, they make it quite clear that if we want to properly understand this important word ‘open’ then we must consult Bertalanffy.

We must again be acutely aware of the etymological fallacy … and of how easy it is, through it, for positions to become entrenched. Donald Haynie puts the position even more robustly in his Biological Thermodynamics when he says: “Without exception, all living organisms that have ever existed are open systems” (Haynie, 2001).

But unfortunately, Haynie follows Brooks and Wiley in again relying on that rather loose word ‘matter’, and not on its far more clear and robust cousin, ‘mass’:

Before getting too far underway, we need to define some important terms. … The system is that part of the universe chosen for study. The surroundings are simply the entire universe excluding the system. The system and surroundings are separated by a boundary …. The system is said to be closed if it can exchange heat with the surroundings but not matter. That is, the boundary of a closed system is impermeable to matter. … If matter can be exchanged between system and surroundings, the system is open [Emphasis in original] (Haynie, 2001, pp. 8-10).

This is the etymological fallacy. It usually requires a four-year postgraduate course in philosophy to gain clarity in the varied denotations of ‘matter’. A quest for clarity in matter is close to hopeless, and it is therefore far more profitable to try to pin down exactly what these proponents mean by ‘open’.

Both Haynie and Brooks and Wiley refer us to Bertalanffy, whose 1952 publication Problems of Life: An Examination of Modern Biological Thought is an extension of his 1950 paper The Theory of Open Systems in Physics and Biology. This 1950 paper summarizes his ideas, and contains his critical definition of ‘open’ (Bertalanffy, 1950). And there we at last get a definition of open—and by extension of ‘matter’—as currently used in biology:

From the physical point of view, the characteristic state of the living organism is that of an open system. A system is closed if no material enters or leaves it; it is open if there is import and export and, therefore, change of the components. Living systems are open systems, maintaining themselves in exchange of materials with environment, and in continuous building up and breaking down of their components (Bertalanffy, 1950).

In the first place, Bertalanffy has again used the word ‘material’, and not ‘mass’, to define an open system. In the second place, he uses the word ‘material(s)’ twelve times in this paper, while using ‘mass’ but once, and even on that one occasion he does so in a context in which it would simply have been linguistically infelicitous to use ‘materials’:

In the most common type of growth, anabolism is a function of surface, catabolism of body mass. With increasing size, the surface-volume ratio is shifted in disfavor of surface. Therefore, eventually a balance between anabolism and catabolism is reached which is independent of the initial size and depends only on the species-specific ratio of the metabolic constants. It is, therefore, equifinal (Bertalanffy, 1950).

Since Bertalanffy only uses ‘body mass’ because ‘body material’ would not make for particularly good English, it is doubtful that he is doing so because he is at last recognizing that there is a most important scientific distinction between mass and material, with the former being highly technical in its intent, the latter not so. Philosophers may still be arguing about matter, but arguments about mass have long ceased.

Since Bertalanffy’s use of ‘mass’ occurs while he is trying to define his concept of ‘equifinal’—which is clearly important to the development of his ideas—we need to know what that word means. Fortunately, he defines it for us:

A profound difference between most inanimate and living systems can be expressed by the concept of equifinality. In most physical systems, the final state is determined by the initial conditions. Take, for instance, the motion in a planetary system where the positions at a time t are determined by those of a time t0, or a chemical equilibrium where the final concentrations depend on the initial ones. If there is a change in either the initial conditions or the process, the final state is changed. Vital phenomena show a different behavior. Here, to a wide extent, the final state may be reached from different initial conditions and in different ways. Such behavior we call equifinal (Bertalanffy, 1950).

And … here we have a mystery. It is extremely difficult to see why a new word like ‘equifinal’ needs to be invented when there is a perfectly serviceable phrase available, complete with its rigorous definition, already incorporated within the second law of thermodynamics. That phrase is ‘inexact differential’, which is very carefully distinguished from ‘exact differential’, where an exact differential is “a differential equation which is obtained by setting the total differential of some function to zero” (James and James, 1992). James and James also use x² to give us an example of an “integrating factor”, which is:

a factor which, when multiplied into a differential equation, with right-hand member zero, makes the left-hand member an exact differential, or makes it an exact derivative. E.g. if the differential equation

dy/x + (y/x²) dx = 0

is multiplied by x², there results

x dy + y dx = 0

which has the solution xy = c (James and James, 1992).
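
A short symbolic check of James and James’ example, again a sketch in Python’s sympy with φ and σ read directly from their equation, shows the integrating factor doing its work:

import sympy as sp

x, y = sp.symbols('x y')

# James and James' equation, dy/x + (y/x**2) dx = 0, read as
# phi(x, y) dx + sigma(x, y) dy = 0:
phi, sigma = y / x**2, 1 / x

# Exactness test: d(phi)/dy must equal d(sigma)/dx.
print(sp.diff(phi, y), sp.diff(sigma, x))              # x**(-2) and -1/x**2: inexact

# Multiply through by the integrating factor x**2 to get x dy + y dx = 0:
lam = x**2
print(sp.diff(lam * phi, y), sp.diff(lam * sigma, x))  # 1 and 1: now exact

The exact differential is d(xy), and so the solution is xy = c, just as they state.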

Weiss’ paper The uniqueness of Clausius’ integrating factor makes the position completely clear:

The second law of thermodynamics can be divided into two important parts, which are summarized by the following statements:

  • The reciprocal of the absolute temperature, 1/T, is an integrating factor for the differential of the reversibly exchanged heat qrev for all thermodynamic systems (Carnot’s theorem), which implies the existence of a new state function, the entropy S (with the exact differential dS = δqrev/T).
  • In any irreversible adiabatic process, the entropy increases—as implied by Clausius’ inequality dS ≥ δq/T, where the equality holds only for reversible processes (Weiss, 2006).

As Weiss makes abundantly clear, the second law of thermodynamics has two distinct and very important parts. The Carnot cycle is the foundation of thermodynamics, and Rudolf Clausius used his methods because he was trying to give it a rational explanation, and to make all its properties measurable. The heat involved in the reversible Carnot cycle is designated qrev. When Clausius tried to analyse this Carnot heat engine, he found that δqrev, the infinitesimal change in the heat needed at any point, could not be computed because it was not exact in the technical mathematical sense. Since it was not exact, the path the system took was indeterminate. Yet … it had to be in principle both determinate and computable, because it had a specific value for the whole cycle. However, Clausius saw that δqrev/T, where T is the absolute temperature in kelvins, is indeed exact. We can come to know this latter commodity—which we can measure as joules per kelvin—because all we now need do is set it as the differential of some new function. Since it is a new function in the Euler style, its integral is now independent of whatever path qrev happens to be taking. Absolute temperature, T, is now our “integrating denominator” for δqrev. And when we divide our earlier inexact differential, δqrev, by T, our integrating denominator, we end up with an exact differential. We symbolize the new function we created—entirely so we could solve this problem—by S. We call it “entropy”. And we have the new law of nature first hinted at by Carnot.
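
Clausius’ manoeuvre can be illustrated with the simplest possible case. The sketch below assumes an ideal gas with constant heat capacity, the standard textbook example rather than anything specific to Clausius’ own text; δqrev fails the exactness test until it is divided by T:

import sympy as sp

T, V = sp.symbols('T V', positive=True)
n, R, Cv = sp.symbols('n R C_v', positive=True)

# For an ideal gas, delta_q_rev = Cv dT + (nRT/V) dV along a reversible path.
M, N = Cv, n * R * T / V

# Exactness test: d(M)/dV must equal d(N)/dT.
print(sp.diff(M, V), sp.diff(N, T))          # 0 and n*R/V: delta_q_rev is inexact

# Divide through by T, Clausius' integrating denominator:
print(sp.diff(M / T, V), sp.diff(N / T, T))  # 0 and 0: dS = delta_q_rev/T is exact

The quantity δqrev depends on the path taken; δqrev/T does not, which is precisely why S qualifies as a state function.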

In Clausius’ text The mechanical theory of heat, with its applications to the steam-engine and to the physical properties of bodies, in which he presented his discoveries, he carefully states in an Appendix entitled “Integration of the Differential Equation” how and why he derived his method, and so produced the now famous second law. He explains his usage of the Euler multiplier, or integrating factor, and finishes his explanation by saying:

The object of the introduction of the term a + t in place of the quantity t, is to render the last term susceptible of a simple mechanical meaning. In fact, from the equation:

pv = R(a + t)

which applies to permanent gases, it follows that

AR (a + t)/v dv = Apdv;

and since pdv represents the exterior work done during the expansion dv, the last term of the equation obviously represents the heat-equivalent of the exterior work (Clausius, 1897, p. 77).

When Boltzmann’s later statistical approach is now adopted, entropy is defined in terms of molecules and their movements … all of which have mass and inertia.

Figure 39: New Perspectives

Argument by analogy is always difficult, but so is making the issues clear beyond doubt. The issue is what may or may not be assumed, and clarifying what is and is not being assumed. We “know”—i.e. we believe—that the tiles on the left in Figure 39 are somehow all “the same tile”. But … this again evokes the etymological fallacy.

Words such as “point”, “line” and “plane” are strictly undefined. They remain undefined until we give them a specific meaning within a specific geometry. We believe we know that all those tiles have “parallel” edges, and that they only look different because of the perspective. In the same way, the thinner lines in the middle graphic in Figure 39 form a “perspectivity”: drawn from the given “perspective point”, they pass across the thicker rays and so transform any one of those thicker lines into any other. The tiles may look different, and the distances between the points on the thicker lines may also differ, but these sets of transformations share a given “cross-ratio”, which makes each set invariant. And since rectangles can now be changed somewhat arbitrarily, vectors can soon be transformed into other vectors which share this invariance with them under projection.

It is now our assumptions that need to change, as does our understanding of “parallel” and “infinity”. There is an entire branch of geometry called projective geometry which studies these transformations and denies the “obvious” constructs of standard Euclidean geometry (Davis, 2001; Lehmer, 2005). The angles, lines, lengths and measures so important in Euclidean geometry become irrelevant in the projective form.

Every branch of knowledge has its range of definitions that render it coherent. Thus two important axioms from projective geometry are:

  1. Given any two distinct points in a plane, there is a unique line that lies on both of them.
  2. Given any two distinct lines in a plane, there is a unique point that lies on both of them.

There are two important consequences to these axioms. Firstly, there are no such things as parallel lines in the Euclidean sense; and secondly, points and lines have become equivalent—a relationship known as “duality”. Since every point lies upon a line and every line has a set of points, any figure drawn can be turned into its point or line complement. All given rectangles have shapes equivalent to each other, as do those upon the left of Figure 39, their areas and angles being indeterminate; and all lines ostensibly parallel to each other in fact share a “point at infinity” at which they meet. By the relationship of duality, then as in the graphic on the right in Figure 39, once given an origin, for every line we can draw a normal from the origin to that line, with length r; the dual point for that line then lies along that same normal, at distance 1/r from the origin. It is a feature of projective geometry that every theorem holding true for any shape upon the plane also holds true for its complement, when point is substituted for line and vice versa.

Although lengths, areas and angles are no longer relevant within projective geometry, it is easy enough to devise a “homogeneous coordinate system” to represent coordinated sets of points that “belong” together. Just as 1/2, 2/4, and 150/300 all represent the same essential fraction, so also do the homogeneous coordinates (x, y, w) represent the same point as any (αx, αy, αw), where α is a non-zero real number.

Then comes the changed understanding of the otherwise familiar notion of parallel. Although y = ax + b is commonly accepted as the general equation for a line of slope a and intercept b in the Euclidean plane, it does not allow for lines parallel to the y-axis, whose slope is undefined. The more general equation ax + by + c = 0 for the Euclidean plane corrects that deficiency—as long as a, b and c are real numbers with at least one of a and b being non-zero. The completely general equation for a line in projective geometry is very similar, being ax + by + cw = 0, where a, b and c may not all be zero.

Although the equations may be very similar, their implications could not be more different. From the Euclidean perspective the parallel lines of those tiles look as if they are converging to a point … which projective geometry calls the point at infinity, and directly incorporates. All lines on any given Euclidean plane can now be found by setting w = 1 (most usually). We can now set a, b and c to any value to get ax + by + c·1 = 0 … exactly the same as the ax + by + c = 0 of the Euclidean formula above. However, its projective representation as ax + by + cw = 0 is more powerful, for it then incorporates all points and lines at infinity—again a characteristic feature of projective geometry. The point at infinity for any given Euclidean line is found by setting w to zero, so giving ax + by + c·0 = 0. This then means that ax = -by. We can therefore find our point at infinity by setting the values (-b, a, 0) for (x, y, w). This gives ax + by + cw = -ab + ab + 0 = 0 … exactly the point at infinity for that line. It now exists on our projective plane and can be approached. The whole line of points at infinity is the line whose coefficients are (a, b, c) = (0, 0, 1), that is 0·x + 0·y + 1·w = 0, which is satisfied by every point of the form (a, b, 0) for (x, y, w). It is then of no relevance what the values for x and y may be respective to the Euclidean plane, for we are on the line at infinity in the projective one. Parallel lines definitely now meet in the way depicted by the tiles, and infinity is an integral part of this geometry.
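
Homogeneous coordinates make these claims directly computable. In the following sketch, with the two lines chosen arbitrarily for illustration, two “parallel” Euclidean lines are intersected in the projective plane via the standard cross-product construction:

import numpy as np

# Two Euclidean lines of slope 2, y = 2x + 1 and y = 2x + 3, rewritten in the
# projective form ax + by + cw = 0 as coefficient triples (a, b, c):
L1 = np.array([2.0, -1.0, 1.0])
L2 = np.array([2.0, -1.0, 3.0])

# In the projective plane any two distinct lines meet in exactly one point,
# computable as the cross product of their coefficient triples.
p = np.cross(L1, L2)
print(p)   # [-2. -4.  0.]: since w = 0, this is their point at infinity

The result (-2, -4, 0) is a non-zero multiple of (-b, a, 0) = (1, 2, 0), exactly as derived above: the two parallel lines meet at their common point at infinity.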

As with mass, matter, and materials, points, lines and planes are very basic concepts that it is wisest not to assume. It is up to the geometer to state what they mean, and then build a consistent geometry. In the same way—as William James sought to do—a dynamicist or biologist is free to take the words mass, matter and materials and do what they want with them as long as they are consistent. Mass is not obliged to mean or to be the same in biology as it is in chemistry and physics, nor vice versa.

The implied struggle with the etymology of basic concepts is nothing new, for as our forays have shown, dynamics went through a tortured period of confusion, such as when Leibniz championed his vis viva of mv², while Newton championed his alternative mv. Newton’s error was in using m|v| as a scalar rather than mv as a vector. Leibniz’s kinetic energy also had to be carefully distinguished from momentum:

In 1686 Gottfried Wilhelm Leibniz publically set down some thoughts on Rene Descartes’ mechanics. In so doing he initiated the famous dispute concerning the “force” of a moving body known as the vis viva controversy. Two concepts, now called momentum (mv) and kinetic energy (½mv²) were discussed as a single concept, “force,” each differing from Newton’s idea of force. One of the many underlying problems of the controversy was clarified by Roger Boscovich in 1745 and Jean d’Alembert in 1758, both of whom pointed out that vis viva (mv²) and momentum (mv) were equally valid (Iltis, 1971, p. 21).

And then quantum theory went through a period of crisis until, for example, ‘amplitude’ had been clarified. This morphed into a complex number, the square of whose modulus represents the probability density of finding a given observable in a given volume element of space with a given energy density … all of which certainly requires a singular clarity in the application of several quite technical terms in a variety of associated disciplines!

Again an idea of Einstein’s gave me the lead. He had tried to make the duality of particles—light quanta or photons—and waves comprehensible by interpreting the square of the optical wave amplitudes as probability density for the occurrence of photons. This concept could at once be carried over to the ψ-function: |ψ|² ought to represent the probability density for electrons (or other particles). It was easy to assert this, but how could it be proved?

The atomic collision processes suggested themselves at this point. A swarm of electrons coming from infinity, represented by an incident wave of known intensity (i.e., |ψ|²), impinges upon an obstacle, say a heavy atom. In the same way that a water wave produced by a steamer causes secondary circular waves in striking a pile, the incident electron wave is partially transformed into a secondary spherical wave whose amplitude of oscillation ψ differs for different directions. The square of the amplitude of this wave at a great distance from the scattering centre determines the relative probability of scattering as a function of direction (Born, 1954).

Figure 40: Open to Natural Selection

Concepts certainly have to be clarified. The best way to do this is to follow Galileo and construct a thought experiment. Suppose, as in Figure 40, astronomers spy a new planet, with a road on which a cart is rolling. If it rolls along at steady speed, in a right line, and with no deviations, then since it does not accelerate, its mass cannot be determined. Suppose, now, that exobiologists excitedly spot an atmosphere around the planet. After further investigation, they suspect that it is some kind of Gaia-like planetary-wide biological entity. Important questions then need to be answered. Firstly, is this entity alive? Secondly, although Bertalanffy was very clear indeed that biological forms are “open systems” that exchange “materials” with their surroundings, this planetary-wide biological entity only has inter-galactic space about it and so cannot possibly be exchanging materials. Only sunlight arrives, which is not material and possesses no mass. Our single planetary entity absorbs and uses it for its own purposes.

Open and closed are now of no relevance. They are also not determinative in terrestrial biology. Or rather: just as projective geometry allows a whole range of questions and perspectives to be approached that cannot be approached with other geometries, so also with this insistence that biological systems are open. Biological systems are open if the investigator chooses to define them as such, but otherwise they are not.

In his paper An Outline of General System Theory Bertalanffy develops his ideas yet further, and again tries to define an open system:

The characteristic state of the living organism is that of an open system. We call a system closed if no materials enter or leave it. It is open if there is inflow and outflow, and therefore change of the component materials.

So far, physics and physical chemistry have been almost exclusively concerned with closed systems (Bertalanffy, 1950(b)).

Bertalanffy’s characterization of thermodynamics rests on the fact that thermodynamicists used thought experiments to clarify their concepts. They discovered that if they conceived of a discrete sample of matter in an apparently closed box, with walls such that its mass and molecules could not cross the system boundaries, then that system still somehow managed to exchange measurable properties with the environment. This had already been hinted at by Mayer. Thermodynamicists simply conceptually isolated and studied that box and its proposed interactions to formalize, and put rigour into, the discoveries made.

It is not possible to proceed in thermodynamics without being able to identify states. The first thing we can do is therefore propose some specific property to help identify states. We therefore propose a law for thermodynamics, and express it in two different ways to make its import clear:

There exists for every thermodynamic system in equilibrium a property called temperature. Equality of temperature is a necessary and sufficient condition for thermal equilibrium (Spakovszky, 2006).

The fourth (or zeroth) law of thermodynamics: If no spontaneous heat flow takes place between two substances in thermal contact then they can be considered to have equal temperatures (Encyclopaedia Britannica, 2002).

Now that we can measure the different states, it also becomes clear that this box, as a system, possesses some property over and above the mere mechanical attributes of its components. Sometimes the surroundings have an effect on the box; and sometimes the box has an effect on the surroundings. We can specify that new property as follows:

There exists for every thermodynamic system a property called the energy. The change of energy of a system is equal to the mechanical work done on the system in an adiabatic process. In a non-adiabatic process, the change in energy is equal to the heat added to the system minus the mechanical work done by the system (Spakovszky, 2006).

The first law of thermodynamics: For any process involving no effects external to the system except displacement of a mass between specified levels in a gravitational field, the magnitude of that mass is fixed by the end states of the system and is independent of the details of the process (Encyclopaedia Britannica, 2002).

We then notice another thing. Not all the states are absolutely identical. Sometimes we can get work out of the box; and when we do, it tends to cool down. If we want more work out of it, we either have to heat it up again or else somehow restore it to the state it was in before. We believe that in an ideal world—such as the thought experiment we can conduct with this box—we could get endless repetitions out of it, but observation and many trials indicate that hot to cold always happens spontaneously, whereas cold to hot requires that we do something. We therefore invent another property to adequately describe this state of affairs as follows:

There exists for every thermodynamic system in equilibrium an extensive scalar property called the entropy, S, such that in an infinitesimal reversible change of state of the system, dS = δQ/T, where T is the absolute temperature and δQ is the amount of heat received by the system. The entropy of a thermally insulated system cannot decrease and is constant if and only if all processes are reversible (Spakovszky, 2006).

The second law of thermodynamics: Among all the allowed states of a system with given values of energy, numbers of particles, and constraints, one and only one is a stable equilibrium state. Such a state can be reached from any other allowed state of the same energy, numbers of particles, and constraints, and leave no effects on the state of the environment (Encyclopaedia Britannica, 2002).

So in a simple step-by-step way, we have discovered three very important properties from our ideal and closed box—two of which we could only properly confirm in this way, and one of which we would never have discovered—that we can then proceed to study and use in the real world, open system or not. With this property called energy, all interactions with the environment can be accurately predicted and thoroughly explained. We also note that there are two exchanged properties, that are not the properties of the box but that are solely properties of its interaction with its surroundings, and that we can call ‘work’ and ‘heat’ … and that we can also measure and rationally explain.
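
The spontaneous hot-to-cold tendency is easy to tally with the entropy property just defined. A small numerical sketch, with purely illustrative figures, shows the bookkeeping:

# 1,000 J flowing spontaneously from a body at 373.15 K to one at 273.15 K,
# with entropy tallied as dS = (delta)Q/T for each body.
Q = 1000.0                        # joules transferred
T_hot, T_cold = 373.15, 273.15    # kelvins

dS_hot = -Q / T_hot               # the hot body's entropy falls ...
dS_cold = Q / T_cold              # ... the cold body's rises by more
print(dS_hot + dS_cold)           # about +0.98 J/K: the total only increases

Reversing the flow would make the total negative, which is exactly what the second law forbids.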

Bertalanffy also further defines his proposed equifinality by saying:

A profound difference between most inanimate and living systems can be expressed by the concept of equifinality. In most physical systems the final state is determined by the initial conditions. …

Vital phenomena show a different behaviour. Here, to a wide extent, the final state may be reached from different initial conditions and in different ways. Such behaviour we call equifinal (Bertalanffy, 1950(b)).

Everything Bertalanffy has to say here is already covered by the distinction between exact and inexact differentials … although it is certainly true that ‘equifinal’ is easier to say. Entropy, however, is just as easy to say, and achieves the same purpose. Instead of titling their book Evolution As Entropy: Toward a Unified Theory of Biology Brooks and Wiley might as well have titled it: Evolution As Energy Conserved: Toward a Unified Theory of Biology for that is the sole import and purpose of entropy.

Thermodynamicists also discovered, through Clausius in particular, that if they included an integrating factor in their equations; set δqrev/T = dS; and called the factor entropy; they could solve all problems. The subject therefore requires entropy (a) to confirm both the existence and the behaviour of the conserved property energy; and (b) to make energy comprehensible and understandable…and also to ensure that it is always conserved.

Therefore, to seek—as Brooks and Wiley do—to explain objects and phenomena in terms of entropy rather than energy is immediately to explain them in terms of energy. That is how entropy is defined. Entropy arises as a result of a process to incorporate an integrating factor that guarantees that energy is conserved.

Bertalanffy also devised equations to justify his general system theory concepts. He certainly applied them very broadly, but their essentials did not change:

The application of a branch of applied mathematics to the study of marriage was presaged by von Bertalanffy, who wrote a classic and highly influential book called General System Theory. This book was an attempt to view biological and other complex organizational units across a wide variety of sciences in terms of the interactions of those units. The work was an attempt to provide a holistic approach to complex systems. … The work fit a general Zeitgeist. … The concepts of homeostasis (derived from the physiologist Cannon), feedback, and information provided the basis for a new approach to the study of complex interacting systems.

The mathematics of General System Theory was lost to most of the people in the social sciences who were inspired by von Bertalanffy’s work. Von Bertalanffy believed that the interaction of complex systems with many units could be characterized by a set of values that change over time, denoted Q1, Q2, Q3,…. The Q’s were variables each of which indexed a particular unit in the “system”, such as mother, father, and child. …

… Von Bertalanffy thought that these functions, the f’s, would generally be nonlinear. The equations he selected have a particular form, called “autonomous”, meaning that the f’s have no explicit function of time in them, except through the Q’s, which are functions of time. … However, von Bertalanffy presented a table in which these equations were classed as “Impossible”. He was referring to a very popular mathematical method of approximating nonlinear functions with a linear approximation which is rather limited. He also had no idea what the Q variables would be.

In the applied mathematics world, such nonlinear equations have long been studied, and several techniques are available to deal with them which give, at the very least, qualitative solutions (Gottman et al, 2003).

There is a further reason why the emphasis placed upon open and closed is not helpful with natural selection or in unmasking the forces behind evolution. Open and closed necessarily place the focus on individual biological entities. But this is not the aspect to which Darwin primarily directed his attention; and nor is it the prime location for natural selection:

Considering the importance of the influence which Malthus’s book was thought (even by Darwin himself in later life) to have exerted on Darwin’s work and ideas, it is significant that he devoted so little space to Malthus in the Notebook which he wrote immediately after reading his book. The reason, as explained in the Introduction to Darwin’s Third Notebook on Transmutation of Species, is that Darwin had already and independently thought out the principle of selection of favourable variations and seen the possibility that the transmutation of species might be explained by its means. What Malthus gave Darwin was evidence of the rigorousness of selection and of the inevitability of widespread mortality.

To this concept, Darwin introduced the notion of extinction as the extreme case of depopulation, and the notion of variation; and he showed that he was well aware that this would lead to results very different from those which Malthus thought that he had achieved (Beer, 1960, p. 153).

Natural selection shows itself, as Darwin correctly opined, in mortality. This of course permanently threatens individuals … but it also threatens populations. These latter cannot survive without individuals, but they can certainly survive in spite of individual mortality.

Natural selection operates at the population level. It operates more on reproduction, and rather less on mere existence:

If every lineage experiences the same tendency for rapid increase and the resources are limited, Darwin reasons, this will cause a tremendous pressure on all species against each other in competition for limited resources. The resultant effect is a ‘warring of the species’. The war is so intense because the crush of population is so great. As a result any slight change to ecological conditions may give one species an advantage over another and drive the other one out. Darwin writes: ‘One may say there is a force like a hundred thousand wedges trying force <into> every kind of adapted structure into the gaps <of> in the oeconomy of Nature, or rather forming gaps by thrusting out weaker ones’ (Ariew, 2007).

All terrestrial species, when considered together, are in the same situation as our putative exoplanet of Figure 40. Closed or open is an irrelevance in trying to determine the force and power of natural selection, which works upon populations at large, and also involves energy.

To return, for example, to Harte et al, they propose to measure the abundance of tree species, S0, within an area, A0, by making their S0 a ‘state variable’ that is intermediate between an intensive and an extensive variable: ‘it neither adds linearly nor is it averaged when systems are adjoined and thus has no analogy in thermodynamics’ (Harte et al, 2008).

We have already seen that their equating of an area with a volume has no real validity. But they also suggest that tree abundance, S0, is a state variable. They reference thermodynamics to justify it. But state variables in thermodynamics make the future of systems predictable in a very specific way. They are defined as integrating functions.

Although internal energy, U, for example, is a state variable, thermodynamic equations do not require that it have any given value. Only changes in internal energy, dU, are relevant to the discipline. Since a change of area, which would be dA0, has no real meaning or manifestation, it is hard to see how it can be a state variable (although it is still of course perfectly acceptable for it to be a state variable in information theory); and it is equally hard to see how it could be incorporated in any suitably formed equation so that it is the integrating factor. Thus the intent of state variables is very specific: it is to avoid the complexity of higher order differential equations, and to bring a uniform and readily approached structure to systems and their inputs and outputs. External inputs are joined to external outputs through whatever state variables are internal to the system. These are therefore designed to be predictable for they depend—as their name suggests—on the system’s state. The state X1 at time t1 includes the minimum information necessary to specify it completely, and to allow for a determination of all system inputs and outputs for X2 at t2. If state variables are without such properties then the inexact differentials they are intended to tame cannot be tamed:

Properties that are independent of how the sample was prepared are called state functions. The pressure, internal energy, and heat capacity are examples, for they depend on the current state of the system and not its previous history. Properties that relate to the preparation of the state are called path functions. Examples include the work that is done in preparing a state or the energy transferred as heat. We do not speak of a system in a particular state as possessing work or heat. In each case, the energy transferred as work or heat relates to the path being taken, not the current state itself (Atkins, 1990, p. 61).

These are extremely valuable properties which should surely not be compromised without good cause. Thermodynamics is for this reason clear what are and are not state variables or functions. The only other option is a path variable or function, and not some amorphous intermediary. And … path functions are inexact differentials. Many end states can result from a given initial state; and many initial states can arrive at a given end state. It is again difficult to see why a new word, such as equifinal, needs to be invented to discuss a pre-existing concept.
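
A minimal sketch of that state-variable discipline, in which every name is illustrative rather than drawn from any particular systems-theory text, makes the predictability concrete:

# The state at t1, together with the exchanges over the interval, fixes the
# state at t2 completely, with no dependence on the system's earlier history.
def next_state(state: float, inflow: float, outflow: float) -> float:
    """One time step for a single conserved stock."""
    return state + inflow - outflow

X1 = 100.0
X2 = next_state(X1, inflow=20.0, outflow=5.0)
print(X2)   # 115.0: predictable from X1 and the exchanges alone

However the inflow and outflow are scheduled within the interval, only their totals matter: the hallmark of a state, rather than a path, description.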

Thermodynamics is also very clear about the difference between intensive and extensive variables. There are again no intermediaries. Intensive and extensive variables are jointly used to define states, and if necessary to measure paths. As we have seen, both intensive and extensive variables are carefully defined, through the explicit mathematical functions F({ai}, α{Aj}) = F({ai}, {Aj}) and F({ai}, α{Aj}) = αF({ai}, {Aj}) respectively. Therefore, if an important variable of this kind indeed ‘has no analogy in thermodynamics’ then surely an entire new modelling theory should be developed that transcends biology and ecology, in the same way that state variable theory transcends thermodynamics, to explicate how that important intermediary variable behaves in a broad range of phenomena (Harte et al, 2008).

Even though Jørgensen and Fath’s paper is carefully titled Application of thermodynamic principles in ecology, things do not improve overmuch. They for example say:

All ecosystem processes are irreversible (this is probably the most useful way to express the Second Law of Thermodynamics in ecology) (Jørgensen and Fath, 2004).

It is again highly questionable that their expression of the second law of thermodynamics is any more useful than the one earlier proposed by Brooks and Wiley (Brooks and Wiley, 1986). None of them gives us any way to separate a population expression from an individual one. Individual ecosystem processes are certainly irreversible. Organisms certainly age and die. But population ecosystem processes simply cannot be irreversible in the same kind of way otherwise—assuming special creation is denied and that Darwin is correct—we as biologists and ecologists would not be here to debate these issues.

It is vital to keep these two ecosystem processes—the individual and the population—conceptually distinct. Ecology can surely only exist because population ecosystem processes must somehow be reversible generation after generation, even though individual ecosystem processes are in sum not. But even there, at least some individually relevant metabolic processes must somehow and in principle be reversible, and for at least some spans of time, otherwise there would be no such thing as respiration, for example. It is all very well to point, as Bertalanffy does, to ‘steady states’, but these by definition cannot be maintained by individual entities, and again have little to do with the issue of open or closed. Steady states in thermodynamics are not maintained by individual particles. They are maintained by populations of particles.

As for the meaning and intent of the famous second law of thermodynamics, Rudolf Clausius’ ‘Clausius statement’ is generally credited as the first rigorous version:

No process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature (Garg et al, 1993, p. 126).

Thus according to the Clausius statement—a general statement pertaining to all systems using and composed of energy—no real process can transfer heat from a less energetic body to a more energetic one, such as in either biological growth or development, without also having some other and deleterious effect. That body can be a population, and need not only be an individual entity.

What does this Clausius statement mean for biology? It means that it is impossible for the members of a given population of biological entities of given numbers, mass, and energy to each become larger and reproductively capable—which is to maintain the same number of entities but in a higher energy condition—and then to transfer all of that gathered mass and energy to some given and designated progeny, again of the same initial numbers, mass and energy … and to do so without any loss in numbers. Thus it is a statement not simply about the individual members losing their separate masses and/or energies whether exactly or inexactly, but also about their population at large—the system they together form—and so with respect to those overall numbers of entities, along with their constituent molecules. Every population will suffer losses and must counter them or it cannot survive.

This double import of the Clausius statement—the individual and the population—is further clarified through Lord Kelvin’s alternative formulation of the second law:

No process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work (Garg et al, 1993, p. 126).

Since no such process is possible, it is not possible for any given biological entity to have a cast-iron and one-to-one guarantee that it can exploit a given source of energy and chemical components for biological purposes with the only result being that it becomes a progenitor composed of a greater number of chemical bonds and components, and a greater quantity of energy, some of which it can then and without failure pass on to its progeny. Or alternatively: it is impossible for an entire collection of entities of given progeny and chemical bonds to move from a low initial average individual mass to a higher one, and so to become progenitors, without there being an adverse effect upon that collection in the way of numerical failure for that set of entities. These are both definite statements about energy, and it is therefore difficult to understand Brooks and Wiley’s assertion that ‘the flow of energy cannot explain the structure of living organisms’. Entropy is the integral of some function within systems to which an Euler multiplier or integrating factor is applied, and we must eventually find its analogue in biology … but it surely exists to make the accounting of energy possible.

Failures for any given group of potential progenitors are inevitable. Darwin expressed these inevitabilities in his own somewhat different terms as follows:

More individuals are born than can possibly survive. A grain in the balance may determine which individuals shall live and which shall die–which variety or species shall increase in number, and which shall decrease, or finally become extinct (Darwin, 1869, p. 467).

This important distinction between the individual and the population differences accepted, then Darwin’s proposal for natural selection and evolution becomes a second law discussion of the difficulties that populations—and not just individuals—face in expressing their population ecosystem reversibility in the face of the overwhelming irreversibility imposed by the second law of thermodynamics. Darwinian variability and its proposal for heritability across the generations is then possible evidence for the prosecution regarding the struggle for survival Darwin describes.

Our given statement about the second law is, we hope, a little clearer in exposing that struggle … Jørgensen and Fath’s surely considerably less so. Ours certainly covers Darwin’s intent as expressed in:

I should premise that I use the term Struggle for Existence in a large and metaphorical sense, including dependence of one being on another, and including (which is more important) not only the life of the individual, but success in leaving progeny (Darwin, 1869, p. 62).

Jørgensen and Fath also say:

In thermodynamic terms, ecosystem growth and development means moving away from thermodynamic equilibrium. At thermodynamic equilibrium, the system cannot do any work. … All its components are inorganic, have zero free energy (exergy), and all gradients are eliminated (Jørgensen and Fath, 2004).

Once again … this is not necessarily so. ‘System’ is imprecisely identified.

Ecosystem growth and development for the individual can easily mean a movement away from thermodynamic equilibrium for those specific individuals … but the same does not necessarily and simultaneously hold for their population. By Proviso 2, Maxim 1 of ecology, the maxim of dissipation, M → 0, if an adult reproduces then it moves itself closer to thermodynamic equilibrium or to heat dissipation, and eventually leaves the population. But … that same act has surely taken the population further away from thermodynamic equilibrium, for progeny results with a potential that can later be expressed. And, by the same token, it is eminently possible, as we have already seen and will see again, for a population at large to move closer to thermodynamic equilibrium, which is when members die without reproducing, while those members were—and the survivors are—in fact moving further away from thermodynamic equilibrium because they are individually growing and so expending one potential in readying themselves for development … while of course possibly creating another potential. Which effect outweighs which depends entirely on the rates and numbers involved.

Thermodynamic theory very carefully distinguishes between the behaviour of the individual components of a system, and the behaviour of the system at large. Without specifying which of these two systems is being referred to, and at what point in time, analysis is surely rife with ambiguity and confusion.

Jørgensen and Fath also say that “Growth is defined as an increase in a measurable quantity” (Jørgensen and Fath, 2004). This is again not necessarily so. Many a growth requires that a potential be exploited, which by its very definition means a decrease in some measurable property or gradient for some individual … though again not necessarily either for that individual or for the population at large, and/or vice versa. What is confusing is not the growth in itself nor the act of measuring. What is confusing is what biologists and ecologists think should be measured, and how, and why, and the overall relations to other measures, along with the meaning.

And finally, Jørgensen and Fath begin their paper with the oft-stated remark that “Biology in general, and ecology specifically, are still struggling to fully understand and apply the ramifications of living systems as complex, open, hierarchical, granular systems” (Jørgensen and Fath, 2004). We have already noted that this is a common view in the literature. It is well expressed in the title of R. B. O’Hara’s paper The anarchist’s guide to ecological theory. Or, we don’t need no stinkin’ laws (O’Hara, 2005). A more comprehensive review can also be found in Lev Ginzburg and Mark Colyvan’s Ecological orbits: how planets move and populations grow (Ginzburg and Colyvan, 2004). But it is actually very much more likely that biologists and ecologists are in fact struggling to do some rather basic science, and to understand the interplay, within their discipline, of some really very basic scientific concepts … including temperature and heat.

Joseph Black took the first tentative steps in clarifying terms by carefully noting the difference between heat and temperature:

Heat may be considered, either in respect of its quantity, or of its intensity. Thus two lbs. of water, equally heated, must contain double the quantity that one of them does, though the thermometer applied to them separately, or together, stands at precisely the same point, because it requires double the time to heat two lbs. as it does to heat one (Law, 1775).

Figure 41: Energy and Temperature

Figure 41 is a return to the Joule experiment. Flask A is held at 100 °C and contains 2M kilogrammes of a given gas with a given specific heat, such that the amount of substance gives it 1,000 joules of heat energy. Since we want Flask B also to contain 1,000 joules, and since it is held at twice the temperature, 200 °C, where its molecules are the more active, we must ensure that it contains only half the mass and amount of substance, M. We in other words exercise great care over (a) our terms; (b) how our intensive and extensive variables interact; and (c) the molecular behaviour of these substances.

If we now combine Flask A end-to-end with another exactly like itself, we produce a Flask C that is twice the size, and that contains twice the mass and amount of substance. It also has twice the number of joules … but remains at the original temperature of 100 °C. We have again taken care over terms, over the interaction of intensive and extensive variables, and over the molecular construction of the substances.

If we now want to combine Flasks A and B so the final result is Flask D at the median temperature, but also so that it contains only 1,000 joules, then we have to take a little more care. If we simply combine them, then although the temperature will be correct at 150 °C, the total heat energy will be 2,000 joules. We must therefore remove some amount of substance, and therefore heat content, from the proposed mixture, as depicted in Flask D at 1.5M. And, by the same token, if we want to additively combine the resulting Flask D back with B but so that we end up with the same 2,000 joules of heat content as Flask C, then we must make sure that, after mixing, we again have the correct mass or amount of substance, which is now 2.5M, as compared with the 4M of Flask C. Such is the power of state functions and intensive and extensive variables that we could just as well have got here by first heating Flask C by 75 °C, which would increase the heat content to 3,500 joules, and then jettisoning 1.5M kilogrammes to leave the same 2.5M and the requisite number of joules and amount of substance.

Although some of the thermodynamic variables involved here are intensive while others are extensive, they are well understood. This is because thermodynamicists have gone to considerable pains to clarify their terms so that the substances can be combined simply enough to produce predictable measures and results. Furthermore: the measures provided for use in thermodynamics—such as volume and temperature—automatically incorporate the behaviours and tendencies of the molecules comprising that system.

Figure 42: The Engeny of Species

Unfortunately, and as is made clear in Figure 42, biology and ecology lack this kind of clarity and simplicity in their terms. In particular, those terms do not reference molecular behaviour. Even Gurney et al, for example, who produced many valuable equations describing equilibrium age distribution populations, failed to distinguish between growth and development, which are very different kinds of molecular behaviour (Gurney et al, 1996). We were, moreover, only able to link their equations to stoichiometry, and thus to molecular DNA, through the metabolic, and therefore molecular, work of Brown et al (Brown et al, 2004).

Since biology and ecology have no clear machinery for cogently discussing either the growth or the development of any single species in terms of its molecular behaviour, which is its DNA, nor how these molecules might intersect with Darwinian competition, it is hardly surprising that the subjects lack any clear machinery for handling genera or phyla and the changes within them.

Our first law of biology, the law of existence, along with our three constraints, tells us that the biological entities depicted in Figure 42 must constantly do work. The fourth maxim of ecology, which is the maxim of apportionment, tells us that there are only three things a population can do with any biological energy at its disposal: expend it on mass, expend it on changing its chemical configurations, or expend it on changing its population numbers, as sketched below. It would surely be prudent to clarify which of these is extensive, which intensive, and how they interact.
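
As a minimal sketch of that maxim, a population’s power budget can be pictured as a strict three-way split. The function, the channel names, and the fractions below are all hypothetical, introduced purely for illustration:

    # Illustrative only: by the maxim of apportionment, a population's
    # power budget (in watts) can go to only three things. The fractions
    # here are hypothetical.

    def apportion(total_watts, f_mass, f_chemical, f_numbers):
        """Split a power budget across the three channels of Maxim 4."""
        # The three fractions must exhaust the budget: there is nowhere
        # else for the energy to go.
        assert abs(f_mass + f_chemical + f_numbers - 1.0) < 1e-9
        return {
            "mass": total_watts * f_mass,
            "chemical configuration": total_watts * f_chemical,
            "numbers": total_watts * f_numbers,
        }

    print(apportion(100.0, 0.5, 0.3, 0.2))
    # {'mass': 50.0, 'chemical configuration': 30.0, 'numbers': 20.0}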

We see in Figure 42 a given cell exercising pure mechanical chemical energy at the molecular level to move from A to B, which is simply to put on mass. Since the mass and the number of components increase, the Gibbs energy also increases. However, the visible presence, V—which is the Gibbs energy per unit mass, and an intensive variable—remains the same. Following Mayer, this is then entirely a constant pressure, Cp, situation. There are no constant volume, or Cv, changes … and therefore no nonmechanical transformations. Since pure mechanical chemical energy has been exercised, the chemical bond energy, H, needed to bind all that additional mass has increased along with the absolute increase in the potential and Gibbs energies. This expenditure of mechanical chemical energy of course requires a given amount of power in watts, exerted over a given time period.

If we now observe the transition from A to C in Figure 42, we see the cell remaining at the same mass. No additional mechanical chemical energy is therefore being expended. The cell instead only exercises its ability to change its visible presence, V, which is again the Gibbs energy per unit mass. As in the previous case, the absolute quantity of Gibbs energy changes, but (a) in the opposite direction, and (b) for a very different reason. Since the Gibbs energy now declines, instead of increasing, the total chemical bond energy, H, over the population must again increase, because the Gibbs energy is a thermodynamic potential which is being realized. Thus on the previous occasion the net stock of chemical bond energy increased because further components had been added, whereas on this latest occasion it increases because a potential is being exploited. Even though the change in visible presence, and so in the Gibbs energy, is this time intensive rather than extensive, the change in the chemical bond energy, H, is extensive and increases regardless. This is now a constant volume, Cv, situation with no mechanical molecular transformations. There are only nonmechanical ones. And although this entire transition is funded only by nonmechanical chemical energy, and so is a qualitative change in state, it again requires a given power in watts, which must still be exercised over a given time period.

Finally, the transition A to D incorporates both of these transitions. So although, for example, Gurney et al gave us some very useful equations describing the behaviour of age distribution populations, they failed to recognize those distinctions (Gurney et al, 1996). By analogy with thermodynamics: raising a given system’s temperature requires a given quantity of heat energy, which is extensive, yet produces a result—i.e. a temperature—that is intensive; whereas simply augmenting the system’s mass is extensive and increases the heat energy, which is also extensive, so that both the input and the result are extensive, while the intensive temperature can easily be left the same. Whichever of mechanical or nonmechanical energy may take precedence at any instant, and for any given population, a power in joules per second is still required. These different ways of achieving these different effects certainly need to be recognized, as the sketch below illustrates.
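
The three transitions can also be sketched in code. The sketch assumes only the relation stated above, namely that the visible presence V is the Gibbs energy per unit mass, so that G = V × m; every name and number in it is hypothetical:

    # Illustrative sketch of Figure 42's transitions, assuming only that
    # visible presence V = Gibbs energy per unit mass, hence G = V * m.
    # All values are hypothetical.

    def gibbs(mass, visible_presence):
        return mass * visible_presence

    m_a, v_a = 1.0, 10.0             # state A: a hypothetical starting point
    g_a = gibbs(m_a, v_a)            # 10.0

    # A -> B: mechanical, constant pressure (Cp); mass rises, V constant.
    g_b = gibbs(2 * m_a, v_a)        # 20.0: G rises with the extensive change

    # A -> C: nonmechanical, constant volume (Cv); mass constant, V falls.
    g_c = gibbs(m_a, 0.5 * v_a)      # 5.0: G falls with the intensive change

    # A -> D: both transitions at once; each still costs watts over time.
    g_d = gibbs(2 * m_a, 0.5 * v_a)  # 10.0: the two effects can even cancel in G

    print(g_a, g_b, g_c, g_d)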

The only other variable relevant to a biological population—even an equilibrium age distribution one—is the change in numbers. There will be various transitions from any n₁ to any n₂, over any time period t₁ to t₂. But however varied these number changes may be, and however they are associated with any ongoing changes in mechanical and/or nonmechanical energy, the population can only fund its number density changes through given power usages in watts. Such changes are therefore subsumed under the same parameter. It is impossible to isolate and predict any changes in biology or ecology when these various variables are not clearly distinguished.
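
As a rough illustration only, let ē stand for the average energy cost, in joules, of each entity added or removed; ē is a symbol we introduce purely for this sketch. The power a population must find for a number change over such a period is then approximately:

    P ≈ ē (n₂ − n₁) / (t₂ − t₁) watts.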

By Maxim 4 of ecology, which is the maxim of apportionment, only increases and decreases in mass, in numbers, and in energy density are possible. This independently confirms our three constraints of constant propagation, constant size, and constant equivalence, which can now be seen to be expressions of it. It also confirms the gradient function of our line integral, ∇f = (∂f/∂n, ∂f/∂q, ∂f/∂w). This last states that all the population’s changes—which are changes in its biological inertia—depend upon the population numbers, n; the moles of chemical components per entity, q; and the rate at which those components are processed or configured, w. Two of these are extensive, one intensive. Therefore, we must distribute the watts or power measured for natural selection across these three variables in due proportions so that Darwin’s ‘power incessantly ready for action’ stands revealed. For that, we need an equation that shows how natural selection, evolution, and competition all receive their due measures as they operate across both individual entities and entire populations.
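
One natural way to read that distribution, as a sketch only and assuming that f is measured in joules so that its time rate is a power, is as the chain-rule expansion of df/dt along the population’s path, with one term per variable:

    df/dt = (∂f/∂n)(dn/dt) + (∂f/∂q)(dq/dt) + (∂f/∂w)(dw/dt).

Each term on the right is then the share, in watts, that natural selection commits to numbers, to chemical components, and to their rate of processing respectively.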