50. The demonstration
Our ambition now is very simple. It is to show:
• that the following Euler equation gives a comprehensive and accurate description of all biological phenomena:
µ = (∂S/∂U)V,{Ni} dU + (∂S/∂V)U,{Ni} dV + Σi (∂S/∂ui)U,V,{Nj≠i} dui + Σi (∂S/∂vi)U,V,{Nj≠i} dvi;
• that Brassica rapa’s measured value for the left-hand summation term in the above Euler equation, Σi (∂S/∂ui)U,V,{Nj≠i} dui, is 0.19 grams;
• that B. rapa’s measured value for the right-hand summation term in the same equation, Σi (∂S/∂vi)U,V,{Nj≠i} dvi, is 1.285 joules;
• that the above two summation terms are insufficient to account for all variations in the population;
• that in reference to the Gibbs-Duhem equation of:
m̅µ = m̅ dS = dU + dH − Σi µi(dvi − dmi)
the measured value for the term dU as exhibited by B. rapa totals 3.08 grams for the population over the generation;
• that in reference to the same Gibbs-Duhem equation, the term dH totals 52.787 joules for the population over the generation;
• that the terms dU and dH refer to entire populations, and are therefore present in all populations;
• that variations in dU and dH, considered alone, are also wholly insufficient to account for the totality of variations in any population, be it evolving or non-evolving;
• but that when dU and dH are combined either with the summation term in the Gibbs-Duhem equation or with the two summation terms in the Euler equation, that combination is entirely sufficient to account for all possible behaviours and variations across all possible populations;
• that the variations in the above combined terms are sufficient to lead to the creation of new species in the manner stated by Darwin;
• that new species can have no cause other than the above combined set of variations;
• that since the variable µ can be apportioned across all possible and pertinent biological variables, so giving a comprehensive overview of all biological potentialities and capabilities, a population free from the influence of heritable variations as caused by variations in numbers is impossible;
• that the summation term Σi (∂S/∂ui)U,V,{Nj≠i} dui cannot be zero for any population, thereby again demonstrating that a population free from Darwinian competition and evolution is impossible;
• that the summation term Σi (∂S/∂vi)U,V,{Nj≠i} dvi likewise cannot be zero for any population, thereby demonstrating once more that a population free from Darwinian competition and evolution is impossible.
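Before turning to the measurements themselves, the arithmetic of such a total differential can be sketched in a few lines. The function and every number below are placeholders of our own devising, not the measured Brassica rapa values; the sketch only shows how the partial derivatives and their increments combine into a single total.

```python
# A minimal sketch of how an Euler-form total differential is
# assembled: the change is the dot product of the partial derivatives
# (each taken with the other variables held fixed) and the
# corresponding increments.  All numbers are illustrative placeholders.

def total_differential(partials, increments):
    """Sum of (dS/dX_i) * dX_i over every contributing variable."""
    assert len(partials) == len(increments)
    return sum(p * dx for p, dx in zip(partials, increments))

# Two partials for U and V, plus two summation terms over the entities:
partials   = [0.5, 0.2, 0.1, 0.3]   # placeholder dS/dU, dS/dV, dS/du_i, dS/dv_i
increments = [1.0, 2.0, 0.5, 0.5]   # placeholder dU, dV, du_i, dv_i

dS = total_differential(partials, increments)
print(round(dS, 6))   # 1.1
```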
A request for a practical biological demonstration to validate Darwin’s natural selection, and using Brassica rapa plus the vector calculus we have examined, is all very well … but it is rather hard to provide when the terms used in biology allow the requirements to remain opaque. Our first job is therefore to make the requirements for an experiment, such as we have already conducted with B. rapa, completely clear so its validity cannot be questioned.
We have used the abstract mathematical relationships of the vector calculus that Maxwell introduced—along with its fluxes, curls, divergences and Gaussian surfaces—as our guide. With the support of the clarity it introduces we have determined that every biological species and/or population can be uniquely specified by stating:
(a) its two fluxes, M and P;
(b) its two divergences, m̅ and p̅; and
(c) a time period for the above fluxes and divergences over which any entities, n, or chemical components, q, that are lost are replaced, and which can be expressed as Z seconds per biomole (which we have already defined) over those entities.
Sadly, and in exchange for this, the only substantive information biology and ecology have so far provided us, in trying to frame an experiment, is encapsulated in (c). This is that the cycle of the generations somehow involves changes in the entities, and possibly also in their numbers.
Numbers are, in fact, quite clearly relevant. Biology obviously depends upon the relationships between those two totals and their two averages: M and m̅, and P and p̅. They are each linked by n, the numbers in the population. But “the scientific method” requires something a little more than that before we can run experiments and draw conclusions.
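The linkage through n can be made concrete with a minimal sketch. The figures below are hypothetical, chosen only to display the arithmetic M = n·m̅ and P = n·p̅ that ties the two totals to their two averages.

```python
# The totals and the averages are linked through n, the number of
# entities in the population: M = n * m_bar and P = n * p_bar.
# All values are hypothetical, purely to show the linkage.

n     = 100          # number of entities in the population
m_bar = 0.25         # average mass per entity (grams, hypothetical)
p_bar = 1.5          # average energy flux per entity (watts, hypothetical)

M = n * m_bar        # total population mass flux
P = n * p_bar        # total population energy flux

print(M, P)          # 25.0 150.0
```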
The electromagnetic theory that James Clerk Maxwell derived to explain Faraday’s discovery of induction is one of science’s greatest achievements. Maxwell’s discovery of his four field equations is an exemplar of the scientific method … which we have carefully followed throughout. That method is a product of “the scientific revolution” which is:
… the name given by historians of science to the period in European history when, arguably, the conceptual, methodological and institutional foundations of modern science were established. The precise period in question varies from historian to historian, but the main focus is usually held to be the seventeenth century, with varying periods of scene-setting in the sixteenth and consolidation in the eighteenth. Similarly, the precise nature of the Revolution, its origins, causes, battlegrounds and results vary markedly from author to author. Such flexibility of interpretation clearly indicates that the Scientific Revolution is primarily a historian’s conceptual category. But the fact that the notion of the Scientific Revolution is a term of convenience for historians does not mean that it is merely a figment of their imagination with no basis in historical reality (page 1).
…
… The raison d’être of history of science is, essentially, to try to understand why and how science became such a dominant presence in our culture (page 3) (Henry, 2002).
Biology and ecology may be sciences, but “proving” things in either one of them is extraordinarily difficult because as a general rule, and as Quenette and Gerard point out:
… biologists do not follow the four rules of reasoning formulated by Newton which characterize a positivist conception of science which founds any knowledge on the experience and the repeated observation of phenomena (Quenette and Gerard, 1993).
In other words, and as we have constantly seen with the etymological fallacy, biologists simply misunderstand and misuse the most basic of scientific terms, and consistently fail to apply them rigorously and scientifically. We have seen this several times already with ‘volume’ (Harte et al., 2008). It is also present in ‘fitness’, ‘adaptation’, ‘competition’, ‘reproduction’, ‘natural selection’ and other such terms vital to the discipline. It is even present with numbers, and is the sole source of our present difficulties in refuting what we have called the proposal of the Aristotelian template.
As far as the proposal of the Aristotelian template goes, there is a very specific consequence to the scientific revolution that greatly—and adversely—affects biology:
There can be no doubt that the late sixteenth and early seventeenth centuries saw not only the origins of modern science, but also the origins of modern atheism (Henry, 2002. p. 95).
It is modern biology’s lack of rigour in thought and presentation that leaves its central principle—Darwin’s theory of evolution—starkly exposed to that particular consequence. This can only cease when all terms in biology have been suitably clarified … something we must obviously do or we cannot draw valid conclusions.
Another problem biology and ecology face is that Darwin’s theory of natural selection lacks a proper mechanism. This is largely because the effect it is supposed to have on ‘number’ and on ‘variations’ is insufficiently rigorous. We can best make progress on this front by noting a very similar circumstance—i.e. a lack of a mechanism—in the first great modern use of both calculus and the scientific method. These of course came via Newton. Since we face the same difficulty, we will probably find his methods useful.
One reason for the success of Newton’s first great modern usage was his clear statement of the four rules of reasoning he used to establish his theory of gravitation. These helped him succeed, in the face of all opposition, even though he had left its central principle—a mechanism for gravitation—unspecified. He also succeeded because his Principia was a masterpiece of clarity in defining and using some very basic terms. It promoted the search for clarity in others … and left his refusal to provide a ‘proper mechanism’ for gravitation (his hypotheses non fingo or “I frame no hypotheses”) inviolate.
Gravitation and evolution, as initially presented by both their proponents, both therefore lack a proper mechanism. As Turchin might express it … “the similarity … is striking” (Turchin, 2001). There is, however, a caveat. The two proposals might be similar in each failing to provide a mechanism, but Newton’s theory has suffered a very different fate from Darwin’s. This is for two important reasons. The first is related to Newton’s mode of presentation as encapsulated in his rules of reasoning. The second is related to that presentation’s form.
Quenette and Gerard state Newton’s first rule of reasoning as “the principle of parsimony for the determination of the causes of natural phenomena” (Quenette and Gerard, 1993). Newton himself states it, in his Principia, as:
rules of reasoning in philosophy
rule i
We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
To this purpose the philosophers say that Nature does nothing in vain, and more is in vain when less will serve; for Nature is pleased with simplicity, and affects not the pomp of superfluous causes (Newton, 1689).
Darwin was also in many ways parsimonious but—and here comes the caveat—he was not parsimonious in a very critical way and over a most crucial issue. He was not parsimonious in quite the way Newton was for he failed to provide a supporting mathematical model. Biology and ecology still sorely feel the lack.
It does not yet help us directly, either with providing such a model or with clarifying the relationship between numbers within biology and ecology, but it sets a direction. We can again see a supreme example of Newton’s principle of parsimony in Figure 61, where we see his third law of motion at work. When combined with his principle of parsimony, it indicates that if we can come up with a similarly simple explanation for biology, then we will have a good head start in evading all more complex—not to say erroneous—“explanations” for biological phenomena that refuse to acknowledge the inevitability of numbers (Hagen, 1998).
Newton’s third law of motion, which is a central part of his model, explains the most important aspects of the Joule experiment. Although he has a concise mathematical model for his third law, he nevertheless states it in words in his Principia as:
law iii
To every action there is always opposed an equal reaction; or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.
Whatever draws or presses another is as much drawn or pressed by that other. If you press a stone with your finger, the finger is also pressed by the stone. If a horse draws a stone tied to a rope, the horse (if I may so say) will be equally drawn back towards the stone: for the distended rope, by the same endeavour to relax or unbend itself, will draw the horse as much towards the stone as it does the stone towards the horse, and will obstruct the progress of the one as much as it advances that of the other.
If a body impinges upon another, and by its force change the motion of the other, that body also (because of the equality of the mutual pressure) will undergo an equal change, in its own motion, towards the contrary part. The changes made by these actions are equal, not in the velocities but in the motions of bodies; that is to say, if the bodies are not hindered by any other impediments. For, because the motions are equally changed, the changes of the velocities made towards contrary parts are reciprocally proportional to the bodies. This law takes place also in attractions … (Newton, 1689).
Whatever may be the implications for biology, we once again note that the gas at the top of Figure 61 is at a given temperature, T, and is in its initially compressed and low entropy state of PinitialVinitial (Hagen, 1998). The valve is closed. There is a specified number of molecules contained in that gas. They bounce around conjointly as they fill the available space. Their actions and reactions balance, meaning that, according to Newton’s third law of motion in conjunction with d’Alembert’s principle of Σi (Fi − miai) · δri = 0, the sum of all the momenta is zero. This is the mathematics underlying that model.
When the valve is now opened the volume, V, immediately increases. The important biological aspect is that we have an expansion. It can again be represented in the mathematics underlying that model. The continuing molecular actions and reactions allow the system to enter the second container. Since the temperature remains constant the entropy—which can be measured—increases along with the volume. Newton’s third law, along with d’ Alembert’s principle, continues to hold. The momenta must still balance, so we have some mathematical teeth to go along with these verbal descriptions.
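That balance of momenta can be checked numerically. The sketch below is our own, not part of any source model: it draws random one-dimensional molecular velocities, removes the centre-of-mass drift (which is what the balanced actions and reactions enforce physically), and confirms that the total momentum then sums to zero.

```python
import random

# Momentum bookkeeping for an isolated gas: once the centre-of-mass
# drift is removed, every molecular action has its reaction and the
# vector sum of all the momenta is zero.  Velocities are random and
# one-dimensional purely for illustration.

random.seed(1)
m = 1.0                                              # identical molecular masses
v = [random.uniform(-1, 1) for _ in range(1000)]     # random 1-D velocities

drift = sum(v) / len(v)                              # centre-of-mass velocity
v = [vi - drift for vi in v]                         # actions and reactions cancel it

total_momentum = sum(m * vi for vi in v)
print(abs(total_momentum) < 1e-9)                    # True: momenta sum to zero
```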
We of course consider the ideal case in which there is no change in heat, and no exchange with the surroundings. There is a freedom from the environment whose circumstances must at some point be more clearly defined so they are relevant to biology. Darwin, by contrast, insists that biological populations are greatly affected by the environment.
Since this is the ideal case, our various mathematical models are still operative. The temperature remains constant—dT = δQ = 0—and we obtain PfinalVfinal = PinitialVinitial. Since the environment has no effect—which must again be given a biological significance—there is no piston to be moved; no weight is attached; and no external pressure is being applied. There is therefore no mechanical effect upon those surroundings, and this is a completely free, adiabatic expansion: Pexternal = 0. That latest verbal statement can again be given a concise mathematical representation. And since we for the present accept this assumption that no external mechanical work is being done, we must also at some time seek a corresponding biological situation in which a population of entities can show no net change of any kind in its overall mechanical chemical work, meaning that its entities only partake of the nonmechanical variety … and under broadly similar conditions. We must give all that some kind of biological mathematical representation in due course.
Since Pexternal is zero and there is no piston and no external work done, then there is no external volume increase. The system does not expand, even infinitesimally, into the surroundings: dV = 0. Since any external work done would be due to a pressure acting through a given volume and as a piston expanded into the surroundings, then since there is none we have Pexternal dV = 0. But also since no such external or mechanical work is done then δW = 0. And finally since both the work done and the heat either gained or lost are zero, then by the first law of thermodynamics we have dU = δQ − δW = 0 and there is no net change in state. Thus, and as Joule discovered, the gas’s internal energy, U, remains unchanged. We must eventually understand the biological implications, but we have (∂T/∂V)U = (∂U/∂V)T = 0 from this Joule experiment. Those partial differentials mean that both the rate of change of temperature and the rate of change of internal energy with respect to volume are zero when each of them is in turn held constant while the other one is varied. The amount of space the system occupies—i.e. its size or volume—is free to change internally, as it were, but no effect is induced on given core variables as others nevertheless change. Volume changes can occur independently of effects in the environment. We must find similar biological expressions.
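The bookkeeping just described can be replayed as a short calculation. This is a hedged sketch of the textbook Joule free expansion for one mole of ideal gas doubling its volume; the volume figures are arbitrary, since only their ratio matters.

```python
import math

# Joule free expansion of one mole of ideal gas doubling its volume.
# No heat flows (dQ = 0) and no external work is done (P_external = 0,
# so dW = 0), hence dU = dQ - dW = 0 and T is unchanged; the entropy
# nevertheless rises by n R ln(V_final / V_initial).

R = 8.314                        # gas constant, J/(mol K)
n = 1.0                          # moles
V_initial, V_final = 1.0, 2.0    # arbitrary units; only the ratio matters

dQ = 0.0                         # adiabatic: no heat exchanged
dW = 0.0                         # free expansion: no external work
dU = dQ - dW                     # first law: internal energy change
dS = n * R * math.log(V_final / V_initial)   # entropy of the doubling

print(dU)             # 0.0
print(round(dS, 3))   # 5.763  (J/K)
```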
Quenette and Gerard state Newton’s second rule of logical reasoning as his principle that “the same effects involve the same causes” (Quenette and Gerard, 1993). Newton more fully states it as:
rule ii
Therefore to the same natural effects we must, as far as possible, assign the same causes.
As to respiration in a man and in a beast; the descent of stones in Europe and in America; the light of our culinary fire and of the sun; the reflection of light in the earth, and in the planets (Newton, 1689).
We can now add Newton’s new principle of similarities in effects and their causes to his above principle of parsimony in the third law in assessing the Joule experiment. These together tell us that we can exploit the gas’ ability to expand into any arbitrary volume—i.e. the environment—while always maintaining its internal energy—i.e. while undertaking no other changes in state. We must just be careful how we represent it mathematically.
We can exploit a gas’ ability to expand into an indefinite volume by making a rocket that will move freely into outer space, and as depicted at the bottom of Figure 61 (Brain, 2000; Hagen, 1998). We can look on this, biologically, as a more rigorous formulation of the proposal that a given biological population could expand indefinitely, and without any change of effect, in the manner indicated by Darwin when he considered the reproductive capabilities of elephants:
There is no exception to the rule that every organic being naturally increases at so high a rate, that, if not destroyed, the earth would soon be covered by the progeny of a single pair. Even slow-breeding man has doubled in twenty-five years, and at this rate, in less than a thousand years, there would literally not be standing-room for his progeny. Linnæus has calculated that if an annual plant produced only two seeds—and there is no plant so unproductive as this—and their seedlings next year produced two, and so on, then in twenty years there would be a million plants. The elephant is reckoned the slowest breeder of all known animals, and I have taken some pains to estimate its probable minimum rate of natural increase; it will be safest to assume that it begins breeding when thirty years old, and goes on breeding till ninety years old, bringing forth six young in the interval, and surviving till one hundred years old; if this be so, after a period of from 740 to 750 years there would be nearly nineteen million elephants alive, descended from the first pair (Darwin, 1872, pp 50–51).
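Darwin’s elephant arithmetic can be re-run as a rough simulation. The decade-stepped model below is our own coarse reading of his parameters (breeding from age 30 to 90, six young per pair, death at 100), and it is meant only to display the relentless geometric growth he describes, not to reproduce his exact figure of nearly nineteen million.

```python
# Coarse age-structured model of Darwin's elephants, stepped in
# decades.  Six young per pair over the ages 30-90 is read as 0.5
# offspring per breeding individual per decade; death comes at 100.
# Fractional counts are fine: this is a rate model, not a census.

DECADE_STEPS = 75                 # 750 years
ages = {0: 2}                     # age class (in decades) -> count; one pair

for _ in range(DECADE_STEPS):
    births = sum(c for a, c in ages.items() if 3 <= a < 9) * 0.5
    ages = {a + 1: c for a, c in ages.items() if a + 1 < 10}  # die at 100
    if births:
        ages[0] = births

population = sum(ages.values())
print(population > 10_000)        # True: the growth is geometric
```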
As for our rocket, which actualizes this indefinite expansion potential, Newton’s third law of motion now combines—molecularly—with the ideal gas law of PV = nRT to state that the molecules will move to the rear and propel themselves out as an exhaust into the indefinite volume of outer space. But by the mathematics of Newton’s law of motion, which is that momentum must be conserved, the rocket will react by moving forwards. It will do so no matter what may be the difference in mass between any one given molecule and the rocket. Momentum is a vector, and pmolecule = mmoleculevmolecule = procket = mrocketvrocket in magnitude along the axis of the rocket’s length and direction of motion. The rocket will experience a change in velocity for every molecule now escaping from the rear, and in direct opposition to each and every one.
Since we want to build many rockets in potentially many Milky Ways and galaxies, we turn to Quenette and Gerard’s statement of Newton’s third rule of reasoning, which they give as “generalization by induction which should allow the extension of the range of applicability to all natural things” (Quenette and Gerard, 1993). Newton himself states it as:
rule iii
The qualities of bodies, which admit neither intension nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.
For since the qualities of bodies are only known to us by experiments, we are to hold for universal all such as universally agree with experiments; and such as are not liable to diminution can never be quite taken away. We are certainly not to relinquish the evidence of experiments for the sake of dreams and vain fictions of our own devising; nor are we to recede from the analogy of Nature, which uses to be simple, and always consonant to itself. We no other way know the extension of bodies than by our senses, nor do these reach it in all bodies; but because we perceive extension in all that are sensible, therefore we ascribe it universally to all others also. That abundance of bodies are hard, we learn by experience; and because the hardness of the whole arises from the hardness of the parts, we therefore justly infer the hardness of the undivided particles not only of the bodies we feel but of all others. That all bodies are impenetrable, we gather not from reason, but from sensation. The bodies which we handle we find impenetrable, and thence conclude impenetrability to be an universal property of all bodies whatsoever. That all bodies are moveable, and endowed with certain powers (which we call the vires inertiæ) of persevering in their motion, or in their rest, we only infer from the like properties observed in the bodies which we have seen. The extension, hardness, impenetrability, mobility, and vis inertiæ of the whole, result from the extension, hardness, impenetrability, mobility, and vires inertiæ of the parts; and thence we conclude the least particles of all bodies to be also all extended, and hard and impenetrable, and moveable, and endowed with their proper vires inertiæ. And this is the foundation of all philosophy.
Moreover, that the divided but contiguous particles of bodies may be separated from one another, is matter of observation; and, in the particles that remain undivided, our minds are able to distinguish yet lesser parts, as is mathematically demonstrated. But whether the parts so distinguished, and not yet divided, may, by the powers of Nature, be actually divided and separated from one another, we cannot certainly determine. Yet, had we the proof of but one experiment that any undivided particle, in breaking a hard and solid body, offered a division, we might by virtue of this rule conclude that the undivided as well as the divided particles may be divided and actually separated to infinity.
Lastly, if it universally appears, by experiments and astronomical observations, that all bodies about the earth gravitate towards the earth, and that in proportion to the quantity of matter which they severally contain, that the moon likewise, according to the quantity of its matter, gravitates towards the earth; that, on the other hand, our sea gravitates towards the moon; and all the planets mutually one towards another; and the comets in like manner towards the sun; we must, in consequence of this rule, universally allow that all bodies whatsoever are endowed with a principle of mutual gravitation. For the argument from the appearances concludes with more force for the universal gravitation of all bodies than for their impenetrability; of which, among those in the celestial regions, we have no experiments, nor any manner of observation. Not that I affirm gravity to be essential to bodies: by their vis insita I mean nothing but their vis inertiæ. This is immutable. Their gravity is diminished as they recede from the earth (Newton, 1689).
There is, however, one very important further rule or premise for reasoning. It was hinted at by Morgenstern in his speech on the Limits of the Uses of Mathematics in Economics to the American Academy of Political and Social Sciences; and Murray referred to it in his discussion of Einstein and Infeld (Morgenstern, 1963; Murray, 2001, p. 265). Galileo and Newton built an entire dynamical-mechanical system on an exercise in imagination, and by referencing something that could not exist: a perfectly moving ball that never changed its speed, and so moved into indefinite space. In the same way, Sadi Carnot created thermodynamics by imagining a perfectly reversing steam engine that could not exist; and Huygens and Bernoulli created wave theory by imagining a perfect wave spreading freely into open space through a perfect material medium, which also could not exist. Newton’s equally parsimonious first law of motion is something that cannot exist, but that nevertheless rigorously defines inertia, and so helps us to determine the ultimate behaviour of our rocket:
law i
Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon.
Projectiles persevere in their motions, so far as they are not retarded by the resistance of the air, or impelled downwards by the force of gravity. A top, whose parts by their cohesion are perpetually drawn aside from rectilinear motions, does not cease its rotation, otherwise than as it is retarded by the air. The greater bodies of the planets and comets, meeting with less resistance in more free spaces, preserve their motions both progressive and circular for a much longer time (Newton, 1689).
This law—entirely based as it is on something that cannot exist—nevertheless has a rock-steady mathematical representation. It now tells us that our rocket is going to keep on moving for ever. The greater the component of the momentum of each molecule that escapes in this given direction, where pmolecule = mmolecule × vmolecule for that direction, and the greater their number, n, in that direction, then the higher is the rocket’s final velocity in the opposite direction through procket = mrocket × vrocket = n(pmolecule). And when we now bring in Newton’s universality principle of reasoning, we conclude that all rockets of this general kind, irrespective of time and location, will also keep on moving indefinitely through free space into an indefinite volume, because all their n molecules of exhaust that keep moving in that given rearwards direction will have a set of suitable actions and reactions, such that all such molecules—both inside and outside any such rocket—will eventually attain the same density and the same number of collisions per unit volume everywhere throughout all indefinite space, and as the entropy increases. This will continue until there is no further pressure or unexpanded congregation of molecules—i.e. no further potential force for acceleration—located anywhere within the rocket.
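The momentum bookkeeping for the rocket can be sketched directly. The masses, count and exhaust speed below are illustrative assumptions, and the sketch deliberately ignores the rocket’s loss of mass as the exhaust leaves (the full treatment is Tsiolkovsky’s), so it shows only the bare conservation argument.

```python
# Each exhaust molecule leaving rearwards with momentum p gives the
# rocket an equal forward impulse, so after n molecules the rocket has
# gained velocity n * p_molecule / m_rocket.  All numbers are
# illustrative, and the rocket's mass is treated as constant.

m_molecule = 4.65e-26     # kg, roughly one nitrogen molecule
v_exhaust  = 500.0        # m/s rearward, illustrative
m_rocket   = 1.0          # kg, illustrative
n          = 1.0e24       # molecules expelled, illustrative

p_molecule = m_molecule * v_exhaust          # momentum per molecule
v_rocket   = n * p_molecule / m_rocket       # forward velocity gained

print(round(v_rocket, 2))   # 23.25  (m/s)
```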
This whole matter can be easily handled by the integral and differential calculus introduced by Newton and Leibniz. Throughout the molecular redistribution and equalizing process that the molecules will undertake under Newton’s laws, no net heat will enter either the gas or the rocket from the surroundings, for there is none. There is therefore a net heat loss from the rocket of δQ. And since there is still no external pressure or piston applied, we again have Pexternal = 0. There is, however, and on this occasion, a change in volume. We do not initially seem able to compute that net volume change because, since the rocket’s far end is open, it is as extensive as the universe. But the power of the calculus is that we can give it a correct and usable mathematical representation: dV → ∞. And since we have both a zero and an infinity, which are each acceptable limits within calculus, the term Pexternal dV—which is the mechanical work done—is initially undefined. We must of course avoid a similar situation in biology … or, if we cannot avoid it, then we must properly and similarly represent it so we can appropriately handle it and so that, as with Darwin’s elephants, populations can in principle expand indefinitely, and without any change in their defining characteristics.
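The indeterminacy of Pexternal dV can be seen numerically. In the sketch below, both sequences send the pressure to zero while the volume grows without bound, yet their products head to different limits; that is exactly why the bare expression is initially undefined and the calculus of limits is needed.

```python
# Two ways of sending P -> 0 while V -> infinity.  When P falls as 1/V
# the product P*V stays at 1; when P falls as 1/V^2 the product tends
# to 0.  The 0 * infinity form on its own therefore fixes nothing.

for k in [10, 100, 1000]:
    p1, v1 = 1.0 / k, float(k)        # P ~ 1/V: product stays at 1
    p2, v2 = 1.0 / k**2, float(k)     # P falls faster: product -> 0
    print(round(p1 * v1, 12), round(p2 * v2, 12))
```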
The mechanical work done in this rocket situation may initially look undefined, but both Newton’s third law of motion—which states the equality of action and reaction—and the first law of thermodynamics—which states the interconvertibility of energy and was discovered by Mayer (and Joule)—give us mathematical control over the situation through integrations and differentiations. The former gives us the forces and velocities concerned; and by the latter, we know that the equation dU = δQ − δW still holds. Thus the rocket’s entire chemical change in state, which is its expansion in entropy, is converted into a thrust, through the indefinite volume expansion and the preservation of momentum, that propels the rocket into an indefinite outer space, and at a given final velocity. The amount of thrust derived depends entirely—and only—upon U, the initial internal energy. That initial internal energy is now completely converted into the δQ of heat surrendered to the environment as the entropy increases to the maximum possible value. We can compute that amount of thrust, and so the acceleration, through the Gibbs and Helmholtz energies of the reaction concerned in conjunction with Newton’s second law, all of which are capable of mathematical precision through calculus and its use of limits:
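The energy bookkeeping can be given a short numerical form. This is a hedged sketch with illustrative numbers rather than any real propellant data: for an isothermal ideal-gas expansion the maximum work obtainable is the fall in the Helmholtz energy, −dA = T dS − dU, which with dU = 0 reduces to T dS.

```python
import math

# Maximum work available from an isothermal ideal-gas expansion,
# computed from the Helmholtz energy: -dA = T*dS - dU.  For an ideal
# gas at constant temperature dU = 0, so everything available for
# conversion into thrust comes from the entropy term.  Numbers are
# illustrative, not a real propellant calculation.

R, T, n = 8.314, 300.0, 1.0
V_ratio = 10.0                       # expansion ratio into the exhaust

dU = 0.0                             # ideal gas, constant temperature
dS = n * R * math.log(V_ratio)       # entropy gain on expansion
max_work = T * dS - dU               # work available for thrust

print(round(dS, 2), round(max_work, 1))   # 19.14 5743.1
```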
law ii
The alteration of motion is ever proportional to the motive force impressed; and is made in the direction of the right line in which that force is impressed.
If any force generates a motion, a double force will generate double the motion, a triple force triple the motion, whether that force be impressed altogether and at once, or gradually and successively. And this motion (being always directed the same way with the generating force), if the body moved before, is added to or subtracted from the former motion, according as they directly conspire with or are directly contrary to each other; or obliquely joined, when they are oblique, so as to produce a new motion compounded from the determination of both (Newton, 1689).
And finally, we know all these things to be so through Quenette and Gerard’s statement of Newton’s fourth and final rule of reasoning, which is his declaration that the validity of a theory is not affected “until one makes contrary observations regardless of what one may hypothesize” (Quenette and Gerard, 1993). A mere hypothesis of that last kind, unsubstantiated by any contrary observations, cannot stand up against propositions inferred by inductions using this scientific method which are always based—as Darwin’s observations also were—directly upon experience. Newton states his fourth and final rule as:
rule iv
In experimental philosophy we are to look upon propositions collected by general induction from phænomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phænomena occur, by which they may either be made more accurate, or liable to exceptions.
This rule we must follow, that the argument of induction may not be evaded by hypotheses (Newton, 1689).
Newton’s three laws of motion are now incorporated in the Joule experiment. Through that, they are the foundations for the laws of thermodynamics. When combined with his rules of reasoning they have generalized themselves to embrace the totality of all natural phenomena. Both Newton’s laws of motion and the laws of thermodynamics are therefore cosmic in scale … meaning that they surely embrace biology. We have expressed them in biology and ecology as (a) the extrinsic “four laws of biology”; (b) the intrinsic “four maxims of ecology”; and (c) the three constraints that serve to link and quantify them.
The scientific method now requires that we apply these ideas and concepts to biology … and that we substantiate them directly with measurable values in our Brassica rapa experiment, and particularly in so far as we can counter any hypothesis not based on direct observations, no matter how “reasonable” such a hypothesis might seem to its proponents.
We first examine volume, V, more closely. We have already noted cases of severe misunderstandings: the etymological fallacies that biologists and ecologists are prone to because they completely misunderstand volume, and through that both energy and inertia. This is again because volume fulfils at least the following three different functions in science:
• (a) it specifies a given system’s size;
• (b) it states the energy density, or joules per unit volume, imposed across its extent; and
• (c) it states the specific energy, or joules per unit mass, enjoyed by all material components across its extent.
Although all three are subsumed in ‘volume’, each is separate and mathematically precise. We also carefully noted that although volume itself is extensive, two of its three components are in fact intensive: only (a) is extensive. We note carefully that an intensive construction does not necessarily make a commodity intensive. We further noted that volume does not in itself directly reference a determinate quantity of either mass or inertia—two properties vital to science—but that it assumes them more indirectly through either (b) or (c).
When it comes to mass, the Joule experiment does not see mass—defined as whatever manifestation of inertia, or resistance to energy, is relevant to the specified system, and here presented as molecules—either leave or be introduced. We have already noted the severe misunderstanding this causes for the terms ‘open’ and ‘closed’, and the equally severe misuse biologists make of those terms. Since a uniform count of molecules is maintained in the Joule experiment, the first aspect of volume, that in (a) above, remains unaffected. However, when the gas expands, the other two aspects of volume—(i) the energy density in joules per unit volume and (ii) the specific energy in joules per unit mass—are each affected. The precise amount depends upon the constant-volume heat capacity, Cv, of those components.
The number of molecules, i.e. their count, does not change in the Joule experiment … but their net energy does. Energy as heat and work is certainly capable of crossing system boundaries. That is exactly what is at issue, for heat and work have been very carefully defined as phenomena that can—and do—cross system boundaries due to the way mass is conjectured to behave and configure itself within those boundaries. There are clear definitions of what may and may not cross those boundaries. This emergence of heat and work over the boundaries is then carefully measured as a change in entropy inside those boundaries, but only so that energy can continue to be conserved as it crosses them. That is entropy’s purpose. Thanks to entropy, the energy inside the boundary changes its type or quality, and this can now be quantified. This is all so by those definitions, which involve the calculus. Mass is therefore defined as that which does not itself cross stipulated boundaries. It does not, by definition, cross those boundaries, even though the heat and the work that arise from its inertia and its momenta do exactly that. Therefore: anything that does not cross a boundary in this way as it produces heat and work is that system’s mass or inertia, and is to be found amongst its exact differentials as defined in the calculus; and anything that does cross such a boundary is that same system’s manifestation of work and heat, and constitutes its inexact differentials, again as defined in the calculus.
There is another important consequence. Since the temperature remains invariant, then even though the range of molecular velocities between the smallest and the greatest in the final state may differ from the range adopted in the initial state, and even though the net number of collisions per unit time and per unit volume may also change, the distribution of molecular velocities itself remains constant. The mean value does not change. This is how temperature is in its turn defined. Without that definition entropy cannot be defined; and entropy cannot then change while “other things” such as pressure and volume also change. The distribution itself does not change, but the range and intensity do, and with them the behaviour and the number of collisions the molecules enjoy in each unit volume of space.
Since the temperature in the Joule experiment remains constant, it is an integrating factor in the calculus, and of the Euler kind. It partners entropy, with both being exact differentials that describe states. So no matter how many molecules may be moving slower or faster than the mean in either the initial or the final situation, any change on one side of the mean in one situation is exactly balanced by an equivalent change on the other side of the mean in the other, so that the average molecular velocity—a manifestation of temperature and an exact differential—is unchanged. Again by the fourth or zeroth law of thermodynamics, this is how temperature is defined … this is what constant temperature means … and it is one of the various factors involved in any proposed change in volume, and inside proposed boundaries.
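This balancing of changes about the mean can be sketched numerically. The Python fragment below is illustrative only—the speeds, units, and perturbation sizes are assumptions, not measured values. Pairwise transfers alter individual speeds and widen the range, yet the mean is left untouched:

```python
import random
import statistics

random.seed(42)

# Toy stand-in for the molecular speeds in the Joule experiment
# (arbitrary units; the specific values are illustrative only).
speeds = [random.gauss(500.0, 80.0) for _ in range(10_000)]
mean_before = statistics.fmean(speeds)

# Redistribute speed between random pairs of molecules: every gain
# by one molecule is balanced by an equal loss elsewhere, so the
# individual values and the overall range change but the mean cannot.
for _ in range(50_000):
    i = random.randrange(len(speeds))
    j = random.randrange(len(speeds))
    d = random.uniform(-20.0, 20.0)
    speeds[i] += d
    speeds[j] -= d

mean_after = statistics.fmean(speeds)
```

However the perturbations are drawn, the conservation of the total across each pair is what pins the mean in place.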
When we turn to biology and closely examine the scientific intent of “volume”, we see that its first task—which is to indicate a given system’s size—is now achieved through the entity count. In the Joule case, this is the number of molecules. But that entity number is ultimately achieved through the Avogadro number, and so as the total molecular mass, which is the system’s mechanical inertia. But a biological system—whether it has or has not yet reproduced—is also specified by its entity count. It is therefore always of a given size of n entities or N biomoles. The first task for volume is therefore achieved through that count. But we carefully note that as soon as we specify an entity count, then through the Avogadro number we also specify both a component mass and a chemical bond energy holding that mass together. One or the other of these, or perhaps both, must therefore be the system’s biological inertia.
The second aspect of volume is the energy density measured as joules per unit volume. But as we have already determined, this is just the joules per entity of a biological system. It is the average individual Wallace pressure. By Maxim 2 of ecology, which is the maxim of number, it is in fact the divergence in the energy flux, p̅. We also already know, through the Helmholtz theorem in the vector calculus, that this is immediately a defining property for any population or species. And we again note that either this joules per entity, or else the upcoming joules per unit mass, which is the specific energy—or perhaps both together—is the system’s inertia.
We further know—thanks to the sterling work of Mayer and Helmholtz, with the latter being responsible for the mathematical formalism within thermodynamics—that PV, or more accurately PdV, which is the change in volume at a given application of pressure, is a measure of the mechanical work being done at any time by a given thermodynamic system. As V increases, the ability to do work lessens as the pressure lessens, and the quality of the energy—which is its increase in entropy, S, as can also be noted through that decrease in pressure, P—changes. This is a steady decrease in the ability to do further mechanical work, and PdV tends to zero. This continues until the expansion is complete, at which point both the volume and the entropy are at a maximum for that system, and under those conditions. This decline in the ability to do further mechanical work into the surroundings can also be noted in the system’s change in its internal configuration over that volume … which again shows itself in the decline in the energy held per unit volume. Each of the molecules now patrols a greater amount of space, for the volume over them all has increased. The energy density per unit volume has thus decreased; and the number of their mutual collisions per unit time has also decreased, which is again reflected in the increase in entropy. When scaled up and expressed per the biomole, this is the constraint of constant size of R joules per biomole. And since we always know n—for it is the population’s size—then we also always know the Wallace pressure, P, which is immediately the population’s constraint of constant propagation. So we have now specified two of the three constraints active on any biological system, with either this second constraint of constant size, which is the energy density, or the upcoming third of constant equivalence, which is the specific energy, or else a combination of both, being a biological manifestation of inertia.
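The diminishing increments of PdV work can be illustrated with a short numeric sketch. The gas, temperature, and volumes below are assumptions for illustration—one mole of an ideal gas expanding isothermally—and not the biological data:

```python
import math

# Numerically integrate P dV for one mole of ideal gas expanding
# isothermally, showing that each successive increment of mechanical
# work P*dV shrinks as V grows and P falls.  All values illustrative.
R = 8.314               # gas constant, J/(mol K)
T = 298.15              # temperature, K, held constant
V1, V2 = 0.010, 0.020   # assumed initial and final volumes, m^3
n_steps = 1000

dV = (V2 - V1) / n_steps
work_increments = []
V = V1
for _ in range(n_steps):
    P = R * T / (V + dV / 2)       # ideal-gas pressure at the slice midpoint
    work_increments.append(P * dV)
    V += dV

total_work = sum(work_increments)
analytic = R * T * math.log(V2 / V1)   # closed form: W = RT ln(V2/V1)
```

The first work increment exceeds the last, and the summed increments agree with the closed form, which is the sense in which each further unit of expansion buys less mechanical work.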
The third aspect of scientific volume is the specific energy or joules per unit mass. We currently measure it by its inverse, which is the visible presence, V. This is the Gibbs energy per unit mass—the specific Gibbs energy—which is reflected in the quantity of mass a given biological population can build per each unit of energy made available to it, and at that configuration. This visible presence is thus determined either by P/M or else by p̅/m̅. It is determined from the conjoined fluxes in mass and energy, each of which uniquely defines a population through the Liouville theorem; or else from the two divergences which, through the Helmholtz theorem, are also definitive. We also realize that either this, or the previous joules per entity, or perhaps some combination of both, is the system’s inertia.
The quantity of mass a given population and its entities can support per each unit of energy depends directly upon the quantity of energy each of those entities can circulate about themselves, and so can devote to measurable work towards the completion of the generation, which is the circulation. This is the curl about each entity. Taken over the entire population, and over the entire generation length, the sum over all of them creates the total circulation in the mass of chemical components. Therefore P/M and p̅/m̅, which state the inverse of specific energy or the visible presence, V, are measures for the curls in both mass and energy … each of which is again—through the Helmholtz theorem of the vector calculus—a defining property for any population. The divergence in energy therefore states the energy size, or energy density, per each entity over the population; and meanwhile the specific energy, which is the curl in energy or its inverse, states the mass held per each. It therefore also eventually states the divergence in mass … and by that means further defines the population. These two curls and divergences in mass and energy, taken together, and each of which we measured with our planimeter, state the entire chemical type and chemical configuration … and are the third and final constraint of constant equivalence. They are together stated as joules per unit mass and the specific energy or its inverse, the visible presence. These three constraints of constant propagation, constant size, and constant equivalence therefore between them state the three components, in science, and for biology, of volume. This finally and irrevocably confirms and corrects the misuse of this term, and its implications for energy and entropy within biology and ecology.
If p̅ and m̅ change together—meaning that P and M also change together—then only mechanical chemical work is currently being done. But if the joules per entity, p̅, holds constant while P and M change together, then m̅ must also hold constant. The only thing that can then be changing is n, the entity count, the first component of volume. Given these conjoined changes, the mass flux over the population must be changing … but there are no accompanying changes in either the joules per entity or the joules per unit mass, because the total energy flux, P, is also changing by the same degree, and those two remain invariant. Both the joules per entity and the joules per unit mass are intensive rather than extensive, and since the two fluxes of P and M in this case change proportionately, there is no accompanying change in individual entity size, in entity configuration, or in the non-mechanical work being done over the population. There can only, again, be changes in numbers.
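The bookkeeping just described can be made concrete in a few lines. In this sketch the counts and per-entity fluxes are illustrative assumptions, not measured values; it simply confirms that when n alone changes, with P and M scaling in direct proportion, every intensive per-entity quantity is left unchanged:

```python
# Toy bookkeeping for the case described above.  The counts and
# per-entity fluxes are assumed for illustration: if the population
# fluxes P and M change only because the count n changes, the
# intensive per-entity values are untouched.
def describe(n, m_bar, p_bar):
    M = n * m_bar      # population mass flux, kg/s (extensive)
    P = n * p_bar      # population energy flux, W (extensive)
    return {"M": M, "P": P,
            "m_bar": M / n,              # kg/s per entity: intensive
            "p_bar": P / n,              # W per entity: intensive
            "ratio": (M / n) / (P / n)}  # mass moved per unit energy, kg/J

before = describe(n=724, m_bar=2.0e-6, p_bar=2.7e-7)
after = describe(n=1448, m_bar=2.0e-6, p_bar=2.7e-7)  # n doubles, nothing else
```

Doubling n doubles both extensive fluxes while every intensive entry of the dictionary is identical before and after.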
It is very evident that the consequences of changes in population size, n, must be treated with the utmost care, for they at every turn have the capacity to affect the population’s specification and definition equally, for both the mass and the energy fluxes are being equally affected when only the numbers change. But we have seen how a pure change in numbers, no matter what the cause, can be defined. This greatly affects Law 4 of biology, which is the law of reproduction.
The third maxim of ecology is the maxim of succession, and so governs the heritability of chemical components. This is a quite separate matter from “pure reproduction” as we have just noted it above, which only covers numbers. Maxim 3 is ∇ × M = ∂m̅/∂t − ∂n/∂t. We saw, when we derived it, that if a mass flux of M kilogrammes per second circulates about all the entities at every t over T, then the total circulation in mass for that generation is ∫M dT. And if every distinct entity contributing to that circulation has a mass m̅, and if there are n in the population, then the total mass of components they conjointly use over that same time span is ∫dm̅ dn. Since the former is the circulation while the latter is effectively the area, the curl, which is the circulation per unit area, is ∫M dT/∫dm̅ dn. This mass flux is now the measure of the force driving the heritability and succession of chemical components, in that these are imported, sustained, and exported by the F1 aspect of F(F1, F2), which causes the circulation about, and operates upon, the system at large; and so acts on these measured chemical components as they succeed each other and are handed on from t to t over time. This matter of heritability and succession is again separate from pure reproduction, which only involves numbers and changes in numbers.
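Under the simplifying assumption that M and m̅ are held constant across the generation, the curl ∫M dT/∫dm̅ dn reduces to (M·T)/(m̅·n), and its sensitivity to numbers can then be sketched directly. The flux, generation length, and masses below are illustrative assumptions, not the Brassica rapa measurements:

```python
# Minimal numeric sketch of the curl in mass as circulation per unit
# area, computed under the assumption that M and m_bar are constant
# over the generation.  All values are illustrative only.
def curl_in_mass(M, T, m_bar, n):
    circulation = M * T        # stands in for the integral of M dT, kg
    area = m_bar * n           # stands in for the integral of dm_bar dn, kg
    return circulation / area

T = 36 * 24 * 3600             # an assumed 36-day generation, in seconds
base = curl_in_mass(M=1.5e-9, T=T, m_bar=2.0e-3, n=724)
fewer = curl_in_mass(M=1.5e-9, T=T, m_bar=2.0e-3, n=723)
# With everything else pinned, losing one entity changes the curl.
```

With the circulation held fixed, the curl scales inversely with n, which is exactly the sensitivity to numbers the maxim asserts.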
By the same token, and using the same means, we can turn to the fourth maxim of ecology, the maxim of apportionment: ∇ × H = ∂m̅/∂t − ∂n/∂t − ∂V/∂t. With it we can determine both the accompanying circulation in energy and its curl, which is the means by which all configurations and manipulations of the acquired components succeed and inherit each other. That curl in energy is therefore given by ∫P dT/∫dp̅ dn. So both the quantity of chemical components curling about the entities, which we determined above, and the way in which those components are configured, which is this curl in energy, again and directly depend upon n, the number of entities maintained at each t over the entire generation length, T. The force applied at each such moment upon the molecules is F2 and is the curl in energy, with the recipient of that force being the curl in mass. So once again the matter of (a) the reproduction of numbers is distinct from (b) the succession and heritability of given components, which is distinct from (c) the succession or heritability of configurations. It is of course possible for given components to be configured in different ways even as they succeed each other. The heritability of components is therefore distinct from the heritability of configurations.
Since the point-energy-entities that are the biological entities are each the recipient of—and so the centre for—all energy interactions that cross boundaries, these biological point entities are exactly this system’s inertia, quite irrespective of the gravitational or mechanical mass attributable to the chemical components that exhibit that specific manifestation of inertia, and so that biochemically structure them. The divergence in energy, which is also the ultimate determinant of the curl in energy, p̅, is therefore the manifestation, within biology and ecology, of the inertia of energy. It is the biological equivalent—in strict mathematical terms—of the mass of a standard Newtonian or dynamical system.
By the Einstein equation of E = mc², mass and energy are equivalent. Mass is simply the confinement of energy. It is a locus for energy’s inertia. In accelerating and circulating components about itself, a biological point-centre acts as a centre for inertia. The chemical components it manipulates can and do cross system boundaries and are evidence of its work and heat, but the point-centres themselves do not cross any boundaries and instead draw those components in, circulate them about themselves, and then emit them exactly as work and heat. Those components would not move or cross boundaries without that work done and heat emitted by the point-entities, in the same way that in a standard thermodynamic system work and heat would also not emerge without the activities of the molecules, which in that case also function as the point-centres of energy. A biological entity is a limit-point; mass and inertia are thus distinct, with any other usage or understanding within biology and ecology being a misunderstanding of the scientific import of both of those fundamental terms: ‘mass’ and ‘inertia’.
It is absolutely critical that this clear distinction between the gravitational-mechanical aspect of biological entities on the one hand, and their biological-energetic aspects on the other, be properly recognized, for it is the source of yet more of the etymological fallacies with which biology and ecology are rife. Biological entities and their populations—as limit-points and collections of limit-points—exhibit curl, flux, circulation and divergence. These cause given phenomena to cross system boundaries under heat and work, but the points and collections of points themselves cross no boundaries. They nevertheless exhibit the inertia associated with energy as they cause fluxes that then pass through our Gaussian surfaces and the volumes they delimit, as in Figures 19, 25, 36 and 60.
The proposal of the Aristotelian template insists that no changes in n will affect the entities’ characteristics or behaviours. This means, in mathematical terms, that neither p̅ nor m̅ may change simply because n changes. We therefore again have the situation that if the proposal holds good, then both M and P must always change in exactly the same way—i.e. by direct proportion—so that the specific energy is always left unchanged when n changes. If M and P, and m̅ and p̅, do not change identically when n changes so that the specific energy remains constant, then the chemical type and the chemical configuration over the population change. For example, if every entity exhibits some kind of reproductive or mitotic imperative and instantaneously splits in two, then m̅ and p̅ can change equally instantaneously, even though the population’s specific energy, which is the ratio between them, stays the same. Both of those can immediately halve even as n doubles, while the population’s mass and energy fluxes, M and P, remain equally instantaneously unchanged. Maxims 3 and 4 of succession and apportionment thus again deal with distinct issues from Law 4 of reproduction, and they must all be capable of separate analysis.
The only other alternative available to a population is for T, the generation length, to change … but T is also a defining property for any population or species, for it establishes the boundary conditions for both the Liouville and the Helmholtz theorems. The proposal of the Aristotelian template therefore and in essence demands that when the population count increases or decreases, in any given environment, and for any generation, both ∫M dT/∫dm̅ dn and ∫P dT/∫dp̅ dn must maintain the same value—unity—throughout; with neither m̅ nor M, nor p̅ nor P, nor T, howsoever they may choose to interact, changing any differently, overall, from n; so that the curl in both always remains constant, and so that no force or energy acts on any entity for a longer or a shorter period in any effort to make up any indicated deficiency, either in mass or in energy or otherwise, as may be caused simply by changing numbers.
All of this immediately implies that if the proposal of the Aristotelian template is to hold good, then all entities must at all times (a) process their masses, and (b) give off and absorb energies, in a uniform manner relative to each other across the entire generation. There must be no relative changes in mass, in energy, or in time. All relative masses and relative energy densities must hold constant for all entities at each t over T. Therefore, if a given population’s values for mass, for visible presence, or for both alter, then every entity within that population must also alter its individual mass, and/or its individual configuration, to precisely match both each other and the entire population values. No entity may process or use or configure its energy or its mass any faster or slower than any other, or change the time scale, no matter what may be happening to n. If one entity at any given t differs in any way from another at that same t in that, in a prior, or in a succeeding generation, then a curl has been immediately introduced, and we have a direct response to a change in numbers. Therefore both pairs of values—dP/dt and dp̅/dt, and dM/dt and dm̅/dt—must again vary together in and throughout all generations. And since the total energy flux depends upon the number of entities, while the curl depends upon the number constituting the circulation and boundary, then if dM/dt is at any time different from dm̅/dt, or dP/dt from dp̅/dt, or if T tends to change due to n, then the population must be exhibiting Darwinian competition and evolution. It simply cannot, under these conditions, be a population obeying the proposal of the Aristotelian template. Those are the conditions that the template proposal imposes.
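That closing test—dM/dt varying together with dm̅/dt, with any residual indicating a response to numbers—can be sketched as a simple calculation. The series below are illustrative assumptions, not data:

```python
# Sketch of the test stated above: compare the population-level rate
# dM/dt against n times the per-entity rate dm_bar/dt.  Under the
# template proposal the two coincide whenever n is constant; any
# residual when n changes signals a response to numbers.
def rates(series, dt=1.0):
    # Forward differences between successive samples.
    return [(b - a) / dt for a, b in zip(series, series[1:])]

n     = [724, 723, 723]               # entity count at each t (illustrative)
M     = [1.50e-3, 1.52e-3, 1.54e-3]   # population mass flux, kg/s
m_bar = [M_t / n_t for M_t, n_t in zip(M, n)]

dM     = rates(M)
dm_bar = rates(m_bar)
residual = [dM_t - n_t * dmb_t
            for dM_t, n_t, dmb_t in zip(dM, n[:-1], dm_bar)]
```

In the first step, where one entity is lost, the residual is nonzero; in the second, where n is constant, dM/dt and n·dm̅/dt agree.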
When it comes to analysing our data, then we have successfully derived two population equations. It is surely time to start putting them to good use in examining this situation. We stated them both at the head of this section, the Euler one being:
µ = (∂S/∂U)V,{Ni} dU + (∂S/∂V)U,{Ni} dV + Σi(∂S/∂ui)U,V,{Nj≠i} dui + Σi(∂S/∂vi)U,V,{Nj≠i} dvi,
the Gibbs-Duhem one being:
m̅µ = m̅dS = dU + dH − Σiµi(dvi − dmi).
Each contains our biological potential µ, which we already know we can measure in watts, or joules per second, at every point t over the generation length, T.
Our two equations also contain terms for what we have called the essential development, λ, which is all population-wide changes of whatever kind. It is composed, firstly, of the probundance, γ, which is the population-wide mechanical chemical energy. We can measure it in kilogrammes. It is composed, secondly, of the procreativity, ψ, which is the population-wide expenditure in non-mechanical chemical energy, and so also all population-wide changes in chemical configuration or in the visible presence, V, which is P/M, and which we can measure in kilogrammes per joule. If P changes while M does not, or vice versa, or if they change at different rates, then we can measure this in V, which is the unit non-mechanical chemical energy, and which establishes the energy flux.
Our two equations additionally contain terms for what we have called compensatory development, L, which is the sum of all possible individual changes incurred by all individual entities as they are introduced or removed. This is also divided into two parts. There is, firstly, the abundance, C, which is the sum of all individual masses and/or changes in mass undertaken as each entity is independently lost or introduced. It is again measured in kilogrammes. And since it is sensitive to numbers, it gives us access to m̅. Secondly, there is the accreativity, Y, which is the sum of all individual configurations and/or changes in chemical configuration, or visible presence, over the population as each is again lost or introduced … and which we can also measure in kilogrammes per joule. This gives us independent access to p̅.
The proposal of the Aristotelian template now insists that all the compensatory development—which is L = C + Y, and is all changes in m̅ and p̅ as numbers change—is zero across any and all populations, no matter what the scale of changes in n: i.e. L = C + Y = 0. The proposal instead insists that everything about a population is stated in λ = γ + ψ. The alternative, Darwinian, proposal opposes this and says that L is never—and can never be—zero. But since we now have, through our equations, the necessary analytical tools, we need only clarify what exactly we should measure to contrast these Darwinian and Aristotelian proposals, so that we can conduct the experiment that unambiguously provides the necessary values for these four terms and variables, as given earlier, and that will once and for all settle this debate.
Given our two population equations, this has now become a simple exercise in something at which science excels: using partial differentials to determine respective rates of change in a variety of dependent quantities. Since their two components are expressed as measurable partial differentials, the issue has now become that of determining the quantity of compensatory development, L, which is the net change in m̅ and p̅ when numbers change … including in reproduction. Its continuous value of zero at every t over T is absolutely required if the proposal of the Aristotelian template is to be sustained while Darwin is refuted.
We have only two possible alternatives. Either n is stationary or it is not. That is to say, either dn/dt has an effect, or it does not. If the proposal of the Aristotelian template holds then P and p̅ and M and m̅ must always change in concert, for a dn/dt = 0 of that kind is the definition of freedom from changes in n. Nothing else will suffice. This is certainly testable.
The only possible variables over the population are mass, energy, and number—and they are all present in our given equations. We must now describe both essential and compensatory development, λ and L, so that we can always indicate their magnitudes in any real case. The apparent difficulty is that mass and energy will always in any case be changing at the direction of DNA, and whether numbers do or do not change. We must somehow separate all such pure DNA changes—some of which are in mass and some of which are in configuration—from any due instead to n.
By the canons of the vector calculus the rate at which flux density increases or decreases in any given infinitesimal volume element is proportional to the flux flowing towards or away from that element. If our proposed model using our Gaussian surface is to be of any utility then it must allow us to rigorously track numbers and rates of change in numbers so we can quantify all their possible effects. And since we have identified Darwinian evolution as the biological induction that occurs whenever a change occurs in any circulating mass flux, then we need only identify the contributions that changes in numbers make to the induced fluxes, divergences, curls and circulations.
We have already shown that our biological potential, µ, is eminently measurable. It is the joules per second measured in any population, at any given time, as the instantaneous energy flux. It is the measure of the instantaneous changes a biological population undertakes in its apportionments of energy and according to Maxim 4 of ecology, the maxim of apportionment. Biological potential measures the changes in energy at every t over T, no matter what the reasons. It is in our case, and in the Brassica rapa experiment, the photosynthesis rate in so far as this is the means used, by the population, to take in and give off energy at any given time.
The proposal of the Aristotelian template insists that when an entity is lost, then an abstract template immediately exerts itself to limit any and all ensuing changes or variations in biological potential to those of the template. The essential claim is that there therefore exists a “perfect” biological system, along with entities comprising it, that are utterly free from all influence of the environment, and so from all changes in numbers. The proposal is that it does not matter what the environment does, nor how many entities are reproduced, nor how many survive or do not survive in any given population. There will be no changes in energy or activity attributed to dn/dt—which influence must always remain at zero throughout. We must somehow build or give mathematical expression to that template so we can measure it.
Our two equations make it clear that there are only two reasons why a biological population can ever change in its mass flux, M. One is the probundance, γ, of essential development, λ. The other is the abundance, C, of compensatory development, L, and so that M = γ + C. The same holds for the energy flux. The only two reasons it can change are the procreativity, ψ, of essential development, λ, and the accreativity, Y, of compensatory development, L, and so that P = ψ + Y. If the proposal of the Aristotelian template holds then we must again have C = Y = 0, so that λ = γ + ψ is the population's complete expression, with M = γ and P = ψ holding at all times.
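The decomposition M = γ + C (and likewise P = ψ + Y) gives a direct computational test: under the Aristotelian proposal the residual C(t) = M(t) − γ(t) must vanish at every t. A minimal sketch, with illustrative series rather than experimental data:

```python
# Direct test of the decomposition stated above, M = gamma + C and
# P = psi + Y: under the Aristotelian proposal the compensatory
# parts C and Y are identically zero, so the measured flux must
# track the essential development at every t.  Series illustrative.
def compensatory(flux_series, essential_series):
    """Residual C(t) = M(t) - gamma(t), or Y(t) = P(t) - psi(t)."""
    return [f - e for f, e in zip(flux_series, essential_series)]

# A population whose mass flux tracks essential development exactly:
aristotelian_C = compensatory([1.0, 1.2, 1.5], [1.0, 1.2, 1.5])
# One whose flux departs from it after entities are lost:
darwinian_C = compensatory([1.0, 1.2, 1.5], [1.0, 1.1, 1.3])
```

The residual series is the experiment’s target quantity: all zeros sustains the template, any nonzero entry is compensatory development.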
We must therefore find a way to determine, in any given situation, the magnitude of each of the mass and the energy fluxes, M and P; the magnitude of the changes in those fluxes, dM/dt and dP/dt; and how large the contribution to those fluxes and changes is for each of the probundance, abundance, procreativity, and accreativity so that we can determine, in a real case, if C and Y are indeed zero. Data from a test species—such as our Brassica rapa—would now be most helpful.
We now observe, from our Brassica rapa data, as given in Table 1, that a typical flowering plant, as computed for our equilibrium age distribution population, is one amongst n = 724 of its kind. Our experiment also determined the population’s biological potential at this stage as µ = 1.949 × 10⁻⁴ watts. We measured it as the photosynthesis rate, which is the biological potential or instantaneous energy flux, dS. It is the tendency to (a) take on energy; and then (b) apportion that energy within the population and over the entities. But it is also (c) the tendency to change both the quantities of that flux and its apportionment ratios in accordance with the plant’s “need”—as encoded in its DNA—to develop and to complete the cycle of the generations. This is so whether or not each plant is responding to a relative superfluity or deficiency in numbers at that moment. No matter what its chosen apportionment ratios, and no matter what theory is applied to this given instant in its cycle, the biological potential over the population is as stated in watts. This is, once again, the population’s measured photosynthesis rate, which is its uptake of energy, and its tendency and ability both to do work and to change its manner and style of work no matter how the conditions are described.
We can now take a step closer towards settling this debate. Let one of those 724 flowering plants be lost. The Darwinian position is simple. It is that the remaining 723 plants will immediately invoke Maxim 4 of ecology, which is the maxim of apportionment. By the Darwin proposal, the remaining 723 plants will immediately augment their individual biological potentials such that they engage in additional and incremental gains in mass and/or in energy to make up for the numbers lost. This is the ∂n/∂t factor present in our equations for Maxims 3 and 4 of ecology, the maxims of succession and apportionment: ∇ × M = ∂m̅/∂t − ∂n/∂t and ∇ × H = ∂m̅/∂t − ∂n/∂t − ∂V/∂t respectively.
The Darwinians argue that when a plant is lost, the ∂n/∂t has an immediate effect. The remaining 723 plants will immediately institute a compensatory development, L, of that specific size. They will begin doing work at that increased rate, entirely because of the plant lost. But since all 723 survivors will be doing the same, each will most likely increase only by the mean of the potential surrendered by that plant, which is µ̅ = µ/n over the population. This factor of µ/n gives us a way to begin making measurements. By this Darwin proposal, the net energy flux will increase by 2.696 × 10⁻⁷ watts per entity over the population because of the loss of this one entity … which is certainly measurable.
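That per-entity figure follows from a single line of arithmetic; the small gap between the computed value and the quoted 2.696 × 10⁻⁷ watts presumably reflects µ being carried to more decimal places than the reported 1.949 × 10⁻⁴ watts:

```python
# Checking the per-entity figure quoted above: the mean potential
# surrendered by one lost plant is mu_bar = mu / n over the population.
mu = 1.949e-4    # measured biological potential, watts
n = 724          # plants in the equilibrium age distribution
mu_bar = mu / n  # watts per entity, roughly 2.69e-7
```
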
The difficulty now, of course, is that there are other changes going on within the population at the same time as all the entities continue to follow the programming in their DNA. An underlying trajectory for mass and energy is sure to continue whether an entity is lost or not. Our difficulty, therefore, is determining how much of any increment in a mass or an energy flux, or in a biological potential, amongst the survivors is due to that underlying and possibly invariant, pure-DNA-imposed trajectory; and how much is due simply to any ongoing loss in numbers.
The singular advantage of our approach and our model—which is but one of four known to this author—is that this proposed Darwinian competition, i.e. the ongoing changes in biological potential due entirely to changes in numbers, will ultimately be measurable in:
- the energy flux;
- the mass flux; and
- the biological potential itself, which is in our case the plants’ photosynthesis rate.
But not only is this biological potential now measurable, we can even apportion it between the three elements we have identified in Maxim 4 of ecology, which is stated as ∇ × H = ∂m̅/∂t − ∂n/∂t − ∂V/∂t. There is the current biological potential itself, µ. There is also of course the infinitesimal rate at which it is changing, for whatever cause, dµ. These express themselves in the prevailing apportionment ratio, as well as in a rate of change in those apportionments. These then express themselves as (a) the instantaneous numbers of components held, along with an acceleration or change in that number; and also (b) a configuration for those components, along with an acceleration or change in the nature of the configuration.
We can again examine the data. When Brassica rapa first germinates, the average individual number of chemical components maintained produces an average individual components mass of u̅ = 1.171 × 10⁻³ grams. In addition to that, the distinctive configuration of their chemical bonds gives to each one an average individual energy content of h̅ = 1.017 joules. These are simply the divergences in mass and energy. We of course do not yet know how much of this is due to ∂n/∂t, i.e. to changes in numbers.
Since there is always both an underlying mass and energy flux, and an underlying divergence in each, then each Brassica rapa entity must contribute to those fluxes and divergences through the mechanical and the non-mechanical components of energy. Since some energy must be expended simply in maintaining each entity’s biological integrity, as against the ravages imposed by the environment’s escaping tendency, then some of that chemical bond energy—but it simply cannot be all—must express itself intrinsically and internally and metabolically, and so non-mechanically. This is in the configuration of the components demanded by B. rapa and its unique DNA. But by the same token, some of that chemical bond energy—but it again simply cannot be all—must always be reflected in the ongoing mechanical work done, which is the absorption and relinquishing of chemical components to and from the environment as in nutrition, excretion, cellular respiration and the like. These are the complete mass flux and its divergences, along with the specific chemical bond energy required to maintain that flux, both of which differ by being exact differentials. The two together form the complete energy flux and work done and heat given off, along with their inexact differentials. The issue then is how much energy is apportioned to each of these … and … how much, if any, is being simultaneously apportioned to counter any potential loss in numbers within all those fluxes, and so to offset those ongoing losses in numbers—assuming the Darwinian proposal is correct. What, then, is the size of the change in biological potential, µ, that would occur if there were no change in numbers, and can we measure it?
Since this is science, then we of course give primacy to the data. And the data tells us that when Brassica rapa first germinates, the configuration of its biological matter allocates its Gibbs energy between its mechanical and its non-mechanical work such that its measured energy flux emerges at the rate 3.276 × 10⁻⁵ watts, or joules per second, over the population. But when it reaches its full leaf stage, the mass of chemical components held has increased by 43 times to approximately u̅ = 0.05 grams. Is any of this change due entirely to a change in numbers, and if so, how much?
Having determined the changes in mass over this time period, and at the hands of the change in biological potential, we can now look at the changes in energy. Over the same time interval that the average individual Brassica rapa mass has increased 43-fold, the average individual energy content has increased by only 6.90 times to h̅ = 7.02 joules. Most of the apportionment emphasis has therefore clearly been on mechanical chemical energy, because the apportionment made into non-mechanical energy, over that same period, has been only 16% of the scale of the apportionment made into mass. Since the relative apportionments have changed over time, then the nature and character of the biological potential—which distributes energy amongst all available paths—has also clearly changed. That change in the fluxes and the divergences for the individual plants is a part of Brassica rapa’s definition. Once again, is any of this due entirely to a change in numbers, and if so, how much?
But as well as the above changes in apportionment between the mass and the energy fluxes and divergences, the biological potential has itself grown by a factor of 13 to 4.151 × 10⁻⁴ watts. So B. rapa has not only increased in its mass over the period, it has constantly changed the quality and the character of its biological potential. It has reconfigured itself so that its biological potential has not only increased overall, but has changed in its apportionment ratios. As a result, B. rapa can now take on more energy more rapidly; more mass more rapidly; and/or reconfigure itself more rapidly so it becomes even more differentiable; and emit and absorb mass and/or energy, and/or change them, all the more rapidly than ever. That is the complete catalogue of changes in its biological potential over the period. The only question, once again, is how to determine how much of these changes—if any—have been due entirely to changes in numbers, and so how much those changes we can express as ∂n/∂t are contributing to the ongoing biological potential.
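The growth factors quoted in this and the two preceding paragraphs follow directly from the measured values; a minimal Python check, reading the quoted powers of ten as negative, which is what the stated growth factors require:

```python
# Germination vs full-leaf values for Brassica rapa, as quoted:
u_seed, u_leaf = 1.171e-3, 0.05        # average individual mass, grams
h_seed, h_leaf = 1.017, 7.02           # average individual energy, joules
mu_seed, mu_leaf = 3.276e-5, 4.151e-4  # biological potential, watts

mass_factor = u_leaf / u_seed          # ~43, the quoted mass increase
energy_factor = h_leaf / h_seed        # ~6.90, the energy increase
relative = energy_factor / mass_factor # ~0.16, i.e. the quoted 16%
potential_factor = mu_leaf / mu_seed   # ~13, the potential increase
```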
It is now extremely difficult to proceed and to determine those specific values for the effects of ∂n/∂t on the fluxes and divergences in mass and energy without entering one of the most contentious areas of biology. The terms ‘character’, ‘variation’, ‘heritability’, ‘mutation’, ‘ecophenotype’, ‘free variation’, ‘breeding system’, ‘continuous variation’, ‘multifactorial inheritance’, ‘genetic variability’, ‘phenotypic plasticity’, ‘potential variability’, ‘qualitative inheritance’, ‘polygenic inheritance’ and many others set out the conceptual difficulties (Martin, 1990; Abercrombie et al., 1990; Hale and Margham, 1988). But since the majority are, as ever, without a clear definition, they are best avoided. It is best to stick to clearly definable and measurable properties such as mass, energy and number.
Darwin defined natural selection and set out the issues regarding variations and inheritance in clear and exemplary fashion by saying:
I have called this principle, by which each slight variation, if useful, is preserved, by the term of Natural Selection … (p. 61).
…
… Let it be borne in mind how infinitely complex and close-fitting are the mutual relations of all organic beings to each other and to their physical conditions of life. Can it, then, be thought improbable, seeing that variations useful to man have undoubtedly occurred, that other variations useful in some way to each being in the great and complex battle of life, should sometimes occur in the course of thousands of generations? If such do occur, can we doubt (remembering that many more individuals are born than can possibly survive) that individuals having any advantage, however slight, over others, would have the best chance of surviving and of procreating their kind? On the other hand, we may feel sure that any variation in the least degree injurious would be rigidly destroyed. This preservation of favourable variations and the rejection of injurious variations, I call Natural Selection. Variations neither useful nor injurious would not be affected by natural selection, and would be left a fluctuating element, as perhaps we see in the species called polymorphic (pp. 80-81) (Darwin, 1859).
Darwin does not quantify and give magnitudes, but his theory is certainly based on his ‘slight variations’. According to him, any ‘slight variations’ that also prove themselves ‘useful’ are passed on to progeny. But … this immediately raises the question of what ‘non-useful’ slight variations might be. We must somehow distinguish between the two so we can isolate those—if any—that directly contribute to evolution.
As for a mechanism for natural selection, we have already used our vector calculus model to suggest that all variations and properties are passed on to—and through—progeny by a process of biological induction, and as per our Maxims 3 and 4 of ecology, succession and apportionment respectively.
Darwin’s principle is in other words about number, that ∂n/∂t factor; and about the effect that transmissibility through numbers has on the continuity of the mass and energy of biological systems through their fluxes, divergences, curls and circulations. These properties have the singular advantage of being measurable through both molecular and systemwide variables. The divergences and curls are differentiable and describe the behaviour of points and so are molecular in scale, while the fluxes and circulations are integrable and describe the overall behaviour of the systems containing those populations of points.
Darwin also clearly points to the difficulties of principle his theory faces, because Chapter 1 of Origin of Species is entitled “Variation Under Domestication”, and he has a subsection entitled “Difficulty of distinguishing between Varieties and Species”. His carefully chosen phrase ‘fluctuating element’, as he used it above, in fact describes the situation we have already seen in the Liouville theorem. This embraces a variety of phase densities, but subsumes them all within the same boundaries, energies, and forces. We can now therefore conjecture that one of Darwin’s ‘non-useful’ and so non-hereditary variations is whatever ‘fluctuating element’ enables a given set of subpopulations to respect the Liouville theorem such that given properties are held in common, but without any long-term changes in trends or values. Variations or fluctuating elements within populations of this kind might then well not be either dependent on, or else transmissible simply through and because of, numbers. Since these variations respecting the Liouville theorem do not seem to lead to long-term changes, then they might well also not depend upon variations in entity density between young and old within the generations, which would then be between the pre- and the post-reproductive, and in the manner Darwin suggests. Such variations would also seem not to depend on the vagaries of the environment in so far as those are the very vagaries responsible for changes in numbers. This could therefore serve as a suitable fixed template for the proposal of the Aristotelian template.
Taking all this together, the proposal of the Aristotelian template is that there are zero ‘useful’ variations. There is in other words nothing that depends on ∂n/∂t in either mass or configuration. All generations and subpopulations simply cluster around given mean values, exactly in the manner suggested by the Liouville theorem. In this view, the possibility for such a coordinated range of variables that support a given set of stable values then exhausts the possibilities for all variations, with none being useful or heritable or transmissible in the evolutionary manner intended by Darwin.
If the immediately above is so, and the Liouville theorem could indeed indicate the ability of populations to vary both (a) within the scope of a set of given variations; and (b) from one generation to the next; but always (c) by clustering around given means and boundary values; then we must certainly try to separate useful or evolutionarily relevant variations from the non-useful or non-evolutionary variety. We can then see if the Aristotelian template claim that Darwin’s useful variations—i.e. his proposed long-term, change-inducing and heritable ones—number zero does indeed hold good. We must propose a suitable set of such non-useful and non-inheritable variations and test the hypothesis in the real world.
But we must first, of course, try to define those proposed non-useful—i.e. supposedly non-evolutionarily relevant—variations. If the proposal of the Aristotelian template is truly to be indifferent to all changes in n, then we must properly define a set of variations to which it can refer, so that entities in such a population can never inherit due to changes in n … but also so that they can always inherit in response to all other changes and causes.
By the Liouville theorem the Hamiltonian, H, of a biological system remains constant, and such that the density of subpopulations, along with their moles of components, q, and their numbers, n, hold constant around given values as biological cycles repeat. Subpopulations exhibit variations that circulate around given mean values … all of which are in principle determinable. Since we have, through that theorem, a clear description of what we need, we need only correctly define the values and terms and then conduct an experiment—such as with a test species like Brassica rapa—and determine exactly those indicated values. We now construct that very experiment.
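Before constructing the physical experiment, the Liouville expectation itself can be sketched in simulation: subpopulation values that merely fluctuate about fixed means show no long-term drift from generation to generation. All the numbers below are invented purely for illustration:

```python
import random

random.seed(1)
Q_MEAN, N_MEAN = 2.5, 724   # hypothetical template values for q and n

def generation(size=50):
    """One generation: entity values scatter about the fixed means."""
    qs = [random.gauss(Q_MEAN, 0.1) for _ in range(size)]
    ns = [random.gauss(N_MEAN, 5.0) for _ in range(size)]
    return sum(qs) / size, sum(ns) / size

# Twenty generations of means: each clusters about the template values,
# with no long-term trend -- a 'fluctuating element', not evolution.
means = [generation() for _ in range(20)]
drift_q = max(q for q, _ in means) - min(q for q, _ in means)
```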
As per Law 3 of biology, the law of diversity, we must first determine what ranges of values may be allowed, and so what to measure and what not to measure. In our earlier thought experiment, based on Black’s discovery of latent energy, we added energy to a population and then separately assessed its mechanical and its non-mechanical components, which we varied independently. Based on that idea we now add the energy Qv to a given population of constant size such that its visible presence, V—which is kilogrammes per joule and the inverse of the specific energy—holds constant, meaning that dV = 0. Since the visible presence is being held constant, then the population’s average individual mass, m̅, must be changing. There is no other possibility, granted that dn/dt = 0 and numbers are not changing. So let the change be ∆m̅. The total amount of energy, Qv, that we add depends only upon the precise chemical configuration in force. The energy required per unit increment in mass is then Qv/∆m̅. In the standard formulation of calculus, this ratio becomes a derivative in the limit, as Qv tends to zero. This gives:
lim Qv→0 (Qv/∆m̅)V ≡ (∂Qv/∂m̅)V = Ev.
We can call Ev the given population’s energy capacity at constant configuration. It is the population’s response to the F1 component of the vector force F = F1i + F2j that, through Green’s theorem, produces the circulation that exists all about the boundary; that through Stokes’ theorem determines all behaviour over the surface; and that moves always relative to its two components F1 and F2. This Ev is also a defining property, for it governs the response under all conditions; and it establishes both the divergence and the mass flux, entirely depending upon the number of entities concerned, including in reproduction. Our Ev stays the same even when the population reproduces, and so all properties and qualities are handed on in succession, and according to the template.
We can now rearrange the limit definition for Ev to give the differential equation:
dQv = Ev dm̅.
This can in its turn be integrated to find the energy used in any given and finite increment, over any given range, for any population size, and all at a constant visible presence or constancy in chemical configuration. If the initial and final average individual masses—which are also the divergences—are a and b respectively, then the total amount of energy required to effect this given change in the mass flux will be:
Qv = ∫ab Ev dm̅.
And since the proposal of the Aristotelian template requires that the population’s behaviour be independent of the vagaries of the environment, in so far as that environment might randomly and arbitrarily impose changes in numbers, then Ev will be constant, or very nearly so, over any given range, and for any population size, including those in the midst of reproduction. This is again the definition of freedom from the environment. Therefore, the Ev term can be taken out of the integral to give:
Qv = Ev (b − a) = Ev(m̅final − m̅initial).
Thus the total amount of energy needed to effect a given change in mass flux now depends only on the numbers concerned. Those numbers and that energy change are always in direct proportion and are independent of population size. Whether we have five, five thousand, or five hundred million in our population, or effect any change from one to the other, then all the entities will follow the same abstract template given by Ev. They are all configured in the same way over all those populations, and are again independent of their sizes, or times, or circumstances. If we measure the energy, we know the numbers. This is the definition of a template. We now have something tangible we can measure.
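A small numerical sketch of this step, with an invented value for Ev: when Ev is constant, summing dQv = Ev dm̅ step by step over the range collapses to the closed form Qv = Ev(b − a):

```python
# Integrating dQv = Ev dm̅ when Ev is constant. Ev here is invented;
# the masses are the germination and full-leaf means quoted earlier.
E_v = 2.0                    # hypothetical energy capacity, joules/gram
a, b = 1.171e-3, 0.05        # initial and final mean masses, grams

steps = 1000
dm = (b - a) / steps
Q_summed = sum(E_v * dm for _ in range(steps))   # step-by-step sum

Q_closed = E_v * (b - a)     # the closed form Qv = Ev(b - a)
# Q_summed and Q_closed agree, as the template proposal requires.
```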
By an exactly similar argument we can now hold the average individual mass, or the divergence in the mass flux, constant such that dm̅ = 0. We now add the quantity of energy Qm̅ to the population. The visible presence must now change by ∆V. Its increment is given by Qm̅ /∆V which again in the limit, and as Qm̅ tends to zero, becomes the derivative:
lim Qm̅→0 (Qm̅/∆V)m̅ ≡ (∂Qm̅/∂V)m̅ = Em̅.
We can now call Em̅ the population’s energy capacity at constant individual average mass, and in response to the F2 component of the same vector force, F = F1i + F2j, that produces the circulation and the curls about the surface. It can in its turn be rearranged to give:
dQm̅ = Em̅ dV,
which can in its turn be integrated to find the energy evolved in any given and finite increment in the energy flux at a constant average individual mass, and over that given population. We can in other words always determine the exact divergences in energy. If the initial and final values for visible presence are again a and b respectively, then the total amount of energy required will be:
Qm̅ = ∫ab Em̅ dV.
And since the proposal of the Aristotelian template once again requires an independence from the environment, Em̅ will also be constant, or very nearly so, over any given range, so that the term Em̅ can likewise be taken out of the integral to give:
Qm̅ = Em̅ (b − a) = Em̅(Vfinal − Vinitial),
and which is also something tangible we can measure. It is again a template and independent of numbers.
Once again, the proposal of the Aristotelian template insists that the total quantity of energy required depends only upon, and is always in direct proportion to, the numbers concerned, with the per-entity values never varying as numbers vary. All populations and entities again follow the same abstract template given by Em̅, with the entities being configured and behaving the same way over all populations regardless of size, of time, of circumstance … or of Darwin’s evolutionarily useful variations. So no matter how many get reproduced, they all behave exactly the same way, generation after generation, and there is no such thing as evolution.
We can further conclude two things. Firstly, if the proposal of the Aristotelian template holds good then there exists some sum or ratio—either Ev + Em̅ or Ev/Em̅—that uniquely describes the entities’ behaviour—i.e. its template—irrespective of the population size, or of the environment in which those entities find themselves. Secondly, since volume, in science, consists of (a) the number count; (b) the joules per entity; and (c) the joules per unit mass; then the proposal of the Aristotelian template now insists that the population size can vary without bound, with the intensive variables of joules per entity and joules per unit mass remaining ever the same throughout all such changes, and also being independent of the environment. In other words, the proposal of the Aristotelian template contends that the constraint of constant propagation is independent of the other two constraints … which are also independent of each other. This can again be tested by experiment.
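That testable claim can be phrased directly: under the template, Ev, Em̅ and their sum and ratio are fixed per-entity quantities, so the extensive totals scale purely with n. A minimal sketch, with invented values:

```python
# Hypothetical per-entity capacities under the template proposal.
E_v, E_mbar = 2.0, 0.5      # invented units

def totals(n):
    """Extensive totals for a population of n identical template entities."""
    return n * E_v, n * E_mbar

for n in (5, 5_000, 500_000_000):
    tot_v, tot_m = totals(n)
    # Intensive quantities recovered from any population size are fixed:
    assert tot_v / n == E_v and tot_m / n == E_mbar
    assert tot_v / tot_m == E_v / E_mbar    # the ratio Ev/Em̅ is invariant
```

Any measured dependence of these recovered per-entity values on n would falsify the template claim.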
Since the proposal of the Aristotelian template claims complete independence of numbers, then it can equally well be stated, in emulation of the Joule experiment, as:
(∂m̅/∂V)U = −(∂m̅/∂U)V (∂U/∂V)m̅ = −(1/Ev) (∂U/∂V)m̅
The left-hand equation states that if a given biological population currently holds its given overall quantity of biological matter, U, i.e. its stock of chemical components, constant; and if we change the average individual mass, m̅, over that population; then either the chemical configuration must remain relatively static, or else it must change immediately and complementarily, so that the total energy changes in proportion to preserve that identical stock of biological matter and those numbers, but differently configured; and it must always do so in exactly the same way no matter how large or small that population might be. Or to put it another way, if a biological population starts reproducing (the only way in which average individual mass can change while biological matter remains constant) then all its entities must do so in the same way, and irrespective of numbers. We will always measure given and specified—but identical—values for all intensive variables, with the extensive ones then remaining constant so that they vary only by a factor of n across all populations.
We can easily test for this condition by varying population sizes and noting the results. In acts of reproduction cells bifurcate and we again have mass decreasing due to a change in numbers. The change in the visible presence, V, which is the change in the configuration, must be zero throughout and no matter what the population or its size … which can be measured. But we did not see this with our Brassica rapa which is not promising for the proposal.
The left-hand term in the middle equation clarifies the claims made by the proposal of the Aristotelian template. It is in fact Ev, the constant visible presence energy capacity as we have just defined it … and as seen in the left-hand term of the third or right-hand equation. This again tells us that a given population’s chemical configuration is in no way affected by its size, i.e. before or after reproduction. The claim is again that reproduction and bifurcation are of no consequence. Entity characteristics are invariant. So no matter how large, or how small, any given population may be, the total quantity of its biological matter or stock of chemical components depends only and directly on its specific population numbers. All entities over all population sizes behave the same way and follow the same template. The same holds for each and every developmental stage and configuration through which the population and its entities may pass. It does not matter how many survive or do not survive. Whenever average individual mass changes, the total stock of biological matter retained immediately and commensurately changes, without variation, and conversely. If any entities enter or leave the population by whatever means, the configuration still does not change, and the same ratios are maintained. This is again a template.
The final term upon the right now states that should the average individual mass be held constant while the quantity of biological matter is changed, then the chemical configuration must change by a precisely matching proportion, and again quite irrespective of how many entities are involved, and how many are added and/or removed. But … we can again test all this by varying population sizes and taking measurements.
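The relation in question is an instance of the triple product (reciprocity) rule, which carries a minus sign: (∂m̅/∂V)U = −(∂m̅/∂U)V (∂U/∂V)m̅. A finite-difference sketch on an invented smooth function U(m̅, V)—chosen purely to exercise the identity—confirms it numerically:

```python
# Verify (dm/dV at constant U) = -(1/(dU/dm)) * (dU/dV) by finite
# differences, on an arbitrary smooth toy relation U(m, V).
def U(m, V):
    return 3.0 * m * m + 2.0 * m * V + V * V   # invented; any smooth U works

h = 1e-6
m0, V0 = 1.0, 2.0
U0 = U(m0, V0)

dU_dm = (U(m0 + h, V0) - U0) / h     # (dU/dm) at constant V
dU_dV = (U(m0, V0 + h) - U0) / h     # (dU/dV) at constant m

def m_at(Vt):
    """Solve U(m, Vt) = U0 for m by bisection (U rises with m here)."""
    lo, hi = 0.0, 2.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if U(mid, Vt) < U0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

dm_dV_constU = (m_at(V0 + h) - m_at(V0)) / h   # (dm/dV) at constant U
rhs = -(1.0 / dU_dm) * dU_dV
# dm_dV_constU and rhs agree (both near -0.6 here), minus sign included.
```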
The proposal of the Aristotelian template is now claiming that both the mechanical and the non-mechanical energies are independent of each other, and independent of number, size, and environment … so making it very easy to test. We simply get some seeds—Brassica rapa, for example—and put first four seeds, then ten seeds, and then fourteen seeds into separate pots; and we take measurements of mass and energy to see if there are indeed no differences amongst the plants as they are grown, all the time keeping all growing conditions the same, which we in fact did by carefully monitoring heat, light, and water. Our results are in Table 1. They again do not bode well for this proposal.
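The shape of that test is easily sketched. The per-plant masses below are invented stand-ins—the measured values are those in Table 1—but they show what counts for or against the proposal:

```python
# Pots of 4, 10 and 14 plants grown under identical conditions.
# Masses in grams are hypothetical illustrations only.
pots = {
    4:  [0.049, 0.052, 0.050, 0.051],
    10: [0.041, 0.043, 0.040, 0.042, 0.044,
         0.041, 0.043, 0.042, 0.040, 0.044],
    14: [0.033, 0.035, 0.034, 0.032, 0.036, 0.034, 0.033,
         0.035, 0.034, 0.033, 0.032, 0.036, 0.035, 0.034],
}

means = {n: sum(m) / n for n, m in pots.items()}
# The template proposal predicts identical per-plant means at every n;
# a systematic decline with density, as here, counts against it.
spread = max(means.values()) - min(means.values())
```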
Indeed … before running any such tests and perusing their results, we can spot an immediate problem. All these claims made by the proposal of the Aristotelian template are in flagrant breach of the Liouville theorem. This instead states that all values—including the n entities maintained and the q moles of components constituting each of them, as well as how they are configured—change in complementary ways to maintain their common values. These are the impulses-to-attain-states and the impulses-to-acquire-components we met earlier. They are the q- and n-coordinates for the Liouville theorem, and as are encoded in DNA. These all change complementarily to preserve the system … and they are linked. By the Liouville theorem, if q or the number of components held per entity changes, then there is a matching change in the number of entities maintained over the population. But this linkage through n is, of course, the very thing the template proposal seeks to deny. The template proposal instead insists that n and q are completely independent. And … anything that breaches the Liouville theorem also breaches the second law of thermodynamics, the Heisenberg uncertainty principle, the Biot-Savart law, the Helmholtz theorem, Newton’s laws of motion, d’Alembert’s principle, and almost every other known scientific law. These are serious consequences.
It is nevertheless important to keep the core issue in mind. In spite of these breaches, and even if any experiment we conduct confirms that the proposal of the Aristotelian template simply must fail, that would still not do what is required. It would still not positively prove Darwin’s central assertion about the existence, the capabilities, and the role played by his slight but useful variations. The whole principle underlying the template is that known scientific laws may be freely flouted to maintain it. That is, so its proponents declare, the singular virtue and intent of that template. It is beyond science; it is beyond measurement; and it is beyond that “limiting” form of rationality:
Perhaps the only rational theological claim that can be made about the design argument is the one that was once made by John Henry Newman, the leader of the Oxford Movement in the Anglican Church in the nineteenth century: “I believe in design because I believe in God; not in a God because I see design” (Sarkar, 2007, p. 48).
The immediately above granted, we must test Darwin’s ideas independently of the truth or validity of the proposed Aristotelian template. The core issue is not simply variations, but their heritability—their ability to cause evolutionary changes of the kind Darwin asserts. Darwin seems to imply that phase volumes and boundaries and circulations can indeed be amended by succession, and such that given populations and subpopulations can gradually come to hold values in common that are altogether different from those that their progenitors previously held in common. He claims that given groups of progeny can become reproductively differentiable from other groups of progeny, with all of them being derived from the same progenitors, and in spite of huge current differences between them. This is the principle we must test and validate, and again quite independently of any attempts to refute the proposal of the Aristotelian template, for many of those arguments simply cannot be refuted by any means:
… Reformed Creationism accepts parts of evolutionary biology, including some role for natural selection. It accepts that blind variation and natural selection—”Darwin’s law of higgledy-piggledy” as the physicist John Herschel dismissively called it—can explain phenomena such as the evolution of drug resistance in bacteria or pesticide resistance in insects. Most versions of Reformed Creationism even accept that natural selection may have modified traits such as the size and shape of bird beaks. For instance, they sometimes accept that natural selection molded the beaks of Darwin’s finches in the Galápagos Islands, where the size of available seeds selected for the form of beaks. These versions of Reformed Creationism generally accept common descent: that all extant organisms are descended from a single ancestor in the recesses of deep time, presumably the first cell.
Nevertheless, Reformed Creationism urges us to reject the view that evolutionary theory, coupled with our increasing knowledge of the physics and chemistry of living organisms, will eventually explain the emergence of all biological phenomena. Moreover, to get a full theory, it claims, we will have to embrace supernatural (or at least extranatural) mechanisms. In particular, we will have to invoke the operation of a designing intelligence guiding the process of organic change. Reformed Creationism is called Intelligent Design (ID) (Sarkar, 2007, p. 2).
No matter what the arguments deployed, the centrepiece of Darwin’s argument would seem to be this ∂n/∂t—the responsiveness of biological populations to number. We have already seen, however, that every population can be described as a vector, P, that incorporates number, both as the number of entities in a population, n, and as the moles of chemical components, q, of which those entities are composed: P(n, q). We also have as our population function f(n, q, w), along with its gradient, ∇f = (∂f/∂n, ∂f/∂q, ∂f/∂w). These together determine every possible permutation for every possible population and entity. The entirety of the forces active upon a given population that produces its circulation in mass is also a vector, being F(F1, F2). The relevant axes are all orthogonal. They in each case define the entire space or volume. As in Figure 59, the circulation of F(F1, F2) occupies an entire plane orthogonal to the Wallace pressure, P. The component F1 in F, derived from P and itself orthogonal to both P and F2, is source to all mechanical chemical work in the circulation; while F2, in F, is source to all non-mechanical chemical work through being the measure of the energy density. Every possible biological permutation is therefore covered.
The Wallace pressure, being a flux, depends upon the numbers, n. It curls about those n entities, thus establishing the circulation in energy from p̅, with the resulting force F from that curl, about those point or molecular entities, then acting directly upon the resources taken from the environment to produce the mass flux, M. This mass flux is then distributed both about the boundary and over the entire surface as the circulating mass of chemical components, and as it also curls about the same entities, at the divergence of m̅, and also at the energy density of P/M, thus enabling the entities to (a) do their internal physiological and metabolic work due to their chemical configurations; and to also (b) do their external mechanical work which is to gather up all the energy and the resources of chemical components, as the doing of their work with the inevitable accompaniment of heat, and so to maintain themselves and produce progeny.
We can now be more precise and say that the Aristotelian template’s additional claim is that since Ev is independent of Em̅, then metabolism and physiology are completely independent of population size. There can therefore never be stresses imposed by the environment that cause changes through n, for that would be a dependency of both Ev and Em̅ on n. So by the template proposal, Ev, Em̅ and Ev/Em̅ are all independent of n: there can be no such thing as Darwinian competition, and therefore no inheritable change in behaviour, nor in any of these three variables, simply on account of numbers. This claim is also easy to subject to testing … as we in fact did in our Brassica rapa experiment. But since the proposal breaches the Liouville theorem, which insists that changes in q and n—chemical components and entity numbers—are linked, there is little hope for the proposal’s success in any such empirical test. But we nevertheless need to isolate variations of different kinds, along with their causes. This means attending to the work done and heat emitted.
The total energy that any given population uses to do its work, over its generation, is ∫P dT. This covers:
 all the work, δW, done, which is an inexact differential;
 all the heat, δQ, given off, which is also an inexact differential; and
 all the chemical bond energy, H, the population uses to bind all chemical components, in whatever configuration, and as components are removed from the environment to transact both (a) and (b), but with this last instead being an exact differential.
The proposal of the Aristotelian template is now that the exact differential of H is completely independent of whatever changes might occur in the inexact differentials in (a) and (b). So if a given task previously requiring B joules of energy can now be achieved by the lesser quantity A, say because external temperatures have risen, then the proposal of the Aristotelian template states that H is left unchanged even though less work is being done, and less heat is also being evolved. But if less of H is used, then less has emerged in work and heat, and there is more that can arrive at the next t … but it is forbidden to arrive if this proposal holds. Or if the same task now requires the greater quantity of C joules of energy because, say, external temperatures have in fact fallen, then the proposal still insists that H remains unchanged even though the work and heat needed to effect the same change have increased. This removes more energy through work and heat, thus leaving less to arrive at the next t … which the proposal again ignores. This is the independence it is claiming, and that we must validate or refute by measurement.
The proposal of the Aristotelian template now seeks to override the fact that the work and the heat implied in any given Wallace pressure of P watts are at all times inexact differentials. Any given P watts may be directed towards a given quantity of mechanical work initially described as a specified W = Fd joules, and so seemingly needing a specified quantity of energy to effect it. But the work and the heat actually required, in a real situation, can vary freely along an infinite array of more circuitous paths forming a continuum. So a first liana creeper could easily gain say two feet of vertical support, and reach the light of the upper forest canopy using 13 turns about its supporting tree, while two others near it could use 9 and 17 turns respectively to reach the same vertical height … which are differences in work and heat. Or as Dinerstein discovered in his study of Rhinoceros unicornis, if two rhinoceroses browse on two otherwise identical samples of Litsea monopetala saplings, but the one rhinoceros stands one centimetre taller at the shoulder than the other, then even though the work done by each rhinoceros may be the same, with identical masses of herbivory being removed: (a) due to their differences in height the saplings will show differential responses in their subsequent growth and leaf abundance; and (b) there will be consequences for all future rhinoceros browsing potentials for each (Dinerstein, 1992, pp. 701–704). The proposal of the Aristotelian template insists that there is no long-term effect, and that these differences caused by work and heat are not heritable.
As we have already observed, by the first law of thermodynamics the sum of work and heat—δW + δQ—is always an exact differential, dU … even though the individual quantities are inexact. Any other proposal, such as that of the Aristotelian template, is in clear breach of that first law.
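This path-dependence can be made concrete with a minimal numerical sketch. The example below uses a textbook monatomic ideal gas rather than a biological population, and its values (one mole, 300 K, a doubling of volume) are arbitrary illustrative choices: two different paths between the same two states exchange different quantities of work and heat, yet the exact differential dU is identical for both.

```python
# Sketch: delta-W and delta-Q are path-dependent, but their net, dU, is not.
# Assumed setup: 1 mol of monatomic ideal gas, same initial and final state.
import math

R = 8.314            # gas constant, J/(mol K)
n = 1.0              # moles
T = 300.0            # K; initial and final temperature are the same
V1, V2 = 0.01, 0.02  # m^3; the volume doubles

def internal_energy(temp):
    """Monatomic ideal gas: U = (3/2) n R T, a state function."""
    return 1.5 * n * R * temp

dU = internal_energy(T) - internal_energy(T)   # same end states, so 0

# Path A: reversible isothermal expansion from V1 to V2.
W_A = n * R * T * math.log(V2 / V1)
Q_A = dU + W_A       # first law with W counted as work done BY the gas

# Path B: isobaric expansion at the initial pressure P1 to V2,
# then isochoric cooling back down to T.
P1 = n * R * T / V1
W_B = P1 * (V2 - V1)     # work is done only on the isobaric leg
Q_B = dU + W_B

print(f"dU (both paths): {dU:.1f} J")
print(f"Path A: W = {W_A:.1f} J, Q = {Q_A:.1f} J")
print(f"Path B: W = {W_B:.1f} J, Q = {Q_B:.1f} J")
```

The two paths do different work and exchange different heat, yet both return dU = 0, exactly as the first law requires.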
The sum of the work done and the heat evolved by a biological population is at all times the total chemical bond energy, measured as H joules, retained by the population. Since the total biological energy evolved over the generation is Hg = ∫P dT, then the total at each point t over T must simply be H at every point, where H = P. Thus the chemical bond energy, H, describes the population’s entire state at any given moment, t, and as it also follows its designated path(s) of work and heat … and so that it can arrive at the next stationary state, at the next t, using the exact energy of dH, and so that it can thus complete the cycle of the generations by proceeding through all its succeeding ts over T. This is so no matter what may be the precise quantities of work done and heat evolved as δW and δQ. These cannot be separately and independently specified, even though their sum can always be, for each is again separately an inexact differential, with their total being exact. So the Darwinian and the Aristotelian proposals are at present identical. We must separate them. We must do it through work and heat.
The chemical bond energy, H, that the population uses over its cycle is at all times a state variable. When it is integrated we obtain the correct and exact value between any given upper and lower limits. However, and by contrast, separately integrating the inexact differentials of the work done and the heat evolved—δW and δQ—does not of necessity produce any such exact difference because the population is never in possession of any such heat and work. These are instead processes and exchanges with the surroundings. Liana creepers, rhinoceroses and Litsea saplings do not always do the same thing. This is again how heat and work are defined, and this is again the first law of thermodynamics. There is thus no unique value for work or heat to match the unique values for H, ∆H and/or dH which the population can instead possess, and through which precise ranges it can pass. Those values and ranges in H do not in themselves define specific quantities of work and heat, either of δW or of δQ. The population could therefore have followed a completely different path from whichever one it did in fact follow in any specific case. It could have returned the same values for H and so forth in spite of processing different quantities of work and heat. Any string of consecutive stationary states of exact differentials in H, over any time span ∆t, together uniquely define the states for those n entities … but they once again do not define the work and heat that may have emerged from those states, whether individually or as a population. Each state in H adopted over any given ∆T is then one that permits whatever might be the ongoing energy transactions of work and heat occasioned by the prevailing capturing and escaping tendencies imposed, and in conjunction with the given states adopted. But as in our Law 3 of biology, the law of diversity, other transactions are still allowed.
The chemical bond energies, H—which are again exact differentials—are handed on, at each t over T, to the next instant, and so across the entire time span, T and so from generation to generation. Thus for every value of the Wallace pressure of P watts of work and heat, there exists a stationary value of H joules and whose measure is identical to P, such that H = P. This exact chemical bond energy then permits the given entity or entities to undertake any and whatever ongoing but inexact transactions of work and heat totalling P watts … with all continuum distributions or transactions of work and heat being possible using exactly the same H or chemical bonds, for that is again the nature of inexact differentials and their transactions. The chemical bonds and their energies are exact and welldefined and are the sum of work and heat, but the individual processes of the work and the heat are not individually specified … although they at all times remain determinable and measurable for their sum is eminently predictable through those states, but the apportionment between work and heat is not … until it has occurred.
The energy flux P watts prevailing at any time over a population gives rise to the force F(F1, F2) that additionally produces the mass flux of M kilogrammes per second at every point t over T. That mass flux is a measurable consequence of the work done through those P watts, and using the H joules of chemical bond energy. The total mass of chemical components taken on and given off circulates over the generation, and entirely on account of that work done by the population P(n, q) at each t over T.
The circulation in mass produced by a population over a generation totals Ug = ∫M dT kilogrammes. The total mass of components held at any given time t, and that is directly incorporated within the entities, is immediately U kilogrammes where M = U. This U is the stationary state of the ongoing mass flux of M kilogrammes per second. Both the mass flux of M kilogrammes per second and its stationary state of U kilogrammes are exact differentials. The mass flux M is the mechanical and the gravitational mass that is at any time being lost, gained, or exchanged with the environment, while the biological matter, U, is that same mass of chemical components with the same numerical value. The two are linked through the chemical bonds of H joules the biological entities use to bind the components, and so through the energy density of V kilogrammes per joule, and as is being explicitly used to link those two through its curl created of P/M. However … that mass of components still does not represent either any given entity’s, or the population’s, biologically relevant inertia as inertia is defined in science, and so relative to a biological system.
Specific energy is also a part of the work being done, and the heat being evolved. It is exact. But it must also have—as all other variables do—both its stationary and its dynamical equivalents. Visible presence, P/M, is a stationary expression. It describes a state. It can also of course be used to describe a succession of states. It is through the visible presence, which is the inverse of specific energy, and the kilogrammes of chemical components it immediately implies per each unit or joule of energy, that we can eventually access the moles, q, of chemical components being bound together, and as the M kilogrammes of flux per second provided as U kilogrammes of biologically bound matter. But once that specific energy is known then there must also be a processing rate for the energy, and per each unit of mass that institutes it, and that is expressed as W watts per kilogramme or joules per second per kilogramme or similar. It is a time rate of consumption for the specific energy in force; or, in the case of visible presence, specific energy’s inverse rate. There in other words exists a work rate, W, for the visible presence where W is the dynamical rate and V is its stationary and inverse complement so that 1/W = dV/dt. The two share the same but inverted units as each other.
As per the first law of thermodynamics, the quantity of mass contained in the mass flux—and whether it be stationary as U or dynamic as M—is at all times independent of the work being done and of the process or the path being pursued to acquire it. Its values are exact, and it has the same value of U or M either in kilogrammes, or as a time rate, no matter what may be the precise path or paths of work and/or heat used to acquire it. If a first rhinoceros walks three paces to browse two kilogrammes of herbivory while another walks four to do the same, the mass of herbivory is the same regardless. The total and the exact differentials of both M and U and P and H describe the population’s complete state and path in mass; and as also in moles of specified chemical components retained; as also per unit time. These mass values can each be integrated between given upper and lower limits to produce unique values for M, U, ∆M, ∆U, dM, and/or dU. But since M and U and P and H are exact, while δQ and δW are inexact, then some integrating factor is of course required to specify the energy used, or else important core biological values will remain unknown … and as is indeed currently the case throughout biology and ecology where this difference between exact and inexact differentials is not properly recognized.
As a partner to the H joules of chemical bond energy we also have an h̅, over the population. This has already been noted as the population’s true biological inertia where numerically, h̅= p̅. When scaled up and expressed per the biomole this h̅ or p̅ remains intensive but becomes R darwins with only the scale differing, and where one darwin is one joule per biomole. So … as our two above rhinoceroses have expended different quantities of energy to browse the same quantities, here is where we can register that circumstance. Since the number of darwins for a population is intensive it defines both a divergence and a curl and is thus a defining property for any population. Since work and heat are inexact differentials then it is entirely possible for two given entities or populations to have the same R or h̅ but to have different apportionments in work and heat, and also to have acquired different quantities of mass by having again partaken of different quantities of mechanical and nonmechanical work even though the totals for energy are ever the same … and vice versa. But since the locus of that emission or acceptance of energy—however it might ultimately have been apportioned as work and heat—is the biological entity and the chemical bonds it deploys; then that chemical bond energy, and that h̅ over the population, is again the correct reading for a population’s biological—rather than merely mechanical—inertia. It is the nexus of chemical bonds that is doing and receiving work and it does not cross any boundaries. It instead causes the transference of work and heat.
Given these consequences of the difference between exact and inexact differentials, then if the same work can be done by completely different sets of chemical bonds and/or configurations of components, it is vital this be recorded. So let there, more generally, be two arbitrary functions G(x, y) and H(x, y) that together produce, as we have here, an inexact differential equation whose difference, δF, is given by δF = G(x, y)dx + H(x, y)dy (Fitzpatrick, 2006b; Saad, 2007). The integral over a closed path is not necessarily zero, ∮ δF ≠ 0; and their partial differentials are not necessarily equal, ∂G/∂y ≠ ∂H/∂x. In the same way, two given quantities of chemical components formed with given quantities of chemical bond energy need not necessarily represent the same masses and/or types of chemical components acquired or utilized over the same time interval. There must, however, exist a solution such that δF = Gdx + Hdy = 0, and which we can suitably generalize. It is in other words still possible to do exactly the same work over the same time interval using the same energy, even though the components and/or bond arrangements fulfilling it might be different. We simply need to specify the differences so we can take measurements and draw conclusions. This is the definition of a variation, and we want to know its size, scope, and frequency.
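These two properties of an inexact differential can be checked numerically. The sketch below chooses G = y and H = −x purely for illustration (they are not drawn from any biological data); the closed-path integral of this δF around a unit square does not vanish, confirming inexactness.

```python
# Sketch: for delta-F = G dx + H dy with G = y and H = -x,
# the partials disagree (dG/dy = 1, dH/dx = -1), so delta-F is inexact
# and its integral around a closed loop need not be zero.
def G(x, y):
    return y

def H(x, y):
    return -x

def closed_integral(path, n=1000):
    """Midpoint-rule line integral of G dx + H dy around a closed polyline."""
    pts = path + [path[0]]           # close the loop
    total = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        dx, dy = (x1 - x0) / n, (y1 - y0) / n
        for k in range(n):
            t = (k + 0.5) / n
            x, y = x0 + (x1 - x0) * t, y0 + (y1 - y0) * t
            total += G(x, y) * dx + H(x, y) * dy
    return total

# A unit square traversed counterclockwise:
square = [(1, 1), (2, 1), (2, 2), (1, 2)]
loop = closed_integral(square)
print(f"closed-path integral of delta-F: {loop:.4f}")   # nonzero, so inexact
```

By Green’s theorem the loop integral equals the enclosed area times (∂H/∂x − ∂G/∂y) = −2, which the numerical result reproduces.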
We now take our expression for equivalence, δF = Gdx + Hdy = 0, and divide it through by H dx. We can then express dy/dx—which would in our case be the visible presence—in terms of given values of G and H and so as a constant: dy/dx = −G/H. We are now in other words highlighting, and searching for, some path within an entire family of possible paths that all have the same visible presence. We are instituting that search by expressing it as a part of an entire family of curves in the xy plane, all of which share this common value of −G/H as their slope. They all in other words share the constant value c = −G/H. We can now express that entire family of curves, which also contains the solution to our original inexact differential, as Γ(x, y) = c. But whereas we could not properly differentiate our original inexact differential, this Γ is easy to handle. When we differentiate it we get: dΓ = (∂Γ/∂x) dx + (∂Γ/∂y) dy = 0. We can now get a value for dy/dx by dividing the whole by dx to give (∂Γ/∂x) + (∂Γ/∂y) dy/dx = 0. But since we have already determined that dy/dx is the constant −G/H, we can substitute that constant straight back in to give (∂Γ/∂x) − (∂Γ/∂y) G/H = 0. Our original two partial differentials, which were inexact and did not equal each other, now hold something in common—their function, Γ—and so they indeed now equal each other, through that function’s differential, for their slopes are now known to be exactly equal. When taken together, these two are simply H (∂Γ/∂x) = G (∂Γ/∂y). That being so, we are at liberty to set them independently equal to some further arbitrary function, σ(x, y), such that (∂Γ/∂x) = σ(x, y)G and (∂Γ/∂y) = σ(x, y)H. So thanks to our new function we now have G = (1/σ) (∂Γ/∂x) and H = (1/σ) (∂Γ/∂y).
We have once again gone through a process that should by now be familiar. We have tamed our original inexact differential of δF = G(x, y)dx + H(x, y)dy. We now have δF = 1/σ ((∂Γ/∂x) dx + (∂Γ/∂y) dy) … or, very much more simply, δF = dΓ/σ. Our original inexact differential has become exact using a suitable integrating factor, σ. In spite of their differences in their mass flux M, these various biological populations now achieve the same purposes for they hold that σ in common, no matter what their separate paths. We now only have to give that σ some viable and measurable biological significance and we can compare them directly in that property.
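Continuing the same toy example, for δF = y dx − x dy the factor σ = 1/y² plays exactly the role of this integrating factor: σδF becomes the exact differential of Γ = x/y, so its closed-path integral vanishes and its open-path integrals depend only on the endpoints. A hedged numerical sketch, valid only where y > 0 so that σ is defined:

```python
# Sketch: sigma = 1/y^2 is an integrating factor for delta-F = y dx - x dy,
# since d(x/y) = (1/y^2)(y dx - x dy). Check that sigma*delta-F behaves
# as an exact differential: zero around loops, path-independent otherwise.
def sigma(x, y):
    return 1.0 / y**2          # integrating factor (assumes y > 0)

def gamma(x, y):
    return x / y               # the resulting exact function Gamma

def integral(path, n=5000):
    """Midpoint-rule line integral of sigma * (y dx - x dy) along a polyline."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dx, dy = (x1 - x0) / n, (y1 - y0) / n
        for k in range(n):
            t = (k + 0.5) / n
            x, y = x0 + (x1 - x0) * t, y0 + (y1 - y0) * t
            total += sigma(x, y) * (y * dx - x * dy)
    return total

# Closed loop (y > 0 throughout): the integral of sigma*delta-F is ~0.
square = [(1, 1), (2, 1), (2, 2), (1, 2), (1, 1)]
print(f"closed-path integral: {integral(square):.6f}")

# Two different open paths from (1,1) to (3,2) agree with each other
# and with the endpoint difference Gamma(3,2) - Gamma(1,1) = 0.5.
path_a = [(1, 1), (3, 1), (3, 2)]
path_b = [(1, 1), (1, 2), (3, 2)]
print(integral(path_a), integral(path_b), gamma(3, 2) - gamma(1, 1))
```

The same δF whose closed-path integral was previously nonzero now integrates to zero around any loop once multiplied by σ, which is precisely what “tamed” means here.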
We can now use the same processes we have used earlier, but this time upon the mass flux—or, even better, upon its divergence in mass, m̅—so we can properly study energy, work, heat, mass and number in biological populations, and by applying a suitable integrating factor … and again in exactly the same way that temperature and entropy together describe all a system’s states and changes in configurations, with temperature—which is easy to measure—then being the integrating factor. In the Joule experiment, temperature can stay the same while entropy changes, and we will soon have a similar situation in biology where different entities can have the same masses and the same components, but we can measure their differences. We are interested in those biological entities that can achieve exactly the same purposes with different chemical components, or else those that have identical components but achieve different purposes. We must have access to both possibilities.
A ‘conjugate pair’ is a combination of variables, one intensive and the other extensive, whose product is energy, and that therefore allow both systems and their energies to be tracked. Pressure and volume, PV, form such a conjugate pair. Intensive pressure differences are the force that drive extensive changes in volume, dV, so causing the system to change in energy. The two together can track all energy gained and lost to or by a system through mechanical work. If, as in the Joule experiment, a valve is opened internally but there is no net volume change out into the environment and across the boundary, then no net mechanical work is done into that same environment and the PdV term is zero to show that no energy is lost by that specific mechanical means measured by this conjugate pair.
Temperature and entropy also form a conjugate pair. Temperature is the intensive factor whose gradient, or difference, drives extensive changes in entropy, with the product of the two again being energy. The difference is measured in heat. However, unlike the pressurevolume conjugate pair, not all of that heat is able to do mechanical work. Entropy nevertheless tracks that energy in the same way. Importantly: temperature can spontaneously follow a gradient, at which point entropy increases. Volume similarly increases when pressure spontaneously follows its own gradient.
If temperature and volume are held constant—and since the surroundings are external to the valve arrangement placed within the system’s boundary this is exactly the Joule experiment—then the Helmholtz energy measures the amount of work that could be extracted from the system, given its internal increments in volume, and which would be accompanied by suitable changes in pressure, should they have happened externally. In other words, the Helmholtz energy decreases whether or not mechanical work is done into the surroundings. Some work could have been done before the valve was opened; much less can be done after it is opened; and this change is reflected in the system’s decrease in both pressure and Helmholtz energy and whether that work is actually done or not. This last depends upon a boundary over which a volume change can occur, but a measurable change within the system occurs whether that boundary is crossed or not. Entropy allows us to measure this.
The driving force in any such conjugate pair is always the intensive property. Its conjugate is the extensive variable that measures the extent of some displacement, thereby stating an extensive energy value. The intensive variable is then also the derivative with respect to the extensive variable, and when all other extensive variables are held constant.
When temperature is held constant, as in the Joule experiment, then the entropy increases (while, under those conditions, the volume must also increase, for although the measurable space is extensive, two intensive properties are disposed across it). That entropy change can be measured. It reflects the change in the system’s configuration, and so in joules per entity and in joules per unit mass: whether the necessary mechanical work is actually done across the boundary or merely could have been done, and whether mechanical capability is or is not surrendered, the energy and configuration changes occur regardless.
If an entire cycle of heat and work is undertaken and no net mechanical work is done in that as much is surrendered in one half of the cycle as is received on the other, then that special limit cycle can be studied and its properties determined … as is the case with the ideal and reversible Carnot cycle or heat engine. With a suitable integrating factor in place for biology, it will then be similarly possible to find such limit cycles and variables to measure, and that will similarly respond to all possible changes in energy over the population as all those variables—including reproduction and changes in population sizes—are systematically varied.
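The reversible Carnot limit cycle mentioned here can itself be sketched in a few lines. The figures below are arbitrary illustrative choices (one mole of ideal gas between hypothetical 500 K and 300 K reservoirs), and the sketch relies on the standard Carnot property that the adiabatic legs make the two isothermal volume ratios equal: over the closed cycle dU sums to zero, so net work equals net heat, and the efficiency is the Carnot bound 1 − Tc/Th.

```python
# Sketch of a reversible Carnot cycle for 1 mol of ideal gas.
# Assumption: the adiabats give the isothermal compression at Tc the same
# volume ratio as the isothermal expansion at Th, as in the standard cycle.
import math

R, n = 8.314, 1.0
Th, Tc = 500.0, 300.0       # hypothetical hot and cold reservoirs (K)
V1, V2 = 0.01, 0.02         # isothermal expansion at Th from V1 to V2

Q_hot = n * R * Th * math.log(V2 / V1)    # heat absorbed at Th
Q_cold = n * R * Tc * math.log(V2 / V1)   # heat rejected at Tc (same ratio)

# Over the closed cycle dU = 0, so the net work equals the net heat:
W_net = Q_hot - Q_cold
efficiency = W_net / Q_hot

print(f"net work per cycle: {W_net:.1f} J")
print(f"efficiency: {efficiency:.3f}  (Carnot bound 1 - Tc/Th = {1 - Tc/Th:.3f})")
```

The adiabatic legs exchange no heat at all, so the entire heat budget sits in the two isothermal legs; this is what makes the limit cycle so convenient to study.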
If, for example, a population’s numbers are changing, then the reproductive potential that we have already learned to measure, A, is also being either lost or gained, for it is an extensive property. The lost entities take their potential with them. Which of these depends upon the entities’ chemical configuration—whether they are pre or postreproductive and by how much—and as they are being lost or gained. We must therefore be able to determine and quantify that configuration so we know the reproductive consequences. This is of course the Gibbs energy which—expressed per unit mass—we already know how to measure as the visible presence. Since the Gibbs energy is a potential, then it is the inverse of the specific energy.
If a thermodynamic system’s temperature and volume are both held constant, but heat is added, then the pressure must increase and the Helmholtz energy must also increase as the entropy decreases. The system can now do an increased amount of work through now being able to move through an increased volume, and/or the same volume at a greater pressure. In the same way, if biological entities of the same configuration and average individual mass are added to a system, then its reproductive potential must increase by that same proportion. If we bring in more young, we will eventually have more adults. Therefore, the potential to produce those adults increased.
If a system’s temperature and pressure are now held constant, while heat is added, then the chemical configuration must also be holding constant even as the volume increases at that constant pressure, and we this time have a change in the total of the Gibbs energy as that heat is introduced, with the necessary mechanical work also being done at that constant pressure. Entropy incorporates the energy concentration as joules per kelvin, which stays constant as the number of joules within the system increases. On this occasion we have VdP rather than PdV: the concentration of energy, the pressure, remains constant, as does the temperature, while the volume through which that concentration is expressed increases, and there is again an overall increase in energy. There is no change in configuration, merely a change in scale. This is also the essence of the possibility for reproduction in that the number of entities can easily increase as all other factors remain the same. Entities of exactly the same kind and developmental stage can easily immigrate into a given community.
Entropy assists in tracking configurations under energy, and so the behaviour, the number, and the configurations of particles. Thus whenever temperature and pressure are held constant, then volume is always directly proportional to q, the number of moles contained, as long as their configuration is held constant. And when temperature and volume are instead held constant then pressure is directly proportional to numbers. And if numbers and temperature hold constant then pressure is instead inversely proportional to volume; and if temperature changes but numbers do not, then pressure, or volume, or else both, change in direct proportion to temperature, with entropy always recording numbers and configuration. We want—and we have in point of fact already found!—an integrating factor that will achieve similar purposes in biology.
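The four proportionalities just listed are those of the ideal-gas relation PV = qRT, and can be verified directly. In this sketch q stands in for the moles of components, and the numeric values are arbitrary:

```python
# Sketch: the four proportionalities above, via P V = q R T.
import math

R = 8.314   # J/(mol K)

def pressure(q, T, V):
    return q * R * T / V

def volume(q, T, P):
    return q * R * T / P

# Constant T and P: volume is directly proportional to moles.
assert math.isclose(volume(2.0, 300, 101325), 2 * volume(1.0, 300, 101325))
# Constant T and V: pressure is directly proportional to moles.
assert math.isclose(pressure(3.0, 300, 0.01), 3 * pressure(1.0, 300, 0.01))
# Constant q and T: pressure is inversely proportional to volume.
assert math.isclose(pressure(1.0, 300, 0.02), pressure(1.0, 300, 0.01) / 2)
# Constant q and V: pressure is directly proportional to temperature.
assert math.isclose(pressure(1.0, 600, 0.01), 2 * pressure(1.0, 300, 0.01))
print("all four proportionalities hold")
```

Each assertion holds one pair of variables fixed and varies a third, mirroring the constant-temperature, constant-volume and constant-pressure cases in the text.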
By the Helmholtz theorem of the vector calculus the divergence in mass, m̅, uniquely specifies any population. It is also easily measured. The vector calculus tells us that the mass flux implied by this m̅ curls about the entities—∇ × M—and that when this curl is combined with the divergence in energy—∇ · P = p̅—it creates the visible presence or energy density, p̅/m̅, that is equally easy to measure, and that is also the curl in energy—∇ × P—and that together with the divergence in mass—∇ · M = m̅—guarantees that any population is unique. The reverse correspondences also hold in that when the average individual energy, p̅, is measured, then the curl in energy is also and immediately measured through that divergence; and it also and reciprocally returns the values for both the divergence and the curl in mass for we can again measure it through that P/M, and so through m̅. Since the average individual mass, which is the divergence in the mass flux, establishes the uniqueness of the curls and the divergences in both of the defining fluxes for any and all populations, then it can uniquely specify a population, and quite independently of the population’s specific size at any time. The two together—p̅ and m̅—define the states and the distributions of those states.
As we saw with Gurney et al, and with Brown et al, then—and as confirmed by the Liouville and Helmholtz theorems—to specify the population count at any one point is to specify both the actual count and the distribution everywhere (Brown et al, 2004; Gurney et al, 1996). The divergence in the mass flux thus specifies a population everywhere in terms of both its mass and its energy once we have made a count and so established the flux anywhere. We therefore use this divergence in the mass flux, m̅, as an integrating factor, and so that we can always have an exact differential available when considering the inexact differentials of work and heat.
Every stationary state or variable in a population again has its path or dynamic equivalent whose magnitude is the same, but that is expressed per unit of time. Thus the energy density, V, has its inverse dynamic equivalent in the work rate or processing rate for the population, W. This dynamic work rate must now be made extensive and given suitable units of measure. It must also act as an exact differential so it can partner the intensive variable of average individual mass, m̅, which will then be its complement integrating factor.
Now that we know the requirements, we can straight away see that the Gibbs-Duhem equation we have already given in fact illustrates our usage of divergence as an integrating factor, along with its partnering extensive variable of the engeny, S, we have also already derived and introduced. The equation is once again:
m̅µ = m̅dS = dU + dH − Σiµi(dvi − dmi).
As well as that pairing, it contains both a biological potential over the whole population, µ, and a biological potential per each entity, µi, which is given by µi = (∂U/∂mi)V,S,N − (∂U/∂vi)m̅,S,N.
We already know, for we defined it when we first met it, that the biological potential, µ, is a function of the three engenetic burdens of fertility, φ; of components mass, κ; and of conformation, χ. But even though we met them earlier from the perspective of vectors and unit normals, we have just now—and so for the second time—derived both this κ and this χ. They are, respectively, Em̅ and Ev. They each contribute to µ, which is dS, the differential of engeny, S. And … this engeny is the quantified time rate expression of the specific—and so the Gibbs—energy. These intensive components are scaled up and expressed per the biomole, as required by the Euler and Gibbs-Duhem equations. As we learned when we met it previously, engeny is therefore measured as joules per kilogramme per biomole per second; or else as watts per kilogramme per biomole; or else as darwins per kilogramme per second. It therefore measures all possible variations in mass, in energy and in numbers as may be characteristic of any population and entity. But its rate of change, which is again the biological potential, is expressed simply as watts. It is expressed as watts because all biological matter—whether expressed through kilogrammes, or else through the entities in which those kilogrammes are incorporated—is held together with chemical bonds which are simply joules of energy. The rate of change of those joules is the creation, processing, transformation, reproduction, metabolism or otherwise of some biological entity and/or its mass. It is therefore always simply watts, and quite irrespective of cause. As such, biological potential, µ, states how the engeny is changing no matter whether it is changing because of the moles and/or types of chemical components assigned to each entity; or because the biomoles of entities across a population are changing; or because the energy density is changing; or otherwise.
These are all similar in requiring a change in joules per given time period to effect them. Biological potential is therefore a statement of how the population is responding, in its energy, via all possible variables, this being again through numbers and/or mass and/or energy.
Biological potential is also of course sensitive to all changes in F(F1, F2) and so to both mechanical and nonmechanical energy and to how these do or do not change as population numbers do or do not change. Mechanical chemical energy changes the total mass held by the population whether through increasing the numbers, or else by increasing the average individual mass as numbers stay constant, or through both. Nonmechanical chemical energy changes the energy over the population by changing configurations whether the numbers and the masses do or do not remain constant.
Since engeny is exact, its various partial differentials are trivial to compute. The explicit contributions, from its three components, can soon be determined from or for any real case, and as a population uses its DNA to direct the incorporation of resources and energy over its population, and also over its individual entities. With engeny in hand, which is exactly measured, the average individual mass—which is the divergence of the mass flux and a unique and specifying factor for any given population—is now an integrating factor … and with all the power that that implies.
Although both the divergence in mass and the engeny are each exact differentials, with the former now being the integrating factor, work and heat nevertheless remain inexact differentials. But when the divergence in mass now conjoins with engeny, all the work, all the heat, all the energy, all the components and masses—and also all the entity numbers and changes in numbers—throughout the entire population and at every t over T can be precisely computed no matter what may be the path. Engeny is to average individual mass as entropy is to temperature, for that is how they are each derived and defined as their partnering integrating factors. All possible effects upon (a) all individuals; and (b) the population at large; whether in mass or in energy; and whether or not because of changes in numbers; can now be determined. So if it is indeed possible, as the proposal of the Aristotelian template insists, for the numbers in a population to change without either the masses or the configurations of those members changing, then we can measure that as easily as we can measure a proposed temperature change that does not also change entropy, or any other such permutation. If, for example, we hypothesise that temperature can change while entropy remains invariant, then we can soon determine the appropriate physical situation for that expression, run that experiment, and take some measurements.
Now that we have the tools to tackle our ongoing search for the ∂n/∂t factor, we can adopt the language that makes the best and clearest measurements. This is the language of constraints … which is best spoken by Lagrange multipliers. Suppose, for example, that we need to find the maximum possible volume for a box whose surface area is known and fixed (Sinclair, 2008). The box’s volume is V = xyz, while its total surface area is S = 2(xy + xz + yz) = k. So we have V = f(x, y, z) for the solid, and S = g(x, y, z) for the surface as the respective functions in x, y, and z. The conditions are satisfied at those values of x, y, and z where the two surfaces touch. They are then at a tangent, their instantaneous rates of change being equal with respect to each other. Their gradients will therefore be some specified multiple of each other … which is the Lagrange multiplier. Since g is static and sets the constraint, we can set f(x, y, z) = λg(x, y, z) which is xyz = λ(2xy + 2xz + 2yz − k). When we take the various partial derivatives we have xy = 2λ(x + y), xz = 2λ(x + z), and yz = 2λ(y + z). The last equation gives λ = yz/2(y + z) which we can substitute in the second; and which gives us another value we can then substitute in the first; and we can eventually substitute the values into the constraint to determine the values for x, y and z. By taking the partial derivatives we are in fact taking gradients and finding tangents and normals. The gradient of any given function, h, is a normal to any surface on which that function is constant: n = ∇h(P). Any constant multiple is also a normal, which is the basis of the Lagrange multiplier (Steuard, 2004). Thus if a function, f(P), needs to be maximized subject to the constraint g(P) = 0, we can define the function F(P, λ) = f(P) − λg(P) where λ is the Lagrange multiplier.
We can then set the gradient ∇F(P, λ) = 0, and then set the partial derivatives with respect to λ each to zero to get the value g(P) = 0 for the constraint. This can be extended to any number of variables and constraints. We thus have access to all values we need for our biological potential. Whatever the proposal of the Aristotelian template may suggest, there is certainly a combination of Lagrange multiplier and biological potential to match it.
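The box calculation just described can also be checked directly. A minimal numerical sketch in Python (the value k = 6 and the grid resolution are our own choices, not from the text): with the surface area fixed, eliminating z through the constraint and searching over x and y recovers exactly the cube that the Lagrange conditions predict.

```python
# Box of maximum volume for a fixed surface area k, as in the text:
# maximize V = xyz subject to 2(xy + xz + yz) = k.
k = 6.0  # with k = 6 the Lagrange conditions predict the cube x = y = z = 1

def volume(x, y):
    """Box volume once z is eliminated via the surface-area constraint."""
    z = (k / 2 - x * y) / (x + y)
    return x * y * z if z > 0 else 0.0

# Brute-force search over a grid of (x, y) pairs at 0.01 resolution.
best_v, best_x, best_y = max(
    (volume(i / 100, j / 100), i / 100, j / 100)
    for i in range(10, 300) for j in range(10, 300)
)
assert abs(best_x - 1.0) < 1e-9 and abs(best_y - 1.0) < 1e-9  # the cube wins
assert abs(best_v - 1.0) < 1e-9                               # V = (k/6)^(3/2)
```

The same elimination-and-substitution route the text describes analytically gives x = y = z = √(k/6) for any fixed k.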
We now apply this constraint concept to the proposal of the Aristotelian template. As suggested by Sarkar above, modern versions of this doctrine accept the possibility for limited adaptations within species, but reject the broader, more strictly Darwinian implications (Sarkar, 2007). In so far as it is possible to characterize their positions, they thus assert that an abstract template exists that determines the fundamental characteristics of all biological entities, thus rendering them free from all but minor variations … however ‘minor’ might be described. However it is described, it eventually means a range of applicability for some Lagrange multiplier; for our biological potential; or for both.
The proposal of the Aristotelian template now becomes the insistence that a constraint of some kind is imposed upon all natural and biological populations. Thanks to this proposed constraint, all variables uniquely characterizing any and all populations are prevented from adopting any and all ‘unauthorized’ values. They are in particular constrained from deviating from their abstract template in ways and by amounts that might lead to new species … again howsoever ‘species’ is defined.
As to the problem of defining species, neither the Darwinian nor the Aristotelian proposals can do this coherently. However, the general purport was laid out by Ray, and later developed by Paley (Ray, 1686; Paley, 1802). That ‘species’ is a fluid concept, with ill-defined boundaries, is certainly what the Darwin proposal would assert. So even though the Aristotelian proposal insists that species is fundamental and defining, we shall proceed without demanding a clear definition simply because (a) the Darwin proposal does not need one; and (b) the Aristotelians cannot provide one … albeit that the implications of this inability do not seem to affect the strength and vigour of the arguments in its favour.
Thanks to the approach we have taken, based as it is upon the vector calculus, we now have a veritable embarrassment of riches with which to first describe, and then to test, any and all suppositions. It is enough, for our purposes, to appreciate that the proposal of the Aristotelian template denies that changes in numbers could have any “significant” effect on any of the variables T, M, P, m̅, p̅, V, F1, F2, S, µ, φ, κ, or χ. If the proposal holds, then every one of these variables must demonstrate itself independent from all others, excepting only for explicitly defined relations such as V = P/M. Yet in spite of this independence, each must still somehow act in concert with all others to determine biological populations and their entities.
A first claim from the Aristotelian camp of course concerns the proposed regularity or constancy of values in those variables over historical time, and as defines a species. The claim is that generation after generation, given values repeat such that entities maintain a constancy in traits and behaviours.
This proposal of constancy in variables immediately harks back to the Liouville theorem:
Imagine that we have a sufficiently large number of mechanical systems so that their representative points in phase space may be treated like a fluid. Then the density ρ of the fluid in the neighbourhood of each representative point P is constant with respect to time, if P remains in the interior of the fluid.
We derive from Liouville’s theorem the following corollary:
If a density ρ of representative points in phase space is to be stationary in time, it is necessary that it be constant along every trajectory (Wannier, 1987, p. 56).
Both the Liouville theorem and the stated corollary would initially seem to support the basis—if not the details—of the proposal for constancy made by the Aristotelian template.
We have already pointed out the practical difficulty the Liouville theorem causes for the proposal of the Aristotelian template. The theorem insists that values for population size, n, moles of components, q, and processing, w, must vary consistently and conjointly. But this is a linkage the Aristotelian proposal would prefer to deny. If, however, that proposal simply wants to circumvent those difficulties and continue to look on this theorem as an indicator, and so as a proof, for the overall concept of the fixity of species, granted some limited variability in n and q (which at first sight seems reasonable), then the Liouville theorem certainly bears closer examination. Unfortunately, however, even as a generalized abstract, its implications about constancy—such as that a few bird beaks can change, without anything ‘fundamental’ changing—are damaging for the proposal.
As described by Wannier above, the Liouville theorem treats phase space as if it were a liquid of unknown construction (Wannier, 1987, p. 56). Each system is then a “representative point” that will eventually traverse all possible trajectories, in phase space, whose energy is equivalent to its own. But unfortunately, if we start contemplating the nature of this theoretical and supporting fluid medium, say by letting it become molecular, then the Liouville theorem does not necessarily hold. It becomes increasingly possible that given systems will leave the neighbourhoods ascribed to them and pursue other trajectories in alternative phase spaces, so leaving the interior of their initially conjectured fluid. It is then possible to try to “save the appearances” and instead suggest a ‘quasi-ergodic hypothesis’ in which our given representative system can approach any other trajectory as closely as it wishes, provided that the energy of that new proposed trajectory is the same as that of its current trajectory at that point. It could then conceivably travel from one neighbourhood to another in a well-behaved fashion. But under certain conditions, even this alternative breaks down (Wannier, 1987, pp. 56–57). The most we can eventually try to say is that energy is independent of all other parameters, and that it “tends to hold” to its constancy … but that it is nevertheless liable to stray. Random interactions whose energy is small, and generally at a molecular level, can shift given systems to different neighbourhood trajectories. Thus although the Liouville theorem can broadly and in general guarantee that systems hold given quantities in common, this cannot be absolutely guaranteed for all possible time periods and under all possible conditions, which is what the proposal of the Aristotelian template needs even as it tries to circumvent the link between numbers and variation prescribed by the theorem.
The Darwinian proposal of course has no trouble with these possible occasional and discontinuous shifts in trajectory that can have long-term heritable consequences, but it does now mean that the Liouville theorem cannot provide the theoretical backbone that the fixity-of-species concept requires. That idea will therefore have to find some alternative formulation.
As for our list of variables, we can easily test them by establishing a range of differentials and partial differentials with respect to n. We can even test for δQ and δW, for we now have an integrating factor to make it possible. But we in particular note the two fluxes and their divergences, M and m̅ and P and p̅. We have already studied them extensively so we already know that they each uniquely define any population … again meaning that should the two couples of M and m̅ or P and p̅ ever deviate from each other, then the magnitude of that deviation, which depends solely on the scale by which n changes, is also definitive for a population. So dP/dt and dp̅/dt should always track each other at every t over the generation, T. If these rates differ anywhere, then the only explanation is a change in n. There is no other possibility.
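That closing claim can be made concrete with a toy calculation. Since population mass is the product of numbers and average individual mass, M = n × m̅, any mismatch between the relative growth of M and of m̅ can only be absorbed by n. A small synthetic sketch (the figures are invented for illustration, not B. rapa data):

```python
# Synthetic series in which population mass M triples while average
# individual mass m_bar grows only 1.5-fold over the same interval.
ts = [0.0, 0.25, 0.5, 0.75, 1.0]
M = [1.0 + 2.0 * t for t in ts]               # total population mass
m_bar = [0.01 * (1.0 + 0.5 * t) for t in ts]  # average individual mass

# The implied entity count at each t is forced to be n = M / m_bar.
n = [Mi / mi for Mi, mi in zip(M, m_bar)]
assert round(n[0]) == 100 and round(n[-1]) == 200  # n had to double
```

If the two rates had tracked each other exactly, every element of n would have been identical.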
We now turn to hard data. Table 5 sets out both the individual and the population values for mass, for energy, and for the energy flux, which is µ, the biological potential, over an entire Brassica rapa equilibrium age distribution population so they can be directly compared:
Stage        Number (biomoles)           Mass (grams)     Energy (joules)    Flux (watts)

Seed         1.096         Indiv.        1.171 × 10⁻³     1.017              3.276 × 10⁻⁵
                           Pop.          1.283            1,114.587          3.591 × 10⁻²

Leaf         0.767         Indiv.        4.977 × 10⁻³     7.021              4.151 × 10⁻⁴
                           Pop.          38.153           5,382.197          3.182 × 10⁻¹

Flowering    0.724         Indiv.        6.503 × 10⁻²     15.232             1.949 × 10⁻⁴
                           Pop.          47.070           11,024.883         1.410 × 10⁻¹

Fruit        0.662         Indiv.        8.717 × 10⁻²     13.959             1.124 × 10⁻⁴
                           Pop.          57.710           9,241.168          7.443 × 10⁻²

Dry Seed     1.751         Indiv.        1.060 × 10⁻¹     15.558             0.000
                           Pop.          183.686          27,249.289         0.000

Seed         1.096         Indiv.        1.171 × 10⁻³     1.017              3.276 × 10⁻⁵
                           Pop.          1.283            1,114.587          3.591 × 10⁻²
The equilibrium Brassica rapa population decreases by Δn = 40% as the plants transform from seed to fruit. But it of course increases again by exactly the same margin from fruit back to seed. The issue now rests on whether or not the changes in the pairs of values Δm̅ and ΔM and Δh̅ and ΔH do or do not track each other.
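That Δn = 40% figure can be recovered arithmetically from Table 5 in two independent ways: from the Number column directly, and from the ratio of population to individual mass (the individual masses being read as 1.171 × 10⁻³ g and 8.717 × 10⁻² g). A minimal check:

```python
# Seed- and fruit-stage values transcribed from Table 5.
n_seed, n_fruit = 1.096, 0.662            # Number column (biomoles)
m_ind_seed, m_pop_seed = 1.171e-3, 1.283  # individual / population mass (g)
m_ind_fruit, m_pop_fruit = 8.717e-2, 57.710

# The stated ~40% fall in numbers, read straight from the Number column:
drop = 1.0 - n_fruit / n_seed
assert abs(drop - 0.40) < 0.005

# The same fall is implied by the masses alone, since n scales as M / m_bar:
implied = 1.0 - (m_pop_fruit / m_ind_fruit) / (m_pop_seed / m_ind_seed)
assert abs(implied - drop) < 0.005
```

Both routes give a fall of just under 40%, which is the Δn the text quotes.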
As we noted earlier, the individual Brassica rapa plants increase in their mass by 43-fold as they develop from seed into fully adult plant. But we now see their population mass increase 143-fold over that same interval—a gain that is just over three times as great. The overall dM/dt must then be some three times the value of dm̅/dt. This is not independent of n. There is, in fact, a cyclical dependency. Our B. rapa population immediately refutes the proposal of the Aristotelian template, which is the attempt to deny Darwin.
We now turn to the energy. As Brassica rapa transforms from seed into fruit, the average individual energy, h̅, increases by Δh̅ = 1,372.5%; while the total population energy only increases by ΔH = 829%. The plants therefore somehow manage to increase their individual energies by a factor some 1.6 times greater than the overall increase evidenced by their population at large. Thus dP/dt is also consistently different from dp̅/dt over the entire population. This is again not independent of n. These experimental results provide yet further refutation for the proposal.
And then finally, there is the biological potential. Necessarily, the fluxes at the fruiting stage will be relatively modest. The greatest fluxes will of course occur in the previous stages when leaves, flowers, fruits and seeds are being built. The issue, however, is whether or not the individual and the population fluxes are the same. They are Δµi = 343% for the individual flux and Δµ = 207% for the population. The scale of the individual differences is greater than for the population one, which once again refutes the proposal of the Aristotelian template.
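The energy and flux percentages quoted in the last two paragraphs follow directly from the Table 5 values; a short arithmetic check:

```python
# Seed- and fruit-stage values transcribed from Table 5.
h_seed, h_fruit = 1.017, 13.959        # individual energy (J)
H_seed, H_fruit = 1114.587, 9241.168   # population energy (J)
f_seed, f_fruit = 3.276e-5, 1.124e-4   # individual flux (W)
F_seed, F_fruit = 3.591e-2, 7.443e-2   # population flux (W)

dh = 100 * h_fruit / h_seed   # ~1372.6%, the quoted 1,372.5%
dH = 100 * H_fruit / H_seed   # ~829.1%, the quoted 829%
assert abs(dh / dH - 1.66) < 0.01  # the ~1.6-fold disparity in the text

d_mu_i = 100 * f_fruit / f_seed  # ~343%: the individual flux increase
d_mu = 100 * F_fruit / F_seed    # ~207%: the population flux increase
assert abs(d_mu_i - 343) < 1 and abs(d_mu - 207) < 1
```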
There are further issues with the generation length, T, itself. The two expressions for the curl are ∇ × M = ∫M dT/∫dm̅ dn and ∇ × P = ∫P dT/∫dp̅ dn. If the proposal of the Aristotelian template holds then not only should M and m̅ and P and p̅ not vary, but the curl must be constant so all entities process mass and produce energy in a uniform manner relative to each other. Since there must be no relative changes in chemical reactions or processing speeds, then one entity may not differ from another or a curl has been immediately introduced. The proposal of the Aristotelian template in other words insists that biological populations follow an irrotational vector field.
Since the denominators in each of ∫M dT/∫dm̅ dn and ∫P dT/∫dp̅ dn contain n, then the greater is the number of entities processing any given circulation, the more slowly they will each do so per each unit time. Thus the less mass and energy they can each process at each t. Greater quantities of processing or configuring must now be done per each unit of mass. Thus the more rapidly each entity must cycle through its transformations for the generation. The generation length must therefore vary inversely with numbers. So if T indeed changes inversely when n changes, then the template proposal’s credibility is further and fatally undermined.
When we now look at the data there are indeed measurable differences in curl, in circulation, and in processing rates. The Brassica rapa generation lengths consistently shorten as the number of seeds we plant per pot increases. At four seeds per pot we return T = 44 days. At ten seeds per pot, for our second generation, we return T = 35 days, a reduction of 9 days or just over 20%. At fourteen seeds per pot, for the third generation, we return T = 28 days, a further reduction of 7 days or 20%, so giving a total reduction of 16 days or 36% on the original 44-day cycle. No other variable has changed. This confirms that generation length is a statement of energy that varies with number, and again refutes the proposal of the Aristotelian template.
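The reductions quoted above can be verified in a line or two, using the planting densities and day counts as reported:

```python
# Generation length T (days) at each planting density (seeds per pot).
T = {4: 44, 10: 35, 14: 28}

cut1 = (T[4] - T[10]) / T[4]    # 9 days: just over 20%
cut2 = (T[10] - T[14]) / T[10]  # 7 days: exactly 20%
total = (T[4] - T[14]) / T[4]   # 16 days: ~36% of the original cycle

assert T[4] - T[10] == 9 and T[10] - T[14] == 7
assert abs(cut1 - 0.2045) < 0.001 and cut2 == 0.2
assert abs(total - 0.3636) < 0.001
```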
This is again all very well, but the history of these debates shows that simply proving that the proposal of the Aristotelian template is impossible is not enough. The main thesis is that biological matters belong in a ‘special’ realm incorporating the creation, maintenance, and fixity of species. They are thus a ‘mystery’ inaccessible to rationality and counting.
Since the above would seem to be a good general description for the position adopted by supporters of the proposal of the Aristotelian template, it is necessary to do more. It is necessary to positively prove that Darwinian competition and evolution actually occur, and provide substantive values in the form of kilogrammes of components and joules of energy. Fortunately, and as we saw earlier, we can attack this problem of the behaviour of ∂n/∂t and the dependency, or otherwise, of mass and energy upon number, as a constraint, and within biology. We can use our biological potential in conjunction with Lagrange multipliers; the gradient theorem of line integrals; and the directional derivatives we already have in place.
Figure 62 presents biological potential as a set of contours in a manner similar to that we saw earlier (Figure 43). Each has the general function f(n, q, w) = an. The proposal of the Aristotelian template presents the constraint that the numbers must not vary. Our circuit of biological potential must therefore satisfy the condition g(n, q, w) = c, or even g(q, w) = c where n is simply ignored. Since we need to maximize f(n, q, w) subject to g(n, q, w) = c, we as ever introduce our Lagrange multiplier to establish the necessary conditions. We establish the further function:
Γ(n, q, w, λ) = f(n, q, w) + λ(g(n, q, w) − c)
We then, in the usual way, determine those points where Γ’s partial derivatives are zero. Since n is already known—it is fixed—we immediately define a circuit of biological potential and declare all needed values for q and w, the moles of chemical components for the entities, and the chemical bond energy with which they must be bound. We can then measure a given population—such as Brassica rapa—to see which, if any, of its members have those values.
But before we can do so, we still have three anomalies to resolve. The measurements we are about to make must be thoroughly rigorous and beyond reproach or they have no validity. The first of the anomalies is that pointed out by Harte et al: ‘the inverses of the Lagrange multipliers are neither intensive nor extensive, and we do not know if they can be associated with a generalized ecological ‘‘temperature’’’ (Harte et al, 2008). This is based on an information-theoretic approach to entropy whose mathematical provenance is not doubted, but whose general applicability sometimes is. Our only reference to it, however, is to finally resolve the etymological fallacies involved, and to confirm the fundamental mathematical properties of biological systems and populations.
By the second law of thermodynamics the ideal Carnot cycle—which is the perfect reversible heat engine—is the limit case for entropy. Its net change over that cycle is zero—which is also how entropy is defined. Since a zero entropy change is unattainable in any real case, entropy must always increase towards the maximum possible value. This is a constraint. It leads to “the Jaynes principle”, a statistical interpretation of entropy through the Boltzmann theorem. The Jaynes principle is based on an algorithm first introduced by Gibbs, and turns thermodynamics into an expression of the calculus of variations using Lagrange multipliers, drawing also on Shannon’s information theory (Biró, 2011, pp. 56–58; Gull, 1991; Stewart, 2007, pp. 970–977).
It is very important to be clear, at this point, about the differences and/or similarities between the information entropy of Shannon, Jaynes and others, and the thermodynamic entropy of Carnot, Clausius and others. Shannon’s work on information entropy is concerned with the possibility or probability of extracting an “original” and “intended” message from a message stream, granted the inevitable inaccuracies of coding, of transmission and of decoding. Every message must be signalled. Thus to quote a well-known example, if the final received message appears to be ‘P_ZZ__’ then, granted the structure of the English language, what is the probability that the original message is either ‘PIZZAS’ or ‘PUZZLE’? What, indeed, are the chances that we have completely misread the signal so that it is something else altogether? Information entropy thus seeks to determine the likelihoods of given messages from amongst the sets of other probable messages.
James Clerk Maxwell showed that radiant heat, like light, is electromagnetic radiation. Let our original message now be sent as an electromagnetic beam, whether as a radio wave, or as light. Its probability and accuracy—as a message—is immediately entangled with the better-known thermodynamic entropy of microscopic particles and their probabilities as the photons that carry the intended message and signal interact with the environment. Thus the laws of heat and electromagnetic radiation are suddenly relevant. Jaynes’ discovery was that information theory—with its discussion of probable sets concerning the accuracy of the transmission of the content of messages—becomes a broader and more overarching theory about the general behaviour of microscopic particles … and therefore embraces thermodynamic entropy. The actualities and metaphysics of the information entropy versus thermodynamic entropy debate do not concern us for they are of no relevance here. We are concerned solely with information theory’s exhibition of a most useful mathematical strategy: the method of Lagrange multipliers within the calculus of variations. The mathematical techniques hold independently of the objects to which they are applied.
Since entropy—and we restrict ourselves to the thermodynamic variety—always tends to a maximum, then the Lagrange multiplier approach of MaxEnt theory, as it is called, points out that all such problems can be solved by finding the tangents and normals of the equivalent properties, with the normals then again being multiples of each other. Thus maximizing entropy becomes the problem of maximizing a function such as:
S(U, Xi) − λuCu(U) − ΣiλiCi(Xi)
where S is entropy, U, as ever, is the internal energy, and the Xi can be any other constraint or set of constraints, such as volume and/or the number of particles in the system etc. These are all summed. The λu is then the Lagrange multiplier that tells us the value that internal energy must have, while λi does the same for whatever other phenomenon is of interest. Taking partial derivatives with respect to U and to the Xi turns the constraint functions Cu(U) and Ci(Xi) into their derivatives C’u(U) and C’i(Xi), with each resulting expression then set to zero in the usual way. We then want their values. We therefore produce partial differentials of the form (∂S/∂U) − λuC’u(U) = 0, and (∂S/∂Xi) − λiC’i(Xi) = 0. The value for the first constraint is now C’u(U) = (∂S/∂U)/λu, that for the latter C’i(Xi) = (∂S/∂Xi)/λi.
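The recipe just described can be exercised numerically. A minimal sketch, with invented energy levels standing in for the Xi and an invented mean-energy constraint U: the Lagrange multiplier on the energy constraint is found by bisection, and the resulting distribution carries more entropy than any other distribution meeting the same constraints.

```python
import math

# Maximize S = -sum p_i ln p_i subject to sum p_i = 1 and sum p_i E_i = U.
# The stationary solution is p_i proportional to exp(-lam * E_i), with lam
# the Lagrange multiplier on the energy constraint.
E = [0.0, 1.0, 2.0, 3.0]  # invented energy levels
U = 1.2                   # invented mean-energy constraint

def mean_energy(lam):
    w = [math.exp(-lam * e) for e in E]
    return sum(e * wi for e, wi in zip(E, w)) / sum(w)

# Mean energy falls monotonically as lam rises, so bisect for the multiplier.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mean_energy(mid) > U else (lo, mid)
lam = (lo + hi) / 2

w = [math.exp(-lam * e) for e in E]
p = [wi / sum(w) for wi in w]
S_max = -sum(pi * math.log(pi) for pi in p)

# Any other distribution with the same mean energy carries less entropy:
q = [0.55, 0.05, 0.05, 0.35]  # also sums to 1 with mean energy 1.2
S_q = -sum(qi * math.log(qi) for qi in q)
assert abs(sum(qi * ei for qi, ei in zip(q, E)) - U) < 1e-9
assert S_max > S_q
```

The inverse of this multiplier, 1/lam, is then the formal ‘temperature’ of the maximizing distribution, which is the role the discussion returns to shortly.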
Entropy is measured as joules per kelvin, which at first sight makes it intensive. It is, however, an extensive property. Entropy is a measure of the microstates available to a system’s molecules, and that they can explore with their myriad motions. The larger is a system and the more material or components it has, the greater is the number of microstates available and the more substance there is to conduct joules of heat away out into the environment, and at that given temperature. It therefore scales with size and is additive. And since entropy is extensive, then systems can be brought together and summed.
A far more general function for entropy states the total entropy of a number of systems and subsystems brought together to create one system, α:
ΣαS(Uα, Nα, Vα, …) − λuΣαLu(Uα) − λnΣαLn(Nα) − λvΣαLv(Vα) − …
and where α represents an entire set of systems and subsystems (Honig, 1999; Biró, 2011). We now sum all values such that Lu(U) = ΣαLu(Uα) and so forth, their values being fixed and representing the composite values and constraints as summed as appropriate. This entire function must now be maximized, which is in principle possible, thus confirming the mathematical properties of the thermodynamic variables. As usual, λ multiplies the constraint function.
In all these cases, when we have taken suitable partial derivatives, the values we need to state the system’s properties under these conditions are multiples of the Lagrange multiplier. We therefore ultimately determine them from their inverses: i.e. by dividing by the Lagrange multiplier. The behaviour of this composite system’s internal energy is thus expressed as:
λu = (∂S/∂Uα)/ L’u(Uα) = 1/T.
The most important aspect, for our purposes, is that the entire internal energy of all the distinct systems can be summed over all of them and over their joint constraints. When this has been done, the quantity 1/λu states the rate of change of internal energy—i.e. the change in the rapidity with which they explore their conjoined microstates—under the constraint that entropy—the absolute number of those microstates—is maximized over the α entities or subsystems composing them … which is a formal definition of temperature. Temperature is therefore the inverse of the Lagrange multiplier that expresses its applicable constraints, in the relevant function, as entropy is maximized. This is the maximum entropy formulation of thermodynamics, and as Harte et al pointed out, temperature is now the inverse of the Lagrange multiplier for any given system or composite system’s average energy constraint, and over all its particles.
It is of course possible to produce Lagrange multipliers for all other intensive variables. The expression:
λv = (∂S/∂Vα)/ L’v(Vα) = P/T
gives us pressure as a constraint within the same maximum entropy function, and irrespective of its size. It therefore tells us the values volume must adopt with respect to all those myriad microstates that the conjoined systems hold in common.
We can now use the Lagrange multipliers to determine values and behaviour of microstates for a particularly important conjugate variable discovered by Gibbs: chemical potential and particle number. The constraint for chemical potential expresses that substance’s behaviour through its particles, and at any time. We procure the Lagrange multiplier for the average particle constraint as entropy is maximized … and which is the measure and the influence of its Gibbs energy in all possible interactions. It is:
λn = (∂S/∂Nα)/L’n(Nα) = µ/T.
The chemical potential, µ, that Gibbs was the first to analyse now speaks to the changes in the configuration, concentration, and reactivity of chemical components due to their molecular movements, intensities, and positions within a given system, and through their ability to induce and participate in chemical reactions. These cause measurable changes in energy and entropy as molecules explore their various interactions.
If we consider a system of two different chemical species or components, then the two expressions (∂S/∂n1)U,V,n2 dn1 and (∂S/∂n2)U,V,n1 dn2 state, respectively, their rates of change of entropy with respect to changes in each of their amounts of substance—i.e. in their numbers of particles—as internal energy, volume, and the other component’s numbers are held constant. Each expression thus states the extent of the transformations each substance can induce through those particle numbers, which is its concentration, and as those numbers are infinitesimally altered. The partial differentials are a measure for the energy each carries as it is introduced into, moves about within, and/or is removed from the system. We now therefore have a statement for the energy per unit of amount of substance, which is Gibbs’ chemical potential.
Biological entities are entire collections of interacting molecules. Each biological entity contains q moles of chemical components which can be summed together as a discrete system. Their resulting chemical potential, which is intensive, then states the reactions throughout that volume of cellular materials. Of particular importance are their transformations such as osmosis, and the traversal of molecules both (a) across cell walls; and (b) through internal membranes. When the former comprises acts of cellular respiration and so forth, as components are exchanged with the environment, then it is mechanical chemical energy; with the latter being non-mechanical chemical energy. But all these biochemical reactions occur under a pressure, and through the expression (∂S/∂V)U,n dV. This entire suite of DNA-controlled activities is an expression of chemical potential, which we can measure through visible presence, which is the specific Gibbs energy, or the Gibbs energy per unit mass, over any given population, and at any given time. The given partial differential measures the change in entropy with respect to that entity’s entire volume of chemical components, while its internal energy and its amount of substance are held constant. That volume is defined by the net sum of its cellular components, and contains a definite amount of substance in moles for each and every substance comprising it. And if we now keep those moles of components or particle numbers and cellular volumes constant, we derive the partial differential (∂S/∂U)V,n dU which states all its changes in entropy with respect to internal energy, which is the sum total of its chemical reactions. Its given chemical potential thus embraces all its possible ongoing reactions, DNA included, which is again the chemical reactions and transformations being wrought throughout its entire volume of cells and at that time.
The conjoined chemical potential over all the reactions and particles within a given entity is then its biological potential.
We have now gained access to the chemical and all other reactions involving energy and entropy within individual biological organisms. We now need to relate them to each other across their populations and within their given environments. We can do this through their heat and work which are their processes and paths. By Law 1 of biology, which is the law of existence, all biological entities must always do measurable work. This immediately links us to the state principle, in thermodynamics, by which a stable equilibrium state exists for every system, and for every value of energy, number of particles, and applicable constraints.
The state principle declares that the values held by any given property in stable equilibrium can be expressed as a function of whatever parameters operate on that system. This is because stable equilibrium—which each and every system possesses at each and every moment—means that the internal energy, U, is at a minimum. A stable equilibrium is distinct from a steady state which could be, but is not necessarily, the property of some path or process. Stable equilibrium is a property possessed by any system capable of undertaking a path. By the state principle, all those allowed microstates—i.e. high-speed molecular and microscopic movements—that do not produce stable equilibrium are shed until U = TS, meaning until its internal energy, U, is equal to its temperature, T, multiplied by the number of available microstates it has at that temperature, which is its entropy, S. All the microstates that are incompatible with its current conditions and constraints are shed—as heat energy—in an irreversible process. This is the second law of thermodynamics and this is the meaning of heat. Therefore, if a given system is having an effect upon the environment as either heat or work or both—δQ, δW or δQ − δW—then it is by definition moving from one state of stable equilibrium to another and shedding excess microstates, as the transition dU. Therefore all measurements made of any system in a stable equilibrium state produce values for the properties concerned, and as are a direct function of its state. A system in stable equilibrium thus engages in no microstate interactions that can further reduce the number of its available states without also engaging in work and heat. This is so by definition and is simply a restatement of the first law of thermodynamics: dU = δQ − δW.
The result is that a system cannot leave a state of stable equilibrium without inputs or else outputs of energy as either work or heat, which then means that some collection of specified molecules within some system has had irreversible effects upon the environment. Therefore, if a system of any kind, biological or non-biological, is having an effect upon the environment as either heat or work, then it is moving from one stable equilibrium state to another courtesy of some specified molecular movements, and on a specified path involving that measurable work and/or heat. This is the second law of thermodynamics. The first law of biology simply insists that viable biological entities do not pursue paths that only emit heat, but that they must also and at all times emit work, which is a part of the law of existence: n ≥ 1; δW = (δQ − dU) > 0; m → ∞; m̅ > 0.
Law 2 of biology is the law of equivalence: (δW1 = δW2) ∧ (δW2 = δW3) ⇒ (δW1 = δW3). It is based on Maxim 2 of ecology, the maxim of number, where ∇ • H = h̅ × n = δW for each relevant population. Thus if two biological entities or populations pursue similar paths, then by the state principles of the first and second laws of thermodynamics, they must use similar energies and at similar values for entropy. Furthermore, by the second law of thermodynamics, they must each be maximizing their entropy and shedding all possible allowed states as that work is done; as that heat is given off; and as they each seek their respective stable equilibria. This is for each of them to take their individual internal energies, u, to the minimum possible value consistent with those conditions and through the change in state du. Therefore, every biological population and every biological entity within every population seeks constantly to maximize the work, δw, that each one can produce from its given masses, m, and at its given numbers of components and entities, q and n. All those n entities are at all moments doing the maximum possible work for each, at the minimum possible heat, and with the maximum possible number of chemical bonds retained, for that is simply the state principle and the second law of thermodynamics active upon them all. And since this is true for all, then the population at large is always seeking the maximum possible work, δW, from the maximum possible masses and changes in masses, U and dU, with the minimum possible losses in heat, δQ … and also from the maximum possible numbers of both components and entities, which is again Maxim 2, for it holds over all those comprising the population. And since entropy is extensive, we can sum all the entropies over all. We can then call this sum of all the entropies over the entire population, per unit of its mass, its work rate, W.
When expressed per the biomole, it is then the engeny, S.
As we have now defined it, a population or an individual entity’s biological potential, µ, which is its rate of change of engeny, dS, tells us how that population or entity is affected both by the constraints imposed by the environment, and by other entities as it and they are subjected to changes in mass, M, in configuration, V, and in their numbers. These are their ongoing chemical and biochemical interactions both internally and externally and through the environment.
If we now consider a four-entity population, then the expression m̅dS = dU + PdV − µAdnA − µBdnB − µCdnC − µDdnD stated over that system tells us the results of a change in engeny, dS, at any given value for average individual mass, m̅. We produce the wattage or energy change made available at that value for mass. This net change in engeny depends upon: (a) the current changes in the population’s stock of biological matter or net number of components, dU; (b) the population’s changes in its overall energy, which is its overall Wallace pressure, P, and its stock of chemical bonds, H, given its prevailing change in visible presence, PdV; and (c) the sum of all the changes caused by any ongoing removals from or introductions into that population by any components contained in any entities, in so far as these can also cause a redistribution of values in masses, in configurations, and in energies through the values those particular entities hold for those components. Thus if, for example, any entity that is introduced or removed possesses exactly the average values, over the population, for its m and p, then the population’s average values remain unaffected. The overall mass and energy fluxes, M and P, will change proportionately, leaving the population’s defining divergences unchanged. But … if any entity introduced or removed does not have exactly the average values for its population, then the average values over those remaining will immediately change … and therefore the definition for the survivors will also change, even if nothing else about them changes … and simply by that exit and removal. This is an important ingredient.
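The effect of exits and entrances on a population's averages can be sketched in a few lines. The masses below are hypothetical values chosen purely for illustration, not measured B. rapa data:

```python
# Removing an entity holding exactly the population-average value leaves
# the average unchanged; removing any off-average entity shifts it, and
# so changes the definition of the survivors.
masses = [2.0, 3.0, 4.0, 3.0]               # hypothetical entity masses, grams
mean = sum(masses) / len(masses)            # m-bar = 3.0

at_average = [2.0, 4.0, 3.0]                # one 3.0-gram (average) entity removed
print(sum(at_average) / len(at_average))    # 3.0 — m-bar unaffected

off_average = [3.0, 4.0, 3.0]               # the 2.0-gram entity removed instead
print(sum(off_average) / len(off_average))  # ≈ 3.33 — m-bar immediately shifts
```

The total flux M of course changes in both cases; only the off-average removal changes the divergence m̅.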
Since entropy must be maximized, then engeny must also be maximized so that the work rate, which is simply the sum over all the entropymaximizations, can also be maximized. We of course then have a Lagrange multiplier to express these biological constraints due to engeny. By the first law of thermodynamics, every expression of energy must take material form. And since biological entities must always maximize their work and keep their expenditure on heat to a minimum, we immediately have a Lagrange multiplier of the form λu = 1/m̅ where the average individual mass over the population, which is the divergence in mass, now states the mass that must be maintained in order to procure the maximum possible work over that population, given its current numbers, its current chemical configuration, and its stage of development within that specific cycle which is at that moment t over T. Average individual mass is therefore a biological population’s Lagrange multiplier for the average energy— h̅ or p̅—over all its members as they each maximize their individual entropies, and according to the second law of thermodynamics which is, again, to maximize their population engeny. So these two together, m̅ and p̅, again state a population’s complete behaviour under the stable equilibrium of the second law of thermodynamics; but they also do so in the face of the total fluxes M and P. These once again state a dependency on n, and so in direct contravention of the proposal of the Aristotelian template which erroneously insists that changes in numbers cause no physiological or metabolic changes in populations.
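A minimal sketch of such a constrained maximization can make the Lagrange-multiplier condition concrete. The logarithmic objective and the total-mass constraint below are toy assumptions standing in for the population quantities, not the book's measured values:

```python
import math

# Hypothetical toy problem: maximize s(x, y) = ln(x) + ln(y), a stand-in
# "entropy", subject to a fixed total mass x + y = m.  The multiplier
# condition ds/dx = ds/dy = lambda places the maximum at x = y = m/2.
m = 4.0                                  # fixed total mass (illustrative)

def s(x):                                # the objective along the constraint
    return math.log(x) + math.log(m - x)

xs = [i / 100 for i in range(1, 400)]    # scan x across the open interval (0, m)
best = max(xs, key=s)
print(best)                              # 2.0 — i.e. x = y = m/2
```

The scan recovers numerically exactly what differentiating the Lagrangian predicts analytically.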
We have now stated the force, F(F1, F2), all about the population’s boundary for the population function f(n, q, w) and that we measured with our planimeter. Given the first and second laws of thermodynamics and the behaviour of biological entities, all biological populations will always strive:
• to procure the maximum possible energy over the generation, Hg, under the prevailing conditions, which is ∫P dT and the area bounded by the planimeter in its curve or circuit of µ for each of the three possible two-dimensional measures—n and q, n and w, and q and w—and as according to Green’s theorem; and
• to surround any given and stated region formed for any such two-dimensional area or surface with a curve or boundary of the greatest possible length, and so that the maximum possible work can be done within that boundary, and again as decreed by the second law of thermodynamics; and finally
• every volume of energy formed by all three dimensions of n, q, and w will be similarly surrounded by surfaces of the maximum possible area; and every such area or surface will in its turn always try to surround the maximum possible volume of biological flux and energy.
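The first of these maximizations rests on Green's theorem: the area enclosed by a circuit equals a line integral taken around its boundary, which is exactly what a planimeter evaluates. A minimal numeric sketch, with a hypothetical unit-square circuit standing in for a traced µ-curve:

```python
# Green's theorem turns enclosed area into a boundary integral:
# A = (1/2) ∮ (x dy − y dx).  For a polygonal circuit this discretizes
# to the shoelace formula below.
def shoelace_area(points):
    """Signed area of a closed polygonal circuit via the line integral."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]    # wrap around to close the circuit
        total += x0 * y1 - x1 * y0
    return total / 2.0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]   # counter-clockwise circuit
print(shoelace_area(square))                # 1.0 — the enclosed area
```

A mechanical planimeter performs the same boundary integral continuously as its tracer arm circles the region.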
Since maxima and minima of this kind prevail for all three axes of biological potential, then every permutation of circuit, of area, and of volume over all three dimensions is easily handled by Lagrange multipliers—and independently of all issues as might be raised or not raised by the MaxEnt principle within information theory—for this is a simple problem of maxima and minima under constraints. It holds regardless, no matter how entropy is philosophically or theoretically understood, for there are other methods of finding these extrema, and entropy is a well-realized and measurable physical commodity. We simply used this approach to clarify concepts and to puncture an etymological fallacy.
All possible biological behaviour can now be described via our general function that produces the circuits, areas, and volumes of form f(n, q, w) where n is the number of entities in the population, q is the moles of chemical components of which they are each composed, and w is the biological processing given to those components. By the gradient theorem of line integrals we then have ∇f = (∂f/∂n, ∂f/∂q, ∂f/∂w) as an expression for biological potential, µ, at every point t over T. It states the potential to which the population responds by maximizing entropy, and as then summed over the population as engeny.
The required change in biological potential, dµ, from one t to the next, over a given generation length, T, can now be written in terms of the gradient operator as dµ = ∇µ • du, where du is a displacement along the unit vector u in whatever given direction; this is a directional derivative. It is always possible to determine the precise values for any axis—n, q or w—no matter where the directional derivative points at any time. This is then the contribution, to the net behaviour, from that particular commodity. This is of course the acceleration in energy: the rate of change of the rate of change of energy with respect to time.
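The per-axis contribution can be evaluated numerically for any direction. The potential below is a purely hypothetical function of n, q, and w standing in for µ; the machinery of the gradient and the directional derivative, not the particular function, is the point:

```python
# Directional derivative sketch: dµ = ∇µ • u for a hypothetical
# potential µ(n, q, w) = n*q + w², evaluated by central differences.
def mu(n, q, w):                        # illustrative stand-in, not B. rapa data
    return n * q + w ** 2

def grad_mu(n, q, w, h=1e-6):           # ∇µ = (∂µ/∂n, ∂µ/∂q, ∂µ/∂w)
    return (
        (mu(n + h, q, w) - mu(n - h, q, w)) / (2 * h),
        (mu(n, q + h, w) - mu(n, q - h, w)) / (2 * h),
        (mu(n, q, w + h) - mu(n, q, w - h)) / (2 * h),
    )

g = grad_mu(2.0, 3.0, 1.0)              # ≈ (3, 2, 2)
u = (1.0, 0.0, 0.0)                     # unit vector along the n axis
d_mu = sum(gi * ui for gi, ui in zip(g, u))
print(round(d_mu, 6))                   # 3.0 — the n-axis contribution alone
```

Projecting onto a different unit vector isolates the q-axis or w-axis contribution in the same way.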
There is always a direction in which the biological potential is a maximum. Just as gravity, temperature and pressure also and straightforwardly follow gradients, then that is also the sense that drives this biological potential. It also always follows the maximum difference trajectory … and … this can have but one result.
Every biological entity in any population has a greater or a lesser capacity to participate in biological activity. This depends upon both its own potential, due to its own mass and configuration, and on the population’s current state or description. Populations and systems can change states because each of their entities can effect a given amount of transformation due to each one’s ability to affect both the environment and that population. This influences the capabilities and potentials of all other entities as they each exploit resources and energy. It is engeny—which when applied to numbers produces the sum over all entropies over all members—that determines the extent of those effects, and that therefore determines natural selection … which is now simply a statement for the amount of activity available to all entities, and as is more formally declared in our biological potential, which has all the rigour that natural selection has up until now lacked.
Since Darwin did not have easy access to these concepts, particularly as we understand them today, he was forced to express his ideas about natural selection considerably less rigorously and by saying:
Geometrical Ratio of Increase. A struggle for existence inevitably follows from the high rate at which all organic beings tend to increase. Every being, which during its natural lifetime produces several eggs or seeds, must suffer destruction during some period of its life, and during some season or occasional year; otherwise, on the principle of geometrical increase, its numbers would quickly become so inordinately great that no country could support the product. Hence, as more individuals are produced than can possibly survive, there must in every case be a struggle for existence, either one individual with another of the same species, or with the individuals of distinct species, or with the physical conditions of life. It is the doctrine of Malthus applied with manifold force to the whole animal and vegetable kingdoms; for in this case there can be no artificial increase of food, and no prudential restraint from marriage. Although some species may be now increasing, more or less rapidly, in numbers, all cannot do so, for the world would not hold them (Darwin, 1872, pp 50–51).
This leads straight to our second and third anomalies, which are linked. It is best to be clear what the deeper issues are as we proceed. Every biological entity is a set of chemical components, q, along with a set of chemical bonds, h, of given internal energy, u, governed by a given Gibbsian chemical potential, µ, and which we are now calling its biological potential. This potential controls all its particle numbers and its chemical reactions in and with the environment as it does work and maximizes its entropy, s, in a search for stable equilibrium and as per the second law of thermodynamics. All this granted, we can immediately refer to each biological entity as a distinct “chemical-bond-set”.
Given all these laws and considerations, all the entities forming a given population have a given value for all the chemical components comprising all the entities. These can now also be summed together to state the joint biological potential for that population, granted the sum of all those internal energies or components of which they are each composed, and as they jointly seek to maximize their distinct entropies … which is to maximize the population’s overall engeny, S. An entire population can therefore be referred to as a “set of chemical-bond-sets”.
Now that we understand that a population of biological entities is a set of chemical-bond-sets, we can more clearly understand the biological implications of the Clausius statement given earlier (Garg et al, 1993, p. 126). According to that Clausius statement—which is a general statement governing all systems using and composed of energy—no real process can transfer heat from a cold body to a hot one without also having some other irreversible and undesired effect upon the environment. In other words: it is impossible for a Population A, which is an entire set of chemical-bond-sets of initial progeny, and each of which is a distinct chemical-bond-set of low initial average individual mass, m̅1, to undertake a cycle of work interactions in which the initial progeny seeks to become chemical-bond-sets, or progenitors, of the higher average individual mass, m̅2, and so that they can then do further work and effect a transfer of their chemical bonds and components to a Population B which is a second set of chemical-bond-sets of the low initial average individual mass, m̅1. By the Clausius statement, Population A cannot achieve this in favour of Population B without an adverse effect in the way of losses of both components and distinct chemical-bond-sets—i.e. a Δn—being imposed upon itself.
Lord Kelvin’s formulation of that same second law gives us an alternative appreciation of this important realization (Garg et al, 1993, p. 126). On the Kelvin-Planck (as it now is) rendition, no process exists whose sole result is the complete conversion of heat absorbed into work. There is therefore no process in which an entire Population A, a complete set of complete chemical-bond-sets, can hope to manufacture a set of new chemical bonds and so absorb added chemical components from the environment, and so such that it can—without failure—transfer those absorbed bonds plus components to an intended Population B of progeny, this being a second complete set of complete chemical-bond-sets, and with each such progeny in B being similar in its size—as chemical-bond-sets—to those that first sought to beget them. There will inevitably be losses of bonds, of components and—most importantly—of entire and complete chemical-bond-sets in the original set of chemical-bond-sets which is Population A, and its numbers will inevitably decrease: Δn. The issue, now, is the consequence of this inevitable decrease.
The clear advantage of mathematics is its precision in terms. The second and third anomalies arise from the fact that, as should by now be evident, an experiment to directly prove the proposal of the Aristotelian template is impossible because the proposal itself is impossible. The template proposed to oversee biological events does not and cannot exist. Or rather: it cannot influence the real world.
Even though the proposal of the Aristotelian template is demonstrably impossible, we nevertheless have to devise a convincing and unarguable method for its demonstration and refutation. But we cannot do this until we have produced a clear expression for it. We can then conduct an experiment—such as we have already done with Brassica rapa—to highlight that proposal’s manifest deficiencies.
It is difficult to find a clear expression for the proposal of the Aristotelian template because biology and ecology are in themselves unclear in their terms and definitions. Fortunately, we can use our vector calculus language to highlight exactly what the template proposal is claiming. Those claims are perhaps best understood through an example.
In their book Genetics: Human Aspects, Arthur and Elaine Mange describe a proposal for the possible origins of Homo sapiens as follows:
In the 1960s scientists discovered that aerobic (oxygen-using) cells of animals contain two genetic systems: the main one in the nucleus and a very tiny one in the mitochondria, the “powerhouses” of the cell. The latter are actually descended from some free-living aerobic bacteria that eons ago invaded and took up permanent residence in primitive anaerobic cells. During this evolutionary process, the mitochondria retained some of their own DNA, which resembles bacterial DNA but now includes just a few genes involved in energy production.
All the mitochondria present in our cells are derived exclusively from our mothers, and none from our fathers. This is because egg cells contain many mitochondria, but sperm cells contain very few mitochondria—and none of the latter survive inside the fertilized egg. Thus, any trait associated with a mitochondrial gene must be transmitted by the mother to all her children, both male and female. In the other direction this lineage should extend from any individual through the maternal grandmother, the grandmaternal great-grandmother, and so on, all the way back to Eve! Indeed, because mutations in mitochondrial genes accumulate at a much higher rate than do those of nuclear genes, geneticists use the former as “tags” to help trace human evolutionary history. One study of mitochondrial DNAs taken from 147 people representing five population groups (African, Asian, Australian, Caucasian and New Guinean) revealed that they all “stem from one woman postulated to have lived about 200,000 years ago, probably in Africa. All the populations examined except the African population have multiple origins, implying that each area was colonised repeatedly” (italics in original) (Mange and Mange, 1990).
The difficulties here are with ‘traits’, ‘tags’ … and, indeed, with the mere concept of genes, mutation, and descent. These are a manifestation of both energy density and specific energy, which establish the biochemical behaviour of any cell. The population’s specific energy, P/M, is its curl. This takes us all the way back to Bjerknes and the founding of meteorology—a topic so airily dismissed by Turchin—in that, as we have already hinted, this proposal of the Aristotelian template requires that a vector field be irrotational, i.e. one without curl or variation (Turchin, 2001, p. 17).
As we pointed out earlier, meteorology has a commendable scientific rigour. This comes from the models upon which it is based. In 1858 Helmholtz further developed the underlying theory for vector fields and fluid dynamics by demonstrating that any material line element or “vortex filament”, as he called it, within a fluid of constant density that moved along a line of motion aligned with the vector describing the tendency for that element to rotate about itself would always remain aligned once aligned (Thorpe, 2003). Kelvin took a different angle and in a paper he wrote in 1867 discussed the problems of rotation in any material fluid, again being defined as one in which the material parcels are always the same (Cole, 1991; Thorpe, 2003). Kelvin was the first to define a quantity he called ‘circulation’ which he proved was a consequence of the preservation of angular momentum. If the circulation is denoted by C then Kelvin proved that it is given by:
C = ∮u • dl
where the integral is taken all around a closed curve within the fluid, u is the fluid velocity, and dl is a line element vector always pointing along the curve. He demonstrated that any body whose constitutive elements do not rotate about themselves—i.e. it acts like a rigid body, such as a top or a turntable—will then orbit like a planet that always keeps the same face to its sun. Thanks to Maxwell we now say that this is due to its lack of curl.
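Kelvin's circulation can be checked numerically. Below, two hypothetical velocity fields are integrated around the unit circle: a rigid-body rotation, which circulates, and a uniform curl-free stream, which does not:

```python
import math

# C = ∮ u • dl, approximated by summing u • dl over small steps of a
# closed curve (here, the unit circle traversed counter-clockwise).
def circulation(field, n_steps=10000):
    total = 0.0
    h = 2 * math.pi / n_steps
    for i in range(n_steps):
        t = i * h
        x, y = math.cos(t), math.sin(t)      # point on the unit circle
        dx, dy = -math.sin(t) * h, math.cos(t) * h   # line element dl
        ux, uy = field(x, y)
        total += ux * dx + uy * dy
    return total

rotation = lambda x, y: (-y, x)              # rigid-body rotation about the origin
uniform = lambda x, y: (1.0, 0.0)            # uniform, curl-free stream

print(round(circulation(rotation), 4))       # 6.2832, i.e. 2π
print(round(abs(circulation(uniform)), 6))   # 0.0 — no circulation at all
```

The rotating field circulates exactly 2π per unit radius; the uniform stream, like a curl-free population field, circulates nothing.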
Helmholtz and Kelvin’s formulations of fluid behaviour both pertained to fluids of constant density. In 1898 Bjerknes extended this with his Bjerknes circulation theorem, which considerably developed and generalized Helmholtz and Kelvin and—as we have already seen—founded modern meteorology (Bjerknes, 1904; Cole, 1991; Thorpe, 2003). The abundance of data meteorological events provide, and the consequent difficulties of computation this produces, should not obscure the basics of the underlying theory.
The reason an irrotational population is problematic becomes clear when we consider the formal mathematical definition of curl. A curl’s magnitude upon a proposed vector field P—and therefore a biological population’s specific energy—is formally defined as (Fleisch, 2008; Weisstein, 2011a):
(∇ × P) • n̂ ≡ limA→0 (1/A) ∮C P • ds.
The problem this definition poses for the proposal of the Aristotelian template is that it puts evolution and variation firmly within the limit definitions for calculus. The value for the function’s limit holds until a new and more precise value is adopted, as that function and its phenomenon head ever closer to the limit defined by whatever function is at hand. Thus the limit value depends entirely upon the index currently being used, and input, to compute it. The greater the number of indices and input terms, the more precise they become, and so the more closely and precisely the limit is known.
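That limit behaviour is directly computable. A sketch, using a hypothetical field P = (0, x³) whose curl is 3x²: as the circuit shrinks, circulation per unit area closes in on the pointwise value of 3 at the point (1, 0):

```python
import math

# The curl definition as a limit: circulation per unit area around an
# ever-smaller circle approaches the pointwise curl.  Analytically the
# ratio here is 3(1 + r²/4), so it tends to 3 as the area A → 0.
def circ_over_area(cx, cy, r, n=5000):
    """(1/A) ∮ P • ds around a circle of radius r centred at (cx, cy)."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = i * h
        x = cx + r * math.cos(t)
        dy = r * math.cos(t) * h          # only the y line element matters,
        total += (x ** 3) * dy            # since P = (0, x³): P • ds = x³ dy
    return total / (math.pi * r ** 2)     # divide by the enclosed area A

for r in (1.0, 0.1, 0.01):
    print(round(circ_over_area(1.0, 0.0, r), 4))
# 3.75, 3.0075, 3.0001 — converging on the pointwise curl 3x² = 3 at (1, 0)
```

The more tightly the circuit is drawn, the more precisely the limit is known, exactly as the definition requires.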
This definition produces an insuperable problem for the proposal of the Aristotelian template because according to the fundamental theorem on limits, which supports the entire edifice of the integral and differential calculus:
• If a function u has a limit l and c is a number, then cu has the limit cl.
• If u and v have the limits l and m, respectively, then u + v has the limit l + m.
• If u and v have the limits l and m, respectively, then uv has the limit lm.
• If u and v have the limits l and m, respectively, and if m is not zero, then u/v has the limit l/m.
• If u never decreases and there is a number A such that u is never greater than A, then u has a limit which is not greater than A.
• If u never increases and there is a number B such that u is never less than B, then u has a limit which is not less than B (James and James, 1992).
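These laws can be watched converging with two hypothetical sequences, u_n = 1 + 1/n → 1 and v_n = 2 − 1/n → 2:

```python
# By the limit laws, u_n + v_n must tend to 1 + 2 = 3 and u_n·v_n to
# 1·2 = 2.  The sum is exactly 3 at every n; the product, which equals
# 2 + 1/n − 1/n², closes in on 2 as n grows.
for n in (10, 1000, 100000):
    u = 1 + 1 / n
    v = 2 - 1 / n
    print(n, round(u + v, 6), round(u * v, 6))
# 10      3.0  2.09
# 1000    3.0  2.000999
# 100000  3.0  2.00001
```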
To express this in biological terms, the fundamental theorem on limits allows us to pose a question about energy density and biochemical reactivity, for any species, population, entity, organ, or cell. Since each of these collections of material phenomena is becoming ever smaller, each is—in the most precise and technical of terms—approaching zero as a limit. A biological entity is itself a limit-point … and its existence can only be observed in the material events it induces.
If we choose to remain “in the large”, i.e. at a distance and a remove from any specific entities or organisms and their features, then their entire sets of traits and characteristics—induced by their limit-points—could indeed be handled by some abstract template, and there is no issue for there is little to discuss. “How do all biological organisms behave as they do” is an extremely general question. The mass and the energy fluxes of the indices and input terms concerned are very imprecise, and one proposal is as good as any other. There are many possible values for M, P, and P/M, each of them vague and inaccurate, for there are many limit-points involved, and copious phenomena. But as we zero in on some specific entity or phenomenon, and as the topic of investigation becomes ever smaller, then so also do the limit terms both increase and tend to zero … which is also to ever more accurately evaluate the relevant fluxes in their mass and energy as we approach a distinct limit-point. These are simply the line integrals for mass and for chemical configuration and the like about an increasingly specified boundary, and so about any given region or set of cells, as its limit also approaches zero and A → 0. Thus “how does a hummingbird remain in flight” is a very much more specific question, surrounded by limits, and it is possible to take measures and start proposing that the material events associated with a given limit-point have something to do with the Bernoulli principle, the laws of aerodynamics, and the shape and disposition of wings. This is a mass and an energy that can be much more easily measured. The more precise the question, the more precisely can we put numbers upon properties. And as we evaluate P and M, we get closer and closer to an accurate and limiting value for P/M. The biological entity concerned is itself a limit-point.
The proposal of the Aristotelian template is somewhat different. In its declared limit, and within its chosen precision of inquiry, as we care to ever more closely scrutinize any given trait in any given cell or organism and pose the question: “is this feature in Population A or Entity B fully handled by those organs, features, chromosomes or genes?”, the answer the proposal must ever more closely approach is “no”. This “no” answer holds as the questions become less and less general, and more and more specific. As with Paley and his watchmaker, the answer always becomes “only a template can be responsible for that trait”. The same goes for every trait … and this is then the limit.
A curl is always assessed, and defined, by its prevailing limit. Since the more closely we approach a limit the more certainly the Aristotelian answer is “no”, then this is again the limit proposed … and so the limit is exactly zero everywhere, and the answer is always undefined. Although an undefined limit is easily handled by calculus, and permits of calculations, the result is unlikely to be anything precisely measurable or concrete.
If the limit is either undefined or zero everywhere, then there can be no curl in any population, and P/M simply cannot vary. Indeed, P/M cannot exist and there simply are no variations. Not only are there no variations, but there simply is no mass and no energy anywhere that could even be evaluated, and so assigned a limit, for since there is no curl there is no specific energy, and so also no entropy and no engeny. Biological entities are then both utterly undefined … and impossible. There is no limit about any proposed limit-point, and so no limit-point. The inexorable consequence is that templates and their properties can never at any time have any material presence for no limit-point has one. This is the equally inexorable consequence of a clear definition for energy density from a limit … which is somewhat paradoxically the most careful and the most rigorous of definitions in the entire pantheon of mathematics.
But should supporters of the proposal of the Aristotelian template still try to insist that there are indeed chemical reactions, and therefore variations, that the template has produced, but that are not caused by the energetic behaviour of any given and specific cells or limit-points, then this immediately provides a set of numbers or configurations for those given entities or cells in so far as they must still have mass, inertia, and energy. They can be measured. This provides a limit, along with a limit-point to which it is applied. That limiting value can then be compared to other limiting values and to other generation lengths, for they are all simply numbers no matter how derived. That proposed number from the Aristotelian template must then demonstrate itself to be more accurate, as a limit, than some other proposal, and no matter how the numbers for that other proposal are arrived at. This is then and immediately a question of molecules and their congruency with those traits. Even if molecules are also an abstract template, then they remain the cause of all material objects, and of all energy, for that is how energy and entropy are defined. They are demonstrably subject to the limit theorem of the calculus with all that that implies. Newton’s fourth law of reasoning then immediately comes into play.
There is a similar problem with the divergence, whose formal definition is (Weisstein, 2011):
∇ • P ≡ limV→0 (1/V) ∮S P • da.
As variations bring about curls and circulations, so do divergences and fluxes invoke questions about inheritance, and about the reproductive capabilities of populations and subpopulations over historical time. Divergences are invoked as the flux passes through the surface of interest.
Once again, and in the limit, if we begin to suspect, as evolutionary geneticists did, that a given group of progenitors is responsible for placing given traits into any population, then the answer from the supporters of the template proposal to all questions about that population’s reproductive prowess must be “no” … and increasingly so as the population size diminishes towards zero and we point at increasingly specific groups of limit-point entities. By the proposal, only a template, and not the reproductive abilities of any population or subpopulation, is responsible for each and every trait. Therefore, the limit stated is zero. And so therefore, the vector field representing the population must have zero divergence, because no population can be either a source or a sink; and nor can any population even give an impression that it is a source or sink. Once an impression is given, or a general indication concerning a given group of progenitors is in place, then a divergence can be computed in both time and space that gets ever more accurate as the sample size diminishes and the information improves, for this is about measurable limits. Thus the “mitochondrial Eve” hypothesis points to a population existing some 200,000 years ago which is now the limit for that particular divergence and as assigned to a collection of limit-points.
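The divergence limit behaves just as the curl limit does. A two-dimensional sketch (flux per unit area rather than per unit volume), using a hypothetical field P = (x³, 0) with divergence 3x², evaluated at the point (1, 0):

```python
import math

# The divergence definition as a limit: outward flux per unit area
# around a shrinking circle approaches the pointwise divergence.
# Analytically the ratio here is 3(1 + r²/4), tending to 3 as V → 0.
def flux_over_area(cx, cy, r, n=5000):
    """(1/A) ∮ P • n ds around a circle of radius r centred at (cx, cy)."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = i * h
        x = cx + r * math.cos(t)
        nx = math.cos(t)                  # outward unit normal, x component
        total += (x ** 3) * nx * r * h    # P = (x³, 0): P • n ds = x³ nx ds
    return total / (math.pi * r ** 2)

for r in (0.5, 0.05, 0.005):
    print(round(flux_over_area(1.0, 0.0, r), 4))
# 3.1875, 3.0019, 3.0 — converging on the divergence 3x² = 3 at (1, 0)
```

A nonzero value in this limit is exactly what marks a point as a source or a sink, which is what the template proposal must forbid.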
In their The History and Geography of Human Genes, Luca L. Cavalli-Sforza, Paolo Menozzi and Alberto Piazza add to this debate of origins for human populations by saying:
There have been several attempts at calculating world population sizes for the late Palaeolithic and these vary considerably. We use provisional estimates … [of] 400,000 to 800,000 for the period immediately before the expansion to Europe of a.m.h., increasing thereafter [to] 3–5 million, until 8000 b.c., when the Neolithic revolution began to expand [a.m.h = “anatomically modern humans”] (Cavalli-Sforza et al, 1994, p. 68).
These are populations of specified sizes of discrete limit-points. But by the template proposal, and granted this definition of a limit, it must not even be possible to conjecture a capability for any population for any trait, for if a broad limit can be conjectured, then a closer one will soon follow. Since impressions implying that any population could be directly responsible for any trait or feature whatever are impermissible—for otherwise a proposed limit exists and that could be ever more accurately computed—then the divergence must again be zero everywhere and at all times for all populations.
A divergence of zero for a vector field of this nature establishes another insuperable problem. A zero divergence means the field is solenoidal. No mass or energy can enter or leave anywhere, because to do so would allow given entities to begin looking as if they were at some time and in some place either a source or a sink, and so allow for even the vaguest of evolutionarily relevant calculations and limits such as that Phylum A might have diverged from Phylum B at some time in the Cambrian or Ordovician periods. This is the inevitable consequence of a divergence of any size, for it immediately indicates a limit. If a template again holds, then the mass flux must have completely uniform rates over all times and populations. Thus biological entities are simply not allowed to either dissipate or reproduce, for each of these institutes a divergence, be it positive or negative. The same holds for growth and development, for these involve changes in mass and energy fluxes, and they therefore allow for calculations of sources and sinks that will become ever more exact.
The above two circumstances mean that each and every biological population must now be described by a vector field that is both irrotational, or without curl; and solenoidal, or incompressible and without divergence. But … this is in fact theoretically possible. We simply use our three constraints of constant propagation, constant size, and constant equivalence. These describe our Brassica rapa population under exactly these conditions and are:
• P’ = 5.013 watts or joules per second for the constraint of constant propagation;
• R’ = 54.012 darwins or joules per biomole for the constraint of constant size;
• W = 164.720 watts per gram for the constraint of constant equivalence; and
• T = 36 days for the generation length.
The generation length is of course utterly without meaning, for this is an invariant and continuous flow of energy and materials. But the generation length allows us, in principle, to “slice” the streams. We can then accurately define the population by stating the generation totals of the mass and energy fluxes. We then have B. rapa’s full and complete description under these conditions of the proposal of the Aristotelian template. This is of course without any real biological meaning, for there are no births, no growths, no developments, no maturations and no reproductions. It is nevertheless the most accurate and realistic representation possible of the proposal of the Aristotelian template. Indeed, the description is unique, for no other population can have those values. By the Helmholtz theorem, every suitably smooth vector field U is the unique sum of an irrotational part and a solenoidal part, U = −∇φ + ∇ × A, where φ is a scalar potential and A a vector potential, once the boundary conditions—here, the unique generation length T—are stated.
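The Helmholtz decomposition invoked here can be illustrated numerically. In the following minimal sketch the potentials and all function names are our own illustrative choices, not values from the text: finite differences confirm that a gradient field carries divergence but no curl, while a rotation field carries curl but no divergence.

```python
# A minimal numerical sketch of the Helmholtz idea: a smooth vector field
# splits into an irrotational (curl-free) part, here the gradient of
# phi(x, y) = x**2 + y**2, and a solenoidal (divergence-free) part, here
# the rotation field (-y, x). All names are illustrative assumptions.

def grad_part(x, y):
    # irrotational component: grad(phi) = (2x, 2y)
    return (2 * x, 2 * y)

def rot_part(x, y):
    # solenoidal component: a pure rotation about the origin
    return (-y, x)

def divergence(f, x, y, h=1e-5):
    # central-difference estimate of d(fx)/dx + d(fy)/dy
    return ((f(x + h, y)[0] - f(x - h, y)[0]) / (2 * h)
            + (f(x, y + h)[1] - f(x, y - h)[1]) / (2 * h))

def curl_z(f, x, y, h=1e-5):
    # central-difference estimate of d(fy)/dx - d(fx)/dy (2-D scalar curl)
    return ((f(x + h, y)[1] - f(x - h, y)[1]) / (2 * h)
            - (f(x, y + h)[0] - f(x, y - h)[0]) / (2 * h))

# the gradient part carries all the divergence and none of the curl...
print(divergence(grad_part, 1.0, 2.0))  # ~4: a source
print(curl_z(grad_part, 1.0, 2.0))      # ~0: irrotational

# ...while the rotation part carries all the curl and no divergence
print(divergence(rot_part, 1.0, 2.0))   # ~0: solenoidal
print(curl_z(rot_part, 1.0, 2.0))       # ~2: a vortex
```

A field required to be both irrotational and solenoidal, as the template proposal demands, would have to show zeros in all four of these readings at every point.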
A prime difficulty for the proposal of the Aristotelian template is, of course, Maxim 1 of ecology, the maxim of dissipation—which is ∫dm < 0; ∇ • M → 0; M = nm̅. An irrotational vector field creates an impossible situation because biological entities cannot survive without work and energy. This immediately means a specific energy about a point-entity, and so a curl. And if a population without curl is impossible, then such a population is beyond direct testing. The same arises for divergence, because biological entities—even ones of this Aristotelian style—will certainly abide by the second law of thermodynamics. They will dissipate in the same way as all other molecular systems. In dissipating, they will institute sources for divergence. Thus the two requirements of irrotational and solenoidal do not allow us to conduct experiments. They do not allow us to contrast the essence of the Darwinian and Aristotelian proposals … although they do not necessarily make the proposal of the Aristotelian template beyond refutation. We can still formulate a suitable description and run an experiment on mass, energy and number. We need only consider the general limits and the general constraints the proposal seeks to impose, and then find evidence against them, even as such broad generalities.
If we speak the language of constraints, we can try to determine if a distinction between, for example, useful and non-useful variations—i.e. between the evolutionarily and the non-evolutionarily relevant—is possible, and if so what the magnitude of difference might be. These useful variations are, in Darwin’s eyes, what drive evolution. Cavalli-Sforza et al. define a mutation by saying:
At cell division, DNA is replicated so that each of two daughter cells generated by the division of one cell contains DNA that is practically identical to that of the parent cell, with very few errors in replication. Such errors are transmitted to progeny because the new DNA is the master from which all copies are made. Transmission error in the reproduction of DNA is called mutation. … Mutation in the dividing cell of an organism made up of many cells, like humans, may lead to an alteration of part of the organism, but it is not transmitted to descendants unless it occurs in germinal cells or gametes. Gametes are dedicated to the production of individuals of the next generation, and mutations occurring in them can be passed on to progeny and thus have evolutionary consequences (Cavalli-Sforza et al., 1994, p. 5).
But natural selection is, of course, the issue at stake. Cavalli-Sforza et al. still cannot offer either a formal and rigorous definition or a mechanism, but they do underscore its undoubted importance by insisting that it is:
… the only evolutionary factor that has direct adaptive consequences, because it is the automatic process sorting out and favoring useful mutations while eliminating deleterious ones. It thus makes the functional improvement of living organisms possible. …
Natural selection is the automatic choice of “fitter” types, which can eventually make an initially very rare type, a single mutant, the most common in a population, provided it is advantageous to the individuals carrying it. The complex adaptations we observe in living organisms would have essentially zero probability of spreading to whole populations and species by mere chance. Natural selection is responsible for these extraordinary functional adaptations and the complex mechanisms responsible for them. Before Darwin, and after him for people who have not really grasped the power of natural selection, these adaptations have often appeared, understandably, as the product of design, and hence of intelligent creation. Under closer scrutiny, biological adaptations are wonderful but clumsy, like the result of “tinkering” (Jacobs 1977), the accumulation of useful mechanisms not by design, but by trial and error, in a historical process, dictated by the chances of spontaneous mutations happening at particular times and places. When mutations offer acceptable solutions to the needs of organisms, they are adopted via natural selection. But they inevitably set later constraints on the further evolutionary process (see e.g. Crick 1988).
Seen at the most elementary level, natural selection is simply the automatic enrichment of populations in genetic types that produce more descendants, and impoverishment in those that produce fewer. The rate of change under natural selection can be predicted on the basis of the numbers of descendants of each genetic type, strictly speaking, the number of children reaching sexual maturity. This number is called Darwinian fitness and is based on demographic parameters like survival and fertility. It is usually expressed in a relative scale, comparing two or more phenotypes or genotypes in the same population. On the basis of Darwinian fitness of two genetic types, one can predict which type, if any, will prevail in the end, and the rate of the process of change in gene frequencies, provided fitnesses do not change over time (Cavalli-Sforza et al., 1994, pp. 11–12).
We can discuss the proposal of the Aristotelian template’s relevance and take the measurements we need by proceeding as follows. In 1850 Rudolf Clausius recast thermodynamics by using an integrating factor to distinguish between its first two laws, which had until then been conflated (Encyclopaedia Britannica, 2002). He thus confirmed that the energy of an isolated system can remain constant even as its entropy ‘strives towards a maximum’ of stable equilibrium.
Discoveries since Clausius—the work of Boltzmann, Waterston, Maxwell and others—demonstrated that entropy is intrinsically molecular. Both entropy and the Boltzmann constant, kB, have the dimensions of joules per kelvin. The Boltzmann constant is effectively a statement of the thermal energy carried by molecular entities. And not only is entropy about molecules, it is always evaluated with respect to some imposed constraint acting upon those molecules … which is most usually the environment.
Clausius then applied his new concept of an isolated system to the internal energy, U, developed by Helmholtz who had rigorously distinguished between the constant pressure and the constant volume expansions of systems and as previously described by Mayer (Mayer, 1841). Clausius then showed that although an isolated system—i.e. one not in contact with any environment—might not suffer loss, it was still free to move about and transform itself between its allocated number of explored and explorable microstates. He then defined a limit form of cyclic process where the initial and the final end states were identical and so where entropy, as he conceived of it, could similarly go through a cycle of exploring all possible microstates and yet still return to its initial value. But once there was an environment, that environment would immediately impose limits on what was possible by removing allowed microstates. The modern understanding is that in the Carnot cycle he studied so closely, these microstates were first removed, and then introduced.
In 1909, the Greek mathematician Constantin Carathéodory reworked thermodynamic laws with a much improved understanding (Encyclopaedia Britannica, 2002). He successfully avoided usage of either of the terms ‘heat’ or ‘energy’. His version of the first law of thermodynamics—which both announces the existence of energy and defines its properties—is:
an extensive property exists whose increment is the work received by a system while surrounded by an adiabatic wall (Encyclopaedia Britannica, 2002).
Figure 63 shows the consequences of this definition. The adiabatic wall—i.e. non-heat-transmitting—means that the system is separated from the environment. By the ideal gas law, PV = nRT, when mechanical work is done to drive the piston inwards, the volume decreases; the pressure increases; and the temperature also increases. The change in PV results in a change in T, which is the net absorption of heat energy. This is the extensive property called energy. Its precise measure is now the mechanical work done in moving the piston. As Mayer first taught, the one was transformed into the other, and this is again its definition. Energy—of whatever kind—is the property evolved when mechanical work is done on a suitable system held in adiabatic conditions throughout. Mechanical work affects molecular behaviour and has been transformed into heat … with both being forms of energy.
There is, however, another way of interpreting the piston’s movement. This is to propose that the walls are now porous to heat. As the piston is moved, all the heat—i.e. the imposed molecular activity—now leaks out into the surroundings. In this new situation, as the piston is moved and the same quantity of mechanical work is done, the system is placed under pressure but the temperature remains the same, for the heat energy created is instead absorbed by the environment. That environment now increases its number of microstates.
Although the pressure has still increased, while the volume has decreased, these two situations are not the same. The extensive property energy—in its specific manifestation as heat—has still been produced by the mechanical work, but much of it has on this second occasion been utterly lost to the environment through those porous walls. Therefore, although the two systems in their final states have the same volume and pressure, the former is hotter and contains more heat energy, whereas the second has surrendered that evolved heat energy and so has remained at the same temperature as the environment. Its energy has increased, but definitely not in the same way.
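The contrast between the two walls can be made quantitative. A minimal sketch for one mole of a monatomic ideal gas compressed to half its volume—the gas, the starting temperature and the compression ratio are illustrative assumptions, not figures from the text:

```python
import math

# Contrast the two piston thought-experiments: an adiabatic wall versus a
# porous (diathermal) wall, for one mole of a monatomic ideal gas
# (gamma = 5/3) compressed to half its volume. Illustrative values only.
R = 8.314462                 # universal gas constant, J/(K.mol)
gamma = 5.0 / 3.0            # adiabatic index of a monatomic ideal gas
T_i, V_ratio = 298.15, 2.0   # initial temperature; V_initial / V_final

# Adiabatic wall: no heat escapes, so T.V^(gamma - 1) is constant and the
# gas ends hotter than it began.
T_adiabatic = T_i * V_ratio ** (gamma - 1)

# Porous wall: the evolved heat leaks out, the temperature stays at T_i,
# and the environment absorbs Q = n.R.T.ln(V_i / V_f) per mole.
T_isothermal = T_i
Q_surrendered = R * T_i * math.log(V_ratio)   # joules lost per mole

print(round(T_adiabatic, 1))    # ≈ 473.3 K: the hotter final state
print(round(Q_surrendered, 1))  # ≈ 1718.3 J/mol given to the environment
```

Same mechanical work, same final volume: one system retains the energy as temperature, the other surrenders it to the surroundings, which is exactly the asymmetry the passage describes.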
Carathéodory has now suitably defined energy and next deals with entropy. His version of the second law of thermodynamics is the principle of Carathéodory and states:
… in every neighborhood of any state S in an adiabatically isolated system there exist other states that are inaccessible from S (Honig, 1999, p. 83).
This seminal idea of the inaccessibility of states can now be used to define the similar constraints that uniquely identify—and that therefore separate—the Darwinian and the Aristotelian populations.
In the first heat pump situation—where the walls are adiabatic and there is no heat exchange with the surroundings—then since the heat energy is fully retained we can recover the original situation simply by reversing the work done, i.e. by doing negative mechanical work, to restore the piston and system to their original conditions. The heat energy originally absorbed will be surrendered back to the same system as that original work is also undone. No extra work has to be done. There will be precisely reversible, and corresponding, changes in pressure and in volume as the pressure reduces and the volume increases back to their original values. The temperature will also reduce to its original. And since the original situation is entirely recoverable—the two situations balance out and reverse—then the two have the same entropy or are “isentropic”. In spite of their differences in P, V and T, they belong to a common isentropic set. The initial and final states are mutually accessible through the same mechanical work, and this is their shared neighbourhood.
The second situation is entirely different. Since heat energy was surrendered to the environment, then the piston cannot be returned to its original situation simply by reversing the original work done by the piston. When we have undone the original work, the piston will not be at its original location. Extra work will now have to be done to compensate for the heat energy lost to the environment when the piston was first moved because the heat energy concerned was surrendered as the system remained at its original temperature. Thus in this second situation there was a change in entropy, and the magnitude of that entropy change is the exact measure of the heat energy lost, and also of the extra work that will now have to be done to restore it to its original position. A given number of microstates have been lost—and surrendered to the environment.
The two situations, taken together, tell us that although many kinds of processes that satisfy the first law of thermodynamics are possible, the only ones that could occur in the real world are those for which (a) the entropy remains constant, which is the perfect or ideal case first analysed by Carnot and Clausius; or else (b) all other cases in which it increases. And entropy will always increase because the ideal situation requires no losses in friction, such as in moving the piston, no losses to the environment and so forth. Thus the number of microstates surrendered to the environment by any system or event is always greater than zero and can never be less than zero, even in the maintenance of a steady state cycle. Any real steady state cycle will still increase the net entropy of the environment as that cycle restores itself to as close to its initial conditions as possible for there will always be losses that must be overcome. The kind of frictionless template needed for zero entropy simply does not exist.
The ideal cases studied by Carnot, Clausius, and the early thermodynamicists can still differ from each other, even though all remain perfect or ideal. They can differ due to the amounts by which their respective pistons move. This is entirely due to differences in the specific heats of whatever substances form their working media. There will always be a specified amount of substance stated in moles, which thus declares a given number of microstates. Mayer was the first to propose that the constant pressure, CP, and constant volume, CV, specific heats of any given substance differ due to the mechanical work that is done by the former, and is not done by the latter. He also suggested that they were linked through the Mayer relation, CP = R + CV, where R is the universal gas constant, whose value is 8.314 462 1 joules per kelvin per mole. Since CP and CV—the two heat capacities—differ only by R, the Mayer relation states that a substance’s thermal behaviour depends entirely upon the number of its molecules, which is its amount of substance. The behaviour under energy is completely independent of the precise nature of any substance, and is entirely quantitative. By the first law of thermodynamics any energy exchange undertaken under one heat capacity can in principle be substituted for by the other. They are separated only by a change in magnitude. The heat needed for a CV interaction is always less than for a CP one. Or alternatively … for any change undertaken by CP, at constant pressure, it is always possible to increase the number of moles and then produce an equivalent change instead undertaken by CV. Thus the two differ only in the amounts of substance required and by a set proportion or index depending on the difference between CP and CV, no matter what the scale.
Although thermal behaviour depends only on the amount of substance present, substances can still differ in the rates at which they individually respond. They all have different specific heats. These differences help determine the prevailing entropy for each substance. They govern the precise amounts by which any given substance will expand under heat—and so therefore also under mechanical work—to produce changes in volume and pressure. The ratio between any given substance’s two specific heats—its CP and its CV—is its “heat capacity ratio”, or “adiabatic index”, or “isentropic expansion factor”. It is given by γ = CP/ CV.
We now have both CP − CV = R and γ = CP/CV. We can therefore set R = CP(1 − (1/γ)). That is to say, the precise rate or amount by which a substance will expand, when mechanical work is done, depends only upon its expansion factor, γ, and its amount of substance, with the relation between them being a universal constant applicable over all substances and all scales.
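Both identities are easy to check numerically. A minimal sketch for a monatomic ideal gas, whose heat capacities CV = (3/2)R and CP = (5/2)R are standard textbook values rather than figures from the text:

```python
# Check the Mayer relation and the derived identity R = CP(1 - 1/gamma)
# for a monatomic ideal gas (standard values, an illustrative choice).
R = 8.3144621                 # universal gas constant, J/(K.mol)
C_V = 1.5 * R                 # constant-volume heat capacity
C_P = C_V + R                 # Mayer relation: CP = R + CV
gamma = C_P / C_V             # adiabatic index, 5/3 for this gas

# R = CP(1 - 1/gamma) follows by substituting CV = CP/gamma
assert abs(R - C_P * (1 - 1 / gamma)) < 1e-12
print(round(gamma, 4))        # 1.6667
```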
Given that each substance is composed of a specified number of particles, and with a given density, then when each such substance is under any given compression: (a) its final and initial temperatures and pressures have the ratio Tfinal/Tinitial = (Pfinal/Pinitial)^(1 − 1/γ); and (b) its final and initial volumes and pressures have the ratio Pfinal/Pinitial = (Vinitial/Vfinal)^γ. These are the sole reasons for all proposed qualitative changes under energy. The amount of substance is immaterial, and these relations hold true for all substances, again quite irrespective of type.
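These two ratios are mutually consistent: substituting one into the other recovers the familiar T·V^(γ−1) = constant form. A short numerical check, with an illustrative gas and compression ratio of our own choosing:

```python
# Verify that the two adiabatic ratios agree: if P_f/P_i = (V_i/V_f)^gamma,
# then T_f/T_i = (P_f/P_i)^(1 - 1/gamma) must equal (V_i/V_f)^(gamma - 1).
# The gas and compression ratio are illustrative assumptions.
gamma = 1.4                   # diatomic gas, e.g. air
compression = 10.0            # V_initial / V_final

P_ratio = compression ** gamma
T_ratio_from_P = P_ratio ** (1 - 1 / gamma)
T_ratio_from_V = compression ** (gamma - 1)

assert abs(T_ratio_from_P - T_ratio_from_V) < 1e-9
print(round(T_ratio_from_P, 4))   # ≈ 2.5119: a tenfold adiabatic
                                  # compression of air heats it ~2.5x
```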
We have now demonstrated that the volume change elicited in any given substance by an energy transaction depends entirely on the ratio between its two heat capacities. By the same token, its constant pressure and constant volume processes can produce entirely different changes in temperature, depending upon the substance. And since temperature is an integrating factor, such that dS = δQ/T, the change in entropy over any given temperature change is Sfinal − Sinitial = ΔQ/T.
Given that we have two different heat capacities, which respond differentially, the net change in entropy over any given dT can be evaluated in two different ways. It is either dS = CV(dT/T) + R(dV/V) or else dS = CP(dT/T) − R(dP/P). The universal gas constant is used on both occasions, meaning that any change due to a change in amount of substance—i.e. in the number of particles maintained—contributes directly to the net change in entropy. The total contribution from such a loss can be evaluated, and that contribution depends upon the scale of the differences between the two heat capacities.
If we now heat a substance from a given temperature, T, we undertake the temperature change dT; and if we apply to that change the constant volume specific heat, CV, we can produce the entropy change, dS, by applying the universal gas constant, R, to the accompanying volume change dV/V. If we instead apply the constant pressure specific heat, CP, to that same temperature change, we can apply the gas constant to the change in pressure to produce the same entropy change. In either case, the universal gas constant allows us to track any changes in particle numbers, and it determines for us the contribution made by those relative changes to the net entropy change.
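The two bookkeepings can be verified to agree. A minimal sketch for one mole of a monatomic ideal gas taken between two illustrative states, using the standard forms dS = CV(dT/T) + R(dV/V) and dS = CP(dT/T) − R(dP/P):

```python
import math

# Two routes to the same entropy change, for one mole of a monatomic
# ideal gas taken from (T1, V1) to (T2, V2). States are illustrative
# assumptions, not measurements from the text.
R = 8.3144621
C_V, C_P = 1.5 * R, 2.5 * R    # heat capacities, J/(K.mol)

T1, V1 = 300.0, 0.010          # initial state: K, m^3
T2, V2 = 450.0, 0.015          # final state

# pressures from PV = nRT with n = 1
P1, P2 = R * T1 / V1, R * T2 / V2

dS_via_CV = C_V * math.log(T2 / T1) + R * math.log(V2 / V1)
dS_via_CP = C_P * math.log(T2 / T1) - R * math.log(P2 / P1)

assert abs(dS_via_CV - dS_via_CP) < 1e-9   # same change, two bookkeepings
print(round(dS_via_CV, 3))                 # ≈ 8.428 J/K per mole
```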
We now turn our attention back to biology. We have already determined separate energy values and constants for the mechanical and the non-mechanical chemical energies used by biological organisms, which are fully equivalent to CP and CV. We also already have a Franklin constant fully equivalent to the Avogadro number.
Carathéodory’s version of the first law of thermodynamics uses mechanical work to declare the existence of energy. Using the same definitions for freedom from the environment, we can honour that link between energy and mechanical work by recognizing the existence, in biology, of mechanical chemical work, F1. Law 1 of biology, the law of existence, emulates this exactly. It similarly recognizes the link between general mechanical work and chemical mechanical work by saying:
There is an entity such that it must always lift a weight; and such that it must, and by this means, at some time increase in its mass.
In other words, biological populations possess an extensive property expressed through chemical mechanical work whose increment is the mass their limit-points gain when surrounded by the biological equivalent of an adiabatic wall, so that they are free from the environment. There is now no other change in state—including no change in their numbers. Mechanical chemical work allows biological populations to increase the number of components held by persuading those chemical-bond-sets that contain their point-energy-centres or limit-points to use energy to increase their stocks of chemical bonds, through adding further components, but with any other factors that might cause changes in the configurations of those bonds remaining constant. The work rate must be increasing, for both the number of components and the net stock of microstates have increased.
Carathéodory’s version of the second law then uses the same adiabatic wall to define entropy through differences in states. By an elegant and sophisticated argument, the principle of Carathéodory distinguishes between those states that are entirely internal, and those that require an interaction with the environment (Honig, 1999, pp. 65–81). The entirely internal ones share a neighbourhood of sets and do not change entropy; and so are fully reversible. But in neighbourhoods very close to those entirely internal states are other states requiring interactions with the environment, and that are therefore not fully reversible without further injections of energy, and so of mechanical work.
The reversible changes Carathéodory successfully highlights require only the constant volume or CV heat capacity to effect them. These can be the reversible phase changes first discovered by Black: his latent energy that initiates changes in state. If these are to remain fully reversible, and so without need for any extra injections of energy, then they must not incur any penalty whatever. They must happen without loss. The only possible recovery from any such losses is through CP, and so through additional mechanical work: an extra tug on the piston, even if only infinitesimal, to make up that loss and difference. Therefore all CV movements must happen perfectly, and with no losses whatever.
In biological terms, these constant volume specific heat changes are those changes involving only F2, non-mechanical energy. This is a reshuffling of a stated number of biological components as chemical bonds are reworked in various developmental and sexual maturation processes. These are again distinct from mechanical chemical energy, which is instead a net change in the total number of components held. Since no biological population can survive without at least some exercise of the aforementioned mechanical chemical energy, to first acquire the components that will ultimately become progeny, this necessary distinction is recognized in our Law 3 of biology, the law of diversity:
The sum of all the paths that satisfy Law 2 constitutes the allowed set for the entity and its equivalents; while that which permits them to satisfy Law 1 constitutes the required set.
By the first and third laws of biology, every entity is constantly required to exercise mechanical chemical energy, which is an interaction with the environment and is given by Mdt. By the first law of biology, this can never be zero. A mass of components must always be carried over time, and for every moment of time there is a mass of components. This mass flux undertakes such acts as respiration, digestion, excretion and the like. These are all work interactions requiring the exchange of components with the environment, so that biological entities can then do a more internal metabolic work. Thus other, non-mechanical chemical processes, such as development and reproduction, are “allowed” to biological entities … but the entities must all still express a mechanical component—i.e. an ongoing work-based and mechanical transaction with the environment.
To return to Carathéodory’s original analysis: if any constant pressure, CP, transactions and transitions in states are also not perfect, and if additional energetic transactions are necessary to offset any losses, such as the friction invoked while moving the piston itself, then any additional molecules and microstates taken on to offset such losses must in their turn undergo their own additional CV interactions, so that they can complete the given cycle. There must be an increase, over the ideal, in the usage of CV because of those losses, which are mechanical. Thus if extra mechanical work is indeed done to replace any energy lost, then the universal gas constant—which bespeaks a simple increase in amount of substance, and thus a straight molecular count—will accurately measure the extra thermal energy concerned, for the universal gas constant measures all thermal effects as concerns both their pressure and their volume. The one is substitutable for the other, with only a difference in scale separating them. The environment’s inexorable effects on the system are therefore measured by the increase in entropy, which is that increment in pressure, and in volume, occasioned by the extra mechanical work undertaken to compensate for any lost components, no matter at what stage they are lost. Only the perfect or fully reversible case can happen without a net change in entropy … which simply means that there has been a complete cycle involving work and heat, one which has had no net effect on the environment in that it is fully returned to its original state and both CP and CV have been perfectly used.
A perfect biological cycle similarly means that there have been no losses whatever in numbers of entities or components anywhere throughout the cycle. Some template is in force so that all remains unchanged. There has therefore been no usage of either mechanical or nonmechanical chemical energy to replace any entities lost. But should any entities lost in either the mechanical or nonmechanical stages in fact be replaced, then it is now easy to compute the net effect. We can do so through engeny, and through the similar usage made of biological potential.
Since each substance has a specified number of particles, each of which can have a different mass, each substance behaves differently under heat and/or under mechanical compression. This is reflected in the entropy in that different quantities of energy are diverted into heat and mechanical movement. The combination of initial and final temperatures and pressures is unique to each substance and depends upon its compression ratio, (Vinitial/Vfinal). Since the net change in entropy, dS, depends upon the number and density of particles, that entropy change is equally well stated in terms of those particles. It is thus given by dS = CP loge(Tfinal/Tinitial) − R loge(Pfinal/Pinitial). These two different expressions for the entropy are related through (a) natural logarithms (loge) and (b) the thermal energy of Boltzmann’s constant: kB = R/NA = 1.3807 × 10⁻²³ joules per kelvin. The Boltzmann constant is the physical constant relating individual particle energies to the prevailing temperature, and again has the same units as entropy. Multiplying the Boltzmann constant by the Avogadro number produces the universal gas constant, R … which then states the thermal energy held per mole of any substance at that given temperature.
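The chain of constants used in this passage is a one-line computation. A sketch, using standard values for R and the Avogadro number:

```python
# The chain of constants: kB = R / NA, and R = kB . NA then gives the
# thermal energy per mole. Standard values throughout.
R = 8.3144621            # universal gas constant, J/(K.mol)
N_A = 6.0221367e23       # Avogadro number, 1/mol

k_B = R / N_A
print(k_B)               # ≈ 1.3807e-23 J/K, the Boltzmann constant

# the molar thermal energy scale at 25 degrees Celsius (298.15 K)
print(round(R * 298.15 / 1000, 3))   # ≈ 2.479 kJ/mol, about 0.59 kcal/mol
```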
Since we already have a Franklin constant, and can also distinguish mechanical from non-mechanical energy, it is now trivial to apply these same methods to biology. Every biological entity is a chemical-bond-set, with each population then being a set of chemical-bond-sets.
‘Habitat’ is used in many different ways. But from our perspective, its most important characteristic is that it is the resource base from which a given population extracts the energy it needs to maintain its Wallace pressure, P. Our Gaussian surface represents the energy flow through that habitat.
The energy received by the Earth, at temperate latitudes, is of the order of 100 joules per second. This can be taken as representative. It is now a reference and a definition for biology. This energy received therefore now serves duty as our ‘standard habitat’. It is now the reference habitat to which all others can be compared … and which makes them all measurable.
Now we have determined a habitat and energy source, we need to define our ‘standard entity’. The average terrestrial eukaryotic cell has a mass of m = 10⁻¹² grams. Whatever activities it might undertake, its generation length can be taken as T = 1,000 seconds … which is now also a standard of reference for all biological organisms. Thus if a given biological organism has a generation length of 2.78 hours (approximately 10,000 seconds), then we can say that T’ = 10 since its generation length is 10 times as long as the reference length. It will in other words use ten times the energy used by our standard or reference cell to complete its cycle … or our reference cell would need access to ten times the mass and energy to achieve the same purpose. We now have a standard of measure.
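The scaling rule for the standard entity reduces to a single division. A minimal sketch, where the helper name t_prime is our own:

```python
# Generation lengths expressed as multiples of the T = 1,000-second
# reference cell described above. The helper name is illustrative.
T_REFERENCE = 1_000.0                 # seconds, the standard cell

def t_prime(generation_length_seconds):
    # dimensionless generation length relative to the reference cell
    return generation_length_seconds / T_REFERENCE

# the worked example from the text: 2.78 hours is approximately
# 10,000 seconds, so T' comes out at (roughly) 10
hours = 2.78
print(round(t_prime(hours * 3600), 3))   # 10.008
```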
Now we have a reference cell, we must attend to its energy needs: its metabolism and its physiology. At a representative 25° Celsius, biological organisms contain matter whose thermal energy is of the order of 0.5 kcal per mole. Glycine, as an example, has an energy content of approximately 979 kilojoules per mole (Haynie, 2001, p. 12). It is thus in principle possible to establish a “Virchow constant” or similar, whose value we define as kV = 1,000 kilojoules per mole of whatever biological chemical components are concerned. It states the needed energy in thermal terms … and as a known multiple of the Boltzmann constant, a fundamental constant of nature. Our reference cell now contains a known quantity of mass and energy which it can use for its sundry biological purposes.
The Virchow constant is now a reference value for the watts of energy that our “standard cell” uses to maintain itself and to reproduce its progeny, when held under “standard conditions”. Our reference cell (a) contains q = one mole of components; and (b) configures them so it can produce p = 1,000 joules per mole per second. Setting the Virchow constant as a multiple of the Boltzmann constant states the thermal energy capable of driving the biological affairs of our m = 10⁻¹² gram cell over its T = 1,000 seconds, no matter what may be the specifics of its interior construction, its habitat, or its generalized ecology. This Virchow constant, kV, can then provide a total energy reference, for a generation, directly related to the behaviour and the thermal energy of the particles constituting those cells, and again in direct reference to the Boltzmann constant, with all that that implies.
Many entities are multicellular. Now that we have a reference cell, we need a reference entity; and we also need a reference population or number of such entities and cells. As for the latter, we already have our one biomole of entities. And as for the former, estimates for the number of cells in the human body vary from 50 to 75 × 10¹², or trillions (Englebert, 1997). We can therefore propose a reference entity whose cell count is 6.022 136 7 × 10²⁰ cells (or else some convenient and specified subset of the Avogadro number). By taking, for example, 6.022 136 7 × 10²⁰ cells, one biomole of such entities will give us exactly the Avogadro number of cells within that population, all of which also abide by the Boltzmann constant. This population number now establishes the “engenetic constant”, Ω … and also carries the Boltzmann constant straight through into biology to join the Avogadro number, and so to allow indisputable access to molecules and to entropy.
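The cell-count bookkeeping can be checked numerically (a sketch assuming, as the text’s later Brassica rapa figures state, that one biomole is n = 1,000 entities):

```python
# Sketch: the reference entity's cell count, times a biomole of
# entities, recovers the Avogadro number of cells. The figure of
# 1,000 entities per biomole follows the text's later usage.
CELLS_PER_ENTITY = 6.0221367e20    # reference entity's cell count
ENTITIES_PER_BIOMOLE = 1_000       # one biomole of entities
AVOGADRO = 6.0221367e23            # target: cells per biomole

cells_per_biomole = CELLS_PER_ENTITY * ENTITIES_PER_BIOMOLE
print(abs(cells_per_biomole - AVOGADRO) / AVOGADRO < 1e-12)  # → True
```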
We have achieved a noteworthy result. Our engenetic constant, Ω, is now biology’s equivalent of the universal gas constant. It links the attributes of all populations directly to the Boltzmann constant and so to their molecular components as they go through the biological cycle. All biological organisms and populations can now be directly compared in terms of this standard mass, energy and particle number of a reference entity and a reference population, all composed of reference cells with known and determined quantities of mass, energy and numbers … and also engaged in reproduction over a specified period.
We have noted that average individual mass is the integrating factor for engeny. Biological potential, µ, is the rate of change of engeny, dS. Now that we have established a set of suitable reference constants, then the change in engeny over any given change in average individual mass can be given as dS = χ(dm̅/m̅) + ΩT’(dV/V) or else as dS = κ(dm̅/m̅) + ΩT’(dP/P) where χ and κ are, respectively, the engenetic burdens of conformation and of components mass for that population; Ω is our new engenetic constant; V is the visible presence; P is the Wallace pressure; and T’ is the ratio between a given population’s generation length and our reference one. We have now stated the complete behaviour of any arbitrary population in terms of our reference one, and therefore in terms of important and known physical constants. This is completely general.
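The first form of the relation can be written out as a small function. Only the functional form dS = χ(dm̅/m̅) + ΩT’(dV/V) comes from the text; every numeric value below is a hypothetical placeholder:

```python
# The engeny relation in its first form, dS = χ(dm̄/m̄) + Ω·T'·(dV/V).
# All sample numbers are hypothetical; only the form is the text's.

def engeny_change(chi, omega, t_prime, dm_over_m, dv_over_v):
    """dS = χ·(dm̄/m̄) + Ω·T'·(dV/V)."""
    return chi * dm_over_m + omega * t_prime * dv_over_v

# Hypothetical: χ = 2 J, Ω = 1 J, T' = 1, and a 1% rise in both ratios.
print(round(engeny_change(2.0, 1.0, 1.0, 0.01, 0.01), 4))  # → 0.03
```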
According to the proposal of the Aristotelian template we can take populations of any size. We can then measure: their average individual masses, m̅; their Wallace pressures, P; and their visible presences, V. According to that proposal, the only difference between their various extensive values for their mass and energy fluxes, M and P, and no matter what their stages of development, will be by scale. That scale will be due to their numbers, configurations and mass. Again according to the template proposal, all population values are entirely dependent upon m̅ and n—but they show no independent variation due to n alone when n varies. All values measured are also irrespective of individual configurations at each stage. And since this supposedly holds for all populations, then they are all comparable.
We now, however, have standards of measure; with a variety of defined constants. We can therefore express them all in ways that will expose their comparability. By the first law of thermodynamics all energy is equivalent, whether it is mechanical or nonmechanical, and no matter how many entities are involved. For every entity in any population that transacts energy, we can now match it with some change in some determinate population in our reference entity … and this holds universally.
Although the precise values for χ and κ will vary for each population, they are nevertheless both expressed in joules. They therefore only differ, in and for all populations, by a specified number of biomoles of entities per joule incurred. The energy used for a population change based upon one value, be it mechanical or non-mechanical, can be replicated by a change based upon the other, with the two differing only in scale … which is entirely quantitative and so simply by the number of entities concerned. This is the intent of the first law of thermodynamics, and was discovered by Mayer. Mechanical chemical energy is therefore equivalent to non-mechanical chemical energy. They again differ only by the numbers of entities, n, required to effect each. They can always be freely substituted for each other. Therefore, all such changes can be isolated, measured, and compared across all populations.
If the proposal of the Aristotelian template holds then neither κ nor χ, nor their effects, may change as numbers or population sizes change. We must in other words be able to compute an “isengenic expansion factor” applicable to all populations. This is suitably measured by κ/χ and 1 − (χ/κ). These biological expansion factors must now be constant over all conceivable biological entities, and over all sizes and stages of development, thus making all expansion factors the same over all possible species, populations and values. All this can now be measured, to test the hypothesis.
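The two expansion factors are a one-line computation (the sample burdens κ and χ below are hypothetical):

```python
# The isengenic expansion factors κ/χ and 1 − (χ/κ). The template
# proposal predicts these are identical across all populations;
# the sample burdens are hypothetical.

def expansion_factors(kappa: float, chi: float) -> tuple[float, float]:
    """Return (κ/χ, 1 − χ/κ)."""
    return kappa / chi, 1.0 - (chi / kappa)

ratio, complement = expansion_factors(5.0, 3.0)  # hypothetical κ = 5, χ = 3
print(round(ratio, 4), round(complement, 4))  # → 1.6667 0.4
```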
We can now be much more specific about the proposal of the Aristotelian template. It is insisting that the two values dV/V and dP/P show no changes due simply to n. That is to say, since we must evaluate either ΩT’(dV/V) or else ΩT’(dP/P) to evaluate the total change, then N (which is n expressed in biomoles) must be the same on all occasions. If the proposal is to hold good, then the dN/dt contribution to any change in engeny must always be zero. Assessing this is of far greater significance. All population changes must then depend only upon κ and χ, so being independent of any possible scaling by numbers. But … we already know this supposition to be false. Brassica rapa’s numbers measurably decline between the seed and the fruit stages, and so in that case there are definite contributions to both dV and dP from n.
The above phenomena are very easy to measure. We simply select any test species, such as the well-known semelparous organism Brassica rapa; hold its conditions as steady as possible; and take measurements.
But it is in fact more constructive to temporarily set the proposal of the Aristotelian template aside, and deal with a far more general issue. This is a matter of constraints. Where the proposal of the Aristotelian template is specifically about number-based constraints, which are in addition to any imposed by the environment, the only constraint in the Darwinian proposal is the environment. There is—and can be—no other constraint, no matter what its size or origin. As a general scientific principle, if biological populations are subject to any constraint whatever then the source must be found … even if it is this proposal of the Aristotelian template. We therefore need a far more general method of applying and detecting constraints. If biological populations can be shown to be free from any such more generalized and arbitrary constraint then they will certainly be free from any more specific constraint … such as the conjectured Aristotelian one. We therefore deal more generally with constraints.
We still have our n = 724 flowering plants. They enjoy a biological potential of µ = 1.949 × 10⁻⁴ watts, which they allocate amongst their members according to Maxims 3 and 4 of ecology. The Darwinian position is now simple. When one amongst them is lost, the population’s potential; its gradient; and its total energy are all affected. Since entropy and engeny must increase, then each entity will strive to take on the energy now made available. This will be reflected in a change in biological potential, µ, over the survivors.
According to Darwin and his supporters the population will now strive to attain the maximum possible values for entropy and engeny. But this is now simply our work rate, W. According to Maxim 4 of ecology, this means maximizing masses, energies, and numbers. Each of the survivors will therefore seek to increase its individual biological potential to the maximum. By Law 4 of reproduction, this includes maximizing those reproductive pathways accessed through non-mechanical chemical energy. This will allow the population, in the future, to exploit the resources being made available to the maximum possible degree. But it is more generally achieved simply by maximizing entropy, which is again the work rate, W.
Our general biological function is f(n, q, w). Since biological potential is an intensive property—it is the Lagrange multiplier for particle and entity numbers—it shows itself extensively in (a) the numbers of entities maintained, n; (b) the numbers of components maintained per entity, q; and (c) the configuration of those components, w. These will each be maximized. Since it is intensive, the biological potential to achieve this is shared out amongst the survivors. It increases by µ̅ = µ/n, and so by 2.688 × 10⁻⁷ watts in this specific case. Each of the entities will therefore arrive at the next t such that when the population is measured, their conjoined numbers, numbers of components, and energy activities will reflect that increased value.
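The apportionment arithmetic can be checked directly. Note that the printed figure of 2.688 × 10⁻⁷ watts corresponds to division by the pre-loss count of 725 entities, the state the population is later said to have held immediately before the chosen point:

```python
# Check of the apportionment figure: µ = 1.949e-4 W shared per entity.
# Division by the 725 pre-loss entities reproduces the printed value.
mu_total = 1.949e-4     # biological potential, watts (from the text)
n_pre_loss = 725        # entities immediately before the loss

mu_bar = mu_total / n_pre_loss
print(f"{mu_bar:.3e} W")  # → 2.688e-07 W
```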
And … this is precisely where the proposal of the Aristotelian template disagrees. According to it, the only permissible increases in mass and energy will be due to the governing template. Gains and/or losses in numbers have no effect, and so no increment due to ∂n/∂t is allowed to biological potential. We must therefore subtract ∂n/∂t to produce the “correct” predicted value for biological behaviour. We must in other words always determine, at every stage in the cycle, a suitable directional derivative for number, and then subtract it from biological potential to determine the Aristotelian constraint’s effects.
However … biological potential is simply measured in watts. And so since the increase in biological potential, which is the change in gradient of ∇f, is simply the sum of ∂f/∂n, ∂f/∂q, and ∂f/∂w, then we can very simply subtract the ∂n/∂t component. This is easily determined—as we did in our Brassica rapa experiment—by counting entities. We again then immediately know how much energy should be made available to any population. That is the Aristotelian constraint.
We have a clear and measurable value for the ∂n/∂t component of biological potential, and at every stage. It can always be measured, determined, and applied. The biological entities concerned are all too visible, and they can be counted, weighed and measured … just as we did with Brassica rapa.
But since there is a much larger principle at stake than simply the proposal of the Aristotelian template, we introduce the ‘quantum of Aristotle’, qA. This is a method for applying constraints. Along with it goes a matching change in biological potential, µq.
If the proposal of the Aristotelian template holds, then biological potential may never increase by the maximum possible degree. It can only ever increase by the quantum of Aristotle, which is from µ to (µ + µq), and where µq depends upon ∂n/∂t. The quantum of Aristotle is thus an attempt to apply any “reasonable” constraint to biological populations. The quantum then potentially restricts biological entities to that constraint. Predictions can now be made against it and compared to observed behaviour. If that observed behaviour exceeds and/or ignores any arbitrarily imposed constraint of this kind, then it will ignore all other and all possible constraints no matter what their proposed source.
The proposal of the Aristotelian template may now try to constrain biological potential, and to refuse to allow it to follow the directional derivative of number, but a wattage is just a wattage. We can thus handle the proposal with the “concession of the quantum of Aristotle”. By this concession, we shall generously allow to any population whatever an increase in its biological potential of any given arbitrary value. We shall concede that as being consonant with, and due to, whatever is the constraint. We could for example incorporate an estimate, along the lines of the Liouville theorem, that is “large enough” to allow Darwin’s finches to change their beaks; but that is not so large that it could be “evolutionarily relevant”, however that is defined. We can now, and in principle, introduce any such constraints.
The concession of the quantum of Aristotle accepted, let us begin with one-half. Even though a change in biological potential incorporating such a magnitude—i.e. 50% of the increase made available when an entity is lost—already causes severe difficulties for the Aristotelian proposal, because it is still entirely dependent on n, we will nevertheless agree to overlook such gains and to assume that they are “variations” characteristic of the template. The proposal is that this allows birds’ beaks and other such traits to change, without allowing them to affect the boundaries between species.
We therefore now, and arbitrarily, stipulate that we will allow the biological potential to increase, at any point, by any value up to µq = qA = µ/2(n + 1) without interpreting it as due to a change in n. Thus in our given Brassica rapa example, the quantum of Aristotle at our chosen point is approximately one-half of the change we attribute, on this basis, to each of 725 entities … i.e. to the state the population was in immediately prior to our current and chosen point of interest, which is n = 724 entities.
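The quantum itself is a one-line check against the text’s figures:

```python
# The quantum of Aristotle at the chosen point: µq = µ/2(n + 1), i.e.
# half the per-entity change reckoned over the 725 pre-loss entities.
mu_total = 1.949e-4   # biological potential, watts (from the text)
n = 724               # entities at the chosen point

mu_q = mu_total / (2 * (n + 1))
print(f"{mu_q:.3e} W")  # → 1.344e-07 W
```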
Now that we have a suitable quantum based upon our concession, we define both the “sieve of Aristotle” and the “essential development” or “ED” band. The former is named after the ‘sieve of Eratosthenes’, an ancient algorithm or process in number theory for finding all the prime numbers up to and including a specified integer.
Let us now suppose that a supporter of the proposal of the Aristotelian template lodges the claim that our flowering Brassica rapa plant is proof, by its mere existence, of the validity of the Aristotelian template proposal. By the quantum and the sieve of Aristotle, we can now predict both maximum and minimum trajectories for the masses, the energies, and the population sizes for these entities … if they are in fact following a template. Our sieve of Aristotle—which can be made larger or smaller than 50% as required—now calculates a range, an applicability, and a time scale at each t over T, for how rapidly entities can be sieved out until none of any proposed initial set should remain as possible contenders for the proposal of the Aristotelian template, assuming a maximization of biological potential.
If the Darwinian proposal is correct; and if the plants are indeed going to increase in mass and energy to the maximum possible degree; then their values are eventually going to be great enough to allow them to exceed any proposed constraint, and so be sieved out. If, however, the proposal of the Aristotelian template is instead correct; or if the population abides by any constraint whatever, then the B. rapa entities will remain permanently within either this or some other essential development band. They will follow this or another constraint. They will not be sieved out.
If the proposal of the Aristotelian template holds good then both (a) a given set of entities; and (b) all of its descendants must all live entirely—and only—within the calculated band. That is the nub of that claim. But if—by application of the sieve of Aristotle—it is instead the Darwinian competition model that indeed holds good, then after a given and determinable period of time, which we can certainly calculate, then there will be zero plants left in any such arbitrarily created ED constraint band. We will then know that there is no conceivable constraint applicable. All this has the singular advantage of being eminently measurable and testable … and was again the basis of our experiment.
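The sieving dynamic can be caricatured in a toy simulation. Every parameter below is hypothetical (the real test is the measured Brassica rapa data): entities grow their potential toward the maximum, and any entity exceeding the conceded ED band ceiling is sieved out as incompatible with a template:

```python
import random

# Toy sketch of the sieve of Aristotle; all parameters hypothetical.
# Each entity's potential grows toward the maximum each day. Once it
# exceeds the conceded ED band ceiling, the sieve removes it.

def days_until_all_sieved(n=724, mu_bar=2.688e-7, concession=0.5,
                          daily_gain=0.12, seed=1):
    """Days until every entity has grown beyond the ED band ceiling."""
    rng = random.Random(seed)
    ceiling = mu_bar * (1 + concession)   # µ̄ plus the conceded quantum
    potentials = [mu_bar] * n
    day, remaining = 0, n
    while remaining:
        day += 1
        for i, p in enumerate(potentials):
            if p is None:                 # already sieved out
                continue
            # stochastic Darwinian gain, between 6% and 12% per day
            potentials[i] = p * (1 + daily_gain * (0.5 + 0.5 * rng.random()))
            if potentials[i] > ceiling:
                potentials[i] = None
                remaining -= 1
    return day

print(days_until_all_sieved())  # a handful of days with these toy rates
```

With these toy growth rates every entity must exceed the 50% band within seven days, so the sieve always empties the band quickly; the measured figure of roughly 4.6 days in the experiment below is independent of this sketch.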
The quantum of Aristotle for the flowering stage of our Brassica rapa plants is now very easily calculated. It is equivalent to an energy flux of 1.344 × 10⁻⁷ watts. With this concept of the quantum and how to calculate it in hand we can now determine—by experiment—a value at every t over T. And such is the scale of absorption—through the abundance, C, and the accreativity, Y, of compensatory development, L, and under Darwinian competition—that, even using a sieve as large as we did, which was once again a concession of almost half the available biological potential, it takes on average only around 4.6 days (i.e. just under 13% of the cycle) before the sieve of Aristotle removes all plants from all possible governorship by any abstract template or implied constraint thereof, and as suggested by the quantum of Aristotle. And … if this particular conjectured constraint fails—which it does—then all others will also fail. The simple reality is that a population entirely dependent upon the probundance, γ, and procreativity, ψ, of essential development, λ, is utterly impossible, and can now be measured to demonstrate that impossibility.
It is of course possible to make the concession of the quantum of Aristotle zero. This entirely removes all increases in energy due to ∂n/∂t. And once that much more restrictive sieve is applied, our Brassica rapa plants are removed from the ED band at an even faster rate. It now takes, as an average over the cycle, somewhere in the region of 2.8 days before all surviving plants are sieved out. This is around 8% of the Brassica rapa generation length, which is T = 36 days. There is no template active here. There is only Darwin.
But … with the equipment we have developed, we can even compute what values Brassica rapa should have had at every stage, without any abundance or accreativity, and so without any Darwinian compensatory development, L. Our Gibbs-Duhem and Euler equations make this possible. They each contain the requisite partial differentials that give the values for the probundance and procreativity. It is only necessary to apply the differentials systematically.
The proposal of the Aristotelian template is making the same request that Carnot, Maxwell, Clausius, Carathéodory and the other thermodynamicists made. It is asking for an ideal. Unlike Aristotle, however, Carnot and the others established their subject by creating a rigorous and ideal mathematical cycle that made all issues—including the existence of energy—clear. The adiabatic or “impassable to heat” proposal that forms a part of the cycle definition states a freedom from interference by the environment.
We must now find a way to create a biological cycle free from all influence from the environment. Carnot achieved his purposes by envisioning two heat engines linked together as in Figure 64. Two such perfectly reversible heat engines are engaged in adiabatic processes. They are each attached to a heat reservoir and exchange mechanical work and heat energy directly with those, and with each other. The entirety of the mechanical work done by a first engine is then restoring the other, and vice versa. They each get all the energy they need for that restoration from their attached reservoirs. They do not absorb it from, or surrender it to, the environment. Neither receives nor donates any to that environment. There is only this perfect conversion of heat into mechanical work, and as defined by the first law of thermodynamics. No matter how much work each of these engines might do, neither degrades nor is affected by the environment. That environment quite fails to induce either one of them to change their states in its favour, and these two Carnot engines are isolated from all other systems in the environment. Each therefore follows an ideal process or template. Neither leaves any historical record either upon itself or the environment. This is a zero-work adiabatic process.
We have now followed the early thermodynamicists and made all biological issues equally clear. We have isolated the biological equivalent of a zero-work adiabatic process. No matter how many entities, or how few, a first generation either loses or produces, the proposal of the Aristotelian template insists that this has no effect on its “essential nature”. As with the Carnot system, this requires only a steady and invariant stream of energy from a reservoir of resources. There are going to be no changes. A first population—which previously received all its pertinent attributes and properties from a prior one—passes these on, unchanged, to a second. That transfer occurs irrespective of the environment. We now only need to give it a practical expression, for this is a template.
If we now turn to our vector calculus, then the requirements a biological population must possess to satisfy the above Carnot requirements are exactly those it must satisfy to be a solenoidal and irrotational vector field. In either case, the population’s total energy is determined by only three variables: (a) the difference between its initial and final masses, which establishes its mechanical chemical energy; (b) the configurations to be adopted at each value for mass, which then establish the work rate and the non-mechanical energy; and (c) the numbers present at each stage. We measured the expression of these requirements for Brassica rapa, over a generation, at 5.013 watts, 54.012 darwins and 164.720 watts per gram … the respective constraints of constant propagation, constant size and constant equivalence.
But unfortunately, the three constraints as stated still provide no testable variables. They therefore do not allow for realistic experimentation. However, our unit engenetic Brassica rapa population is, in its own way, a theoretical ideal. It is an equilibrium age distribution population containing B. rapa forms in the precise quantities, and at the exact masses and energies, that create its steady-state equilibrium.
We know by experiment that Brassica rapa’s engenetic burden of fertility ranges from φ = 0.912 to φ = 1.510 seconds per biomole over the cycle. But although the numbers change constantly, they by definition maintain a cycle average of N’ = 1 biomole. And since the numbers first increase and then decrease around that stated average, then there are two occasions at which population numbers are exactly N = 1 biomole or n = 1,000. At those two points we have a numeracy of Q = dN/dt = 1 biomole of entities being processed per second. We also then have—at those two points—a guaranteed steady state situation, with a known number of entities that contribute to an equally known—and attainable—equilibrium. We have our steady numeracy of Q = 1 biomole of B. rapa entities per second and without variation. We need only extend this all over T, and we have our ideal cycle for this species.
Since we now have two points at which the population is in different states and configurations, but has N = 1 biomole regardless, we certainly now have real variables for q, m, and p. We carefully also held experimental conditions constant throughout. So we now build a cycle where there are no relative losses. This is quite irrespective of what they have done, or what has happened to them over the cycle. This is what indifference to the environment demands.
Our two points are adiabatic. They have acted, relative to each other, as if free from the environment, and so with no losses. Their sole difference is that d2N/dt2 is positive on one occasion, and negative on the other. But since Q = 1 biomole per second at both, any and all differences between them must be entirely due to intrinsic properties, and quite independently of all extrinsic factors … once again including any tendency to change in numbers. Therefore: these real values can be regarded as belonging to the same isentropic and isengenic set. We are considerably closer to the “perfect” biological cycle.
We now have known values for the mass and the energy fluxes, as well as for whatever difference in configuration or energy density, ΔV, is needed to reproduce progeny. It is now a trivial mathematical matter to determine all values for m̅ and p̅ for a biological cycle that can meet these stated constraints. They will also state the values that Brassica rapa must have if it is to express freedom from the environment. We can use this ∆V, and the ∆m and ∆p that go with it, to define an isengenic and isentropic population that never changes in its numbers; that expresses complete freedom from the environment; and that can still undertake all the transformations necessary to produce progeny. We can now call this ideal biological cycle the “Franklin cycle”.
The Franklin cycle describes a set of ideal biological entities that are guaranteed completely free from the environment … but that are also based upon real measurements. We can construct a Franklin cycle for any population using a Lagrange multiplier and through a function of the form Γ(n, q, w, λ) = f(n, q, w) + λ(g(n, q, w) − c). This will give the values any entity must demonstrate if it is to produce a potentially viable entity that follows a characteristic template … but that is also guaranteed free from Darwinian competition and evolution.
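The Γ form can be illustrated with a toy constrained optimum. The quadratic f and linear g below are hypothetical stand-ins chosen so the stationary point has a closed form; only the shape Γ = f + λ(g − c) comes from the text:

```python
# Toy illustration of Γ(n, q, w, λ) = f(n, q, w) + λ·(g(n, q, w) − c).
# Here f = -((n-a)² + (q-b)² + (w-c0)²) and g = n + q + w; both are
# hypothetical stand-ins, not the text's actual f and g.

def constrained_stationary_point(a, b, c0, total):
    """Solve ∇f = λ∇g with the constraint g = n + q + w = total.

    The gradient condition gives n-a = q-b = w-c0 = t, and the
    constraint then gives 3t = total - (a + b + c0).
    """
    t = (total - (a + b + c0)) / 3.0
    n, q, w = a + t, b + t, c0 + t
    lam = -2.0 * t      # from -2(n - a) = λ
    return n, q, w, lam

n, q, w, lam = constrained_stationary_point(1.0, 2.0, 3.0, 9.0)
print(n, q, w, lam)  # → 2.0 3.0 4.0 -2.0
```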
We can now and at last grow some real entities. We can then compare the real values we measure, at every stage, to the values an ideal Brassica rapa that follows the Franklin cycle—and that is therefore an exemplar for the proposal of the Aristotelian template—must display. And … that was the basis of our experiment.
The reality is, of course, that just as the ideal thermodynamic cycle described by Carnot, Clausius and Carathéodory is impossible, so also is this ideal Franklin cycle impossible. Outside the two carefully chosen critical points, then as we have already proven, it is mathematically impossible to find values for a Franklin cycle that have any hope of matching the real values displayed by the entities in any real population. And … the plants we grew did not follow this Franklin cycle.
Just like Galileo’s perpetual motion, the Franklin cycle is instead a limit. It states the average values around which individuals and populations will oscillate as they contend with the environment. The Franklin cycle’s virtue therefore lies in indicating the unit normals: i.e. the directions in which the individuals in given sets of entities must move, and how they must do work, if they wish to produce progeny and to maintain the population. Just as Galileo and Newton defined mechanical inertia as the failure to display a frictionless and perpetual motion, so also is a population’s failure to follow the Franklin cycle now a measure of its Darwinian fitness, its Darwinian competition, and its Darwinian evolution. And since we measured that failure, so also did we measure the stated properties.
We can now calculate Brassica rapa’s population and generational totals for mass and energy—i.e. with all number variations abstracted, and as given by the dU and dH of the Gibbs-Duhem equation, and by the (∂S/∂U)V,{Ni} dU and (∂S/∂V)U,{Ni} dV of the Euler one. These state B. rapa’s essential development, λ, over the generation as α = 3.08 grams of probundance, and ψ = 52.787 joules of procreativity, each per the biomole. That is what we should measure if B. rapa is indeed free from all environmental influence.
Table 1 shows that we in fact measured 3.28 grams of biomass and 54.012 joules of energy per the biomole for Brassica rapa. The difference between these measured values and the ideal ones is B. rapa’s compensatory development, L, which can also be calculated from the equations. We measured C = 0.19 grams of abundance and Y = 1.225 joules of accreativity. This is mass and energy the plants expended in response to ∂n/∂t … and that they expended for no other reason than the force imposed by a directional derivative in response to changes in numbers, all across the cycle.
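A quick arithmetic check, using the rounded figures as printed (the text’s C = 0.19 grams presumably reflects unrounded measurements):

```python
# Compensatory development, L, as measured totals minus the ideal
# Franklin-cycle totals, per biomole. Values as printed in the text.
measured_mass_g, ideal_mass_g = 3.28, 3.08
measured_energy_j, ideal_energy_j = 54.012, 52.787

abundance_C = measured_mass_g - ideal_mass_g         # mass via ∂n/∂t
accreativity_Y = measured_energy_j - ideal_energy_j  # energy via ∂n/∂t
print(f"C ≈ {abundance_C:.2f} g, Y = {accreativity_Y:.3f} J")
# → C ≈ 0.20 g, Y = 1.225 J
```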
We have now achieved what we set out to achieve:
• We have provided an Euler equation and shown that it accurately and comprehensively describes all biological behaviour.
• We have measured Brassica rapa’s abundance, being the sum of all the masses taken on entirely in response to ∂n/∂t, the losses imposed by the environment, and whose measure is given by our Maxim 1 of ecology. We have also shown that this cannot be zero. A zero value has both a reproductive and a metabolic requirement. (A) It requires that every seed, egg and sperm successfully attain maturity, i.e. without loss, and that each one also produce viable progeny, which progeny must then do the same. In other words, the engenetic burden of fertility, φ, must at all times be unity and must never deviate. This is impossible, for it means, for example, that there can be no predators and no prey. (B) It also requires metabolic perfection, in that every entity must navigate its way to maturity without the loss of a single cell or limb, and must be free from disease. This is the biological equivalent of the Carnot cycle or Galileo’s rolling ball, and it is again impossible.
• We measured B. rapa’s accreativity at 1.285 joules, being the sum over all the population of all the energy taken on by all survivors in response to ∂n/∂t, and again according to Maxim 1 of ecology. For the same reasons, it cannot be zero for any population.
• We have demonstrated that the variable µ is a true biological potential, measurable in watts. It measures the rate at which a population or entity’s work rate, W, and its engeny, S, are changing. By Maxim 4 of ecology, which is the maxim of apportionment, this biological potential then determines how an entity and a population’s energy is apportioned across mass, energy, and number, and in a fashion determined both by its own characteristics and those of the population of which it is a part.
• We have further demonstrated that biological potential always strives for the maximum possible value, and that it cannot therefore be constrained or restrained in any manner, particularly not with respect to numbers, as all theories opposed to Darwin propose. By the second law of thermodynamics, all systems of energy strive to increase entropy to the maximum possible value; for biological entities and populations, this striving is simply to increase their work rates and engenies to the maximum possible values consistent both with the environment and with the entity density of others around them. The loss of entities is the sole cause of the increase in engeny displayed by any and all populations and entities that are subject to the four laws, the four maxims, and the three constraints we have outlined. There is no other cause for variations amongst populations and entities, and there is therefore no other cause for species.