35. Natural selection is put on the line

The words ‘natural selection’ are easy enough to say … but they come with their own etymological baggage. They invite fallacies, for their different meanings can be, and have been, exploited. Our intention is to find an equation that not only summarizes natural selection, but that also clarifies its mechanism. Until then, however, we accept the following as a general indicator of its effects, and therefore as a good working definition:

… natural selection was also the part of evolutionary theory considered most revolutionary in Darwin’s time, and it is still unsettling to many. Selection is both revolutionary and disturbing for the same reason: it explains apparent design in nature by a purely materialistic process that doesn’t require creation or guidance by supernatural forces.

The idea of natural selection is not hard to grasp. If individuals within a species differ genetically from one another, and some of those differences affect an individual’s ability to survive and reproduce in its environment, then in the next generation the “good” genes that lead to higher survival and reproduction will have relatively more copies than the “not so good” genes. Over time, the population will gradually become more and more suited to its environment as helpful mutations arise and spread through the population, while deleterious ones are weeded out. Ultimately, this process produces organisms that are well adapted to their habitats and way of life (Coyne, 2009, p. 29).

Figure 33: The Biot-Savart Law

What is also “unsettling” is how diversely natural selection is understood. We intend to call to our aid, in defining it and making it rigorous, the Liouville theorem and the Biot-Savart law, amongst the most rigorous and demanding in all of science. The Liouville theorem states that the ‘Hamiltonian’ of a system, effectively the sum of its potential and its kinetic energies however these are expressed, will remain constant no matter what permutations that system is taken through.
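As rendered here, with Ek the kinetic and Ep the potential energy of the system, the claim reads:

```latex
H \;=\; E_k + E_p, \qquad \frac{dH}{dt} \;=\; 0 .
```

That is, the Hamiltonian is conserved as the system is carried through its permutations.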

The Biot-Savart law is somewhat more complex. As in Figure 33, it in essence states that the intensity of a field decreases with the square of its distance from the line-element-of-current or given current-segment generating it. This is not the same as the point-charge we have already met. A current-element or line-segment-of-current has both (a) a sense of direction; and (b) a rate at which it flows. A point-charge has neither. Any field induced by such a flowing current element will be strongest at right angles to that generating flow or current, and its evaluation requires the vector cross product of the current’s direction with the radius vector. Since a flowing element of this kind cannot exist at a single point, a derivative must be taken. Integrals must also be taken around its boundary to determine it, or else along whatever may be the path of the point-current-elements producing the field of interest (Cooper, 1968, p. 210; Fleisch, 2008, p. 47).
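The law as summarized above can be written in its standard form, with μ0 the permeability constant, I dl the current element, r̂ the unit vector from the element to the field point, and r the distance between them:

```latex
d\mathbf{B} \;=\; \frac{\mu_0}{4\pi}\,\frac{I\,d\mathbf{l}\times\hat{\mathbf{r}}}{r^{2}} .
```

The cross product makes the field strongest at right angles to the generating current, while the 1/r² factor gives the inverse-square fall-off with distance described above.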

What the Liouville theorem and the Biot-Savart law together declare is that if we observe a relevant “flow”, we can immediately deduce a point of generation; a direction for that generation; a magnitude to or from that point of generation; and an entire neighbourhood of both associated points and current elements of flow. We shall take these results garnered from combining biology and ecology with the Liouville theorem and the Biot-Savart law, and add them to the Helmholtz decomposition theorem of the vector calculus, which in its turn states that any given vector field, including any generated by any flow, is itself generated by a scalar potential, φ, contributing the gradient term −∇φ, and a vector potential, U, contributing the curl term ∇ × U.
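In its standard form, the Helmholtz decomposition states that a sufficiently well-behaved vector field F splits into those two contributions:

```latex
\mathbf{F} \;=\; -\nabla\varphi \;+\; \nabla\times\mathbf{U} ,
```

the gradient of the scalar potential φ plus the curl of the vector potential U.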

Granted these tight mathematical conditions we impose on our view of natural selection, all verbal renditions are by their very nature imprecise if not inaccurate. We nevertheless add the following to the above definition from Coyne, for although both are rendered entirely verbally, they probably come the closest to establishing the general framework for natural selection:

Natural selection starts with two observations:

  1. There is a vast overproduction of new individuals in nature. Every organism produces many more offspring (or eggs or seeds) than will survive to reproduce themselves, as anyone who walks through the woods can see.
  2. There is a great amount of variation between individuals, which a casual observer may not see. All zebra foals or bullfrog tadpoles may look alike at first glance, but a naturalist who spends years studying them is struck by the wide range of variability within the same species.

In each generation, many individuals will not reproduce. Selection pressures may include predators, climate, other members of their own social group, competition for space, food or mates, parasites and disease. The popular notion that “survival of the fittest” means simply that strong “winners” kill off weak “losers” is ideology, not biology.

Creation of new life forms, according to current theory, requires only a source of genetic variation and a “sorting sieve”, which lets some alleles through to the next generation and blocks others. (An allele is a variant of a gene) (Milner, 1990, p. 319).

Howsoever we choose to define it, natural selection arises as biological entities undertake specified chemical interactions in order both to maintain themselves and to create others of their kind. These interactions are in their turn manifestations of their Gibbs energies: of their abilities to acquire chemical components, do work on them, and so transform and reconfigure a specified mass of molecular biological matter. This is in its turn a property and a demonstration of their DNA. Biology is therefore intimately connected to both chemistry and to energy. It requires a given set of chemical interactions that can deliver a given quantity of energy to attain and maintain a given equilibrium age distribution. That energy again arises from the Gibbs and the Helmholtz energies. This is the visible presence, V, and the reproductive potential, A, both of which are measurable … and both of which biological entities exploit. If we are going to locate a force for natural selection, then it must be somewhere in these commodities.

Of course … this immediately depends upon a further assumption: that molecules can in themselves maintain an unchanging condition so that all events they give rise to, biological or non-biological, can be measured against them. In other words, molecules must not themselves evolve. They must not themselves show variation through heredity. But this is exactly how Maxwell described them in the article he provided for the 9th edition of the Encyclopaedia Britannica:

It is well known that living beings may be grouped into a certain number of species, defined with more or less precision, and that it is difficult or impossible to find a series of individuals forming the links of a continuous chain between one species and another.

In the case of living beings, however, the generation of individuals is always going on, each individual differing more or less from its parent. Each individual during its whole life is undergoing modification, and it either survives and propagates its species, or dies early, accordingly as it is more or less adapted to the circumstances of its environment. Hence, it has been found possible to frame a theory of the distribution of organisms into species by means of generation, variation, and discriminative destruction.

But a theory of evolution of this kind cannot be applied to the case of molecules, for the individual molecules neither are born nor die, they have neither parents nor offspring, and so far from being modified by their environment, we find that two molecules of the same kind, say of hydrogen, have the same properties, though one has been compounded with carbon and buried in the earth as coal for untold ages, while the other had been “occluded” in the iron of a meteorite, and after unknown wanderings in the heavens has at last fallen into the hands of some terrestrial chemist (Maxwell, 1875, Vol. III, p. 48).

Since, by all the laws of science, molecules do not change their state no matter how many transformations they undergo, natural selection can be measured against them. Molecules thus become an absolute reference point.

An equilibrium age distribution in its turn depends upon a specified number of biological entities using a specified number of these components, formulated as their resources, as made available to them both by their own distinct capabilities and by the environment. The net value for those components attained is the equilibrium environment parameter, ψ*, for that population, which we have already determined. It is, in essence, the mechanical listing of all the chemical components and resources required by the population. These are also molecules, and they establish the growth, the physiology, the development, and all the metabolic rates over all the entities over T.

Biological entities must first use mechanical energy to acquire all components. They must then use nonmechanical energy to arrange them. But we cannot yet assess magnitudes because at this point we have only statements for the relative abundances, and the relative distributions, of entities and their constitutive components. Before we can know any absolute amounts, in either moles or kilogrammes, for the components needed for a distribution, we must specify either a value for f(0), the initial population size, or else a value for a specified number of entities at some other point in the generation length. As soon as n is specified at any one t, the entire distribution over all entities is also specified. The constitution of our line segment of molecules (which is the environmental parameter, ψ*) is then also known.

If volume is simply a count of entities, then the molecular constitution of those entities clearly contributes to that volume’s character in terms of the specific energy and the energy and mass densities. We must therefore ensure that there is no possibility for an etymological fallacy regarding the word “amount”, for the molecular commodities concerned are measured in “amount of substance”.

Just as volume has been extracted from ordinary language and given a specific meaning in science, so also with “amount”. It refers directly to both the mass and the numbers of whatever given elementary particles constitute whatever substance is under debate. The values for the reaction enthalpies of the Gibbs energy—and for the Helmholtz energy where relevant—are stated in amount of substance. This is a dimensionally independent physical variable of specified magnitude. It is essential to the measuring and treating of chemical and biochemical reactions. It is a function of a fundamental constant of nature—the Avogadro constant, NA—which defines the mole; with moles then being proportional to the number of components, of whatever kind, contained within whatever system is under investigation:

The quantity used by chemists to specify the amount of chemical elements or compounds is now called “amount of substance”. Amount of substance is defined to be proportional to the number of specified elementary entities in a sample, the proportionality constant being a universal constant which is the same for all samples.

The unit of amount of substance is called the mole, symbol mol, and the mole is defined by specifying the mass of carbon 12 that constitutes one mole of carbon 12 atoms. By international agreement this was fixed at 0.012 kg, i.e. 12 g.

Following proposals by the IUPAP, the IUPAC, and the ISO, the CIPM gave a definition of the mole in 1967 and confirmed it in 1969. This was adopted by the 14th CGPM (1971, Resolution 3; CR, 78 and Metrologia, 1972, 8, 36):

  1. The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12; its symbol is “mol”.
  2. When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles.

It follows that the molar mass of carbon 12 is exactly 12 grams per mole, M(12C) = 12 g/mol.

In 1980 the CIPM approved the report of the CCU (1980) which specified that

In this definition, it is understood that unbound atoms of carbon 12, at rest and in their ground state, are referred to.

The definition of the mole also determines the value of the universal constant that relates the number of entities to amount of substance for any sample. This constant is called the Avogadro constant, symbol NA or L. If N(X) denotes the number of entities X in a specified sample, and if n(X) denotes the amount of substance of entities X in the same sample, the relation is

n(X) = N(X)/NA.

Note that since N(X) is dimensionless, and n(X) has the SI unit mole, the Avogadro constant has the coherent SI unit reciprocal mole.

In the name “amount of substance”, the words “of substance” could for simplicity be replaced by words to specify the substance concerned in any particular application, so that one may, for example, talk of “amount of hydrogen chloride, HCl”, or “amount of benzene, C6H6”. It is important to always give a precise specification of the entity involved (as emphasized in the second sentence of the definition of the mole) …. Although the word “amount” has a more general dictionary definition, this abbreviation of the full name “amount of substance” may be used for brevity (BIPM 2006).

The Avogadro constant is thus the invariant constant of proportionality linking amount of substance to particle numbers, and so to molecular configuration and composition. It is a vital ingredient in that once given a specified amount of substance in moles, it is possible to distinguish chemical elements and the substances they form simply by their values for mass. That is to say … amount of substance is the scientific way to reference mass and molecular configuration and composition. With the Avogadro number in hand, mass is an immediate reference to chemical composition for we can freely interchange between the one and the other. This holds even when those molecules are encased in a given biological entity and are thus undergoing natural selection. Mass can therefore assist us in deriving a value for both natural selection and biological inertia.
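The interchange between particle count, amount of substance, and mass described above can be sketched as follows. This is a minimal illustration, using the Avogadro value as this text gives it (the modern defined value differs slightly in its later digits); the helper function names are ours:

```python
# Sketch: moving between particle count, amount of substance, and mass
# via the Avogadro constant, as in the BIPM definitions quoted above.

N_A = 6.0221367e23          # Avogadro constant, mol^-1 (value used in this text)

def amount_of_substance(N):
    """n(X) = N(X)/N_A: amount in moles from a count of entities N(X)."""
    return N / N_A

def mass_in_grams(n, molar_mass):
    """Mass from amount n (mol) and molar mass (g/mol)."""
    return n * molar_mass

# One mole of carbon 12 atoms is N_A atoms and, by definition, 12 g:
n_C12 = amount_of_substance(N_A)        # = 1.0 mol
m_C12 = mass_in_grams(n_C12, 12.0)      # = 12.0 g
```

Given a mass and a molar mass, the conversion runs equally well in the other direction, which is the free interchange between mass and chemical composition invoked above.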

The energy for the total work done by DNA is a given quantity of Helmholtz energy. Thus to state—in moles—the quantity of chemical components present in the equilibrium age distribution population as the entities concerned circulate a mass of chemical components about themselves to complete their cycle, is also to state that they have a unique chemical configuration or composition that can be stated in kilogrammes. It is further to state that this unique chemical configuration and composition is in principle determinable—along with the associated Gibbs and Helmholtz energies—using nothing more than the standard techniques of analytical chemistry. No other components nor configurations nor compositions in mass could create that specific distribution—which thus points straight at the molecules composing their DNA, again along with the associated Gibbs and Helmholtz energies. That is the meaning and import of the Avogadro constant.

This is not, however, enough. Since biological entities incorporate chemical components into themselves, we need some way to reference the amount of substance both (a) within individual entities; and (b) within populations. Since both the number of entities in a population and the number of molecules of which they are composed are critical to biology and to ecology, we now establish the “Franklin constant”, NF, for use in biology. Like the Franklin factor and the Franklin energy, the Franklin constant is named after Rosalind Franklin, who did the essential crystallographic work that enabled Watson and Crick to unravel DNA. The Franklin constant has a value of NF = 6.022 136 7 × 10²⁰ biomole⁻¹.

We define the Franklin constant in full analogy with the above BIPM procedure for defining the Avogadro constant. We define it such that if N(X) denotes the number of biological entities, X, in a specified population, and if nb(X) denotes the biomoles of entities, X, in the same population, then the relation between the two is:

nb(X) = N(X)/NF.

And we further say that if N(Q) denotes the number of components, Q, held by a given set of entities, and if nm(Q) denotes the amount of substance in moles of those components, Q, held by those entities then the relation:

nb(X) = N(X)/NF = nm(Q) = N(Q)/NA

holds. When the Franklin and Avogadro constants are brought together, we therefore have a direct particle count of all the chemical components contained within the entities in that given population. When the Avogadro constant, NA, is divided by the Franklin constant, NF, we get the 1,000 entities we have already defined as one biomole. Scaling with the Franklin constant, NF, simply allows more biologically relevant population sizes, such as 1,000, to be taken, but without losing the connection to mass, to molecules, to entropy, to energy, and to configuration energy implied in the Avogadro constant.
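The biomole bookkeeping just defined can be sketched as follows. This uses the Avogadro and Franklin constants exactly as this text gives them; the function names are ours:

```python
# Sketch of the biomole bookkeeping defined above, using the Avogadro
# and Franklin constants as this text defines them.

N_A = 6.0221367e23          # Avogadro constant, mol^-1
N_F = 6.0221367e20          # Franklin constant, biomole^-1

def biomoles_of_entities(N_X):
    """n_b(X) = N(X)/N_F: biomoles from a count of biological entities."""
    return N_X / N_F

def moles_of_components(N_Q):
    """n_m(Q) = N(Q)/N_A: moles from a count of chemical components."""
    return N_Q / N_A

# The two constants differ by exactly the 1,000-fold scaling that ties
# one biomole to the 1,000-entity average population used in this text:
scaling = N_A / N_F         # = 1,000
```

The scaling factor confirms the relation stated above: dividing the Avogadro constant by the Franklin constant recovers the 1,000 entities of one biomole.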

The effect of the biomole is that the numbers given in Table 1 for our Brassica rapa age distribution population now immediately also state the molecular and chemical configurations and compositions—and so the Gibbs and the Helmholtz energies—of the diverse states in which its various plants can be found as they move across the cycle of the generations to establish that equilibrium. Their entire chemical compositions, configurations, and behaviours, under energy, have been referenced. Their average number over the entire generation is exactly one biomole: n’ = 1,000 entities = N’ = 1 biomole. This scaling by the Franklin constant allows an average of 1,000 biological entities to be taken over a given biological cycle, while immediately retaining the scientific properties and attributes of the Avogadro constant as concerns all molecular movements and their entropies and configurations, and all without loss of generality.

By the first law of biology, which is the law of existence, biological entities must constantly do work. This is equivalent to them moving constantly along our line segment, and thereby doing work upon the molecules they pass over per each unit of time. And as entities and time “flow” over the molecular components on our line segment, then a “biologically charged” entity is “moving” at a specified rate and velocity over each t and towards T. The entity is “moving over” those components, arranged along the line segment; and it is using them to compose itself as a biological entity. If we now regard that “biological charge” granted by those molecules as having a magnitude of q moles per each second for that given entity; and if we consider that “forward charge” upon that line to be a part of a “flow of current” of biological component energy; and if we have n such entities; and if we consider those n entities to each hold an average of q̅ moles of chemical components; then that flow along our line segment is simply an average of dq̅/dt moles per second per entity. It gives a time rate of processing for those components per each entity. We have here a specified flowing chemical force of natural selection.

We can now further clarify the dynamic variable, Q, we earlier introduced, and whose intent was to match the stationary population count, n. We first use the Franklin constant and express n in biomoles as N. We then set Q = N(dq̅/dt). We now have a total rate of processing, in both moles and biomoles per second, of the chemical components on our line segment. That processing is expressed through our entities in biomoles per second for that population. This Q states the biomoles of distinct N sets of moles of components over the population. So where N simply states the number of entities, in biomoles, present at any t upon our line segment, Q now refers to the active sets of such processing units extant, and it declares the number of such distinct sets per second, and also the work they do, and the force they exert. It is the number of biological entities of whatever kind as are being actively maintained per each second at that t, and so therefore the number of moles, q, of chemical components actively being processed by those biomoles of entities along our vector line segment.
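The definition Q = N(dq̅/dt) can be illustrated numerically. All the particular figures below (entity count, processing rate) are invented for illustration only; only the relations come from the text:

```python
# Hypothetical illustration of the dynamic variable Q = N(dq̄/dt):
# N is the population expressed in biomoles via the Franklin constant,
# and dq̄/dt is the average moles of chemical components each entity
# processes per second. All numerical inputs are invented.

N_F = 6.0221367e20          # Franklin constant, biomole^-1

n_entities = 1000           # assumed entities on the line segment at time t
dq_dt = 2.5e-9              # assumed processing rate, mol per entity per second

N = n_entities / N_F        # the population expressed in biomoles
Q = N * dq_dt               # total processing rate for the population

# The same flow restated for the whole population in moles per second:
total_moles_per_second = n_entities * dq_dt
```

The last line shows the interchange the text relies on: the same processing can be read per biomole of entities, or as a plain total in moles per second.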

This Q biomoles of biological entities now establishes a flow of chemical components along our line segment which is a “linear charge density” or “time processing density” for those components. There is a density or quantity of components at each t. It is reflective of the biological charge density of the mass of components flowing along our line segment as t moves to T. This biological current of a mass of chemical processing, at this given density both per entity and per the population, now “passes by” a given location, t. This simply means that if we consider a time point, t, and an infinitesimal time span of Δt stretching away from there, then there will immediately be a given quantity of chemical processing of a specified number of components, both for each entity and for the population, that passes over that infinitesimal distance, as if at a velocity for that density. We can always and immediately recover the total number of precisely specified components used by that population in moles, and over their entire generation. This line segment of molecules thus reflects, and gives specificity to, the constraint of constant propagation in the energy required to bind those components. If natural selection now exists in any biological population or entity, we shall certainly locate it in some specified and specifiable collection of molecules at some t over T, for we have access to them all. Natural selection is now a definite and identifiable force of nature.
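The claim that the total components used over a generation can always be recovered amounts to summing the processing rate along the line segment from t = 0 to T. A minimal sketch, in which the generation length and the (constant) rate profile are invented placeholders:

```python
# Sketch: recovering the total moles of components a population uses over
# one generation by summing the processing rate along the line segment.
# The generation length T and the rate profile are assumed values.

T = 100.0                   # assumed generation length, seconds
steps = 10_000
dt = T / steps

def rate(t):
    """Assumed total processing rate in mol/s at time t (placeholder)."""
    return 2.5e-6           # constant, for this illustration only

total_moles = sum(rate(i * dt) * dt for i in range(steps))
# With this constant rate the total is rate * T = 2.5e-4 mol; a varying
# rate profile would be integrated in exactly the same way.
```

Any measurable rate profile over t can replace the placeholder, and the sum still returns the precisely specified total in moles for the generation.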