51. Conclusion: A general theory of biology
We have now met two of the three challenges we laid out in Prolegomenon II. We have produced:
• a fully general, precise, abstract, fundamental mathematical theory for biology;
• universal laws, maxims, and constraints to cover ecology, evolution, and biology, and that are as powerful and predictive as any in any other area of science.
Before we can meet our third and last challenge—producing a quantum and relativistic theory for biology complete with such characteristic quantum traits as uncertainty and wave-particle duality—we must ensure that we avoid all traces of the etymological fallacy. We must first clarify important terms from the quantum and relativity theories; properly understand their basic principles; and establish the significance to biology of quantum physics’ great discovery that microscopic particles are no more than fields of probability. We must also clarify the speed at which biological entities reproduce, which we have expressed as Z seconds per biomole, and understand all relativistic implications.
The first quantum principle, which we will set aside temporarily, is that all energy at the discrete atomic scale is dispensed in the quantum-sized units first discovered by Max Planck, and now immortalized as the Planck constant, the fundamental unit of action, h.
The second quantum principle is the wave-particle duality that was formally established by Erwin Schrödinger towards the end of 1925, shortly after he had visited Peter Debye’s laboratory in Switzerland (Mehra and Rechenberg, 2002, Vol V, Part 2, p. 420). Schrödinger and Debye were both familiar with Louis de Broglie’s 1923 proposal that the Planck constant could be used to convert any particle’s momentum into its associated quantum wavelength. The electron—until then conceived of as a strictly material particle—could therefore also be regarded as a wave. Schrödinger had initially been dismissive, but Debye was interested enough to ask Schrödinger to lead a symposium investigating the idea. Schrödinger grew increasingly impressed with de Broglie’s then bold conception of wave-particle duality. A few months later he produced the Schrödinger wave equation which describes how a physical system’s quantum state changes over time. It gave de Broglie matter waves a power and a comprehensiveness equal to Newton’s laws of motion.
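De Broglie’s proposal can be stated in a few lines of code. The sketch below is illustrative rather than from the text: it applies the standard relation λ = h/p to an electron, with the speed chosen arbitrarily for the example.

```python
# Illustrative sketch: de Broglie's relation lambda = h / p converts
# a particle's momentum into its associated quantum wavelength.
# Constants are the standard physical values; the speed is an arbitrary example.

H = 6.626e-34              # Planck constant, J*s
ELECTRON_MASS = 9.109e-31  # electron rest mass, kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Return the de Broglie wavelength (in metres) for the given mass and speed."""
    momentum = mass_kg * speed_m_s
    return H / momentum

# An electron at 1e6 m/s has a wavelength of roughly 0.7 nanometres --
# comparable to atomic dimensions, which is why its wave nature matters there.
wavelength = de_broglie_wavelength(ELECTRON_MASS, 1.0e6)
```

The same function applied to any macroscopic object (a thrown ball, say) returns a wavelength so small that wave behaviour is undetectable, which is why the duality only asserts itself at the atomic scale.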
Since classical physics gives a wave’s energy as E = 2π²ν²μA²λ, Schrödinger and the other early quantum theorists had temporary difficulties interpreting the terms in his new equation to match those of the classical one. The standard way of presenting the ultimate resolution is to conduct a thought experiment.
We imagine ourselves doing two things simultaneously:
• We hold a piece of string at one end and tie the other to some object. We can now propagate a wave by shaking the string. Its length determines the resulting wavelength.
• In concert with the above string, we imagine a particle confined to a straight one-dimensional track. It travels between the two ends.
As our wave propagates, the associated particle bounces back and forth between the track’s two ends. The faster we shake our string the more waves we can fit in, and the faster the particle moves on its one-dimensional track. If we double the number of waves upon our string, we double the number of end-to-end bouncings by our particle on its track, and so it doubles its speed.
Schrödinger and the early quantum investigators soon noted that the string’s wavelength, λ, shortens as we fit more waves in. The particle must then bounce from end to end faster to reflect each of these increased numbers of waves. They rapidly determined that this corresponds directly to the particle’s momentum, p, for it moves more quickly back and forth to match. They also noted that the more rapidly we shake the string, and the shorter the wavelength becomes, the greater grows its frequency, ν, and the more energetically the particle buzzes back and forth on its track. As it moves more quickly it increases its kinetic energy and transports more energy through each volume element, ΔV, per unit of time. And since the medium in which the accompanying wave is being formed is oscillating more frequently, it transports more energy per oscillation. The more rapidly the string shakes, and the more rapidly the particle moves, the greater is the number of energetic transformations that each undertakes in each unit of time in each volume element. These energy levels and behaviours change all the more rapidly per unit of time, within each ΔV, as the two rise from zero, at each of their ends, to a steadily increasing maximum in their middles, and as the number of shakings and bouncings jointly increases. Thus frequency, ν, matches kinetic energy, Ek. If we are to produce a fully quantum biology, then we need to replicate this aspect of quantum physics.
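The string-and-track argument can be checked numerically. The sketch below assumes the textbook particle-in-a-box relations, which the text invokes only implicitly: fitting n half-waves onto a track of length L gives wavelength λ = 2L/n, momentum p = h/λ, and kinetic energy Ek = p²/2m. The mass and track length are illustrative values.

```python
# A minimal numerical sketch of the string-and-particle thought experiment,
# assuming standard particle-in-a-box relations (not derived in the text).

H = 6.626e-34   # Planck constant, J*s
M = 9.109e-31   # illustrative particle mass (an electron), kg
L = 1.0e-9      # illustrative track length, m

def momentum(n):
    """Momentum when n half-waves fit on the track: shorter wavelength, higher p."""
    wavelength = 2.0 * L / n
    return H / wavelength

def kinetic_energy(n):
    """Kinetic energy of the bouncing particle, Ek = p^2 / 2m."""
    p = momentum(n)
    return p * p / (2.0 * M)

# Doubling the number of waves doubles the momentum (the particle bounces
# twice as fast) and quadruples its kinetic energy.
p1, p2 = momentum(1), momentum(2)
e1, e2 = kinetic_energy(1), kinetic_energy(2)
```

This reproduces the text’s claim directly: twice the waves means twice the end-to-end bouncings and twice the speed, with the energy growing as the square.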
The third fundamental quantum principle is the ‘uncertainty’ first formally proposed by Werner Heisenberg in 1927 (Heisenberg, 1930). Heisenberg’s quantum uncertainty—which is distinct from quantum physics’ famous ‘observer problem’—now states that since neither the wave nor the particle is ever stationary, they are always passing through some given volume element, ΔV. The more accurately we want to know each of their properties, the smaller must be the volume element to which we try, for the purposes of measurement, to confine them (Marshall and Zohar, 1997).
Quantum principles are inapplicable at the macroscopic scale, but it is at those very scales that they are most easily understood. So when—as an example—we try to “trap” both a Ferrari and an antique Model-T Ford to a specific location by photographing them right beside a mile-marker, we might now know exactly where they both are … but since the photograph has immediately fossilized only the one moment, it gives us no idea how fast they were each travelling when the photograph was taken. In fact, the only way to ensure they are both in a precise location, and so that we have an extremely high-quality photograph, is to make them both stationary and to “pose” them right by that mile-marker. But since the photograph is static, this eliminates their entire velocity, and therefore their momentum.
Now that we have understood and applied the concept macroscopically, we return to the microscopic domains and to the high-speed particles to which the Heisenberg uncertainty is more directly relevant. The uncertainty principle says that the smaller the volume element we use to trap a particle, the more likely it is to have moved elsewhere and vacated that element, for the shorter is the time interval we are using. We cannot take an accurate measurement of our particle … or “get a good photograph”. But should we indeed trap any particle within a given volume element, then just as with the two cars placed millimetrically accurately and posed for a good photograph by our mile post, we will immediately lose all knowledge of that particle’s momentum. Momentum and position thus form a complementary conjugate pair of observables.
Energy and time form a similar conjugate pair of observables because energy is always moving through volume elements, no matter how small the timescale we use. If we get an accurate reading for energy then we do not know exactly at what moment, or over what time span, Δt, it was taken; whereas if we specify a long enough timespan over which to try to get an accurate measure, then the energy is so busy developing across it that we can do no more than get an indication of some kind of “weighted average”. The Heisenberg uncertainty principle states that fixing one member of any such quantum-based microscopic pair of conjugate observables always increases the uncertainty with which the other is known. A quantum biology similarly asks that the greater the accuracy of our knowledge regarding one aspect of biological populations, the greater must be our uncertainty about the other.
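The position-momentum trade-off has a standard quantitative form, Δx·Δp ≥ ħ/2, which the text states only qualitatively. The sketch below uses that conventional modern inequality; the confinement widths are arbitrary illustrative values.

```python
# A hedged sketch of the Heisenberg trade-off, using the standard modern
# inequality delta_x * delta_p >= hbar / 2 (the text gives the principle
# qualitatively; the factor of 1/2 is the conventional form).

HBAR = 1.055e-34  # reduced Planck constant, h / (2*pi), J*s

def min_momentum_uncertainty(delta_x):
    """Smallest possible momentum spread once position is confined to delta_x."""
    return HBAR / (2.0 * delta_x)

# Confining a particle to a volume element half as wide doubles the minimum
# uncertainty in its momentum: fixing one conjugate blurs the other.
dp_wide = min_momentum_uncertainty(1.0e-9)
dp_narrow = min_momentum_uncertainty(0.5e-9)
```

The inverse proportionality is the whole point: there is no choice of Δx that shrinks both members of the conjugate pair at once.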
The fourth fundamental quantum principle, the ‘principle of complementarity’, was introduced by Niels Bohr in a paper he released in 1927 at the International Physical Congress held in Como, Italy, shortly after Heisenberg released his above paper (Bohr, 1928, pp. 580–590). As commonly understood, this philosophical motif—quantum physics’ most characteristic trait—has two closely associated aspects:
• The first covers the wave-particle duality introduced by the de Broglie matter wave and the Schrödinger wave equation.
• The second covers the Heisenberg uncertainty paradox: physical reality seems to have multiple but mutually contradictory pairs of properties that cannot be captured simultaneously in a single measurement. It may be possible, in successive situations, to switch around which one we want to measure, but as soon as we select one, its conjugate collapses or vanishes into uncertainty. The restrictions on the measurements that can be made in such microscopic situations result from the trade-offs between these complementary quantum conjugates, and are based upon the Planck constant, h.
Quantum complementarity is often taken to include ‘the observer effect’. In order to see how it applies to biology, we must combine it with relativity. It is also first necessary to understand the Biot-Savart law as depicted in Figure 65.
Point magnetic charges do not exist. Only magnets with conjoined, but opposing, north-south poles exist. They therefore cannot generate magnetostatic fields similar to electrical ones. Magnet-couples—i.e. with opposing north-south magnetic poles—can only produce the kind of circular field lines depicted in Figure 17. Those field lines do not radiate. They are obliged to pass straight through the two poles that generate them.
This behaviour of coupled magnetic poles should again be compared to Pasteur’s proof that independently sourced biological organisms arising entirely from some inorganic matter do not exist. Once again, all biological entities diverge from others like themselves, and every biological entity has a progenitor.
Confusingly, the left-hand graphic in Figure 65 shows the magnetostatic field M1(L1). It appears to emerge, as if like an electric field, from a “discrete magnetic charge” ostensibly located at L1. However, magnetostatic fields of this kind can only be generated by moving electric charges for, once again, magnetic monopoles do not exist. Nevertheless, the point-magnet-like effect that the apparent magnetostatic field M1(L1) has on the point P1 can soon be determined from the Maxwell field equations—but only if a current of appropriate magnitude flows along that wire.
In the same way, an individual biological organism’s effect on the environment can always be independently determined … but that organism is always and only a part of a biological cycle moving between progenitor and progeny.
There is an immediate contrast between the left-hand and middle graphics in Figure 65. The middle one shows an idealized straight and infinitely long wire, with a long succession of point-electric charges. Each charge is stationary and can therefore generate its independent electrostatic field. Charges E2 and E3, located at L2 and L3 respectively, establish the electrostatic fields E2(L2) and E3(L3) about themselves. The Maxwell equations immediately allow us to calculate the intensity of the force each one exerts at any arbitrary point in space away from them, as at points P2 and P3. Each point—and each charge—is of course located in the other’s field. The two therefore exert electrostatic forces upon all points, as well as upon each other. These interrelated effects are soon determined.
Relativity is all about the effects of motion. The middle graphic in Figure 65 makes it clear that if we have a collection of stationary electrical charges; and if we are ourselves stationary; then there is no magnetic field. As in the left-hand graphic, however, by the Biot-Savart law if we are stationary while charges move past us in a flowing current, then each moving charge will not only carry its electric field with it, but it will also induce a magnetic field … as it passes.
Since motion is involved, the Lorentz transformations and Einstein’s theory of special relativity—i.e. the speed of light—become relevant. Understanding their significance is best done through two further thought experiments. In both of them we imagine ourselves running alongside the wire.
In our first thought experiment, we run at the same speed as the charges in the current. We are then stationary relative to those charges. This puts us back in the same situation as the middle graphic. Since we are relatively stationary, the magnetic field vanishes immediately. Only the electric one remains.
In our second thought experiment, we keep the entire line of charges stationary. But since we are once again in motion relative to those charges, then the Lorentz transformations and Einstein’s special relativity and the speed of light once more become relevant. As in the lefthand graphic, then due to the relative motions, the magnetic field promptly reappears.
It should now be completely clear that the Biot-Savart law invokes relativity and only applies to moving electrical charges. It allows us to calculate the strength and disposition of the ensuing magnetic field at any arbitrary point away from the charge, as we see in the right-hand graphic in Figure 65. The strength of the field M4(L4) is being evaluated at the point P4, some distance removed from a flowing current element, dl. The relevance of all this to biological entities and species will become self-evident shortly.
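For the simplest geometry the figure depicts, integrating the Biot-Savart current elements along an infinitely long straight wire gives the standard closed-form result B = μ₀I/(2πr). The sketch below uses that textbook formula; the current and distance are arbitrary illustrative values.

```python
import math

# Illustrative sketch: the Biot-Savart law, integrated along a long straight
# wire, gives the field magnitude B = mu0 * I / (2 * pi * r) at distance r.
# No current means no magnetic field, and the field weakens with distance
# from the moving charges -- exactly the behaviour described in the text.

MU0 = 4.0e-7 * math.pi  # permeability of free space, T*m/A

def field_from_straight_wire(current_amps, distance_m):
    """Magnetic field magnitude (tesla) at a given distance from the wire."""
    return MU0 * current_amps / (2.0 * math.pi * distance_m)

# 1 ampere measured 1 cm away gives 2e-5 tesla; a stationary charge
# (zero current) induces no field at all.
b_moving = field_from_straight_wire(1.0, 0.01)
b_static = field_from_straight_wire(0.0, 0.01)
```

The zero-current case is the middle graphic of Figure 65 in miniature: stationary charges, electric field only, magnetic field absent.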
When Maxwell successfully used the vector calculus to present his electromagnetic theory and analyse Faraday’s induction, he initiated several simultaneous scientific revolutions. The two most important, from our perspective, are that (a) he introduced the speed of light to physics; and (b) he made the power of Faraday’s original field concept clear. Thanks to Maxwell’s great success physics is now littered with many different kinds of fields … including the quantum fields of probability that compose biological entities.
If a biological population is indeed a field of quantum probability, then the first issue to settle is whether or not it is a material field. Material fields highlight some changing property of whatever phenomenon has authored the field. Thus density, pressure, temperature, and velocity are all examples of material fields. An electric field, by contrast, is a non-material field with no such measurable properties. It contains nothing directly observable or measurable. We only become aware of an electric field by observing the behaviour of charged objects inserted into it. The strength of the force depends upon the laws of the electric field; the specific details of each field; and the point at which the given object is inserted. The magnetic field is another non-material field. The electromagnetic field arises when the two intersect. Non-material fields—which include quantum probability fields—elicit their effects, when given conditions hold, by exercising their potentials. The question of whether a biological field of quantum probability is material thus remains open.
Fields can also be classified by the ways in which their properties are measured. Scalar fields such as temperature and mass, for example, can be measured in any direction within their given medium. Since they are fully specified when one variable has been given, they have only one component. Other kinds of fields can depend not simply on distance, but upon a linear displacement—which is a vector. The displacement of a line segment stretches from a given point (a) in a specified direction; and (b) for a specified distance. It therefore has two components. Velocity needs both displacement and time to be brought together before we have a total distance travelled, and it is therefore also a vector with two components. Gravity, electricity, and magnetism are other examples of vector fields. Maxwell’s electromagnetic field is one of the most important vector fields in all of physics. And … is a biological field scalar, or vector?
We can now refer back to the meteorology we briefly reviewed and used as a model right at the beginning of our journey. Bjerknes showed that fluid dynamics, one of the most technically demanding areas of physics, stands at the heart of meteorology. A flowing fluid, such as an ocean or the atmosphere, is an amalgam of many different commodities. Its motion is affected by its density which can change from point to point or region to region. Fluid flows are also affected by pressure, temperature, and the volume rate of flow. The most important of the vector commodities are average velocity; heat flow; and the force per unit surface area that the fluid imposes upon whatever medium transports it. Being vectors, all of these have two components and so require that their directions and magnitudes be specified at every point. A fluid in motion is therefore a combination of several different fields that interact dynamically with each other. Many different combinations of scalars and vectors are active at every point.
Since a flowing fluid has many interdependent ingredients it is an example of a “tensor field”. This is a much more general variety of vector. Any tensor that operates in a space where the number of its components equals the number of dimensions is then a vector. But unlike vectors, tensors do not demand that their number of components equal the dimensionality of the space in which they operate. Nevertheless, any given tensor must have a specific and definable number of interrelated components. A change in any one component must register in one or another of the available dimensions. A flowing fluid may, for example, have three different vectors, but each direction in space is still responsible for reporting the values of the several different vectors. The mere fact that heat flow changes in one direction does not mean that there will be accompanying changes in the average velocity, or the pressure, in that same direction. There will, however, be coordinating changes in one or more of the remaining components, and across one or more of the dimensions. It is therefore more likely that a quantum field of biological probability is a tensor field.
A tensor describes the exact relationship between the magnitudes of its various components over the various dimensions for which they are defined. Its transformation laws reflect the physical laws for that system or phenomenon. Once we know the appropriate law, tensors can predict component values. And since the tensor obeys a known transformation law, it permits us to recalculate all changes in all dimensions regardless of which coordinate system is adopted. Thus if a biological field is indeed a tensor, we must find the coordinates that measure it: i.e. its metric.
Einstein’s general theory of relativity describes the behaviour of all matter, under gravitation, using the metric tensor and the metric field first discovered by Bernhard Riemann. A metric tensor is always needed to define distances along curves in non-Cartesian coordinate systems. A metric field is an attempt to refer more directly to the “space” and the coordinate systems produced by the tensor, and thus to the kinds of calculations that reveal its operations. The metric field for gravitational theory is in four dimensions: x, y, z and t. Since the four values refer to different aspects of the same body, they are necessarily interrelated. The four-number set pinpoints the initial and final positions of all bodies. If a material body is stationary, then the metric tensor acting on its field is flat or stationary. Although its changes in x, y and z are zero, t must be positive. Since t is positive and increasing, the object traces a line parallel to the t-axis in reflection of the metric tensor creating the force defined upon that field. A movement in this field is now a line segment within that geometry. The length reveals that given body’s displacement, under the influence of that metric tensor, in both space and time, and thus reveals its equations of motion. For this reason the metric field, in which the metric tensor is ‘flat’, is much more commonly known as the field of spacetime. It was first fully studied, from this perspective, by Hermann Minkowski. A line segment in this four-dimensional spacetime states an object’s movement from an initial time and position to final ones under the influence of its metric tensor and is termed a “worldline”.
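The flat Minkowski case can be made concrete with the standard line element s² = (cΔt)² − Δx² − Δy² − Δz², which is the conventional form of the metric just described (the text does not write it out). Units here are arbitrary, with c set to 1 as an illustrative convention.

```python
import math

# A minimal sketch of the flat (Minkowski) metric, assuming the standard
# line element s^2 = (c*dt)^2 - dx^2 - dy^2 - dz^2 between two events
# (t, x, y, z) on a worldline. Natural units: c = 1.

C = 1.0  # speed of light in natural units (an illustrative convention)

def interval(event_a, event_b):
    """Invariant spacetime separation between two events (t, x, y, z)."""
    dt, dx, dy, dz = (b - a for a, b in zip(event_a, event_b))
    s_squared = (C * dt) ** 2 - dx ** 2 - dy ** 2 - dz ** 2
    # Negative s_squared would mean a spacelike separation; flag it as such.
    return math.sqrt(s_squared) if s_squared >= 0 else float("nan")

# A stationary body: x, y and z never change, only t advances. Its worldline
# is a line parallel to the t-axis, and the interval is just the elapsed time.
s = interval((0.0, 0.0, 0.0, 0.0), (5.0, 0.0, 0.0, 0.0))
```

For the stationary body the spatial terms vanish and the interval reduces to cΔt, which is exactly the line parallel to the t-axis described above.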
Einstein showed that gravity is one of many different kinds of fields that surround any given body or mass. Since the strength of the gravitational force depends upon the distance from that body, then a complete gravitational theory requires a unique description for every possible point around that mass. Einstein produced the general theory by using a combination of Maxwell’s field concept and Riemann’s elliptical geometry in which the tensor is not flat, and spacetime and the axes that measure it are curved.
Newton based his very different gravitational theory on the qualities of ordinary Euclidean space … which makes the tensor ‘flat’ and ‘empty’. It insists on a straightforward and invariant relationship between the various directions of space—up-down, left-right, forward-backward—and the flow of time from past to future. Thus both the tensor and the field are flat. So when a body starts at t1 and moves from x1, y1 and z1 to x2, y2 and z2 at t2, we can easily calculate its velocity, v, from Newton’s laws of motion. We can even split that velocity into a vx, vy and vz component for each direction. These axes, coordinates and scales of measure are assumed independent of each other. They are also assumed to be unaffected by the sizes of the masses involved, for both the tensor producing the forces and the movements, and the field providing the space and the coordinates used to measure that movement, are presumed invariant and identical.
Darwin presaged Einstein and made the properties of the biological field—i.e. of biological space—dependent upon the amount of biological activity; upon the axes of measure; and also conversely. That is to say, Darwin saw a difference between (a) a biological metric tensor; and (b) a biological field in which the results of that tensor are measured. He also recognized that they are both either curved, or else are sources of curvature. In contrast to this, the proposal of the Aristotelian template, which denies competition and evolution, follows Newton. It declares that all axes and measurements are independent of the scale of biological activity and of any possible biological tensor. Thus where, as in Figures 29 and 60, Darwin’s theory would suggest that the shapes and positions of the curves describing the generation vary as the mass and energy fluxes and the numbers vary, the proposal of the Aristotelian template instead counters that those various axes, as well as the curves’ absolute positions, are all fixed. And where, additionally, Darwin’s theory suggests that the values and units of measure on those various axes will vary as the vector unit normals vary with the biological events being measured, the Aristotelian proposal instead suggests that those units are constant and regular and so quite independent of all biological activity and of the objects being measured.
Einstein’s general theory refutes the implicit assumptions of the Aristotelian and Newtonian views about the uniformity of space, and therefore about gravitational behaviour. In Einstein’s theory, as is well known, space and mass follow a metric tensor and are not independent. A large mass creates a highly curved gravitational field. Gravitational lines of force are no longer uniform. Relationships between x, y, z and t, within a metric field, cease to be straightforward and the tensor now indicates how spacetime and the gravitational field vary around any given mass. Since the intensity of the field varies with the tensor and so with distance from the mass, a body experiences variations in force, and therefore alters its potential for motion at each location. It is no longer possible to produce accurate equations of motion without first referring to the shape and properties of spacetime.
The Einstein field equations describe the gravitational field’s variations in intensity. These create the space and time surrounding every mass, thus revealing how matter is distributed in space while interconnecting to and interacting with itself. Since the resulting equations of motion linking the various components summarize the gravitational field’s behaviour, the tensor reveals that of matter, space, and time. It states the rules of gravitational fields … which is how x, y, z and t will vary with each location. The Einstein field equations connect the four dimensions of spacetime to masses and to each other. The metric tensor’s detailed behaviour is governed by Riemann’s geometry, with the Riemannian curvature declaring the behaviour of each point in spacetime through twenty components which obey Einstein’s law of gravitation.
Quantum mechanics contains a variety of probability fields which state the energy available to quantum particles. Although Schrödinger and the other early quantum theorists were quick to spot the first correspondences between waves and particles, they had a problem interpreting both amplitude, A, and the unit mass or density, μ. Neither amplitude nor density surrendered itself readily. A de Broglie matter wave is a largely theoretical wave that—like an electromagnetic wave—has no need of any medium to support it.
Born eventually offered the now accepted interpretation: that the squared amplitude of the de Broglie wave should be interpreted statistically. It states the probability of finding a particle within any given volume element. The higher the value of A2 then the greater is the statistical likelihood that a particle is occupying that volume over the stipulated time period with a given mass and transporting a given quantity of energy. The variations in those probability densities state the likelihood of the times, the intervals, and the energies with which given volume elements will contain given particles. But even then, quantum physics does not view particles as pointlike objects. We locate an expected particle by searching in the location described by a relevant probability field. Where the probability is strongest we are most likely to find a particlebehaving entity; and we are simply unlikely to find such behaviour where the associated probability is weak.
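Born’s statistical interpretation can be demonstrated numerically. The sketch below assumes the textbook particle-in-a-box wavefunction ψₙ(x) = √(2/L)·sin(nπx/L), which the text does not derive: squaring the amplitude gives a probability density, and integrating that density over a region gives the chance of finding particle-like behaviour there.

```python
import math

# A hedged numerical sketch of Born's rule, assuming the standard
# particle-in-a-box wavefunction psi_n(x) = sqrt(2/L) * sin(n*pi*x/L).
# The squared amplitude is a probability density over position.

L = 1.0  # box length, arbitrary units

def density(x, n=1):
    """Probability density |psi_n(x)|^2 for the n-th standing wave."""
    psi = math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)
    return psi * psi

def probability(a, b, n=1, steps=10_000):
    """Integrate the density from a to b numerically (midpoint rule)."""
    width = (b - a) / steps
    return sum(density(a + (i + 0.5) * width, n) * width for i in range(steps))

# By symmetry the ground-state particle is found in the left half of the
# box exactly half the time, and the whole box integrates to one: the
# particle is certainly somewhere, but never at a definite point.
p_left = probability(0.0, L / 2.0)
p_total = probability(0.0, L)
```

The density peaks in the middle of the box and falls to zero at the walls, so a search is most likely to find particle-behaving activity where A² is strongest, just as the paragraph above describes.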
The problem, now, is how to apply this idea of tensors, fields, and the statistical probability dimension to biology, so that we can recognize waves, particles, and fields. Murray gives us an indication in his paper Universal Laws and Predictive Theory in Ecology and Evolution:
… in physics, determinism refers to the prediction of events that should occur in given circumstances (i.e., initial conditions), often with a controlled experiment. Predictions of future eclipses are always accompanied by the unstated but understood assumption that a massive body does not pass through the solar system between now and the predicted event. The prediction of future events by physicists look good because that massive body has not passed through the solar system—yet.
Biologists cannot predict the future as well as physicists can. This does not mean, however, that we should not be able to make any predictions at all, from laws and specific initial conditions, of existing patterns or past events (providing that we either know or can establish the initial conditions) (Murray, 2000).
As Murray makes clear, and as quantum physics points out, scientific laws expect certain probabilities before they can hold. As an example of the issues Murray has cogently highlighted, in December 2005 the two-year-old Siberian tiger “Tatiana” was moved to the San Francisco Zoo. She was intended as a mate and companion to the 14-year-old “Tony” (Rubinstein and Block, 2007). In December 2007, eyewitnesses stated that three young men—who later admitted both to being inebriated and to having smoked cannabis—taunted Tatiana. The wall of her enclosure was unfortunately below the height recommended by the (US) Association of Zoos and Aquariums and the California Occupational Safety and Health Administration, and she eventually clambered her way out of her enclosure; stalked the men concerned; killed one of them; and clawed and injured the other two before being shot and killed by police officers.
This is now a question of biological probabilities. Thus even when human beings live in the Siberian tigers’ natural habitat, the probability of them being stalked and killed by one is vanishingly low. However, if a Siberian tiger should choose to deliberately track down and stalk a human being, whether in or out of its natural habitat, then the probability of that human surviving is also very low. The probabilities and improbabilities surrounding death and reproduction—with the former being unity and with the latter being under an obligation to somehow avoid ever being zero—are the very stuff of biology. They are the forces that determine the activities and behaviours of biological fields and/or tensors.
Our third law of biology, the law of diversity, divides all biological activity into a required set and an allowed set. Our fourth law, the law of reproduction, then states that at least one path in that allowed set must be reproduction. Biological forces and events stem from the probabilities of reproduction as populations struggle to maintain a nonzero reproduction rate in the face of the ∫dm < 0 of Proviso 1 Maxim 1 of ecology—the maxim of dissipation—that hangs remorselessly over all their members. This is a part of the biological tensor and the biological field.
Richards and Waloff provided us with the data that we used to construct an equilibrium age distribution population for the cricket or grasshopper Chorthippus brunneus (Richards & Waloff, 1954). That data tells us that if the species is to survive then any adults reaching maturity and breeding successfully must produce 22 fertilized eggs each. We can therefore gather up 22 such eggs and select one at random as the one that is “going to make it” … the sole survivor that will in its turn be the successful breeding adult responsible for producing 22 more fertilized eggs, and so continuing the generations. We place the remaining 21 into a fragile reserve in case the one we have selected fails … which is unfortunately very likely.
Our chosen egg has a 95.4% probability that it will fail to last the course. When it eventually succumbs, which is the far more probable outcome, we replace it with another selected equally randomly from our reserve.
The probability of failure for our second egg is still quite high at 95% (straight line, unweighted). Since this is only a marginal improvement, then should it also fail we can replace it with yet another from our reserve. At 94.7% our third egg has a slightly lower probability of failure … but it will also have higher values for mass and energy which have helped raise the probability of ultimate success. If we keep on making such substitutions we can complete the cycle.
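The egg-substitution argument can be sketched in code. The figures below follow the text’s implication that each egg has roughly a 1-in-22 chance of reaching breeding adulthood (a failure probability of about 95.45%); treating the eggs as independent is an illustrative simplification, not something the text asserts.

```python
# A hedged sketch of the egg-substitution argument, assuming (as the text's
# figures imply) that each Chorthippus brunneus egg has roughly a 1-in-22
# chance of becoming a successful breeding adult.

PER_EGG_SURVIVAL = 1.0 / 22.0  # about 4.55%; failure about 95.45%

def chance_of_a_survivor(clutch_size):
    """Probability that at least one egg in the clutch completes the cycle,
    treating eggs as independent (an illustrative simplification)."""
    return 1.0 - (1.0 - PER_EGG_SURVIVAL) ** clutch_size

# One egg alone almost certainly fails; a full clutch of 22 gives roughly
# a 64% chance that some egg carries the generation forward.
p_one = chance_of_a_survivor(1)
p_clutch = chance_of_a_survivor(22)
```

This is why reproduction is a corporate property of the whole adult-plus-eggs set, as the next paragraphs argue: no single egg carries a workable probability of continuing the cycle, but the clutch as a whole does.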
Every biological entity is the physical manifestation of a probable path through a biological system. The number of such probable paths diminishes when progeny is lost. The path of reproduction is always hard to find. It always requires ever increasing—but ultimately finite—quantities of energy to traverse. It is by definition the least probable of all the paths initially available to any population. The individual probabilities for the surviving entities—which can therefore amass further masses and energies—simultaneously increase until a given number of progenitors remains. Each survivor must then realize its potential and successfully reproduce. Each such successful path is then immediately the sum of all the masses, and of all the energies, of all the paths previously lost. Once the path of reproduction has been found, the population’s mass and energy is immediately distributed over an increased number of entities. This immediately increases the number of probable paths … but simultaneously reduces the probability per each one. A generation, and a population, is the workings out and the waxings and the wanings of such probabilities … and these are again what we must describe.
Each intended Chorthippus brunneus adult needs 22 eggs to pre-exist it before there is a good probability the species can continue with its biological cycle. Reproduction is therefore the corporate property of an entire collection of adult-plus-22-fertilized-eggs sets. It is the property of their joint interactions and probabilities involving their mass and energy over historical time. A completed biological cycle is not—and again cannot be—the property of any single entity. It is, instead, the property of a population.
Since a biological cycle must include a path of reproduction, and since it is not and cannot be the property of any individual entity, at least two entities must be enumerated. At least one entity must always succeed to another, and every entity must be the successor to some previous one. A time interval, some Δt, must therefore extend between them. The biological cycle is a statement of the heat emitted and the work done, over historical time, in pursuit of this objective. That work and heat are the statement of those paths explored over intervals of time. And if the population is to continue, then that work and that interval extending between those original two entities must extend to a third to become T, and it must then repeat continuously beyond that to others.
Granted that reproduction is the path that results, over time, as juveniles transform into adults, we can find the clear and scientific language we need to discuss the paths and probabilities—and thus to build a full quantum biology—by using worldlines such as those depicted in Figure 66. Each worldline or spacetime diagram represents the total motion of its associated object as a function of time. Each worldline has time—that most important component of reproduction, of competition, and of evolution—on the vertical axis. And since the events must repeat, then we may well find our wavelength for biology.
If we conduct a thought experiment and imagine a car doing endless circuits around a racetrack, we can in principle photograph it at every single point; record its coordinates at every t; and write them down in a log book or photojournal. Each point or event is a specified set of four values, x, y, z, and t. We can then use that photojournal to reconstruct the entire circuit. A worldline is a complete record of all such events over time, with each distinct event being a single point—a logged photograph and data set—upon the worldline. It tells us the behaviour of the object within its field or metric system, under the influence of the force and the metric tensor acting upon it. And if there is a T for the circuit, then we have the interval over which the tensor repeats.
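The racetrack photojournal can be sketched as a data structure. In this minimal illustration (the circular track, its radius, and its period are invented for the example) the worldline is nothing but the complete list of logged (x, y, z, t) events, and the circuit interval T falls straight out of the log.

```python
import math

def log_event(t, radius=100.0, period=60.0):
    """One 'photograph': the car's (x, y, z, t) event on a circular track."""
    angle = 2 * math.pi * t / period
    return (radius * math.cos(angle), radius * math.sin(angle), 0.0, t)

# The worldline is simply the complete log of such events over time.
worldline = [log_event(t) for t in range(120)]

# If the spatial coordinates repeat, the interval over which they do so
# is the T for the circuit: the interval over which the motion repeats.
x0, y0, z0, _ = worldline[0]
T = next(t for x, y, z, t in worldline[1:]
         if abs(x - x0) < 1e-6 and abs(y - y0) < 1e-6 and abs(z - z0) < 1e-6)
print(T)  # 60
```

The photojournal alone, with no car in sight, suffices to reconstruct the circuit and its repeat interval.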
But instead of setting the car in motion we could equally well ‘pose’ it at those selfsame points. Once we have posed and logged the car at one point, we can apply Newton’s laws of motion and pre-calculate the next point’s coordinates; pose the car; take another photograph; and log everything in our data set. If we keep repeating this, we will create the identical photojournal. Even though, on the latter two occasions, we do not follow any car on its actual journey, we can still use the resulting worldline to recreate the journey, which is to recreate the interaction between the metric tensor and the metric field that measures the tensor’s effect within the appropriate coordinate system.
Each point on a worldline is historically associated, through its variables, with all other points and events. Each event is a point or a ‘charge’ on a long line with others in spacetime. We will thus experience the worldline’s effect if we “live in the real world”. This is effectively to “hold ourselves static” while the events and interactions making up the worldline rush past us, under the tensor’s influence, “at the speed of time”. But we can equally well look on the worldline as a static series of charges or snapshots that we can instead “bring to life” by “rushing ourselves past them”, “at the speed of time”, in the relative sense of our previous Biot-Savart law thought experiment. Either way, it is the force and metric tensor acting upon the worldline that we experience. Similarly, whether a projectionist displays a film on a screen for us so that the frames “move” past us, or we instead mount the same frames in a long picture gallery and then ride past them in a motorized cine slider (or carriage) at the appropriate frames per second, we will see the same film either way … as long as the tensor, field, and coordinate system are invariant. The film is effectively a worldline or photojournal composed of the events constituting the narrative. In an ideal world, the same historical sequence results.
Since the Earth also has a worldline, we can extract its orbital data from Figure 66 and represent its spatial locations on a standard x-y graph as a circle or ellipse. But its worldline as shown has a constant and slight upwards slope and thus is helical. This is because its worldline includes the time dimension. The Earth certainly arrives at the ‘same place’ on every orbit; and it consistently repeats the same spatial coordinates. However, since it does so at different, and later, points in time, then there is a constant upwards progression of its worldline on the t-axis. This spiral and its helicity, its apparent “rate of ascent” in time, is a property of the worldline and the tensor acting to produce it. It is not a property of the Earth. It has no real meaning relative to the Earth. The Earth is only ever “at” a given point in space at a given time, and with an acceleration caused by the tensor and the forces it occasions, and that take it to other points in space at other times. This process is not in itself ‘helical’, much as the worldline might try to convince us otherwise.
A worldline is a property of a body or particle’s entire history. It makes that history available at a glance. Each event states both an x, y, z set—i.e. a location—and a time, t, at which the object occupied that location. Each individual point upon the worldline is simply the logging of a specific event in the relevant historical record and under the influence of whatever causes it.
As in Figure 67, a particle on a worldline has a discrete (x1, y1, z1, t1) set. When it abides by its given laws of motion for a specified time interval Δt, it creates a one-dimensional worldline with a terminal (x2, y2, z2, t2) coordinate. The metric field and its coordinates reflect the activity of the relevant metric tensor.
But as in the middle of Figure 67, we can also begin with two objects that have a relative distance or difference. We could for example track the masses of both a buck and a doe in a given deer population over their lifetimes; or we can instead track the masses of a Siberian tiger and a human being as the former stalks the latter. We then note the changes in their relative values over time, which are the relative shifts in position of their joint worldlines. We will produce a “worldsheet” that flexes and transforms—broadening and thinning as they change at different rates and with different slopes—according to whatever laws or events govern those relative changes in mass over time. These are signs of possible forces exerted or received, whether from the environment or from each other, over historical time. We thus record the size differences of the buck and the doe as they each mature; or else the eventual and possible disappearance of the human’s mass just as we note a slight increase in the tiger’s!
And finally, if we record multiple values in multiple dimensions, then as on the right in Figure 67, we will produce a multidimensional ‘brane’ (or membrane) composed of whatever values and numbers of points and coordinates produce that area or shape. For example, our three-dimensional function f(n, q̅, w̅) for a biological population allows us to track those different commodities. The values at each t compose a brane. As this brane moves through historical time, it creates a “worldvolume” that states all totals and all rates of change for whatever properties or commodities have moved and/or changed over the period, and so contributed to the brane’s shape and texture at each moment t. In a biological situation, this worldvolume is the total mass and energy the population uses over that interval, Δt. And … when that interval is a generation, then we produce the energy we have already learned to analyse with a planimeter, and whose length T, along with its m̅ and p̅, is definitive for that population through the Liouville and Helmholtz decomposition theorems.
We should note two important things:
 Since we have tangent vectors, inner products and the like, we have a metric tensor and a Riemannian manifold.
 Since we have a repeating motion it is properly justifiable to think of the repetitions in values of t, and therefore of T, as a cycle of operations, and therefore a wavelength.
A worldline’s slope or tangent always indicates its velocity, or rate of change, for whatever property is represented at that point. If the slope changes, then the commodity is accelerating. But since a worldline is a form of logbook or photojournal, it is a set of spacetime snapshots, or pictures, of a body or particle’s entire historical behaviour. Each set of coordinates states the body’s values or positions through space, but always as a function of time. And if they are stated as functions of time, then they are rates of activity … and rates of activity are most reminiscent of particles.
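The slope-as-velocity reading can be sketched with a finite difference (the mass record below is hypothetical, invented purely for illustration):

```python
def slope(points, i):
    """Finite-difference estimate of the worldline's slope (its velocity,
    or rate of change) between logged event i and event i + 1."""
    (v0, t0), (v1, t1) = points[i], points[i + 1]
    return (v1 - v0) / (t1 - t0)

# A hypothetical mass record for a growing organism: (mass, t) pairs.
record = [(1.0, 0), (2.0, 1), (4.0, 2), (8.0, 3)]
print([slope(record, i) for i in range(3)])  # [1.0, 2.0, 4.0]
```

The slope itself changes from interval to interval, so this hypothetical commodity is accelerating.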
A worldline is not, in itself, any physical orbit or trajectory. It is not directly visible in real time. Only its individual events exist, and only those are visible or measurable, according to the field or object concerned. A worldline is simply a graphic representation. It is always the property of an entire history. It is never the property of any one timepoint or location.
The metric tensor and the metric field are distinct. The former defines a manifold and/or surface with tangent vectors as its inputs; while the latter is the space produced and in which measures may be taken.
A worldline is a “timelike curve” within spacetime. While its cross-section states the prevailing properties, its tangent indicates the instantaneous velocity and acceleration the body enjoys as it travels away from the stipulated location in spacetime, and so through whatever space is represented by the coordinates in force. Thus the tangent is its timelike future.
Worldlines suffer from the restriction that only two dimensions are available on a piece of paper. Those two dimensions then attempt to depict a third out of the four that constitute them. Thus the helical nature of the Earth’s orbit, when seen as a worldline, is only clear pictorially in Figure 66 because the needed t-axis is accompanied by only two of the three axes in space, in this case the x and the y. The z-axis has been suppressed. But we could just as easily draw the Earth’s worldline in its three spatial dimensions, and so pictorially suppress the fourth axis, the t-axis. We would still be implicitly labelling all points with their t values. The resulting diagram might then look like an ordinary three-dimensional representation … and it would—in certain senses—not really “be” a worldline, for it would seem to be missing the time dimension. However … even though that fourth time axis had been suppressed, it is still technically a worldline, for we can still compute all locations at each t from the given equations of motion, and we can label all locations with those time values. This would then reveal the distance travelled or the time spent on any journey, or line segment, which is traditionally denoted by τ. If we do this for a biological population, then that line segment—which is the distance travelled at whatever rate—can also reveal the generation length we instead know as T.
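The segment length τ referred to here is the proper time of special relativity. A minimal numeric sketch, assuming a constant speed over the segment, uses the standard time-dilation factor √(1 − v²/c²):

```python
import math

C = 299_792_458.0  # speed of light in metres per second

def proper_time(speed, coordinate_time):
    """Proper time tau elapsed along a worldline segment traversed at a
    constant speed, given the coordinate time the segment spans."""
    return coordinate_time * math.sqrt(1 - (speed / C) ** 2)

# At everyday speeds tau is indistinguishable from t; at 0.8c a segment
# spanning 10 coordinate-seconds carries only about 6 seconds of tau.
print(round(proper_time(0.8 * C, 10.0), 6))  # 6.0
```

For the slow speeds of biological entities, τ and the elapsed t are effectively identical, which is why the generation length T can be read off the segment directly.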
Before finally applying all this to biology, we must examine how constraints can be applied to the worldline concept. The most important constraint, which has a direct bearing on evolution, is that no object can travel faster than c, the speed of light. Each worldline must in other words follow all scientific laws. This immediately constrains their slopes. No worldline may have a slope that represents a speed greater than c. Worldlines are thus constrained to indicate historical possibilities. They must indicate what objects could reach any event, or any given body, at any given point. Biological populations must in their turn be subject to the three constraints of constant propagation, constant size, and constant equivalence. And if we are to analyse evolution, then they must also only show what is historically possible.
As at the top right in Figure 66, a given body’s ‘hyperplane of the present’ depicts all points throughout all of space that share the same ‘now’ as any given particle located at that specified point, and at that given time t. All events on that hyperplane (or hypersurface) are currently inaccessible given the restrictions imposed by the speed of light. They are, however, accessible within a given time should a lightspeed signal be issued. That establishes the coordinate system.
The spacetime coordinates for hyperplanes of the present are measured in light-years. But these measure both (a) time periods and (b) distances. Thus all events taking place at a spacetime coordinate of one light-year away in so-called “geographic” space can reach the given origin of measure once that one light-year in time has elapsed.
The sun, for example, is at this moment undertaking a set of nuclear fusion reactions at the rate of 120 million tons of solar material per minute. Four hydrogen atoms form one helium atom, with 600 million tons of the former becoming 596 million tons of the latter (Kennewell and McDonald, 2010). The photons produced as the heat and light lost—which are electromagnetic radiations—will then undertake a random walk process throughout the sun’s interior.
This is now a question of how the sun’s interior is measured. If we take one perspective, it has a physical or geographic diameter of ‘only’ 865,000 miles, 1.39 million kilometres. But that does not give us a coordinate suitable for the hyperplane of the present. We can express the same diameter as 4.6 light-seconds, and so as a function of the speed of light when it is moving in free space. But due to the tremendous forces exerted by the sun’s mass, those photons its nuclear fusion reactions produce take an average of between 10,000 and 170,000 years to reach its surface … and not the 4.6 seconds they would take if they could cover that “same” distance absent the sun’s interior forces, and so in free space. Once the photons have punched their way across the 432,500 miles, 695,000 kilometres, of interior solar material separating them from the surface, they are at last free from the powerful gravitational forces of that interior. Those photons then take only eight minutes to fly across the vastly greater 93 million miles, 150 million kilometres, of free space to Earth. Therefore, a given terrestrial event’s hyperplane, which is its ‘now’, includes all those photons on the sun’s surface whose spatial coordinate, or distance, is eight minutes … because they can reach it in eight minutes from that spatial point away on the sun’s surface. But that same hyperplane coordinate does not include the apparently geographically close-by locations containing photons subject to the sun’s mass and that have yet to begin that free-space journey; and nor does it include locations where photons have already begun that journey at some prior moment in time. Those each have different coordinates on the given hyperplane of the present, for all coordinates are expressed in light-speed terms.
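The free-space conversions in this paragraph are simple divisions by c. A minimal sketch, using round published values for the distances:

```python
C_KM_S = 299_792.458  # speed of light in kilometres per second

def light_travel_seconds(distance_km):
    """Convert a 'geographic' distance into the free-space light-time
    coordinate used on the hyperplane of the present."""
    return distance_km / C_KM_S

SUN_DIAMETER_KM = 1.39e6   # solar diameter
EARTH_SUN_KM = 1.496e8     # mean Earth-Sun distance

print(round(light_travel_seconds(SUN_DIAMETER_KM), 1))    # 4.6 seconds
print(round(light_travel_seconds(EARTH_SUN_KM) / 60, 1))  # 8.3 minutes
```

The 10,000 to 170,000 years spent random-walking through the solar interior cannot be obtained this way, which is exactly the chapter's point: the free-space light-time coordinate and the geographic distance are different measures.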
The hyperplane of the present is “spacelike”. It also includes all solar materials whose spatial coordinates are between 10,000 and 170,000 light-years … for as long as they will take that time period to arrive. It does not matter where they might be physically or geographically located. Thus objects at the heart of the sun, and so “only” 432,500 miles or 0.7 million kilometres from its surface in physical or geographic terms, now share the same coordinates, on the hyperplane of the present, as do all objects in free space that are “actually” between 10,000 and 170,000 light-years distant from the Earth, in spite of that huge difference in their seeming “physical” distances. The density of the sun’s mass greatly affects the hyperplane coordinates of those at its centre. But since the hyperplane is spacelike, it represents all points in three-dimensional space, at any given moment, that share the same putative and hypothetical “universal time” through their coordinates, when measured by the speed of light. All of this advises us to exercise caution when considering “distances” in both biology and ecology, for spacelike geography in one metric is not the same as spacelike geography in another.
The worldline and the hyperplane of the present also give rise to the timelike measures shown in Figure 66. The “past light cone” incorporates all events that could reach a given location at that time moving at the speed of light, or else at all relevant sub-light speeds. What an observer sees in space, at any given t, is entirely contained within his or her past light cone. All light arriving from one light-year away shares the same ‘now’ as all light arriving from twenty light-years away. Similarly, the “future light cone” incorporates all objects and events to which a message or influence can be sent from that point at the speed of light, or else at the appropriate sub-light speed. Only the light cone relative to a given observer is ever experienced. ‘Now’ is simply the singularity on the hyperplane of the present where the past and the future light cones intersect.
The hyperplane of the present is in this sense essentially meaningless. It is an attempt to preserve a conception of an “absolute now” in the face of the quite contrary demonstration of Einstein’s special and general theories of relativity and the importance of the constancy of the speed of light.
As an example, on the morning of 27 December 1831, Darwin famously set off on the Beagle. Any curious beings living on the sun who had access to a light-speed craft (or, more accurately, a very near-to-light-speed craft), and who wanted to join him, would only have needed to leave the sun eight minutes prior to his departure to be there in time. They would have to travel that spacelike distance in a timelike interval.
But if those beings had only had an ancient jalopy with a top speed of sixty miles per hour, 97 kilometres per hour, then they would have had to set out some 180 years prior: i.e. around 1651. And if their lifetimes were equivalent to those of mere earthlings, then the generation that set out would not live long enough to join Darwin; only their descendants could. They would have travelled that same spacelike distance, when viewed from a light-speed perspective, in a different timelike interval and so at a different rate: making the spacelike difference appear vastly different in relation to the two groups. The first group could join Darwin themselves, whereas the latter group would have to hope that their descendants were disposed to complete the mission.
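The jalopy arithmetic can be checked directly. Taking the mean Earth-Sun distance of roughly 93 million miles, a trip at a steady sixty miles per hour takes about 177 years, which the chapter rounds to some 180:

```python
MILES_EARTH_SUN = 93_000_000  # mean Earth-Sun distance in miles (approximate)
SPEED_MPH = 60.0
HOURS_PER_YEAR = 24 * 365.25

years = MILES_EARTH_SUN / SPEED_MPH / HOURS_PER_YEAR
print(round(years))  # 177
```

Several human generations would elapse in transit, which is exactly why only the travellers' descendants could hope to meet the Beagle.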
In similar fashion, any being living anywhere and any when could have joined Darwin in his “now” by first leaving in their appropriate “then”. All those different “thens”, however, are ultimately subsumed in the “now”; and were never in a certain sense “different” from the “now”.
The apparent ‘growth’ in a past light cone’s girth simply means that it ultimately becomes “large enough” to contain all possible events. The past light cone gets more inclusive or larger because the Big Bang lies in the past of all possible light cones. In that sense, a host of beings from many distant locations could have joined Darwin, and they would all always have shared the same past history, and would share the same future no matter how different the acts they engaged in once the Beagle’s journey had been completed.
Similar considerations hold with respect to the future light cone; and also with respect to any worldlines formed in any potential worldline coordinate system. Since both the past and the future light cones keep on ‘growing’, then once a sufficiently long period of time has elapsed they can each encompass any possible transports of light any when and any where, but always relative to the given observer. Although the past light cone will include an ever increasing number of events, that does not “draw” those events “any closer” to each other. So if two spacecraft leave the earth at the same moment, and in opposite directions, they will eventually be 10,000 light-years apart from each other. They will both continue to be in the earth’s future light cone even as they recede from each other, and no matter how physically apart from each other they may each get. Their enormous distance apart in that future “then” is a direct reflection of their potential to be that far apart in the “now”.
The light cones of the past and the future must take into account the fact that photons do not have an inertial frame. Therefore, photons cannot “catch up” with each other. Or alternatively, it is perfectly possible to conceive of a photon with a coordinate frame, but it is not possible to reconstruct what a photon might observe via the intake of other photons, which is what would be required for any kind of proposed interaction between them. Photons do not in themselves interact with each other. They are instead—and roughly speaking—the agents of interaction. So any photons in the above two spacecraft that left the earth could facilitate electromagnetic communications, but they would have greater and greater distances to cover. Photons will eventually be the only realistic things the spacecraft can exchange, for the more separated they become, the fewer the material interactions they can arrange between each other. Their timelike and spacelike intervals are, in this sense, always entangled.
Heat and work are properties of an interaction. They require photons before the material interactions they seek to propagate can be executed. For “heat” to interact directly with “heat”, or for “work” to do so with “work”, is essentially meaningless, for nothing material is stipulated that could be getting “hotter” on the receipt of heat. A material object is composed of fermions, and can therefore interact with photons and become hotter as it is bombarded by an ever increasing number of photons. Those photons can arrive by propagating through free space, emitted by some far distant material object. The photons concerned do not separate further from each other, catch up with each other, or mutually interact on their journey—which concept is essentially meaningless. If a first material object does not interact with a second, then it has not received any photons from that other. Thus the pasts and the futures of non-interacting photons are strictly undefined.
The furthest possible “away point” in every past light cone is the Big Bang, in which there are no interactions remotely “like” any occurring in any here and any now … although they gave rise to all here and now interactions every where and every when. And the furthest possible “away point” in every future light cone is presently undetermined. The “Big Chill” and the “Big Crunch” are the two scenarios with the greatest contemporary scientific support … although the preponderance of evidence is now against the Big Crunch. In the former, the universe expands forever and becomes increasingly dark and cold; in the latter, the current expansion reverses and the universe implodes back on itself (perhaps even running all events backwards?). Another hypothesis is the “Big Rip”, in which the size of the immediately observable region at each location shrinks continuously, even as the size of the overall universe expands, so that the universe is internally dismembered piecemeal. Our own solar system would then be torn apart some three months before “the ultimate end”, dated perhaps some fifty million years hence. Whichever hypothesis for future light cones is correct, events at that furthest point will again be completely unlike any in any present here and now, even though all those future heres and nows must be both different from, and out of immediate contact with, each other, and yet all in the same way.
In much the same ways, Darwin’s theory of evolution, as currently understood, proposes the following about biological entities:
 that there is a “biological hyperplane of the present” they all inhabit in a “biological now”;
 that they are largely reproductively inaccessible to each other on that hyperplane;
 that they all also reside within their respective “cones of reproductive inaccessibility”;
 that a prior biological evolutionary history exists that all biological entities on the hyperplane of the present hold in common;
 that the prior biological evolutionary history concerned is a common descent;
 that in the case of all possible terrestrial entities, this common descent is enshrined in their DNA;
 that all currently existing entities are in principle capable of standing at the head of their own reproductive cones of the future;
 that there is a potential biological evolutionary future history stemming from them all that is also ultimately different from all others;
 that these future reproductive cones in their turn lead to streams of common descent, originating as they do all over the biological hyperplane of the present (which is, again, a statement of current reproductive inaccessibility).
If all of this is to hold good, then we must make the probability that any biological entity currently reproducing on the biological hyperplane of the present can reproduce outside its own and given reproductive cone equal to zero. But this refers us straight back to the first quantum principle. The fundamental unit of action—to which even biological entities are subject—is the Planck constant, h. Since we have demonstrated that biological entities are composed of molecules and are governed by f(n, q̅, w̅), this is virtually the demand that every molecule in every biological entity be predictable, and predicted, at all possible historical times.
Like quantum physics, reproduction is about probabilities … and worldlines are of course affected by quantum uncertainty and probability. By the Heisenberg uncertainty principle, the more accurately we determine a given particle’s momentum, the less accurately we can know its position at any time, and conversely. But if we try to avoid this inevitable uncertainty by creating a photojournal that artificially “poses” the particle, then the resulting worldline can only possibly be as accurate as the snapshots we take of real particles, or as our prevailing equations of motion and our computations. And since those snapshots and/or equations necessarily incorporate quantum probabilities, we end up with the same uncertainties—and the same worldline—regardless. But even with such manifest uncertainties, accurate equations of motion are still possible. Quantum physics is the most accurate and precise of all physical theories. The issue now is whether or not it can also explain reproductive inaccessibility upon the biological hyperplane of the present, and do so in the language of vector or tensor fields; of waves; of particles; and of relativity.
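Heisenberg's trade-off, referred to in this paragraph, has a compact standard statement (using the reduced Planck constant ħ = h/2π):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad \hbar \equiv \frac{h}{2\pi}
```

The more sharply the momentum p is determined, the larger the irreducible spread Δx in position, and conversely.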
We are now ready to apply these ideas to biology, to competition, and to Darwinian evolution. We must of course begin with a biological entity. So we now observe some entity, such as Brassica rapa or Chorthippus brunneus. We record all pertinent data over its life span—i.e. all values for mass and energy for every time t that it survives. We can now construct a photojournal for that individual entity. As with the white line in Figure 68, we now have a biological trail or “biotrail” for that individual entity as it navigates biological spacetime. We have recorded its mass and energy, but the time axis has been suppressed. It has also undertaken a cycle, always at a given pace or rate.
We know from our experiences with both Brassica rapa and Chorthippus brunneus that one entity and biotrail is not enough. C. brunneus needs an adult-with-22-fertilized-eggs set before we have a reasonable prospect of completing a biological cycle and producing viable offspring. The basis of our B. rapa experiment was building up the n-set of biotrails, in the appropriate densities at each t, to produce the values for its equilibrium age distribution population. We must therefore observe and overlay many biotrails before we get a clear picture of the biological and reproductive events surrounding any entity. It therefore takes a given number to complete a cycle, and a given number to provide a rate or velocity.
Once we have enough biotrails, we can draw them together using the standard methods of population biology. We can construct the coordinated values that produce the “biological worldline” or “biopath” for a population. This is the broader, darker line in Figure 68. This biopath—a collection of biotrails—is the historical record of a unit engenetic equilibrium population. Since we now have (i) a T for the generation length; (ii) an m̅ and a p̅ for the divergence and the flux; and (iii) an n for every t over T; we have the unique signature of any population capable of successfully reproducing and creating such a biopath … which is simply the worldline for an equilibrium age distribution population over its generation. This is a complete cycle, always executed at a given rate or velocity.
At each timepoint, t, on our biopath we can take a cross-section. This forms an “engenetic brane” or “enbrane” that states the population values at that t. The enbrane’s size, or area, depends upon n, the number density of the trails or entities existing at that time. Our enbrane now flexes and transforms with the mass and energy fluxes, M and P, over the ts over T, to create the generations and the biopath. Its length is again T, the circuit of the generations, and the work done is a rate.
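A toy sketch of this construction: hypothetical biotrails (the numbers are invented) overlaid into a biopath whose cross-section at each t is an enbrane. The names follow the chapter's own terminology, but the simple averaging rule below is an illustrative assumption, not the book's stated method.

```python
# Hypothetical biotrails: each is one entity's record of (mass, energy)
# readings, one pair per timepoint t. All values are invented.
biotrails = [
    [(1.0, 2.0), (2.0, 5.0), (4.0, 9.0)],
    [(1.5, 2.5), (2.5, 5.5), (3.5, 8.5)],
    [(0.5, 1.5), (1.5, 4.5), (4.5, 9.5)],
]

def enbrane(trails, t):
    """Cross-section of the biopath at time t: the number density n plus
    the mean mass and mean energy over the n biotrails alive then."""
    alive = [trail[t] for trail in trails if t < len(trail)]
    n = len(alive)
    mean_mass = sum(m for m, _ in alive) / n
    mean_energy = sum(p for _, p in alive) / n
    return n, mean_mass, mean_energy

# The biopath is the succession of enbranes over the generation.
biopath = [enbrane(biotrails, t) for t in range(3)]
print(biopath)  # [(3, 1.0, 2.0), (3, 2.0, 5.0), (3, 4.0, 9.0)]
```

The individual biotrails diverge, yet the biopath they jointly produce is smooth: the population-level record, not any single entity's, is what carries the signature.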
Although we can certainly produce a biopath by recording many entity values, we could instead embalm—or “pose”—a sufficiently large number of entities selected from random points all over the generation, and over many generations. Once embalmed, we can photograph those “posed” entities and again record all pertinent data. Provided their recorded properties are accurate, we will produce the same biopath, regardless of the fact that we have observed no single entity over its entire lifespan.
Each point on any suitable biopath—which is its enbrane cross-section—also has its tangent, which is the timelike future for that population at that t. Thus the set of biotrails upon the biopath in Figure 68 describes a set of biological events at that time, per each entity on the relevant number of biotrails. For every point on the biopath, and so on each biotrail composing it, there is a mass, an energy, an energy density, a population density, and a rate of change in all values, including in m̅ and p̅, and so in all curls, divergences, and fluxes. Since each biopath is made from a given number, n, of biotrails, the biopath and its enbranes state all mass, energy, and number values over those n members at each t in timelike fashion, as that population heads towards the next t over T. We can now predict, from the biopath, the most likely rates and values for all biotrails and enbranes, and so for all entities and populations, within any given reproductive cone on the biological hyperplane of the present.
We have seen several times that the so-called hard sciences have invariably expended a great deal of energy in avoiding the etymological fallacy by carefully clarifying terms. Many—such as mass, energy, volume, velocity, heat, work—began in ordinary language and took on a clear and specific scientific meaning. Now that we have suitably defined a biopath as a worldline in biological space, and so as a collection of biotrails, we can properly define any object capable of creating one. That defined object will also be capable of completing a cycle of accelerated rates and paces, and so of existing in a definite cone of reproductive inaccessibility relative to all other biological entities composing its biological hyperplane of the present.
The tangent to a biopath declares a specified timelike future and rate of activity for an enbrane: i.e. for a biological population in a given and specified state at the given time t, a state that defines both its traits and properties and its immediate instantaneous future. The population concerned can also consistently repeat a directional derivative, which is to complete a cycle. It exists in one or another cone of reproductive inaccessibility. But as is the case with worldlines, neither the tangent to the biopath, nor the state it declares, is a property of either the enbrane itself or the entities that construct it. They are instead properties of the biopath, which is a rigorous scientific-historical narrative for that collection of enbranes and its constituent biotrails. It states (a) rates and paces of activity; and (b) intervals of time for a cycle.
We now need a rigorous scientific term. It must:
 refer specifically to any collection of biological entities whose work done and heat emitted produces the tangent to the collection of biotrails that creates enbranes upon a biopath;
 stipulate that the entities abide by: (1) all vector rules; (2) the Biot-Savart law; (3) our three constraints of constant propagation, size and equivalence; (4) our four laws of biology; (5) our four maxims of ecology; and (6) the Liouville and Helmholtz decomposition theorems;
 further stipulate that all relevant entities occupy a discrete domain of reproductive inaccessibility with respect to all other entities that share the same biological hyperplane of the present or ‘now’;
 define a reproductive history they all hold in common through that reproductive cone of inaccessibility they together occupy;
 imply both a rate of activity over all entities, and an applicable cycle length.
We can extract a term from ordinary language to fulfill the above specifications. Our chosen candidate term is “species”.
Now that we have a suitable biological term we must immediately clear it of all traces of the etymological fallacy. This—unfortunately—is fraught with great difficulty. Both “species” and the accompanying process of “speciation” are as vague and undefined as any other term in biology and ecology. Our ability to clarify terms is as usual made more difficult by biology’s insufficiently sound grasp of the most basic of the scientific terms needed. This militates against providing any clear definition:
How can one discuss the general mechanisms underlying a process (speciation) when one does not have a definition for its outcome (species)? There has of course been no lack of attempts to find such a definition. Many eminent biologists have tried it. In fact, almost every student of biology will have tried it, or will do so sooner or later. And everyone will fail – like his or her predecessors – but will have learned a lot about biology in the process. It is simply impossible to combine all aspects of species into a single concept – especially when dealing with organisms as diverse as palaeontological species, asexually reproducing species or bacteria. Thus, when discussing mechanisms of speciation, one tends to reduce this to the “normal” sexually reproducing taxa and to the so-called biological species concept. In 1895 Wallace gave the following version of this concept, “A species is a group of living organisms, separated from all other such groups by a set of distinctive characters, having relations to the environment not identical with those of any other group of organisms, and having the power of continuously reproducing its like”. In 1942, a shorter version, with important omissions including the ecological references, was popularized by the late Ernst Mayr, “Species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups”.
This shift of focus to reproductive isolation, rather than environmental adaptation, is for good reason (Tautz, 2009).
We deal first with species, leaving the associated process of speciation until later. We approach “species” by understanding that it is the expression of a difference between a biological metric tensor and a biological metric field as a given group of biological entities defines its principal trait … which is reproduction. This reproduction is a statement of mass and energy over time.
If species exist, then boundaries will exist between them. Those boundaries of differentiation are in their turn caused by, and are also evidence of, their differences in reproduction. The standard claim, which is very different from our own, is that differences in species will show themselves in traits and features. Speciation is then perhaps an exploration of those boundaries between traits. Modern biological science asserts that these are Darwin’s variations and their effects. Although this approach diverts attention away from the mass, number, and energy density we have chosen to highlight, it is important to be clear about its claims.
The British anatomist Richard Owen, Darwin’s contemporary, was the first to use the idea of ‘homology’ as a way of suggesting a difference or ‘distance’ between the kind of phenomena that could establish—more or less rigorously—the differences between species and their distinctive traits (Hanson, 1981, p. 273). Reproduction and homology are clearly intertwined because homologies are handed down through the generations and become characteristic of species. We must certainly concede that Owen’s homologies involve mass and energy.
Owen recognized a serial and a special form of homology. By serial homology, he meant any similarity of structures we can observe as we examine an entity, such as by proceeding along its length. Front and back legs, for example, are often remarkably similar to each other. If we examine two very different creatures in this way, such as an anteater and a dog, we can often see broad similarities along the lengths of each. Those are two distinct cases of serial homology. We can certainly find a host of others. And by special homology, Owen meant similarities between structures in different organisms. Thus we can now directly compare the anteater’s legs to the dog’s and any others’.
Owen’s ‘serial’ homology had often also been referred to as ‘general’ homology, and thus in reference to some Platonic or Aristotelian archetype and ideal (Laubichler, 2000). Darwin’s theory of natural selection, however, immediately allowed biologists to re-examine Owen’s homologies under a very different lens: as signs of a possible descent from a putative common ancestor, and so on a basis of reproduction. Instead of homologies conforming to some ideal type, current differences between entities could perhaps be better explained via Darwinian variations handed down through the generations.
The possibility of common descent meant that if homology was going to be useful, discussion of similarities and differences had to become more precise. How “far apart” or “closely related” were any two entities with a given homology? Was there some standard of reference? Thus begin the difficulties characteristic of this imprecise species concept.
Adolf Remane brought some clarity to homology by proposing three major criteria: (a) the positional; (b) the compositional; and (c) the serial (Hanson, 1981, p. 274). The positional insists that homologies are only sound when the proposed structures lie in similar positions on the proposed entities. The compositional insists that they must have similar parts, structures, numbers, and chemical, structural and other compositions. With the positional and the compositional in place, the serial becomes the most evolutionarily relevant. It states that if two structures initially appear dissimilar, but can “reasonably” be related through proposed or actual intermediate structures that satisfy the positional and compositional criteria, then those apparently different structures are most probably homologous. Thus, for example, bones lying outside the skull in one set of creatures could well be located inside the skull in others, and so be homologous—as would seem to be the case between mammals and some fish.
Homology as proposed by Remane required the comparison of complete sets of traits or features. His checklist of criteria led to a probability-based approach to homology. If his criteria were fulfilled, then proposed homologues became more probable as the similarities in their relative positions and structural plans increased. Any transitional forms or structures found further increased the probability of homology.
A “seme” is a trait or character set, in the Remane sense, that is sufficiently complex to allow for meaningful discussion of proposed homologies. If a group of entities is being studied to establish whether homologies are present, then the total set of proposed semes is t. A “plesioseme”, p, is a trait or seme that would seem to be ancestral. An “aposeme”, a, is a derived trait that is little changed across a variety of entities, or else that is held in common by many in some closely allied respects. For example, no matter how different they might be when compared to each other, all tetrapods have four legs, and only tetrapods have four legs. A “neoseme”, n, is a trait that appears in one group and not in others. It might then be considered the more evolutionarily recent, and so the more likely to be inherited through descent from the evolutionarily prior plesio- and aposemes.
Owen’s and Remane’s ideas for homology and phylogeny are all very well, but science requires numbers, wherever possible. It demands reproducible measurements. A ‘phyletic distance’—generally denoted R in honour of Remane—is then the proposed measure of separation between two entities or groups of entities based on an analysis of their semes (Hanson, 1981, p. 298). It is effectively a reproductive distance.
Remane’s phyletic distance is given by:
R = 1 + {1/t · [−p + (2a)² + (3n)²]}.
Thus plesiosemes, p, reveal “nothing” about evolution. The argument is that there is not much probability that those particular sets of traits and entities evolved from each other. They are more likely to be the ground from which the other traits observed have emerged. The −p term therefore removes them from consideration. The remaining aposemes, a, and neosemes, n, are then weighted—by 2² and 3² respectively—for their perceived relative importance. If the set of organisms placed under observation all happen to be from the same species, then all semes are the same across them all; they are effectively plesiosemes; and so they are subtracted from contention. And since the semes across these individuals are the same, then the apo- and neosemes are zero, for nothing in any of them is derived from any other, and nothing is new in any of them. Thus the phyletic distance is R = 0. This would seem to imply that nothing changes in reproduction.
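Remane’s phyletic distance is easily sketched in code as a check on the formula. The first call below reproduces the R = 0 result for members of a single species; the seme counts in the second call are purely hypothetical, for illustration only.

```python
def phyletic_distance(t, p, a, n):
    """Remane's phyletic distance: R = 1 + (1/t) * (-p + (2a)**2 + (3n)**2).

    t: total semes compared; p: plesiosemes (subtracted from contention);
    a: aposemes (weighted by 2**2); n: neosemes (weighted by 3**2).
    """
    if t <= 0:
        raise ValueError("total seme count t must be positive")
    return 1 + (-p + (2 * a) ** 2 + (3 * n) ** 2) / t

# Same species: every seme is effectively a plesioseme, so p = t and a = n = 0.
print(phyletic_distance(10, 10, 0, 0))  # 0.0

# A hypothetical comparison: 4 plesiosemes, 2 aposemes, 1 neoseme out of 10.
print(phyletic_distance(10, 4, 2, 1))   # 3.1
```

Note how the subtraction of p is what forces R to zero for conspecifics: without the minus sign the formula could not deliver that result.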
Unfortunately, semes lack objective criteria. Discussions about ancient, derived, or new therefore become somewhat arbitrary and inconsistent. They surely seem to assume, at the outset, the very thing they are trying to prove. The same criticism can be levelled at the more modern variant of “phylogenetic systematics” or “cladistics” founded by Willi Hennig (Hennig et al, 1966). It would surely be better if terms such as plesio, apo, sym, neo and the rest emerged as outputs. They should not be the original inputs … otherwise … the case is surely only being “proven”—or rather, not proven!—by default.
It may seem a small point, but to assert that plesiosemes say “nothing” about evolution is to gloss over the fact that they say rather a lot about reproduction’s accuracy and stability. To even concede that plesiosemes and plesiomorphs exist is surely also to concede that reproduction without evolution is at least theoretically possible. It is therefore surely the template or force behind the plesioseme—its notable accuracy over aeons of reproduction—that is the issue, and not variations away from it. Plesiosemes then preserve the species, with the proposed apo- and neosemes being aberrations rather than evidence of evolution.
The problem is not just species, but speciation. Although it is true that some of those who criticize cladistic methodology question Darwin’s theory of evolution entirely, and so are anti-evolutionists who do not accept the possibility of speciation, many critics are scientists who question only cladistic methodology—and homologies in general—and not evolution and speciation. Hennig, for example, begins by saying that “evolution is the transformation of organismal form and behavior” (quoted in Laubichler, 2000). But … this is surely just a definition, if not an unproven assumption. It can be no more than a suggestion for how to proceed to test a hypothesis. Hennig—who is already a convert to evolution—is searching first for organisms, and then second for the very traits and characteristics that he clearly believes will “prove” his assumptions that:
a species x is more closely related to another species y, than it is to a third species z if, and only if, it has at least one stem species in common with species y, that is not also a stem species of z (Hennig et al, 1966).
Hennig’s systematics is driven by its analysis of “genealogical relationships” amongst organisms, which are in their turn driven by the following five principles:
 the cohesion of all biological organisms, current and extinct, is caused by their genealogical relationships;
 these relationships exist (a) within the individual organisms of a population, (b) between the organisms in a population, and (c) between species;
 relationships of all other kinds, such as the genetic and the phenotypic, are correlated with their genealogical relationships, and are therefore best understood within the context of a genealogical descent with modification, which is evolution;
 genealogical relationships among populations and species can be recovered and discovered by searching for the particular characters that document such relationships;
 the best general classification of organisms is one that exactly reflects the genealogical relationships among them—their phylogeny can be recovered from an exact classification of those relationships.
Quite apart from being self-serving, this involves yet another unproven assumption: that it is indeed possible to reconstruct the evolutionary process by examining contemporary organisms that are reproductively inaccessible to each other through being upon the same biological hyperplane of the present.
Hennig and the cladistic method produce a cladogram complete with proposed plesiomorphs, synapomorphs and many others, all analyzed into clades or groupings of organisms based upon their proposed common descent. A cladogram’s treelike structure then proposes that the sequence of transformations used to derive it is indeed evidence of the phylogeny that not only proves evolution, but the specific route it has taken through the organisms inhabiting those nodes or clades.
But once again: the scientist has selected the traits in the first place, and entirely because of the preconceived agenda of evolutionary inevitability:
And because they [cladograms] restrict evidence to new, unique evolutionary features as a way of determining relationships among closest relatives, they are more consistent theoretically with the expectations of evolution than any other method.
…
… The scientist determines which characters and organisms to choose and which states are primitive and derived (Padian, 2000).
Padian’s derision towards those sceptical of cladistics, as quoted in his above paper, is surely a little misplaced. Scepticism towards cladistics does not immediately imply a fundamental disbelief in speciation and evolutionary theory. Those sceptical towards cladistics are not always overtly questioning such propositions as that birds can be seen, with cladistic reasoning, to have evolved from dinosaurs (Feduccia, 1996). The broader question is whether or not evolution and speciation can in any way be ‘proven’ when the only evidence available is a methodology that does not seriously question evolution as a hypothesis. Cladistics and similar approaches simply stack all the cards in their favour. But if Darwin’s theory is correct; and if all existing boundaries between species are caused only by variations in descent from a common ancestor; then an alternative and more objective method to test, validate and prove it would surely be desirable. We certainly need some standard so we can discuss the distances and speeds involved in reproduction and any attendant speciation.
If an alternative method to measure reproduction exists, then it should start from inputs that are entirely objective and that do not preselect traits and features. Any measure proposed to replace phyletic distance, R, should instead emerge quite naturally as an output from those much more objective inputs. That measure could then become a phylogenetic proposal: a hypothesis to account for the distance and the ways it might be traversed. That suggestion and that distance should then be validated by observation—i.e. by taking the appropriate measurements—to see if they match the values predicted from the original inputs. And … our function f(n, q̅, w̅) has highlighted only mass, number, and chemical configuration. These are surely objective. It is hard to see how anything could be more objective. No traits are mentioned. We now seek a metric that incorporates these three, while also highlighting species and speciation.
Reproduction requires that traits be replicated down through the generations. Our position is that all biological entities must use energy to achieve this. We must somehow measure their internal features and constructions through energy. But thanks to Mayer and Carnot, we already in fact have a sound and irreproachable scientific method to track the internal behaviour of all systems behaving under energy.
Both Mayer and Carnot—prime architects of the energy doctrine—each noted two fundamental facts that account for the behaviour of all systems, including the biological, under energy. First there is Mayer who:
 noted that all substances—including those used by biological entities—have two different specific heats, one mechanical, Cp, and the other non-mechanical, Cv; and he also
 linked them via Mayer’s relation, Cp − Cv = R, where R is the universal gas constant.
Therefore a substance’s thermal behaviour—biological or nonbiological—depends only on its amount of substance, which is the number of molecules concerned. By the Avogadro constant, this reduces to its mass. We can therefore track all traits through nothing more than mass, numbers, and energy. It is surely not possible to be more objective.
Mayer linked the external and internal. Comparing the mechanical or external work done by a system to the heat received by it internally tells us about all those changes in state it undertakes that do not involve volume, and so are not mechanical. This is in other words its internal condition. If we can therefore measure the volume and the heat, we will know the extent of those other changes. This is the province of entropy, whose biological equivalent we already know how to measure as engeny.
The ratio between the two specific heats for any substance is its adiabatic index—also called the heat capacity ratio or isentropic expansion factor—and is denoted by γ. It implies a prevailing value for the system’s entropy, for this depends upon those specific heats. They govern the mechanical work done and not done, and so the changes in volume and pressure which are effects in the environment:
γ = Cp/Cv = −[(dP/P)/(dV/V)].
Since, by the ideal gas law (in suitable units), PV = T, integrating the above gives PV^γ = constant for any substance. And since we move from a set of initial to final values, then Pfinal/Pinitial = (Vinitial/Vfinal)^γ.
The adiabatic index for atmospheric air—i.e. the ratio of its specific heats—is γ = 1.4. Thus if we start with a system whose temperature is Tsurroundings; keep its piston immobile; and heat the air behind that piston to our chosen temperature of Ttarget; then by PV = T the pressure rises immediately. But since the piston is stationary, the heat is added using the air’s constant volume heat capacity, Cv, the lesser of its two values. This also causes entirely internal changes in state. If we now remove the source of heat and free the piston, then its increased pressure will do expansion work out into the environment. Its volume will increase while its temperature falls back to Tsurroundings. If we now keep the piston free, and heat the air from Tsurroundings back to Ttarget, then since the piston is again moving we will heat it under its Cp heat capacity value and not the Cv one we used originally. The ratio of these specific heats means we will have to input approximately 40% more heat than we did the first time. Thus by measuring the external work, we know the internal states or traits, but entirely objectively.
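The 40% figure follows from Mayer’s relation alone. A minimal sketch, assuming an ideal gas with γ = 1.4 and an illustrative 50 kelvin rise from Tsurroundings to Ttarget:

```python
R_GAS = 8.314  # J/(mol K): the universal gas constant in Mayer's relation Cp - Cv = R
gamma = 1.4    # adiabatic index of atmospheric air, Cp/Cv

Cv = R_GAS / (gamma - 1)  # constant-volume molar heat capacity
Cp = gamma * Cv           # constant-pressure molar heat capacity

dT = 50.0  # kelvin: an illustrative rise from T_surroundings to T_target
q_piston_locked = Cv * dT  # piston immobile: all heat goes to internal changes of state
q_piston_free = Cp * dT    # piston free: extra heat is spent as expansion work

extra = q_piston_free / q_piston_locked - 1
print(f"{extra:.0%}")  # 40%
```

The extra fraction is simply γ − 1, whatever the temperature rise chosen.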
The value Vinitial/Vfinal for a system’s expansion is its compression ratio, r. It states the mechanical work the system does as it undertakes its changes in state. It thereby also states all qualitative changes in state, for these do not produce mechanical work as the heat is applied. Therefore the compression ratio, which can be measured in the environment, is an entirely objective measure that gives information about a system’s ongoing changes in state, including (a) all its irreversible losses to the environment; and (b) all internal changes in state. We already know how to measure their biological equivalents, which are the visible presence and its dynamic inverse, the work rate, W.
Although Carnot also made two scientifically important discoveries, he could only personally quantify one of them:
 The discovery Carnot could quantify was Carnot’s theorem: that the amount of work a system can do, even in the best of all possible cases, is limited to a value that depends entirely on the temperature difference through which it operates. He quantified the Carnot efficiency as η = 1 − (Tfinal/Tinitial).
 The discovery Carnot could not quantify was his observation that heat energy could only ever move from hot to cold, and never vice versa. Carnot’s insight was later quantified by Clausius as the second law of thermodynamics: the declaration of entropy, which can only stay the same over the perfect and reversible Carnot cycle, for otherwise it must inexorably increase. Clausius showed that dS = δQ/T, and that the entropy change over any given temperature change is Sfinal − Sinitial = ΔQ/T. This declares, through entirely objective measures, the system’s total change in thermal energy, which is its net molecular motions undertaken; and these are necessarily, by the third law of biology which is the law of diversity, all changes in traits.
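Clausius’s dS = δQ/T can be illustrated numerically. The heat quantity and the two temperatures below are illustrative only; the point is that the cold body’s entropy gain always exceeds the hot body’s entropy loss, so the total can only increase.

```python
def entropy_change(q, t):
    """Clausius: dS = (delta Q) / T for heat q transferred at absolute temperature t."""
    return q / t

# Illustrative values: 1,000 J of heat leaves a body at 400 K and enters one at 300 K.
Q, T_hot, T_cold = 1000.0, 400.0, 300.0
dS_hot = -entropy_change(Q, T_hot)   # the hot body loses entropy: -2.5 J/K
dS_cold = entropy_change(Q, T_cold)  # the cold body gains more: +3.333... J/K
dS_total = dS_hot + dS_cold          # net entropy inexorably increases

print(dS_total > 0)  # True
```

Reversing the flow (cold to hot) would make dS_total negative, which the second law forbids.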
Carnot’s and Mayer’s discoveries state internal conditions, yet are entirely objective. The thermal efficiency any real system exhibits can always be compared to its Carnot ideal (a) moving between the same temperatures; and/or (b) doing the same quantity of work; and/or (c) exhibiting the same compression ratio. Temperature is the integrating factor for entropy and this entire suite of objective measures is now the probability distribution of the system’s molecules; of their energies; and of their most likely velocities and configurations, all of which are also of internal—and therefore biochemical—significance.
Since we have determined the complete set of biological correspondences, we can understand the significance of relating temperature to volume and to pressure via the adiabatic index. This gives TV^(γ−1) = constant, and P^(γ−1)/T^γ = constant, for any population. We can also relate a substance’s compression ratio to its temperature through Tfinal/Tinitial = (Vinitial/Vfinal)^(γ−1) = r^(γ−1) … which then gives Vfinal/Vinitial = (Tinitial/Tfinal)^(1/(γ−1)) = (Pinitial/Pfinal)^(1/γ) = 1/r. And since, by Mayer’s relation, Cp − Cv = R, then if we know a substance’s constant volume heat capacity, we can always determine its temperature per each mole of its amount of substance, from the change in volume that accompanies its change in temperature:
Tfinal = Tinitial(Vinitial/Vfinal)^(R/Cv).
We can also of course determine a substance’s original volume from its final volume through its accompanying change in temperature:
V2 = V1(T1/T2)^(Cv/R).
And, finally, since the Carnot efficiency is a ratio between an initial and a final temperature, it can be stated in terms of the adiabatic index and the accompanying compression ratio, r:
η = 1 − 1/r^(γ−1).
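These adiabatic relations are easy to verify against one another numerically. A minimal sketch, assuming γ = 1.4 and an illustrative compression from 2 units of volume to 1, so that r = 2:

```python
gamma = 1.4
R_over_Cv = gamma - 1  # since gamma = Cp/Cv and Cp - Cv = R, we have R/Cv = gamma - 1

T_initial, V_initial, V_final = 300.0, 2.0, 1.0  # illustrative values; r = 2
r = V_initial / V_final

# T V^(gamma - 1) = constant across the change:
T_final = T_initial * (V_initial / V_final) ** R_over_Cv

# Recover the original volume from the two temperatures, V2 = V1 (T1/T2)^(Cv/R):
V_recovered = V_final * (T_final / T_initial) ** (1 / R_over_Cv)

# Efficiency from the compression ratio, eta = 1 - 1/r^(gamma - 1):
eta = 1 - 1 / r ** (gamma - 1)

print(round(T_final, 2), round(V_recovered, 2), round(eta, 4))  # 395.85 2.0 0.2421
```

Recovering V_initial exactly from the temperatures confirms that the volume-temperature and efficiency formulas are mutually consistent.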
We now have a whole set of entirely objective measures that predict all accompanying changes in the behaviours and configurations of the molecules of the systems to which they apply—all of which are entirely internal. And since reproduction requires the replication of internal processes and structures, we now turn to similarly predicting, objectively, both the internal and the external changes biological entities can undergo, and thus their abilities to evolve. We must also measure the speed of reproduction, which is the essence of speciation … if it exists.
We already know that every biological population can be expressed through a function f(n, q̅, w̅). This measures the mass flux in kilogrammes per second; the energy flux in joules per second; and the work rate in watts per kilogramme. In addition to these, we measure the absolute generation length as Z seconds per biomole. This is the rate of processing over a given number of entities as some are lost and then replaced. The population is then uniquely specified by its values for m̅ and p̅ at every moment t over its given generation length T. These together create the biopath that states the population’s history. The biopath is therefore a function f(T, n, q̅, w̅), and is in its turn a declaration of that population’s reproductive inaccessibility from all others sharing the same ‘now’.
A biological population is a statement of energy. We have noted that its average individual mass, m̅, is the integrating factor for its engeny, S, which in its turn determines its microscopic molecular behaviour over all its members. Their biological potential, μ, is the rate of change of their engeny, dS. It is the manner in which they apportion their energies amongst their various activities—mass, number, type of chemical activity, and time—and so determines their most probable instantaneous paths to create their states of reproductive accessibility and inaccessibility.
We can measure, for every population, its engenetic burdens of conformation and components mass, χ and κ, respectively. These are a function of their respective vector unit normals … which therefore measure, relatively, the same commodities that the three constraints do absolutely. The engenetic constant, Ω, is the sum of every population’s engenetic burdens of conformation and components mass. Thus its change in engeny, consequent to any given change in its average individual mass, is dS = χ(dm̅/m̅) + T’(dV/V) … or dS = κ(dm̅/m̅) + T’(dP/P) where V is the visible presence, P is the Wallace pressure, and T’ is the ratio between the given population’s generation length, and that of our reference or defining population, which we have declared as T = 1,000 seconds. All biological behaviour—including evolution—is therefore and again a function of mass, time, energy and number of chemical components as distributed over the entities maintained, and all of which must maintain a given rate of operations.
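As an illustration only, the conformation form of the engeny change can be computed directly. The values below for χ and the fractional changes are hypothetical; T′ = 3,110.4 is the Brassica rapa multiple of the 1,000-second reference population quoted in the text.

```python
def engeny_change(chi, dm_over_m, t_prime, dv_over_v):
    """Conformation form of the engeny change: dS = chi*(dm/m) + T'*(dV/V).

    The companion components-mass form swaps chi for kappa and dV/V for dP/P.
    All inputs here are illustrative, not measured values.
    """
    return chi * dm_over_m + t_prime * dv_over_v

# Hypothetical population: chi = 2.5, a 1% rise in mean individual mass m-bar,
# T' = 3110.4, and a 0.1% rise in the visible presence V:
dS = engeny_change(2.5, 0.01, 3110.4, 0.001)
print(round(dS, 4))  # 3.1354
```

Because T′ multiplies the dV/V term, the relative generation length dominates the engeny change whenever it is large, exactly as the text’s emphasis on generation lengths would suggest.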
Since we now know that evolution—if it exists—is a function of f(T, n, q̅, w̅), we know that there must be four evolutionary factors or potentials based on the times, rates, masses and energies made available to populations. Darwinian evolution must therefore be a manifestation of changes in these four aspects of the function. We express them as:
 a “Haeckel potential”, ηH, based on generation lengths;
 a “Darwin potential”, ηD, based on numbers;
 a “Mendel potential”, ηM, based on mass and chemical components;
 a “Gibbs potential”, ηG, based on chemical configurations.
All these are suitably objective. They together determine the speed of reproduction. They are a combination of the cycle length and the rate of activity at every point and come together to create the evolutionary potential, η. We set them out for the real case of Brassica rapa in Table 6:
Table 6: Evolutionary Potential, η

Potential              | Measure          | Minimum                            | Maximum                            | Multiple
Haeckel potential, ηH  | Generation time  | 3 days                             | 14 days                            | 3,110.4
Darwin potential, ηD   | Numeracy         | 0.662 biomoles per second          | 1.096 biomoles per second          | 1
Mendel potential, ηM   | Mass             | 1.171 × 10⁻³ grams per second      | 1.049 × 10⁻¹ grams per second      | 9.248 × 10¹⁰
Gibbs potential, ηG    | Visible presence | 6.740 × 10⁻³ kilogrammes per joule | 1.120 × 10⁻² kilogrammes per joule | 8.970 × 10¹²
The first factor driving evolution is the Haeckel potential, ηH, which is a measure of T and Z. There are, however, immediate consequences to the variability in generation lengths we measured in our Brassica rapa experiment. If, for example, another species has an equilibrium generation length of 50 days but with entities ranging between 35 and 60 days, then there is a good probability that—other things being equal—they are compatible with B. rapa. There is an overlap between B. rapa’s maximum, and this candidate plant’s minimum, meaning at least some entities will be exploiting chemical components at comparable rates. The overlap increases the probability that equivalent chemical reactions can be in force at equivalent times, and that progenitors and progeny could be jointly and reproductively accessible.
Defining a species is initially problematic because defining a generation length is also problematic. Thus although some species of mosquito have Z = 4 days per biomole, our representative mosquito has Z = 20 days per biomole, while the blue whale has Z = 31 years per biomole (Masaki, 1967; Taylor et al, 2007). These are vastly different time scales, with the latter being over 565 times as great as the former. Their biopaths will be of vastly different lengths, masses and energies. And if we consider an arbitrary time period of t = 4 days, then while some species of mosquito can amass enough biological mass, and undertake enough biochemical reactions, at a sufficiently rapid rate to complete reproduction, most mosquito species will only have amassed enough mass and energy to have travelled 0.2 of their generation length or 20%. This is a considerably slower rate of activity. Over the same arbitrary four day span, the blue whale will only have been able to access enough mass and energy to have travelled 0.00035 of its generation length, a yet slower rate again. These objective measures tell us that the probability that any one of these groups of entities has either evolved directly from or reproduced into the other is exceedingly low. Their enbranes and tangents—their timelike futures—depict unbridgeable differences in both the amounts and the rates of biological processing occurring at every time point t.
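The fractions quoted above follow directly from the stated generation lengths. A quick check, using the text’s own figures:

```python
DAY = 86400.0  # seconds

Z_fast_mosquito = 4 * DAY          # some mosquito species: Z = 4 days per biomole
Z_mosquito = 20 * DAY              # the representative mosquito: Z = 20 days per biomole
Z_blue_whale = 31 * 365.25 * DAY   # the blue whale: Z = 31 years per biomole

window = 4 * DAY  # the arbitrary four-day span considered in the text

print(window / Z_mosquito)               # 0.2 -> 20% of a generation
print(round(window / Z_blue_whale, 7))   # ~0.00035 of a generation
print(round(Z_blue_whale / Z_mosquito))  # ~566: over 565 times as great
```

In the same four days the fast mosquito species completes a whole generation, while the blue whale traverses barely a three-ten-thousandth of one.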
We currently express “biological time”, the pace of reproduction, as seconds per biomole. Since it is the energy expended at each moment in the cycle, we must account for the numbers over which the energy is distributed, at each t, to give an n at each t all over T. Our Brassica rapa plants produced generation lengths of 28, 35 and 44 days, leading to a calculated equilibrium length of 36 days. Generation length, however, is a directional derivative based upon n and q, rather than a specified time span, which is why absolute generation length, Z, is expressed as seconds per biomole. It not only specifies the number of entities, but also declares the existence of a yet-to-be-specified amount of components and processing.
As a general point of principle, two populations cannot be reproductively accessible, nor evolve into each other, unless the distributions of the probabilities attached to their molecules and absolute generation lengths allow the entities to overlap in numbers; in types of chemical components; and in rates of processing. An equivalence in generation length between two populations is thus a statement of the probability that—other things being equal—they share a rate of processing for their entities and their stocks of chemical components at equivalent points across their generation lengths. Therefore, the first of the objective criteria that establish reproductive accessibility or inaccessibility on the biological hyperplane of the present is generation length T. And since this must be expressed in relative terms as T’, and so in terms of—and as a multiple of—our reference population, then its value for B. rapa is:
T’ = 3,110.4.
This T’ simply means that—other things being equal—B. rapa uses 3,110.4 times as much energy to complete a generation as does the reference, or Franklin, population. The reference population is moving at an accelerated rate relative to Brassica rapa … and we now have a clear metric for that difference. The same holds for any other population. This therefore forms part of the proposed Haeckel potential, ηH.
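This multiple can be reproduced from the 36-day equilibrium generation length quoted above, on the assumption—introduced here purely to make the arithmetic explicit, and not stated in the text—that the reference Franklin population has a generation length of exactly 1,000 seconds:

```python
# Hedged check of T' for B. rapa. The 1,000-second reference generation
# is an assumption made for this sketch only.
SECONDS_PER_DAY = 86_400

T_brassica = 36 * SECONDS_PER_DAY   # 36-day equilibrium generation, in seconds
T_reference = 1_000                 # assumed reference generation, in seconds

T_prime = T_brassica / T_reference
print(T_prime)                      # 3110.4
```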
The second force that drives evolution is the Darwin potential, ηD, which declares the number of entities processed by the energy the Haeckel potential makes available over the given generation length. The Darwin potential records the number densities at each moment in time. Mosquitoes, as an example, display far higher intrinsic rates of natural increase than do whales. They achieve this not only by having a much shorter mean generation time, but also by reproducing (a) at a much earlier point; and (b) in far greater numbers.
We are quoting all cycles and populations with the same N’ = 1 biomole average over their generation. Therefore, no matter how extensive may be their deviations either side of this average, there is no multiplier for any population. All populations enjoy the same average number density of this one biomole over their generation. This is the standard. The only difference is then the deviation, and its rate at each point.
We then turn to the Mendel potential, ηM, which helps establish the biochemistry declared by the above two potentials. The engenetic burden of components mass, κ, measures the mechanical work the population does in the environment acquiring and binding needed chemical components. It measures the population’s ability to interact with the environment while converting chemical components into biological mass and matter. This is the mechanical energy of marshalling the chemical components needed for the cycle. Just as the Carnot efficiency states the amount of mechanical work it is possible to produce from a given quantity of energy, so also does a similar relation hold for average individual mass in biology. We need a scale of measure for changes in numbers of chemical components; changes in their types; and changes in their rates of absorption and emission.
The microscopic particles a population gathers through its Mendel potential, ηM, represent the mechanical chemical energy it uses per unit of time. Those molecules must of course follow all quantum rules. Before, for example, a Brassica rapa subpopulation can evolve into a species distinct from all other B. rapa, it must amend its ηM from one set of probable values to another. So if a first population is to evolve into a second, it must marshal and transition the molecular probabilities enshrined in its engenetic burden of components mass, κ, from one value to another, and at an appropriate rate. The target range of probabilities must therefore be approachable from an initial range. As with the Haeckel potential, ηH, these two must at some time either overlap or be contiguous. Thus the relative ranges of values for the initial and the final masses in each population, as well as their absolute values, must at some time be approximately equivalent.
We must nevertheless always compare populations quantitatively to our reference cell. We must express each population’s m̅’, its average individual mass maintained per second over its entire generation, as a multiple of the reference. Brassica rapa’s overall mass maintained over the generation is 9.248 × 10¹⁰ times greater than our reference, meaning that no matter what other differences in rates and times there might be, it expends more energy than does the reference, or Franklin, population in pursuing its generation.
The final factor driving evolution is the Gibbs potential, ηG. This is a function of the population’s engenetic burden of conformation, χ, which measures the non-mechanical chemical work done per second. It states the numbers and types of chemical bonds that bind the acquired components, and how they change over the cycle, at a given rate. It states the range and diversity in form and type—always from initial to final values—as chemical configurations oversee growth, development, and reproduction. This ranging of visible presence—which is directly equivalent to the compression ratio exhibited by a standard thermodynamic system—is the change in the Gibbs energies per unit mass per second and the measure of the diversity in chemical configurations and reactions per each unit of mass, again per second.
For every trait or feature a population deploys as it heads towards reproduction, there must be a change in chemical binding and bonding, and a change in the rate of chemical activity over its given component numbers. This is a transition from a V-maximum to a V-minimum over the specified entity numbers and their components. But since the development represented must be reversed in recipience, we can equally well consider this as a V-minimum to V-maximum.
Once again, if any population is to evolve into or out of B. rapa, then the types of bonds it forms, which are its values for the visible presence, V, must start and/or end somewhere close to ηG. There must be some overlap or contiguity in the distribution of chemical activities and configurations across the two sets of entities and their ranges, and over similar periods of time. They directly affect the traits and features of the populations they describe, and we have again produced an objective value for these molecular probabilities.
We can only for now, however, deal with the total energies involved. We therefore express the generational average, V’, as a multiple of our reference cell. The value for Brassica rapa is ηG = 8.970 × 10¹², meaning it uses that many times as much energy as does our reference Franklin population to complete its generation.
The sum of the above multiples is 9.062 × 10¹². We therefore now know that Brassica rapa uses that many times as much energy as does the reference Franklin population to complete its generation. Energy’s behaviour, however, always involves both the special and the general theories of relativity. They also determine reproductive accessibility and inaccessibility—which still awaits its clear definition. This suggests that each biological entity can only experience the biological and reproductive effects of both the past and the future within its own population light cone, and so with respect to other entities sharing that same population light cone. This must, however, be a function of these multiples of energy.
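The three multiples quoted above can be summed to confirm the total; a quick check:

```python
# The three generational multiples for Brassica rapa, each relative
# to the reference Franklin population, as quoted in the text.
t_multiple = 3_110.4      # T': generation length
m_multiple = 9.248e10     # average individual mass maintained
v_multiple = 8.970e12     # visible presence (Gibbs potential)

total = t_multiple + m_multiple + v_multiple
print(f"{total:.3e}")     # 9.062e+12, dominated by the Gibbs term
```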
Every discrete collection of energy, including the biological, is a physical observer with a light cone. Each can only ever experience its own. None can experience that of any other. ‘Now’ is simply the singularity on the hyperplane of the present where the respective past and future light cones intersect. The totality of every physical observer’s space and time is divided into five types or categories. Biological events and their quantum particles must also therefore be divided into those five categories.
From the perspective of a physical observer in a standard Minkowski spacetime, all events—again including those involving our relativistic biological energies—take place in one or another of:
 the inside of the past light cone;
 the inside of the future light cone;
 upon the surface of the past light cone;
 upon the surface of the future light cone;
 outside both the past and the future light cones, and so in the absolute elsewhere.
There are no other possibilities.
All biological entities are Minkowski observers. All biological activities—including speciation—must therefore also fall within those light cones. All current biological activities are on the hyperplane of the present and so must both result from, and be connected to, similar activities in the past light cone, and they must likewise lead to activities in the future light cone.
Light cones invoke relativity, and therefore demand measurements made relative to light. For example, the distance from the average pair of human eyes to a piece of paper, when reading, is ‘thirty centimetres’ when considered spatially. But since the piece of paper is located at a distance that light can travel in approximately one-billionth of a second, it should be more correctly expressed, for energy purposes, as ‘one light-nanosecond’. Both are measures of distance, but the essence of special and general relativity is that the latter is also a measure of time.
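The conversion is simple; a sketch:

```python
# Express a reading distance as a light-travel time.
C = 299_792_458              # speed of light in m/s

distance_m = 0.30            # 'thirty centimetres'
time_ns = distance_m / C * 1e9
print(f"{time_ns:.2f} ns")   # ≈ 1.00 ns: one light-nanosecond
```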
Reproduction is also about energy and time. Just like the interaction with a piece of paper held in the hands, a biological interaction has a “very close proximity” to its observer. The moon is also a material body … but one located approximately 380,000 kilometres from the Earth. Radio signals—which are also electromagnetic waves—therefore take approximately 1.3 seconds to travel that distance. Meaningful sub-lightspeed—i.e. material—interactions are still quite possible with the moon, but it is first necessary to travel there, which very few humans have succeeded in doing.
NASA used a set of sub-lightspeed interactions to launch the Voyager 1 and Voyager 2 spacecraft in 1977 as part of a proposed “Grand Tour” to study the outer reaches of the solar system and the interstellar medium (JPL, 2012). As of February 2012, Voyager 1 was the farthest man-made object from earth. It had already entered the heliosheath, the outermost layer of the heliosphere, and is projected to be the first man-made and material object to leave the solar system. Voyager 2’s mission is to study the solar system’s boundaries. If undisturbed it will bypass Sirius in about 296,000 years. In February 2012, it was 9.082 billion miles, 14.616 billion kilometres, away from the sun … but only about 13½ hours distance in light-travel time.
A manned spacecraft that could travel fast enough to catch either of the Voyagers is largely the stuff of science fiction. Meaningful sub-lightspeed and material interactions with either are now close to impossible. Nevertheless, lightspeed interactions are still possible with both, although round-trip communications with Voyager 2 now take about 27 hours, making a conversation with any humans who could have left upon it, had it been manned, inconvenient but not impossible. Once again, the matter, energy, space and time needed to interact, materially and at sub-lightspeed, increases as the light-distance increases. Matter, time, space and energy—even the biological—cannot be separated and are linked to, and by, the speed of light.
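The light-travel times follow directly from the distances quoted; a sketch:

```python
# One-way and round-trip light times for the distances given in the text.
C_KM_S = 299_792.458                     # speed of light in km/s

moon_km = 380_000                        # Earth-moon distance
voyager2_km = 14.616e9                   # Voyager 2, February 2012

moon_s = moon_km / C_KM_S                # ≈ 1.3 s one way
v2_hours = voyager2_km / C_KM_S / 3600   # ≈ 13.5 hours one way
round_trip_h = 2 * v2_hours              # ≈ 27 hours for a reply

print(round(moon_s, 2), round(v2_hours, 1), round(round_trip_h, 1))
```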
The interior of an observer’s future light cone houses all events involving material objects that move at sub-lightspeeds, and that could therefore engage in meaningful material-based sub-lightspeed interactions at some future time when viewed from that observer’s perspective. The interior of the past light cone is similarly filled with those material objects that had the capacity to act so they could interact materially, i.e. with sub-lightspeed interactions, in the here and now. Just as the future light cone represents those regions of spacetime with a material influence, so also does the past light cone. All events inside the past light cone are those that could have emitted a material body or particle that can affect what is happening in the here and now. In the same way, all events inside the future light cone can be affected by a material particle emitted or leaving from their given here-now point upon the hyperplane of the present.
The only individuals and events that can, or could, influence the here and now by physical—including biological—means are those whose light cones overlap closely enough to allow those meaningful sublightspeed interactions. All individuals on earth currently alive thus have the ability to influence each other biologically and historically, for their light cones can in principle overlap. It is again highly relevant biologically that the movement of physical bodies is limited to the regions of wheres and whens contained within the past and future light cones.
The boundary of the past and future light cones is a special case. Since E = mc², it is formed by photons that can only travel at the speed of light. From the observer’s perspective, no material interactions are possible with such objects for nothing can move at the speed of light and it is therefore not possible to catch up with anything and interact with it materially. A material bat can hit a material ball when they are at the same location, but any photons arriving at that same location can only warm the microscopic components of each. To reach any spacetime position on the surface of the future light cone requires a movement at the speed of light. Thus the edge of the future light cone represents the future history of a flash of light emitted on the hyperplane of the present, while the edge of the past light cone represents all locations and directions from which flashes of light can at this moment be received.
As time progresses, the future light cone of any given event grows to encompass more and more locations. Similarly, an event’s past light cone stretches back to encompass more and more locations … but at earlier and earlier times. Locations further away on the past light cone must also be at more distant times, for their light needs correspondingly longer to travel here. The past light cone includes, at its very edges, very distant objects in the observable universe, but only as they looked a long time ago, when the universe was still young.
For any given event, the set of events on or inside its past light cone is the set of all events that could send a signal to reach it and influence it in some way. Likewise, the set of events on or inside its future light cone is the set of events that can receive a signal and so be causally influenced by that event. Events that do not lie in either the past or the future light cones can have no influence on, or be influenced by, that same given event.
The hyperplane of the present is a continuously expanding surface that is ostensibly a condition of universal simultaneity, except that it can never be seen as such. The only thing accessible in any and all heres and nows is the universe’s present and quantum state of material and sublightspeed interactions as it is defined by the light cones slicing through the hyperplane. Two events with different coordinates on the hyperplane of the present but seemingly occurring “at the same time”, in whatever frame of reference, always lie outside each other’s past and future light cones as drawn at that time. Light cannot travel instantaneously between them, a fact that always holds. They may occur “at different times” and “at different locations” as referred to by different observers, but the two events will always be observed to lie outside the two light cones concerned.
The future light cone is the boundary of the causal future of a point, while the past light cone is the boundary of its causal past. The absolute elsewhere is thus an integral part of light-cone behaviour. It is the region of spacetime lying outside the light cone of any given point in spacetime. Elsewhere events are mutually unobservable, and cannot be materially or causally connected in a simultaneous sense. All observers therefore agree on the light cones at each event. In any frame of reference, an event judged to be in the light cone by one observer will be judged to be in that same light cone by all others.
As in Figure 69, the outlines for a quantum relativistic biology are now becoming clear. Since, by the first law of biology, biological entities must constantly do work, then a flow or linear charge density—the same time processing density we met before—exists over all molecular components, and at a specified rate. As per Figure 65, the biopath results from application of the Biot-Savart law to that moving charge. It is the Faraday induction we also met earlier that elicits a repeating cycle of operations, and therefore a wave. The enbrane or cross-section at any time then states the population’s instantaneous values. The tangent to that enbrane states the population’s timelike future, including all rates and velocities and all rates of change for all properties. The mass and the energy flux together form the vector field we have already studied, with the biopath having length T which abides by Green’s and Stokes’ theorems, and which we can measure with our planimeter.
Since we have a quantity of relativistic energy moving from a biological past to a biological future, we can now emulate the complete success of Maxwell’s theory in explaining all observed facts about electromagnetism. His realization was that propagating electromagnetic waves can carry energy from one point to another. Since it follows all those same vector rules our biological wave—moving through biological spacetime—can do the same.
Einstein then followed Maxwell and realized that the laws of physics expressed in the Maxwell equations have the same form every where and every when. All physical quantities associated with free space are constant for any observer. They therefore hold good for all biological entities. The speed of light is truly constant in all inertial frames. As Maxwell pointed out, molecules are not born and do not die, and maintain the same behaviour everywhere and at all times (Maxwell, 1875, Vol. III, p 48). All microscopic biochemical behaviour must therefore follow all the established laws of science. And this is the constancy that makes evolution inevitable.
The sub-lightspeed interactions that govern biology fall under the general theory of relativity, which governs the behaviour of space, matter, energy, and time. The general theory successfully allows analysis of the metric tensor as one concept, and the metric field as another. It also separates matter, as a source for gravity, from the gravitational energy arising from matter as it moves through and across space and time. The aspect of gravitation and of the Riemann manifold known as the “Ricci tensor” governs matter’s behaviour, whether it is or is not biological.
The Ricci tensor is a most important part of the observable universe. Slightly more formally, the Ricci tensor rules over the evolution of the sizes of volume elements in spacetime as each point within that volume element flows along its geodesic curve. It is that part of the curvature of spacetime that determines the degree to which matter will tend to converge or diverge in time. Since it governs all mass and matter, it therefore also governs the behaviour of all biological matter.
We can better understand the Ricci tensor if—as is standard in general relativity—we imagine a population’s entire collection of microscopic components as a large sac or ball of dust or “molecules” floating through spacetime. At some locations the sac is put under pressure by the curvature. It is squeezed and gets denser. At other locations the pressure is relieved and it opens out. There will then be movements or changes on all dimensions. The Ricci tensor tells us how the volume or bag of molecules that create the matter in our biological population changes and grows, at its different rates, over the various dimensions. In order to facilitate discussing them jointly, these x, y, z, and t are traditionally given the indices μ and ν, which range from 0 to 3. Changes in their values under general relativity take the sac to different locations in the manifold. Matter’s behaviour can then be predicted and measured.
The Ricci tensor is effectively a radial term. It governs the expansion and contraction of those volume elements that contain matter. And since volume is expressed in biology as population count, then the Ricci tensor governs net increases and decreases in that population size. This is the ∂n/∂t factor with which we are already well familiar in Maxims 3 and 4 of ecology, the maxims of succession and apportionment.
If the Ricci tensor governs population count within our floating sac of molecules, which is position on the number axis, then we need another tensor to govern all population activities at each point on the biopath in terms of mass held and energy deployed. As in Figure 70, we can always transport the sac as a vector about a closed loop in the Riemannian geometry. In this case, our geometry is a biological space or field, and the mass and energy involved takes the sac full of biological entities through a complete Franklin cycle. The population of entities within the sac emerges at one point and departs at another, at which point another cycle begins.
We can now take our vectors or sacs of molecular-based entities about a closed loop longitudinally, which is along the generation length. If there is then a difference in the phase or energy volume each sac occupies when it returns, that net change, ΔV, defines a change in the curvature of spacetime. There has been a difference in the stresses imposed on the biological matter following that geodesic or path of shortest distance. The entities concerned must also be feeling, and exhibiting, a difference as they move through space and time. This is again governed by the Ricci tensor. If—and only if—there is no matter directly creating and influencing the shape of spacetime, then there is no Ricci tensor, and there will be no unit engenetic equilibrium population.
The general theory has another important part to its handling of matter, space, and time. This governs its continuity of effect. Just because no matter is physically present in a given location does not mean there is no gravitational effect at that point. That is to say, the sac is always being drawn from one location to another. When it leaves a given location it is being drawn elsewhere, but it is also likely to return there at some future time, or else to a nearby location. This is an attractive property of all matter handled in the general theory. By the Maxwell equations, the necessary gravitational waves—the transmission of the gravitational effects that one body can have upon another—can also propagate through free space. Their creation and behaviour is governed by that aspect of the metric tensor known as the “Weyl tensor”.
Our dust or sac of molecules must always have a shape governed by the dimensions and the space in which it is at any time located. The Weyl tensor is then that part of the Riemann curvature tensor that causes those deformations along a geodesic that do not cause changes in volume. Much as we saw with the Liouville theorem, the Weyl tensor can influence spacetime and objects, causing them to change their shapes but without changing their volumes. Gravity can therefore propagate—at the speed of light—even in regions where no source of matter or energy exists. It transports gravitational effects to other locations where matter exists. Where the Ricci tensor describes the relative sizes of volume elements of space, the Weyl tensor describes their relative shapings, shearings and twistings. It is therefore more related to rotations, and to those behaviours analogous to curls and divergences.
We can now see from Figure 70 that, in biological terms, the Ricci tensor is more particlelike and spacelike, governing the actual locations and movements of populations of particles and entities; while the Weyl tensor is more wavelike and timelike, governing the shape of the space, the trackings of the coordinates, and the handoff and flow from event to event across spacetime.
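The contrast can be illustrated with a deliberately simple linear-algebra analogy (a sketch only, not general relativity proper): a map whose determinant differs from 1 changes the volume of a cloud of "dust" points, which is the Ricci-like behaviour, while a shear of determinant 1 changes only its shape, which is the Weyl-like behaviour:

```python
# Toy analogy: 2-D linear maps acting on a sac of 'dust'. The
# determinant measures how the map scales volume (here, area).
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

expansion = [[2.0, 0.0],
             [0.0, 2.0]]   # Ricci-like: volume scales by det = 4
shear = [[1.0, 0.5],
         [0.0, 1.0]]       # Weyl-like: shape changes, volume preserved

print(det2(expansion), det2(shear))   # 4.0 1.0
```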
The Weyl tensor has a component that describes the tidal progression of gravity; and a component that describes the behaviour or transformations of any inertial frames and their coordinates as they feel its effects. The Weyl tensor changes the shape and intensity of the spacetime that any material object will feel once placed in the space it creates. Thus spacetime is not necessarily flat in the absence of matter, because the Weyl tensor contributes a curvature. The gravitational field need not therefore be zero. If the moon were removed from the solar system then its somewhat negligible contribution to solar gravitational behaviour, through its own mass, would be removed through the Ricci tensor currently centred upon it. But the solar system’s net gravitational effect in that location would remain through the Weyl tensor.
It is possible to derive equations for a component of the Weyl tensor that makes it the gravitational analogue of Maxwell’s equations of electromagnetism. The Weyl tensor then represents that part of spacetime curvature that can propagate across, and also curve, free space. Those regions of spacetime where the Weyl tensor could vanish completely across all four dimensions would contain no gravitational radiation; would be conformally flat; and no light rays would bend because of gravity. Even in relatively free space, therefore, light cones cannot be tilted or curved by gravity to become parallel, because that would effectively create a Euclidean-style space over all space and time, and mean that the Weyl tensor had vanished completely everywhere. Since matter always exerts a force and attracts, this does not happen. In the same way, by Maxim 1 of ecology—which is the maxim of dissipation and which states that ∇ • M → 0—biological matter is never stationary. It is always either converging or diverging, which is to say increasing or decreasing its energy and/or its volume elements in biological space. Therefore the biological version of this tensor also cannot be zero. Entities are always being born and dying. Entire species—which govern the shape of the geometry available to the entities—do the same. Both are inevitable consequences of this biological spacetime and geometry.
We are now ready to apply the ideas portrayed in Figures 69 and 70 more fully to biology. Like the general theory of relativity, it is about mass and energy. We have also already proven that biology is a vector field phenomenon, where a vector is simply a more restrictive form of a tensor.
By the Maxwell theory, and by the first law of thermodynamics, gravity is expressed as waves which are propagated through free space. By the same Maxwell theory, biological phenomena can also be expressed as repeating wave motions propagated across the generations and from progenitors to progeny.
The key to Darwinian evolutionary theory lies in the following two pairs of characteristics. One in each pair is stationary, the other dynamic:
 T is a population’s mean generation time in seconds;
 Nt is the population’s number density at any given t;
 Z is the population’s absolute generation length in seconds per biomole, and so the amount of the generation length that passes as each entity is processed at the prevailing rate of activity, and where Z = dT/dN;
 Qt is the population’s numeracy in biomoles per second and where Q = dN/dt.
There is an apparent oxymoron in that Q and Z are measured in opposing units. One, however, represents the Weyl tensor, the other the Ricci tensor. One is measured in seconds per biomole, the other in biomoles per second. But although when measured for any given population these two should seemingly “multiply together” to give unity, tensor multiplication is not so straightforward. That “unity” does not remove all dimensionality … and certainly does not signify an immediate lack of mass and/or energy at any point. As in Figure 70:
 the absolute generation length, Z, of seconds per biomole is the biological equivalent of the Weyl tensor and establishes the dimensions of spacetime along with its coordinates and intensity at each point;
 the numeracy, Q, is the biological equivalent of the Ricci tensor and is the speed of propagation and overall properties of any populations and entities located at each point defined by the Weyl tensor.
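The reciprocal pairing of Z and Q can be sketched numerically, using the representative mosquito from earlier (Z = 20 days per biomole); the numerical product is unity even though, as noted above, the dimensionality and the underlying mass and energy do not vanish:

```python
# Z in seconds per biomole and Q in biomoles per second for the
# representative mosquito of the text.
SECONDS_PER_DAY = 86_400

Z = 20 * SECONDS_PER_DAY   # 1,728,000 seconds per biomole
Q = 1 / Z                  # biomoles per second

# Numerically Z * Q = 1, but the units s/biomole x biomole/s cancel
# only dimensionally; the mass and energy at each point remain.
print(round(Z * Q, 12))    # 1.0
```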
Gravitational waves help to determine the nature of space and time as they propagate. A given body Y placed in spacetime will feel the gravitational force emitted by X according to the field that X places around itself and as felt by Y, and which depends upon the Weyl tensor. If no mass occupies a given location then although no source for gravitational effects exists there, that spacetime can still transmit the Weyl tensor, through itself, to other locations.
In the same way as do gravitational waves, biological field lines propagate. They are transmitted as DNA by progenitors to their progeny, who in their turn become progenitors and retransmit received values and properties, again via their DNA, to their own progeny, all in the manner of the Weyl tensor. They can be measured at Z seconds per biomole. They declare a number density for each t over T. The progenitors declare to the progeny, by the properties and DNA they transmit in reproduction, how many of them there should be at any given point in the generation, and the mass and energy they should have. They thus declare:
 an accompanying rate of propagation in a material medium which is the mass flux and the cycle of material operations composing the entities located at each t over T; and
 an accompanying rate of work and heat, which is the energy flux and the work rate.
Just as gravitational field lines exist to transmit gravitational effects, so also do the biological field lines inherent in reproduction exist to create given numbers of entities with given masses, energies and rates of work. They pass through entities and populations and from one generation to the next via DNA. This is the timeline and worldline aspect of the Weyl tensor.
Since at least two entities—an entry point and an exit point, as shown in Figure 70—must always be enumerated for any biological cycle, the absolute generation length is once again the Weyl tensor. It is wavelike and timelike and reaches out from one material entity in the past to another in the future. It is the biological potential encoded in DNA and the geodesic in biological spacetime. It is expressed through our already given Euler and Gibbs-Duhem equations. It is the reproductive potential, A, the visible presence, V, the Franklin energy, F, and the biological potential, μ. These establish the biological space and create the biological lines of force. Just as with gravitational and electromagnetic fields, they are the method by which biological entities assign potentials and quantities and transmit properties through biopaths and enbranes.
The transmission of potentials and properties from one set of biological entities to another through given quantities of mass, work, and heat over time is vector based. It follows the Biot-Savart law and the Helmholtz and Liouville theorems. It follows our laws, maxims, constraints, and so on. It defines a cone of reproductive inaccessibility. It establishes spacetime and its coordinates. Therefore, any set of entities that follows such criteria fully merits the term species.
Now that we have a clearer view of species, we can turn to speciation. The as yet unattained holy grail of evolutionary theory is observing evolution in action in the emergence of a new species … although the etymological fallacy continues to haunt the term. Since species is poorly defined, speciation is also poorly defined. Earnest attempts have nevertheless been made to “catch evolution in action” in the laboratory:
Laboratory experiments on speciation investigate the conditions under which reproductive isolation can evolve between members of what was initially a single population, as well as the conditions under which reproductive isolation between initially partly reproductively isolated populations can become intensified. … Speciation experiments have made important contributions to our understanding of mechanisms of speciation, and are likely to continue to do so, complementing comparative, genetic, and theoretical approaches (Fry, 2009).
If, in the light of the Weyl tensor, we look on species as the handing on of definite quantities of mass, energy and configuration across time, then one of the more successful attempts to create a new species was with a hybridization method reported by Greig et al in their paper “Hybrid Speciation in Experimental Populations of Yeast”.
Greig et al begin with a declaration of the basics, in their view, of speciation:
Speciation is thought to arise by gradual evolution of genetic incompatibilities, ecological specialization or chromosomal differences that prevent mating or cause inviable or infertile hybrid offspring. Rapid species formation can potentially occur by hybridization; however the degree of reproductive isolation between potential new hybrids and the two parental species is a major limiting factor. Hybrids must be self-fertile and sufficiently reproductively isolated to maintain a distinct lineage, but reproductive barriers between parental species must not preclude the initial hybridization. …
…
Our results suggest that homoploid hybrid speciation can occur readily and that any intrinsic incompatibilities in Saccharomyces can be overcome relatively easily … .
…
Recent studies have isolated fertile Saccharomyces hybrids in the laboratory and in nature. In this study, we showed that homoploid hybrid speciation occurs readily in laboratory populations of Saccharomyces, in contrast to all known animal species and most plant species. In part, this is due to the ability to autofertilize, which produces identical homologs in every chromosome pair (except at the mating-type locus on chromosome III) and thus avoids any incompatibilities that could arise by fusion with other gametes, even from the same parent. Autofertilization is thought to be relatively common in wild yeast, and it can also occur in other species with gametophytic selfing (e.g., protists, fungi, algae, ferns). Our results extend the range of known mechanisms that cause reproductive isolation. These act at different levels and in different taxa, but all may help produce new species. (Greig et al, 2002).
Unsurprisingly, the lack of clarity concerning species and speciation means that laboratory experiments of this kind receive a guarded, not to say sceptical, analysis:
Although it may be a little too early to conclude a historical analysis of recent decades, it may well be that, one day, this period of uncompromising dogma will be seen as the Dark Ages of speciation research. The allopatric paradigm was based on few facts, a lot of faith, and on paradigmatic despots ruling the field. And we haven’t yet reached the speciation Enlightenment. Anyone who tries to publish alternative speciation scenarios will, sooner or later, be confronted by medieval referees. Personally, I have a good collection of dismissive comments from such colleagues (Tautz, 2009)
Granted that speciation must involve mass, energy, and configuration, all as guided by the Weyl and Ricci tensors, then Diane Dodd’s classic 1988 experiment with Drosophila pseudoobscura is of more than passing interest to us:
According to the biological species concept, speciation is basically a problem of reproductive isolation. Of the many ways to classify isolating mechanisms, the two main divisions are premating isolation, in which mating is prevented from occurring, and postmating isolation, in which mating takes place, but viable, fertile offspring are not produced. There is much debate over which type of mechanism, premating or postmating, is most likely to develop first and how the isolation comes about.
In order to gain insight into the process of the development of reproductive isolation, eight populations of Drosophila pseudoobscura were studied (Dodd, 1989).
Although Dodd was examining the possibility for geographic or allopatric isolation, we can nevertheless see the influence of mass, number, and configuration energy in Figure 71. She divided her initial Drosophila pseudoobscura population into two subpopulations. She fed one on an entirely maltose-based diet; the other entirely on starch. After many generations, she tested them to see if the geographic isolation combined with the changed diet had led to preferential breeding. From our perspective, she was testing differences in configuration energy whilst trying to keep mass, generation length, and numbers as constant as possible.
Dodd discovered that her Drosophila pseudoobscura did indeed show stable premating preferences entirely caused by the difference in diet. Those raised on starch mated significantly preferentially with each other, as also did those raised on maltose. She concluded that allopatric isolation could indeed be a means of speciation. The time span was insufficient to establish a new species, but a Darwinian natural selection was nevertheless evident.
A similar real-world case is Rhagoletis pomonella, the apple maggot (Feder et al, 1988). The original maggot is an American native, breeding on hawthorns native to the USA. The males tend preferentially to look for mates, and the females tend preferentially to lay their eggs, on the same types of fruit they each reside on. Some 200 years ago US immigrants introduced apples from Europe. The males raised on hawthorn have therefore tended to mate preferentially with their hawthorn females, while in the subpopulation that has discovered apples, the apple males do the same with the apple females. Gene flow between the two host races has noticeably diminished as these preferences have taken hold, and genetic differences are evolving … although there is not as yet a true distinct species for they can currently still interbreed and produce viable offspring.
Although the three above cases of (a) Saccharomyces cerevisiae and S. paradoxus; (b) Drosophila pseudoobscura; and (c) Rhagoletis pomonella show changes in behaviours and values, it is good to be clear what their origin might be. As we learned from Karsai and Kampis, science does ‘require a general understanding of basic scientific notions and the nature of scientific inquiry’; and as Brown et al similarly stated ‘in this paper we have been concerned only with basic science, with developing a conceptual framework for ecology based on first principles of biology, physics, and chemistry’ (Karsai and Kampis, 2010; Brown et al, 2004).
Now that we have our Ricci and Weyl tensors, we must do as Einstein did and refer back to fundamentals: to mass, energy, space, and time. Light has both energy and momentum. When Einstein produced the general theory—incorporating both the Ricci and the Weyl tensors—he substantiated it by predicting exactly how much the light of a distant star would be affected by the sun’s gravitational field as it made its way to Earth.
The general theory insists that any light passing close enough to a large mass will be deflected by its gravitational field, changing its momentum. Since its trajectory will change, the shift should be detectable by any observer situated outside the field that the large mass creates. The amount of deflection will depend upon the intensity of the Ricci and Weyl tensors, and so upon the mass of the object, the intensity of its field, and how close to it the light passes. Therefore, and as in Figure 72, the light from a far-off star should be deflected as it passes the edge of the sun.
In the summer of 1919 Einstein’s prediction turned him into a world celebrity. The Royal Society of London announced that its scientific expedition in the Gulf of Guinea had been able to photograph the total eclipse that had taken place on May 29th that year. The team led by Sir Arthur Stanley Eddington verified Einstein’s predictions according to general relativity. Light had been bent to almost exactly the extent Einstein had predicted.
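The size of the deflection Einstein predicted can be checked with a back-of-the-envelope calculation. For a ray grazing a mass, general relativity gives the deflection δ = 4GM/(c²R). The sketch below uses standard solar and physical constants (not figures from this chapter) and recovers the famous 1.75 arcseconds:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
c = 2.998e8          # speed of light in vacuum, m/s
R_sun = 6.957e8      # solar radius, m

# General-relativistic deflection of a light ray grazing the sun's limb
delta_rad = 4 * G * M_sun / (c**2 * R_sun)
delta_arcsec = math.degrees(delta_rad) * 3600

print(round(delta_arcsec, 2))  # ≈ 1.75 arcseconds
```

This is the value, to within observational error, that the 1919 expedition confirmed.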
Mass and energy create curvatures in space and time. It is of no consequence whether the energy or the space is biological, gravitational, or electromagnetic. By the special theory of relativity, mass and energy are equivalent. Every mass affects the behaviour of all other masses in both space and time. A given body contains a stock of thermal, electromagnetic, thermonuclear and all other energies … including the biochemical. By the general theory, even so modest a massenergy system as the earth uses its Ricci tensor to create gravitational effects about itself and to warp spacetime according to the totality of its energies which then affects the Weyl tensor … and which Einstein predicted.
In 2007 NASA’s “Gravity Probe B Mission” was at last able to produce the results of its varied experiments to test Einstein’s predictions about even the modest earth’s ability to warp spacetime:
Gravity Probe B (GPB) is a NASA physics mission to experimentally investigate Albert Einstein’s 1916 general theory of relativity—his theory of gravity. GPB uses four spherical gyroscopes and a telescope, housed in a satellite orbiting 642 km (400 mi) above the Earth, to measure in a new way, and with unprecedented accuracy, two extraordinary effects predicted by the general theory of relativity (the second having never before been directly measured):
 The geodetic effect—the amount by which the Earth warps the local spacetime in which it resides.
The frame-dragging effect—the amount by which the rotating Earth drags its local spacetime around with it.
The GPB experiment tests these two effects by precisely measuring the precession (displacement) angles of the spin axes of the four gyros over the course of a year and comparing these experimental results with predictions from Einstein’s theory (Everitt & Parkinson, 2007)
Gravity Probe B measured the Earth’s gravitational effect at about 3×10⁻⁵ radians per year or 6.6 arcseconds per year. This may be an extremely modest effect, but firstly, it can be measured; and, secondly, it is still propagated outwards, in all its glory, to the entirety of the observable universe. Similar biological effects can therefore also be measured.
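As a quick check on that conversion, 6.6 arcseconds per year does indeed come to roughly 3×10⁻⁵ radians per year:

```python
import math

arcsec_per_year = 6.6  # Gravity Probe B's measured geodetic precession
# One arcsecond is 1/3600 of a degree; convert degrees to radians
radians_per_year = math.radians(arcsec_per_year / 3600)

print(radians_per_year)  # about 3.2e-05 radians per year
```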
We first consider a biological reading of the Ricci tensor. Biological populations must be moving under energy, and according to the conjoined sets of: (a) the Maxwell rules for the electromagnetic field; and (b) the laws of thermodynamics … and as particles, where by particles we mean measurable material manifestations of massenergy in spacetime.
We can in fact see populations as particles in collision with each other in a survey conducted on barnacles in Scotland, in 1961, by Joseph Connell. He studied the interactions between two species of barnacle, Chthamalus stellatus and Balanus balanoides (Begon and Mortimer, 1986). The adult C. stellatus live in an intertidal zone above that of B. balanoides. Their young nevertheless can, and sometimes do, settle in the B. balanoides zone … only to disappear. Connell sought to determine why. Were the C. stellatus adults somehow being excluded or suffering from some yet unobserved ecological detriment? Could they simply not live there? Or was there some other reason for their disappearance?
Connell next took a series of careful censuses. He then removed all Balanus balanoides from selected sites so he could compare the Chthamalus stellatus response when B. balanoides was both present and absent. He immediately noted that C. stellatus flourished in those locations where B. balanoides was absent. Alerted by his findings, Connell eventually observed direct competition between the two species. He saw B. balanoides smothering, undercutting, and crushing young C. stellatus. Greatest C. stellatus mortality occurred during the periods of greatest B. balanoides growth, with the few C. stellatus that survived being smaller and producing smaller offspring.
There are several different but mathematically valid ways of describing particles. This is what makes quantum physics possible. In order to resolve some of the problems that arose at the dawning of the quantum era, Bohr devised a quantum number system to describe the energy levels of interacting atoms. He allocated each electron a four-number array. Electron transitions from one energy level to another were now a transition from one Bohr array to another.
Heisenberg then realized that the particles were no longer relevant. Quantum theory—i.e. the theory of microscopic particles moving under energy and according to the conjoined sets of (a) the Maxwell rules for the electromagnetic field, and (b) the laws of thermodynamics—could instead be reduced to an exercise in abstraction. The rules that govern microscopic particles are the rules for manipulating the Bohr arrays. The real issue therefore now lies in determining the mathematical rules that govern the transformation of those arrays. Heisenberg had realized that this is now an exercise in matrices—a part of linear algebra.
With this understanding of arrays and matrices, we can soon arrange for the Chthamalus stellatus and Balanus balanoides barnacles to be exemplars of the Ricci tensor and particles that “collide”. Every population will have values for its mass, its numeracy, its work rate, its Wallace pressure, and its absolute generation length: M, Q, P, W, and Z. In the style of Bohr and Heisenberg, we set these values down in an array. We can create one for each of any two interacting populations … such as C. stellatus and B. balanoides.
In its initial state any given population will have given values for its properties. We can therefore construct an initial array for Population A in the form:
[MA,initial, QA,initial, PA,initial, WA,initial, ZA,initial, tA,initial];
and for Population B in the form
[MB,initial, QB,initial, PB,initial, WB,initial, ZB,initial, tB,initial].
If A and B interact, whether with each other or with the environment, there will be changes in these values or elements. After a specified time interval, we will be able to create two further arrays describing A and B’s subsequent states. The two final arrays will take the form:
[MA,final, QA,final, PA,final, WA,final, ZA,final, tA,final]
and
[MB,final, QB,final, PB,final, WB,final, ZB,final, tB,final]
respectively. Changes in our arrays and matrices will now be changes in the two populations, which are each moving—as our sac of molecules—in biological spacetime. They each have a velocity—a rate of change—in all components. Suitably composed equations, probability matrices, and indices—hyperbolic where necessary—will in principle detail their transformations. Two or more populations can now be looked on as individuated dynamical particles that collide, intersect, and interact on the basis of matter and energy. The equations and indices that describe the transformations can predict all future interactions between similar populations, or else between each population and its environment.
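As an illustrative sketch only—the numerical values below are invented placeholders, not measurements—the Bohr-style population arrays and their per-component velocities might be coded as:

```python
# Bohr-style state arrays for one population.
# Elements: [M (mass), Q (number), P (work rate), W (Wallace pressure),
#            Z (generation length), t (time)].  Values are illustrative only.
A_initial = [100.0, 50.0, 20.0, 5.0, 30.0, 0.0]
A_final   = [120.0, 55.0, 24.0, 6.0, 30.0, 10.0]

def rates_of_change(initial, final):
    """Per-component velocity of the population through its state space,
    computed over the elapsed time held in the arrays' last element."""
    dt = final[-1] - initial[-1]
    return [(f - i) / dt for f, i in zip(final[:-1], initial[:-1])]

print(rates_of_change(A_initial, A_final))  # [2.0, 0.5, 0.4, 0.1, 0.0]
```

A second population B would get the same treatment, and the two lists of rates then describe the “collision” between them.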
We now have the space and particle-like expressions we need to create a biological description for the Ricci tensor. We can even compare it and the movements it produces directly to the Weyl tensor and the spacetime this latter establishes … for our biological populations are also waves moving through the same biological spacetime.
In his ten-year study of the population dynamics of mammals, L. B. Keith demonstrated that biological populations are indeed interacting sets of waves. He studied Lepus americanus, the snowshoe hare, in its interaction with both its immediate predator, Lynx canadensis or the Canadian lynx, and its varied sources of vegetation, referred to as ‘woody browse’ (Begon and Mortimer, 1986; Smith, 1986). L. americanus inhabits a core, foundation area in its woody browse. It would gradually increase its consumption of this available browse in its woodland areas, and population size would increase. The hares would increasingly inhabit less favourable areas in which the woody browse was sparser. This would increase their exposure to their principal predator, L. canadensis. Thanks to the same population increase, the browse would also become overutilized. This not only reduced the food source, but facilitated the lynx predation. The browse overutilization and increased predation intensity would then lead to a population collapse. L. americanus would retreat back to its core areas of relatively high-density vegetation, where it remained relatively immune from predation. The woody browse would then begin to recover … whereupon L. americanus would set off on another increase phase and so this interrelated cycle would repeat.
Waves, like matrices, fall within linear algebra. The standard wave equation is v = νλ, which is now easy to replicate. Our wavelength, λ, is the absolute generation length, Z. Biological frequency, ν, is the rate of operations involving mass and matter. This is the mass flux, M. The amplitude is then the energy flux, P, which is the combination of the number density, N, at any given time, and the engeny, S.
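A minimal sketch of that identification, using invented placeholder values for the generation length and mass flux:

```python
# Biological wave relation v = ν λ, with the text's identifications:
# wavelength λ = absolute generation length Z; frequency ν = mass flux.
# The numbers below are illustrative placeholders, not measured values.
Z = 30.0       # absolute generation length (the wavelength), in days
mass_flux = 0.5  # rate of operations involving mass (the frequency)

v = mass_flux * Z  # propagation rate of the population "wave"
print(v)  # 15.0
```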
If a wave equation linking a wavelength, a rate of propagation, and an amplitude is possible, then a wave function of the general form Φ(x,y,z, …, t) is also and immediately possible. We can soon construct one of the type Φ(Z, Q, m̅, P, t) for every population, and that is a complete description of its possibilities and capabilities. Should two populations be similar, then their wave functions will also be similar, and conversely. And since the principle of linear superposition is inherent in all wave functions, then all ecological effects—such as for Lepus americanus, Lynx canadensis and the woody browse—can be studied through the superposition of their wave functions.
By the principle of superposition, given two populations A and B, and letting any other variable of interest be X, if their initial states at time t are:
ΦA(ZA,initial, QA,initial, m̅A,initial, PA,initial, XA,initial)
and
ΦB(ZB,initial, QB,initial, m̅B,initial, PB,initial, XB,initial)
respectively, their joint interactions at that same t will be:
ΦAB((ZA,initial + ZB,initial), (QA,initial + QB,initial), (m̅A,initial + m̅B,initial), (PA,initial + PB,initial), (XA,initial + XB,initial)).
After their interactions, whether with each other or each independently with the environment, their final wave functions will be:
ΦA(ZA,final, QA,final, m̅A,final, PA,final, XA,final)
and
ΦB(ZB,final, QB,final, m̅B,final, PB,final, XB,final)
for the independent populations; and
ΦAB((ZA,final + ZB,final), (QA,final + QB,final), (m̅A,final + m̅B,final), (PA,final + PB,final), (XA,final + XB,final))
for their interaction. Populations can therefore be classified by their similarities in Φ; and the evolutionary history of populations will consist, equally well, of their changes in Φ over historical time, for these will be changes in m̅, P, Q and Z, the distinguishing features of any and all populations.
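The superposition rule above is simple component-wise addition, which can be sketched as follows (all values are illustrative placeholders):

```python
# Linear superposition of two population wave functions: the joint state
# Φ_AB is the component-wise sum of Φ_A and Φ_B over (Z, Q, m̅, P, X).
phi_A = {"Z": 30.0, "Q": 50.0, "m_bar": 2.0, "P": 20.0, "X": 1.0}
phi_B = {"Z": 12.0, "Q": 80.0, "m_bar": 0.5, "P": 35.0, "X": 2.0}

# Component-wise addition, as in the Φ_AB expressions above
phi_AB = {k: phi_A[k] + phi_B[k] for k in phi_A}

print(phi_AB)  # {'Z': 42.0, 'Q': 130.0, 'm_bar': 2.5, 'P': 55.0, 'X': 3.0}
```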
We have already proven that every individual biological organism is a limit point within a vector field. Each one swirls mass and energy about itself. We demonstrated this by experiment. We planted four seeds—i.e. four biological limit points—per pot for the first generation of our Brassica rapa experiment, and observed each swirl the resources it needed closer to itself. Each used work and heat to bring those resources to rest, with respect to itself, before it could use them to grow, develop, and ultimately reproduce.
Since every plant was in a different location, the principles of relativity theory tell us that the energies they expended in work and heat bringing masses and energies to rest with respect to themselves cannot have been the same. Furthermore, a point at the tip of one plant rootlet might be “only” one centimetre away from the similar point at the tip of another rootlet on the same plant, but the “true distance” between these rootlets is the amount of time light would spend covering it when moving in free space. The energies expended by bringing masses to rest in those inertial frames cannot, again, have been the same.
Brassica rapa must also take in substances and energy from the environment, through its mass and energy fluxes, as it proceeds with its cycle. More strictly biochemical issues are therefore also critical in this regard. But suppose a given substance has the chemical formula C4H4. Its rest mass is immediately 52 grams per mole. Those molecular rest masses will not, however, be the only contributions to energy densities. Since six different compounds share that same formula, six different rootlets could well use the same rest masses to form six different substances, one per tip. And since their chemical bondings are all different, they all contain different energies, which we must also factor in.
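The 52 grams per mole figure follows directly from standard atomic weights:

```python
# Molar mass of C4H4 from standard atomic weights (g/mol):
# four carbons at 12.011 plus four hydrogens at 1.008
molar_mass = 4 * 12.011 + 4 * 1.008

print(round(molar_mass, 2))  # 52.08 g/mol, the ~52 cited in the text
```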
If six rootlets take on different substances then light will travel very different distances for each of them in each unit of time. Tetrahedrane, for example, forms a tetrahedral structure with single bonds all about the carbon; butatriene is a cumulene with three consecutive double bonds linking its four carbons; and cyclobutyne contains a triple carbon bond.
We already know how to calculate the differences in the paths of work and heat—δW and δQ—the plants will follow to take on these different compounds. But by the theory of relativity, these energy densities cannot have the same measure for all entities or populations.
We then have to add to all this the issue of generation lengths. An arbitrary time period of t = 4 days represents an entire generation length for some species of mosquito; only 0.2 of a generation length for our representative mosquito species; and only 0.00035 of a generation length for a whale. Generation lengths for Brassica rapa, in our experiment, varied between 28 and 44 days. Light therefore travels roughly one-and-a-half times further over the longer generation than it does over the shorter one. Biological populations and entities may go through the same sequence of events, but they certainly do not do so at the same pace.
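These fractions and the Brassica rapa ratio can be verified directly. The generation lengths below are simply those the text's fractions imply (4 days, 20 days, and roughly 11,400 days), not independent data:

```python
t = 4.0  # the arbitrary time period, in days

# Generation lengths (days) implied by the fractions quoted in the text
generation_lengths = {
    "short-lived mosquito": 4.0,        # t is one full generation
    "representative mosquito": 20.0,    # implied by the fraction 0.2
    "whale": 4.0 / 0.00035,             # implied by the fraction 0.00035
}

fractions = {name: t / Z for name, Z in generation_lengths.items()}
print(fractions)

# Ratio of longest to shortest observed Brassica rapa generation: 44 vs 28 days
ratio = 44 / 28
print(round(ratio, 2))  # ≈ 1.57, roughly one-and-a-half times
```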
If we remain “in the same spot” on earth; throw a ball gently up in the air; and then catch it back in our hands again; it seems to “go straight up and down”. But we know, on reflection, that this cannot be the whole story. The earth is in orbit about the sun. The ball therefore moves with the earth and “actually” describes a parabola. But upon further reflection, the sun is itself in motion through the galaxy, and so the path the ball “actually” travels must be further reconsidered with respect to the sun. In the end, the only thing we can be sure of is this: while the ball was up in the air, and since the speed of light in a vacuum is a universal constant, then no matter how much the ball might have moved relative to any other body, light travelled exactly the distance it should have travelled, in free space, with respect to all of them. It is ultimately the only thing they all hold in common.
The biological cycle is also measured by light. As a statement of energy, it is driven by biological potential, μ, which is a combination of: (a) the engenetic burden of fertility, φ; (b) the engenetic burden of components mass, κ; and (c) the engenetic burden of conformation, χ. There are no other variables.
Figure 73 is the Franklin cycle, the defining standard for biology. We let the engenetic burden of fertility, φ, be unity throughout, meaning there is no change in numbers. There are four equally placed epochs: TI, TII, TIII and TIV, whose durations are ti, tii, tiii and tiv.
As in Figures 50, 51 and 52 of the Liouville theorem, the first half of the Franklin cycle is TI to TIII in Figure 73. It has the duration ti + tii and consists of Quadrants 1 and 2. The biological potential, μ = dS, is constantly positive. There is a continuous increase in divergences and curls, and in the mass and energy fluxes. This is because either the engenetic burden of components mass, κ, increases, and/or because the engenetic burden of conformation, χ, increases.
In the ideal case, Quadrant 1—from TI to TII and for duration ti—contains Stage I. Mechanical chemical energy is used exclusively. In this ideal case the work rate, W, in watts per kilogramme, remains constant, and dW/dt = 0. The total number of components increases while their form, style and configuration remain constant. Since the entity size must be increasing, this stage is called “growth”:
dp̅/dt = dm̅/dt + dW/dt; dm̅/dt > dW/dt ≥ 0.
Quadrant 2—from TII to TIII and for duration tii—contains Stage II. In the ideal case the conditions ruling in the prior stage are exchanged. The number of components, m̅, now holds constant, dm̅/dt = 0, while the work rate increases, dW/dt > 0. The entities must therefore manipulate their chemical bond energies internally and change configurations. Since chemical energies are increased, this is called “development”:
dp̅/dt = dm̅/dt + dW/dt; dW/dt > dm̅/dt ≥ 0.
Quadrant 3—from TIII to TIV and for duration tiii—covers Stage III. This is distinguished by the logical and energetic necessity of containing the simultaneous physical presence of both progenitors and their progeny. The engenetic burden of components mass, κ, decreases. The average number of chemical components held per entity decreases and both the mass and energy fluxes decrease. This is a negative divergence or convergence as the escaping tendency dominates the capturing one. Energy is lost as components and their bonds are lost, and the average individual mass over the population decreases. The smaller progeny appear and/or are left behind. As m̅ decreases, the engenetic burden of conformation—and so the chemical configuration—remains in its higher energy condition throughout so that the work rate, W, holds constant in its higher energy condition and dW/dt = 0. This is Stage III of “reproduction”:
dp̅/dt = dm̅/dt + dW/dt; dm̅/dt < dW/dt ≤ 0.
Quadrant 4—from TIV to TI and for duration tiv—contains Stage IV, the final stage of the Franklin cycle. The energy loss continues as the escaping tendency continues to dominate. The progenitors need not be physically present in this fourth recipience stage, but they are obliged to leave behind them a set of higher energy chemical bonds that the progeny receive and can then exploit before they can in their turn interact directly with the environment. Energy is lost as it is degraded, according to the second law of thermodynamics, while sperm swim, eggs polarize, fruits are discarded, and seeds use the resources provided them by endosperm to germinate and so forth. In the ideal case, dm̅/dt = 0. Since the progeny must receive energy from their progenitors so that, overall, dW/dt < 0, this stage is called “recipience”:
dp̅/dt = dm̅/dt + dW/dt; dW/dt < dm̅/dt ≤ 0.
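The ideal-case sign conditions for the four stages can be collected into a single classifier, sketched here with the hypothetical helper franklin_stage (the stage names and inequalities are exactly those given above):

```python
def franklin_stage(dm_dt, dW_dt):
    """Classify the ideal-case Franklin-cycle stage from the signs of
    dm̄/dt (components mass) and dW/dt (work rate)."""
    if dm_dt > dW_dt >= 0:
        return "I: growth"          # components increase, work rate constant
    if dW_dt > dm_dt >= 0:
        return "II: development"    # work rate increases, components constant
    if dm_dt < dW_dt <= 0:
        return "III: reproduction"  # components decrease, work rate constant
    if dW_dt < dm_dt <= 0:
        return "IV: recipience"     # work rate decreases, components constant
    return "outside the ideal case"

print(franklin_stage(1.0, 0.0))   # I: growth
print(franklin_stage(0.0, 1.0))   # II: development
print(franklin_stage(-1.0, 0.0))  # III: reproduction
print(franklin_stage(0.0, -1.0))  # IV: recipience
```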
Biological potential, μ, is positive throughout the first two stages. The energy and biological inertia constantly increase and the capturing tendency is consistently greater than the escaping tendency. Since all entities engage directly with the environment; undertake independent activities; and devote all masses and energies entirely to their own purposes; then these two phases together are called the “atmena phase”, taken from the Sanskrit, and meaning ‘for one’s self’ (it is pronounced ‘aatmayna’ and not *aatmeena or *atmeena).
Stages III and IV—TIII to TI in Figure 73, and having the duration tiii + tiv—see biological potential instead turn negative. The escaping tendency now dominates over, and threatens to suborn, the capturing one. The mass and energy fluxes decrease as either the engenetic burden of components mass, κ, and/or the engenetic burden of conformation, χ, decrease. Since the progeny that appear do not yet have independent sources of mass or energy, then these latter two processes are called the “parasmai phase”, also taken from the Sanskrit, and meaning ‘for another’ (it is pronounced ‘parasmy’ and not *parasmay).
The special and the general theories of relativity describe all movements of mass and energy in space and over time. This includes those that are intrinsically biological. All biological entities have their four dimensions in spacetime. The Franklin cycle is responsible for all biological activity in both past and future light cones. It arises from the three dimensions in our function f(n, q̅, w̅). They are orthogonal and have an orthonormal basis. They are, in other words, exactly like the three dimensions of space. Once we add generation length as a fourth, there are no other variables active on any populations.
The special theory tells us that all bodies apparently made of mass—including the biological—are made of energy. The general theory then tells us that in being concentrations of energy, even biological entities shall bend and curve spacetime about themselves, no matter how modestly they may do so. And … a biological entity bending space and time in its favour with the Franklin cycle is the very definition of competition. Massenergy is always either increasing or decreasing, and is never stationary. We must eventually give a value for this intrinsic energy density.
Determining a metric to discuss the bendings of mass, energy, space, and time, biological or otherwise, is always made difficult by the fact that we are prisoners of the local space of earth, with its rather modest gravitational forces and accelerations. In Figure 74 a spirit level, used locally, in fact stretches indefinitely out into free space on either side. The surface it is placed on curves gently away. But on a local or infinitesimal scale, the earth is flat enough for us not to notice this curvature. A “straight line” also has a fairly clear intuitive meaning that seems self-evident in the space we live in. It is “the shortest distance” between two points. However, intuition breaks down when we try to realize that concept on a curved surface—which is any spacetime containing mass and gravity.
As on the right in Figure 74, a straight line is not simply the “shortest distance” between two points. The ideal Euclidean line, unlike any real line, may go “straight northsouth”, but it is also without deviation “eastwest”. We therefore curve—zero, in this case—“with the surface”. Had we inadvertently gone in some other direction, as in the curvilinear track beside it, we would also have produced a longer line and we would not have been going “with the surface”. A straight line adds nothing, in any other direction, to what is intrinsic to the surface it is drawn on.
This idea of travelling by curving “with” the surface allows us to define a similar “shortest distance” on the globe upon the left in Figure 74. Each of the sailing ships sails on a line of longitude. This is the line of shortest length for that surface, and is called a “geodesic”. It is “the quickest way” to travel the shortest distance between two points.
We now need to take care, conceptually, about the globe’s lines of latitude. If we imagine a magnetic north pole in the graphic on the right; and if we now try to move east-to-west; we will be drawn northwards or southwards, for that is the surface’s “natural behaviour”. If we now consider the earth, it is indeed magnetic, and if we try to move east or west, we are immediately drawn northwards or southwards as a sign of the fact that we are turning with respect to the natural “lie” of the surface, and so we are not taking the shortest distance.
The issue is a little clearer if we try to walk along a line of latitude close to—and so circularly around—the north pole. We must now always “turn” with respect to the surface. We can detect this because the pole is always on our left or right and we can feel ourselves going round and round.
Since in the above case, the pole is always to one side, then we cannot be going “straight” on that surface. If we now abandon the latitude and follow the geodesic, we do not turn in any direction upon the surface as we now approach the pole. It is neither on right nor on left, and we are not curving. Therefore: a line of latitude is not a geodesic with respect to the curvature of this surface. It is the way to take a longer path. There is always a shorter path for that surface. That shorter path incorporates the curves of geodesics at every point.
The geodesic is the “Riemannian circle” or “great circle” whose centre coincides with the sphere’s centre. A unique Riemannian circle can always be drawn through any two points on a sphere that are not directly opposite each other. The length of the shorter of the arcs through those two points is then the “orthodromic” or “great-circle” distance between them. It is the shortest distance between them upon the surface, and is the course followed by airplanes and boats travelling around the globe. As with the curvilinear arc on the right in Figure 74, the resulting movement may “look” longer than a similar “straight line” drawn on its flat Euclidean representation, but it is shorter when undertaken down on the surface. It is shorter by a computable magnitude.
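The orthodromic distance just defined can be computed with the standard haversine formula; a minimal sketch, on the idealizing assumption of a perfectly spherical earth:

```python
import math

def great_circle_distance(lat1, lon1, lat2, lon2, radius=6371.0):
    """Orthodromic (great-circle) distance between two points on a sphere.

    Latitudes and longitudes in degrees; radius in the desired length unit
    (default: mean earth radius in kilometres).
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine formula: numerically stable even for small separations.
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

# A quarter of a great circle: from the north pole to the equator along a meridian.
quarter = great_circle_distance(90.0, 0.0, 0.0, 0.0)
```

Any two points not directly opposite each other give a unique shortest arc; antipodal points, as the text notes, do not single out one great circle, though the distance itself is still half the circle.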
The important idea associated with geodesics is the natural or ‘freely-falling’ path. A geodesic is the path a body follows, with respect to that curvature, when completely free from all forces. In the case of general relativity, this means all gravitational forces. Gravity in fact stops being a “force” and becomes simply the curvature of spacetime. Thus a released ball falls to earth because it follows the natural path of the spacetime curvature around the earth, whose curved geodesic we observe as the ball’s ‘straight-line’ fall. If the ball moved sideways while falling, it would be being influenced by a “line of latitude”: it would be taking a longer path, and we would immediately look for the force affecting it. All freely falling objects follow the geodesic. We must eventually define a biological geodesic.
It is still fairly clear from Figure 74 that even though the two ships are each upon a geodesic, and so are each pursuing a straight-line path as properly defined, they are somehow “getting closer together” and “further apart” as they each move upon their parallel or shortest-length lines upon this surface. We must be able to track those differences in their transports: the different distances and angles.
The globe of the earth may be curved, but a sufficiently small and local subset such as immediately surrounds us, and that we find ourselves living in on a daily basis, is to all intents and purposes flat. It gives every impression of being purely Euclidean. A “manifold” is a geometrical object rather like this: at any given local and infinitesimal point it is capable of containing smaller spaces, slices, or locales that resemble the far more familiar Euclidean spaces.
We still need to measure the lengths and angles between our ships, never mind the variations and properties of the biological entities that are soon going to dispose themselves on this surface. We still need coordinates. We still need a guarantee that the surface can safely and accurately measure the properties we are used to. A “Riemannian manifold”—such as we have suggested on the torus or doughnut in Figure 75—has the needed “Riemannian geometry”.
A Riemannian manifold allows the familiar operations of measuring distances, finding areas and volumes, taking tangents and normals, and of integrating and differentiating and so forth. “Tangent spaces” can be taken so that such things as spirit levels can be accurately positioned locally. Everywhere, locally, we can take our measurements in a space just like this earthbound one. There is always a smooth and continuous variation from point to point, thus allowing measures such as angles, lengths, areas, volumes, gradients and even curls and divergences to be computed for that curvature, in spite of the locally flat and Euclidean impression. So just as we can gradually compute the dimensions of the globe of earth by taking a sufficient number of local measures, so also can we gradually detect the true nature of a Riemannian surface such as this one. The coordinates and the distances between them may flex, bend, and vary—as longitudes and latitudes do with our two ships—but all measures are complete and accurate … once we apply a suitable metric.
We can … compose tiny adages to characterize this structure: “Locally, the motion of a falling object in spacetime is Newtonian, its path is Euclidean, and its coordinates are Cartesian. Globally, however, its spacetime is neither Newtonian nor Euclidean nor Cartesian; it is curved.” Recalling that many of the concepts about curved spacetime stem from Riemann’s work, we can claim that “gravitational motion is locally Newtonian, its path is locally Euclidean, its spacetime is Riemannian, its geodesic is Hamiltonian, and therefore its global behaviour is Einsteinian.” Similarly, we summarize that “Euclidean and Cartesian space is three-dimensional and flat. Gaussian space is two-dimensional and curved. Minkowskian space is four-dimensional and flat. Riemannian space is any-dimensional (including four) and curved. Einsteinian space is four-dimensional and curved by the presence of mass and energy.”
… Qualities such as straightness, uniformity, and parallelism are the same everywhere, but their manifestations in our experience are recast by the geometry in which we find them (Jagerman, 2001).
Now that we have a Riemannian manifold, we can easily measure a biological population by, for example, regarding n as the torus’ overall girth. When population size increases and n gets bigger, the whole torus grows; when population size decreases, it shrinks. We could additionally measure average individual mass, m̅, as the vertical height inside the torus. It therefore shrinks and grows vertically as the biological population uses its mechanical chemical energy to take up and relinquish chemical components. Together with n, it then tells us the mass flux at any time. Furthermore, we could set V, the visible presence or energy density, as the torus’ internal horizontal breadth. When that cross-section broadens, energy density increases, and conversely. We therefore also have the energy flux. And, finally, the absolute generation length could easily be the rate or pace at which the biological entities go circularly around a torus cross-section, reaching the different extrema according to their relative generation lengths. They can speed up and slow down through each phase of the Franklin cycle, and we can easily measure their rates. We always have minimum and maximum values. This Riemannian manifold will now measure any properties we wish, and track any variables, combinations, and changes over as many dimensions as desired, no matter how we allocate them to whatever imaginary object or axes we choose. We already have our biotrails, biopaths, enbranes and so forth … and we now have a way to measure them. We only need a suitable metric tensor.
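The torus mapping above can be mirrored in a simple record; a minimal sketch, in which the class and its toy flux helper are hypothetical illustrations rather than part of the theory’s formal apparatus:

```python
from dataclasses import dataclass

@dataclass
class TorusState:
    """Population state mapped onto torus dimensions, as described in the text."""
    n: float      # population size -> the torus' overall girth
    m_bar: float  # average individual mass -> vertical height inside the torus
    V: float      # visible presence / energy density -> internal horizontal breadth

    def mass_flux(self, prev: "TorusState", dt: float) -> float:
        """Rate of change of total mass M = n * m_bar between two snapshots."""
        return (self.n * self.m_bar - prev.n * prev.m_bar) / dt

# Two illustrative snapshots one time unit apart.
a = TorusState(n=100.0, m_bar=2.0, V=5.0)
b = TorusState(n=110.0, m_bar=2.5, V=5.5)
flux = b.mass_flux(a, dt=1.0)   # (110 * 2.5 - 100 * 2.0) / 1.0
```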
The Haeckel, Darwin, Mendel and Gibbs potentials—our four evolutionary potentials of ηH, ηD, ηM and ηG respectively—define both evolution and reproductive accessibility and inaccessibility. They state a given population’s coordinate on the biological hypersurface of the present. The complete evolutionary specification for any population will then be:
η = ηH + ηM + ηD + ηG.
No two populations can have the same four values without being made of the same chemical components; arranged in the same way; distributed over the same numbers of entities; who then do the same work at the same rate; and over the same periods of time … all without also being held in the same reproductive cone, and therefore being members of the same species. They will in other words have the same DNA, which is the same Weyl tensor. We do not have to concern ourselves with traits.
If we refer back to the Haeckel potential, the information we get from the absolute generation length, Z, is incomplete. It does not adequately reflect that different organisms adopt different strategies in the face of losses imposed by the environment. Although it is true that a population of organisms taking 20 minutes to complete its cycle must be working at a very different rate from a blue whale that needs 31 years to do the same thing, time span cannot be the sole distinguishing criterion. Ants and elephants differ at least in part because ants reproduce much earlier in their cycle than do elephants. Quite apart from their generation lengths, this is a very different use of energy and resources.
The ideal Franklin cycle sets the standard. We now define the ideal case as one where the transition from atmena to parasmai occurs 50% of the way through the cycle. In other words, for 50% of the cycle dP/dt is positive, and for 50% of the cycle it is negative. This is again simple and objective.
We can then further divide each of the atmena and parasmai phases into two. We have growth and development in the former, and reproduction and recipience in the latter. Mechanical chemical energy dominates in growth and reproduction, being positive in the former and negative in the latter, while non-mechanical chemical energy dominates in development and recipience, again being positive in the former and negative in the latter. Each of those four stages now lasts 25% of the absolute generation length. This gives a relative length for each stage of ti = tii = tiii = tiv = 0.25.
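The four-stage division can be stated numerically; a minimal sketch, in which the dictionary and helper are illustrative only (the stage names follow the text):

```python
# Ideal Franklin cycle: four stages, each 25% of the absolute generation length.
STAGES = {"growth": 0.25, "development": 0.25, "reproduction": 0.25, "recipience": 0.25}

def stage_durations(T: float) -> dict:
    """Absolute duration of each stage for a generation length T (same units as T)."""
    return {name: frac * T for name, frac in STAGES.items()}

# dP/dt is positive through atmena (growth + development) for 50% of the cycle,
# and negative through parasmai (reproduction + recipience) for the other 50%.
atmena = STAGES["growth"] + STAGES["development"]
parasmai = STAGES["reproduction"] + STAGES["recipience"]
```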
The general theory states that spacetime is not uniform everywhere. There are greater concentrations of mass-energy at some points than at others. So let Populations A and B be unit engenetic equilibrium populations with identical generation lengths. If A now acquires mass, numbers, and/or energy more briskly at some points relative to B, then A must move correspondingly more slowly and sedately at other points, again relative to B, to make up the discrepancy so that their overall generation lengths, T, remain the same. We must quantify such deviations—these relative distributions—across the four stages so that the Haeckel potential is complete.
Since the reference Franklin population takes on and gives off energy uniformly across the whole cycle, its rates increase uniformly from any minimum to a maximum, and then reverse back to the minimum. The deviation from a normal distribution of mass-energy is unity. The full Haeckel potential is thus ηH = 1 / 1.
Brassica rapa does not distribute its rates of mass and energy uniformly across its cycle. Our experiment shows that it takes only 9 days—25% of the cycle—to acquire all the mass and energy it needs to reproduce. It therefore has accelerated rates, relative to the reference, for this first part. It then spends a relatively leisurely 27 days—or 75% of the cycle—in its post-reproductive states while, for example, endosperm is consumed and the seedlings of the next generation form. It therefore has decreased rates, again relative to the reference. The accelerated first portion gives it a ‘trace’ or deviation of 0.625. Any other population or species with this same value has this same energy profile and this same energy distribution of relative rates for increases and decreases in mass-energy. B. rapa’s full Haeckel profile is therefore:
ηH = 3,110.4 × 0.625.
This is a scalar. More than that, it is based strictly upon energy. This statement of the Haeckel potential is therefore a big theoretical advance. Somewhat like a light year, it tells us both (a) the quantity of energy Brassica rapa possesses relative to the reference; and (b) that energy’s distribution across spacetime.
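The figures above can be reproduced arithmetically; a minimal sketch, on our own assumption (for illustration only) that the leading factor 3,110.4 is the 36-day generation length expressed in units of 10³ seconds:

```python
# Brassica rapa's Haeckel potential as stated in the text.
# Assumption (ours, for illustration): 3,110.4 is the 36-day generation
# length in kiloseconds; 0.625 is the stated trace or deviation from the
# uniform reference distribution.
SECONDS_PER_DAY = 86_400

generation_days = 9 + 27                                   # accelerated part + leisurely part
Z_kiloseconds = generation_days * SECONDS_PER_DAY / 1_000  # 3110.4
trace = 0.625
eta_H = Z_kiloseconds * trace
```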
We are now ready for the most powerful and elegant of all equations in physics, the Einstein field equations, which govern all mass and energy, and the curvature of energy, space, and time … even the biological. A first version is:
Gμν = 8π Tμν.
The Gμν is the Einstein tensor, a gravitationally relevant manifestation of the stress-energy tensor in Table 7.
The stress-energy tensor tells us all about the curvature—and therefore the behaviour of space, time, and all objects—on the Riemannian manifold we are studying. It tells us how the energies and masses are fluxing through the spacetime we have created with our manifold. The indices μ and ν refer to the various dimensions in that space. Most generally, these are x, y, z and t for the three dimensions of space plus time as the fourth. The convention is for the indices to range from 0 to 3, with 0 being time and 1, 2 and 3 being the x, y and z axes respectively. But in our case, for biology, they are taken from f(n, q̅, w̅). We can measure these as the momentum densities and fluxes of numeracy, mass, and chemical configuration—or Q, M and P—in biomoles per second, kilogrammes per second, and watts, as they are made manifest in the spacetime created. They also give us the values N, u̅ and V for our Euler and Gibbs–Duhem equations.
The T00 in the stress-energy tensor refers to the properties that hold where “time” and “time” intersect. Since this is “pure time”, it is the entire energy density holding at that moment, while moving timelike and deeper into time. It is the relativistic energy density of E = mc², its quantity, and its rate of change. In our case, it states all the energies of both all the chemical bonds and all the rest masses of all the molecules throughout our population, at that time, as they again manifest materially in the spacetime that those energies create. This is how our sac of molecules or dust is currently configured.
The rest of the stress-energy tensor’s top row then ranges over T01, T02, and T03. This tells us how the total relativistic energy is fluxing through each of the three individual dimensions of x, y, and z … or, in our case, numbers in the population, average individual mass, and chemical configurations or binding energy, in respect of those objects that exist in the relevant light cone on the hyperplane of the present.
The left-hand column ranges over T10, T20, and T30. This tells us how everything capable of having a rest mass, and thus of eliciting some kind of material momentum in the three relevant dimensions, is fluxing or moving materially over or through those same three dimensions. In our case, this tells us (a) how many entities there must be, n, in the material number flux of Q biomoles per second; (b) the number of chemical components they must each have, which is both q̅ and m̅, with the latter being measurable as the mass flux of M kilogrammes per second; and (c) the specific chemical bond energies they must muster between their many components, V, or else its inverse dynamic time expression of specific energy, W.
In Figure 76, an initially spherical object hangs above a large mass. Since its top and bottom ends are at different heights, they each feel a different gravitational force. The stronger pull on the bottom makes it accelerate downwards more rapidly than the top. The effect can only then increase, and the object is distorted to become longer. A black hole is powerful enough to tear any approaching object apart in this way. We therefore need to measure this effect.
The three-by-three matrix of nine elements in the lower right corner of the stress-energy tensor now gives more specific information on the flows of material and energy, given the shape of the surface and the tendency for that surface to incline in various directions. If, for example, someone is standing on level ground in a normal Euclidean space, the top of his or her head points directly upwards into space. But if he or she is standing on a mountainside, the head is slightly inclined east-west and/or north-south, and is no longer pointing vertically upwards. Much like a head oriented vertically straight upwards, the T11 element tells us how much of the mass and energy that should be flowing in that specific direction is actually flowing materially in that direction, and not in some other.
Gravity always acts straight towards the centre of any mass. Geodesics therefore narrow as an object falls … much as two ships on a globe get closer together as they each approach a pole. Thus, as well as stretching out longitudinally, a falling object loses girth latitudinally. These two are the shear forces felt by any object in a field of this kind, and they are evident at T12 and T13. Those two elements state how much of the materials and energy that “should” be going in a given direction are instead favouring one or another of the other two directions.
The element T11, where the indices are the same, is now the main ‘pressure’ or force for its property or direction. It is where that component “stands vertically”, with its mass and energy “pointing in the direction it should”. Given our biological space, it tells us how much matter and energy is being put into population number density. The remaining two elements, T12 and T13, tell us how much of this number density energy is “leaning”. They are the measure of the mass-energy that “should” have been allocated to number density but that is instead being expended on “leaning” into (a) acquiring components; and (b) modifying chemical configurations. There is one such “pure directional” source of mass-energy pressure to be found on each row, and for each property, where the two indices for mass and energy respectively are the same. The other two are of course T22 and T33. The three lie on the diagonal shown in Table 7.
The second row in the 3 × 3 matrix now tells us about the energy—or, more strictly, relativistic mass-energy—spent on acquiring and relinquishing chemical components, and so on increasing and decreasing the mass flux. The two outermost elements T21 and T23, where the indices differ, tell us how much of the mass-energy density that should have been devoted to mass fluxing has instead leaned over, or been diverted, into (a) number density energy; and (b) chemical configurations energy. Element T22, where the indices are the same, then states the main chemical components flux or mass-energy pressure—the amount of materials and energy that should have been devoted to the acquisition of chemical components that has actually been devoted to that acquisition, and that has not “leaned” elsewhere. A similar analysis then holds for T31, T32 and T33, with T33 being the full chemical configuration pressure. And since we have—from our Euler and Gibbs–Duhem equations—the necessary figures and values for every population and at every moment, we now have a complete biological record. The sum of T11, T22, and T33 is of course the essential development we have already met. Our task now is to identify and quantify it, but strictly from this relativistic perspective.
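The layout of the stress-energy tensor described above can be sketched concretely; a minimal illustration with made-up numbers (the axis labels follow the text’s convention; none of the values are measured data):

```python
import numpy as np

# Index convention from the text: 0 = time, 1 = number density (n),
# 2 = chemical components / mass (m_bar), 3 = chemical configuration (V).
AXES = ("t", "n", "m", "v")

# A sample symmetric stress-energy tensor; the numbers are illustrative only.
T = np.array([
    [10.0, 1.0, 2.0, 3.0],   # T00 energy density; T01..T03 energy fluxes
    [ 1.0, 4.0, 0.5, 0.2],   # T10 momentum density; T11 "pure" number pressure
    [ 2.0, 0.5, 5.0, 0.3],   # T22 "pure" mass-flux pressure
    [ 3.0, 0.2, 0.3, 6.0],   # T33 "pure" configuration pressure
])

# The diagonal "pure directional" pressures of the lower-right 3x3 block...
pressures = {AXES[i]: T[i, i] for i in (1, 2, 3)}
# ...its off-diagonal "leaning" shear terms...
shears = T[1:, 1:] - np.diag(np.diag(T[1:, 1:]))
# ...and the sum T11 + T22 + T33 that the text identifies with development.
trace_3x3 = np.trace(T[1:, 1:])
```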
The Einstein tensor can also be stated as:
Gμν = Rμν − (½ • Rgμν)
where Rμν is the Ricci tensor, R is the “Ricci scalar”, and gμν is the metric tensor. The Ricci tensor, Rμν, may tell us the overall volume of our floating bag of molecules, but it does not indicate the bag’s shape. We do not know exactly where it is situated, and nor do we have figures for its size. We do not know how the coordinates are changing. The second R (lacking any indices) is the Ricci scalar, also known as the “trace” or “contraction” of the Ricci tensor. It gives us the missing information. It tells us how much the shape has altered away from that expected for a standard ball in Euclidean space. It states how the volume of a geodesic ball is curved—its longitudinal and latitudinal deformations—as the axes change at and for each location, and so cause distortions.
Finally, the gμν is the metric tensor. It gives the precise metric description of that space: how it shears and moves; how far distances reach in each direction; how volumes are constructed density-wise; and so forth. It also includes notions of time, and therefore rates, and declares how distances should be computed and how rapidly they can be traversed. It defines the space that leads to the stress-energy tensor above.
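The contraction Gμν = Rμν − ½Rgμν can be checked numerically; a small sketch using the unit 2-sphere, chosen only because its Ricci tensor is known to equal its metric, which makes the result easy to verify (in two dimensions the Einstein tensor vanishes identically):

```python
import numpy as np

def einstein_tensor(ricci: np.ndarray, metric: np.ndarray) -> np.ndarray:
    """G_mn = R_mn - (1/2) R g_mn, with the Ricci scalar R = g^mn R_mn."""
    g_inv = np.linalg.inv(metric)
    ricci_scalar = np.tensordot(g_inv, ricci)   # double contraction g^mn R_mn
    return ricci - 0.5 * ricci_scalar * metric

# Unit 2-sphere at colatitude theta: metric diag(1, sin^2 theta).
# Its Ricci tensor equals its metric, so the Ricci scalar is 2.
theta = np.pi / 3
g = np.diag([1.0, np.sin(theta) ** 2])
G = einstein_tensor(ricci=g.copy(), metric=g)
```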
There is the issue of an intrinsic energy density for biology. We know from Maxim 1 of ecology, the maxim of dissipation, that ∫dm < 0, ∇ • M → 0, and M = nm̅. The inevitable dissipation can only be countered through a given quantity of mass in which a given quantity of work is continuously maintained so that progenitors can bring into being the progeny that will succeed them. All entities therefore (a) live in and with cells; and (b) can be counted in populations whose minimum size must be one progenitor and one progeny. These fundamental energy and population densities require a suitable metric. We can reckon these through a fuller version of the Einstein field equation:
Rμν + Λgμν − (½ • Rgμν) = (8πG/c⁴) Tμν,
which contains the cosmological constant, Λ, that Einstein introduced when he modified his original field equations. It abuts the metric tensor gμν and thereby states an intrinsic energy for spacetime that affects whatever flows through whatever measured space. It is an energy density Einstein attributed to the cosmos at large. Although the constant was originally questioned, it has been reintroduced to gravitational and cosmological theory because the observable universe seems to be expanding at a greater than expected rate.
The biological importance of the cosmological constant is its declaration of a fundamental energy density attributable to all mass and matter, in that without a minimum set of activities within the designated spaces, there is no biology. Since this is an intrinsic energy density which allows all biological matter to be maintained and the cycle to continue, we simply replace Einstein’s cosmological constant with the Virchow constant, kV, that we have already introduced, and whose proposed value is 1,000 kilojoules per mole. The Virchow constant implies that if our cloud or bag of chemical components is going to be biological, and if the mass concerned is somehow to be enclosed within a biological entity, then it must demonstrate the intrinsic and ongoing energy density characteristic of all biological matter: i.e. those ongoing chemical reactions that support cellular behaviour. The Virchow constant is stated in reference to a standard cell and defines the thermal energy used, the quantity of the work done, and the number of molecules involved, when a standard cell possesses all pertinent properties in a standard Euclidean space. The constant’s precise value, and so the exact rates and components used in any other spacetime, will depend entirely upon the prevailing coordinates, and so on the properties of that specific biological space.
By Maxim 3 of ecology, the maxim of succession, cells and entities must succeed one another and must therefore maintain a minimum number as well as a mass-energy density. This succession is a fundamental attractive and repulsive property. The term G on the right-hand side of the Einstein field equation is Newton’s universal gravitational constant. Any two bodies with mass exhibit an attractive force of magnitude Gm₁m₂/r². The exact value depends on the nature of the space. When applied to the earth, it produces g as the earth’s specific manifestation of gravitational attraction.
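Newton’s inverse-square relation mentioned above recovers g directly; a minimal sketch, in which the earth’s mass and radius are standard reference values included here as assumptions:

```python
G = 6.674e-11  # Newton's universal gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Attractive force of magnitude G * m1 * m2 / r^2 between two point masses."""
    return G * m1 * m2 / r ** 2

# A 1 kg mass at the earth's surface feels roughly 9.8 N, recovering g as
# the earth's specific manifestation of gravitational attraction.
M_EARTH = 5.972e24   # kg (reference value)
R_EARTH = 6.371e6    # m  (mean radius, reference value)
g_surface = gravitational_force(M_EARTH, 1.0, R_EARTH)
```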
In similar fashion, the engenetic constant, Ω, is the formal declaration of the presence or absence of Darwinian competition. It is the biological equivalent of Newton’s gravitational constant and specifies the number of entities in the Franklin population. The gravitational and the engenetic constants both declare the fundamental attractive and/or repulsive properties of their respective bodies. Where the Virchow constant states that components can be gathered into viable cells and biological matter, the engenetic constant states that we expect entities to gather into populations of such a size as can reproduce each other. Any population’s exact size and disposition then depends upon the space it is found in. If the biological space expands then the number of entities is liable to increase, and if it contracts then the number is instead liable to decrease … entirely according to the gradients found on the Riemannian manifold. The engenetic constant simply defines the ideal case where biological space is flat. There is then no change in population numbers and no competition. This is the definition of a flat and featureless biological space. And just as in the general theory of relativity gravitation emerges not as a force but simply as free movement through spacetime along a geodesic, so also is Darwinian competition now simply the movement of energy and chemical components along a geodesic, between given cells and entities.
We can now clarify, and rigorously quantify, Darwinian fitness. We have an entire axis devoted solely to number. A first component in Darwinian fitness must therefore be the energy materially expended on number at each moment, which is the entire row T10 and the Haeckel potential. This is the material expression of the “native” number pressure, along with its two shears. But the column T01, the constraint of constant propagation, is the expression of the relativistic mass-energy flux in that number-axis direction. We thus need to factor in the amounts from the mass and energy fluxes that should have been expended, directly, upon mass and upon configuration energy, but that were instead sheared over into maintaining numbers, being T21 and T31. Therefore, Darwinian fitness is T01 + T10 − T11, always expressed in appropriate units. It can always be given a value in any real case.
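The fitness expression above can be read straight off a stress-energy array; a minimal sketch with illustrative numbers (the function name and sample values are assumptions, not part of the text):

```python
import numpy as np

def darwinian_fitness(T: np.ndarray) -> float:
    """Darwinian fitness as stated in the text: T01 + T10 - T11.

    Index convention: 0 = time, 1 = the number axis. Units are whatever
    consistent units the tensor itself carries.
    """
    return T[0, 1] + T[1, 0] - T[1, 1]

# A sample symmetric tensor; every entry is illustrative only.
sample = np.array([
    [10.0, 1.5, 2.0, 3.0],
    [ 1.5, 4.0, 0.5, 0.2],
    [ 2.0, 0.5, 5.0, 0.3],
    [ 3.0, 0.2, 0.3, 6.0],
])
fitness = darwinian_fitness(sample)   # 1.5 + 1.5 - 4.0
```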
Elton said, “It is therefore convenient to have some term to describe the status of an animal in its community, to indicate what it is doing and not merely what it looks like”, which we know to be pdt + mdt (Elton, 1927). As in Figure 77, a given biological entity is a sac or bag of limit-point molecules moving through biological space. It therefore always has both (a) a normal; and (b) a tangent. The right-hand graphic shows two bodies of equal masses and velocities colliding. Whether they are rolling dynamically or attracting each other gravitationally makes no difference. They will bring each other to a halt as their joint forces act normally. No component of the net impulse or momentum is diverted tangentially. The quantities in that direction are zero.
As on the left in Figure 77, a biological entity always has a normal that points “straight upwards” towards a given point, and which is unique for that point. It can be intuitively thought of as the “direction” in which all properties at that point would vanish if the entity met or collided with another identical to itself. That normal point can now be connected to a specified centre to form a Euclidean sphere. This creates a surface with that centre of normal curvature surrounding the normal point, inducing the infinitesimal surrounds to act as a Euclidean sphere would be expected to act, complete with the curvature or gradients for that surface. There is, in other words, a characteristic pattern of behaviour.
The tangent plane on which the entity’s instantaneous and tangential velocity can carry it also just “kisses” that normal point. It can carry the entity in any conceivable direction, depending entirely upon the entity’s precise orientation on its given geodesic. We see three possible directions in the figure. Thus, and very differently from the normal, the number of possible tangents is not unique. Any one entity will always have a normal and will only ever be going in but one direction, but all the other tangent directions remain available to all other entities arriving at that same normal point.
The entity’s normal, and its tangent vector, at any given point, are now a statement of its biological, biographical and massenergy condition. This overall state is now formally defined as its instantaneous niche or “iniche”. The iniche is therefore the property not of that entity, but rather of its biotrail’s spacetime event. The tangent and the normal are properties of the surface. It is the surface that establishes the directions.
An entity’s biotrail is its entire historical record in biological spacetime. Since every point or spacetime event on a biotrail has its iniche, then the sum of all these iniches, for a given biotrail, states the entity’s entire suite of interactive activities—i.e. all its dealings with massenergy—over its lifetime. Since this suite is the set of normals and tangents produced by the surface, then this complete set of instantaneous iniches is, of course, the property of that entire historical biotrail, and not of the entity itself. This set of instantaneous iniches over the entire biotrail—its lifetime dealings in mass and energy—is now the entity’s ecological niche or “eniche”.
We know from Brassica rapa and Chorthippus brunneus that one entity and one biotrail—and so therefore one eniche—is not enough to define a population. Granted the variability and the ubiquity of the tangent vectors at any one normal point in biological spacetime, it is entirely possible for two biological entities to be at the same biological spacetime point—i.e. to participate in the same event—and yet have greatly differing velocities, and so directions, on the same tangent vector, and so to sketch out completely different biotrails. Therefore, if we want an accurate description of C. brunneus, we need multiple adult-and-22-fertilized-eggs sets of biotrails so that we produce multiple eniches. Each such set of eniches is the historical collection of all the instantaneous iniches of all the given entities’ biotrails. It is therefore a record of all the iniches and eniches belonging to a population’s biopath.
A standard three-dimensional Euclidean sphere has both (a) a one-dimensional normal emerging orthogonally from its normal curvature; and (b) a two-dimensional tangent plane lying across that same normal point, with its many directions being a valid expression for any and all velocity vectors expressed at that point. A standard, Euclidean, four-dimensional hypersphere will in its turn have, at any given point, (a) a unique two-dimensional normal that can form a normal spacetime curvature; and (b) an infinite number of three-dimensional expressions for its tangent velocity vector. Any Euclidean space—such as this one we currently inhabit—is therefore a tangent to some four-dimensional hypersphere, with its rates and curvatures being an expression of those properties.
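The unique normal and the non-unique tangent directions at a point on a sphere can be computed directly; a minimal sketch (the seed-vector construction of the tangent basis is a standard device, chosen here for illustration):

```python
import numpy as np

def normal_and_tangent_basis(p: np.ndarray):
    """Unit outward normal and an orthonormal tangent-plane basis at a point p
    on a sphere centred at the origin. Any vector not parallel to the normal
    can seed the basis; infinitely many tangent directions share one normal."""
    n = p / np.linalg.norm(p)                 # the unique normal at that point
    seed = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(seed, n)) > 0.9:            # avoid a near-parallel seed
        seed = np.array([1.0, 0.0, 0.0])
    t1 = np.cross(n, seed)
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)                      # completes the tangent plane
    return n, t1, t2

p = np.array([3.0, 4.0, 0.0])                 # a point on the sphere of radius 5
n, t1, t2 = normal_and_tangent_basis(p)
```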
All points on a four-dimensional Minkowski spacetime diagram are “events”. An entity or population’s activities at any time are its biological events. They are its Biot–Savart current element moving through its biological space. By the Biot–Savart law, as soon as we have a biological entity, there is immediately a curve which is an infinitesimal part of its population’s generation length, T. That spacetime curve has a centre. That centre can be computed from its given hyperbolic function to produce given spacelike and timelike coordinates for that discrete event. It can be calculated for a Euclidean space at that point. That current element is again its generation length along with all the mass-energy implied.
We can deduce two further events and worldlines from the earth’s orbit in Figure 66. The first is the sun’s stationary, relative, position at the centre of the earth’s helical worldline; the second is a “vertical” succession of arrows on the earth’s helical trajectory representing a specified point—say an equinox or a solstice—in its orbit, and that the earth then passes through annually. In the same way, every point in a generation length can also be “passed through” cycle after cycle.
Our unique two-dimensional normal for the earth’s worldline and spacetime trajectory can now be connected to the spacetime event for its centre. It is then rotated about its axes to form a four-dimensional hypersphere of linked events. Proceeding spacelike and timelike, we produce the spacelike surface of a standard Euclidean hypersphere with all pertinent properties. It always has a timelike point or presence to indicate its rates of change and development as the section of a four-dimensional hypercone. The normal will therefore create a normal section and indicate a normal curvature and rates of change in spacetime. Its gradients are those rates. They are its only sense, in each direction, and for each axis including time.
Since a normal exists, a given biological entity moving through spacetime at and with this normal has a specified moment in time; a physical location and surround; a set of parameters for normal or orthogonal behaviours and interactions; and a tangent vector with infinitely many three-dimensional spaces, all of which it can freely move into, and all of which are flat and Euclidean like our current space. Those infinitely many Euclidean spaces all centre around the entity’s specific spatial location and time-point, and are all again like this space we ourselves inhabit. All properties in those spaces allow those spaces to be valid expressions for that entity’s instantaneous movement or velocity away from that surround, and into each of their infinitesimally close Euclidean spaces. Their properties are infinitesimally different … yet all are expressions of that entity’s continuing movement into the respective directions, away from its normal curvature or surround. It does not matter where the entity goes or what properties it acquires, it will still occupy a nearby space and time, with infinitesimally close Euclidean rates and properties. In spite of their different directions, those tangent vector spaces still share the same normal, the same curvatures, and the same gradients at that same time and point.
Each member of a population, at each individual event on its individual biotrail, has its own iniche as we have described. The entire collection of biotrails creates a biopath. Since we also need to refer to the population at large, and not simply to its members, then we now call the entire collection of instantaneous iniches over all the population members its “bios”. The bios is therefore the set of properties in both matter and energy that define a set of locales and actions and interactions in spacetime—they are Elton’s doing—at that point t upon the generation length T for that population (Elton, 1927, pp. 63–64). The bios is the matter and energy, along with interactions with mass and energy, all stated as rates of curvature on the surface that allow the entities to continue with their cycle. It is the entire set of normals-plus-tangent-vectors to the enbrane, which is itself the biopath’s cross-section. It is the combination of where the population is “pointing” with its mass and energy values; and where it is “headed” because of its tangent vector, at that time, and as the population develops over space and over time.
A biopath is in its turn a population’s entire historical record. The biopath’s collection of instantaneous bioses at each t over T, and so over its entire generation length, is now defined as its “firmament”. The firmament is thus the entire collection of materials, resources, energies, and interactions involving these, that a population or species needs to complete its biological cycle, and to allow its progenitors to produce its progeny, and so that the resulting progeny can then do the same for that same generation length. And since the firmament is the collection of normals, then all entities will at all times point to some location in the firmament as they move under the influence of their timelike futures courtesy of their tangent vectors … with those tangent vectors then also being a part of the same firmament in their stated values for mass and for energy and for rates in these.
We know from Figures 75 and 76 that spacetime is curved. Any two ships following geodesics can get closer together or further apart. The metric tensor, gμν, tracks all possible changes in spacetime, with the Ricci scalar, R, measuring how much the volume of a unit ball from Euclidean space would change when transported to that space. If the Ricci scalar or scalar curvature is negative, the unit ball will increase in size and occupy a larger volume; and if the curvature is positive then it will occupy a smaller volume. When R is applied to our sac or bag of molecules, then the mass or energy forming the biological entities will do the same.
Since vectors have both size and direction, all vectors and vector spaces are certain to change as the spacetime around them changes. The geodesic will nevertheless remain the “straight line path” for that space and time. It continues to indicate the behaviour of any object free from all influences, and so the shortest line between two points in that space. In general relativity, this is immediately the gravitational behaviour of any particle free from all forces and influences:
… there are some big differences between special and general relativity, which can cause immense confusion if neglected.
In special relativity, we cannot talk about absolute velocities, but only relative velocities. For example, we cannot sensibly ask if a particle is at rest, only whether it is at rest relative to another. The reason is that in this theory, velocities are described as vectors in 4-dimensional spacetime. Switching to a different inertial coordinate system can change which way these vectors point relative to our coordinate axes, but not whether two of them point the same way.
In general relativity, we cannot even talk about relative velocities, except for two particles at the same point of spacetime, that is, at the same place at the same instant. The reason is that in general relativity, we take very seriously the notion that a vector is a little arrow sitting at a particular point in spacetime. To compare vectors at different points of spacetime, we must carry one over to the other. The process of carrying a vector along a path without turning or stretching it is called ‘parallel transport’. When spacetime is curved, the result of parallel transport from one point to another depends on the path taken! In fact, this is the very definition of what it means for spacetime to be curved. Thus it is ambiguous to ask whether two particles have the same velocity vector unless they are at the same point of spacetime (Baez and Bunn, 2006).
The graphic on the left of Figure 78 shows the kind of movement we think trivial when seen in a Euclidean space. We first pick a random direction in which we want to face: say towards a statue of Darwin. We can then walk around a square field, all the time making sure we keep facing that statue. We never change directions with respect to the surface. We can easily engage in two opposing sets of crablike walks on opposite sides of the square. It makes complete sense, in a Euclidean space, to say that we can always face in that same direction at all times, simply changing the direction in which we walk by exactly 90° at each corner of the square. Since the distances are the same on all four sides, we will soon traverse the entire square … and end up where we first started from, but still facing in exactly the same direction. This is two sets of parallel transports.
A Riemannian space is radically different. Parallel sides of the Euclidean kind do not exist. Movements of the above kind are impossible. As in the right-hand graphic, we can start on the western point of the equator, facing due north. We now simply “follow our nose” northwards along the geodesic to the pole. We never turn with respect to the surface. But since the geodesic curves gently all along its length, then even though we have not turned—with respect to the surface—when we arrive at the pole, we will be facing “a completely different direction”. We now do as we did before, and pick a path at 90° to the original. We ourselves do not turn. We now walk crablike sideways down another geodesic, as shown. Since it is indeed a geodesic, we are walking parallel to the first path … but we are facing in a completely different direction, in this case eastwards. When we arrive back at the equator, we again do not change directions. We do not turn with respect to the surface, but we set off at 90° once more. We now transport ourselves back to our original starting point.
We started off facing north. We finish off facing east. We never turned at any time with respect to the surface. Not only are we facing a completely different direction, but we have effectively “lost a side” and “lost a right angle”. It took only three sides and three right angles to make this figure, as opposed to the four of each needed in the Euclidean case. It is all due to the surface and is a part of the relations and circumstances measured by the Ricci scalar, R.
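The “lost right angle” is the spherical excess of this triangle, and on a unit sphere it equals both the enclosed area and the holonomy angle of the parallel transport just described. The following is a minimal numerical sketch using the three vertices of the walk: the western point of the equator, the pole, and a point a quarter-turn east along the equator:

```python
import math

# Vertices of the spherical triangle walked in the text.
A = (1.0, 0.0, 0.0)   # start on the equator
B = (0.0, 0.0, 1.0)   # north pole
C = (0.0, 1.0, 0.0)   # back on the equator, a quarter-turn east

def angle_at(p, q, r):
    """Interior angle of the spherical triangle at vertex p."""
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    def norm(u):
        return math.sqrt(dot(u, u))
    # Normals of the two great-circle planes meeting at p.
    t1, t2 = cross(p, q), cross(p, r)
    return math.acos(dot(t1, t2) / (norm(t1) * norm(t2)))

angles = [angle_at(A, B, C), angle_at(B, C, A), angle_at(C, A, B)]
excess = sum(angles) - math.pi   # spherical excess = enclosed area

# Each interior angle is a right angle, and the excess (pi/2) is exactly
# the holonomy: the transported vector comes home rotated by 90 degrees.
assert all(abs(a - math.pi / 2) < 1e-12 for a in angles)
assert abs(excess - math.pi / 2) < 1e-12
```

Three right angles sum to 3π/2; the excess over the Euclidean π is π/2, which is the 90° turn from north to east that the walk produces.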
By this doctrine of parallel transport, it is impossible to tell if any length, magnitude, or vector is “like” any other without first parallel transporting the one over to join the other in that same space, so they occupy “the same location”. All shapes and measures are deceptive because the curvature changes. Vectors and all other commodities must first be placed under the influence of the same tensors and the same curvatures, or all comparisons are meaningless.
There is an additional factor. The firmament, as we have determined it, is always receiving normals and tangent vectors. It is always stating some given mass and energy; with some given chemical configuration; as relevant to some given point in some given generation length. But … if we can start off “facing north”, and end up “facing east” then infinitely many other such vector exchanges are possible, all entirely depending on the surface. Thus any location within the firmament currently acting as a normal to some point in the curvature, is just as capable of acting as a tangent vector to some other time and location.
Thanks to general relativity, we can now give the species concept a far more rigorous grounding. For example, in Dodd’s experiment with Drosophila pseudoobscura, she fed one subpopulation on maltose, the other on starch. These are different normals into the firmament, and so also mean different tangent vectors on which to move.
The two Drosophila pseudoobscura subpopulations in the Dodd experiment strove to follow their Weyl tensors, which are their given generation lengths, and to create further progeny using the chemical components she made available from the firmament. But since the chemical components were different for each subpopulation, their iniches and eniches were different at every point—again because their normals and tangent vectors were different. And since their vectors were different, then the values for their coordinates were different. Their spaces were different, meaning the numbers in each population, n; their average individual masses, m̅; and their chemical configurations, w, were all also different. The Ricci tensors, which were the material manifestations of the real populations, were therefore different.
Since biological space is curved we cannot properly compare the two Drosophila pseudoobscura subpopulations without using parallel transport. By the doctrine of parallel transport it is always possible to “carry” the values for each individual D. pseudoobscura back on each of their biotrails to each of their original values and populations. When we examine the biotrails for any two members in either of the Dodd subpopulations, we find the same firmament, as well as shared values for their vectors and tensors. Their biopaths are the same and have not diverged.
By the same doctrine of parallel transport, any Drosophila pseudoobscura Specimen A currently occupying a subspace influenced by the presence of maltose and the absence of starch is demonstrating the same probabilities that a Specimen B currently occupying a subspace alternatively influenced by the absence of maltose but the presence of starch would display if it were parallel transported over to join A; with the same holding if A were instead parallel transported over to join B. We can know this to be so because each subpopulation can be parallel transported back to its original and shared locations in biological spacetime where they are indeed side by side. At that side by side point, they have the same firmament and they share the same probabilities. For the number of generations for which the Dodd experiment lasted, parallel transport did not take any individual D. pseudoobscura outside the shared cone of reproductive accessibility.
Reproduction is about the transfer of mass and energy over time, which is a generation. But by Einstein’s special theory of relativity, light-speed is the only possible—and only relevant—measure for all transactions involving either matter or energy; and by the general theory, these same measures must vary according to the quantities of mass and energy involved at each location in spacetime. Biology cannot be taken seriously—as a science—when it steadfastly refuses to acknowledge the significance of either the special or the general theories of relativity and their implications for observations and measurements involving energy, space and time.
Light travels 7½ times around the earth’s circumference in one second. It travels 16,070,400,000 miles, 25,920,000,000 kilometres, in one terrestrial day. Generation lengths involve mass, energy, space and time. Their durations and magnitudes can vary greatly. The critical factor in the Dodd experiment, and the critical factor in our own Brassica rapa experiment, is the number of generations for which each experiment lasted. These differences in their turn create differences in normals and tangent vectors in the firmament. B. rapa’s generation lengths have a minimum range of between 28 and 44 days, which is a proportionate change of 36%—over a third. Light can travel 257,126,400,000 miles, 414,720,000,000 kilometres, further during the longer generation than it does in the shorter. When these time differences are stated relative to the speed of light—the only relevant measure—this is a very considerable disparity.
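These figures can be checked directly; they follow from the rounded light-speed of 300,000 kilometres per second that the text’s totals imply:

```python
# Check of the light-distance figures quoted, using the rounded light-speed
# (300,000 km/s) that those figures imply.
c_km_per_s = 300_000
seconds_per_day = 86_400

km_per_day = c_km_per_s * seconds_per_day
assert km_per_day == 25_920_000_000      # light distance in one terrestrial day

# Brassica rapa generation lengths: 28 to 44 days, a 16-day difference.
extra_days = 44 - 28
extra_km = km_per_day * extra_days
assert extra_km == 414_720_000_000       # the disparity quoted in the text
```

With the exact defined value c = 299,792,458 m/s the totals come out about 0.07% smaller, which does not affect the point being made.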
We began by setting aside the first and perhaps the most important of all quantum principles: that all energy at the discrete atomic scale is dispensed in quantum-sized Planck units, h. If generation lengths vary then the number of quanta delivered must also vary.
We have also determined that a biological population is described by a function of the form f(n, q, w). Since a generation length is a sequence of microscopic interactions, then on simple quantum grounds the number of quanta composing n, q, or w must also now be varying.
A generation length is either the return of a given ratio involving numbers in the population and the moles of components of which they are composed, n and q; or else it is the return of a ratio involving moles of component and the rate at which they are processed, q and w; or else it is the return of numbers and their processing rate, n and w; but by quantum criteria it can never be a complete and consistent return of all three simultaneously, for that is in breach of the Heisenberg uncertainty principle.
Biological entities always occupy spatial locations to which their energy must be delivered, in whatever form … but always ultimately as E = mc² and so as light. The locations concerned can always be measured with a Euclidean metric, and as the triple set of x, y and z coordinates for space, with their current scientific realization being the metre:
2.1.1 Definitions
…
It is important to distinguish between the definition of a unit and its realization. The definition of each base unit of the SI is carefully drawn up so that it is unique and provides a sound theoretical basis upon which the most accurate and reproducible measurements can be made. The realization of the definition of a unit is the procedure by which the definition may be used to establish the value and associated uncertainty of a quantity of the same kind as the unit. …
…
2.1.1.1 Unit of length (metre)
…
The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second.
It follows that the speed of light in vacuum is exactly 299 792 458 metres per second, c₀ = 299 792 458 m/s (BIPM, 2006).
We straight away observe that the metre is defined in terms of light-speed. Every ecological niche is therefore defined by how long it takes light to traverse that space, and every movement of every entity is also therefore measured in those same terms.
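Since the metre is defined through light-speed, converting any spatial extent into its light-traversal time is a single division. A minimal sketch (the one-kilometre niche is purely illustrative):

```python
# The metre's definition ties every distance to a light-traversal time.
c = 299_792_458  # metres per second, exact by definition (BIPM, 2006)

def traversal_time(metres):
    """Seconds taken by light in vacuum to cross the given distance."""
    return metres / c

# One metre is, by definition, 1/299 792 458 of a second of light travel;
# a hypothetical one-kilometre niche takes about 3.3 microseconds to cross.
one_metre_time = traversal_time(1.0)
one_km_time = traversal_time(1000.0)
```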
The biological entities in any given location have a chemical composition which requires a set number of chemical components. By E = mc² those same entities must exhibit a momentum density. By the same Einstein equation in combination with the first law of thermodynamics, that momentum density must exhibit itself as a mass flux of specified chemical components. Those components must have mass; and must be located in a gravitational field; meaning they must therefore have a mechanical force—i.e. a weight—when measured from that given entity’s perspective:
Considering the necessity to put an end to the ambiguity which in current practice still exists on the meaning of the word weight, used sometimes for mass, sometimes for mechanical force;
The Conference declares
• The kilogram is the unit of mass; it is equal to the mass of the international prototype of the kilogram;
• The word “weight” denotes a quantity of the same nature as a “force”: the weight of a body is the product of its mass and the acceleration due to gravity; in particular, the standard weight of a body is the product of its mass and the standard acceleration due to gravity;
• The value adopted in the International Service of Weights and Measures for the standard acceleration due to gravity is 980.665 cm/s², value already stated in the laws of some countries (BIPM, 2006; Appendix).
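The declaration above reduces to a single multiplication. A minimal sketch (the two-kilogram test value is illustrative only):

```python
# Standard weight per the BIPM declaration: mass times the standard
# acceleration due to gravity (980.665 cm/s^2 = 9.80665 m/s^2).
G_STANDARD = 9.80665  # m/s^2

def standard_weight(mass_kg):
    """Standard weight, in newtons, of a body of the given mass."""
    return mass_kg * G_STANDARD

weight_of_prototype = standard_weight(1.0)  # the prototype kilogram
```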
Granted that a biological entity’s weight depends upon the gravitational attraction it experiences wherever it is located; and further granted that the earth is an oblate spheroid; then a given biological process at the equator will proceed more slowly than does the identical process closer to the poles due to the relative difference in the earth’s spin. However, those same processes will also be inclined to proceed more quickly at the equator because they are further away from the earth’s centre of mass (Van Flandern, 1998). These two countervailing processes seem to cancel out at sea level, meaning that all biological processes above sea level will proceed more quickly than will the identical processes at sea level. If, for example, global positioning satellites were not consistently corrected for relativistic effects, their navigational fixes would begin to fail within about two minutes of operation, and their positional errors would continue to accumulate by as much as ten kilometres daily (Pogge, 2009). Thus the 379.1 feet, 115.55 metres, tall Sequoia sempervirens or coast redwood tree Hyperion discovered in August 2006 in the Redwood National Park, California, by Chris Atkins and Michael Taylor will process the materials in its canopy faster than any materials down in its roots using the same metabolic pathways (Earle, 2011).
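The canopy-versus-roots claim can be given a rough order of magnitude from the standard weak-field formula Δf/f ≈ gh/c². The following is a sketch only: it assumes a constant g over the tree’s full height:

```python
# Weak-field estimate of the rate difference between Hyperion's canopy and
# its roots: delta_f / f ~ g h / c^2. An order-of-magnitude sketch only,
# assuming a constant g over the tree's full height.
g = 9.80665        # m/s^2, standard acceleration due to gravity
c = 299_792_458    # m/s
h = 115.55         # m, Hyperion's reported height

fractional_rate_gain = g * h / c ** 2
# Roughly one part in 10^14: a real effect, but biologically negligible.
```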
Biological processes require durations. They involve changes across those durations, which are rates. And since the epochs marking the beginnings and ends of generation lengths are given ratios in n, q, and w, then the durations must stretch across them. The current scientific realization for all epochs and durations is the second:
2.1.1.3 Unit of time (second)
The unit of time, the second, was at one time considered to be the fraction 1/86 400 of the mean solar day. The exact definition of “mean solar day” was left to the astronomers. However measurements showed that irregularities in the rotation of the Earth made this an unsatisfactory definition. In order to define the unit of time more precisely, the 11th CGPM (1960, Resolution 9; CR, 86) adopted a definition given by the International Astronomical Union based on the tropical year 1900. Experimental work, however, had already shown that an atomic standard of time, based on a transition between two energy levels of an atom or a molecule, could be realized and reproduced much more accurately. Considering that a very precise definition of the unit of time is indispensable for science and technology, the 13th CGPM (1967/68, Resolution 1; CR, 103 and Metrologia, 1968, 4, 43) replaced the definition of the second by the following:
The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.
It follows that the hyperfine splitting in the ground state of the caesium 133 atom is exactly 9 192 631 770 hertz, ν(hfs Cs) = 9 192 631 770 Hz. At its 1997 meeting the CIPM affirmed that:
This definition refers to a caesium atom at rest at a temperature of 0 K.
This note was intended to make it clear that the definition of the SI second is based on a caesium atom unperturbed by black body radiation, that is, in an environment whose thermodynamic temperature is 0 K. The frequencies of all primary frequency standards should therefore be corrected for the shift due to ambient radiation, as stated at the meeting of the Consultative Committee for Time and Frequency in 1999 (BIPM, 2006).
The quantum theory and the Heisenberg uncertainty principle immediately tell us that all such timekeeping efforts are doomed to failure. The uncertainty of, for example, the NIST-F1 caesium fountain atomic clock currently housed at the National Institute of Standards and Technology laboratories in Boulder, Colorado, USA—one of the exemplars of chronological accuracy—can certainly be improved, but wave-particle duality dictates that it can equally certainly never be completely eliminated.
The NIST-F1 clock helps maintain UTC or Coordinated Universal Time, the official world time, through atomic fountain or cascade-like movements that measure frequencies … and therefore time intervals (NIST, 2009). Six mutually orthogonal laser beams pierce a central vacuum chamber. They gently nudge caesium gas atoms in that chamber into a ball, slowing them down to temperatures very close to absolute zero. The two vertical lasers then toss the caesium atomic ball upwards by about a metre in the eponymous fountain action. All the lasers are then switched off. The ball of atoms falls back under gravity through the microwave cavity. This up and down fountain action takes less than a second.
In line with quantum principles, the energy levels of specified atoms in the fountain may or may not change; and they may or may not be excited into higher quantum states. When the atoms in the ball in the fountain fall back to the bottom, another laser is fired. The entire process is repeated many times while a microwave signal in the cavity is tuned to several different frequencies. Any excited atoms—and on probability grounds there will always be some—will fluoresce and emit photons which are measured by a suitable detector. Since caesium’s natural resonant frequency is 9,192,631,770 Hz, the maximum fluorescence suitably defines the second. In 2000 the uncertainty for this whole process was approximately 1 × 10⁻¹⁵, and by the summer of 2010 it had been improved to about 3 × 10⁻¹⁶, meaning that as long as all the laws of physics remain constant, then the clock would neither gain nor lose a second in 100 million years.
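The closing claim is simple arithmetic: a fractional frequency uncertainty of about 3 × 10⁻¹⁶ accumulates to roughly one second over 100 million years:

```python
# A fractional frequency uncertainty of about 3 x 10^-16 accumulates to
# just under one second over 100 million years.
fractional_uncertainty = 3e-16
seconds_per_year = 365.25 * 86_400
years = 100_000_000

drift_seconds = fractional_uncertainty * seconds_per_year * years
# drift_seconds works out at a little under one second
```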
Biological events also use quantum processes and so are subject to the same quantum uncertainty as the NIST-F1 … but there is no possibility whatever of them maintaining anything like the same accuracy. They do not use laser beams. They are not protected from the environment or firmament. They do not use fountains. They occur well above the absolute zero of temperature.
As is well known from special relativity, two inertial frames can only maintain the same ‘clock time’—i.e. the same atomic speed—if they never accelerate relative to each other. But this then also means that the gravitational fields for both frames must be flat. They must be featureless Minkowski spaces and so can contain no mass. Therefore, no two generations of any population of any description can possibly maintain their generation lengths without variation. Either the generation times change as their epochs of beginnings and ends move and their totals for their mass and energy fluxes change while the rates remain the same; or else the totals stay the same while the rates instead change, meaning the average individual values change. We have already proven, through the Liouville and the Helmholtz decomposition theorems, that either way, this is a change in species.
By special relativity, any entities moving through space and/or time faster than any other will metabolize at a slower rate … meaning that any viruses or bacteria residing upon fast-moving predators will always have slower metabolic and physiological rates and therefore longer generation times than their cousins living on the much slower moving prey. Even if any two biological populations could maintain this quite unattainable caesium clock-style accuracy of the special theory, they certainly could not overcome the problems posed by the general theory and by accelerating inertial frames. Thus by the special and the general theories of relativity, it is simply impossible for any population to maintain identical generation lengths over all its entities located anywhere and at any time in any geographic location on any planet, whether contemporaneously or successively.
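The scale of the special-relativistic effect invoked here can be checked numerically. A minimal sketch, with an illustrative speed of 100 m/s standing in for a fast-moving predator (the figure is an assumption, not a measurement):

```python
import math

c = 299_792_458.0  # m/s

def time_dilation_factor(v):
    """Lorentz gamma: the factor by which a moving clock runs slow."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# An illustrative 100 m/s, roughly a diving falcon (assumed figure).
gamma = time_dilation_factor(100.0)
# gamma - 1 is of order 10^-14: the effect exists but is immeasurably small
# against the other sources of generation-length variation discussed here.
```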
Just as a Euclidean space exists at every infinitesimal point in a Riemannian manifold, so also, as in Figure 79, does a Minkowski space and an inertial frame along with its gravity. That gravitational field can be considered flat. It matches the accompanying Euclidean space created by the normals and the tangents. That Minkowski space then defines the totality of that observer’s spacetime by dividing it into the past and future light cones; the two surfaces to those cones; and the absolute elsewhere.
Since an entity’s biological behaviour is a subset of the total of its physical and chemical activities, these two light cones for a physical observer must also and immediately divide each entity’s biological activities, and its biological reality, into the same five zones. These are: (a) biological activities that it can causally affect both upon the biological hyperplane of the present and in the future light cone; (b) biological activities that causally affect it both upon the biological hyperplane of the present and from the past light cone; and (c) biological activities that have no possibility of either. Therefore, and as in Figure 79, every biological entity is an observer on a quantum hyperplane of the present that is defined by its overall energetic and metabolic activities, and which can be categorized for biology and ecology as follows:
• those energetic and metabolic activities that lie inside its past cone of reproductive accessibility and that maintained all prior entities in that population;
• those energetic and metabolic activities that lie inside its future cone of reproductive accessibility and that will maintain all future entities in that population;
• the rate and the sequence of the Law 4 reproductive activities that form the edge of its cone of reproductive accessibility and that lead up to it from its past, and that therefore produced its present entities;
• the rate and the sequence of the Law 4 reproductive activities that form the edge of its cone of reproductive accessibility and that lead away from it into its future, and that will therefore produce its future entities;
• the firmament that lies outside both its past and its future cones of reproductive accessibility.
Those five biological and reproductive zones are once again a subset of the entirety of the quantum physical and quantum chemical probabilities that exist relative to any entity and/or population on the hyperplane of the present. These five biological-reproductive zones are defined by that subset of quantum probabilities encapsulated within the four laws of biology, the four maxims of ecology, and the three constraints that we have already presented, discussed, and demonstrated. There are no other possibilities.
Although the words that Hermann Minkowski spoke in the introduction to the talk he gave at the 80th Assembly of German Natural Scientists and Physicians in Cologne, Germany, on September 21, 1908, are now famous, their implications seem to have been lost on biologists and ecologists:
The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality. ….
…
A point of space at a point of time, that is, a system of values, x, y, z, t, I will call a world-point. The multiplicity of all thinkable x, y, z, t systems of values we will christen the world (Minkowski, 1908).
A naive interpretation of the NIST-F1 clock is that it is an attempt to establish an absolute and independent metric for time. On that interpretation, the clock makes the impossible possible and also bypasses both quantum uncertainty and everything now known about Minkowski spacetime. The NIST-F1 then “proves” that time is both regular and independent. According to that naive interpretation the clock is an affirmation that we can indeed point at two different events—say a p as the epoch marking the beginning of a biological generation, and a q as the epoch marking its end—and state a clear and unambiguous value—say a d(p, q)—for the distance or interval between them. More than that, it is also an attempt to say that if we have the three epochs p, q and r, then they can be assigned to the three distinct epochs t(p), t(q), and t(r) with the clear ordering t(p) < t(q) < t(r). The two intervals or distances between them are d(p, q) and d(q, r), and such that d(p, r) = d(p, q) + d(q, r). Since this is a one-dimensional metric, then time is automatically flat and Euclidean. This naive interpretation also further assumes that distances can be scaled so that if there is a specific distance, Δs, where Δs = d(p, q), then aΔs = ad(p, q) where a is some arbitrary scaling factor. Time’s “rate of flow” cannot change because a change of any kind immediately implies some second dimension against which time itself can be measured, and against which its rate of flow is changing.
Building on this naive interpretation, we can easily establish a metric for a two-dimensional x-y plane or similar. Whatever the metric’s form or style, the distance between any two points p and q in that proposed space should satisfy the following five well-known conditions for metric spaces:
• the identity condition: d(p, p) = 0;
• the symmetry condition: d(p, q) = d(q, p);
• the non-negativity condition: d(p, q) ≥ 0;
• the triangle inequality condition: d(p, q) + d(q, r) ≥ d(p, r); and
• the identity of indiscernibles condition: if d(p, q) = 0 then p = q.
A Euclidean plane completely satisfies these conditions. For any two points p and q on such a flat Euclidean space, each of which has the Cartesian coordinates (xp, yp) and (xq, yq), the Pythagoras theorem gives the distance between them as d(p, q) = √[(xq − xp)² + (yq − yp)²]. The Pythagoras theorem suitably defines the shortest distance between two points, and makes the triangle inequality’s essential point that “a detour is not a shortcut”.
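The five conditions above can be checked mechanically against the Pythagorean distance. A minimal sketch with three arbitrary points:

```python
import math
import itertools

def d(p, q):
    """Pythagorean distance in the plane."""
    return math.sqrt((q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

# Three arbitrary (illustrative) points.
pts = [(0.0, 0.0), (3.0, 4.0), (-1.0, 2.5)]

for p in pts:
    assert d(p, p) == 0.0                        # identity
for p, q in itertools.permutations(pts, 2):
    assert d(p, q) == d(q, p)                    # symmetry
    assert d(p, q) > 0.0                         # non-negativity (distinct points)
for p, q, r in itertools.permutations(pts, 3):
    assert d(p, q) + d(q, r) >= d(p, r)          # a detour is not a shortcut
```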
When we apply the Euclidean metric, we immediately assume that each dimension behaves separately, and in the manner of the above one-dimensional metric. We also assume that distances in this x-y plane can be scaled. In other words, if Δs = d = √[(x₂ − x₁)² + (y₂ − y₁)²], then aΔs = ad = a√[(x₂ − x₁)² + (y₂ − y₁)²], where a is again some arbitrary scaling factor. We can still derive a metric if the space is non-Euclidean or not flat. We can accommodate curves within the plane and varying rates of change—i.e. differences in paths—as dx/dy. We can go further and set each axis against time to procure a dx/dt and dy/dt. We simply set each of the x- and y-axes against our assumed independent one-dimensional temporal metric from above … which is still assumed to continue with its invariant behaviour.
Our two-dimensional metric is soon extended to a standard and completely flat three-dimensional Euclidean space of x, y and z in which Δs = d = √[(xq − xp)² + (yq − yp)² + (zq − zp)²], and which is scalable to aΔs = ad = a√[(xq − xp)² + (yq − yp)² + (zq − zp)²]; and all with our one-dimensional temporal axis again being independent as it measures all epochs and durations in this Euclidean space. This is the world we assume as normal all around us.
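The five metric conditions and the scaling property just described can be checked mechanically. The following is a minimal sketch, not part of the original argument; the sample points and the scaling factor are arbitrary:

```python
import math

def d(p, q):
    """Euclidean distance between two 3-D points, via the Pythagoras theorem."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

p, q, r = (0.0, 0.0, 0.0), (3.0, 4.0, 0.0), (3.0, 4.0, 12.0)

assert d(p, p) == 0.0                    # identity condition
assert d(p, q) == d(q, p)                # symmetry condition
assert d(p, q) >= 0.0                    # non-negative condition
assert d(p, q) + d(q, r) >= d(p, r)      # triangle inequality: a detour is not a shortcut
assert d(p, q) != 0.0 or p == q          # identity of indiscernibles

a = 2.5                                  # some arbitrary scaling factor
scaled = d(tuple(a * x for x in p), tuple(a * x for x in q))
assert math.isclose(scaled, a * d(p, q))  # distances scale: aΔs = a·d(p, q)
```

Any candidate biological metric would have to pass the same five checks.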
Arthur Cayley and Augustin-Louis Cauchy were the first to take the study of spaces in a more modern direction by (a) defining them analytically; and (b) extending them to n arbitrary dimensions, in which no ‘real’ object can exist (Volkert, 2008). A four-dimensional unit cube or sphere remains available for analysis even though it cannot be seen. This eventually leads to a far more sophisticated definition of dimension, such as that used in fractal geometry. A dimension is now any intrinsic property that is independent of the space in which the object happens to be embedded. If a given point is measured with (x, y) coordinates then it is two-dimensional. However, since the same point could be accessed on a standard unit circle using polar coordinates, that same point can also be considered one-dimensional, for it is then accessed through its angle. We have already learned how to use polar coordinates and directional derivatives to measure biological populations.
Henri Poincaré was the first to formulate the metric that became so critical to the special and the general theories of relativity (Volkert, 2008). His metric remained invariant under the Lorentz transformations of √(1 − v²/c²) validated by the Michelson–Morley experiment, which transformed physical theories by proving that there was no ether, and that light is electromagnetic radiation with no need of any medium to transport it. Poincaré achieved his purpose by using the imaginary number i, where i² = −1. All points in the Poincaré space now have the coordinates (x, y, z, it). Since squaring the time coordinate now simply changes its sign, then when the Pythagoras theorem is applied to compute the shortest distance between two points, the distance metric, Δs, changes from the coordinate form (x, y, z, it) to the quadratic form x² + y² + z² − t². The Lorentz transformations now simply rotate the quadratic from positive to negative, while the actual value of the measure is left invariant. This makes both the Minkowski space and Einstein’s famous equivalence principle possible.
The effect of the metric first introduced by Poincaré is made clearer in Figure 80. Since we measure along the time axis in ct units, incorporating the speed of light, then for any two points (x1, y1, z1, t1) and (x2, y2, z2, t2), the four-dimensional Lorentz or Minkowski interval, Δs², becomes:
Δs² = (x2 − x1)² + (y2 − y1)² + (z2 − z1)² − c²(t2 − t1)².
All space values remain positive while the time term becomes negative, maintaining the same value with respect to light. The distance or metric Δs does not change no matter how we transform it. However, its distribution of values across the axes used now depends entirely—and entirely arbitrarily—upon how we set up the coordinates. Therefore, what the Minkowski metric measures in any given case depends entirely upon the frame of reference chosen precisely because the measure’s proper or absolute value along its own length remains invariant in the transference between frames. If the interval Δs is now measured against an axis incorporating time, then it suitably measures an interval of time according to whatever units are chosen upon that axis. But if the same interval is instead measured against an axis incorporating space, then it produces a measure for distance. So if the coordinates are established to measure ‘pure distance’, then there is no movement or measure along the time axis. The events can then be considered simultaneous. They simply occur at different locations at that epoch. And if there is no change in the space axis, then there is no movement in space. The same Minkowski interval now instead registers as a duration or set of events in that singular location. Thus, extensions in time and space have become equivalent.
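The interval and its timelike or spacelike character can be illustrated numerically. This is a minimal sketch under the sign convention just given (space positive, time negative); the event coordinates are arbitrary illustrations:

```python
C = 299_792_458.0  # speed of light in metres per second

def minkowski_interval_sq(e1, e2):
    """Δs² = Δx² + Δy² + Δz² − c²Δt² for events e = (x, y, z, t).
    The space terms stay positive; the time term enters negatively."""
    dx, dy, dz = (e2[i] - e1[i] for i in range(3))
    dt = e2[3] - e1[3]
    return dx * dx + dy * dy + dz * dz - (C * dt) ** 2

def classify(e1, e2):
    s2 = minkowski_interval_sq(e1, e2)
    if s2 < 0:
        return "timelike"   # a slower-than-light signal can connect the events
    if s2 > 0:
        return "spacelike"  # only a faster-than-light signal could connect them
    return "lightlike"

# One second apart but only one metre apart in space: a timelike pair.
assert classify((0, 0, 0, 0.0), (1.0, 0, 0, 1.0)) == "timelike"
# Simultaneous events at different locations: a spacelike pair.
assert classify((0, 0, 0, 0.0), (5.0, 0, 0, 0.0)) == "spacelike"
```

The sign of Δs², unlike its distribution over the axes, does not depend on the frame chosen.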
Events A and B in Figure 80 are now considered a timelike pair because it is always possible to find an observer, such as one whose axis is ct3, for whom the two events take place at the same location and so along the worldline of the clock used to measure them. The entire interval is therefore timelike to that observer and with zero relative measures in space. A clock can be used, but the metre stick is redundant.
While the observer at ct3 uses a clock but needs no metre rule to measure Events A and B, which are completely timelike events, all other observers will need both. The two events will now always occur at different places and times to all those other observers, with no clear agreement between any of them as to exactly where and when. Thus axes ct1 and ct2 are also possible, as is any other axis up to the 45° line, which is the speed of light. A and B could well be causally linked. In general, a less-than-light-speed signal could connect them. The only thing the various observers will agree on is that Event B is always timelike relative to Event A, and that A sits always in B’s past light cone, or else that B sits in A’s future light cone.
Events C and D are spacelike because it is possible to find some observer, such as one whose axis is x2, to whom those two events occur simultaneously, and to whom the interval concerned is entirely spacelike. The word ‘simultaneous’ is used to convey the relative lack of measure along the temporal axis. This x2-based observer therefore uses the metre rule, but has no need of a clock to measure these events.
Although no other observer will now measure the same values for Events C and D, and although all other observers will again need both a clock and a metre rule to measure them, Event D is always spacelike relative to C for all possible observers, with D being in C’s absolute elsewhere. Events C and D are causally distinct to all observers in the sense that only a signal travelling faster than the speed of light could now connect them across time. Observers do not agree on how far apart these events are because the lengths vary for them all, but they will all register some distance or other.
We have already seen from Figures 46, 47 and 48 that we can establish a polar coordinate system for biological populations and for their generation lengths. Any point p on a Euclidean plane can be stated as an ordered pair (r, θ). There is therefore a function f(r, θ) that represents all points satisfying the function.
Since we have polar coordinates available, we can of course express our population function f(n, q̅, w̅) as f(ρ, φ, θ). When expressed in Euclidean coordinates, ρ = √(n² + q² + w²), while φ and θ are the ratios between any two of the coordinates, such as n/q and n/w, which are then average individual mass and average individual energy. This defines the divergence and the curl for the fluxes. We can of course substitute w/q for the latter as the visible presence, V, and so facilitate the thermodynamic approach.
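This change of coordinates can be sketched directly. The following follows the ratios exactly as the text gives them (n/q and n/w); the function names and sample values are illustrative only:

```python
import math

def population_polar(n, q, w):
    """Re-express the population function f(n, q, w) as f(rho, phi, theta).
    rho is the radial magnitude; phi and theta are the ratios the text names."""
    rho = math.sqrt(n * n + q * q + w * w)
    phi = n / q     # ratio pairing numbers with molecules
    theta = n / w   # ratio pairing numbers with bonding energy
    return rho, phi, theta

def visible_presence(q, w):
    """The substitution w/q that the text offers as the visible presence, V."""
    return w / q

rho, phi, theta = population_polar(3.0, 4.0, 12.0)
assert math.isclose(rho, 13.0)
```

Any population state (n, q, w) thus maps to one radial magnitude plus two angles, which is what permits the angular, generation-by-generation treatment that follows.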
Our new metric, which we have already used for our biological work, has all the necessary properties. By the identity condition, d(p, p) = 0, the beginning of a generation, howsoever defined, is simply the beginning of a generation, with the same holding for all possible generations. By the non-negative condition, d(p, q) ≥ 0, any point q within the generation, at any time t, has a measure greater than zero and so is “in” the generation. Since the generation length ranges, in absolute values, from 0 to T for all populations, which is also from zero to unity, it is fully consonant with the real numbers for all of them. Any proportion of any generation length is equivalent to any other, for by the symmetry condition, d(p, q) = d(q, p), the value of a measure taken forwards is the same as its value taken back. Since all values can also be measured with our vector normals, which range about unity, any proportionate change in the mass or energy of any entity or population is exactly equivalent to the same proportionate change between any other minimum and maximum values, for they are fully equivalent and can be freely transformed into each other. By the triangle inequality condition, d(p, q) + d(q, r) ≥ d(p, r), a movement along the generation length moves more deeply into that generation for all possible generations. And by the identity of indiscernibles condition, if d(p, q) = 0 then an equivalent point in a succeeding generation is the same as one in either a preceding or a further succeeding generation, with the beginnings and ends of generations lying freely on each other. This allows us to use our polar coordinates to scale over multiple generations without fear … as long as biological space is flat.
The importance of relativity is that if two different observers are in different relative states of motion, with respect to something being measured, then their respective measurements do not have to agree—and will not agree—in all particulars. Since we are mapping biological changes and properties to a circular motion, then we have two important equivalencies. Any number of revolutions or cycles through a generation repeats the angles or ratios, and therefore maps to the same point. In addition to that, any change in values or distances involved in ρ are now all also equivalent under the Lorentz transformations of the Minkowski space. Therefore, for any change in any value or coordinate whatever in one generation, there exists a measurement we can take in any other. All this holds by the special and the general theories, courtesy of the massenergy relations of the former, and the parallel transports of vectors characteristic of the latter.
The net consequence of our discoveries in relativity theory is that two different ecologists on two different planets can both agree that a day, a month, and/or a year have passed relative to their own planets. They can both also agree that such time periods have profound biological significance on their respective home planets. They can even investigate each other’s events and periods, and try to ascertain commonalities and differences. However, because of their different orbital periods, times for planetary spin, orbits about their respective suns and so forth, they will agree on these in general, but will disagree on the absolute periods of time for their days, months and years … if they can even agree on what units to share to state such absolute measurements.
If a mosquito now moves through 0.2 of its generation length while a whale moves through 0.00035 of its alternative generation length, then the two can still agree that four days have passed for them both. They simply disagree on the value for the measure when stated (a) as four days; or (b) relative to their respective generation lengths. And—when it happens—they are also both free to agree that they have at some given moment both moved through 0.2 of their respective generation lengths. They both simply now disagree on the absolute time span. It is 4 days for the mosquito, but 6 years 2½ months for the whale. They are using different rulers. But by the Lorentz transformations and the Minkowski space, these two are equivalent. The two can also both agree—when measuring with their respective unit vector normals—that they have each put on say 0.8 units of their respective initial masses since their generations each started, while completely disagreeing on the absolute amounts of mass those values represent, and even disagreeing on how small or large any of these masses are. There is no agreement anywhere in the observable universe on such values. This is the biological meaning of the constancy of the speed of light.
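The arithmetic behind this comparison can be made explicit. A minimal sketch, taking the text’s figures at face value; the computed whale span, roughly 6¼ years, is consonant with the 6 years 2½ months quoted:

```python
days_elapsed = 4.0            # the absolute span both observers agree on
mosquito_fraction = 0.2       # fraction of its generation the mosquito completes
whale_fraction = 0.00035      # fraction of its generation the whale completes

# The implied absolute generation lengths: the two different 'rulers'.
mosquito_generation = days_elapsed / mosquito_fraction   # 20 days
whale_generation = days_elapsed / whale_fraction         # ~11,429 days

# When each has moved through 0.2 of its own generation,
# the absolute spans those identical fractions represent differ widely:
mosquito_span_days = 0.2 * mosquito_generation           # 4 days
whale_span_years = 0.2 * whale_generation / 365.25       # ~6.26 years
```

The relative measure (0.2 of a generation) agrees; the absolute measure (4 days versus roughly 6¼ years) does not, and cannot.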
Since we already have our vector unit normals, our polar coordinates, and now the Lorentz transformations of the special and the general theories, there always exists a sequence of parallel transports and exchanges of coordinates that can smoothly and freely express any property of any biological population in terms of any other, and over all possible transports of energy and matter for any biological population. All are equivalent and all can be freely transformed, the one into any other. When a whale completes but a single generation, it is of no consequence whatever how many of their own cycles a population of mosquitoes will have completed in its stead. By E = mc², that number of generations, and the mass and energy used, are precisely equivalent to the energy used by the whale for its own single generation. The masses, energies and distances are fully equivalent under the special theory of relativity.
The general theory concerns itself with the curvatures and changes caused by aggregations of energy. By the Weyl tensor, biological populations are waves. They have a timelike presence measured as seconds per biomole. But the number of biomoles remains constant over all biological populations. The number of seconds involved in processing that generation, over any population, transforms under relativity and the doctrine of parallel transport to maintain a full equivalence.
Just as the light cone is critical to special and general relativity, so is the cone of reproductive accessibility—as we have titled it—critical to biology. Only those events that lie within an entity’s past reproductive cone can affect its features, properties, and structure, for only they can reach it from the past. They reach it through reproduction for that is the only way the singularity upon the biological hyperplane of the present comes into existence; and it is also the only way biological cones can continue into the future.
By Maxim 2 of ecology, the maxim of succession, ∇ • H = δW, only biological properties causally transmitted through cones of reproductive accessibility can feature in any population. All other materials, resources, traits, features and properties form a part of the firmament.
A population’s extant collection of features can only—and must—be maintained, at any given time, through a spacelike interaction, by each entity, with the firmament and so with the surrounding mass and energy. At every point over a generation, therefore, and as stated in Maxim 2, there is a timelike movement of traits, features, mass and energy along the generation length, but always and only through and because of the spacelike work done and heat emitted in the firmament, and according to Law 1, the law of existence. The firmament is, again, the sum of all the biotic and abiotic aspects of the environment as are available—relative to any given entity or population—to be converted into biological matter by any entity or entities mutually enjoying a cone of reproductive accessibility.
Any set of reproductive entities purportedly belonging to “the same species” will evidence similar values and histories. When studied under the doctrine of parallel transport they have the same origins. They share the same proper time for their generation which is the interval Δs, the distance along the Minkowski interval. It is the proper time, τ, and the length of the biopath. It can be measured as the generation length from 0 to 1, which is the angle θ moving from 0° to 360° or else from 0 to 2π radians. All states, rates, and velocities for properties can be stated as dx/dθ, with time also being stated as dτ/dθ which is simply the proportion of the absolute generation span that passes per each infinitesimal increment around that generation. The biological generation has no other metric for any population.
The generation length can always be stated, if desired, in absolute terms and via the NIST-F1 clock or whatever measure is considered appropriate. But that measure is entirely arbitrary. No two species need agree on that so-called absolute duration or measure stated. They do all agree, however, that the generation length is 0 to 1 when measured relative to themselves. They all also agree that they move, in that period, from a minimum to a maximum value for mass and energy and back again, although they do not agree on any absolute values for those movements … and nor are they obliged to do so, for by the Einstein equivalence principle of special relativity no one measure for absolute quantities is any better than any other. By the special theory, all absolutes are arbitrary and without foundation.
We are now ready to break out our planimeter for the last time. All previous planimeters have been linear, measuring as they circulate about the circumference, albeit that circumference is the generation length. Figure 81 shows a polar planimeter, which instead works with angles. The central measuring wheel can be placed anywhere. Of the three different populations shown, Population III is currently being measured. Population I is the ideal and Franklin population.
We measured the three constraints of constant propagation, constant size, and constant equivalence in absolute units as watts, darwins, and watts per kilogramme respectively. But we then re-expressed the same quantities as unit vector normals ranging around unity, and so as the engenetic burdens of fertility, components mass, and conformation. The population’s absolute measures as the three constraints gave us access to their measured external properties, while their relative measures as vectors and burdens not only gave us access to their potentials, but also made them comparable to each other. The three fluxes of numeracy, mass and energy, Q, M, and P, are similarly expressed in absolute terms over the generation length, which is itself measured in absolute terms as Z seconds per biomole. They therefore give us access to their measured and external values. The Haeckel, Darwin, Mendel and Gibbs potentials then express exactly the same values, but in relative terms all about the generation … which we also alternatively measure from zero to unity for all populations. The linear and polar planimeters therefore measure exactly the same commodities, but express them in those importantly different ways: the relative and the absolute.
The element T00 is the Haeckel potential, ηH. It is the sum of all the energies, rest mass energies, and materials fluxes over the population. Since the momentum of the relevant energies expressed as photons is mc; and since the time axis on the worldline is measured in light-speed units as ct; then the entire combination of mass and energy expressed at each instant is mc², and the full statement of the evolutionary potential, η, is the sum of ηH, ηD, ηM, and ηG.
The measuring wheel in Figure 81 has just moved incrementally from B to D, and so from (x, y) to (x + dx, y + dy). The total area is the figure ABCDE. The planimeter has slid, like our linear one, across the length dT to create the parallelogram ABCE; but it has also rotated about itself to create the polar section or triangle CED of angle dθ.
The parallelogram’s area is h dT, while the small triangle’s area is (h²dθ)/2. Therefore, when the planimeter has gone around the whole generation the total area will be:
A = (h²/2)∮C dθ + h∮C dT.
But if the planimeter completes a circuit, then the net change in θ is zero, which gives A = h∮C dT. Therefore, determining the total area of the generation depends only on the net displacement of the planimeter’s wheel around the circuit, T, and is completely independent of its position relative to the curve being measured. The distance h can always be stated relative to any other population. We can measure and transform any curve into any other. All generation lengths are therefore still equivalent in that they all measure one revolution; while by the principle of special relativity their absolute speeds in traversing that generation length, in terms of its proper length, T, continue to make them equivalent. This is so irrespective of how much mass and/or energy is involved in that absolute length, and of how rapidly the given population must move relative to any other. Green’s theorem, Stokes’ theorem, Gauss’ theorem and the gradient theorem of line integrals are all still valid, as are the Biot–Savart law, and the Liouville and the Helmholtz decomposition theorems.
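Green’s theorem, which a planimeter performs mechanically, can be illustrated numerically: the area enclosed by a closed circuit is recovered purely from the net displacement around it. A minimal sketch; the curves are arbitrary examples:

```python
import math

def loop_area(points):
    """Area enclosed by a closed polygonal circuit via Green's theorem,
    A = (1/2) * integral of (x dy - y dx) around the loop.
    A planimeter accumulates this integral mechanically."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return area / 2.0

# A unit circle traversed once: the enclosed area approaches pi,
# and the net change in the angle theta over the full circuit is zero.
n = 100_000
circle = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
          for k in range(n)]
assert math.isclose(loop_area(circle), math.pi, rel_tol=1e-6)
```

Only the circuit itself matters, not where the wheel sits relative to the curve, which is the point made above.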
Population I, the proposed ideal Franklin cycle, is everywhere flat and Euclidean. We can Lorentz transform and parallel-transport vectors from this or any other population to any point in any other—including Brassica rapa and Chorthippus brunneus—and record all masses, numbers, and energies for all of them, so we can compare them and draw conclusions. They all therefore appear Euclidean at every point, and can be measured as such. This is equivalent to “posing” every population we come across, and then recording all values.
Once we have recorded all possible values at t = 0, we infinitesimally increment along the generation length, or time axis. This is a timelike movement. We then record all spacewise values that accompany this incremental timewise dT, and/or dθ, increment. We will never notice any shear values in our Euclidean population. We also do not seem to notice them, locally, in any other population.
We dutifully record all values everywhere. We begin to suspect, from the figures we take, that many of these spaces are distorted, but there is no real evidence of this at this local and infinitesimally-near-to-Euclidean scale. Every space appears ideally Euclidean. There are vector units everywhere. We always take “good pictures” everywhere. All events at every t over T are appropriate.
We eventually reach the end of this exercise. We can then analyse our results using the stressenergy tensor in Table 8:
Table 8. The biological stress-energy tensor.

The columns are headed by the Haeckel potential, ηH, and the three engenetic burdens (relative measures): the engenetic burden of fertility, φ; the engenetic burden of components mass, κ; and the engenetic burden of conformation, χ. Beneath each burden sits its corresponding constraint (absolute measures)—the constraint of constant propagation (joules per generational increment), of constant size (joules per biomole), and of constant equivalence (watts per kilogramme)—and its corresponding quantity: n entities, q molecules, and w bonding.

The rows are the four potentials (relative) together with their three fluxes (absolute):

• Haeckel potential, ηH: T00 | T01 | T02 | T03.
• Darwin potential, ηD; number flux, Q (biomoles per second); n entities: T10 | T11 Pronumeracy | T12 Anumeracy | T13 Anumeracy.
• Mendel potential, ηM; mass flux, M (kilogrammes per second); q molecules: T20 | T21 Abundance | T22 Probundance | T23 Abundance.
• Gibbs potential, ηG; energy flux, P (joules per second); w bonding: T30 | T31 Accreativity | T32 Accreativity | T33 Procreativity.

The diagonal elements T11, T22, and T33 record the essential development; the off-diagonal elements record the compensatory development.
We again place our total relativistic energy in T00. We do some careful calculations and distribute our measured energies, from each population, along the relevant rows and columns. The elements T11, T22, and T33 again record the normal pressures.
When we carefully examine these values, we notice that we do not have shears in our ideal Euclidean and Franklin cycle … but we now notice them in every other possible—and real—population. The T11, T22, and T33 values must then be the normal movements through each space. They must also be the geodesics. The shears are signs of a larger global curvature that all real populations seem to suffer from. All local and Euclidean spaces act as if free from the inevitable inertia of any mass and energy placed within them, but only the ideal one acts so globally. It is the only one that is linear and multiplicative and whose values on its diagonal act as if free from the ability of all masses and all energies to bend space and time about themselves. Real and global spaces do not have this property. Thus in spite of all appearances to the contrary, every local and apparently Euclidean space has an intrinsic curvature embedded in it.
When energy is expended on a shear, then it is not being expended on the essential development, λ, which is the diagonal composing the normal pressure. It is instead being expended on compensatory development, L. Therefore, the measured values will never be what is expected for a flat Euclidean space. Those shears can therefore be measured and isolated through the Ricci scalar or scalar curvature, which is the trace or contraction of the Ricci tensor.
The Ricci scalar is determined directly by that point’s intrinsic geometry. That geometry is the combination of its location and the rates of change of its various dimensions. The Ricci scalar is a unique real number assigned to each point that quantifies the effects mass and gravity have upon that space and at that point.
The Ricci scalar is unique because its value depends entirely upon the interaction of its normal and its shears. Any two points with the same Ricci scalar have identical descriptions, masses, energies, and rates of change in masses and energies. They are therefore the same point.
Our Euler and Gibbs–Duhem equations are well capable of analysing these measurements and their rates of change. They can test the hypothesis, formed from our data, that biological space is not flat, and that it is everywhere curved. We have already defined, measured, and recorded the values at T11, T22 and T33, which are the essential development, λ. They give us the probundance, γ = T22 = (∂S/∂U)V,{Ni} dU, and the procreativity, ψ = T33 = (∂S/∂V)U,{Ni} dV, in the Euler equation, and the dU and dV in the Gibbs–Duhem one. The shear values are then the compensatory development, L, which is the abundance, C = T12 + T21 + T23 + T32, plus the accreativity, Y = T13 + T23 + T31 + T32. These are ∑i(∂S/∂ui)U,V,{Nj≠i} dui and ∑i(∂S/∂vi)U,V,{Nj≠i} dvi respectively in the Euler equation, and ∑iμi(dvi − dmi) in the Gibbs–Duhem. The combination T01 + T10 + T11, or else T11 + T12 + T13 + T21 + T31, is the measure of Darwinian fitness. It states the change in size and shape suffered by a unit Euclidean ball so a biological cycle can be completed. This is Darwin’s theory of evolution.
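The separation of normals from shears described here can be sketched as a simple tensor decomposition. The function names and the sample array are illustrative; the index assignments follow the text’s diagonal T11, T22, T33 and its stated sums for abundance and accreativity:

```python
def decompose(T):
    """Split a 4x4 array into its diagonal normals (essential development)
    and its off-diagonal shears (compensatory development)."""
    normals = [T[i][i] for i in range(4)]
    shears = [T[i][j] for i in range(4) for j in range(4) if i != j]
    trace = sum(normals)   # the contraction analogous to the Ricci scalar
    return normals, shears, trace

def abundance(T):
    # C = T12 + T21 + T23 + T32, as stated in the text
    return T[1][2] + T[2][1] + T[2][3] + T[3][2]

def accreativity(T):
    # Y = T13 + T23 + T31 + T32, as stated in the text
    return T[1][3] + T[2][3] + T[3][1] + T[3][2]

# An ideal, flat 'Franklin' population shows no shears at all:
flat = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
normals, shears, trace = decompose(flat)
assert all(s == 0.0 for s in shears) and trace == 4.0
```

Any non-zero shear signals energy spent on compensatory rather than essential development, which is the curvature the text goes on to discuss.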
The four evolutionary potentials within the biological stressenergy tensor are the biological statements of the Ricci scalar. They declare the forces that curve biological massenergies and biological spacetime. They determine the type and behaviour of all the molecules and chemical bonds over all the members in any given population or species.
Table 9 gives Brassica rapa’s four evolutionary potentials as measured from its unit engenetic equilibrium population:

• Haeckel potential, ηH: multiple 3,110.4; deviation 0.625; minimum 9 days per biomole; maximum 36 days per biomole.
• Darwin potential, ηD: multiple 1; deviation 0.121; minimum 0.662 biomoles per second; maximum 1.096 biomoles per second.
• Mendel potential, ηM: multiple 9.248 × 10¹⁰; deviation 0.427; minimum 1.171 × 10⁻³ grams per second; maximum 1.049 × 10⁻¹ grams per second.
• Gibbs potential, ηG: multiple 8.970 × 10¹²; deviation 1; minimum 6.740 × 10⁻³ kilogrammes per joule; maximum 1.120 × 10⁻² kilogrammes per joule.
• Evolutionary potential, η: multiple 9.062 × 10¹²; deviation 1.173; total 1.063 × 10¹³.
All populations with exactly the same Darwin potential, ηD, will have the same numeracy and rate of change of numeracy across their entire generation length. It states a specific numeracy all across τ, i.e. across the generation length when measured relatively, and so from zero to unity.
The Darwin potential does not stamp a population as unique. Other populations could easily have the same Darwin potential yet have different absolute generation lengths. We must combine the Darwin and Haeckel potentials. The Haeckel potential, ηH, states both (a) the total quantity of energy available to the population; and (b) the overall timescale over which that energy is distributed. We then have a clearer statement of biological inertia in darwins, as well as an absolute number distribution over an absolute time span.
Although we now have a statement of both watts over the population and joules per entity, we do not yet know how that energy is apportioned between mass and energy. The Mendel potential, ηM, then tells us, for every moment t over T, both the quantities and the types of chemical components, in moles and in kilogrammes, of which the entities are composed. This is also a statement of their DNA.
Finally, the Gibbs potential, ηG, states the exact biothalpy and work rate the population delivers, and therefore also states how the components are configured so they deliver a specific Wallace pressure at every point.
The sum of all the four evolutionary potentials—i.e. η = ηH + ηD + ηM + ηG—is then a unique scalar that states the complete record of all the masses and energies for any population. It is derived entirely from number, mass, chemical configuration and energy, and yet also states every trait objectively. Brassica rapa’s value is:
η = 9.062 × 10¹² × 1.173 = 1.063 × 10¹³,
and it is unique. No other population can have this value without also being made of the same components, behaving in the same way, over the same numbers, over the same time interval, and with the same energy.
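The printed figures can be cross-checked. Assuming, as the sum η = ηH + ηD + ηM + ηG suggests, that the evolutionary multiple is the sum of the four potentials’ multiples from Table 9 (the Gibbs term dominates at this scale), the numbers are self-consistent:

```python
import math

# The four multiples as printed in Table 9:
haeckel, darwin, mendel, gibbs = 3_110.4, 1.0, 9.248e10, 8.970e12

# The evolutionary multiple is consistent with their sum
# (the Haeckel and Darwin contributions are negligible at this scale):
eta_multiple = 9.062e12
assert math.isclose(haeckel + darwin + mendel + gibbs, eta_multiple, rel_tol=1e-3)

# The unique scalar stated in the text:
assert math.isclose(eta_multiple * 1.173, 1.063e13, rel_tol=1e-3)
```

Within the rounding of the printed values, the table and the quoted η agree.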
The famous “twin paradox” from special relativity now has a particular biological meaning. In that paradox, which highlights the complete equivalence of timelike and spacelike distances across spacetime, the light cones of two terrestrial twins overlap. Ordinary material interactions between them are therefore possible. One twin then journeys into outer space in a high-speed rocket. Ordinary material interactions are now not possible. Upon returning, the travelling twin finds that he or she has aged less than the identical twin who meanwhile remained stationary on Earth.
The special theory of relativity, taken alone, cannot resolve this twin paradox. It deals only with inertial frames in constant motion relative to each other. The general theory, which deals with accelerating frames, is needed for the resolution. By that general theory, one twin and his or her light cone have together experienced tremendous accelerations and decelerations relative to the other twin. The two have thus behaved differently relative to both spacetime and each other.
The resolution to the twin paradox is that the earthbound twin, being stationary relative to Earth, has travelled a lesser spacelike distance, but has paid for this by travelling the greater timelike distance. The spacebound twin has meanwhile travelled far greater spacelike distances, accelerating through spacetimes of much greater intensity and curvature. He or she has therefore travelled much less far in timelike distances, and so has aged less relative to the stationary twin who remained on Earth.
The biological version of this twin paradox is a Rassenkreis or “ring of species”, a longstanding biological conundrum that can also only be resolved with the general theory. A ring of species is a similar, biological, journey through mass, energy and time and which contrasts the special and the general relativities.
A population’s biological cycle—i.e. its movement through time—is governed by its Weyl tensor … which is its wave function. This is composed of three parts: the engenetic burdens of fertility, of components mass, and of conformation. Since these always have both given values and given rates of change, they together establish both timelike and spacelike behaviours. They establish the population’s three constraints of constant propagation, size, and equivalence, all of which are spacelike values at every point across the generation, which is then also a set of timelike values. These spacelike and timelike values taken together create a cone of reproductive accessibility. All entity biotrails within that cone fall upon the same population biopath. Since the axes of measure remain constant, a species is a manifestation of the Minkowski light cone within the special theory. All species are groups of entities that share the same inertial frames, all of which remain in constant motion relative to each other, and as defined by their cones of reproductive accessibility and inaccessibility again relative to each other.
A species and its cone of reproductive accessibility are established when the three engenetic burdens of fertility, φ, of components mass, κ, and of conformation, χ, come together to produce an instantaneous wave interference for a given biological potential, μ, and Weyl tensor. A wave interference is:
The process whereby two or more waves of the same frequency or wavelength combine to form a wave whose amplitude is the sum of the amplitudes of the interfering waves (Parker, 1998).
This interference results from the superposition of the three parts under linear algebra. But since a biological population is a wave of probability based upon n, m, and V, all the entities extant within any cone at any time have the same source. This is the population from which—and by parallel transport—they all emerge. Their vectors are the same length, and they point constantly in the same direction in the firmament. All these entities whose biotrails lie upon the same biopath, and so that fall within a given cone of reproductive accessibility, initially have the same proposed wavelength, frequency, and amplitude, which is (a) the same proposed numeracy or number flux, Q, at every t over T for the cycle; (b) the same mass flux, M, and therefore the same average individual mass, m̅, at all those ts; and (c) the same engeny flux or Wallace pressure, P, which is then the same energy density or visible presence, V, the same engeny, S, and the same average individual energy, p̅.
The three engenetic burdens also establish the same curvatures for the entire generation. These are immediately the same rates of change in their various quantities … which are again the same timelike movements over each of the spacelike values. We can therefore now define a species as a collection of biological entities all of whose biotrails lie upon the same biopath, and that therefore maintain the same cone of reproductive accessibility upon the quantum hyperplane of the present.
The maxim of apportionment—∇ × H = ∂m̅/∂t − ∂n/∂t − ∂V/∂t, which is Maxim 4 of ecology—declares the rate of change of engeny, which is dS and so the biological potential, μ. This biological potential is the instantaneous expression of all apportionments, including any shears. It is also the Weyl tensor, and so is both responsible for, and the result of, an interference amongst its three components. That interference must be constructive, destructive, or intermediary.
Where species manifest the special theory, speciation manifests the general theory. These spacetime differences in waves, and so in the Weyl and the Ricci tensors, give rise to Darwin’s variations. They are clearly demonstrated by the herring gull, Larus argentatus, and the lesser black-backed gull, L. fuscus, which together create a ring of species.
Both gull species Larus argentatus and L. fuscus live in the oceans around Great Britain and the eastern North Atlantic (Hanson, 1981). In or around the Pleistocene epoch, some 2½ million years ago, various subgroups of an original L. argentatus population branched out and settled all around the northern polar regions in a huge geographic circle stretching from the Aral and the Caspian Seas to northern Europe, onwards to the Mediterranean, Siberia, Alaska, and into northern America. Since these are vast geographic distances, local subpopulations reproduced preferentially with each other and formed distinct reproductive enclaves. The Caspian gull, L. cachinnans, is one such distinct population. This seems, from its DNA, to be an offshoot of an original L. argentatus population that stayed near the Aral and Caspian seas. It has a subspecies, the Steppe or Baraba gull, L. cachinnans barabensis. Then there is L. glaucoides, the Icelandic gull, living in the Arctic Ocean, near Baffin Island, which has its own subspecies, Kumlien’s gull, L. glaucoides kumlieni, around Canada’s Arctic coasts. L. cachinnans seems to have given rise to L. fuscus, the lesser black-backed gull, which then expanded outwards, leaving L. cachinnans in its original territories. Meanwhile, the original L. argentatus continued surviving both on the eastern coasts of Siberia and in Alaska. From there, it gradually spread into North America, approaching the L. fuscus territories. Another genetically distinct variety is L. grucoides, a North American form of the original L. argentatus, which also met and bred with L. fuscus.
The Association of European Rarities Committees recognizes six Larus types: Larus argentatus, the European herring gull; L. smithsonianus, the American herring gull; L. cachinnans, the Caspian gull; L. michahellis, the yellow-legged gull; L. vegae, the Vega gull; and L. armenicus, the Armenian gull (AERC TAC, 2003). But in 2010 the British Ornithologists’ Union Records Committee stopped recognizing the Caspian gull, insisting it was not truly distinct; whereas earlier, in 2003, it had admitted Audouin’s gull, Larus audouinii, to the British list on the grounds that a single gull had wintered on British territories for a second successive season (Birdguides, 2004). There is also debate about the Mongolian gull and whether it should be titled L. vegae mongolicus or L. cachinnans mongolicus, which questions whether it is a subspecies of either the Caspian or the East Siberian gulls. Some argue, however, that it is a distinct species in its own right, and that it should rightly be called L. mongolicus.
As is common in such rings of species, no single gull or gull type traverses the entire ring. It is nevertheless possible to trace, geographically, populations and subpopulations back and forth around these latitudes that do still seem to breed … even though, at the putative beginnings and ends of the ring, there seem to be sets of populations that simply do not interbreed, and where the genetic and reproductive chain genuinely seems to be broken. It is a matter of continuing debate whether (a) this entire ring should be classified as one species, despite the fact that distinct and identifiable subpopulations within it cannot then breed; or (b) the ring should be divided into separate species, even though viable hybrids would then inevitably be found (and are found) that would not (and do not) strictly belong to any one species so decided.
The salamander Ensatina eschscholtzii stands as another example of the general theory of relativity’s ability to produce a ring species. There are various salamander types found in a range extending from British Columbia, in Canada, southwards through Washington, Oregon, and California in the USA, down into Mexico (Hanson, 1981; CaliforniaHerps, 2009). California contains the famous Great Central Valley, with salamanders at both the northern and southern ends. There are distinct types both north and south, and also on the western and eastern sides. There are at least seven subspecies, some of which cannot interbreed. It is possible to start at the south end of the valley, head north on the eastern side, picking interbreeding types all the way, and then return south on the western side, continuing to do the same. By the time the ring is completed there will have been so many changes amongst the salamander subspecies that the initial salamander types selected cannot interbreed with the final ones, even though mutual interbreeding occurs continuously around the ring in either direction.
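The ring structure just described can be made concrete with a toy model in which neighbouring populations interbreed while the ring’s two ends do not. This is only an illustrative sketch; the population labels and the adjacency rule are assumptions for demonstration, not real taxonomy:

```python
# Populations arranged around a ring, in breeding order. The chain is
# "broken" between the first and last, as with the gulls and salamanders.
RING = ["A", "B", "C", "D", "E", "F", "G"]

def can_interbreed(i: int, j: int) -> bool:
    """Only immediately neighbouring populations can interbreed directly."""
    return abs(i - j) == 1

def gene_flow_possible(i: int, j: int) -> bool:
    """Genes can still pass between any two populations via intermediates."""
    lo, hi = sorted((i, j))
    return all(can_interbreed(k, k + 1) for k in range(lo, hi))

print(can_interbreed(0, len(RING) - 1))      # False: the ring's ends do not breed
print(gene_flow_possible(0, len(RING) - 1))  # True: gene flow via the chain
```

The model reproduces the conundrum: direct breeding fails at the ends, yet every population is genetically linked to every other through the chain, so neither a one-species nor a many-species classification is entirely clean.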
Variations in size and scale of this kind, and their possible effect on speciation, are, of course, the main bone of contention with Darwin’s theory. The proponents of Darwinian evolution say that the evident rings of gulls, salamanders, and others—with some breeding in one area and others failing to breed in another—are how new species begin. The anti-Darwinians, however, utterly deny that suggestion.
We have already proven, in our experiments with Brassica rapa, that when the number of entities, n, increases, and/or when the number of moles of molecules retained, q, decreases, then the generation length, T, also decreases. We also noted immediate changes in the rate of chemical processing, w, which increased as T decreased. Therefore—and contrary to assumptions made in the Dodd experiment—differences in biological cycles are not defined geographically. They are instead defined by differences in the function f(n, q̅, w̅), and in the timelike and spacelike differences these three variables create. A ring species is a collection of entities where accumulated differences in the spacelike variables of n, m and w produce timelike differences in T, which then produce relative reproductive inaccessibilities through a destructive interference between their values. In line with the general theory, those net differences are sufficient to induce differences in the curvature in spacetime.
To constantly select new entities from different and nearby sections of a ring is to take a biological journey through different spacetimes and their axes and curvatures. It is to select entities from differing biological spaces. There will then be different spacelike curvatures and measures, which are differences in numbers, mass, and energy. These will then invoke timelike differences in rates. Different measures across all axes—including in generation time—will result, producing this biological version of the twin paradox through general relativity.
Selecting new entities from different sections of a ring invokes the general theory because it juxtaposes different curvatures and axes in spacetime. If the mass-energies confronting a population are consistently different from the mass-energy potentials encoded in its Weyl tensor—which is effectively its DNA—then its actual values, which are its Ricci tensor, will diverge, over a range, as it interacts with its firmament or environment. This difference in the Ricci tensor amounts to differences in the fluxes of Q, M and P. These are in their turn differences in biological potential, which is an immediate difference in the Weyl tensor.
By the principle of superposition, a wave’s total displacement at any point is the vector sum of the individual displacements of the individual waves comprising it. This superposition produces constructive interference when crests coincide; but it produces destructive interference when crests coincide with troughs. A destructive interference gradually brings numbers, masses and energies down to zero. If the phase difference is intermediate then there is a spread of values between the fully constructive and the fully destructive. Thus a change in the values of f(n, q̅, w̅) is a journey in spacetime that has both timelike and spacelike consequences. Any such changes will—by the general theory—eventually lead to speciation.
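The superposition principle described above is easy to demonstrate numerically. The sketch below sums two equal sine waves, once in phase and once half a cycle apart; the amplitudes and the sample point are illustrative choices:

```python
import math

def superpose(amplitude: float, phase_difference: float, t: float) -> float:
    """Total displacement of two equal waves at time t, given a phase offset:
    the point-by-point sum required by the superposition principle."""
    return (amplitude * math.sin(t)
            + amplitude * math.sin(t + phase_difference))

t = math.pi / 2  # sample at a crest of the first wave
print(superpose(1.0, 0.0, t))      # ≈ 2.0: crests coincide, constructive
print(superpose(1.0, math.pi, t))  # ≈ 0.0: crest meets trough, destructive
```

Any phase difference between 0 and π yields a total between these two extremes, which is the intermediate spread of values the paragraph describes.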
We can again rigorously define a “true species”. This requires only the special theory. A true species is a stationary and repeating wave of the biological potential, μ, or Weyl tensor. This standing wave creates a continuously maintained Minkowski light cone for that species. It is shared by all members. All their biotrails then lie upon the same biopath. Their standing wave arises through a destructive interference at both the beginning and the end of that population’s generation length. It defines the Minkowski spacetime cone of their mutual biological and reproductive accessibility.
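The standing wave invoked in this definition has the textbook form in which two identical waves travelling in opposite directions sum to 2·sin(kx)·cos(ωt), a wave whose nodes stay fixed in space. A minimal sketch, with unit wave number and frequency assumed:

```python
import math

def travelling_pair(x: float, t: float, k: float = 1.0, w: float = 1.0) -> float:
    """Sum of two equal waves moving in opposite directions."""
    return math.sin(k * x - w * t) + math.sin(k * x + w * t)

def standing_form(x: float, t: float, k: float = 1.0, w: float = 1.0) -> float:
    """The equivalent standing-wave form 2*sin(kx)*cos(wt)."""
    return 2.0 * math.sin(k * x) * math.cos(w * t)

x, t = 0.7, 1.3  # arbitrary sample point in space and time
print(abs(travelling_pair(x, t) - standing_form(x, t)) < 1e-12)  # True
print(abs(travelling_pair(math.pi, t)) < 1e-12)  # True: x = pi/k is a fixed node
```

The fixed nodes play the role the text assigns to the destructive interference at the beginning and end of the generation length: zero points that remain stationary while the wave repeats.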
As in Figure 43, a true species has definite extrema for n, m and w. A true species thus has a full consonance between its wave and its particle expressions, and so between its tripleset of extrema. All entities that could occupy any intermediary region immediately outside its extrema, and so beyond the limits set by its generation length, are unable to become, or to remain, viable or to breed. The mass, energy, and number density implied by those locations are disallowed through destructive interference between the engenetic burdens of fertility, components mass, and conformation. Other entities cannot then interact either timelike or spacelike to gain entry to the species’ cone of reproductive accessibility. They cannot reproduce with any entities already inside the cone to produce viable offspring. It is this destructive interference at both the beginning and the end of the generation length—and so outside the extrema—that produces the standing wave of the Weyl tensor and the biological potential, and that creates the cone of reproductive inaccessibility with respect to all other entities.
The boundaries of these cones of reproductive accessibility—and so inaccessibility—are formed only of quantum probability. They are measured as joules for the reproductive potential, A, and kilogrammes per joule for the visible presence, V; but as seconds per biomole for the Weyl tensor. They are then expressed as the three material fluxes, but particularly including biomoles per second for the resulting Ricci tensor. They are thus the relativistic mass-energy appearing as the three engenetic burdens, the three constraints, the three fluxes, and the four evolutionary potentials.
Since the boundaries around species are based only on quantum probability, they are not impermeable. There will therefore be cases, such as with ring species, where the destructive interference at the beginnings and/or ends of the generation has yet to run to completion. The boundaries around such species are then demonstrating their intrinsic permeability. Populations whose numbers, masses and energies fall midway between a fully constructive and a fully destructive interference could well then admit, into their cones of reproductive accessibility, members that originate in other and nearby populations and species. Extant members within a cone may then also find that they can breed successfully with others not originating within their cones.
This proposed definition for a species can be easily tested by experiment. It is, however, necessary to create not just one, but two new species: one for each extremum tripleset. We maintain a control subpopulation in its normal environment. Its entities will continue to breed and to provide entities exhibiting the normal values. They may not, however, breed with any entities in the two subpopulations extracted from it, and especially not if introduced as immigrants.
Either Brassica rapa or Chorthippus brunneus can provide an example set of values. Our B. rapa experiment began with n = 4 seeds per pot. Since this was below the equilibrium age distribution values, numbers and energy density both increased. But conversely, mass and generation length values both decreased until there were n = 14 seeds per pot. All values then reversed as the population collapsed from that maximum value back to n = 5 seeds per pot. We therefore now know B. rapa’s upper and lower bounds.
By the general theory, all mass and energy curves space and time. Our proposed experiment in speciation is based on Friedmann’s realization that the universe is expanding. As evidenced by the clumping of galaxies and galaxy clusters, the associated distributions and/or expansions of mass and energy across space and time, as the universe expands, cannot happen linearly and uniformly, and are far from Euclidean.
We now bring together the special theory as it relates to species, and the general theory as it relates to speciation. Speciation is now the testing, via the general theory, of the lower and the upper bounds that, by the special theory, are placed around species. The general theory states that masses and energies curve and bend the axes and the spacetimes that—via the special theory—species attempt to maintain about themselves.
Every species is defined by Z, m̅ and p̅, with T, the generation length, then measuring the total quantity of mass-energy about the boundary set by the Liouville theorem, while m̅ and p̅ are set by the Helmholtz decomposition theorem and are directional directives. We then test the boundaries these two triplesets of extrema form about the population. We do so by extracting two subpopulations from the control, and placing them in their own environments of known properties.
The most technically difficult component of the tripleset to test is the chemical configuration: the visible presence and the engenetic burden of conformation. The first subpopulation should be deliberately fed a stressful diet, or else be otherwise placed in circumstances known to be chemically inimical … but that also enable it to advance reproduction. The chemical configurations provided should hover constantly close to the known minimum tolerable values for that species or population, so as to force it to emphasise its rate of movement along the generation length, which is its rate of change in the work rate. The diet should seek to shorten T. This is to impose as great a competitive stress as possible via the average individual pressure, p̅.
The subpopulation is additionally stressed by removing from it all larger-than-average entities, and then substituting for them smaller-than-average entities taken from the control group. Those smaller-than-average entities should also emphasise p̅ as much as possible, by always being as far advanced along T, the generation length, as possible. Larger-than-average entities are clearly showing a relative success, in spite of the stress conditions, in acquiring chemical components. This must be disincentivized. Particularly as the number of generations increases, the unstressed replacement entities should not be permitted to breed with the stressed subpopulation. Their consistently higher energy densities should simply add to the net stresses imposed upon the subpopulation’s resources. In Brassica rapa’s case this is easily effected simply by refraining from hand-pollinating any of them. Thus the sole effect of the introduced members is to increase competition, helping to raise p̅ and to lower m̅ as much as possible. This is now an added stress via average individual mass, m̅.
The subpopulation is then further stressed numerically. Our Brassica rapa experiment tells us that 14 seeds per pot forms an upper bound for this species. That upper bound also induces the shortest generation length. However, left to its own devices B. rapa tends to collapse away from this value and towards its equilibrium age distribution. We must therefore always introduce additional entities, at every necessary point, to maintain the upper bound number density over an extended series of generations. All inserted members should of course continue to be (a) as far advanced as possible upon the generation length; and (b) smaller in mass than average. This is now a further stress via n.
This first subpopulation’s generation length is now being induced, through every variable, to become as short as possible. Since a population is defined by an f(n, q̅, w̅), which establishes an appropriate phase volume, which is its T, this subpopulation must seek a set of values for a new extremum and must gradually establish a new set of upper and lower bounds.
We are now bending space and time about this population and imposing spacetime rates of curvature that are as intense as possible. We are inducing infinitesimal timelike increments in dθ and/or dt that are as large as possible per each equally infinitesimal spacelike increment imposed. We are therefore forcing this subpopulation to accelerate relative to the control group from which it was originally extracted and so to move through its cycle of the generations more quickly, relatively.
All opposite circumstances then of course hold for the second subpopulation, which must decelerate, relatively. Its circumstances should be made as favourable as possible, with the diet or environment being as enriching as possible, to help it slow its dθ and dt and to increase its generation length. This subpopulation is then further favoured by removing from it any smaller-than-average entities. These will be tending to favour changes in p̅ over m̅, which is to favour development over mass, and to decrease T … whereas we wish to give this generation length every opportunity to increase. This requires an increase in mass, for that is the way to increasingly bend or delay light. All smaller and faster moving members should therefore be replaced. We also, finally, remove a known number of entities to make the number density favourable, for to decrease numbers is another way to extend T. We must therefore keep the population constantly close to its known lower bound and constantly decrease competition. We again want to enhance this subpopulation’s ability to bend and curve light.
Einstein’s general theory of relativity at last provides the sound mathematical model that validates Darwin’s claims. Our two subpopulations are now tending to the opposite extrema exhibited by the original species. An increasing diversity in numbers, in mass, and in energy density accrues. As illustrated by Rhagoletis pomonella and the Dodd experiment, where similar results could be observed, the first circumstance will be a ring species. The Weyl tensor or wavelength—which is the tripleset of engenetic burdens—that now stretches right across our control group plus its two subpopulations will eventually be so great that a wave incoherence will set in. Each of the two subpopulations will be unable to breed with the other; but both will still be able to breed with the original control population, which is now located directly between them. If the differentiation processes continue, then the two ring ends will continue to diverge and the destructive interferences will mount. The two subpopulations will each eventually instigate their own new extrema, inserting one between themselves and their original. Each subpopulation will then have its own new set of distinctive lower and upper bounds. The final result, therefore, will be that the two subpopulations will not only fail to breed with each other, but will also fail to breed with their original population. We will then have successfully persuaded two new species to emerge from a single original.
By the general theory of relativity, no axes measuring any spacetime of any kind can remain uniform as they scale. All spacetimes can be locally linear at all points … but none can be globally linear throughout themselves. Cosmology, for example, has the well-attested “clumping” that produces galaxy clusters. Spacetime’s inability to scale linearly, and to remain smooth and uniform at every scale, means that the mass-energy clumpings that produce species are equally inevitable.
The special theory shows that the future light cone centred on every observer on the quantum hyperplane of the present is constantly growing. But since light’s singular property is its linking of time and space, this is both a timelike and a spacelike growth. Thus when a given observer has moved 10,000 years into the future, objects that were at one time close together spacelike could equally well have moved apart, again spacelike, to become 20,000 light years apart. Two different space rockets—say a Darwin I and a Darwin II—could each take off from earth simultaneously, and move in opposite directions for that time span. This timelike movement leads to an associated spacelike separation, along with all that that implies for all material-based interactions within and between them … including the biological.
By the special theory of relativity, the cone of reproductive accessibility similarly centred upon every entity and population on the quantum hyperplane of the present is also constantly growing. As the cone undertakes its timelike growth, spacelike divergences will accrue in its tripleset of variables. This means a steadily increasing diversity, or range, in (a) its evolutionary potentials: Haeckel, Darwin, Mendel and Gibbs; and (b) its three engenetic burdens: fertility, components mass, and conformation. This is an ever-increasing range in (i) energy density at each moment t over T; (ii) number density at each t; (iii) mass density, or chemical components, again at each t; and (iv) energy density, and work rate, at all ts. And … these are Darwin’s variations.
An increasing range in relative values of course also means an increasing range in absolute ones. Thus the three material fluxes in numeracy, Q, mass, M, and energy, P, must all also increase in range or breadth; as indeed must the three constraints of constant propagation, constant size, and constant equivalence.
By Riemann’s theory of curved manifolds, the geometry of spacetime is described by the metric tensor, gμν; while the curvature of spacetime at each point, or spacetime event, is described by the Riemann curvature tensor. This is itself decomposed into the Ricci and Weyl tensors.
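The decomposition mentioned here has a standard form. As a sketch, in the four spacetime dimensions of general relativity, the Riemann tensor splits into its trace-free Weyl part and terms built from the Ricci tensor and the Ricci scalar:

```latex
% Riemann = Weyl part + Ricci terms + scalar term (n = 4 dimensions);
% square brackets denote antisymmetrization over the enclosed indices.
R_{abcd} = C_{abcd}
         + \left( g_{a[c} R_{d]b} - g_{b[c} R_{d]a} \right)
         - \tfrac{1}{3}\, R \, g_{a[c} g_{d]b}
```

Here C_{abcd} is the Weyl tensor, R_{ab} the Ricci tensor, R the Ricci scalar, and g_{ab} the metric; the Weyl part is the portion of curvature that can exist even where the Ricci tensor vanishes.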
Riemannian manifolds have local, Euclidean neighbourhoods that can be considered flat, since the Riemann tensor can locally be taken to vanish. Local space is then (a) isometric, scaling uniformly and linearly; and (b) conformal, with consistent angular relations and so no ongoing changes in shape. Parallelograms do not suddenly convert to triangles; rectangles stay as rectangles; and parallel transport is really parallel.
In general relativity, the Ricci tensor is directly coupled to the quantity of matter present at any spacetime event, as well as to the pressures and stresses, or behaviour, it induces. If no mass or energy exists then the Ricci tensor has vanished at that point. But since the Weyl tensor represents that part of spacetime that can propagate across free space, all matter separated by a void can still influence all other matter gravitationally.
The Weyl tensor in general relativity is in many ways analogous to the Maxwell electromagnetic field. Just as small variations in an electromagnetic field can propagate outwards, transported through free space at the speed of light, so also can small perturbations in a gravitational field … and for exactly the same reason. The electromagnetic field, however, has an electrical and a magnetic component. Therefore, the Weyl tensor must also have an ‘electric’ and a ‘magnetic’ component, however those might be expressed. And since biological events follow all spacetime rules for the electromagnetic field and Weyl tensor, we must identify biology’s electric and magnetic components.
We saw, from the Biot-Savart law in Figure 65, that if we hold ourselves stationary relative to an electric charge, then there is no magnetic field or current element, but that one appears if we are in motion relative to that same charge. The same holds for that charge relative to ourselves. Thus in the left-hand graphic in Figure 82, we see a charge remaining stationary, relative to ourselves, in spacetime. Stationary means that wherever it goes, we also go. This granted, its electric field radiates out from it, without variation, and forms a completely static worldline. The electric field strength decreases at the rate 1/r² all around it, and all other charges inserted into it will respond identically and invariantly according to the field strength at the insertion point.
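The inverse-square falloff of the static field can be checked with a few lines of arithmetic. A minimal sketch; the constant and charge values are illustrative:

```python
# Field strength of a static point charge depends only on distance r,
# so doubling the distance quarters the strength.
K = 8.9875e9   # Coulomb constant, N*m^2/C^2
Q = 1e-6       # a 1 microcoulomb point charge (illustrative value)

def field_strength(r: float) -> float:
    """Electric field magnitude of a static point charge at distance r."""
    return K * Q / r ** 2

print(field_strength(1.0) / field_strength(2.0))  # 4.0: double r, quarter E
```

Every charge inserted at a given radius sees the same strength, which is the invariant response the static worldline above describes.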
The situation in the right-hand graphic is very different (Lowry, 2004). An electric charge—or a series of them—is moving, relative to ourselves, from Location 1 to Location 2, where it reverses and returns. There are three associated effects. Firstly, we have, at every time point, the same stationary electric field as above. Every charge is at least instantaneously stationary relative to ourselves, and the straight lines with an arrow in Figure 82 declare that field. Secondly, each charge’s spacelike movement along its wire induces a circular magnetic field about that wire, according to the right-hand rule. If the right hand wraps around the charge and wire with the thumb pointing in the direction of the current, then the field curls with the fingers. The wire itself forms a worldsheet along its length. A charge moving along the wire forms a worldline that slopes to, and intersects, the wire’s worldsheet. The associated magnetic field is carried about, and along, the wire with each charge, also in spacetime, and establishes a worldvolume of its influence. Any other charged particle inserted into, and moving within, that magnetic field will experience a magnetic force perpendicular both to its own velocity and to the magnetic field. Whether it is repelled or attracted depends upon its charge. When viewed from the wire’s frame of reference, there is only a magnetic force acting on that inserted charge … but when viewed from the inserted particle’s frame of reference, there is both an electric and a magnetic force. If, of course, that alternative charged particle is stationary, then the forces sum to zero. When the charge reaches Location 2 it reverses, which is an acceleration. This acceleration now results in the third effect, which is a wavelike radiant pulse of energy propagated through free space at the speed of light, and that has both electric and magnetic field components. The two have a fixed ratio of intensity relative to each other.
The two field components are mutually perpendicular, and are also perpendicular to the direction in which both the wave and its energy are propagated. The resulting wave or pulse increases in its breadth in spacetime to encompass more and more locations. When it reaches any suitably charged particle at any time, such as at Location 3, that particle will experience a force whose direction depends entirely on the original particle’s direction of motion.
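The perpendicularity of the magnetic force described above follows from the cross product in the Lorentz force law, F = q(v × B). A minimal sketch with illustrative vectors, represented as plain 3-tuples:

```python
def cross(a, b):
    """Vector cross product a x b in three dimensions."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def magnetic_force(q, v, B):
    """Magnetic part of the Lorentz force, F = q*(v x B)."""
    return tuple(q * c for c in cross(v, B))

v = (1.0, 0.0, 0.0)   # charge moving along x
B = (0.0, 1.0, 0.0)   # magnetic field along y
print(magnetic_force(1.0, v, B))              # (0.0, 0.0, 1.0): force along z
print(magnetic_force(1.0, (0.0, 0.0, 0.0), B))  # (0.0, 0.0, 0.0): stationary charge, no force
```

The result is perpendicular to both the velocity and the field, and a stationary charge feels no magnetic force at all, matching the frame-of-reference discussion above.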
We must now find the gravitational and biological equivalents. By the Liouville and the Helmholtz decomposition theorems, every species is defined by its Z, m̅ and p̅; by Law 1, the law of existence, every population must always do work; and by Maxim 1, the maxim of dissipation, every entity is liable to degradation. Therefore, no species can maintain a constant worldline in the manner of a constant electric field.
A pure magnetic field about a wire loops continuously about that wire and also acts exactly as a magnetostatic field, dependent on that charge. It requires, however, a constant charge moving in a given direction; and since, as above, no biological population can be steady in this manner, this too is impossible.
The third and last scenario is the propagation of a wave through free space, and so without any physical medium. This is similar to the proposal of the Aristotelian template, which suggests that biological entities and populations can be influenced as if without benefit of any material medium. This then requires that the Aristotelian template be propagated through free space to influence each population. Its influence must then also be steady and without variation from generation to generation. But as with ∇ • B = 0 of the Maxwell electromagnetic field, the continuity of the looping required makes reproduction impossible, for it is without variation at any point over the generation. Furthermore: general relativity asserts that the intensity of matter’s energy-momentum, however it is distributed locally, is the sole and entire source of the Ricci tensor. It can only ever be zero—which is required for this Aristotelian template proposal—if that local matter distribution is zero. But since, by the law of existence, biological matter can never be zero—or the species is extinct—then the Ricci tensor must always exist. And since the Ricci tensor can never be zero, its Weyl tensor can also never be zero, for otherwise the population again becomes extinct. This then becomes the Biot-Savart law. A biological current element carried through spacetime by the explicit activities of specified entities must always exist. Biological field lines, which is effectively all DNA, must exist wherever populations exist. They can never be zero. This is the curve in biological spacetime induced through, and as, the Weyl tensor. It can propagate through matter; it cannot propagate without matter … and therefore the template proposal is impossible.
And since a population forms a curving biopath that loops through spacetime as values oscillate between its extrema on all three dimensions, then the biopath is constantly accelerating … and that constant acceleration propagates as the Weyl tensor and the biological potential through the matter formed via the Ricci tensor, and according to the third and fourth maxims of ecology, which are the maxims of heritability and apportionment. These are Darwin’s variations.
Under the proposal of the Aristotelian template, a light cone’s proposed growth in breadth also requires that populations scale evenly. But they can only scale in this Euclidean manner if they are free from ‘clumping’. This requires that the Weyl tensor scales absolutely smoothly at all ranges, and over all possible ranges. The total number of entities, N, for the generation, which is ∫N dT, must hold its value, meaning that the number density at every t must increase and decrease in exact opposition, smoothly and uniformly, all about the generation. It is not permissible, for example, for x or y entities to be lost at any time without exactly x or y being replaced at another, for every generation and for all species. Since this replacement can only happen through reproduction, it must again be invariant over all generations and species. The same holds for ∫M dT and ∫dm̅ dn, which are the number and types of chemical components over the population and per each entity for all generations of all species; and for ∫P dT and ∫dp̅ dn, which are the configuration, again over the population and per each entity, and again for all generations of all species. But this then requires a completely perfect medium that is not only free from all shears, but that must be perfectly electric (Glass, 1975). Since the values for all curls must remain constant for all populations, the medium must additionally be irrotational. It must have no variations at any point. It must, in other words, also be perfectly magnetic. But if the Weyl tensor is irrotational, and without variations, then the Ricci tensor must also be without variations. And if the Ricci tensor is without variations, then there are again no growths and no changes of any values anywhere for any population. All entities are the same everywhere throughout all populations. They must have the same zero or undefined generation length, and a uniform energy density, everywhere and at all times t.
There can also be no reproductions … all of which is impossible.
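The invariance constraint just described can be illustrated numerically. The following toy sketch—our own construction, not drawn from the text, with all profiles and figures invented for illustration—shows what the template demands: the generation total ∫N dT holds its value only if every loss of entities at one t is exactly compensated at another.

```python
import numpy as np

# A toy illustration of the constraint that the generation total, N = ∫N dT,
# must be invariant: any entities lost at one moment t must be exactly
# replaced at another. Both density profiles below are invented.
T = np.linspace(0.0, 1.0, 2001)                 # one normalized generation

uniform = np.full_like(T, 100.0)                # perfectly smooth density
varying = 100.0 + 20.0 * np.sin(2 * np.pi * T)  # losses exactly compensated

# Generation totals, ∫N dT over T in [0, 1], approximated via the mean density
total_uniform = np.mean(uniform) * (T[-1] - T[0])
total_varying = np.mean(varying) * (T[-1] - T[0])
print(round(total_uniform, 6), round(total_varying, 6))  # → 100.0 100.0
```

The totals agree even though the densities differ pointwise; the argument above is that only the first, perfectly uniform profile is compatible with the Aristotelian template, and that no reproducing population can realize it.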
The general theory thus proves that a Weyl tensor—or biological potential, μ—that is free from variations is impossible for biology. It can again be neither purely electric nor purely magnetic. Only the electromagnetic variety—i.e. one susceptible to accelerations and variations—is possible. Furthermore, only a Weyl tensor transmitted in, and through, matter is possible. It is thus impossible for a cone of reproductive accessibility to move timewise without introducing spacelike differences on all three axes of mass, number and configuration energy. It is also impossible for a Weyl tensor—which is again the biological potential and the rate of change of engeny—to propagate without accumulating differences in all three aspects of the engenetic burdens of fertility, components mass, and conformation. A constant and invariant destructive interference at the edges of a cone of reproductive accessibility, yet one that still somehow respects the values of all ongoing Ricci and Weyl tensors, is simply impossible. Changes in the values responsible for destructive interferences are inevitable as the relevant Minkowski light cone, and its contained cone of reproductive accessibility, expands into the future. Changes in entities are inevitable. These are all again Darwin’s variations.
In the manner of the experiment we have described, there will therefore be such a range of values across masses, number densities, and chemical configurations that entities at either side of a ring of species cannot breed, although they can all still breed with contiguous entities located between them. The splitting of a given reproductive cone can again only be prevented if biological spacetime is flat, so that all values for mass, energy and number scale uniformly—which we have already shown to be impossible. As the Minkowski cone continues into the future, the range of values will continue to spread. Chemical and metabolic incompatibilities within each given reproductive cone are an inevitability. Therefore, and by destructive interference, given values for N, m̅ and p̅ will gradually fail to scale, and will gradually be eliminated from within any given cone of reproductive accessibility. As in a ring of species, populations can then no longer breed across the entire breadth of their cones … which is simply across a range of masses, energies and numbers. Eventually, the cone splits as distinct extrema, or lower and upper bounds, are formed. Entire cones will also die away as their own destructive interference at the extrema overwhelms any constructive interference within them. A single preexisting ring of species can then no longer interbreed across its breadth, and two reproductive cones of accessibility result, whose entities remain timelike connected to their progenitors even though they are now spacelike disconnected from each other. These are Darwin’s variations.
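The mechanism appealed to here—destructive interference eliminating given values from within a continuous band—is most easily seen in the elementary wave case. The following sketch is a standard two-wave superposition, offered only as an analogy for the cone-splitting described above; the frequencies and numbers are arbitrary choices of ours.

```python
import numpy as np

# Two unit-amplitude waves of slightly different frequency cancel completely
# wherever their phases oppose, splitting one continuous band into separated
# packets. The frequencies (50 Hz and 52 Hz) are arbitrary illustrations.
t = np.linspace(0.0, 1.0, 2001)
wave_a = np.sin(2 * np.pi * 50 * t)
wave_b = np.sin(2 * np.pi * 52 * t)
combined = wave_a + wave_b

# Beat envelope of the superposition: 2·|cos(π·(52−50)·t)|,
# which vanishes at the nodes t = 0.25 and t = 0.75.
envelope = 2.0 * np.abs(np.cos(np.pi * (52 - 50) * t))
node = np.argmin(np.abs(t - 0.25))
print(round(envelope[node], 3))  # → 0.0  (total cancellation at the node)
```

At the nodes the combined signal is eliminated entirely, even though both component waves remain everywhere nonzero—the analogue, in the argument above, of values of N, m̅ and p̅ disappearing from within a still-populated cone.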
There is a further consequence to the appearance, in biology, of the general theory of relativity. There is a particular consequence to the fact that cones of reproductive accessibility remain timelike connected to all their joint progenitors, while subpopulations at the extreme ends of any cone can simultaneously become spacelike disconnected from each other and from their respective progeny. Like Darwin’s theory of evolution, the general theory of relativity invades previously theological territory, for Darwin’s doctrine of common descent creates a far more general theory of biology.
By the Einstein formula of the special theory, E = mc², our evolutionary potential, η, is both a timelike and a spacelike measure between any two populations upon the quantum biological hyperplane of the present. It states, in the required objective fashion, any Remane or cladistic distance separating any given population from any other. Since the evolutionary potential is the equivalent of the light year, it is a statement of a specified time span and number of generations over which the populations have diverged. As well as being their spacelike separation, the evolutionary potential is their timelike separation. It is a statement of mass, energy and numbers. It quantifies the entire set of parallel vector transports and Lorentz transformations that separate them.
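The timelike/spacelike distinction and the Lorentz transformations invoked here are standard special relativity. The sketch below records that standard bookkeeping (in units with c = 1, and with invented sample separations) purely to make the distinction precise; it makes no claim about how η itself is computed.

```python
import math

# Standard special-relativity bookkeeping, to make precise the timelike vs
# spacelike distinction used in the text. Units with c = 1; sample numbers
# are invented for illustration.
def interval_squared(dt, dx):
    """Minkowski interval s² = dt² − dx² (signature +,−)."""
    return dt * dt - dx * dx

def classify(dt, dx):
    s2 = interval_squared(dt, dx)
    if s2 > 0:
        return "timelike"    # causal (e.g. ancestral) connection possible
    if s2 < 0:
        return "spacelike"   # no causal connection possible
    return "lightlike"

def boost(dt, dx, v):
    """Lorentz boost of a separation by velocity v (|v| < 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx), gamma * (dx - v * dt)

# A separation is classified identically in every boosted frame, because the
# interval is Lorentz-invariant:
dt, dx = 5.0, 3.0                          # timelike separation (dt > dx)
bt, bx = boost(dt, dx, 0.6)
print(classify(dt, dx), classify(bt, bx))  # → timelike timelike
print(round(interval_squared(bt, bx), 9))  # → 16.0, same as the unboosted value
```

The invariance of the interval under boosts is what licenses the text’s claim that such a separation is an objective measure, the same for every observer.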
We have already seen how Friedmann was the first to realize that the Einstein theory turned the cosmos into a self-contained and consistent system for discussing the universe’s long-term behaviour. He saw that general relativity implies the constant expansion of the universe. He saw that scientific analysis now stretches both forwards into the future, and backwards into the past: i.e. towards the Big Bang.
Friedmann pointed out that if the universe is expanding, then all time and matter must have emerged from a distinct location. Darwin in his turn pointed out that natural selection leads to a series of slight and heritable variations that not only, in the long term, originate an entire species … but that also have implications for the long term and the distant past. By his doctrine of common descent, there exists, for all extant terrestrial biological entities, a reproductive cone of accessibility in the past of them all, and that is common to them all. The distance to that past cone is measured by our already given evolutionary potential, η … which then also immediately suggests the most probable date and time of origin for them all, as in Darwin’s Origin of Species. This is the general theory of biology.
All these propositions can now be fully investigated using standard scientific methods and techniques. We have now demonstrated the full unity of all parts of science within biology. We have also delivered the promised fully quantum and relativistic biology.