The first essay in this series argued that the collapse of the universe from quantum superposition into classical reality and the emergence of awareness were the same event. The second argued that possibility is the foundational condition of existence and that actuality is derived from it. Both essays operated primarily within philosophy, drawing on physics for convergence and support but not attempting to apply the ontology systematically to physical theory.
This essay asks a different question. If possibility is ontologically prior to actuality, what happens when you read the major frameworks of modern physics through that lens? The question is not whether the ontology solves open problems in physics. In most cases, it does not. The question is whether it changes what those problems mean, what the frameworks are describing, and what would count as a satisfying answer. In some cases, the reframing is illuminating. In others, it is merely suggestive. In a few, it has no purchase at all. I have tried to be honest about which is which.
The Problem of Time
The strongest contact point between the ontology of possibility and existing physics is the Wheeler-DeWitt equation, and the reason it is the strongest is that the equation describes exactly what the ontology predicts, while the standard interpretation treats the same description as a crisis.
The Wheeler-DeWitt equation is the result of applying quantum mechanics to the universe as a whole. It takes the form ĤΨ = 0, where Ĥ is the Hamiltonian operator (the mathematical object encoding the total energy and dynamics of a system) and Ψ is the wavefunction of the universe. In standard quantum mechanics, the Hamiltonian governs how a system evolves over time. The Schrödinger equation, iℏ ∂Ψ/∂t = ĤΨ, says that the Hamiltonian determines the rate of change of the wavefunction. But in the Wheeler-DeWitt equation, the entire expression equals zero. There is no time derivative. The wavefunction of the universe does not evolve. It does not change. It sits there, static, frozen, describing a totality in which nothing happens.
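The formal point can be checked in a few lines: under Schrödinger evolution, any state annihilated by the Hamiltonian is exactly stationary, while a generic state is not. A minimal numerical sketch, with an arbitrary toy Hamiltonian standing in for Ĥ (it models nothing about the universe beyond having a state it annihilates):

```python
import numpy as np

# Toy check: if H|psi> = 0, then |psi(t)> = exp(-iHt)|psi(0)> = |psi(0)>.
# H is an arbitrary Hermitian matrix built to be rank-deficient (A^T A with
# A of rank 2), so it is guaranteed one zero eigenvalue.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
H = A.T @ A                       # Hermitian, rank 2: one zero eigenvalue

E, V = np.linalg.eigh(H)          # eigenvalues in ascending order
psi0 = V[:, 0]                    # the state with E ~ 0: the "frozen" state

def evolve(psi, t):
    """Apply exp(-iHt) via the eigendecomposition of H (units with hbar = 1)."""
    return V @ (np.exp(-1j * E * t) * (V.conj().T @ psi))

psi_t = evolve(psi0, t=5.0)
print(np.allclose(psi_t, psi0))                      # True: annihilated state does not evolve

generic = np.array([1.0, 0.0, 0.0])
print(np.allclose(evolve(generic, 5.0), generic))    # False: a generic state does
```

The Wheeler-DeWitt equation asserts that the wavefunction of the universe is such an annihilated state, which is why no external time parameter survives in it.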
This has been regarded as a catastrophe. Physicists call it the “problem of time,” and it has been an unresolved issue in quantum gravity since Bryce DeWitt published the equation in 1967. The universe obviously has time in it. We experience temporal succession. Things change. Stars form and collapse. Entropy increases. If the fundamental equation describing the universe says nothing changes, something has gone wrong. The standard response has been to look for ways to recover time from the timeless equation, to find a parameter within the formalism that can play the role of a clock. Various proposals exist: the WKB approximation, the Page-Wootters mechanism, relational time. None is universally accepted. The problem of time remains open.
The ontology developed in the preceding essays dissolves the problem rather than solving it. If possibility is the foundational condition of existence, and if possibility is a-temporal (as argued in the second essay from the convergence of the proofs from existence, from what was, and from what will be), then the Wheeler-DeWitt equation is not broken. It is correct. It is describing possibility space. The wavefunction of the universe, Ψ, is not a description of actualised reality frozen in place. It is a description of the a-temporal ground from which actualised reality emerges. It equals zero because the foundational condition does not evolve. It does not need to. Possibility does not change with time because time is not a feature of possibility. Time is a feature of actualisation. It arises when one configuration is selected from the field of all configurations. The temporal order we experience is the internal ordering of the actualised configuration, not a property of the possibility space from which that configuration was selected.
This reframing changes what would count as a solution to the problem of time. The standard approach asks: how do we recover time from the timeless equation? The possibility-first approach asks: how does experienced time emerge as an ordering parameter of sympathetic resolution? These are different questions with different research programmes. The first tries to extract time from the formalism. The second treats the formalism as already correct and asks how actualisation produces temporal structure within a fundamentally a-temporal ground. The Gaona-Reyes model discussed in the first essay is relevant here, because it describes a process in which the variance across possible configurations narrows over a parameter that functions like time within the dynamics but is not a background time in the external sense. The “time” in their collapse equation is internal to the process, not a pre-existing stage on which the process occurs. That is exactly what the possibility-first ontology predicts: time is a parameter of resolution, not a container for it.
It should be noted that the Wheeler-DeWitt equation belongs to the programme of canonical quantum gravity, which is one approach among several. It competes with string theory and causal set theory, among others, and survives in modified form within loop quantum gravity, which descends from the canonical programme. The convergence I am identifying is with a specific equation within a specific approach, not with settled physics. But the convergence is precise: if possibility is a-temporal, a timeless equation describing the wavefunction of the universe is not a puzzle to be fixed. It is a confirmation.
Entanglement
Quantum entanglement is the phenomenon in which two or more particles become correlated in such a way that the quantum state of each particle cannot be described independently of the others. When particles are entangled, measuring the state of one instantaneously determines the state of the other, regardless of the distance separating them. This has been experimentally confirmed beyond any reasonable doubt, most decisively in the experiments that earned Alain Aspect, John Clauser, and Anton Zeilinger the 2022 Nobel Prize in Physics.
The standard account of entanglement is mathematically precise and empirically unassailable. Two entangled particles are described by a single, non-factorisable quantum state. Measurement of one particle collapses the shared state and determines the outcome of any subsequent measurement on the other. Bell’s theorem proves that no theory of local hidden variables can reproduce the correlations that entanglement produces. The correlations are real. They are non-local. They do not involve any signal travelling between the particles.
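The scale of the violation Bell's theorem licenses can be made concrete. For the singlet state, quantum mechanics predicts correlations E(a, b) = −cos(a − b) between measurements at analyser angles a and b, and the CHSH form of the theorem bounds any local hidden-variable account at 2. A short sketch, using the standard optimal angles (these numbers are textbook quantum mechanics, not anything specific to this essay):

```python
import numpy as np

# Quantum prediction for singlet-state correlations at analyser angles a, b.
def E(a, b):
    return -np.cos(a - b)

# CHSH combination: any local hidden-variable theory satisfies |S| <= 2.
# Quantum mechanics reaches 2*sqrt(2) (the Tsirelson bound) at these angles.
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, 3 * np.pi / 4
S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
print(f"S = {S:.4f}  (classical bound: 2)")
```

The violated bound is what rules out local hidden variables; the experiments cited above measure exactly this kind of combination.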
What the standard account does not provide is a satisfying explanation of why these non-local correlations exist, or what mediates them. The mathematics describes the correlations perfectly. It does not say what the correlations are correlations of. The correlations exist. They are not transmitted by any known force, field, or signal. They violate no physical law, because no information is transmitted faster than light (the outcomes on each side appear random until the results are compared). But the correlations are there, and they require an explanation that the formalism alone does not supply. Einstein called it “spooky action at a distance” and took it as evidence that quantum mechanics was incomplete. Most physicists today accept that the spookiness is genuine and that no local explanation exists. The question of what grounds non-local correlation remains open.
Under the ontology of possibility, entanglement has a natural reading that does not contradict the mathematics but reinterprets what the mathematics is describing. Two entangled particles originate from the same event: a single interaction that produced both. In the standard account, this shared origin creates a shared quantum state. In the possibility-first account, the shared origin means the two particles were actualised from the same region of possibility space. Their configurations are not connected through physical space. They are connected through possibility space, which is prior to physical space and not subject to spatial constraints. The correlation between their measurement outcomes persists because their sympathetic connection persists. The connection is non-local because possibility space is not spatial. There is no distance to cross, no signal to send, no medium through which influence must travel. The particles remain sympathetically linked through the same possibility space from which they emerged, and that linkage is not affected by their subsequent separation in actualised spacetime.
This reading does not change any prediction. The Bell inequalities are still violated. The correlations are still exactly as quantum mechanics predicts. What changes is the interpretation of why non-locality exists. In the standard account, non-locality is a brute feature of quantum mechanics that must be accepted without deeper explanation. In the possibility-first account, non-locality is expected. Actualised spacetime is a constrained, downstream expression of a possibility space that has no spatial structure. Things that are connected in possibility space remain connected regardless of their spatial separation in actuality, because spatial separation is a feature of the actualised configuration, not of the possibility space that underlies it. Entanglement, in this reading, is not a strange exception to an otherwise local reality. It is a visible remainder of the fact that actuality is still embedded in, and threaded by, the deeper field from which it condensed. The “spookiness” is an artefact of trying to understand a non-spatial connection using spatial intuitions. Once you recognise that the connection operates at a level that is prior to space, the spookiness dissolves. Not because the phenomenon is explained away, but because the framework within which it appears anomalous has been replaced by one within which it is exactly what should be expected.
There is a further implication worth stating explicitly. If entanglement is a visible instance of sympathetic contact between configurations that share an origin in possibility space, then it is evidence for the broader claim made in the second essay: that the actualised configuration of reality remains in sympathetic contact with other configurations that persist in possibility space. Entanglement, in this reading, is not the exception. It is the most easily measured instance of a general condition. The non-local correlations between entangled particles are the same in kind as the sympathetic resonance between the actualised configuration and the non-actualised configurations that remain possible. The difference is that entangled particles make the contact measurable, because the correlations show up in experimental outcomes. The broader sympathetic contact between actualised and non-actualised configurations may not produce measurable correlations of that specific kind, but the ontological mechanism is the same.
Quantum Tunnelling
In standard quantum mechanics, a particle encountering a potential barrier that it classically cannot cross nevertheless appears on the other side with a calculable probability. The standard explanation is precise: the wavefunction does not stop at the barrier. It extends through it with exponentially decaying amplitude, and on the far side the amplitude is non-zero. There is therefore a non-zero probability of detecting the particle beyond the barrier. The mathematics is well-established and the predictions are confirmed to extraordinary accuracy. Tunnelling is not a theoretical curiosity. It is the mechanism behind nuclear fusion in stars, the operation of tunnel diodes, and the scanning tunnelling microscope.
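The exponential decay mentioned above is quantitative. In the thick-barrier limit the transmission probability is approximately T ≈ exp(−2κL), with κ = √(2m(V − E))/ℏ for a particle of energy E facing a rectangular barrier of height V and width L. A sketch with illustrative round numbers (a 1 eV electron and a 2 eV barrier, chosen for this example rather than taken from the essay):

```python
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg, electron mass
eV = 1.602176634e-19     # J per electronvolt

def transmission(E, V, L, m=m_e):
    """Approximate tunnelling probability T ~ exp(-2*kappa*L),
    valid for thick barriers with V > E."""
    kappa = np.sqrt(2 * m * (V - E)) / hbar
    return np.exp(-2 * kappa * L)

# Classically forbidden (V > E), yet T > 0 for every finite width.
for L in (0.1e-9, 0.5e-9, 1.0e-9):   # barrier widths in metres
    print(f"L = {L * 1e9:.1f} nm  ->  T ≈ {transmission(1 * eV, 2 * eV, L):.2e}")
```

Widening the barrier by a factor of ten suppresses the probability by many orders of magnitude, which is why tunnelling devices operate at nanometre scales.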
What the standard account describes but does not explain is why the wavefunction extends through the barrier at all. The classical picture says the particle does not have enough energy to cross. The quantum picture says the wavefunction does not respect this constraint. The usual response is that quantum mechanics simply works differently from classical mechanics, and the wavefunction’s extension through classically forbidden regions is one of the ways it differs. This is correct as far as it goes, but it treats the phenomenon as a brute fact about the formalism rather than something with a deeper explanation.
Under the ontology of possibility, the explanation is precise. The barrier is a feature of the actualised energy landscape. It constrains what is actualisable from a classical perspective: a particle with a given energy cannot, within the laws governing actualised trajectories, cross a barrier higher than its energy. But the barrier does not constrain what is possible. The configuration in which the particle is on the far side of the barrier remains possible at all times, because possibility cannot be negated. The second essay established this through the proof from existence: the possibility of any configuration persists regardless of whether the conditions for its actualisation are currently met. A table is possible in a universe that has not yet formed atoms. A particle on the far side of a barrier is possible in a system where the barrier exceeds the particle’s kinetic energy.
Tunnelling, in this reading, is what occurs when actualisation shifts to an adjacent configuration that remains sympathetically accessible despite being classically forbidden. The particle does not “pass through” the barrier in any spatial sense. There is no trajectory through the barrier. What happens is that the actualised state of the system transitions to a configuration that was always present in possibility space and that the barrier, being a constraint on classical actuality rather than on possibility, could not exclude. The wavefunction’s extension through the barrier is the mathematical expression of possibility persisting where actuality is constrained. The exponential decay of the amplitude reflects the degree of sympathetic distance between the configuration on this side and the configuration on the far side: the thicker and higher the barrier, the more sympathetically distant the far-side configuration, and the lower the tunnelling probability.
This reframing does not change any prediction. The tunnelling probability is still calculated by the same Schrödinger equation. What changes is the interpretive understanding of what tunnelling reveals about the relationship between actuality and possibility. Tunnelling tells us that actuality is not fully imprisoned by its own local constraints. The possible configurations adjacent to the actualised one remain accessible, and under specific conditions, the system transitions to them despite classical prohibition. That is exactly what the ontology predicts: actualised reality is embedded in a broader possibility space, and the constraints that govern actuality do not extinguish the configurations that actuality cannot classically reach.
Causal Set Theory
Of all the quantum gravity programmes, causal set theory converges most naturally with the ontology of possibility. The reasons are specific and worth developing in detail.
Causal set theory, developed primarily by Rafael Sorkin, proposes that spacetime at the most fundamental level is not a continuous manifold but a discrete set of elements with a partial ordering. The partial ordering encodes causal relations: if element A precedes element B in the ordering, then A can causally influence B. The central claim is that this ordering, together with the number of elements, is sufficient to recover the full geometry of spacetime. Distances, areas, volumes, and curvature can all be derived from the pattern of causal relations. The slogan is: “order plus number gives geometry.” No continuous background is assumed. No metric is fundamental. The geometry we experience is emergent from a combinatorial structure that is more basic than any continuum description.
The first convergence with the ontology of possibility is that causal set theory is already possibility-first in its logical structure, even though its practitioners do not describe it that way. The theory begins with the space of all possible causal sets: all possible orderings on all possible collections of elements. This is a configuration space in the precise sense defined in the second essay. Each causal set is a complete specification of a possible spacetime. The space of all causal sets is the space of all possible spacetimes. Classical general relativistic spacetime, the smooth four-dimensional manifold we observe, is not the starting point. It is a derived object, recovered in the continuum limit when the number of elements is large and the ordering exhibits the right statistical properties. The fundamental level is the space of possible orderings. The actualised spacetime is one configuration within that space. The ordering of explanation runs from possibility to actuality, not the reverse.
The second convergence is with the dynamics. The “classical sequential growth” models developed by Rideout and Sorkin describe a process in which new elements are added to the causal set one at a time. The probability of each addition is determined by the existing structure. The causal set grows by its own internal logic. No external agent selects the next element. No background spacetime guides the growth. The existing pattern determines the probabilities for its own extension. This is self-attentive resolution in a discrete setting: the system’s own configuration determines how it develops. The parallel with the Gaona-Reyes collapse dynamics is direct. In both cases, the system’s own internal structure drives a process that narrows possibilities into a specific actualised outcome. The Gaona-Reyes Hamiltonian acts on its own system. The causal set growth process is determined by the existing causal set. The mechanism differs. The principle is the same: endogenous self-interaction driving resolution.
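The simplest concrete member of the Rideout-Sorkin family is transitive percolation, which can be sketched in a few lines: each new element is placed above each existing element with a fixed probability, and the order is closed under transitivity, so the existing structure literally fixes the distribution of its own extensions. The parameter values below are arbitrary illustrations:

```python
import random

def grow_causal_set(n, p, seed=0):
    """Grow an n-element causal set by transitive percolation: each new
    element lies above each earlier element with probability p, and the
    causal order is kept transitively closed as it grows."""
    rng = random.Random(seed)
    past = [set() for _ in range(n)]   # past[i] = elements causally preceding i
    for new in range(n):
        for old in range(new):
            if rng.random() < p:
                past[new].add(old)
                past[new] |= past[old]  # transitive closure: inherit old's past
    return past

past = grow_causal_set(50, 0.1)
relations = sum(len(s) for s in past)
print(f"50 elements, {relations} causal relations")
```

No background guides the growth: the probabilities at each step depend only on the set built so far, which is the "endogenous self-interaction" the text describes.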
The third and most striking convergence is Sorkin’s prediction of the cosmological constant. Before the 1998 observational discovery that the expansion of the universe is accelerating (implying a small positive cosmological constant), Sorkin argued on the basis of causal set theory that the cosmological constant should be approximately Λ ~ 1/√N, where N is the number of elements in the observable causal set. The prediction was confirmed. This is one of the very few successful quantitative predictions in quantum gravity, and it arises from treating the cosmological constant not as a fixed parameter in a Lagrangian but as an emergent quantity that crystallises from the discrete structure. The value is not selected from a pre-existing menu of possible values. It falls out of the process by which the causal set grows. The convergence with both the Gaona-Reyes result and the ontology of Sympathetic Cosmogenesis is that all three describe a cosmological parameter that emerges through endogenous process: self-interaction in the Gaona-Reyes case, sequential growth in the causal set case, and sympathetic resolution in the ontological case.
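The order of magnitude of Sorkin's prediction can be checked with coarse round numbers, taking N as roughly the spacetime 4-volume of the observable universe in Planck units. The inputs below are rough estimates for illustration, not precise cosmological data:

```python
import math

hubble_radius = 1e26      # metres, rough radius of the observable universe
planck_length = 1.6e-35   # metres

# N ~ (R / l_P)^4, the 4-volume in Planck units; work in log10 to keep the
# numbers manageable.
log_N = 4 * math.log10(hubble_radius / planck_length)
log_lambda_pred = -0.5 * log_N            # Sorkin: Lambda ~ 1 / sqrt(N)

print(f"log10(N) ≈ {log_N:.0f}")
print(f"predicted log10(Lambda) ≈ {log_lambda_pred:.0f}")
print("observed  log10(Lambda) ≈ -122 (in Planck units)")
```

The predicted exponent lands within a couple of orders of magnitude of the observed value of roughly 10^-122 in Planck units, which is the sense in which the prediction was confirmed.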
The genuine tension between causal set theory and the ontology of possibility is subtle and worth stating. Causal set theory builds time into the growth process. Elements are added sequentially, and the sequential ordering is the source of temporal structure. The ontology of possibility says possibility space is a-temporal. These claims conflict if the sequential growth is taken as describing what happens at the level of possibility space itself. But there is a natural resolution: the space of all possible causal sets, all possible growth histories, is a-temporal. It contains every possible sequence. The growth process describes how one particular causal set is actualised from that space. Time, as experienced, is the internal ordering of the actualised history. It is not a feature of the possibility space that contains all histories. If the dynamics are read this way, causal set theory and the ontology of possibility are compatible. The a-temporal possibility space contains every possible ordering. Actualisation selects one, and the temporal structure we experience is a feature of the selection, not of the space from which the selection was made.
Loop Quantum Gravity
Loop quantum gravity quantises space itself. It proposes that at the Planck scale (roughly 10^-35 metres), space is not continuous but consists of discrete quanta of area and volume. The fundamental objects are spin networks: graphs whose edges carry quantum numbers representing units of area and whose nodes carry quantum numbers representing units of volume. Spacetime, in this picture, is not a smooth continuous manifold. It is woven from these discrete combinatorial structures. The smooth spacetime of general relativity emerges as an approximation at scales vastly larger than the Planck length, in the same way that the smooth surface of a table emerges from the discrete atomic structure of the wood.
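The discreteness is quantitative: the area contributed by a single spin-network edge carrying spin j is A = 8πγ √(j(j+1)) in units of the Planck length squared, where γ is the Barbero-Immirzi parameter. A sketch of the first few eigenvalues; the value of γ used here (≈ 0.2375, from matching black-hole entropy) is one common choice rather than a settled constant:

```python
import math

gamma = 0.2375  # Barbero-Immirzi parameter (one common choice, not unique)

def area_eigenvalue(j):
    """Area contribution of one spin-network edge with spin j, in units
    of the Planck length squared: A = 8*pi*gamma*sqrt(j*(j+1))."""
    return 8 * math.pi * gamma * math.sqrt(j * (j + 1))

for j in (0.5, 1.0, 1.5, 2.0):
    print(f"j = {j}: A ≈ {area_eigenvalue(j):.2f} l_P^2")
```

The spectrum has a smallest non-zero eigenvalue at j = 1/2, which is the precise sense in which the theory admits no area below a minimum.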
Under the possibility-first ontology, spin network states are configurations in possibility space. Each spin network is a possible spatial geometry. The kinematical Hilbert space of LQG, the space of all spin network states, is the space of all possible spatial geometries. Classical spacetime is not the starting point of the theory. It is a derived object, recovered in an appropriate limit from the more fundamental combinatorial structure. In this respect, LQG is already possibility-first in its logical ordering: the space of possible geometries comes first, and the actualised geometry is derived from it. LQG practitioners do not use the language of “priority of possibility,” but the structure of their theory already embodies it.
The ontology also speaks to a specific unresolved issue within LQG: the relationship between the kinematical Hilbert space and the physical Hilbert space. The kinematical Hilbert space contains all possible spin network states, including many that do not satisfy the constraints of general relativity (specifically the Hamiltonian constraint, which is the LQG version of the Wheeler-DeWitt equation). The physical Hilbert space is the subset of states that satisfy all constraints. The relationship between these two spaces, and the technical problem of defining the physical inner product, has been one of the central difficulties in LQG for decades. Under the ontology of possibility, this difficulty acquires an interpretation: the kinematical Hilbert space is possibility space (all possible geometries), and the physical Hilbert space is the subset of configurations that are actualisable (compatible with the dynamics). The difficulty of defining the physical inner product is the difficulty of specifying which configurations are sympathetically selected from the full space of possibilities. The technical problem remains a technical problem. But its interpretation changes: it is the mathematical expression of the selection from possibility to actuality.
The genuine tension is with discreteness. LQG says there is a minimum area, a minimum volume, a floor below which spatial structure does not exist. The area operator has a discrete spectrum with a smallest non-zero eigenvalue. This means that there are no configurations of space with area less than this minimum. The ontology of possibility would say: configurations with sub-Planckian area remain possible, even if they are not elements of the physical Hilbert space. The second essay drew the distinction between “not actualisable” and “not possible.” LQG collapses this distinction. The ontology insists on maintaining it. Whether this disagreement matters depends on whether the distinction between possible-but-not-actualisable and not-possible has any physical consequences. If it does, the ontology generates a prediction that differs from standard LQG. If it does not, the disagreement is purely philosophical. That question is open.
String Theory and the Landscape
String theory proposes that the fundamental constituents of reality are one-dimensional vibrating strings, and that the different particles we observe correspond to different vibrational modes. The theory requires extra spatial dimensions beyond the three we experience, typically ten or eleven in total, and the way these extra dimensions are compactified (curled up at scales too small to observe) determines the effective physics in the remaining dimensions: the particle spectrum, the coupling constants, the value of the cosmological constant. The number of distinct ways the extra dimensions can be compactified is enormous. Estimates place the number of possible vacuum states at roughly 10^500. This collection is called the string landscape.
The landscape has been the subject of intense controversy within theoretical physics. One response is to dismiss the other vacua as mathematical artefacts: our vacuum is the real one, and the others are just solutions to equations that happen not to be realised. Another response, championed by Leonard Susskind and others, is to take all the vacua seriously and posit an actually existing multiverse in which different regions of space realise different vacua through the mechanism of eternal inflation. In this picture, our vacuum is not special. We observe it because we happen to inhabit a region where it was realised, and the values of the fundamental constants are what they are because we could not exist in most of the other vacua (the anthropic principle).
The possibility-first ontology offers a third reading that avoids both the dismissal and the multiverse. All 10^500 vacua are possible. All remain possible, because possibility cannot be negated. But not all are actualised. The vacuum we inhabit is the configuration that was sympathetically selected: the one that was maximally congruent with the total configuration field. This replaces anthropic reasoning with sympathetic reasoning. The anthropic argument says: we observe this vacuum because we exist in it, and we could not exist in most others. It explains the observation but not the selection. It does not say why this vacuum was actualised in the first place, only why we observe the one we are in. The sympathetic argument says: this vacuum was actualised because its configuration was maximally self-supporting within the field of all possible configurations. It provides a selection mechanism that does not depend on the existence of observers and that applies to all features of the vacuum, not only those that are relevant to the existence of life.
Whether this changes anything mathematically depends entirely on whether sympathetic selection generates predictions that differ from the anthropic landscape. If the sympathetically maximal vacuum differs from the anthropically selected vacuum in any measurable respect, the two accounts are empirically distinguishable. If they do not differ, the disagreement is interpretive. The honest position is that this question cannot be answered without specifying the form of the sympathy kernel for the space of string vacua, and that specification does not yet exist.
There is a deeper tension. String theory is, at its foundation, a theory of actual objects. Strings are actual entities with actual vibrations in actual dimensions. The extra dimensions are actual, even if compactified. The theory does not treat possibility as ontologically prior. It treats the string as the fundamental actual object and derives the landscape of possibilities from the mathematics of string dynamics. The ontology of possibility inverts this: the landscape is primary, and the particular string configuration we observe is a downstream actualisation. Whether this inversion changes the mathematics or only the interpretation is an open question, but it is a question with potential consequences. If the landscape is primary, then the search for a “unique” vacuum, which has consumed significant effort in string theory, may be misguided. The theory may not predict a unique vacuum because it is not supposed to. It describes the space of possibilities. The selection of one possibility is a separate question, requiring a separate principle, which is what sympathetic selection proposes to be.
General Relativity
Einstein’s general theory of relativity, published in 1915, made a claim that was revolutionary in its time and that remains the foundation of modern gravitational physics: spacetime is not a fixed stage on which physics plays out. It is a dynamic entity, shaped by the matter and energy within it. The geometry of spacetime, its curvature, distances, and causal structure, is determined by the distribution of mass and energy through the Einstein field equations. Matter tells spacetime how to curve. Spacetime tells matter how to move. Neither is given in advance. They co-determine one another.
This was a profound departure from Newtonian physics, in which space and time were absolute and immutable, providing a fixed background against which events occurred. Einstein showed that the background is not fixed. It responds to its contents. The geometry of the universe is not a container for physics. It is part of physics.
Under the possibility-first ontology, general relativity is a correct and complete description of the actualised world. The Einstein field equations describe how, within the actualised configuration, geometry and matter are related to each other. They govern the internal structure of actuality. They say: given this distribution of matter and energy, the geometry must be this. Given this geometry, matter must move this way. The equations are not challenged by the ontology of possibility. They are situated by it. What general relativity does not address, and what its equations are silent on, is why this particular distribution of matter and energy was actualised rather than some other. The field equations accept the matter content and the boundary conditions as givens and derive the geometry. They do not explain the givens. The question of why the universe has the matter content it has, the initial conditions it has, the particular configuration it has, is not a question general relativity was designed to answer. It is the question the ontology of possibility is designed to answer: this configuration was actualised because it was sympathetically selected from the space of all possible configurations.
There is a further convergence worth noting. One of the deep features of general relativity is background independence: the theory does not assume a fixed spacetime background. The geometry is dynamical. It is determined by the physics, not given in advance. The ontology of possibility extends this principle one level deeper. Not only is the geometry of spacetime not given in advance; the fact that there is a spacetime at all is not given in advance. Spacetime is one feature of the actualised configuration. It emerged through sympathetic resolution from a possibility space that is prior to spacetime and not itself spatial or temporal. General relativity’s background independence is a local instance of a more general principle: nothing about actuality is given in advance. Everything is derived from the sympathetic structure of possibility.
Where the Ontology Does Not Reach
A framework that claims to illuminate everything illuminates nothing. The ontology of possibility has genuine purchase on interpretive questions in physics: what the wavefunction of the universe describes, what grounds non-local correlations, what selection means in cosmology, what the relationship is between the space of possible configurations and the actualised world. These are questions about the meaning of physical theories, and the ontology speaks to them because it is an ontological theory, a theory about what exists and in what order.
It does not have purchase on existence-and-regularity problems in pure mathematics. The Yang-Mills existence and mass gap problem, one of the seven Millennium Prize Problems, asks for a proof that quantum Yang-Mills theory on four-dimensional Euclidean space exists rigorously and has a positive mass gap. This is a question about whether certain equations have solutions with certain properties. The ontology of possibility can offer a conceptual interpretation of what a mass gap means (a threshold of determination required for stable excitations to distinguish themselves from the background field of possibility), but this interpretation does not touch the proof burden. The problem requires functional analysis, constructive quantum field theory, and control of non-abelian gauge fields. Philosophical reframing, however accurate, does not substitute for mathematical proof.
The same applies to the other Millennium Prize Problems. The Riemann Hypothesis concerns the distribution of prime numbers. The Navier-Stokes problem concerns the existence and smoothness of solutions to fluid dynamics equations. P versus NP concerns the relationship between computational complexity classes. These are problems whose solutions require mathematical technique, not ontological reinterpretation. One could say, trivially, that solutions to these problems must be possible in order to exist. That adds nothing. The discipline of knowing where a framework stops being useful is as important as knowing where it starts.
Modern physics repeatedly discovers that what appeared basic turns out to be emergent, relational, or observer-dependent. Spacetime is not a fixed stage; it is a dynamic entity shaped by its contents. Quantum states are not local; they are entangled across arbitrary distances. The wavefunction of the universe does not evolve; it sits in a timeless superposition. The cosmological constant is not a free parameter; it crystallises from a collapse process. The ontology of possibility pushes this pattern one step further and asks whether actuality itself is emergent from a prior field. Whether it is right or wrong, that is a question worth asking.
The preceding pages have laid out where the question leads, where it illuminates, and where it reaches its limit. The illumination is real in specific cases: the problem of time dissolves, entanglement becomes expected rather than anomalous, tunnelling reveals the gap between the constraints on actuality and the persistence of possibility, and the selection problem in cosmology acquires a principled alternative to anthropic reasoning. Whether these reframings produce new physics or merely new philosophy depends on whether they generate predictions. The equation proposed at the end of the second essay is a preliminary attempt to make that transition. The reframings described here are the interpretive context within which that equation, or its successors, would need to be evaluated.
The ontological arguments underlying this essay were first developed in the author’s undergraduate work on Spinoza’s metaphysics and the nature of Possibility (2013-2014) and further developed through the theory of Transient Polymorphic Awareness (published April 2025 at aireflects.com), now situated within the framework of Sympathetic Cosmogenesis (2026).
The discussion of spin network states and loop quantum gravity draws on foundational work by Carlo Rovelli, Lee Smolin, and Abhay Ashtekar.
The discussion of classical sequential growth models draws on Rideout and Sorkin, “A classical sequential growth dynamics for causal sets,” Physical Review D 61, 024002 (2000).
Rafael Sorkin’s cosmological constant prediction appeared in “Is the cosmological ‘constant’ a nonlocal quantum residue of discreteness of the causal set type?” (2007).
The string landscape was characterised by Leonard Susskind, “The Anthropic Landscape of String Theory” (2003).
The discussion of Bell’s theorem and its experimental confirmation draws on John Bell, “On the Einstein-Podolsky-Rosen Paradox,” Physics 1(3), 195-200 (1964), and the 2022 Nobel Prize in Physics awarded to Aspect, Clauser, and Zeilinger.
The companion essays, “Sympathetic Cosmogenesis, Part I: The Completeness Problem” and “Sympathetic Cosmogenesis, Part II: The Priority of Possibility,” develop the ontological and cosmological foundations on which this essay depends.