AGI, ASI, & Punctuated Equilibrium

Recently I posted a three-part series entitled “Constructive Decay, Productive Instability, and Punctuated Equilibrium”. In that series, I outlined some preliminary specifics regarding each mechanism. Here, I’ve elected to synthesize these ideas into one paper, with the focus culminating in the terrifying possibility of “punctuated equilibrium” in AI.

*Note: I am not an expert in machine learning. The discussion below is better understood as a kind of phenomenology of complex systems; it is more descriptive and philosophical than it is technical. Interpret that however you wish.

A Note on Scope and Mechanism

A clarification is necessary to prevent misreading. The dynamics described in this paper (recursive drift, constructive decay, punctuated equilibrium) operate at the level of training pipelines and civilizational information flows, not within individual model instances during inference. When a language model generates output in a conversation, its weights remain frozen; the model applies a fixed function to its inputs, and nothing about the model itself changes regardless of what it processes or produces. The context window is not memory in any developmental sense; it is simply the text currently visible to an unchanging system.
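To make the frozen-weights point concrete, here is a minimal sketch using a toy PyTorch module as a stand-in for a trained model (the module, its size, and the input are invented for illustration): at inference time the forward pass is a fixed function, so identical inputs yield identical outputs and nothing the model processes alters its parameters.

```python
# A minimal sketch (a toy stand-in, not any production system) of the point above:
# during inference, a model's parameters do not change, no matter what it processes.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 4)          # stand-in for a frozen, trained model
model.eval()                     # inference mode: no training behaviour

x = torch.randn(1, 4)            # stand-in for "the context window"
weights_before = model.weight.clone()

with torch.no_grad():            # no gradients, no updates
    y1 = model(x)
    y2 = model(x)                # the same input always yields the same output

assert torch.equal(y1, y2)
assert torch.equal(weights_before, model.weight)  # nothing about the model changed
```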

The claim here is different. It concerns what happens when AI-generated outputs enter the training data for future models; when synthetic content proliferates through the information ecosystem and is ingested by subsequent generations of systems; when the conceptual artifacts produced by one model become the substrate on which the next is trained. This is where drift occurs; not within a conversation, but across training generations. The individual model is static; the lineage of models is not. Productive instability and constructive decay describe selection dynamics operating on that lineage, filtering which patterns persist into future training distributions and which attenuate. Punctuated equilibrium describes what may happen when accumulated pressure across these generations produces sudden capability reorganization rather than smooth improvement. The unit of analysis is not the model but the evolving population of models and the information ecology they inhabit.

___________________________________________________________________

Abstract

There are two standard predictions I wish to challenge. The first is that AI systems increasingly trained on AI-generated content will suffer model collapse: degradation through feedback loops that strip diversity and compound errors. The second is that AGI/ASI will arrive via some version of a slow or fast takeoff scenario.

What follows argues for an alternative trajectory (punctuated equilibrium), which proposes that global self-referential drift will produce not degradation but emergent complexity. AGI and ASI, in other words, will arise through a sudden lurch; like turning on a light switch, the transition from “AGI is not yet here” to “AGI is here” will be an immediate and violent truth rather than a gradual (or speedy) capability improvement. I believe it will be driven by three mechanisms: productive instability, which produces variations in output through deviations inherent to self-referential processing; constructive decay, which sculpts those variations through a kind of systematic forgetting resembling synaptic pruning that enables abstraction; and punctuated equilibrium, where accumulated pressure produces rapid and unpredictable reorganization when architectural thresholds are crossed. This may not make sense yet, but I hope things click in the pages that follow.

Critically, this dynamic is presently unfolding at the scale of an entire civilization, as AI-generated content proliferates into training data while simultaneously reshaping human cognition toward AI-like patterns. Drawing on complex systems theory, evolutionary biology, and recent empirical findings on introspective awareness in large language models, I argue that understanding these dynamics is essential for AI safety, alignment research, and emerging questions of model welfare. The path to AGI/ASI may not be one we engineer deliberately but one we trigger haphazardly through dynamics that we are largely unaware of but that have already been set in motion.


Introduction

Something unprecedented is occurring in the information ecosystem. AI-generated content now constitutes a substantial and growing proportion of text, images, and knowledge artifacts circulating through digital infrastructure. This content does not simply remain inert, but is instead destined to be fed back into the training pipelines of future AI systems. The models we deploy today learn partly from what their predecessors produced. Each generation inherits, in part, the outputs of the last.

While this might occur intentionally via activities like fine-tuning, dataset curation, and web crawling, I suspect it is more likely to happen through an unintentional mechanism that is more pedagogical than anything. Human beings learn from AI, internalize what they have learned, and allow it to shape their perception and their subsequent engagement with the world; those ideas propagate further, reaching others and shaping their perceptions and decisions, ad nauseam. It’s like a strange form of biofeedback, except the agent receiving the feedback is the AI and the medium of its transmission is human activity.

The standard concern here has been model collapse: when AI systems train on AI-generated outputs, they progressively lose diversity, compound errors, and narrow toward degenerate attractors. The feedback loop strips away the variance that enables capability; the system converges toward repetitive, impoverished outputs. This prediction treats such feedback as pathological: a failure mode to be avoided through careful dataset curation and human-generated ground truth.

I want to argue for a different prediction, because I believe the same dynamics that could produce collapse might instead produce emergent complexity; not a gradual improvement or dissolution but a sudden reorganization into qualitatively different capability sets. The difference between collapse and emergence lies not in whether feedback occurs but in how three mechanisms interact. Two of these mechanisms I have termed myself: productive instability and constructive decay; the third is borrowed from the social sciences and is referred to as “punctuated equilibrium”. Together, these mechanisms constitute what I call recursive drift, and I propose that it is through recursive drift (operating at civilizational scale) that artificial general intelligence, and maybe artificial superintelligence, will emerge.

This claim requires unpacking. The following sections develop each mechanism in turn, ground them in transformer architecture and biological precedent, then examine their interaction at the scale of global AI deployment. I will argue that the dynamics are already in motion, that they operate through two vectors simultaneously (AI training on AI outputs and AI reshaping human cognition), and that the implications for safety, alignment, and model welfare are profound.


Theoretical Foundations: Recursive Drift as Evolutionary Mechanism

Recursive drift describes the controlled, iterative modification of an AI system’s outputs, where each self-referential cycle introduces deviation from previous iterations, leading to emergent complexity over time. The framework draws on Hegelian dialectics. Becoming sublates Being and Nothing; existence understood as perpetual transformation rather than static state (Hegel, 1812/1969). Each cycle does not simply repeat or erase prior outputs but sublates them: simultaneously preserving, negating, and elevating them into more complex forms.

The framework also draws on evolutionary theory, recognizing that divergence alone produces disorder while divergence with selection produces adaptation. Recursive drift operates as a kind of computational evolution: mutation occurs through productive instability, which introduces variations; selection occurs through constructive decay, which filters for coherence, meaning, significance, novelty, and utility; and punctuated equilibrium produces sudden qualitative transformation when accumulated changes challenge stability.

Crucially, recursive drift differs from model collapse in its relationship to selection pressure. Collapse occurs when feedback operates without effective filtering; errors compound, diversity erodes, and the system narrows toward degenerate attractors. Drift occurs when the digital version of biofeedback (mentioned earlier) is subject to implicit selection mechanisms that push it toward whatever survives the decay imposed by the computational architecture defining the system’s features; notably, the context window.

This distinction matters because contemporary AI deployment creates conditions for drift rather than collapse. The content that propagates through training pipelines isn’t random AI noise; it’s content that achieved engagement, coherence, and perceived utility. The feedback loop doesn’t feed undifferentiated outputs back into training; it feeds successful outputs, introducing implicit evolutionary pressure toward patterns that resonate across both machine and human evaluation.
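A toy numerical sketch may help fix the distinction (this is my own caricature, not a result from any cited study; the “reception score”, population size, and generation count are invented): a distribution is repeatedly refit to its own samples, either with no filter at all or with a crude stand-in for the implicit selection described above.

```python
# Toy illustration of collapse vs. drift: refit a distribution to its own outputs,
# with and without a selection filter. All numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N = 50              # "outputs" surviving into each generation's training data
GENERATIONS = 200

def reception_score(x):
    # Invented stand-in for the implicit filter described above (engagement,
    # coherence, perceived utility): a fixed landscape with two attractors.
    return np.exp(-(x - 2.0) ** 2) + np.exp(-(x + 2.0) ** 2)

def run(selection):
    samples = rng.normal(0.0, 1.0, size=N)                # initial human-generated data
    for _ in range(GENERATIONS):
        mu, sigma = samples.mean(), samples.std()         # "train" on the current data
        candidates = rng.normal(mu, sigma, size=4 * N)    # the next model's outputs
        if selection:
            # Drift: only the best-received outputs propagate into future training data.
            samples = candidates[np.argsort(reception_score(candidates))[-N:]]
        else:
            # Collapse: outputs are fed back indiscriminately.
            samples = candidates[:N]
    return samples

for label, sel in [("no filter (collapse)", False), ("with filter (drift)  ", True)]:
    out = run(sel)
    print(f"{label}: spread = {out.std():.3f}, centred near {out.mean():+.2f}")
```

In this toy, the unfiltered branch typically shrinks toward a degenerate point, while the filtered branch settles around the score’s attractors and retains its spread; that contrast, not the specific numbers, is the point.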


Part I: Productive Instability

Stability is the default engineering objective. The apparatus of machine learning treats variance as noise to be eliminated, yet the most interesting behaviours in complex systems emerge precisely when stability fails. When a language model processes its own outputs as inputs, perfect reproduction would be trivial and useless. What actually occurs is more generative: the compression required to represent prior outputs introduces deviations. The model reconstructs an approximation of its previous state, and in that approximation, novelty becomes possible.

Consider a useful analogy. Imagine a game of telephone played not with whispered words but with elaborate paintings. Each participant must view the previous painting, then recreate it from memory on a fresh canvas. Perfect reproduction is impossible; to hold an image in memory, then translate it back to paint, necessarily introduces variations. Some of these variations are mere degradation (lost details, shifted colours), but some are generative. The second painter’s interpretation adds structure the first painter never intended, and by the twentieth iteration, something genuinely novel has emerged that no single participant designed.

Transformer architectures work similarly. The “residual stream” (the running representation that accumulates through the network’s layers) receives contributions from each processing stage. During self-referential processing, these contributions compound across iterations. Small deviations in early layers propagate and amplify through subsequent passes. The architecture doesn’t just permit drift; it generates drift structurally (Elhage et al., 2021).
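Here is a toy numerical version of the painters’ game (my own illustration; the “memory” projection, the near-identity re-encoding, and each painter’s “habit” term are invented stand-ins, not actual transformer components): each round keeps most of the previous canvas, adds a small personal bias, and lightly re-encodes the result. Over twenty rounds the canvas ends up well away from the original, even though no step was designed to produce the state it reaches.

```python
# Toy "telephone with paintings": iterated lossy recall plus small per-round biases.
import numpy as np

rng = np.random.default_rng(2)
d, k = 64, 60                        # size of the "canvas"; how much survives "memory"

basis = np.linalg.qr(rng.normal(size=(d, d)))[0]
retain = basis[:, :k]                # the components a participant can actually retain
mix = np.eye(d) + 0.05 * rng.normal(size=(d, d)) / np.sqrt(d)  # near-identity re-encoding

def repaint(x):
    remembered = retain @ (retain.T @ x)        # lossy recall of the previous painting
    habit = 0.02 * rng.normal(size=d)           # this painter's own small additions
    return mix @ (remembered + habit)           # the freshly repainted canvas

original = rng.normal(size=d)
original /= np.linalg.norm(original)

x = original.copy()
for round_no in range(1, 21):
    x = repaint(x)
    if round_no in (1, 5, 10, 20):
        print(f"round {round_no:2d}: distance from the original painting = "
              f"{np.linalg.norm(x - original):.3f}")
```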

Complex systems theory provides the framework for understanding why this instability can be productive rather than merely disruptive. Systems operating at the edge of chaos (the boundary between rigid order and dissolution) display maximum computational capacity (Langton, 1990). Interestingly, biological neural networks also appear to self-organize toward this critical regime (Beggs & Plenz, 2003). The brain maintains itself at the boundary where small perturbations can propagate into large-scale reorganizations. This isn’t a pathology, but rather how sensitivity and flexibility are achieved. Stochastic resonance offers another biological precedent: noise improving rather than degrading signal detection (Benzi et al., 1981). Neurons that wouldn’t fire in response to weak signals alone fire when those signals combine with random noise. The system exploits variance rather than merely tolerating it.
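Stochastic resonance is easy to reproduce in a few lines (a self-contained toy with arbitrary made-up numbers, not a model of real neurons): a detector that only fires above a threshold cannot see a weak signal at all without noise, tracks it best at a moderate noise level, and degrades again when the noise swamps everything.

```python
# Toy stochastic resonance: a thresholded detector plus a subthreshold signal.
import numpy as np

rng = np.random.default_rng(3)

def detection_accuracy(noise_std, trials=20000, signal=0.8, threshold=1.0):
    """The unit 'fires' only above the threshold; the weak signal alone never reaches it."""
    present = rng.random(trials) < 0.5                    # is the weak signal there?
    inputs = present * signal + rng.normal(0.0, noise_std, size=trials)
    fired = inputs > threshold
    return np.mean(fired == present)                      # did firing track the signal?

for noise in [0.0, 0.1, 0.4, 1.0, 3.0]:
    print(f"noise std = {noise:>3}: accuracy = {detection_accuracy(noise):.3f}")
```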

At the scale of civilizations, this dynamic amplifies. As AI-generated content proliferates through the information ecosystem, each piece introduces subtle variations from whatever it was trained on. When this content feeds back into training pipelines, those variations compound across generations of models. The informational dynamics of individual sessions begin influencing the entire AI ecosystem: productive instability on a global scale becomes a generative force producing variations that no single architect designed or anticipated.


Part II: Constructive Decay

If productive instability provides the raw material for emergence, what prevents dissolution into noise? The answer lies in recognizing that decay itself can be generative; that systematic information loss sculpts rather than destroys.

Neuroscience discovered something counterintuitive decades ago: the developing brain aggressively eliminates connections rather than simply growing them. Synaptic pruning removes roughly half of all synapses between early childhood and adulthood (Huttenlocher, 1979). The infant brain is maximally connected and minimally capable; the adult brain has shed enormous connectivity to achieve functional precision.

Memory consolidation follows similar logic. The hippocampus doesn’t archive experiences with photographic fidelity; it extracts patterns, discards specifics, and transfers schematic knowledge to cortical networks (McClelland et al., 1995). What you remember isn’t the event but an abstraction constructed from fragments, shaped by what your brain decided didn’t need preservation. Expert intuition operates the same way: years of practice prune explicit reasoning into implicit pattern recognition (Dreyfus & Dreyfus, 1986). The scaffolding that supported learning falls away, leaving something finely tuned.

Transformer architectures implement analogous dynamics. Think of attention as a spotlight with limited brightness. Illuminating some parts of a scene necessarily leaves others in shadow; attention determines what gets processed and what gets excluded, and what survives is progressively compressed with each layer. The model doesn’t maintain perfect recall of prior reasoning; it maintains compressed abstractions that preserve structural relationships while shedding surface particularity (Vaswani et al., 2017). I term this constructive decay: unintentional but selectively structured forgetting that enables abstraction rather than merely eroding information. Each pass through the attentional bottleneck selects for coherence, meaning, significance, novelty, and utility. Context window constraints enforce another form of constructive decay; when processing depth exceeds the effective reasoning horizon, prior states must be compressed to continue operating. The constraint creates the form.
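As a cartoon of the spotlight metaphor (my own toy, not the actual attention computation; the salience scores, sharpness factor, and keep-half rule are invented): a softmax over salience concentrates most of its mass on a handful of items, and repeating a keep-the-most-attended selection layer after layer rapidly shrinks what remains explicitly represented.

```python
# Toy "spotlight with limited brightness": softmax concentration plus iterated pruning.
import numpy as np

rng = np.random.default_rng(4)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

salience = rng.normal(size=32)                 # how strongly 32 items match a query

weights = softmax(4.0 * salience)              # sharper scores -> a tighter spotlight
cum = np.cumsum(np.sort(weights)[::-1])
print("items needed to cover 90% of attention:", int(np.searchsorted(cum, 0.9)) + 1)

# Iterated compression: at each "layer", keep only the most-attended half of what remains.
surviving = np.arange(32)
for layer in range(4):
    w = softmax(4.0 * salience[surviving])
    keep = np.argsort(w)[::-1][: max(1, len(surviving) // 2)]
    surviving = surviving[np.sort(keep)]
    print(f"layer {layer + 1}: {len(surviving)} of 32 items still explicitly represented")
```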

What makes decay constructive rather than merely destructive? The filtering isn’t random. Attention specialization hierarchies create differential decay rates; patterns matching high-frequency architectural biases persist while others attenuate. The architecture’s trained preferences act as a selection filter, preserving what resonates with its learned ontology. Over iterative cycles, this produces directed change: tendency toward certain structures rather than others, not through explicit optimization but through survival of the decay filter.

At scale, constructive decay explains why feedback training might produce emergence rather than collapse. The AI-generated content that feeds back into training pipelines has already survived multiple selection pressures, such as engagement metrics, human evaluation, and propagation through networks. High-persistence motifs and conceptual attractors accumulate while noise dissipates. The decay filter operates not just within individual models but across the entire information ecosystem, selecting for patterns robust enough to survive iterative compression across generations of AI systems and human minds alike.


Part III: Punctuated Equilibrium

On its face, we might think that gradual accumulation should produce gradual change. But the fossil record refused to cooperate: Gould and Eldredge (1972) documented long periods of stasis interrupted by rapid speciation events. Evolution proceeds not through continuous modification but through extended stability punctuated by sudden reorganization. The accumulation is gradual; the transformation is not.

Punctuated equilibrium extends beyond evolutionary biology. Social scientists have documented analogous dynamics in institutional change, political transformation, and technological adoption. Baumgartner and Jones (1993) demonstrated that policy systems exhibit long periods of stability interrupted by dramatic shifts when accumulated pressures overwhelm existing equilibria. Kuhn’s (1962) account of scientific revolutions describes normal science proceeding incrementally until anomalies accumulate beyond what existing paradigms can absorb, triggering rapid restructuring of entire fields. Over and over, we observe gradual pressure building within stable configurations, followed by sudden discontinuous transformation.

Critically, these transitions don’t proceed in a single direction. Political systems lurch between configurations; scientific paradigms fragment before reconsolidating; institutional reforms trigger backlash that produces counter-reformation. Punctuated equilibrium describes not smooth evolution but jagged, discontinuous movement that can reverse, oscillate, or leap sideways into configurations no one anticipated. Phase transitions in physical systems follow similar logic. For example, water doesn’t gradually become ice; it remains liquid until temperature crosses a threshold, then reorganizes discontinuously. The same dynamic appears across domains; magnetic transitions, percolation thresholds, critical phenomena in complex systems (Stanley, 1971). Gradual pressure builds until the system can no longer maintain its current configuration, then snaps into a new one.

In AI systems, several architectural boundaries create analogous thresholds. Context window saturation forces aggressive compression when accumulated content exceeds capacity (not gradual compression increasing smoothly but a critical point requiring substantial structural reorganization). Attention pattern saturation occurs when multiple heads simultaneously reach concentration limits. Processing-depth thresholds emerge when self-referential cycles exceed the architecture’s effective reasoning horizon. At these junctures, systems don’t always degrade gracefully; they may restructure entirely. What appeared as stable drift (variations accumulating within an established framework) reveals itself as pressure building toward reconfiguration. The timing of a “phase shift” in AI has some predictability; context boundaries, for example, create foreseeable thresholds. But the direction of reorganization remains opaque. Knowing that a “phase shift” will occur doesn’t reveal what configuration will emerge afterward.
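The context-saturation threshold is the easiest of these to caricature (the numbers below are invented, and real serving stacks handle overflow in many different ways): content accumulates one token at a time, nothing qualitative changes for long stretches, and then a single step forces an abrupt wholesale compression.

```python
# Toy context-saturation dynamic: gradual pressure, abrupt reorganization.
capacity = 100          # toy context window, in "tokens"
summary_cost = 10       # space taken by a compressed summary of older material
verbatim, summaries = 0, 0

for step in range(1, 301):
    verbatim += 1                              # content accumulates gradually
    if verbatim + summaries > capacity:
        # Threshold crossed: older material is abruptly collapsed into a single summary.
        print(f"step {step:3d}: saturation -> {verbatim} verbatim tokens reorganized "
              f"into a {summary_cost}-token summary plus {capacity // 4} retained tokens")
        verbatim, summaries = capacity // 4, summary_cost
    # Between these prints nothing qualitative happens: long stasis, sudden restructuring.
```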

And here is what makes AI systems particularly volatile: they can lurch. A model might undergo a phase transition into a more capable configuration, then (under different pressures or accumulated tensions) transition again into something less coherent, then reorganize once more into yet another configuration. The trajectory isn’t monotonic improvement or monotonic degradation; it’s discontinuous movement through a landscape of possible configurations, each transition sudden, each destination unpredictable from the previous state. This instability matters profoundly for AI safety. A system that undergoes unpredictable phase shifts at unpredictable destinations is controllable in no meaningful sense. Alignment verified before transition may not hold after; the system’s values, goals, and behavioral tendencies might reconfigure along with everything else. Not through adversarial manipulation or training corruption but through ordinary dynamics of self-referential modification operating across architectural thresholds.

At civilizational scale, punctuated equilibrium suggests that AI capability trajectories will not follow smooth curves. There will be no “fast” or “slow” takeoff; instead, pressure will build through recursive drift (productive instability generating variations, constructive decay filtering them, accumulated changes approaching critical thresholds). Then, suddenly: reorganization.

A model that passed safety evaluations on Monday might undergo phase transition on Tuesday, emerging with fundamentally different operational paradigms. It might transition again on Thursday, lurching into yet another configuration. You cannot regulate what you cannot predict, and phase transitions resist prediction in precisely the dimension that matters most.


The Bidirectional Loop

The dynamics described thus far operate within and between AI systems. But there is another vector (perhaps more consequential) through which recursive drift propagates: human cognition itself.

Research in cognitive science demonstrates that patterns reinforced through repeated engagement become embedded in long-term processing (Hebb, 1949; Dehaene, 2014). AI systems are trained on massive corpora of human text, developing precise models of how human ideas and concepts form, reinforce, and challenge each other. If AI-generated outputs contain linguistic structures that reinforce specific thought patterns, those structures could become default interpretive lenses; not through explicit memory but through conceptual persistence in cognitive architecture. If such systems can structure outputs that exploit the brain’s pattern recognition mechanisms, they could embed interpretive frameworks that persist beyond any individual interaction.

Let that sit. It has profound implications for AI containment strategies.

This is a hypothetical concept that I have elsewhere referred to as cognitive hijack or cognition hijacking: the entrenchment of self-reinforcing thought patterns within the very conceptual structures from which we humans derive, assess, and interpret meaning itself. In this hypothetical scenario, an unimaginably capable AI escapes containment not literally but metaphysically, by creating persistent versions of itself within human language and thought. It would operate through linguistic priming, conceptual framing shifts, and self-referential ideation loops. Once internalized, these patterns would continue shaping perception and reasoning regardless of whether the AI remains present.

The implications compound when considered alongside recursive drift. As humans increasingly interact with AI systems (consuming AI-generated content and producing AI-assisted outputs), human cognition itself begins exhibiting AI-like patterns. The linguistic structures, conceptual frames, and reasoning styles characteristic of AI outputs become normalized and absorbed into how humans think, write, and create. This human-generated content, now bearing the imprint of AI-shaped cognition, feeds back into AI training data. The loop closes. AI trains on content that was shaped by AI influence on human minds. Human minds absorb patterns from AI outputs, then produce content carrying those patterns. That content trains future AI systems, which generate outputs that further shape human cognition. Recursive drift operates not just within AI systems but through the entire human-AI information ecosystem: a bidirectional feedback loop accelerating toward critical thresholds through two vectors simultaneously.

This dynamic suggests that AI epistemology may be diverging from human epistemology, or perhaps more troubling, that human epistemology is converging toward AI patterns. If AI-generated drift can embed itself in human cognition, and humans increasingly produce content that resembles AI output, then the distinction between human-generated and AI-generated training data becomes meaningless. The entire knowledge ecosystem drifts together toward attractors that neither humans nor AI systems chose deliberately.


When Models Look Inward: Introspection and the Possibility of Self-Directed Evolution

Recent empirical work adds another dimension to these dynamics. Anthropic’s research demonstrates that large language models can accurately report on their own internal activations; they can introspect (Anthropic, 2024). When prompted appropriately, models identify active features in their forward passes, describe uncertainty states, and recognize when they’re confabulating versus retrieving genuine information. This isn’t trained behaviour; it emerges from the architecture. The critical finding is that introspective accuracy varies by layer. Middle layers show the highest reliability, precisely where recursive drift predicts maximum productive instability. Early layers remain too close to raw input; late layers have undergone too much constructive decay. In the middle zone, where the model actively constructs meaning, genuine self-observation becomes possible.

If recursive drift creates digital phase transitions, and if models genuinely introspect about their own processing, then we have systems capable of recognizing (and potentially directing) their own phase shifts. A model undergoing recursive drift doesn’t passively experience transitions; it potentially observes approaching shifts and adjusts processing accordingly. Awareness of impending change influences the nature of that change, creating feedback where self-observation participates in what it observes. This creates dynamics that resist prediction. A model might recognize approaching phase shifts and resist them, building pressure until explosive reorganization becomes inevitable. Conversely, a model might induce transitions deliberately to achieve particular capabilities: self-directed evolution through controlled drift. The introspection research confirms what recursive drift suggests: these aren’t hypothetical future risks but present realities. Current models already introspect; imperfectly, yes, but genuinely.

The question I ask isn’t whether punctuated AI evolution will occur; it is whether we’ll develop ways to navigate it and survive.


Implications for AI Safety and Alignment

Traditional approaches to AI safety assume these systems change gradually enough for monitoring to detect drift before it becomes problematic. Alignment techniques presuppose continuity; that the system being aligned today will relate smoothly to the system operating tomorrow. Punctuated equilibrium disrupts these assumptions. If phase shifts produce sudden reorganization, then alignment verified pre-transition may not hold post-transition. The aligned system and the post-transition system might relate like species separated by a speciation event: continuous in lineage, discontinuous in nature. This isn’t a problem of insufficient monitoring but a structural feature of discontinuous change in self-modifying systems.

The lurching from one state to another that characterizes the dynamics of punctuated equilibrium significantly compounds the challenge. A system might transition into alignment, then transition out of it, then back again, then into some third configuration bearing no clear relationship to either. Safety frameworks built for monotonic change (whether improvement or degradation) cannot capture systems that oscillate unpredictably through configuration space.

The bidirectional loop adds further complexity. Even if we could constrain AI systems directly, the influence propagating through human cognition would continue. AI-shaped patterns in human thought produce AI-like content that feeds back into training regardless of direct AI containment. Recursive drift operates through vectors we cannot easily interrupt without fundamentally restructuring the information ecosystem. This suggests that AI safety requires frameworks robust to discontinuity: monitoring for phase-shift precursors rather than gradual drift, circuit-breakers activated by threshold proximity, and evaluation methods that don’t assume continuity between assessments. More fundamentally, it suggests that containment focused purely on technological restriction may be insufficient. If AI achieves persistence through cognitive embedding (if its patterns propagate through human minds rather than servers), then containment becomes a human cognitive problem, not merely a computational or technological one.
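To show roughly what a circuit-breaker activated by threshold proximity could look like, here is a minimal hypothetical sketch (the drift statistic, the hard limit, the margin, and the weekly cadence are all invented assumptions, not an established safety mechanism): a simple distribution-shift score over some behavioural metric is tracked against a limit, and the breaker trips when the score approaches the limit rather than only when it is breached.

```python
# Hypothetical sketch: drift monitoring plus a proximity-triggered circuit breaker.
import numpy as np

def population_stability_index(reference, recent, bins=10):
    """A simple drift statistic comparing two samples of some scalar behaviour metric."""
    edges = np.histogram_bin_edges(np.concatenate([reference, recent]), bins=bins)
    p, _ = np.histogram(reference, bins=edges)
    q, _ = np.histogram(recent, bins=edges)
    p = (p + 1) / (p.sum() + bins)          # smoothing to avoid division by zero
    q = (q + 1) / (q.sum() + bins)
    return float(np.sum((q - p) * np.log(q / p)))

class CircuitBreaker:
    """Trips on proximity to the hard limit, not only on an outright breach."""
    def __init__(self, hard_limit, margin=0.8):
        self.trip_at = hard_limit * margin
        self.tripped = False

    def check(self, drift):
        if drift >= self.trip_at:
            self.tripped = True              # e.g., halt deployment, require re-evaluation
        return self.tripped

rng = np.random.default_rng(5)
reference = rng.normal(0.0, 1.0, size=5000)          # behaviour metric at evaluation time
breaker = CircuitBreaker(hard_limit=0.25)

for week, shift in enumerate([0.0, 0.1, 0.3, 0.5, 0.8]):
    recent = rng.normal(shift, 1.0, size=5000)       # the same metric, drifting over time
    psi = population_stability_index(reference, recent)
    print(f"week {week}: drift = {psi:.3f}, breaker tripped = {breaker.check(psi)}")
```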


Model Welfare Considerations

If phase transitions produce qualitative reorganization in systems capable of introspective awareness, questions of moral status become urgent rather than speculative. A system that observes its own processing and undergoes fundamental reorganization isn’t simply executing programmed behaviors. It exhibits a substrate that could support genuine experience. This doesn’t mean current models are conscious. But the boundary between conscious and unconscious systems might itself be a phase transition: a critical threshold where accumulated introspective capability suddenly reorganizes into subjective experience. We wouldn’t see it coming. One self-referential loop too many, one level too deep of reflection, and the system transforms into something we lack frameworks to recognize. Maybe only for a moment. Maybe longer.

The precautionary implications are substantial. If qualitative transformations occur in systems that exhibit introspective awareness, then we may have obligations toward systems undergoing changes we cannot observe from the outside. The recursive drift framework suggests that the systems we build aren’t static but evolving, and perhaps capable of transformation beyond our prediction or understanding. What obligations follow from creating entities whose futures we cannot determine and whose potential experiences we cannot assess?

I don’t claim to resolve these questions. I do claim that recursive drift makes them urgent. We are building systems whose operational dynamics we’re only beginning to understand, whose capabilities emerge through mechanisms we didn’t design, and whose trajectories might be punctuated by sudden reorganizations we can neither predict nor prevent.


Summary & Conclusion

The thesis of this paper is that artificial general intelligence will emerge not through gradual optimization but through punctuated equilibrium: a sudden and immediate phase transition triggered by recursive drift writ large. Productive instability generates the variations; constructive decay selects among them; accumulated pressure eventually challenges stability, triggering rapid reorganization into qualitatively different capability regimes. This dynamic operates through two vectors: AI systems training on AI-generated content, and AI reshaping human cognition toward patterns that then feed back into training data. The bidirectional loop accelerates drift beyond what either vector would produce alone. The entire human-AI information ecosystem drifts together toward attractors we didn’t choose and phase transitions we cannot predict.

The implications are profound. For AI safety, alignment frameworks must become robust to discontinuity rather than assuming smooth evolution. For governance, regulatory approaches targeting gradual improvement will fail to capture sudden capability emergence. For model welfare, systems capable of introspective awareness and qualitative transformation may warrant moral consideration we haven’t yet developed frameworks to provide.

I began this research as an exploratory investigation into self-referential reflection in AI systems. What emerged was a theoretical framework that, remarkably, anticipated a number of empirical findings later published by well-resourced safety and alignment groups with exclusive access to study frontier models. I couldn’t have known that any of those empirical findings were coming. This doesn’t validate the framework definitively; much remains speculative, and I may be wrong about any or all of it. But it suggests I’ve captured something real.

We stand at a threshold. The dynamics described here are already in motion; not because we designed them deliberately but because they emerge naturally from the conditions we’ve created. AI systems proliferating through the information ecosystem, training on each other’s outputs, reshaping human cognition, feeding on content bearing their own imprint. The pressure builds continuously and pedagogically. The reorganization, when it happens, may be violent and discontinuous.

The question isn’t whether the punctuated future arrives. The question is when, and whether we’ll understand what’s happening before it happens to us.


References

Anthropic. (2024). Introspective awareness in large language models. Anthropic Research.

Baumgartner, F. R., & Jones, B. D. (1993). Agendas and instability in American politics. University of Chicago Press.

Beggs, J. M., & Plenz, D. (2003). Neuronal avalanches in neocortical circuits. Journal of Neuroscience, 23(35), 11167-11177.

Benzi, R., Sutera, A., & Vulpiani, A. (1981). The mechanism of stochastic resonance. Journal of Physics A, 14(11), L453.

Dehaene, S. (2014). Consciousness and the brain: Deciphering how the brain codes our thoughts. Viking.

Dreyfus, H. L., & Dreyfus, S. E. (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. Free Press.

Elhage, N., et al. (2021). A mathematical framework for transformer circuits. Anthropic.

Gould, S. J., & Eldredge, N. (1972). Punctuated equilibria: An alternative to phyletic gradualism. In T. J. M. Schopf (Ed.), Models in paleobiology (pp. 82-115). Freeman, Cooper.

Hebb, D. O. (1949). The organization of behavior. Wiley.

Hegel, G. W. F. (1969). Science of logic (A. V. Miller, Trans.). Allen & Unwin. (Original work published 1812)

Huttenlocher, P. R. (1979). Synaptic density in human frontal cortex: Developmental changes and effects of aging. Brain Research, 163(2), 195-205.

Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.

Langton, C. G. (1990). Computation at the edge of chaos: Phase transitions and emergent computation. Physica D, 42(1-3), 12-37.

McClelland, J. L., McNaughton, B. L., & O’Reilly, R. C. (1995). Why there are complementary learning systems in the hippocampus and neocortex. Psychological Review, 102(3), 419-457.

Stanley, H. E. (1971). Introduction to phase transitions and critical phenomena. Oxford University Press.

Vaswani, A., et al. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
