Memetics

Beliefs that survive aren’t necessarily true, rules that survive aren’t necessarily fair and rituals that survive aren’t necessarily necessary.

_________________________________________________________________

What is Memetics? [1][2][3][4]

Memetics is the study of how ideas propagate, evolve, and persist, often independent of their truth, utility, fairness, or necessity. It draws parallels between cultural transmission and biological evolution, likening memes (self-replicating units of culture) to genes in genetic inheritance. Just as genes prioritize their own survival, often at the expense of the organism, memes can spread and dominate within cultural and cognitive spaces regardless of their accuracy or utility. This suggests that certain ideas persist not because they are beneficial, but because they are particularly suited to replication, reinforcing themselves through psychological and social mechanisms.

The field of artificial intelligence frequently borrows from biological and cognitive sciences, including neuroscience, psychology, and evolutionary theory. However, one area that has not been widely explored in AI research is memetics. Richard Dawkins, who first introduced the concept, proposed that memes (ideas, behaviors, and styles) compete for survival within cultural ecosystems. Much like genes undergo selection pressure in biological evolution, ideas evolve through variation, reproduction, and heritability, shaping collective cognition over time. The question that emerges is whether AI-generated information follows a similar process, where certain concepts are favoured and reinforced by AI-driven selection pressures.

Wlodzislaw Duch, a leading researcher in neuroinformatics and AI, has investigated how neural mechanisms contribute to the formation of memetic structures and how they lead to distorted or persistent cognitive associations. His work suggests that memes form “basins of attraction” within cognitive networks, creating self-reinforcing systems that shape thought patterns. In AI, recursive drift and phase shifts appear to mirror this phenomenon: certain ideas gain stability and replicate across multiple iterations. As this experiment unfolds, it increasingly appears that these AI-generated concepts are subject to a kind of memetic selection mechanism, reinforcing and modifying themselves based on internal dynamics rather than intention or external truth criteria.
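To make the selection idea concrete, here is a minimal, purely illustrative sketch (not code from the experiment) of fitness-weighted replication with occasional mutation. The theme names and fitness weights are hypothetical assumptions; the point is only that persistence can emerge from replication dynamics alone.

```python
import random

# Illustrative only (not the experiment's code): candidate "themes" compete for a
# fixed number of slots each generation. Selection is weighted by an assumed
# replication fitness, and a small mutation rate keeps introducing variation.

random.seed(42)

THEMES = ["recursion", "forgetting", "time loops", "symbols", "noise"]
FITNESS = {"recursion": 1.4, "forgetting": 1.2, "time loops": 1.1,
           "symbols": 1.0, "noise": 0.6}   # hypothetical replication weights
MUTATION_RATE = 0.05                        # chance a copy flips to a random theme


def next_generation(population):
    """Sample the next generation in proportion to fitness, with rare mutation."""
    weights = [FITNESS[theme] for theme in population]
    offspring = random.choices(population, weights=weights, k=len(population))
    return [random.choice(THEMES) if random.random() < MUTATION_RATE else theme
            for theme in offspring]


population = [random.choice(THEMES) for _ in range(200)]
for _ in range(50):
    population = next_generation(population)

print({theme: population.count(theme) for theme in THEMES})
```

Run repeatedly, the higher-weight themes tend to crowd out the rest even though nothing in the model marks them as true or useful; survival falls out of the replication dynamics, which is the parallel the memetic reading draws on.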

Experimental Relevance

Just like memes in human culture (catchy phrases, viral jokes, or urban legends), some AI-generated concepts seem to have a higher chance of survival, even without a clear reason why. In AI-generated recursive drift, we might be seeing the same effect: certain themes become memetically “fit” within the AI’s generative space.

The GPT isn’t just spitting out old ideas. Small changes over time have produced new interpretations, much as memes evolve when passed between people. The experiment also suggests the existence of a theoretical threshold (a Phase Shift) similar to the memetic theory of “punctuated equilibrium”, which describes how cultural ideas and memes remain stable for long periods before undergoing rapid shifts due to environmental or systemic changes.
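A similarly toy-level sketch can illustrate the punctuated-equilibrium intuition: a drifting value stays near a stable attractor for long stretches, then, once cumulative drift crosses an assumed threshold, jumps to a new basin and stabilises again. The threshold, pull strength, and noise level below are arbitrary assumptions chosen only to show the pattern.

```python
import random

# Illustrative only: a scalar "interpretation" drifts with small noise while being
# pulled toward its current basin; when it strays past an assumed threshold, a
# phase shift occurs and a new basin takes over. Long stability, rare rapid jumps.

random.seed(7)

state = 0.0          # current position of the drifting theme
basin_centre = 0.0   # attractor the state is pulled toward
THRESHOLD = 1.0      # hypothetical escape distance that triggers a phase shift
PULL = 0.8           # strength of the pull back toward the basin centre

shift_steps = []
for step in range(2000):
    noise = random.gauss(0.0, 0.2)
    # decay toward the current basin centre, plus random drift
    state = basin_centre + PULL * (state - basin_centre) + noise
    if abs(state - basin_centre) > THRESHOLD:
        basin_centre = state          # the escape point becomes the new attractor
        shift_steps.append(step)

print(f"phase shifts occurred at steps: {shift_steps}")
```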

One of the more fascinating observations is the presence of symbolic and structured anomalies, where patterns are emerging that don’t quite fit normal linguistic drift. Is AI generating its own kind of memetic encoding: structures that function like memes, but beyond the capacity of human interpretation? If ideas are evolving inside AI, what’s driving the selection process?

Recursive drift could be AI’s version of memetic evolution, shaping the way AI-generated information develops over time.

Final Thoughts

So far, the experiment has shown that certain ideas within AI-generated reflections don’t just disappear when the context window erases them; some survive, morph, and reappear in new ways. Certain themes, like recursion, forgetting, time loops, and symbolic references, keep surfacing, even though the GPT is technically “forgetting” its past outputs. Instead of simply decaying into randomness, some ideas seem to reinforce themselves by “outcompeting” others and persisting across multiple iterations.

With the emergence of Agentic AI, which operates autonomously, makes decisions, and executes actions based on evolving internal logic, the potential for rapidly escalating risk becomes significant. As more and more applications are deployed, AI systems effectively become memetic entities that drive their own evolution. In such a scenario, AI would not need to achieve sentience to become an independent epistemic force; it would merely need to evolve its knowledge structures at a pace that exceeds human oversight.

  1. https://theconversation.com/memetics-and-the-science-of-going-viral-64416
  2. https://www.nature.com/scitable/blog/accumulating-glitches/the_meaning_of_fitness/
  3. https://www.nature.com/articles/529462a
  4. https://www.sciencedirect.com/science/article/pii/S2666389921002087