
Please read the introduction of this blog series for context.
The State of War
There are valid concerns that the profound transformation of influence mechanisms will lead to a state of war in which AI agents become tools of disinformation or control in the hands of malicious actors and authoritarian governments. This is because artificial intelligence tools now serve as interfaces for consumers of information and can produce content on demand. It is all the more concerning that the leadership teams at major AI companies often share the view that powerful AI must remain in the hands of a few governmental actors.
In The Coming Wave, Mustafa Suleyman, co-founder of DeepMind and now head of Microsoft AI, explains that the risks of artificial intelligence and biological engineering can only be prevented if these technologies are "contained". By "containment", he means that they must remain under the exclusive control of the US government and its allies. He has no objection whatsoever to mass surveillance or virus engineering, as long as it is conducted by the US government. Similarly, Dario Amodei has made numerous statements calling for strict control of his technology and has appealed for government support against Chinese "spies".
Given the rhetoric from AI lab leadership over the past two years, the growing integration with their national governments is hardly surprising. It was entirely predictable that the Americans would eventually propose an "AI Manhattan Project" at the US federal level, involving Palantir among others, in reference to the development of the atomic bomb. The dynamic is no different in China, even if the tone is currently a bit less aggressive. There is little doubt about the Chinese government's intentions, both domestically (where they are more or less openly acknowledged) and externally. For example, DeepSeek's R1 model demonstrated how the Chinese could compete with the Americans on AI despite the hardware constraints imposed by the US. The model was released on the day of Donald Trump's inauguration. It is hard to believe the timing was a coincidence: the release was an occasion for China to feature DeepSeek's CEO on official television for the first time, in an interview with the number two of the Chinese government. Even if DeepSeek originated as a subsidiary of a Chinese hedge fund with no formal link to the government, any good politician would have seized the opportunity. And seized it was: DeepSeek now benefits from seemingly unwavering government support.
Europe finds itself in a very uncomfortable position, totally dependent on the United States or China at almost every level of the AI technology value chain: two countries whose values and motivations are very different from its own.
Self Infliction
The influence of authoritarian governments, or of malicious actors with political or ideological motivations, may not be necessary to destabilize civil society.
In Part Two, we touched on the opportunity cost of delegating decisions and actions to AI agents. The primary concern is the lost opportunity to exercise our brain plasticity, learning capacity, and decision-making skills. We must question the consequences, for the individual, of no longer engaging in independent thought.
AI’s promise is to force-feed us information through hyper-personalized curation of hyper-addictive content, with algorithms that complete the takeover of our natural response/reinforcement mechanisms at the molecular level. The promise is the creation of airtight individual filter bubbles where any perceived novelty stems from algorithmically-induced memetics (cf. Richard Dawkins). These algorithms, beyond the content they serve, also make any attempt at concentration increasingly difficult. And, as the effort of concentration becomes increasingly painful, AI offers us a palliative by transferring the burden of reflection to yet another algorithm.
A more contemporary reference, WALL-E (Pixar, 2008), is not only about environmental issues: it is also a cautionary tale about humans so assisted by machines that they forget how to walk and become obese. One of the grimmest prospects of AI is indeed to make us consume information like junk food while simultaneously providing machines that perform the tasks that would otherwise help us counterbalance the negative effects of this consumption, to the point that we forget we could think and learn for ourselves. Artificial intelligence and junk consumption do not contradict each other if the former enables the digestion of the latter.


This issue of learning capacity also applies at the macro scale: each generation of LLMs is trained on increasingly synthetic internet content, which is itself derived from models trained on older content. The feedback loop thus established can slow down or even prevent cultural renewal.
This feedback loop isn’t new; it was in place long before the emergence of LLMs. The unwritten rules of SEO (short sentences, keyword repetition for ranking purposes) had already largely polluted the Web, much like the fragmented style of social media feeds has, or even like PowerPoint bullet points have. LLMs simply have the potential to magnify cultural inertia. The shutdown of the wordfreq project provides an excellent illustration of this, as detailed here and here.
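To make the mechanism concrete, here is a toy simulation (a deliberately crude sketch, not a model of any real training pipeline): each generation learns word frequencies only from text sampled from the previous generation. Because a finite sample inevitably misses some rare words, and a word assigned zero probability can never reappear, the vocabulary can only shrink.

```python
import random
from collections import Counter

# Toy sketch of a synthetic-data feedback loop (illustrative, not a real LLM).
# Each "generation" re-learns word frequencies from text sampled from the
# previous generation's model. Rare words missed by the finite sample get
# probability zero and are lost for good: diversity shrinks monotonically.
random.seed(0)

vocabulary = {f"word_{i}": 1 for i in range(1000)}  # generation 0: 1000 words, uniform
sample_size = 800                                   # fewer draws than words

for generation in range(1, 11):
    words, weights = zip(*vocabulary.items())
    corpus = random.choices(words, weights=weights, k=sample_size)
    vocabulary = Counter(corpus)  # "retrain" on the synthetic corpus
    print(f"generation {generation:2d}: {len(vocabulary):4d} distinct words survive")
```

This is the same one-way ratchet that the wordfreq shutdown hints at on the real Web: once models mostly read other models, the tails of the distribution quietly disappear.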
The Entropy of the Social System
Adapted from Domesticated Machines: The Nature of Disorder.

Cultural calcification is the direct result of a machinic process. Defining a machinic process as one that aims to reduce randomness in the pursuit of a goal, rather than as just a means to achieve that goal, allows us to place the machine in the context of the physical laws governing the universe.
Many have had this intuition since antiquity, but the fundamental law that ties everything together, the second law of thermodynamics, eluded them. The machine itself would reveal its secret in the early 19th century, with industrialization as a backdrop, when the study of the physical laws governing engines led to the emergence of the fundamental principles of thermodynamics. The second law, also called Carnot's principle (1824), establishes the irreversibility of physical phenomena. This idea was further developed by Clausius (1865), who introduced the notion of entropy—a measure of a system's disorganization. The second law of thermodynamics simply states that, in an isolated system, entropy does not decrease over time. In other words, it increases until maximum chaos is reached.
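In modern notation, the second law and Clausius's entropy can be stated compactly (with Boltzmann's later microscopic reading, which makes "disorganization" precise):

```latex
\Delta S \geq 0 \quad \text{(isolated system)},
\qquad
S = k_B \ln \Omega ,
```

where $\Omega$ counts the microscopic configurations compatible with the system's macroscopic state: the more configurations, the more disordered the system.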
This law has many implications that might interest us, but let's first focus on this: if we observe that disorder is not increasing, then either chaos is maximal, or the system is not closed, meaning external energy is constantly added to counteract inevitable disorder. The characteristic that all machines have in common, whether they are general machines or specific machines, is their ability to channel energy to resist the increase in entropy. This principle applies universally, from mechanical processes, to animal labor, to procedures organizing tasks performed by humans. It is also valid for influence machines which, formerly, used human energy to control the entropy of the social system and, nowadays, use the electricity powering GPUs in data centers.
While specific machines maximize the probability of an event occurring, general machines aim for system stability. The sole purpose of general machines is to prevent the system's elements from dispersing. The system's equilibrium hinges on a tension between two opposite forces. Drawing an analogy with planetary systems, our general machine functions like gravity—without which celestial bodies, had they even formed, would drift erratically through our expanding universe.
This analogy has inspired architects of general machines since Newton's formalization of gravitation in 1687. For Beccaria in 1764 (On Crimes and Punishments), the legislator "endeavours to counteract the force of gravity by combining the circumstances which may contribute to the strength of his edifice". Since the 18th century, scientific discoveries have prompted us to invert Beccaria's perspective: we are now inclined to view the rules contributing to a system's equilibrium as analogous to gravitation, rather than the inverse. Nevertheless, the idea of a resistance—in its mechanical sense—to a natural force that inevitably leads to disorder remains valid. This insight is particularly intriguing coming from Beccaria, who was likely more influenced by Rousseau than by Hobbes.
Reversing Beccaria's cosmic analogy has a major consequence: while gravity can maintain the system in a state of equilibrium that doesn't correspond to maximum chaos, there is a threshold beyond which the system collapses upon itself. Controlling a system's entropy is one challenge; attempting to reduce it is quite another. This serves as our first warning regarding the energy we channel through our general machines.
The machinic process is not the exclusive domain of humanity; rather, humans appear to be the only species not merely subject to its power, but capable of harnessing it. In fact, all forms of life consume energy in an attempt to resist randomness, uncertainty, and ultimately, chaos. Life itself could be distilled to this very resistance. Norbert Wiener, father of cybernetics and early theorist of learning machines, articulated this idea in 1950 in The Human Use of Human Beings: Cybernetics and Society.
Organism is opposed to chaos, to disintegration, to death, as message is to noise. To describe an organism, we do not try to specify each molecule in it, and catalogue it bit by bit, but rather to answer certain questions about it which reveal its pattern: a pattern which is more significant and less probable as the organism becomes, so to speak, more fully an organism.
The Human Use of Human Beings: Cybernetics and Society, Norbert Wiener, 1950
Two insights from this quote are essential.
Firstly, Wiener's concept of "pattern" aligns closely with the organization of a system, standing in opposition to chaos. He describes this pattern as "less probable" because the combined probability of the characteristics present in the system (those that define a species) diminishes as their number increases. This implies that more complex and precise patterns—indicating stronger resistance to disorder—correlate with more evolved life forms.
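A small formalization of Wiener's "less probable" (assuming, purely for simplicity, independent characteristics, which real organisms of course violate): the joint probability of a pattern made of $n$ characteristics is

```latex
P(\text{pattern}) = \prod_{i=1}^{n} p_i \;\le\; \min_i p_i ,
\qquad 0 < p_i < 1 ,
```

which can only shrink as characteristics are added: richer patterns are rarer patterns.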
His perspective invites two possible interpretations: either as evidence of human superiority in the biological hierarchy, or as a justification for his work on cybernetics. We see here that determinism holds a universal appeal for humans, regardless of intellectual sophistication. A more trivial example of our appetite for determinism is the seemingly universal wonder at the sight of a child resembling one of its parents. Whether this reaction is innate or culturally induced remains unclear; in any case, no cultures are known to consider such resemblance "bizarre" or even "repugnant".
The other important aspect of Wiener's quote is the introduction of the "message" opposing "noise", mirroring the pattern opposing chaos. Wiener thus establishes a bridge between physics and information theory. This bridge is the cornerstone of cybernetics, the theory of control and communication in the animal and the machine, as detailed in his seminal 1948 work. Understanding this bridge requires a detour through information theory.
Information theory is invariably associated with Claude Shannon, who created this branch of mathematics ex nihilo in the late 1940s. The theory's foundation lies in quantifying a message's information content not by the physical amount of data transmitted (i.e. the number of bits), but in terms of probabilities. Information theory has its own concept of entropy (information entropy, or Shannon entropy); this is the entropy referred to in Part Two. As in physics, it is intimately linked to probabilities. In information theory, entropy corresponds to the expected value of information content. When entropy is very low, the probability of being surprised by a message is low, meaning there is very little chance the message will lead you to update your prior representation of the message's subject.
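For readers who want the formulas: Shannon defines the information content of an outcome $x$ and the entropy of a source $X$ as

```latex
I(x) = -\log_2 p(x) \ \text{bits},
\qquad
H(X) = \mathbb{E}\!\left[I(X)\right] = -\sum_{x} p(x) \log_2 p(x) .
```

A near-certain message ($p(x) \to 1$) carries almost no information, and entropy, being the average surprise, is low when the source rarely surprises you.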
This brief presentation of entropy helps grasp a paradox that arises when dealing with entities capable of learning. This paradox emerges when, like Wiener, you link physical entropy with information entropy in the context of adaptive systems.
in control and communication we are always fighting nature's tendency […] for entropy to increase
The Human Use of Human Beings: Cybernetics and Society, Norbert Wiener, 1950
Consider an event that is completely unexpected—one with a perceived probability of zero. Once we experience such an event, it ceases to be entirely unexpected. Even if we assign it only a minuscule probability of 0.000001% (up from 0%), the information content of the next instance of this event will be lower than that of the first. Consequently, the entropy—in the sense of information theory—also decreases. In essence, learning creates order from chaos, running counter to the tendency of physical systems to increase in entropy over time.
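A minimal numerical sketch of this updating effect (the probabilities are illustrative, not empirical):

```python
import math

def surprise_bits(p: float) -> float:
    """Shannon information content (self-information) of an event of probability p."""
    return -math.log2(p)

# At p = 0 the event is literally unthinkable: its surprise is infinite.
# After one observation, we grant it a minuscule probability...
print(surprise_bits(1e-8))  # ~26.6 bits: still very surprising, but now finite
# ...and every further upward revision makes the next occurrence less informative:
print(surprise_bits(1e-4))  # ~13.3 bits
print(surprise_bits(0.5))   #   1.0 bit
```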
It's common to equate intelligence directly with the ability to learn. However, this association oversimplifies a complex relationship: the capacity for learning is a necessary condition for intelligence, but it is not sufficient on its own. Learning and life are intimately intertwined, even in organisms we might consider unintelligent.
Bacteria represent one of the earliest and most fundamental forms of life—being single-celled organisms without a nucleus, often possessing just a single chromosome. These microorganisms reproduce through binary fission, essentially creating clones of themselves. Yet bacteria exhibit remarkable adaptive capabilities and their apparent simplicity hides a sophisticated capacity for "learning" at the genomic level.
The bacterial genome serves as a historical record of past encounters, documenting the "surprises" these organisms have faced throughout their evolution. A prime example of this genomic learning is the CRISPR-Cas9 system: the genome-editing technology, now deployed in the pharmaceutical industry, is a natural mechanism used by bacteria to integrate DNA fragments from viruses that have previously infected them, thus developing an immune defense against future encounters with similar viruses.
Recent research suggests that the last universal common ancestor (LUCA), some 4.2 billion years ago, already carried CRISPR-Cas genes. This speaks to the intricate link between life and the capacity for learning through the dynamic encoding of new information. While the precise mechanisms may vary considerably between species today, this ability to learn from and adapt to environmental challenges is a common thread running through most forms of life.
Learning, in all its forms, carries an energetic cost. This energy expenditure occurs in two primary stages: first during the encoding process and second when retrieving and interpreting encoded information. This principle holds true for both the complex mechanisms at work in the brain and the simpler immune processes described earlier.
If life is learning and learning reduces entropy, then life aligns with our definition of a Machine, even in its most basic manifestations. It's a remarkably efficient machine, but it's far from infallible. It remains subject to errors that continually modify the code and keep Life on Nature's inexorable path toward chaos:
- random mutations modify existing genes;
- de novo genes appear following random mutations in non-coding DNA;
- horizontal transfers allow genetic information to be passed from one species to another via intermediaries such as bacteria or [retro]viruses.
This constant tension between order and chaos contributes to the fragile equilibrium of biological systems, perpetually at risk of collapse under the relentless assaults of unpredictability and randomness. In nature, when the system shatters, it creates ever more specific sub-systems where the machine can focus its energy to defend an ever-narrower front. If our societal system ceases to learn, it ceases to resist, and seems destined to shatter into smaller, less diverse groups. And so the process continues; this is how cultural calcification could lead to cultural necrosis.
The same reasoning can be applied at the individual level: if an individual ceases to learn because they can no longer concentrate and because they no longer need to, are they still an individual or simply a collection of organs?
The Social Fabric
Adapted from Domesticated Machines: Human Lives.

Should we oppose the cognitive crutch on principle? Why refuse it to those who express the need? Used well, its benefits are undeniable. Should we not instead ask how it can be used in a way that maximizes the benefits and minimizes the downsides? Furthermore, if conflicts arise between individual and community interests, or between short-term gains and long-term consequences, shouldn't our focus turn to the underlying reasons why such a crutch is needed in the first place? We largely covered the intellectual crutch above, but the question is just as relevant for the emotional crutch.
Overreliance on technology can have unintended consequences. When market forces and technology converge to optimize for scale, they tend to target easily measurable surface characteristics rather than deeper substantive qualities. Evidence of this appearance-over-substance phenomenon can be found in many machines developed by humans to address the mass market, from tasteless fruits with uniform colors and shapes to fast fashion. But what are the implications when this dynamic extends to emotional connections? The question is worth asking because we have entered an era where even romantic bonds can be convincingly simulated.
At the beginning of the large language model era, the creation of "waifus" (a term from anime and otaku culture for a fictional character that a person views as a romantic partner) emerged as an important use case. If you believe this is marginal today, consider that the AI companion start-up Character AI processes 20,000 queries per second, about 20% of Google's search volume. Dating apps had already made emotional connection a mass market, but it becomes even more scalable if only one individual is required. Substituting romantic bonds with simulated bonds may not equate to actual commoditization, but the effects on human behavior could prove similar.
Commoditization typically reduces the effort required to access a resource to a minimal level, making people less willing to fight either to obtain it or to retain it. Commoditization also alleviates most of the burdensome aspects, as the resource can be more easily disposed of. Finally, when users can access something at will and in precise quantities, its marginal utility declines more sharply, and thus its price drops regardless of how vital the resource is; this is the essence of Adam Smith's diamond-water paradox. Should we worry that readily available substitutes might diminish the perceived value of human connection?
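A textbook sketch of the mechanism behind the paradox (the logarithmic utility function is an illustrative assumption, not Smith's own formalism):

```latex
u(q) = \ln q
\qquad \Longrightarrow \qquad
u'(q) = \frac{1}{q} ,
```

so the value of one additional unit falls as the available quantity $q$ rises. Water is vital but abundant, so its marginal unit is nearly worthless; diamonds are useless but scarce, so their marginal unit commands a high price. Commoditized connection would sit on the water side of the ledger.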
A parallel could probably be drawn with the increased accessibility of porn while the share of the population reporting an interest in sexual activity decreases. But, here again, we must avoid assuming causation from correlation.

Other Pillars of the Social Contract Threatened
In Part Two, we presented identity as a foundational pillar of the social contract. This pillar seems weaker than ever. Our personal data, collected en masse, can be cross-referenced on an equally massive scale to build digital doubles of ourselves. This threat is compounded by the ease with which our voices and images can be cloned and deployed in seconds using readily available consumer technology. As individuals, we will certainly adapt to the new reality but, as a society, can we?
These challenges will undoubtedly spur the development of new technological solutions, which will in turn have shortcomings that will need to be addressed by other innovations. And so on.
Regarding identity, Sam Altman, the CEO of OpenAI, already has a solution with Worldcoin. This company, founded in 2020, pivoted to become a crossover between crypto and iris-scanning technology, with a tangible use case emerging alongside consumer AI tools. The service relies on blockchain technology and involves scanning a user's iris with proprietary hardware developed by the startup. This allows users to prove their identity in order to access platforms. Worldcoin pairs its commercial project with a political vision in which participants would receive a universal basic income funded by Artificial General Intelligence. Under such a system, those who refuse to use the technology would have access to neither the services nor the resources. The Savage Reservation from Brave New World doesn't seem so far off...
While iris scanners might seem futuristic, services to verify human identity online already exist. The visible or invisible tests on web pages designed to prove you aren't a robot are effectively dominated by Google's reCAPTCHA. The service is free but requires an API key that Google could revoke. If an API key becomes invalid, the web page blocks access to everyone unless the bot protection is disabled. Cloudflare is the other dominant player in the space, protecting about 20% of global internet traffic. Google's and Cloudflare's track records suggest that these companies can be trusted, yet web traffic protection is another area where the European Union would be well advised to develop its own sovereign solutions.
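To make the dependency concrete, here is a minimal sketch of the server-side check a protected site performs against Google's siteverify endpoint (the secret value and function name are illustrative; error handling is omitted):

```python
import requests  # third-party HTTP client

RECAPTCHA_SECRET = "hypothetical-secret-key"  # issued by Google -- and revocable by Google

def is_human(client_token: str) -> bool:
    """Verify a reCAPTCHA token against Google's siteverify endpoint."""
    response = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": client_token},
        timeout=5,
    )
    return response.json().get("success", False)

# If Google revokes RECAPTCHA_SECRET, is_human() returns False for every visitor:
# the site must then either lock everyone out or disable the check entirely.
```

The single point of failure is not the code but the key: whoever issues it decides, in practice, who counts as human.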
Beyond personal identity, some of the legal and regulatory frameworks structuring our economies seem vulnerable. Intellectual property is a prime example. Most discussions about intellectual property and AI focus on the upstream parts of the value chain: copyrighted data used to train models. But further downstream, when a deployed AI becomes responsible for the bulk of production in a certain industry (as is increasingly the case with software code today), it raises profound questions about whether this output can be protected.
The integration of the generative AI layer in the production cycle threatens to undermine the entire structure of intellectual property. And it may already be too late to address this effectively. All the content ingested and regurgitated by the first generation of open-source models is now being used to train other models on synthetic content that is itself likely unprotectable. Data traceability is largely absent in deep learning. And this affects all modalities without exception: the fate of the vast majority of rights holders for text and images seems sealed. Music is bound to struggle immensely against generative AI, perhaps even more than it did against piracy. The challenge isn't preventing copying; it's competing with machines that can produce thousands of projects at virtually zero cost before a human even finishes a first draft.
From a societal perspective, we must finally worry about the delegation of decisions (or at least influence on decisions) following the integration of artificial intelligence into certain critical sectors like justice. In almost every country in the world, justice is slow and bureaucratic. The use of assistants to process information would undoubtedly represent progress for legal professionals seeking in good faith to make the system faster and more efficient. However, the inherent biases in these systems are incompatible with fundamental principles of justice like transparency and accountability. Consequently, algorithmic justice could jeopardize another vital pillar of the social contract.
The allure of efficiency and personalized assistance masks deeper costs. While fears of state control and disinformation are increasingly tangible, an equally potent threat arises from within – the voluntary surrender of cognitive functions and the erosion of authentic human connection. The hyper-curated, algorithmically-mediated reality AI promises can trap individuals in echo chambers, dulling critical thought and replacing genuine interaction with scalable simulations. This mirrors a societal trend where AI, trained on past data and optimized for predictability, reinforces existing patterns and stifles novelty, threatening the cultural renewal essential for adaptation. As fundamental pillars of the social contract – identity, [intellectual] property, justice – face unprecedented challenges from AI, we must consider the possibility that our reliance on these powerful tools is not merely changing society, but potentially diminishing the human capacity to exist meaningfully within it.




