Part One: Progress in Perspective

Please read the introduction of this blog series for context.

While intense debates continue over the potential, the limitations, and the risks of artificial intelligence, user adoption is undeniable, especially since the arrival of large language models (LLMs). Unlike blockchain applications – the most recent tech bubble (2020-2022) – large language models are readily accessible via chatbots, which allows users to assess the value of these new tools for their specific needs. Figures reported by model developers themselves should be taken with a grain of salt, particularly during fundraising, but OpenAI's reported data (400 million weekly users for ChatGPT) seems credible and suggests the hype is justified this time. Users are finding practical applications, believe the technology genuinely creates value, and see it as more than a zero-sum game.

As early as 2017, European authorities foresaw that artificial intelligence systems might require a framework to protect consumers within the EU market. This foresight may have come too early, however, as consumer-facing models only hit the market after the first drafts of regulations were published. In any case, these drafts were too theoretical, lacking a firm grasp of the typical user's reality. Ultimately, the AI Act fell short of its initial ambitions: (1) protecting European citizens (whether direct or indirect users of AI models) and (2) fostering innovation within the internal market.

Given adoption is widespread and the regulatory oversight remains minimal, we can expect typical free-market dynamics to unfold. This implies that, over the medium-to-long term, market forces should weed out inferior products and services. However, reaching this equilibrium takes time, as a market education phase is required to reduce the information asymmetry between AI providers and their clients – and, in this case, their clients' clients.

More than two years after ChatGPT's release, and despite the impressive adoption figures, the process of market education has barely begun. Science fiction scenarios, both utopian and dystopian, still dominate most discussions about AI. Those best positioned to discuss AI – whether researchers, engineers, or employees at companies marketing AI solutions – often speak in stark, uncompromising terms. Many claim that these models will soon be capable of replacing humans in nearly all tasks, and some fear that these superhuman capabilities might be turned against humanity.

They might be right. Or perhaps, they simply lack perspective.

A Century of Perspective

In winter 1926, Antoine de Saint Exupéry was waiting for the omnibus that would take him to the Toulouse airbase. There, for the first time, he would be charged with flying the African mail over the Pyrenees, Spain, the Mediterranean Sea, and the Sahara Desert, all the way to Dakar. From where we stand today, it is easy to assume the Aéropostale pilots' task was simply delivering mail to a destination. But their true job was to tame primitive flying machines, hoping to reach their destination alive. Over the following decades, technological innovations would allow the airplane to revolutionize far more than just communication; it would fundamentally reshape humanity's relationship with space and time.

A century ago, the general sentiment towards the airplane was as ambivalent as the current sentiment towards AI. Yet, there is a notable difference: for the pioneers of aviation, the dangers were real and immediate. Among the pilots of the 1920s, few survived long enough to recount their experiences. Saint Exupéry was one of the exceptions – until his own plane vanished over the sea in 1944. Chapter 3 of Wind, Sand and Stars (Terre des Hommes), written in 1939, echoes current debates about progress, risk, and the human element in technology.

It seems to me that those who are overly frightened by our technical progress confuse ends and means. It is true that anyone whose sole objective is material gain will harvest nothing worthwhile. But how can anyone even conceive that the machine is an end? It is a tool. The airplane is not an end, it is a tool. A tool like the plow. [...]

"Agreed!" my dreamers will say, "but explain to us why it is that a decline in human values has accompanied the rise of the machine?" I can see that it is tempting to accuse industrial progress of this evil. But we lack perspective for the judgment of transformations that go so deep. What are the hundred years of the history of the machine compared with the two hundred thousand years of the history of humanity? It was only yesterday that we started to settle in this landscape of laboratories and power stations, that we took possession of this new, unfinished, home we live in. Everything round us is new and different - our concerns, our working habits, our relations with one another.

[...]

Each step on the road of progress takes us farther from habits which [...] we only recently acquired. [...] We are in truth settlers who have not yet founded our homeland.

We are just young barbarians still marveling at our new toys. [...] We do not pause to ask ourselves why we race to fly the highest and the fastest: right now, the race itself is more important than the end. And it always works this way. For the colonial soldier who looks to build an empire, the meaning of life is to conquer. He despises the colonists. But isn’t the settling of the colonists the very point of his conquest?

Carried away by the excitement of our rapid mechanical conquests, we have committed countless arms and bodies into the building of infrastructure or machinery, thinking like soldiers and overlooking for a while that machinery is being built for the service of humans.

Little by little the machine will become part of humanity. Read the history of the railways in France, or anywhere else: they had all the trouble in the world to tame the people of our villages. The locomotive was an iron monster. Time had to pass before men forgot what it was made of. Mysteriously, life began to run through it, and now it is wrinkled and old. The sailing vessel itself was once a machine born of the calculations of engineers, yet it does not disturb our philosophers. [...] There is a poetry of sailing as old as the world. There have always been seamen in recorded history. Whoever assumes that there is an essential difference between the sloop and the airplane lacks perspective. Every machine will gradually take on this patina and lose its identity in its function.

Have you ever thought, not only about the airplane but about whatever man builds, that all of our industrial efforts, all the computations and calculations, all the nights spent over blueprints, invariably culminate in the production of a thing with simplicity as its sole and guiding principle? Several generations of craftsmen may be required to achieve this goal [and the sign] that perfection is finally attained is not when there is nothing else we could add, but instead when there is nothing else we could remove. It results from this that absence of invention is intrinsic to perfection of invention, or rather when all visible evidence of invention has to be refined out of the instrument so that it looks to us as natural as a pebble polished by the waves. So we even forget that it is a machine.

There was a time when a pilot sat at the center of a factory. Flight set us factory problems. The indicators that oscillated on the instrument panel warned us of a thousand dangers. But, in the machine of today, we forget that engines are whirring: the engine, finally, has come to fulfill its function, which is to whirr as a heart beats - and we give no thought to the beating of our heart.

Wind, Sand and Stars (Terre des Hommes), The Airplane (Chapter 3), Antoine de Saint Exupéry, 1939. Translated from French by PITTI; differs from the official English version.

Today's AI corresponds to the aviation of the 1920s. Early pilots couldn't simply start the engine and hold the course to their destination. They had to navigate by sight, they had to "monitor [the] work [of the machine] by the shaking of their seats", and they had to tinker when things went wrong. This happened frequently; as Saint Exupéry noted, "In those days our planes frequently fell apart in mid-air". Similarly, those working closely with AI today are aware of their tools' weaknesses. They know how to handle them carefully, having learnt to push them faster and higher while trying to avoid problems. They know how to 'repair' things when necessary, even if it means one or two nights of work in a metaphorical desert.

Anonymous pioneers learning to fly their machines.

AI pioneers aren't affected by noise, cold, heat, hunger, or thirst. They are not threatened by hostile populations. They do not need rescuing. They do not die. This makes it much easier to downplay the technical limitations of the tools, and commercial/financial incentives encourage this, especially as investors pour billions of dollars into the ecosystem.

Technical Limitations

Sometimes the technical limitations of the imperfect tool are dismissed based on the conviction that they are merely transient problems. The belief is that industrial efforts will eventually give the tool the elemental purity of a pebble polished by the sea.

To the technician, some common criticisms might be attributable to rough edges that will inevitably be smoothed out over time:

  • Annotation Work / Click Workers

The quality of a model is directly linked to the time and attention invested in preparing the training data. This requires colossal annotation work. Within the training data production chain, some players are more visible than others, with 'unicorns' like ScaleAI on one end and click workers often based in emerging countries (Nigeria, Philippines, India...) on the other. We must put click work in the context of the "countless arms and bodies" that, "carried away by the excitement of our rapid mechanical conquests", we have "committed into the building of infrastructure".

Stories about annotation work can be shocking, but we must keep in mind that we are taking a snapshot at a specific moment. The leverage effect from this preparation work is enormous for each subsequent generation of AI models. The percentage of human labor is set to decrease with the renaissance of reinforcement learning (PPO, GRPO,...). Where verification functions can be built, reinforcement learning could reduce human labor to a minimal share, as the model alone will iterate until it finds the solution (test-time scaling - see Agents in Part Two). This renaissance is the origin of so-called "reasoning" models.
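To make the leverage effect concrete, here is a minimal sketch of a verification-driven loop. Everything in it is hypothetical (the toy "model" just makes noisy guesses at addition problems, and the verifier simply recomputes the sum): the point is only that once a programmatic check exists, the model can iterate until it passes, and human labor shrinks to building the verifier rather than labeling every example.

```python
import random

def verifier(problem, answer):
    # Programmatic check (hypothetical example: addition problems).
    # Correctness is recomputed, so no human annotator labels this pair.
    return answer == problem[0] + problem[1]

def model_attempt(problem, rng):
    # Stand-in for a model sampling a candidate answer (a noisy guess here).
    return problem[0] + problem[1] + rng.choice([-1, 0, 0, 0, 1])

def self_generate_training_data(problems, max_attempts=10, seed=0):
    # The model iterates until the verifier accepts a candidate;
    # accepted (problem, answer) pairs become training data for the
    # next round, with no hand labels involved.
    rng = random.Random(seed)
    accepted = []
    for problem in problems:
        for _ in range(max_attempts):
            answer = model_attempt(problem, rng)
            if verifier(problem, answer):
                accepted.append((problem, answer))
                break
    return accepted

data = self_generate_training_data([(2, 3), (10, 7), (5, 5)])
print(data)  # only verified pairs survive
```

Real pipelines verify much harder things (unit tests for code, symbolic checks for math), but the division of labor is the same: human effort goes into the verification function, and compute replaces annotation.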

I’ve recently read the following criticism of Chinese models: the Chinese have a domestic advantage of cheap annotation and data preparation, which is also the obvious conclusion from reading the presented figures. I read "the presented figures" as the numbers presented by Deepseek, given that Deepseek is the only Chinese company that gets coverage from Western media outlets. I am not certain the criticism is entirely justified. By sharing their entire recipe, down to the "hack" of Nvidia's software to bypass US restrictions, Deepseek provided a perfect demonstration of the leverage effect on annotation work: they took an open-source model from Qwen (another Chinese developer) and significantly increased its performance by training it with "only" 800,000 hand-selected reasoning-chain examples (see method in paragraph 2.4 and results). 800k may seem enormous, but it is actually extremely small by the standards of the annotation industry. Lambda and Nemotron-H are other examples, achieving state-of-the-art results with only 0.1% of the typical data requirements thanks to architectural innovations and logit distillation (see below). Data work is now more about quality than quantity.

We need to take a step back regarding the term "click worker". It has a negative connotation, yet this work adds real value and will be at the heart of the real AI revolution. It is crucial not only for training specialized models but also for evaluation, enabling users to make informed choices about solutions. Rather than dismissing the value of click work, Europe would be well-advised to develop its own expertise in the field.

  • Stochastic Parrots

"Probabilities and the theory of large numbers relegate affirmation versus negation to statistically non-significant peculiarities."

The problem here is not the technology itself but what we do with it in the rush to get a product in front of potential clients. The root of the issue lies specifically in the final stage of the process: the point where probability distributions collapse into specific tokens.

Generative Pre-trained Transformers (GPTs) generate probability distributions (inferred from the logits, i.e., the scores attributed to each potential token in the vocabulary; see Tokenizers in Part Two). There is a lot of information in the shape of these probability densities, typically information signalling high uncertainty, which cautions against making any firm assertion.

Researchers know the value of this information: it is used for model "distillation", a process where smaller models are trained not by directly learning from vast amounts of text to predict the next token, but rather by learning to replicate the probability distributions generated by larger models given the same input text (see Gemma 3). You cannot put a probability density in the hands of an average user; they wouldn't know what to do with it and wouldn't find the technology very promising. So, this crucial information is swept under the rug, and we use an algorithm called a sampler at the end of the chain to crystallize the choice of the next token. The GPT itself doesn't make assertions or denials; it merely suggests degrees of certainty or doubt. It's the sampler that resolves this uncertainty, often in a somewhat arbitrary manner, to select a token.
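A rough sketch of why the distribution matters for distillation, using toy hand-picked numbers rather than any real model's outputs: a common distillation objective compares the student's full distribution to the teacher's (e.g., via KL divergence), so a student that only matches the single "correct token" still pays a penalty for discarding the teacher's uncertainty.

```python
import math

def kl_divergence(teacher_probs, student_probs, eps=1e-12):
    # Distillation-style objective: KL(teacher || student), measuring how
    # much of the teacher's distributional information the student loses.
    return sum(p * math.log((p + eps) / (q + eps))
               for p, q in zip(teacher_probs, student_probs))

teacher   = [0.70, 0.20, 0.08, 0.02]  # full distribution from the large model
student_a = [0.65, 0.23, 0.09, 0.03]  # mimics the shape of the teacher's beliefs
student_b = [0.97, 0.01, 0.01, 0.01]  # only matches the single "right" token

print(kl_divergence(teacher, student_a))  # small loss
print(kl_divergence(teacher, student_b))  # larger loss: uncertainty signal lost
```

Training on one-hot labels alone would score student_b as perfect; training on the teacher's distribution rewards student_a, which is the whole point of logit distillation.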

An interesting parallel can be drawn with quantum physics, where elementary particles do not exist in a determined state, but rather in a superposition of states with associated probabilities, until an observation is made. It is the act of measurement — the interaction with the measurement apparatus — that triggers the collapse of this superposition into a single, determined state [1]. For LLMs, the sampler works analogously to the measurement tool in quantum physics: specific tokens only 'come into existence' through the action of the sampler.

The sampler itself is not artificial intelligence; it's a relatively standard computational function that incorporates an element of randomness. Yet, the sampler is the source of many misconceptions about AI, such as the notion that LLM outputs for a given prompt are inherently irreproducible. This isn't true; reproducibility can be achieved simply by controlling the sampler (which is feasible because computational 'randomness' is typically pseudo-random).
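A minimal sketch makes both points concrete (the four-token vocabulary and its logits are invented for illustration): the model's output is a distribution expressing degrees of certainty, and it is the sampler — ordinary pseudo-random code — that collapses it into one token. Fixing the seed fixes the output, which is why reproducibility is achievable.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Turn raw logit scores into a probability distribution over tokens.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, seed=None):
    # The "sampler": collapses the distribution into one concrete token id.
    # It is ordinary pseudo-random code, so fixing the seed fixes the choice.
    probs = softmax(logits, temperature)
    threshold = random.Random(seed).random()
    cumulative = 0.0
    for token_id, p in enumerate(probs):
        cumulative += p
        if threshold < cumulative:
            return token_id
    return len(probs) - 1

# Toy logits over a 4-token vocabulary: tokens 0 and 1 are nearly tied,
# i.e., the model itself only signals high uncertainty between them.
logits = [2.0, 1.9, 0.3, -1.0]
print(softmax(logits))
print(sample_token(logits, seed=42) == sample_token(logits, seed=42))  # True
```

Production samplers add refinements (top-k, top-p, repetition penalties), but the principle is unchanged: the arbitrariness, and the reproducibility, both live in this final step, not in the model.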

  • Hallucinations

One intrinsic flaw of current AI models, and perhaps the hardest to alleviate, is the problem of hallucinations. Each token is, in a way, a hallucination; model training primarily serves to align these generated outputs with our perception of reality. Without falling into the trap of anthropomorphism, the analogy with dreaming is relevant. Our dreams don't engage our conscious intelligence; they form while we're unconscious, pieced together from snippets of information stored in unclear ways. Yet we like to interpret them, to see coherence or even meaning in them. Did you know it is very difficult to do mathematics in your dreams? This is one of the many fascinating parallels between dreams and LLMs.

We don't blame our brain for the fantasies it generates in our sleep. When we interpret these fantasies as incoherent, we don't conclude that our brain is faulty. Similarly, perhaps we shouldn't blame the AI tool for the ways it's misused. The user tricked by hallucinations can only blame themselves for their naivety and/or laziness. But the technician's explanations are unlikely to shift the user's perspective, because the former uses a tool while the latter uses a machine. And the difference is significant.

What characterizes a technical tool is, first of all, its availability, its flexibility of use, its specialization; a good craftsman has a wide range of tools adapted to different specific uses; he makes some of his tools or modifies them. The machine, on the contrary, does not move; it must be made profitable, thus operated as much as possible; to produce at a high rate, processes must be standardized, objects produced simplified; it has a rhythm to which one must submit; a large part of the production knowledge is concentrated in it and the people who operate it become interchangeable; it cannot be continuously improved, but when it seems ill-suited, it must be changed, an operation one hesitates to perform because it is expensive. Finally, this opposition between tools and machines led Marx to highlight an inversion of roles between man and the means of production: in the artisanal mode, it is the workers who produce with the help of tools, in the industrial mode, it is the machines that produce with the help of workers; in one case, man is at the center of production, in the other, the machine is.

[...]

Thus is explained the frequent dichotomy between the views of tech savvy people and those of actual users: faced with the evidence of a troublesome shortcoming in a system, the tech savvy person will generally say: "this problem can be taken into account", thinking of the computer-as-a-tool; but the computer-as-a-machine is subject to other requirements of deadlines and implementation costs, of relative simplicity of the programs implemented, so that any inadequacy of the system for the problem considered risks having major consequences: outside of well-defined applications and sufficiently repetitive problems (payroll, accounting) the implementation of [Enterprise Resource Planning] software is a minefield. Even then, one is fortunate when the system's flaws appear clearly, because in certain applications, the machine's logic prevails, a logic all the more blind because, due to the opacity of the systems' functioning and the de-responsibilization of the agents they entail, it happens that vigilance is no longer exercised over the reliability of the data entered into the system and the relevance of the processing they undergo.

One might say that on the scale of human evolution, computers are a very recent tool and that this explains some of the teething problems of a tool of considerable power. But it is to be feared that progress in the implementation will be slow as long as it is considered only as a tool, which is the underlying vision of most current discourse on these systems.

Une technologie invisible - L’impact des instruments de gestion sur l’évolution des systèmes humains (An Invisible Technology - The Impact of Management Instruments on the Evolution of Human Systems), Michel Berry, 1983

This distinction between tool and machine was not lost on Saint Exupéry. He saw a machine — aviation — take shape as a system that enabled the large-scale use of a revolutionary tool: the airplane. The tool was then no longer in man's hands; man put himself at the service of the machine:

The squall has ceased to be a cause of my complaint. The magic of the craft has opened for me a world in which I shall confront, within two hours, the black dragons and the crowned crests of a coma of blue lightnings, and when night has fallen I, delivered, shall read my course in the stars.

Thus I went through my professional baptism and I began to fly the mails. For the most part the flights were without incident. Like sea-divers, we sank peacefully into the depths of our element. It is well explored today. Pilot, mechanic, and radio operator are shut up in what might be a laboratory. They are obedient to the play of dial-hands, not to the unrolling of the landscape. Out of doors the mountains are immersed in tenebrous darkness; but they are no longer mountains, they are invisible powers whose approach must be computed. The operator sits in the light of his lamp, dutifully setting down figures; the mechanic ticks off points on his chart; the pilot swerves in response to the drift of the mountains as quickly as he sees that the summits he intends to pass on the left have deployed straight ahead of him in a silence and secrecy as of military preparations. And below on the ground the watchful radio men in their shacks take down submissively in their notebooks the dictation of their comrade in the air: "12:40 a.m. En route 230. All well."

So the crew fly on with no thought that they are in motion. Like night over the sea, they are very far from the earth, from towns, from trees. The motors fill the lighted chamber with a quiver that changes its substance. The clock ticks on. The dials, the radio lamps, the various hands and needles go through their invisible alchemy. From second to second these mysterious stirrings, a few muffled words, a concentrated tenseness, contribute to the end result. And when the hour is at hand the pilot may glue his forehead to the window with perfect assurance. Out of oblivion the gold has been smelted: there it gleams in the lights of the airport.

Wind, Sand and Stars (Terre des Hommes), The Line (Chapter 1), Antoine de Saint Exupéry, 1939. Translated from French by PITTI; differs from the official English version.

Modern aviation is hardly synonymous with freedom. The standards and rules governing the construction and use of the airplane, the tool, are numerous. And thousands of people are employed to orchestrate the machine, aviation. The large-scale deployment of AI will likely also require establishing a system because the tool, while not endangering its pilot's life, is extremely difficult to maneuver. This is all the more problematic because, for an information technology, the ability to reproduce information flawlessly and almost indefinitely — essential for establishing reliable long chains of communication — is a key characteristic. The promise of the system is to rein in chaos. So, without necessarily waiting for the technology to mature, many people are fighting today to impose their vision of a system. The question then arises: what constitutes a good or a bad system? Is it even possible to answer this question?

The Map and the Territory

Aviation profoundly changed our approach to time and space and, more generally, our perception of the world and humanity. The speed of this new mode of transportation is only part of the story. The change largely stems from the additional dimension in which it allowed us to evolve. The third dimension offered us a new perspective, revealing sometimes harsh and sometimes marvelous realities, hitherto masked by land formations or merely the vegetation bordering the paths of our flat world.

The airplane has unveiled for us the true face of the earth. For centuries, highways had been deceiving us. We were like that queen who determined to move among her subjects so that she might learn for herself whether or not they rejoiced in her reign. Her courtiers took advantage of her innocence to garland the road she traveled and set dancers in her path. Led forward on their halter, she saw nothing of her kingdom and could not know that over the countryside the famished were cursing her.

Even so have we been making our way along the winding roads. Roads avoid the barren lands, the rocks, the sands. They shape themselves to man’s needs and run from stream to stream. They lead the farmer from his bams to his wheatfields, receive at the thresholds of stables the sleepy cattle and pour them forth at dawn into meadows of alfalfa. They join village to village, for between villages marriages are made.

And even when a road hazards its way over the desert, you will see it make a thousand detours to take its pleasure at the oases. Thus, led astray by the divagations of roads, as by other indulgent fictions, having in the course of our travels skirted so many well-watered lands, so many orchards, so many meadows, we have from the beginning of time embellished the picture of our prison. We have elected to believe that our planet was merciful and fruitful.

But a cruel light has blazed, and our sight has been sharpened. The plane has taught us to travel as the crow flies. Scarcely have we taken off when we abandon these winding highways that slope down to watering troughs and stables or run away to towns dreaming in the shade of their trees. Freed henceforth from this happy servitude, delivered from the need of fountains, we set our course for distant destinations. And then, only, from the height of our rectilinear trajectories, do we discover the essential foundation, the fundament of rock and sand and salt in which here and there and from time to time life like a little moss in the crevices of ruins has risked its precarious existence.

We to whom humble journeyings were once permitted have now been transformed into physicists, biologists, students of the civilizations that beautify the depths of valleys and now and again, by some miracle, bloom like gardens where the climate allows. We are able to judge man in cosmic terms, scrutinize him through our portholes as through instruments of the laboratory. I remember a few of these scenes.

Wind, Sand and Stars (Terre des Hommes), The Airplane and the Planet (Chapter 4), Antoine de Saint Exupéry, 1939

The opposition between the map and the territory is primarily a question of dimensions. Even more than the automobile, the train is an example of innovation that reinforced the map's contrast and attenuated the territory's colors. The implementation of the railway network in France in the mid-19th century directly shaped the country's modern organization — centralized around its capital — and helped set in stone the indulgent fictions that embellish the picture of our prison.

This reflection on social justice and the dimensions of space finds a parallel in the plot of a beautiful novel written by Edwin Abbott about forty years before Saint Exupéry's adventures: Flatland: A Romance of Many Dimensions. In this satire of the Victorian society, the narrator, a square who by definition perceives only two dimensions, recounts his experience of a third dimension (and even the hypothesis of a fourth dimension, 30 years before Einstein!). But he also warns about the societal risks associated with limited dimensionality, even envisioning zero-dimensional space: the point.

"Look yonder", said my Guide, "in Flatland thou hast lived; of Lineland thou hast received a vision; thou hast soared with me to the heights of Spaceland; now, in order to complete the range of thy experience, I conduct thee downward to the lowest depth of existence, even to the realm of Pointland, the Abyss of No dimensions.

Behold yon miserable creature. That Point is a Being like ourselves, but confined to the non-dimensional Gulf. He is himself his own World, his own Universe; of any other than himself he can form no conception; he knows not Length, nor Breadth, nor Height, for he has had no experience of them; he has no cognizance even of the number Two; nor has he a thought of Plurality; for he is himself his One and All, being really Nothing. Yet mark his perfect self-contentment, and hence learn this lesson, that to be self-contented is to be vile and ignorant, and that to aspire is better than to be blindly and impotently happy."

Flatland: A Romance of Many Dimensions, Edwin Abbott, 1884

For many reasons, Abbott's little-known novel can feel very contemporary. Firstly, because the Monarch ruling Pointland might be reminiscent of the head of state of a certain global superpower. Secondly, because modern information systems – into which AI technologies are increasingly integrated – seem to contribute to the collapse of dimensionality in our individual universes.

Putting aside the third dimension, which remains a luxury few people can access due to a lack of means or a lack of opportunity, we are progressively abandoning the second dimension as we look in a single direction: ahead of us, eyes glued to a screen. This screen might even be just a trompe-l'œil window onto the world. When the information we consume only reflects a distorted image of ourselves that makes us "blindly and impotently happy", we lose the idea of plurality, we become for ourselves "the One and All."

The information we consume and the way we consume it are not necessarily benign, especially when consumed without moderation. And we consume more because we produce more. There will be much to say about the impact that generative AI can have, as it enables information production on demand and at scale. But it cannot be considered in isolation from the broader information value chain.

Let's go back to our two-dimensional world: an interesting implication of a perfectly flat world is that one’s horizon is defined by the elements in one's immediate surroundings, across a full 360 degrees. Everything that is not in the front row is hidden. To add content to a perfectly flat world, you must increase its surface area. Yet even if the boundary of the flat world can be pushed further away, this does not enlarge an individual's horizon; only the hidden area grows. To increase the content on one’s horizon, the individual must increase the distance between themselves and the front row, i.e., increase the circle's radius. Another solution could be to use technology to help highlight important elements (e.g., with search algorithms or sorting content in social media feeds) and increase the turnover of elements presented in the foreground (e.g., with software that increases productivity for processing information).

When we only look in one direction, technology becomes absolutely necessary to handle the volume. Since what is absolutely necessary has value, those who sell this technology use all sorts of stratagems so that (1) you only look in one direction and (2) you cannot create distance from the content. This guarantees revenue, but the user is not the customer; the user is the raw material. Or even the victim. The opportunity cost for the user who lets themselves be guided to an address via the optimal route, without ever looking up from their screen, is never quantified.

In the attention economy, the very concept of opportunity for end-users is a mirage: they have no more opportunities than the battery chicken raised to become as big as possible as quickly as possible. As long as the battery chicken meets all the criteria for being plucked, its physical health is a secondary concern. Similarly, the cognitive health of users that are bombarded with information is rarely a source of concern. Perhaps it should be.

Lessons from Covid

On top of its multi-million death toll, the Covid-19 pandemic will likely remain in history books for pushing policy-makers to revisit their approach to certain risks in healthcare, as well as in several other sectors. Global supply-chain disruptions, for example, highlighted an asymmetry in global interdependencies that had largely been ignored (here the "flat world" is Thomas Friedman's, not Abbott's). This realization inevitably led to geopolitical adjustments: exacerbated trade tensions between countries or regions, and increased polarization of domestic political landscapes.

The pandemic will also remain memorable for the technological advances it encouraged, as most crises do. The most notable innovation, mRNA vaccines, went from laboratories to clinical trials and then mass production in less than a year – an unthinkable timeline before 2020. Although the majority of populations in developed countries welcomed this innovation, a minority resisted quite vehemently. This offered a striking example of the legitimate concerns a population might have about the safety of new technologies, especially when they involve injecting or ingesting a new chemical compound, material, living organism, or nano-device – even one approved by a competent regulator. The reasoning could be extended to anything that has been in contact with such a new compound, material, organism, or device.

However, the intensity of a population's concerns seems largely determined by the sophistication of the underlying technology rather than the number of individuals exposed to it: we rarely hear about threats related to materials used in common food containers and packaging, despite a long history of regulatory failures in this domain (lead in cans and pipes, BPA in plastic containers, micro-plastic pollution in all our organs...). Regarding food, populations typically assume that anything allowed on the market is safe until proven otherwise. They are clearly more ambivalent towards drugs, but high vaccination rates and medication use in most developed countries (especially when covered by the welfare state) show that most of the population ultimately accept deferring to authorities and regulatory frameworks to determine if these products are safe.

Food and drugs, as tangible products, constitute a small part of the spectrum of technologies subject to health and safety regulation. In the invisible or intangible part of the spectrum, technologies relying on [radio] waves provide good examples of what warrants regulatory supervision. Although no one denies the practical benefits of 5G, the deployment of this technology unsurprisingly provided a platform for a fringe of the population who opposed it citing concerns for citizens' physical integrity, even after regulatory approval. If we consider that the scope of health and safety regulation should encompass anything that could affect, directly or indirectly, the functioning of the human body under "normal" conditions, the spectrum of covered technologies should probably be extended to even more abstract effects on our daily functioning. For example, how we form our opinions, desires, or feelings... how we build ideas, how we focus, or how we learn. The recent "promotion" of mental health as an integral part of health in the public eye – another positive consequence of the Covid pandemic – also justifies considering this regulatory blind spot.

In Part Two, we'll consider cognitive health and question the consequences of the emergence of AI tools. As the information value chain becomes more complex, the attack surface increases for anyone wishing to influence us for ideological, political, or commercial reasons. To understand this, we must revisit the evolutions of the last 25 years and analyze the expected changes. We must ask what information is consumed, but also how it is consumed and why. And, upstream in the value chain, how it is produced and why.


[1] The measurement (descriptive) becomes a measure (prescriptive) in one of the most fundamental examples of Goodhart's law. In French, there is only one word: "mesure".
