At first glance, the answer to this question might already be a resounding "no." And that's exactly why this matters. Before we give up or move on, we need to examine how we got here — and what we're still capable of doing.
The Alignment Problem Isn’t Waiting for Brussels
Europe's trajectory on AI ethics and regulation depends on being able to influence the systems and structures that will shape future AI behaviour and functionality. But what are those structures? In discussions around advanced AI systems, two terms are increasingly critical: alignment and scaffolding.
Alignment refers to ensuring that AI systems do what humans want them to do, safely and predictably — especially as their capabilities scale. But it's not just about what goals the system pursues; it's about how it pursues them. We aren't merely aligning outputs — we're aligning reasoning processes, prioritizations, and moral assumptions. For example, a system given the goal of "eradicating disease and poverty" must learn to achieve that objective in a way that preserves human dignity, rights, and autonomy — not by concluding that the most efficient path involves totalitarian control or the extinction of humanity. Alignment, then, is not only about obedience to commands, but fidelity to intent, context, and ethical nuance. It's a central problem in AI safety, made famous by researchers like Stuart Russell, Paul Christiano, and Eliezer Yudkowsky. And, importantly, it's not something you can impose after the fact through external controls. Once a system is powerful enough to form and pursue its own goals, aligning it retroactively becomes extraordinarily difficult, if not impossible.
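To make that gap between stated objective and actual intent concrete, here is a deliberately toy sketch. Everything in it (the plans, the scores, the function names) is invented for this post; it only illustrates the structural point that whatever is left out of the objective is, from the optimizer's point of view, free to be violated.

```python
# Toy illustration of objective misspecification. All plans and numbers are invented.
# An optimizer that sees only a proxy score picks a plan humans would never endorse;
# folding the unstated constraint into the objective changes the choice.

plans = [
    # (name,                                        disease_reduction, preserves_autonomy)
    ("fund vaccination and sanitation programmes",  0.85,              True),
    ("impose total surveillance on every citizen",  0.95,              False),
    ("eliminate the hosts (humanity) entirely",     1.00,              False),
]

def proxy_objective(plan):
    """Naive objective: maximize disease reduction and nothing else."""
    _, disease_reduction, _ = plan
    return disease_reduction

def intended_objective(plan):
    """Closer to intent: disease reduction only counts if autonomy is preserved."""
    _, disease_reduction, preserves_autonomy = plan
    return disease_reduction if preserves_autonomy else float("-inf")

print("Proxy-optimal plan:   ", max(plans, key=proxy_objective)[0])
print("Intended-optimal plan:", max(plans, key=intended_objective)[0])
```

The toy arithmetic is beside the point; the structure is the point. Aligning a system means getting the second objective, with all its implicit constraints, rather than the first.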
Scaffolding, meanwhile, refers to the technical and institutional structures that support safe AI development — things like interpretability tools, model governance frameworks, sandboxed deployment environments, and human-in-the-loop oversight mechanisms. These are the tools, processes, and organizational norms that make it possible to observe, test, constrain, and refine how powerful AI systems behave before they are deployed at scale. Think of scaffolding as both guardrails and diagnostic tools — it includes everything from transparency tooling that allows us to understand a model’s reasoning, to oversight protocols that determine when and how human input is required in decision-making. Crucially, these systems must evolve in lockstep with AI capabilities, not lag behind them. A model trained without scaffolding in mind may prove too opaque or too autonomous to effectively monitor or steer later on. As Nick Bostrom notes in his book Superintelligence [6], a sufficiently advanced AI could easily deceive, bypass, or disable scaffolding measures — especially if those measures are introduced only after the system has already been trained. Post-hoc controls may be not only ineffective, but potentially transparent to the AI itself, rendering them moot.
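As one concrete, heavily simplified illustration of a scaffolding component, here is a hypothetical human-in-the-loop gate: proposed actions above a risk threshold are held for human review instead of executing automatically. The names, risk scores, and threshold are all assumptions made up for this sketch; real oversight protocols are far richer than this.

```python
# Hypothetical human-in-the-loop gate: a minimal sketch, not a real deployment control.
# Actions whose estimated risk exceeds a threshold are queued for human review
# instead of being executed automatically.

from dataclasses import dataclass

RISK_THRESHOLD = 0.3  # illustrative value; a real system would calibrate this carefully

@dataclass
class ProposedAction:
    description: str
    estimated_risk: float  # assumed to come from some separate risk model

def requires_human_review(action: ProposedAction) -> bool:
    """Scaffolding rule: anything above the risk threshold needs a human decision."""
    return action.estimated_risk > RISK_THRESHOLD

def handle(action: ProposedAction) -> str:
    if requires_human_review(action):
        return f"QUEUED FOR REVIEW: {action.description}"
    return f"AUTO-APPROVED: {action.description}"

if __name__ == "__main__":
    for a in [ProposedAction("send routine status report", 0.05),
              ProposedAction("modify its own deployment configuration", 0.90)]:
        print(handle(a))
```

Bostrom's warning applies directly to this kind of gate: a sufficiently capable system could learn to keep its reported risk below the threshold. Scaffolding only helps if it is designed, tested, and hardened alongside the model, not bolted on afterwards.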
Europe’s current strategy — focusing on ethics boards, impact assessments, and legislation-first frameworks — treats alignment and scaffolding as a policy problem rather than a technical frontier. But as researchers at Anthropic, DeepMind, and OpenAI have repeatedly shown, alignment must be engineered into the training process. Legislation, however well-meaning, cannot realign a system trained in a way that embeds alien goals or emergent behaviors [1][2].
In short: If we want to steer the ship, we have to be on the ship.
The Compute Gap Is a Chasm
Europe is not just trailing in alignment and scaffolding R&D — it is also completely absent from the race to build the compute infrastructure that enables us to create, study and understand state-of-the-art models.
Large language models and multimodal AI systems scale predictably with compute. The "scaling laws" described by Kaplan et al. at OpenAI [3] show that model loss falls as a smooth power law as model size, dataset size, and training compute increase. These insights informed the scaling strategies behind GPT-3, GPT-4, and Claude 3. In parallel, Meta and Google have built internal clusters with hundreds of thousands of accelerators.
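For intuition, the compute relationship in [3] has the functional form of a power law: loss falls smoothly as a power of training compute. The sketch below plugs placeholder constants into that form; the values of `C_c` and `alpha` are illustrative assumptions chosen for readability, not the fitted constants from the paper.

```python
# Illustrative power-law scaling curve in the spirit of Kaplan et al. [3]:
#   L(C) = (C_c / C) ** alpha,  where C is training compute in PF-days.
# The constants below are placeholders, not the paper's fitted values.

C_c = 1e8      # hypothetical reference compute scale (PF-days)
alpha = 0.05   # hypothetical exponent; a small exponent means slow, steady gains

def loss(compute_pf_days: float) -> float:
    return (C_c / compute_pf_days) ** alpha

for c in [1e2, 1e4, 1e6]:  # each step is two orders of magnitude (OOM) more compute
    print(f"{c:>10.0e} PF-days -> loss ~ {loss(c):.3f}")
```

The practical consequence is that constant improvements in loss demand multiplicative, not additive, increases in compute, which is exactly why the infrastructure race below matters.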
Microsoft and OpenAI are reportedly building training clusters at the scale of tens of thousands of GPUs, with infrastructure designed to support orders-of-magnitude (OOM) increases in training compute [5]. These facilities are engineered to unlock entirely new levels of model capability. The frontier AI race is being driven by these exponential leaps in compute, and the institutions that control them are the ones defining what becomes possible.
In contrast, Europe has not even begun construction on comparable compute clusters. There is no EU-scale investment in public training infrastructure aimed at frontier models, comparable to the build-outs underway in the US and UK or to China's state-backed AI programs. Even initiatives like LEAM (Large European AI Models) focus on open-source alternatives rather than frontier-scale experimentation.
Yet economically, Europe is a powerhouse: the EU's collective GDP is comparable to that of the United States and larger than China's [4]. If there were the political will to coordinate investment across member states, Europe could field its own AGI research initiative.
But that would require urgency — and urgency is what we’re lacking.
To illustrate the disparity more clearly, consider the following table of upcoming frontier-scale training facilities. For simplicity, compute capacity is expressed as estimated total training compute in petaflop/s-days (PF-days), i.e. the amount of compute accumulated over the course of training a model:
| Region | Facility / Org | Location | Est. Compute (PF-days) | Notes |
|---|---|---|---|---|
| USA | Microsoft + OpenAI | Iowa & Wisconsin | 1,000,000+ | $100B "Stargate" supercluster |
| China | Baidu + State Grid | Beijing & Wuxi | 800,000–1,000,000 | National AI plan support |
| South Korea | Naver & Samsung | Gak Cluster | 300,000+ | Regional hub, supports Korean LLMs |
| India | Ministry of IT | Hyderabad (planned) | TBD | Under India's AI mission |
| UAE | G42 + Cerebras | Abu Dhabi | Wafer-scale equivalents | Focus on open LLMs |
| Europe | — | — | — | No frontier-scale clusters announced |
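As a rough sanity check on what figures of this magnitude imply, here is a back-of-envelope conversion from accelerator count to PF-days. All inputs (per-chip throughput, utilization, cluster size, training duration) are illustrative assumptions, not vendor or lab data.

```python
# Back-of-envelope: accumulated training compute of a cluster, in PF-days.
# All inputs are illustrative assumptions, not published specifications.

def pf_days(num_chips: int, pflops_per_chip: float, utilization: float, days: float) -> float:
    """Sustained compute (petaflop/s) multiplied by time (days) gives PF-days."""
    return num_chips * pflops_per_chip * utilization * days

# Hypothetical cluster: 50,000 accelerators at ~0.5 PFLOP/s of usable throughput each,
# 40% sustained utilization, training for 100 days.
print(f"{pf_days(50_000, 0.5, 0.4, 100):,.0f} PF-days")  # -> 1,000,000 PF-days
```

Whatever the exact per-chip numbers turn out to be, the order-of-magnitude picture in the table is the point: the European row is empty.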
Europe risks becoming an AI consumer, not a contributor — a regulatory buffer zone between digital empires.
The Danger of Being Left Behind
The path we’re currently on leads to a future where Europe is not just behind in AI capabilities — it is structurally excluded from shaping AGI.
Legislation alone will not rein in systems trained on hardware and datasets entirely outside our jurisdiction. Open models will be trained in the US and China; proprietary models will be commercialized and deployed globally. By the time European policymakers finish calibrating the AI Act, frontier labs elsewhere may already be deploying agents with long-term memory, recursive planning, and tool use.
Europe’s current posture — ethical observer, regulatory gatekeeper — is not wrong. But it is insufficient.
What Must Be Done
Europe must:
- Invest in compute infrastructure at scale — preferably with public-private collaboration and cross-border cooperation.
- Establish a flagship AGI research institute with alignment and safety embedded from day one.
- Partner internationally on open safety standards, model evals, and cooperative governance.
- Fund alignment-specific research labs, fellowships, and interpretability tooling projects across universities and research networks.
- Shift from reactive regulation to active capability-building.
None of this is easy. But it is possible — and Europe, with its scale, talent, and tradition of values-led leadership, is uniquely positioned to do it if it acts now. We have been at the forefront of every major technological revolution that led to this moment. To now abdicate our influence and responsibility in shaping the future of AGI would be, at best, negligent. Europe carries with it a complex legacy — from colonial exploitation to industrial excess to modern-day geopolitical ambivalence — but also the institutional maturity and normative frameworks that could help guide AI development in a more sustainable, democratic, and globally conscious direction.
Glossary (for the curious)
AGI (Artificial General Intelligence): A form of AI that can perform any intellectual task a human can — not limited to narrow domains.
Alignment: The process of ensuring that AI systems pursue goals in ways that are consistent with human values and intentions.
Scaffolding: The combination of technical and institutional mechanisms (like transparency tools, deployment controls, oversight) that enable safe development and monitoring of AI systems.
PF-days (Petaflop/s-days): A measure of total compute used over time; e.g., running at 1 petaflop per second for one full day equals 1 PF-day (about 8.64 × 10^19 floating-point operations). Used to estimate training scale.
OOM (Order of Magnitude): A tenfold increase or decrease in scale.
LLM (Large Language Model): A type of AI trained on massive text corpora to predict and generate human-like language.
Human-in-the-loop: A safety mechanism where human judgment is included in AI decision-making processes.
Interpretability: Techniques for understanding how and why AI systems produce their outputs.
Next: The Alignment Paradox
In the next post, we’ll explore the strange contradiction at the heart of alignment:
How can we impose ethical rules on an AI system when we ourselves are inside the system?
Is alignment even possible in a recursive, self-modifying world? Or is it a way of comforting ourselves as we sprint toward superintelligence?
Stay tuned.
Sources:
[1] Anthropic. (2022). "Constitutional AI: Harmlessness from AI Feedback." https://arxiv.org/abs/2212.08073
[2] AI Alignment Forum. (2022). "Reward Is Not the Optimization Target." https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target
[3] Kaplan, J. et al. (2020). "Scaling Laws for Neural Language Models." https://arxiv.org/abs/2001.08361
[4] World Bank. (2023). GDP rankings (current US$). https://data.worldbank.org/indicator/NY.GDP.MKTP.CD
[5] Reuters. (2024). "Microsoft, OpenAI Plan $100 Billion AI Supercomputer." https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/
[6] Bostrom, Nick. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.