Microsoft CEO Satya Nadella just said the quiet part out loud
Satya Nadella called model sovereignty "the least talked about topic in AI" at WEF26. His point: if you can't embed your firm's knowledge in weights you control, you're leaking enterprise value. Europe loves talking about data residency. But the value moved to the weights. We wrote about this in 2024. Now it's a Davos keynote.

The real sovereignty issue isn't where your data sleeps. It's who owns what your data becomes.
Satya Nadella stood at the World Economic Forum this morning and named what he called "the least talked about topic in AI." He predicted it will be the most talked about by year end.
Model sovereignty.
His exact words: "If you're not able to embed the tacit knowledge of a firm in a set of weights, in a model you control, by definition you have no sovereignty. That means you are leaking enterprise value to some model company somewhere."
Then he said something that should make every firm in Europe pay attention: "It's sort of fascinating that nobody is talking about that as the real sovereignty issue."
Conversation with Satya Nadella, CEO of Microsoft. World Economic Forum 2026.
He's absolutely right. And the silence is deafening.
Europe loves the wrong sovereignty conversation
Walk into any enterprise AI discussion in Zurich, Frankfurt, or Amsterdam. Within five minutes, someone will raise "data sovereignty." Where does the data live? Which cloud? Which jurisdiction? Is it GDPR compliant? What about Schrems II?
These are real concerns. They're also the wrong concerns.
The data residency conversation made sense in 2015. Data was the asset. Where it lived mattered because that's where the value was.
That's no longer true.
The value has moved. It now lives in the weights. In the fine-tuned models that encode your processes, your edge cases, your institutional knowledge. The stuff that took decades to build and makes your firm defensible.
Data is the raw material. Weights are the product. And most European enterprises are giving away the product while obsessing over where to store the raw material.
The API trap
What happens when you build on rented intelligence:
Every prompt you send teaches someone else's model. Every refinement, every correction, every edge case you surface becomes training signal for a foundation model you don't control.
If you're renting capability from an API, you're training someone else's model. Your competitive advantage is becoming training data for someone else's foundation.
The irony is brutal. European firms spend millions on data governance, legal reviews, and compliance frameworks to protect their information. Then they pipe that same information through APIs that explicitly use it to improve models owned by American hyperscalers.
The data sovereignty box is checked. The actual sovereignty is gone.
What sovereignty actually means now
Nadella defined it precisely: embedding the tacit knowledge of your firm in weights you control.
Tacit knowledge is the stuff that isn't written down. It's how your best compliance officer knows which counterparty relationships feel wrong. It's how your senior engineer knows which architectural decisions will cause pain in three years. It's the institutional memory that walks out the door when someone retires.
That knowledge can now be captured. Encoded. Made operational. But only if you own the resulting model.
Sovereignty in 2026 means:
You control the training. Your data, your process, your evaluation criteria. Not a generic fine-tune on someone else's infrastructure.
You own the weights. The actual model parameters live in your environment. No API dependency. No "your contract terms may change" risk.
You capture the improvements. When the model gets better from production use, you keep those gains. They don't flow back to a foundation model that serves your competitors.
You can audit the system. Full transparency into what the model learned and why. Not a black box that "worked better after the update."
This is why we do full IP transfer
At paterhn, we've been making this argument since we started. We wrote about the hidden costs of pre-trained models back in 2024, when this was still an unfashionable position. Now it's a Davos keynote.
When we build a system, the client owns everything. Code. Weights. Configurations. Prompt libraries. Evaluation sets. Runbooks. The complete system, transferred fully.
No recurring license fees. No "you can use the model but we own the weights" arrangement. No lock-in that makes switching expensive. No situation where your institutional knowledge improves a model that then serves your competitors.
We use frontier-model APIs as development tools. The system we deliver runs on weights you own. Your institutional knowledge ends up in your model, not theirs. That's the difference.
Full sovereignty. By design.
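To make "weights you own" concrete, here is a minimal sketch of the pattern, assuming an open-weights base model and the Hugging Face transformers, peft, and datasets libraries. The model name, dataset path, and hyperparameters are placeholders for illustration, not a description of any specific delivery.

```python
# Minimal sketch: capturing institutional knowledge in weights you own.
# Assumes an open-weights base model and the Hugging Face transformers +
# peft + datasets stack; model name, dataset path, and hyperparameters
# are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # illustrative open-weights base

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA keeps the fine-tune small and cheap; the adapter weights (and the
# merged model, if you later merge) stay entirely inside your environment.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"))

# Your proprietary corpus never leaves your infrastructure.
dataset = load_dataset("json", data_files="internal_knowledge.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
    remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="owned-model", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()

# The result is an asset on your own storage, not an API you rent.
trainer.save_model("owned-model/final")
```

The point of the sketch is the ownership boundary: the corpus, the adapter, and the final checkpoint all live on infrastructure you control, and the improvements from production use stay with you.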
The conversation is shifting
Nadella is right that 2026 will be the year this conversation goes mainstream. The early signs are already visible.
Regulated industries are asking harder questions. Banks want to know exactly what happens to the data they send to AI providers. Pharmaceutical companies are realizing that their drug discovery insights shouldn't be training general-purpose models. Defense contractors are being told, explicitly, that certain capabilities cannot depend on foreign-controlled AI.
The "move fast and use APIs" era is ending. The "build sovereign capability" era is beginning.
The question for European leadership
European policymakers spent years building frameworks for data protection. They largely succeeded. GDPR is the global standard. European citizens have rights that Americans don't.
But the next battle is already being lost.
While Europe debates AI Act compliance categories, American and Chinese firms are capturing the world's institutional knowledge in weights they control. Every month of delay is another month of enterprise value flowing into models that European firms will pay to access.
The sovereignty question isn't theoretical. It's economic. And the transfer is happening right now, one API call at a time.
What to do Monday morning
If you're a CTO or Chief AI Officer reading this, the path forward is straightforward:
Audit your AI dependencies. Map every system that sends proprietary data to external AI providers. Understand what you're teaching and who's learning. A starting-point sketch follows this list.
Define your crown jewels. Which processes encode genuine competitive advantage? Those should never train someone else's model.
Build owned capability for what matters. Not everything needs to be sovereign. But the systems that encode your institutional knowledge, the stuff that makes you defensible, those need to run on weights you control.
Demand full IP transfer from partners. If a vendor won't give you the weights, ask why. The answer usually reveals the real business model.
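For the audit step above, a simple repository scan is often enough to surface the obvious integration points. A minimal sketch, assuming a Python environment; the provider patterns and file extensions are illustrative and would need extending for your own estate.

```python
# Starting-point sketch for the dependency audit: scan a codebase for strings
# that suggest proprietary data may be flowing to external AI providers.
# The patterns and extensions below are illustrative, not exhaustive.
import pathlib
import re

EXTERNAL_AI_PATTERNS = [
    r"api\.openai\.com",
    r"import openai",
    r"from anthropic",
    r"generativelanguage\.googleapis\.com",
    r"bedrock-runtime",
]
SCAN_SUFFIXES = {".py", ".ts", ".js", ".java", ".go", ".yaml", ".yml", ".env"}

def audit(repo_root: str) -> dict[str, list[str]]:
    """Return a map of file path -> matched provider patterns."""
    findings: dict[str, list[str]] = {}
    for path in pathlib.Path(repo_root).rglob("*"):
        if not (path.is_file() and path.suffix in SCAN_SUFFIXES):
            continue
        text = path.read_text(errors="ignore")
        hits = [p for p in EXTERNAL_AI_PATTERNS if re.search(p, text)]
        if hits:
            findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    for file, hits in audit(".").items():
        print(f"{file}: {', '.join(hits)}")
```

A static scan won't catch everything. Pair it with network egress logs and vendor contract reviews, but it gives the map a starting point.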
The window for building sovereign AI capability is open. It won't stay open forever. The firms that capture their institutional knowledge in models they own will compound that advantage. The firms that keep renting will keep training their competitors.
Nadella said the quiet part out loud. The question is whether European enterprises were listening.
Sources
Satya Nadella, remarks at the World Economic Forum Annual Meeting 2026, Davos, January 20, 2026.
Key takeaways
The value moved. The conversation didn't. Europe won the data residency debate. But the value now lives in the weights. Control the weights or lose the moat.
Sovereignty means ownership. Own the training. Own the weights. Own the improvements. Full IP transfer isn't a nice-to-have. It's the definition of sovereignty in 2026.
Every API call is a transfer. If you're renting capability from a foundation model, you're training someone else's model with your institutional knowledge. Your edge becomes their training data.
