Speaking of the Swedish Public Employment Service and AI: some things that deserve a little more air

In recent days, the Swedish Public Employment Service's use of an AI tool has received a lot of attention, and I have taken part in the subsequent reporting as an expert commentator. An interview rarely has room for the whole line of reasoning, however, so let me take this opportunity to expand on some of the things that were only hinted at.
For background, see Aftonbladet's original article and the interview.
This is not a text about the Swedish Public Employment Service. It is a text about the discussion, and about security, technology and some recurring misunderstandings around AI.
When we say “AI” but mean completely different things
One of the most unfortunate confusions in the debate concerns the term “language model”.
In everyday speech, “LLM” is often used as a synonym for services like ChatGPT. However, in that context, we are actually talking about a ready-made, cloud-based AI product (a SaaS service) where data is sent to an external provider that runs the entire solution.
In the case of the Swedish Public Employment Service, as far as can be judged from the reporting, it seems to have been about something completely different: a language model that was used as a component in a custom-built internal chat, operated on-prem.
From a security perspective, these are two fundamentally different things.
One way to think about it is to distinguish between:
- The language model – the engine.
- The AI product – the car that is built around the engine.
- The operating mode – do you drive the car yourself, or do you take a taxi?
Driving yourself means responsibility, but also control. Taking a taxi means convenience, but then you also give up information about where you are going and when. The same logic applies to AI. The security question is largely determined by how the solution is used and operated – not just by where the model was once trained.
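To make the distinction concrete, here is a minimal sketch (in Python) of what the two setups can look like from inside an application. The endpoints, model names and response format below are placeholder assumptions of mine, not details from the reporting; the point is simply where the request – and therefore the data – ends up.

```python
import requests

PROMPT = "Summarize this internal case note: ..."

# 1) A ready-made SaaS AI product: the prompt leaves the organization
#    and is processed on the provider's infrastructure, under the
#    provider's terms. (Placeholder URL and OpenAI-style payload.)
def ask_cloud_service(prompt: str) -> str:
    resp = requests.post(
        "https://api.example-ai-vendor.com/v1/chat/completions",
        headers={"Authorization": "Bearer <api-key>"},
        json={"model": "vendor-model",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    return resp.json()["choices"][0]["message"]["content"]

# 2) The same kind of model served on-prem: the request goes to a
#    machine the organization itself operates, so the prompt never
#    crosses the organization's network boundary.
def ask_onprem_model(prompt: str) -> str:
    resp = requests.post(
        "http://llm.internal.example:8080/v1/chat/completions",
        json={"model": "local-model",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    return resp.json()["choices"][0]["message"]["content"]
```

The application code is essentially the same in both cases; the security question lives in the URL, in who operates the machine behind it, and in what happens to the data once it arrives.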
“No data left the organization” – yet there is a risk
Much of the debate has centered around data leaks. Was information sent to China or not? In this case, the answer seems to be no.
But the security reasoning often stops there, far too early.
Not all security threats are about stealing or destroying data. Some are about influence. About shaping behaviors, language, and perspectives – slowly and subtly.
This is not a new phenomenon. Influence operations are well documented and well understood. What is new is that language models can become part of the everyday cognitive infrastructure of organizations: they summarize, suggest, formulate, and prioritize.
Language models are not neutral
A common counterargument is: “The model only answers what we ask.”
That's not wrong. But it's a bit naive.
Most questions worth asking in an organization don’t have yes-or-no answers. And even with seemingly simple tasks – like summarizing a text – there is a lot of room for interpretation. What is highlighted? What is downplayed? What is not mentioned at all?
Language models are trained on human-produced material, under human values and priorities. They therefore carry ideological and cultural biases. This applies to models from state-owned Chinese companies. It applies just as much to models from Western tech companies driven by libertarian ideals.
This is not speculation. It is well-documented in the research. And it is something that technology decision-makers need to address, regardless of sector.
What I hope decision-makers take away
Two things, actually.
Firstly:
There is reason to feel a certain calm. Technical security is difficult – but it does not become more difficult in principle because AI is involved. Much of the work is about the same classic issues as always: architecture, operations, access and accountability.
Secondly:
When you put a language model into your internal tools, you also bring an external voice on board. That voice has values. It influences how things are expressed, which alternatives feel reasonable, and which perspectives are never suggested.
That doesn't mean you should give up. But it does mean you need to be aware. Think through what functionality is appropriate, and what follow-up, monitoring or evaluation is required to manage this type of risk.
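As a deliberately simple illustration of what such follow-up could look like, here is a sketch that keeps a reviewable trail of what an internal chat tool was asked and what it answered, so that wording and emphasis can be sampled and evaluated afterwards. The file name and fields are assumptions of mine, not a description of any particular system.

```python
import datetime
import json

def log_interaction(prompt: str, response: str, model_id: str,
                    log_path: str = "llm_review_log.jsonl") -> None:
    """Append one chat interaction to a log for later human review."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model_id,
        "prompt": prompt,
        "response": response,
    }
    # One JSON object per line; access to this log must of course follow
    # the same rules as the data it contains.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```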
No panic. No naivety.
Just a slightly more insightful conversation about what it actually means to use AI.