How dangerous is it to let AI-based agents into your own systems?

2026-02-12

How dangerous is it to let AI-based agents into your own systems?

2026-02-12

The development of AI creates fantastic opportunities, but it also creates a lot of concern. A question that often comes up is whether AI agents should be allowed to access internal resources such as data and services. Not infrequently, that question is answered with a resounding ”YES” by some, and with an equally confident “ABSOLUTELY NOT” by others.

The correct answer is both simpler and more complicated than either option. It all depends, as with so much else, on how we do when we give AI agents permission to use our internal resources. As long as we do it in a secure way, AI-based agents pose no greater threat than other clients. But if we are careless and take shortcuts, we risk creating security problems. On the other hand, this is not unique to AI agents.

But let's start by clarifying the concepts.

What distinguishes an AI agent?

Although AI agents are a new type of agent, it is still about them accessing our internal services (such as an API). Adding a client, for example to integrate with a new third-party, always involves risk, whether it involves AI or not. It is basically the same issues as always: architecture, operations, access and liability.

What makes AI agents special is that they are less predictable, they are not statically programmed in the same way as a classic API client for system integrations. We simply do not know in advance exactly how it will act to solve its task. In this way, they are more similar to a creative human user. Since the AI agent is partially or fully automated, it can act without a human being being able to control exactly what it is doing. Potentially, it can explore all the functions and all the data it can access. In this way, AI agents resemble an attacker who has taken over an account, which can seem scary.

Another important aspect is that agents often run on behalf of a human user and therefore should not be classified as system integrations, which are given general system authorization. This also applies if they operate without the user approving each step, since the user has delegated their authorization to the agent to act on their behalf. Here we usually want to continue to have good traceability that it is the agent that is acting on behalf of the user.

What protections do we need?

How do we manage security for an agent with delegated authority who more or less independently solves a task? The answer is the same as for all clients who access our data and services; with well-established security principles and patterns such as Secure By Design, Defense in Depth, Zero Trust and Least Privilege.

A strong solution requires that the API, regardless of whether the client is AI-based or not, ensures fine-grained authorization control. Trying to solve the lack of authorization control at the resource owner on the server side (the API) with restrictions at the client side is unlikely to work. Especially not for such a flexible and unpredictable client as an AI agent.

A common mistake is to start with a long list of things that the client not can do, a “deny-list”. It is a messy and insecure solution that requires us to anticipate every conceivable problem that the agent can cause. Countless penetration tests have shown the danger of that method. Instead, we should create an “allow-list”, that is, a strict set of rights where we apply the Least Privilege principle. An allow-list will always be stronger than a deny-list.

This does not mean that deny lists, such as Guardrails, should be avoided entirely. Guardrails attempts to prohibit specific input for an extremely open and flexible domain (an LLM). One can compare this to how difficult it is for anti-virus scanners to detect new variants of viruses in time. But just as we should have anti-virus scanners active after all, we should use Guardrails. However, it is important to realize what kind of protection they actually offer, and that they never replace proper authorization control on the server side when it comes to access to internal data.

So how do we implement AI solutions safely?

The industry standard for handling this type of scenario, with secure access to internal resources for different types of clients, is OAuth. For AI agents (LLMs), MCP is currently being established as a standard, which is integrated with OAuth. For further reading and technical explanation, it is recommended:

https://curity.io/resources/learn/design-mcp-authorization-apis

https://genai.owasp.org/llm-top-10

And for more reading about Secure By Design as a concept and API security, there is material on Omegapoints. security blog.

Article writer
Tobias Ahnoff
Omegapoint

Insights

Latest articles

All articles

How dangerous is it to let AI-based agents into your own systems?

2026-02-12