22 August 2025 · 7 min read

In June, we hosted a Microsoft Azure meetup focused on AI agents, gathering a full house of technology enthusiasts. Our Senior Software Architect, Arto Kaitosaari, has explored the topic extensively and gave a talk at the event. In this article, he shares what AI agents are, how they add value to businesses, and how they will affect our work in the future.
There’s been a lot of interest in AI agents lately, but what are they, and how do they differ from the large language models most people are already using?
Let’s look at the basics. A language model like ChatGPT doesn’t actually have a conversation with you. It predicts the most likely answer based on your input. And even though it can feel like it, the model can’t remember previous conversations. An LLM knows only the data it was trained on, and it simply responds based on probabilities. And while these models contain a lot of information, their knowledge is frozen at the point in time when training ended. To give them access to current knowledge, we typically use something called retrieval-augmented generation, or RAG, a method for fetching relevant external data in real time.
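To make the RAG idea concrete, here is a minimal sketch in Python: it picks the most relevant snippet from a tiny in-memory knowledge base by simple keyword overlap and prepends it to the question as context for the model. The documents, function names, and retrieval method are all illustrative; real systems typically use vector search over embeddings.

```python
# Toy sketch of retrieval-augmented generation (RAG). Everything here is
# hypothetical: a real system would use a vector database, not keyword overlap.

DOCUMENTS = [
    "Order 1042 shipped on 12 August and is in transit.",
    "Our support line is open weekdays from 9 to 17.",
    "Returns are accepted within 30 days of delivery.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Combine the retrieved context with the user's question for the LLM."""
    context = "\n".join(retrieve(question, docs))
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("When did order 1042 ship?", DOCUMENTS)
```

The point is only the shape of the pattern: fetch fresh, relevant data first, then let the model answer grounded in that context instead of in its frozen training data.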
But AI agents are different, and they go a step further.
LLMs predict, agents act
Large language models are great at answering questions and generating text. However, taking action is not part of their “work”. This is because they can’t access external systems, execute plans, or monitor results. They only generate responses.
An AI agent, by contrast, can plan, act, and evaluate. It can decide what steps it needs to take, execute those steps using connected tools or systems, and assess the results. If something goes wrong or the goal hasn’t yet been achieved, it can revise its plan and try again.
This shift from passive response to active, goal-directed execution is what makes agents fundamentally different from large language models. They don’t just suggest what to do, they do it.
What characterises an AI agent?
Three defining capabilities set agents apart. Together they form a cycle known as thought-action-observation, or the reasoning-and-acting (ReAct) model.
They can think: an agent can tell itself the steps it needs to complete to answer a question. It can also interact with the outside world to fetch or store data, for example.
They can act: when the agent knows what tools are available and what these tools do, it can interact with them to carry out the previously defined steps. Tools can be knowledge databases or scheduling systems, for example.
They can observe and adapt: AI agents are able to correct their own mistakes without human interaction and update their plan based on what worked, what didn’t, and what happened as a result.
This cycle of reasoning, acting, and reflecting can repeat as many times as necessary until the desired outcome is reached or a new approach is needed. It may sound complex, but it’s surprisingly straightforward in practice.
Where can AI agents add value?
Currently, AI tools are most commonly used to support internal processes or customer interactions, and AI agents work well at automating them. Although many simple tasks have already been automated, agents add value by automating more complex workflows with more variability.
Agents can be relevant across business functions, but it’s often easiest to start with internal use cases. This helps manage data security, validate how the agent acts, and learn how it performs in a controlled environment. As mentioned, another natural fit is customer-facing chat, especially if the system can actually carry out tasks and retrieve information for the customer, such as the status of their order, rather than just simulate conversation. That ability is what makes a chatbot an agent and not just an LLM predicting answers to the customer’s questions.
Although the concept of autonomous agents has been around for a long time, recent developments have made them much more capable. Previously, a human needed to define the workflow of an automated agent. Now, because agents have access to an organisation’s tools and can reflect on their own process, they can define their workflows autonomously. Newer LLMs like DeepSeek are also designed to solve problems instead of just generating responses, shifting the focus of AI tools even more towards agents and the capabilities they offer organisations.
Where is human interaction critical?
Technically, agents can operate quite independently. But the key question isn’t actually what they can do, it’s what they should be allowed to do.
It’s possible, for example, to give an agent the ability to access customer records or schedule meetings. But it is increasingly important to place safeguards around this autonomy. Highly sensitive tasks still require a human in the loop, and even simple workflows need well-defined boundaries. Even if an agent could make your online grocery purchases, do we want it to have access to online banking details or personal emails? Agents are not yet mature enough to be trusted with sensitive data, which creates privacy risks, especially when they interact with outside platforms.
Even though agents possess enormous capability, the key is knowing what they are appropriate for.
So… how does all this affect our work?
I don’t think that AI agents will replace entire professions, but they will make things more efficient and change how people work. Many roles will see a shift in focus, as routine parts of the job will be automated.
Let’s take doctors as an example. If AI can produce a high-quality summary of a consultation, the doctor spends less time writing and more time with patients. Software engineers can write code faster. Knowledge workers can hand research or reporting tasks to an agent that analyses the data.
If you work with a computer, agents will likely touch some part of your workflow, equally in B2B and B2C contexts.
Creating an agent is not as complex as one would think
The technology for building agents has evolved rapidly, and it’s nowadays possible to create agents without extensive AI expertise. For example, the Azure AI Foundry service enables tailoring an agent to your organisation’s needs and makes it possible to build working agents with relatively little coding skill.
The real challenge lies in design, oversight, and safety. It’s essential to define what tools an agent can use, what permissions it has, and how it handles sensitive data. For instance, if an agent can send emails or book meetings, you need to consider what happens if someone tries to misuse it. Phishing is a real threat. Security and access control must be part of the planning.
It’s also important to start with clearly defined, low-risk tasks. Interestingly, alongside where the agent stores its data, the prompt itself presents the biggest risk, because it is the guideline the agent follows when it starts its work. An agent that schedules a single meeting or generates a simple report is much easier to test and trust than one that interacts with multiple systems or handles customer data.
Practical advice
Start small: Use low-risk, internal tasks to pilot agent functionality before expanding.
Limit access: Give agents only the permissions they need, nothing more. If an agent has access to multiple systems, the risks also grow. A dedicated purpose for the agent without unnecessary privileges is the safest option.
Design carefully: Make sure the agent’s tools and data sources are secure and reliable. For example, if the agent is used in customer interactions, how does it verify that the customer is truly the customer, and not someone else with access to a valid email address?
Understand regulation: For example, in the EU, there are clear limits on automated decision-making that impacts people.
Plan for oversight: Even highly capable agents need monitoring and governance.
Be clear on accountability: At the end of the day, your organisation is responsible for what the agent does.
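The "limit access" advice above can be made concrete with a small Python sketch: the agent may only invoke tools on an explicit allowlist, so a dedicated-purpose agent simply cannot reach capabilities it was never granted. Every name here is hypothetical and illustrative, not a real framework API.

```python
# Sketch of least-privilege tool access for an agent. All tool names and
# functions are hypothetical; real frameworks expose similar allowlisting.

def send_report(to: str) -> str:
    """A harmless, purpose-specific tool."""
    return f"report sent to {to}"

def delete_records(table: str) -> str:
    """A dangerous capability that this agent should never reach."""
    return f"deleted {table}"

ALL_TOOLS = {"send_report": send_report, "delete_records": delete_records}
ALLOWED = {"send_report"}  # dedicated purpose, nothing more

def call_tool(name: str, *args) -> str:
    """Gatekeeper between the agent's decisions and real side effects."""
    if name not in ALLOWED:
        raise PermissionError(f"tool '{name}' is not allowed for this agent")
    return ALL_TOOLS[name](*args)
```

Because every action passes through one gatekeeper, the allowlist is also a natural place to add logging and monitoring, which supports the oversight and accountability points above.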
And lastly, a thought on sustainability. Agents have the potential to reduce energy consumption: while training the underlying model requires a significant amount of energy, once trained, it can be used with relatively few resources. Still, even though agents can be used for automation, it’s worth noting that if a process can be automated without AI, that is often simpler and more sustainable.