AI agents are set to redefine enterprise systems, AWS warns

TUESDAY, MARCH 31, 2026

AI is moving beyond chatbots to autonomous agents that can work across enterprise systems, making speed of adoption critical, says AWS.

Artificial intelligence (AI) is evolving beyond chatbots into autonomous agents capable of connecting with enterprise systems and carrying out real tasks, according to Amazon Web Services (AWS), which says the real question for businesses is no longer whether to adopt AI, but how quickly they can do so.

“AI is not a competitive advantage. If you're not using AI, you're already behind. Every enterprise wants to use AI now. But the question is really how fast can we do that?” said Olivier Klein, Chief Technologist at AWS Asia-Pacific.

He made the remarks during the seminar AI Revolution Shift 2026: Shaking the Global Economy, organised by Bangkokbiznews, where he outlined this year’s AI direction and what Amazon is developing.

Olivier Klein, Chief Technologist at AWS Asia-Pacific

Klein opened with a clear position: AI is no longer something that gives one organisation an edge over another, because nearly every business is now trying to adopt it. In that environment, companies that fail to move are already being left behind. 

The key issue is no longer whether AI should be used, but how quickly and effectively it can be deployed.

He said AI should be seen as a tool, much like other technologies that have helped humans work faster and more efficiently throughout history, rather than something to fear or ignore.

According to Klein, the most effective way to use AI in an organisation is to assign it a task, allow it to plan its own steps, call on other specialist AI systems when needed, and refine the process until it achieves the desired result. 

Amazon is already using AI in this way across activities ranging from warehouse management to advertising design and software coding. He added that Anthropic, the developer of Claude, is now using AI to assist developers with almost 100% of their coding work.
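The agentic pattern Klein describes, assign a task, let the model plan its own steps, call specialist tools, and refine until done, can be sketched roughly as follows. The planner and tools here are stand-ins, not any real AWS or Anthropic API:

```python
# Minimal sketch of an agentic loop: plan a task, call specialist tools,
# and refine until a result is produced. All names here are illustrative.

def plan(task):
    """Stand-in planner: in a real agent, a model would break the task
    into tool calls; here the plan is hard-coded for illustration."""
    return [("lookup_stock", "SKU-123"), ("draft_email", "supplier")]

# Specialist "tools" the agent can call on (stand-ins for real systems).
TOOLS = {
    "lookup_stock": lambda arg: f"stock level for {arg}: 7 units",
    "draft_email": lambda arg: f"draft reorder email to {arg}",
}

def run_agent(task, max_refinements=3):
    """Execute the plan, collecting results; loop again only if the
    'desired result' check (a stand-in here) is not yet satisfied."""
    results = []
    for _ in range(max_refinements):
        for tool_name, arg in plan(task):
            results.append(TOOLS[tool_name](arg))
        if results:  # stand-in completion check
            break
    return results
```

In a production agent, the planner would be a reasoning model and the completion check would be its own judgment of whether the task is done; the loop structure, however, is the same.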

Five AI trends reshaping enterprises in 2026

Klein said five major trends are rapidly spreading across the corporate world and are likely to become standard practice soon.

First, newer AI models are able to reason before taking action. Unlike earlier systems, which often responded immediately without much deliberation, the latest generation can think through decisions first, improving both accuracy and reliability.

Second, AI models are becoming increasingly multimodal. Rather than processing only text, modern systems can now understand images, video, speech and documents at the same time. By combining these capabilities, AI is moving closer to understanding the world in a more human-like way.

Klein cited the example of an insurance company using AI in its claims process. Previously, after a car accident, customers had to wait for an officer to inspect the damage, complete paperwork and approve the claim, often taking several days. 

Under the new system, customers only need to take photos and videos of the damage, explain the incident by phone and upload their policy documents. AI can then process the images, audio and text together, assess whether the claim is reasonable and, if approved, pay compensation within 10 minutes.
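The claims flow described above can be sketched as a pipeline that fuses the three modalities into one decision. The model calls are replaced by stubs, since the insurer's actual system is not public, and the payout logic is purely illustrative:

```python
# Hedged sketch of a multimodal claims pipeline: image, speech and
# document signals combined into a single decision. All stubs are
# hypothetical stand-ins for real models.

def assess_photos(photos):
    """Stand-in for an image-understanding model."""
    return {"damage": "rear bumper", "severity": "minor"}

def transcribe_call(audio):
    """Stand-in for speech-to-text."""
    return "rear-ended at low speed in a car park"

def read_policy(document):
    """Stand-in for document understanding of the policy."""
    return {"covered": True, "limit": 2000}

def process_claim(photos, audio, document):
    """Combine all three modalities and decide the claim."""
    damage = assess_photos(photos)
    account = transcribe_call(audio)
    policy = read_policy(document)
    approved = policy["covered"] and damage["severity"] == "minor"
    payout = 500 if approved else 0  # illustrative amount only
    return {"approved": approved, "payout": payout, "summary": account}
```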

Third, AI is beginning to act rather than simply respond. Klein said this is the most significant leap. The future of enterprise AI lies not in chat interfaces alone, but in AI agents that can connect directly to real corporate systems, whether databases, accounting platforms or inventory systems, and carry out tasks on behalf of humans.

These tasks could include sending emails, updating records or placing orders. However, he warned that security remains critical. AI should not be given unrestricted access to enterprise systems, and companies must establish clear permissions and policies before deployment.
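The kind of permission boundary Klein advises can be sketched as an explicit allow-list checked before any agent action runs. The agent and action names are hypothetical:

```python
# Illustrative permission gate for AI agents: every action is checked
# against an explicit allow-list before execution. Names are hypothetical.

ALLOWED_ACTIONS = {
    "inventory_agent": {"read_stock", "place_order"},
    "support_agent": {"send_email", "update_record"},
}

def perform(agent, action, payload):
    """Refuse any action the agent is not explicitly permitted to take."""
    if action not in ALLOWED_ACTIONS.get(agent, set()):
        raise PermissionError(f"{agent} may not perform {action}")
    # In a real deployment this would call the enterprise system.
    return f"{agent} executed {action}: {payload}"
```

The point of the pattern is that the default is denial: an agent with no entry in the policy can do nothing at all.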

Fourth, open-source software is gaining momentum. Open-source AI models, which are freely available for organisations to use and develop further, are becoming increasingly attractive for businesses seeking flexibility or greater control over their own data.

Fifth, infrastructure must become more energy-efficient. Klein argued that simply building ever larger AI models is not the right path towards advanced AI.

He compared the attempt to reach artificial general intelligence (AGI) by endlessly scaling up models to trying to travel to Mars by constructing taller and taller buildings. No matter how high the building becomes, he said, it will never reach Mars.

AI agents are set to redefine enterprise systems, AWS warns

Training AI once can consume as much electricity as 180 years of streaming

Klein presented figures to illustrate the scale of AI’s energy challenge. Training a single popular AI model just once, he said, can consume as much electricity as around 1.6 million hours of continuous streaming, equivalent to more than 180 years without interruption. That implies enormous cost and rising carbon emissions.

To address this, Amazon has been investing in its own processors, including Trainium for AI training and Inferentia for running models in deployment. Both are designed specifically for AI workloads and are intended to use less energy than general-purpose chips.

“If you want to get to Mars, you don’t get there by building bigger and bigger buildings,” Klein explained. “The same analogy applies to AGI: it doesn’t make sense to build bigger and bigger models. That will not get us where we want to be with AGI.”

He added that Amazon is currently adding both types of chips to its data centres every day in volumes exceeding those of chips from Intel, AMD and NVIDIA combined.

Amazon says it is not tied to any single AI provider

Klein said Amazon wants to be home to AI models from every provider, whether open or closed. Its key partners currently include Anthropic, whose models run on Amazon’s chips, while OpenAI, the developer of ChatGPT, has also recently announced broader co-operation with Amazon.

He stressed this point because the leading model today could easily be overtaken within months. For that reason, businesses need the flexibility to switch models quickly without having to rebuild their systems from scratch.

In the past two months alone, many new models have emerged across the GPT, Gemini and Llama families, among others. Overall, he said, the top models are beginning to converge in terms of capability, meaning that the most important factor is not which model a company chooses today, but whether its system is flexible enough to change tomorrow.
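The flexibility Klein argues for amounts to an abstraction layer: application code depends on a model interface, not a vendor. A minimal sketch, with illustrative provider names and a hypothetical `complete` signature:

```python
# Sketch of a provider-agnostic model layer, so the underlying model can
# be swapped without rebuilding the application. Names are illustrative.
from typing import Protocol

class ChatModel(Protocol):
    """The only contract application code relies on."""
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"

def answer(model: ChatModel, prompt: str) -> str:
    # Depends only on the interface; swapping vendors is a one-line change.
    return model.complete(prompt)
```

Switching from one provider to another then means passing a different object, with no change to the rest of the system.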

AI can use a computer screen like a human user

Klein also demonstrated how AI can use a computer in the same way a person does, by seeing the screen and controlling the mouse, without having to wait for an IT team to create a new system connection first.

In a live demonstration, he showed AI opening a browser, searching for a product, selecting it and completing the ordering process on Amazon’s website without any human intervention.

Anthropic calls this capability “Computer Use”, while Amazon offers a similar service under the name Nova Act. The benefit, Klein said, is that AI can immediately be layered over legacy systems that already have a usable screen interface, without requiring a full rebuild.

Even so, he acknowledged that this approach uses more computing power than direct system integration. It is therefore better suited as a short-term solution, while the longer-term goal should be to connect AI properly with software through standard protocols.


MCP aims to give AI a universal way to connect with software

Model Context Protocol, or MCP, is a connectivity standard that allows AI to interact with different software tools. Klein compared it to USB-C, a universal cable standard that works across multiple devices without requiring a separate cable for each one.

He demonstrated this by connecting the Claude model to Blender, a 3D design application that typically requires considerable specialist expertise. Although he said he had no knowledge of the software himself, he was able to give AI an image and ask it to turn it into a 3D model. The AI then controlled Blender on its own, adding trees, creating a house and resizing components, without the user needing to know how the software worked.

Klein highlighted that this was the true power of MCP: not just enabling AI to read data, but allowing it to control and use existing software like an expert.
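The idea behind MCP can be illustrated, without reproducing the actual protocol or wire format, as a tool server that describes its capabilities so any model speaking the same convention can discover and invoke them by name. The server, tool names and signatures below are all hypothetical:

```python
# Illustration of the MCP idea (not the real protocol): a server registers
# tools with descriptions, a client discovers them, then calls by name.

class ToolServer:
    def __init__(self):
        self._tools = {}

    def tool(self, name, description):
        """Decorator that registers a function as a named tool."""
        def register(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def list_tools(self):
        """Discovery step: what can this server do?"""
        return {n: t["description"] for n, t in self._tools.items()}

    def call(self, name, **kwargs):
        """Invocation step: run a tool by name."""
        return self._tools[name]["fn"](**kwargs)

# Hypothetical server wrapping a 3D application, echoing the Blender demo.
scene = ToolServer()

@scene.tool("add_cube", "Add a cube of the given size to the scene")
def add_cube(size=1.0):
    return f"cube({size}) added"
```

A model first queries `list_tools()` to learn what is available, then drives the software through `call()`, which is what lets it operate an application like Blender without the user knowing the software.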

Amazon robots are trained millions of times a day in virtual worlds

In the final part of his presentation, Klein showed video of robots in Amazon warehouses moving beneath shelves, lifting entire shelving units and transporting goods while working in the same space as human staff. He said this is a highly complex engineering challenge because the robots must operate safely around people.

“You have millions of different product variations. Some are heavy, some are sturdy, and some are squishy. So you need a robotic hand with the sense of touch to understand how hard or how softly it needs to grip a particular object. That is our latest innovation, combining robotics, machine learning and AI,” Klein said.

He also explained that training robots in the real world is constrained by time, since there are only 24 hours in a day and only a limited number of real-life situations that can occur. 

To overcome that, Amazon uses NVIDIA’s Omniverse simulation software to create realistic virtual warehouse environments and generate millions of different scenarios in a single day.

These include situations in which a person pushes a robot, an item spills on the floor, the power goes out or other unusual conditions arise, allowing robots to learn how to respond safely in a wide range of situations.

Amazon is also developing robotic hands fitted with touch sensors at the fingertips, enabling them to detect whether an object is heavy or light, hard or soft. This matters because warehouse inventories include millions of items with vastly different characteristics. 

The company is also introducing humanoid robots to move baskets used in order sorting.

Klein said the reason for using a human-like form is that such robots can work immediately in spaces already designed for humans, without the need to redesign the entire warehouse structure, generating significant cost savings.

He concluded by saying Amazon remains committed to expanding infrastructure in Southeast Asia, including Thailand, in both data centres and workforce skills development.

The real question, he said, is not which industries AI will transform, but which organisations will move fastest to lead that transformation.