In the first article of our Agentic AI Essentials series, we’ll establish what makes agentic AI distinct. We’ll look at the process of tool calling and examine how agentic systems convert intelligence into action. We’ll also explore the human fears, pressures, and ambitions that fuel the hype around agentic systems. By separating the signal from the noise, IT decision-makers can take the first step toward making sound decisions about agentic AI adoption.

Setting the Stage: From Anomaly Detection to LLMs

Here’s the story so far. Before the era of LLMs and generative AI, anomaly detection systems were the most common form of AI in IT management. Developers could train a system to watch for spikes in CPU usage or other time-series data and alert a human. The system's job was to spot a problem, then hand responsibility to the user, who would decide what to do next. Teams could automate the response by writing endless "if/then" conditions (if the CPU spikes, take this specific action), but that demanded a monumental developer effort. Today, LLMs are skilled at analyzing data and can take the lead autonomously. If a CPU is spiking, an LLM that has learned from a vast amount of data can work out what needs to be done. It can make autonomous remediation decisions, such as updating tickets or escalating an issue, and can even decide when to involve a human. In short, it moves beyond recommendations and into the realm of action.
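
To make that shift concrete, here is a minimal sketch, purely for illustration, of the difference between a hand-written rule and an agent-style flow in which a model picks the next step. Every function and value below is a hypothetical stub, not a real monitoring or ITSM API:

```python
CPU_THRESHOLD = 90.0

def get_cpu_usage(host: str) -> float:
    return 97.0  # stubbed metric for illustration

def rule_based_check(host: str) -> str:
    """Pre-LLM automation: one hand-written rule per symptom, one fixed action."""
    if get_cpu_usage(host) > CPU_THRESHOLD:
        return "restart app-server"      # the only action this rule knows
    return "do nothing"

def ask_llm_for_remediation(context: dict) -> str:
    """Stand-in for an LLM call that weighs the context and picks the next step."""
    return "update_ticket_and_escalate"  # could equally be "restart", "scale_out", ...

def agentic_check(host: str) -> str:
    """Agent-style flow: the model chooses among actions, including involving a human."""
    context = {"host": host, "cpu_usage": get_cpu_usage(host)}
    return ask_llm_for_remediation(context)

print(rule_based_check("web-01"))  # -> restart app-server
print(agentic_check("web-01"))     # -> update_ticket_and_escalate
```

The rule encodes a single fixed response; the agent-style version leaves the choice of remediation, including when to hand off to a human, to the model.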

Tool Calling and the Leap to Agentic AI

How have we made the leap to autonomous systems? In a pre-agentic world, a system's hands were tied; models could analyze data and recommend solutions, but not take action. For example, an ITSM AI might scan past tickets and suggest a fix, leaving the human agent to implement it. Agentic AI changes that. If a user requests VPN access, the agent can understand the request, seek manager approval, read the response, and then provision access directly. The breakthrough enabling this shift is known as tool calling. Instead of relying only on memory, an agentic LLM recognizes when a task requires an external tool.

  • For math, it uses a calculator
  • For weather, it checks a live source
  • In IT, it can query metrics, events, logs, or incident data rather than fabricating an answer from scratch

Tools become the model’s “textbooks”: external resources it consults for accurate, real-time results. The model is given instructions on how to use these tools to answer questions. For example, if you ask, "How is my website doing?" a properly configured LLM will first check a tool that reports on slowness instead of guessing.

It’s important to remember that agentic systems are about more than executing a single instruction. Chatting with an LLM like ChatGPT doesn’t qualify; it simply produces a response, without access to tools or any decision-making of its own. A truly agentic system, by contrast, chains multiple steps together: if asked to write an article, it might search the web, draft content in specialized software, review it for quality, and refine its output. That ability to choose and coordinate across tools is what makes it distinctly agentic.
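
As a rough illustration, here is a minimal sketch of such a tool-calling loop. It assumes a generic setup rather than any particular vendor’s API: the tools are stubbed, and choose_next_step() stands in for the model’s decision about which tool to call next, or whether to stop and answer:

```python
def check_latency(site: str) -> str:
    return f"{site}: p95 latency is 2.4s over the last hour"  # stubbed monitoring data

def search_tickets(query: str) -> str:
    return "3 open incidents mention slow page loads"         # stubbed ITSM data

TOOLS = {
    "check_latency": check_latency,
    "search_tickets": search_tickets,
}

def choose_next_step(question, results):
    """Stand-in for the LLM deciding whether another tool call is needed.
    Returns a tool call, or None once it has enough information to answer."""
    if not results:
        return {"name": "check_latency", "args": {"site": "example.com"}}
    if len(results) == 1:
        return {"name": "search_tickets", "args": {"query": "slow website"}}
    return None

def answer(question: str) -> str:
    results = []
    while True:
        call = choose_next_step(question, results)
        if call is None:
            # In a real system, the model would now compose its reply from the gathered results.
            return "Summary of findings: " + " | ".join(results)
        output = TOOLS[call["name"]](**call["args"])
        results.append(output)  # tool output is fed back to the model for the next decision

print(answer("How is my website doing?"))
```

The key design point is the loop: each tool result is fed back to the model, which then decides whether to call another tool or compose its answer. That is what lets a single request fan out into a chain of coordinated steps.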

How Vendors and Customers Are Fueling the Hype

How is the market responding to these new possibilities? Today’s software vendors face intense pressure to remain relevant in a rapidly evolving landscape. In the context of agentic AI, this has led to a trend in which more traditional forms of AI are rebranded or relabeled as agentic. One Gartner report says: “Many vendors contribute to the hype by engaging in ‘agent washing’ – the rebranding of existing products, such as AI assistants, robotic process automation (RPA), and chatbots, without substantial agentic capabilities.”

Gartner estimates only about 130 of the thousands of agentic AI vendors are real. This rebranding is a low-risk way to get on the "hype train" and generate more sales. The result, however, is a blurry line between what is truly autonomous and what is simply a sophisticated, workflow-based system. Many "agentic" use cases can be solved without explicitly using agents, but the label adds credibility.

Customers, too, are fanning the flames. Many IT leaders and teams are desperate for a single solution that can magically solve all their complex, deeply ingrained problems. They hear the promises of agentic AI and see it as the answer to everything from technical debt to staff shortages. This desire for a quick fix creates a market where vendors feel pressured to overpromise, and customers are willing to believe them.

Technological Potential and Human Anxieties

Given the uncertainty around the technology, Gartner predicts that about 40% of so-called agentic use cases will be shelved, while the remaining 60% will shift toward more traditional workflow-based systems: not canceled, but evolving into more practical forms. Anushree Verma, senior director analyst at Gartner, warns that hype-driven expectations are likely to lead to failed implementation projects:

“Most agentic AI projects right now are early-stage experiments or proof of concepts that are mostly driven by hype and are often misapplied. This can blind organizations to the real cost and complexity of deploying AI agents at scale, stalling projects from moving into production. They need to cut through the hype to make careful, strategic decisions about where and how they apply this emerging technology.”

The gap between promise and reality will gradually narrow as agentic systems become more powerful and their adoption becomes cheaper. But for now, it’s notable that in a field defined by cold logic, a key challenge around agentic AI is a human one. The fog of hype we're currently navigating is fueled by emotion: the prospect of being left behind, the anxiety of missing out, and the simple fear of failure. IT departments are often drawn to the promise of a panacea, making it tempting to overlook the practical realities of implementation costs, shifting roles, and technical debt. These are the focus of the next article in our Agentic AI Essentials series.