AI Agents

When most people hear "AI agent," they picture a chatbot that can do more things. That mental model leads to bad deployments: agents that wander, do the wrong thing, and leave no record of what they did. The organisations getting real value from agents in 2026 think about them differently — as governed workers with specific authorised capabilities, running inside a controlled environment.
Four concepts explain how well-designed agent systems work: LLMs, tools, skills, and harnesses. Understanding all four is the difference between deploying an agent and deploying a reliable one.
A Large Language Model — Claude, GPT, Gemini, Llama — is the reasoning engine of any AI agent. It reads the task, decides what to do next, interprets results, and determines when the job is done. The LLM is what makes an agent intelligent.
What an LLM cannot do on its own is take action in the world. It cannot read your ERP, send an email, query a database, or approve an invoice. For that, it needs tools.
A tool is a specific, named action the LLM can request. Read this document. Query this database. Send this notification. Submit this form. Each tool does exactly one thing and returns a result the LLM can reason about.
Tools are the bridge between the model's intelligence and your business systems. When an agent processes an invoice, it is not the LLM doing the reading and the ERP updating — it is the LLM deciding which tools to call and in what order, while the tools do the actual work of touching your systems.
Every action an agent takes is a tool call. This is the property that makes agents auditable: because every action is a discrete, named tool call with defined inputs and outputs, every action can be logged automatically. You always know exactly what the agent did, with what data, and when.
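As a minimal sketch of this property (names and structures are illustrative, not any specific framework's API), a tool can be modelled as a named function with defined inputs, and every invocation appended to a log at the moment it runs:

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in a real deployment this would be durable storage

class Tool:
    """A single named action with defined inputs and outputs."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def call(self, **inputs):
        output = self.fn(**inputs)
        # Every action is a tool call, so logging here captures everything.
        AUDIT_LOG.append({
            "tool": self.name,
            "inputs": inputs,
            "output": output,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return output

# Hypothetical tool: in production this would hit a real database.
query_db = Tool("query_database", lambda sql: f"rows for: {sql}")
query_db.call(sql="SELECT * FROM invoices WHERE status = 'open'")
print(json.dumps(AUDIT_LOG[0], indent=2))
```

Because logging lives in the tool layer rather than the model, there is no code path by which an action happens without a record.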
A skill is a higher-level capability built from one or more tools, packaged with instructions for a specific business context. Where a tool is read invoice PDF, a skill is process supplier invoice — which internally sequences reading, extracting, matching against a PO, and flagging discrepancies. Where a tool is query transaction data, a skill is run AML check — which applies a specific rule set and returns a structured risk assessment.
Skills let you define reusable, business-meaningful capabilities that sit above the raw tool level. An agent assigned the onboard new vendor skill knows exactly what that entails for your organisation — which tools to use, in what order, with what validation logic — without needing that context in every prompt.
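A skill like the hypothetical process supplier invoice could be sketched as a function that sequences tool calls and carries the organisation's validation logic; the tool behaviours and tolerance below are placeholder assumptions, not real connectors:

```python
class Tool:
    """A single named action; stands in for a real system connector."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def call(self, **inputs):
        return self.fn(**inputs)

# Hypothetical tools with canned results for illustration.
read_invoice_pdf = Tool("read_invoice_pdf", lambda path: {"po": "PO-17", "amount": 950.0})
query_po = Tool("query_po", lambda po: {"po": po, "amount": 1000.0})

def process_supplier_invoice(path, tolerance=0.1):
    """Skill: sequences tools and applies business validation logic."""
    invoice = read_invoice_pdf.call(path=path)
    po = query_po.call(po=invoice["po"])
    gap = abs(invoice["amount"] - po["amount"]) / po["amount"]
    # Flag discrepancies beyond tolerance instead of posting them.
    return {"matched": gap <= tolerance, "discrepancy": round(gap, 3)}

# gap = 50 / 1000 = 0.05, within the 10% tolerance
print(process_supplier_invoice("invoice_041.pdf"))
```

The sequencing and the tolerance live in the skill, so no prompt needs to restate them.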
An agent is an LLM configured with a specific set of tools and skills, a permission level, and a purpose. The same underlying model can power very different agents depending on what it is authorised to do.
Three autonomy levels cover most enterprise workflows:
Read-only. The agent can query, read, and report, but cannot modify anything in any system.
Supervised. The agent can act, but irreversible actions pause for human approval before they execute.
Autonomous. The agent executes its full workflow without per-action approval, inside a tightly scoped set of tools.
These are not different models — they are the same model with different permission configurations. The intelligence is identical; what changes is the trust boundary.
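One way to picture "same model, different trust boundary" is a permission configuration the harness consults before any tool runs; the tool and model names here are placeholders:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentConfig:
    model: str
    allowed_tools: frozenset
    requires_approval: frozenset = frozenset()

MODEL = "frontier-model-v1"  # placeholder, not a real model ID

read_only = AgentConfig(MODEL, frozenset({"read_email", "query_erp"}))
supervised = AgentConfig(
    MODEL,
    frozenset({"read_email", "query_erp", "write_erp"}),
    requires_approval=frozenset({"write_erp"}),
)

def authorise(config, tool):
    """The harness's decision: block, pause for approval, or execute."""
    if tool not in config.allowed_tools:
        return "blocked"
    return "needs_approval" if tool in config.requires_approval else "execute"

print(authorise(read_only, "write_erp"))   # blocked regardless of what the LLM decides
print(authorise(supervised, "write_erp"))  # pauses for a human
```

Moving an agent between autonomy levels is an edit to this configuration, not a redeployment.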
If tools are the hands and the LLM is the brain, the harness is the organisation that employs them — the policies, infrastructure, and controls that govern how the agent operates.
A harness manages four things:
Permissions. Which tools and skills each agent can access, and whether they execute automatically, require approval, or are blocked entirely. Changing what an agent is allowed to do is a configuration change — not a code deployment.
Model routing. Which LLM handles which type of request. Simple, high-volume tasks route to a cost-efficient model. Complex reasoning routes to a frontier model. This keeps costs proportionate to task complexity across thousands of daily operations.
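In its simplest form, routing is a lookup from task type to model; the task and model names below are assumed for illustration:

```python
# Hypothetical routing table; model names are placeholders.
ROUTES = {
    "extract_fields": "small-efficient-model",  # high volume, simple
    "match_invoice": "frontier-model",          # complex reasoning
}
DEFAULT_MODEL = "small-efficient-model"

def route(task_type):
    """Pick a model per task so cost tracks complexity."""
    return ROUTES.get(task_type, DEFAULT_MODEL)

print(route("extract_fields"))
print(route("match_invoice"))
```

Real harnesses often add token budgets or latency targets to this decision, but the principle is the same: the mapping is policy, held outside the agent itself.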
Infrastructure. The harness runs inside your own network. The agent's tool calls — querying databases, reading documents, accessing internal APIs — execute on your infrastructure. Only the prompt and response cross the boundary to the model provider. For Indian enterprises with data residency or compliance requirements, this is what makes enterprise agent deployment viable without exposing sensitive data to external systems.
Audit trail. Every tool call is automatically logged — which tool, which inputs, which output, which agent, at what time. This is the compliance record that regulators and auditors require, produced as a natural output of the architecture rather than a separate instrumentation effort.
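With one record per tool call in that shape (field names here are illustrative), an auditor's question becomes a filter over structured data rather than an investigation:

```python
from datetime import datetime, timezone

def audit_record(agent, tool, inputs, output):
    """One log entry per tool call: who, what, with which data, when."""
    return {
        "agent": agent,
        "tool": tool,
        "inputs": inputs,
        "output": output,
        "at": datetime.now(timezone.utc).isoformat(),
    }

log = [
    audit_record("invoice-agent", "query_erp", {"po": "PO-17"}, {"amount": 1000.0}),
    audit_record("invoice-agent", "write_erp", {"amount": 950.0}, {"ok": True}),
]

# "What did this agent write, and with what data?" is a one-line query.
writes = [r for r in log if r["tool"] == "write_erp"]
print(len(writes))
```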
The skills-agents-harness model solves the three problems that cause enterprise agent deployments to fail.
The blast radius problem: an agent with unrestricted tool access can cause significant harm from a single bad instruction. Scoped tool permissions make the boundary explicit — a read-only agent cannot modify anything regardless of what the LLM decides.
The governance problem: "the AI did it" is not an acceptable answer to an auditor. A logged tool call with typed inputs and outputs is. The harness makes every agent action attributable and reviewable.
The trust problem: organisations do not adopt agents they cannot control. The supervised agent pattern addresses this directly — humans approve irreversible actions, and the harness enforces that checkpoint at the infrastructure level, not through the model's self-restraint.
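The checkpoint can be sketched as a gate the harness runs before any irreversible action; the `approver` callable below stands in for the human review step, and the policy shown is an assumed example:

```python
def supervised_execute(action, payload, approver):
    """The harness pauses irreversible actions until a reviewer decides.
    `approver` stands in for a human looking at what the agent intends to do."""
    if approver(action, payload):
        return {"status": "executed", "action": action}
    return {"status": "rejected", "action": action}

# Illustrative policy: a reviewer approves postings under a threshold.
human_review = lambda action, payload: payload["amount"] < 10_000

result = supervised_execute(
    "post_to_erp", {"vendor": "Acme", "amount": 950.0}, approver=human_review
)
print(result["status"])
```

The point of the pattern is that `supervised_execute` sits in the harness: the model cannot reach the ERP except through it, so the checkpoint does not depend on the model choosing to comply.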
An enterprise deploying an invoice processing agent gives it four tools: read email, query ERP, write ERP, send approval notification. It runs as a read-only agent for the reading and matching phase, then hands off to a supervised agent for the ERP write — which pauses and shows the finance team what it intends to post before doing so. The harness logs every tool call, prevents the agent from accessing any system outside its defined scope, and routes the extraction work to a cost-efficient model while using a more capable model for the matching logic.
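That workflow could be captured in a configuration along these lines; the structure, tool names, and model names are illustrative, not a vendor's schema:

```python
# Illustrative configuration for the invoice workflow described above.
INVOICE_AGENT = {
    "phases": [
        {"name": "read_and_match",
         "autonomy": "read_only",
         "tools": ["read_email", "query_erp"],
         "model": "cost-efficient-model"},
        {"name": "post_to_erp",
         "autonomy": "supervised",  # pauses for finance approval
         "tools": ["write_erp", "send_approval_notification"],
         "model": "frontier-model"},
    ],
}

def tools_for(agent, phase_name):
    """Which tools the harness exposes in a given phase."""
    phase = next(p for p in agent["phases"] if p["name"] == phase_name)
    return set(phase["tools"])

# The read phase cannot reach write_erp no matter what the LLM decides.
print(tools_for(INVOICE_AGENT, "read_and_match"))
```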
A CFO, an internal auditor, and a compliance officer can all engage with this system confidently — not because they have been reassured, but because the governance properties are observable.
Every agent deployment we build starts with the tool and skill design — matching specific capabilities to the workflow, not general-purpose access to everything. Our AI Builder service scaffolds the full stack: tools connected to your business systems, skills packaged for your workflows, agents configured at the right autonomy level. Our automation practice applies the supervised agent pattern to every workflow that touches irreversible actions.
For organisations that need agents operating inside their own infrastructure, our technology services team deploys and manages the harness on your servers. Our integration team builds the tools that connect agents to your ERP, HRMS, document systems, and internal APIs.
The organisations that design this architecture well from the start can deploy each new agent in days — the harness, the tool library, and the permission model already exist. The ones that skip the architecture deploy each agent as a one-off and end up with a fragmented landscape no one can govern. Talk to our team about getting the foundation right.