Decoding Agents and Agentic Systems

Things have been moving at the speed of light since the launch of ChatGPT, and every year since then has been declared the year of some new Generative AI construct. I think 2025 will be remembered as the year of Agents, since everyone is talking about nothing but Agents and Agentic Systems.

When I think about Agents, I remember Agent Vinod, a 2012 movie by Sriram Raghavan. Unfortunately, it has no relationship with AI Agents, but the name is stuck in my brain.

Agents are evolving, and my understanding of them is evolving too. An Agent within an Agentic System can sit anywhere on the range from deterministic to autonomous. In all honesty, that framing is still vague and very technical in nature, so I prefer the layman's version from OpenAI: "Agents are systems that independently accomplish tasks on your behalf."

Let's break it down further using a business process as an example: technical recruitment, with the persona being a Technical Recruiter in an enterprise. I strongly believe the Business Process is king and is the right lens through which to articulate an Agentic system, because that is where the buck stops.

In a simplistic world, the job of a Technical Recruiter is to identify and onboard talent from the market whose skills match the job description.


We can break this into multiple activities that the Technical Recruiter performs:

  1. Firm up the job description. This will be an input from the Business Unit that is looking to fulfill this position.
  2. Comb through the profiles that match the job description. The profiles could come through multiple channels: job sites, emails, referrals, etc.
  3. Once a profile is identified, it is sent for review to the concerned Business Unit.
  4. Once the Business Unit gives a go-ahead, there is a first-level conversation with the candidate covering experience, expectations, etc.
  5. If all goes well, a time is set up for an interview or multiple interviews.
  6. .... <<a few more activities>> 
  7. The final step is the Candidate getting onboarded.

This is just a rough sketch of the process flow, and there could be more detail embedded in each activity, but essentially all of them are time-consuming for a Technical Recruiter, assuming one is working on fulfilling multiple requests from multiple Business Units. A closer inspection of the nature of the activities helps determine the feasibility of adopting AI to ease the job, thereby helping the Recruiter focus on a higher-order set of activities. Let's assume that each activity is handled by an Agent designed to accomplish that task, and that they are all stitched together by a Workflow.

Here it is worth drawing the distinction between a Workflow and an Agent. Anthropic distinguishes between the two but considers both Agentic:

  • Workflows are systems where LLMs and tools are orchestrated through predefined code paths. 

  • Agents, on the other hand, are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks. 

In my view, it's a design pattern choice. As an AI Solution Architect, I prefer a deterministic flow where each step can have a probabilistic outcome.
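
To make the distinction concrete, here is a rough Python sketch of the two control styles. The call_llm helper and the tools are placeholders I have invented for illustration; the point is only the shape of the control flow, not any particular framework or API.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, Anthropic, a local model, etc.)."""
    raise NotImplementedError

# Workflow: the code path is predefined; the LLM only fills in each step.
def screening_workflow(job_description: str, profile: str) -> str:
    summary = call_llm(f"Summarise this candidate profile:\n{profile}")
    score = call_llm(f"Score this summary against the job description:\n{job_description}\n{summary}")
    return call_llm(f"Draft a recommendation for a score of {score}")

# Agent: the LLM decides which tool to call next, in a loop, until it is done.
TOOLS = {
    "search_profiles": lambda query: f"profiles matching '{query}'",  # hypothetical tool
    "score_profile": lambda profile: "0.82",                          # hypothetical tool
}

def screening_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = call_llm(
            "Given the goal and history, reply with either "
            "'TOOL <name> <input>' or 'DONE <answer>'.\n" + "\n".join(history)
        )
        if decision.startswith("DONE"):
            return decision[len("DONE"):].strip()
        _, tool_name, tool_input = decision.split(" ", 2)
        history.append(f"{tool_name} -> {TOOLS[tool_name](tool_input)}")
    return "Stopped after max_steps without a final answer."

The workflow mirrors the deterministic-flow preference above: the sequence is fixed in code, while each call_llm step remains probabilistic.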

Here is a view of the possible Agents at work, orchestrated by a master Agent. Each Agent corresponds to an Activity currently performed by the Technical Recruiter, and each Activity is an area where AI can be applied for productivity gains. Today there are probably ad hoc custom tools that improve productivity in small percentages but give a disconnected user experience. The gains can be amplified by leveraging a general-purpose LLM and programming the entire sequence through a workflow. In my view, each Activity should be atomic in nature, which is what qualifies it as an Agent. The Agents work together, either sequentially or in parallel, sharing information to get the task done. As we start mapping Activities to Agents, it is equally important to outline how to measure and evaluate the correctness and performance of each Agent, which can then be used as a benchmark against the human counterpart.
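
Here is a rough sketch, in the same spirit, of how those Activities could map onto Agents stitched together by a master workflow. Apart from profileMatching_Agent, the agent names, the shared-context shape and the function bodies are my own hypothetical stand-ins for what would really be LLM calls, tool calls or human-in-the-loop steps.

# Hypothetical mapping of the recruitment Activities to Agents, orchestrated
# sequentially by a master workflow. Each agent reads from and writes to a
# shared context dict; the bodies below stand in for LLM/tool/human steps.
from typing import Callable, Dict, List

Context = Dict[str, object]

def jobDescription_Agent(ctx: Context) -> Context:
    ctx["job_description"] = f"JD drafted from BU inputs: {ctx['bu_request']}"
    return ctx

def profileMatching_Agent(ctx: Context) -> Context:
    # In practice: pull profiles from job sites / email / referrals and score them.
    ctx["shortlist"] = [p for p in ctx.get("profiles", []) if "python" in p.lower()]
    return ctx

def buReview_Agent(ctx: Context) -> Context:
    ctx["bu_approved"] = bool(ctx["shortlist"])  # placeholder for a human-in-the-loop step
    return ctx

def screeningCall_Agent(ctx: Context) -> Context:
    ctx["screening_notes"] = "Experience and expectations captured" if ctx["bu_approved"] else None
    return ctx

# The master Agent: a deterministic sequence of steps, each with a probabilistic outcome.
PIPELINE: List[Callable[[Context], Context]] = [
    jobDescription_Agent,
    profileMatching_Agent,
    buReview_Agent,
    screeningCall_Agent,
]

def master_Agent(ctx: Context) -> Context:
    for agent in PIPELINE:
        ctx = agent(ctx)  # agents share information through the context
        print(f"{agent.__name__}: done")
    return ctx

if __name__ == "__main__":
    result = master_Agent({
        "bu_request": "Senior Python developer, 6+ years",
        "profiles": ["Python/Django, 7 yrs", "Java architect, 10 yrs"],
    })
    print(result["shortlist"])  # ['Python/Django, 7 yrs']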


Let's consider the 2nd Activity, the profileMatching_Agent, for a specific job requirement against a specific channel.

Volumetric and Productivity 

The following data points are fictitious in nature:

  • A Technical Recruiter manually screens, on average, 10 profiles for a specific job requirement to identify 1 golden profile.

  • 1 profile takes on average 10 minutes for a Technical Recruiter to screen manually, and it is typically more of a keyword search across multiple parameters, e.g. overall experience, how often the candidate has jumped ship, etc.

  • It takes on average 50 profiles to select a Candidate, consuming anywhere between 8 and 12 hours of effort that can span multiple weeks (see the quick calculation after this list).

  • It is a daunting manual task, and productivity can go for a toss given the rote nature of the work.

  • In a small or medium-sized organisation, these numbers can have a ripple effect on hiring turnaround time, leading to loss of revenue.

  • Typically, the TAG team is a thin, shared team that handles multiple BUs.
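
A quick back-of-the-envelope calculation with these fictitious numbers (the agentic screening time below is equally illustrative):

# Back-of-the-envelope effort calculation using the fictitious numbers above.
profiles_per_hire = 50        # profiles screened to select one candidate
minutes_per_profile = 10      # manual screening time per profile

manual_hours = profiles_per_hire * minutes_per_profile / 60
print(f"Manual screening effort per hire: ~{manual_hours:.1f} hours")  # ~8.3 hours

# Assumed agentic screening time of a few seconds per profile (illustrative only).
agentic_seconds_per_profile = 5
agentic_minutes = profiles_per_hire * agentic_seconds_per_profile / 60
print(f"Agentic screening effort per hire: ~{agentic_minutes:.1f} minutes")  # ~4.2 minutes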

Agentic Intervention 

  • An Agentic intervention can bring profile screening time down to a few seconds per profile (a rough sketch follows this list).

  • Data integration across the different channels automates the extraction and matching of profiles, thereby reducing the manual work of a Technical Recruiter.
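
As a rough illustration of what the intervention could look like for the profileMatching_Agent, here is a sketch that scores a single profile against a job description with an LLM. It assumes the OpenAI Python SDK (v1.x) purely as an example; the prompt, model name and threshold are placeholders rather than recommendations.

# Sketch of a profileMatching_Agent step: score one profile against a job description.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
# the prompt, model name and threshold are illustrative, not recommendations.
import json
from openai import OpenAI

client = OpenAI()

def match_profile(job_description: str, profile_text: str) -> dict:
    prompt = (
        "You are screening candidates for a technical role.\n"
        f"Job description:\n{job_description}\n\n"
        f"Candidate profile:\n{profile_text}\n\n"
        'Reply with JSON only: {"score": 0-100, "reasons": ["..."]}'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # A production agent would validate the output instead of trusting json.loads.
    return json.loads(response.choices[0].message.content)

def shortlist(job_description: str, profiles: list, threshold: int = 70) -> list:
    scored = [match_profile(job_description, p) | {"profile": p} for p in profiles]
    return [s for s in scored if s["score"] >= threshold]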

Correctness 

  • The Agentic design should ensure the right profiles are selected, and it must be monitored periodically so that the approach can be fine-tuned and compared against the human benchmark.

  • Some of the metrics for a periodic evaluation (a measurement sketch follows this list):

  • No. of profiles scanned

  • No. of profiles selected against the job description

  • No. of profiles wrongly selected against the job description
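
A minimal sketch of how those counts could be captured per evaluation cycle and turned into a precision-style number to compare against the human benchmark; the field names and the sample figures are hypothetical.

# Hypothetical periodic evaluation record for the profileMatching_Agent,
# built from the counts listed above; the sample figures are made up.
from dataclasses import dataclass

@dataclass
class ScreeningEvaluation:
    profiles_scanned: int           # No. of profiles scanned
    profiles_selected: int          # No. of profiles selected against the job description
    profiles_wrongly_selected: int  # No. of profiles wrongly selected against the job description

    @property
    def precision(self) -> float:
        """Share of selected profiles that were correct selections."""
        if self.profiles_selected == 0:
            return 0.0
        return 1 - self.profiles_wrongly_selected / self.profiles_selected

agent_run = ScreeningEvaluation(profiles_scanned=200, profiles_selected=20, profiles_wrongly_selected=4)
human_benchmark = ScreeningEvaluation(profiles_scanned=50, profiles_selected=5, profiles_wrongly_selected=1)

print(f"Agent precision: {agent_run.precision:.0%} vs human benchmark: {human_benchmark.precision:.0%}")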

Agentic design must be decided based on a clear cost-benefit analysis, not on FOMO. If the process today works fine and meets the objective, there is no reason to make it more complex. Agentic systems are not plug and play; they take time, effort and capital to build.

As Anthropic puts it:

When building applications with LLMs, we recommend finding the simplest solution possible, and only increasing complexity when needed. This might mean not building agentic systems at all. Agentic systems often trade latency and cost for better task performance, and you should consider when this tradeoff makes sense 

References:

  • Anthropic, "Building Effective Agents"

  • OpenAI, "A Practical Guide to Building Agents"