Agentic AI Patterns: A Handbook for Architects
Many of us read all sorts of articles and blogs and watch endless videos about agents, but how many of us actually sit down and write code to implement them? Probably not a lot. Maybe it’s because we don’t really write code these days; we just ask ChatGPT, Gemini, or Claude to do the heavy lifting for us.
So, after coming across Andrew Ng’s article on Agentic AI Patterns, I thought, “Why not try to write (or… generate) some code myself?” Maybe show a few examples, compare them, and discuss the best frameworks for bringing these Agentic AI Patterns to life. Let’s see if any real typing actually happens!
Agentic patterns can be broadly grouped into these six categories:
- Chain-of-Thought Prompting
- Tool-Using Agent
- ReAct (Reason + Act)
- Reflexion
- Auto-GPT / Task Loop
- Multi-Agent Collaboration
I’ll go through these six patterns, provide examples of how to implement each one, and discuss the frameworks that are available to implement them easily.
Chain-of-Thought Prompting
This is not exactly an agentic pattern but a way of prompting; its output helps identify the agents and tools required to fulfill each step of the reasoning. Hence I keep it as the pattern to start with.
Here is a code example where, based on a query, a prompt tries to generate a chain of thought that determines the list of tools needed. I did it using OpenAI, but you can replace it with any other chat/completion LLM inference.
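Something along these lines (a minimal sketch; the tool names are made up, and `gpt-4o-mini` is just a placeholder model):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical tool names for illustration
TOOLS = ["search_flights", "check_refund_policy", "issue_refund"]

def chain_of_thought_plan(query: str) -> str:
    """Ask the model to reason step by step and map each step to a tool."""
    prompt = (
        "Think step by step about how to answer the user's query.\n"
        f"Available tools: {', '.join(TOOLS)}\n"
        "For each reasoning step, name the tool (if any) you would use.\n\n"
        f"Query: {query}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat/completion model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(chain_of_thought_plan("My flight was cancelled, can I get my money back?"))
```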
Tool-Using Agent
A low-complexity pattern that is easy to implement. The AI has access to external “tools” and decides when to call them. From the query, it identifies one tool from the list of tools and invokes it.
Most LLM providers call this Function Calling: in a prompt you send a list of tools along with the query, and the model tells you which function to call and populates its parameters. Here is an example where a helpdesk agent tries to identify which tool to invoke; it will help you understand the internals.
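A minimal sketch of doing that selection by hand with a plain prompt, so you can see the moving parts (the helpdesk tools are hypothetical stubs):

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical helpdesk tools; only the descriptions are sent to the model
TOOLS = {
    "reset_password": "Reset a user's account password",
    "create_ticket": "Open a new support ticket",
    "check_ticket_status": "Look up the status of an existing ticket",
}

def pick_tool(query: str) -> dict:
    """Ask the model to choose one tool and return its choice as JSON."""
    tool_list = "\n".join(f"- {name}: {desc}" for name, desc in TOOLS.items())
    prompt = (
        f"You are a helpdesk agent. Tools available:\n{tool_list}\n\n"
        f"User query: {query}\n"
        'Reply with JSON only: {"tool": "<name>", "arguments": {...}}'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # force valid JSON back
    )
    return json.loads(response.choices[0].message.content)

print(pick_tool("I forgot my password and can't log in"))
```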
Here is a sample of function calling using OpenAI, from their cookbooks.
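In the spirit of those cookbook samples, a minimal sketch using the native `tools` parameter of the Chat Completions API (`get_ticket_status` is a made-up placeholder function):

```python
from openai import OpenAI

client = OpenAI()

# Tool schema in the format the chat completions API expects
tools = [{
    "type": "function",
    "function": {
        "name": "get_ticket_status",  # hypothetical helpdesk function
        "description": "Look up the status of a support ticket",
        "parameters": {
            "type": "object",
            "properties": {
                "ticket_id": {"type": "string", "description": "Ticket identifier"},
            },
            "required": ["ticket_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the status of ticket T-1234?"}],
    tools=tools,
)

# The model returns the function name and populated arguments instead of text
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, tool_call.function.arguments)
```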
Here is another example of how Semantic Kernel implements it.
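For reference, a rough sketch of the same idea in Semantic Kernel’s Python SDK. SK’s API has shifted between releases, so treat the imports and settings below as indicative of v1.x rather than copy-paste ready:

```python
import asyncio
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai import FunctionChoiceBehavior
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.functions import KernelArguments, kernel_function

class HelpdeskPlugin:
    """A native plugin; SK exposes its methods to the model as tools."""

    @kernel_function(description="Look up the status of a support ticket")
    def get_ticket_status(self, ticket_id: str) -> str:
        return f"Ticket {ticket_id} is in progress"  # stubbed lookup

async def main():
    kernel = Kernel()
    kernel.add_service(OpenAIChatCompletion(ai_model_id="gpt-4o-mini"))
    kernel.add_plugin(HelpdeskPlugin(), plugin_name="helpdesk")

    # Let SK pick and call the plugin function automatically
    settings = OpenAIChatPromptExecutionSettings(
        function_choice_behavior=FunctionChoiceBehavior.Auto()
    )
    result = await kernel.invoke_prompt(
        "What's the status of ticket T-1234?",
        arguments=KernelArguments(settings=settings),
    )
    print(result)

asyncio.run(main())
```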
Now, using the two patterns above, we can start looking into some of the more complex patterns. If you look closely, they are just different orchestrations of the two patterns we have covered so far.
ReAct (Reason + Act)
A commonly used pattern that interleaves chain-of-thought reasoning with explicit “action” steps, then processes the outcome before continuing. It needs a loop that captures the AI’s “actions,” executes them, and returns the results for further reasoning.
I gave an example where an airline agent processes refunds based on a query.
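A minimal sketch of that ReAct loop (the booking and refund functions are stubs, and the Action parsing is deliberately naive):

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical airline back-office functions, stubbed for the sketch
def lookup_booking(ref: str) -> dict:
    return {"ref": ref, "fare": "refundable", "amount": 220}

def process_refund(ref: str) -> dict:
    return {"ref": ref, "status": "refund issued"}

ACTIONS = {"lookup_booking": lookup_booking, "process_refund": process_refund}

SYSTEM = (
    "You are an airline refund agent. On each turn, write:\n"
    "Thought: <your reasoning>\n"
    'Action: {"name": "lookup_booking" or "process_refund", "args": {"ref": "..."}}\n'
    "The Action JSON must sit alone on its line. After each Action you will get an "
    "Observation. When finished, reply with a line starting 'Final Answer:'."
)

def react(query: str, max_steps: int = 5) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": query}]
    for _ in range(max_steps):  # bounded loop so the agent cannot reason forever
        reply = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        # Naive parsing for brevity: take the line after "Action:" as JSON
        action_line = reply.split("Action:")[1].strip().splitlines()[0]
        action = json.loads(action_line)
        observation = ACTIONS[action["name"]](**action["args"])
        messages.append({"role": "user",
                         "content": f"Observation: {json.dumps(observation)}"})
    return "Stopped after max steps without a final answer"

print(react("Please refund booking ABC123; my flight was cancelled."))
```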
If you followed my example, here is a LangGraph template for a simple ReAct agent, in just a few lines of code.
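Something like this, assuming a recent `langgraph` release with `langchain[openai]` installed (the booking tool is again a stub):

```python
from langgraph.prebuilt import create_react_agent

# Any LangChain-compatible tool works; a plain function with a docstring is enough
def lookup_booking(ref: str) -> str:
    """Look up a booking and return its fare rules."""
    return f"Booking {ref}: refundable fare, $220"  # stub

# The "provider:model" string form works with recent langgraph releases
agent = create_react_agent("openai:gpt-4o-mini", tools=[lookup_booking])

result = agent.invoke(
    {"messages": [{"role": "user", "content": "Is booking ABC123 refundable?"}]}
)
print(result["messages"][-1].content)
```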
Reflexion Pattern
An agent generates an answer, which is then critiqued by another “reflection” agent and revised if necessary. This pattern requires an additional reflection/feedback step or repeated queries to the LLM.
I tried to give an example where, based on a user query about a CSV file, the agent creates code dynamically, executes it, and then fixes it in case of any error, until the final answer is achieved.
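A sketch of that generate-execute-reflect loop (`bookings.csv` is a hypothetical file; never `exec` generated code outside a sandbox in production):

```python
import traceback
from openai import OpenAI

client = OpenAI()

def answer_from_csv(query: str, csv_path: str, max_attempts: int = 3):
    """Generate pandas code for the query, run it, and self-correct on errors."""
    feedback = ""
    for _ in range(max_attempts):
        prompt = (
            f"Write Python (pandas) code that loads '{csv_path}' into df and "
            "stores the answer to this question in a variable named result:\n"
            f"{query}\nReturn only code, no explanation." + feedback
        )
        raw = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        # Crude markdown-fence stripping; a real agent should parse more carefully
        code = raw.strip().strip("`").removeprefix("python")
        try:
            scope = {}
            exec(code, scope)  # WARNING: sandbox generated code in real systems
            return scope["result"]
        except Exception:
            # The reflection step: feed the traceback into the next attempt
            feedback = ("\nYour previous code failed with:\n"
                        f"{traceback.format_exc()}\nFix it and return only code.")
    raise RuntimeError("No working code after max attempts")

# 'bookings.csv' is a hypothetical file with a 'fare' column
print(answer_from_csv("What is the average fare?", "bookings.csv"))
```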
Here is a similar example using AutoGen, with a code generator and a code reviewer.
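A rough equivalent with the pyautogen 0.2-style API (newer AutoGen releases restructure this considerably):

```python
from autogen import AssistantAgent

llm_config = {"config_list": [{"model": "gpt-4o-mini"}]}  # api_key from env

coder = AssistantAgent(
    name="coder",
    llm_config=llm_config,
    system_message="You write Python code to solve the given task.",
)
reviewer = AssistantAgent(
    name="reviewer",
    llm_config=llm_config,
    system_message="You review the code for bugs and suggest fixes. "
                   "Reply only 'LGTM' when the code is correct.",
)

# The two agents exchange messages: generate -> critique -> revise
coder.initiate_chat(
    reviewer,
    message="Write a function that safely parses dates like '2024-01-31'.",
    max_turns=4,
)
```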
Auto-GPT / Task Loop
A set of agents autonomously plans tasks, executes them, updates memory, and iterates until a high-level goal is completed. Running such a set of agents requires memory management (vector databases), scheduling logic, and robust handling of indefinite loops.
I know it looks complicated, but with a framework like AutoGen or LangGraph it is easier to implement; the framework takes care of a few steps implicitly.
I gave an example of Auto-GPT without using any framework, where it tries to answer a query by gathering information from multiple sources.
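A stripped-down sketch of such a task loop, with stubbed functions standing in for real web search and knowledge-base lookups, and a plain list standing in for vector-store memory:

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical information sources the loop can pull from
def search_web(q: str) -> str:        return f"(stub) web results for {q}"
def search_knowledge(q: str) -> str:  return f"(stub) KB results for {q}"

TOOLS = {"search_web": search_web, "search_knowledge": search_knowledge}

def autogpt_loop(goal: str, max_iters: int = 6) -> str:
    memory = []  # in real systems this is often a vector store
    for _ in range(max_iters):
        plan = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": (
                f"Goal: {goal}\nMemory so far: {json.dumps(memory)}\n"
                'Reply with JSON: {"done": bool, "answer": str, '
                '"tool": "search_web" or "search_knowledge", "query": str}'
            )}],
            response_format={"type": "json_object"},
        )
        step = json.loads(plan.choices[0].message.content)
        if step.get("done"):
            return step.get("answer", "")
        # Execute the planned step and write the result back to memory
        result = TOOLS[step["tool"]](step["query"])
        memory.append({"tool": step["tool"], "query": step["query"],
                       "result": result})
    return "Gave up after max iterations"  # hard stop avoids indefinite loops

print(autogpt_loop("Summarize our refund policy and recent complaints about it"))
```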
If you are still with me, by now you are wondering how ReAct, Reflexion, and Auto-GPT actually differ. All of them are autonomous agents built upon chain-of-thought reasoning and tool access, but they have a few differences, pros, and cons. I have tried to explain them in my other Medium article.
Multi-Agent Collaboration
A pattern where multiple specialized agents communicate — each with a distinct role. It requires a “controller” or “communication” layer to facilitate agent interactions.
I tried to give an example using Agentic RAG, where a multi-agent system pulls information from various sources: an internal knowledge base via a Tool-Using Agent, web search via Auto-GPT, and local data analysis via Reflexion.
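A sketch of the controller layer for that setup, with the three specialist agents stubbed out (in practice each stub would be one of the implementations above):

```python
import json
from openai import OpenAI

client = OpenAI()

# Stubs standing in for the agents built earlier in this article
def kb_tool_agent(q: str) -> str:    return f"(stub) KB answer for {q}"
def web_autogpt(q: str) -> str:      return f"(stub) web research for {q}"
def data_reflexion(q: str) -> str:   return f"(stub) CSV analysis for {q}"

AGENTS = {"knowledge_base": kb_tool_agent, "web": web_autogpt,
          "data": data_reflexion}

def controller(query: str) -> str:
    """Route the query to the right specialists, then merge their answers."""
    route = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": (
            f"Which agents should handle this query: {query}\n"
            'Reply with JSON: {"agents": ["knowledge_base", "web", "data"]} '
            "listing only the agents that are needed."
        )}],
        response_format={"type": "json_object"},
    )
    chosen = json.loads(route.choices[0].message.content)["agents"]
    findings = {name: AGENTS[name](query) for name in chosen}
    merge = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": (
            f"Combine these findings into one answer to '{query}':\n"
            f"{json.dumps(findings)}"
        )}],
    )
    return merge.choices[0].message.content

print(controller("Compare our refund policy with competitors and our refund stats"))
```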
LangGraph is commonly used to implement this pattern. It provides a graph-like representation of the agents and their interactions, with a built-in mechanism to transfer state from one agent to another.
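A minimal two-node sketch of that idea with LangGraph’s `StateGraph` (the agent nodes are stubs; the point is the shared state flowing along the edges):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    query: str
    answer: str

# Each node is an agent; here they are stubs that update the shared state
def kb_agent(state: State) -> dict:
    return {"answer": f"(stub) KB lookup for {state['query']}"}

def web_agent(state: State) -> dict:
    return {"answer": state["answer"] + " + (stub) web findings"}

builder = StateGraph(State)
builder.add_node("kb", kb_agent)
builder.add_node("web", web_agent)
builder.add_edge(START, "kb")   # state flows kb -> web -> END
builder.add_edge("kb", "web")
builder.add_edge("web", END)

graph = builder.compile()
print(graph.invoke({"query": "refund policy", "answer": ""}))
```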
I’ll stop here. You may find that a few things overlap, or that the same pattern is described in different ways, but if you try to code them you will notice subtle differences. While implementing any of the autonomous patterns (ReAct, Reflexion, Auto-GPT, and Multi-Agent), keeping a human in the loop is very important: after a few tries, the agent should let a human intervene rather than going into an infinite loop.
Here is a quick guide on how to select your pattern.
- Simple tasks or quick improvements? Consider Chain-of-Thought or a Tool-Using Agent.
- Iterative tasks with external data checks? ReAct or Reflexion.
- Long-running or open-ended goals? Look to Auto-GPT/Task Loop.
- Complex workflows needing multiple expert roles? Multi-Agent Collaboration.
A single solution can be a blend of multiple patterns, as described in the Multi-Agent example.