By extending agent capabilities with SuperAGI tools, developers can carry out specific tasks beyond the reach of a standalone LLM. For instance, the GoogleSearchTool enables agents to scan Google search results and extract useful information, while the CodingTool enables the writing, debugging, and reworking of code. Tools can access other tools’ memories in order to incorporate their knowledge and lessons learned into their own operations, which improves the quality of the agent’s output.

🧠 Dedicated and Shared Tool Memory

Each tool has its own dedicated memory, so a tool can access the output of its own previous iterations and apply those lessons in subsequent runs.

Additionally, tools share a broader Shared Memory (a vector DB) that lets them communicate with one another and retrieve the pertinent data they need to accomplish their objectives.
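To make the distinction concrete, here is a minimal sketch of the two memory types. The class names (`DedicatedMemory`, `SharedMemory`) and methods are illustrative assumptions, not SuperAGI’s actual API, and the keyword-overlap scoring is a stand-in for the embedding similarity a real vector DB would use.

```python
class DedicatedMemory:
    """Per-tool memory: one entry appended per iteration."""
    def __init__(self):
        self._entries = []  # ordered by iteration

    def add(self, iteration, output):
        self._entries.append({"iteration": iteration, "output": output})

    def all_iterations(self):
        return list(self._entries)

    def latest(self):
        return self._entries[-1] if self._entries else None


class SharedMemory:
    """Cross-tool store that any tool can query (vector-DB stand-in)."""
    def __init__(self):
        self._docs = []  # (tool_name, text)

    def add(self, tool_name, text):
        self._docs.append((tool_name, text))

    def query(self, text, top_k=1):
        # Naive relevance: count shared words. A real vector DB
        # would rank by embedding similarity instead.
        words = set(text.lower().split())
        scored = sorted(
            self._docs,
            key=lambda d: len(words & set(d[1].lower().split())),
            reverse=True,
        )
        return scored[:top_k]
```

A tool would write each iteration’s output into its own `DedicatedMemory` and publish anything other tools might need into the `SharedMemory`.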

⚙️ How does this work?

SuperAGI agents can use a variety of tools during each run to accomplish their objectives through numerous iterations.

At the end of each iteration, a tool can retrieve data points from its own dedicated memory across all prior iterations, or fetch the most recent memory produced by another tool.

This mechanism is referred to as “Feed Memory,” since it allows an agent to retrieve tool learnings from all iterations of an agent run.

For instance, suppose an agent is set up with access to SearchTool —> WriteToFile —> CodingTool to achieve a goal. During a run, the CodingTool can directly fetch the memory of the last search conducted by the SearchTool in order to write the code. “Feed Memory” also allows the SearchTool to fetch the memory of any of its own former iterations.
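The flow above can be sketched as follows. The `feed_memory` dict, the helper functions, and the recorded strings are all hypothetical, chosen only to show how a later tool reads an earlier tool’s most recent output within the same run.

```python
# {tool_name: [output per iteration]} for the current run
feed_memory = {}

def record(tool_name, output):
    """A tool records its output at the end of an iteration."""
    feed_memory.setdefault(tool_name, []).append(output)

def latest_from(tool_name):
    """Fetch the most recent memory produced by another tool."""
    entries = feed_memory.get(tool_name, [])
    return entries[-1] if entries else None

# Iteration 1: SearchTool runs and records its results.
record("SearchTool", "top results: requests library docs, examples")

# Later in the run, CodingTool pulls the last search results directly
# instead of receiving only a truncated chain-of-thought context.
search_context = latest_from("SearchTool")
code = f"# code written using: {search_context}"
record("CodingTool", code)

# SearchTool can also re-read any of its own former iterations.
own_history = feed_memory["SearchTool"]
```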




Tools can also access the Long-Term Memory (LTM) of previously used tools, both within and across runs, providing output history from past runs.

Long-term memory allows a tool to store its own learnings, and those of any other tool, across the complete agent workflow.

For instance, suppose an agent is set up with access to CodingTool —> WriteToFile —> GitHubTool. Any iteration of the CodingTool’s memory is accessible to the WriteToFile tool. Once the initial agent run completes, the overall learnings of every tool used in the run, including all previous iterations, are saved to LTM. These learnings can then be accessed and employed during future agent runs within an agent workflow.
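A minimal sketch of that archive-and-recall step, under the same caveat: the structures and names below are assumptions for illustration, not SuperAGI internals.

```python
# {run_id: {tool_name: [iteration outputs]}} persisted across runs
long_term_memory = {}

def archive_run(run_id, run_memory):
    """After a run completes, save every tool's full iteration history."""
    long_term_memory[run_id] = {
        tool: list(outputs) for tool, outputs in run_memory.items()
    }

def recall(tool_name):
    """Gather a tool's learnings across all archived runs."""
    return [
        output
        for run in long_term_memory.values()
        for output in run.get(tool_name, [])
    ]

# Run 1: CodingTool -> WriteToFile -> GitHubTool
archive_run("run-1", {
    "CodingTool": ["draft parser", "fixed edge case"],
    "WriteToFile": ["wrote parser.py"],
})

# A future run can consult everything CodingTool learned previously.
prior = recall("CodingTool")
```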

✨ Conclusion

The traditional CoT (Chain of Thought) method passes extremely limited context from one agent iteration to the next. When the complete memory of a tool needs to be conveyed to the next iteration, CoT’s performance tends to be sub-optimal. The latest developments in SuperAGI’s tool architecture aim to remove these shortcomings and improve how tools interface with memory when using SuperAGI.

Try SuperAGI – https://superagi.com/understanding-how-dedicated-shared-tool-memory-works-in-superagi/
