Adapters

OmniCache-AI adapters provide drop-in integrations with popular AI and agent frameworks. Each adapter implements the framework's native cache or agent interface, so you get caching without changing your existing code.

Overview

Adapters bridge the gap between OmniCache-AI's cache engine and the framework-specific APIs that each AI library expects. Instead of writing custom glue code, you instantiate an adapter with a CacheManager and plug it into the framework's standard extension point.

There are two adapter styles:

  • Interface adapters (LangChain, LangGraph) -- subclass the framework's cache/checkpointer base class and implement its required methods. The framework calls these methods automatically.
  • Wrapper adapters (AutoGen, CrewAI, Agno, A2A) -- wrap an agent or handler with cache logic. All non-overridden attributes proxy through to the original object via __getattr__.

Framework Support Matrix

| Adapter | Framework | Min Version | Extra | Interface |
| --- | --- | --- | --- | --- |
| LangChainCacheAdapter | LangChain | langchain-core >= 0.2 | pip install 'omnicache-ai[langchain]' | BaseCache |
| LangGraphCacheAdapter | LangGraph | langgraph >= 0.1 | pip install 'omnicache-ai[langgraph]' | BaseCheckpointSaver |
| AutoGenCacheAdapter | AutoGen | pyautogen >= 0.2 or autogen-agentchat >= 0.4 | pip install 'omnicache-ai[autogen]' | Agent wrapper |
| CrewAICacheAdapter | CrewAI | crewai >= 0.28 | pip install 'omnicache-ai[crewai]' | Crew wrapper |
| AgnoCacheAdapter | Agno | agno >= 0.1 | pip install 'omnicache-ai[agno]' | Agent wrapper |
| A2ACacheAdapter | A2A (Agent-to-Agent) | -- | pip install omnicache-ai | Handler wrapper / decorator |

Quick Start

All adapters follow the same three-step pattern:

```python
from omnicache_ai import CacheManager, InMemoryBackend, CacheKeyBuilder

# 1. Create a CacheManager
manager = CacheManager(
    backend=InMemoryBackend(),
    key_builder=CacheKeyBuilder(namespace="myapp"),
)

# 2. Import the adapter
from omnicache_ai.adapters.langchain_adapter import LangChainCacheAdapter

# 3. Plug it in
adapter = LangChainCacheAdapter(manager)
```
Tip: install the all extra to pull in every framework dependency at once:

```shell
pip install 'omnicache-ai[all]'
```

Adapter Architecture

All wrapper-style adapters (AutoGen, CrewAI, Agno, A2A) implement the transparent proxy pattern:

  1. Cache-aware methods (run, kickoff, process, etc.) check the cache before delegating to the wrapped object.
  2. All other attribute accesses are forwarded to the wrapped object via __getattr__, so the adapter behaves identically to the original object for any non-cached operations.
```python
# The adapter acts like the original object
cached_agent = AutoGenCacheAdapter(agent, manager)
cached_agent.name          # proxied to agent.name
cached_agent.run("hello")  # cache-aware
```
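Stripped of backend details, the transparent proxy pattern is plain __getattr__ delegation. A minimal, self-contained sketch (illustrative only, not OmniCache-AI's actual adapter code):

```python
class CachingProxy:
    """Wraps any object: intercepts run(), forwards everything else."""

    def __init__(self, wrapped):
        self._wrapped = wrapped
        self._cache = {}

    def run(self, prompt):
        # Cache-aware method: consult the cache before delegating.
        if prompt not in self._cache:
            self._cache[prompt] = self._wrapped.run(prompt)
        return self._cache[prompt]

    def __getattr__(self, name):
        # __getattr__ fires only when normal attribute lookup fails,
        # so run() and the proxy's own fields are untouched; every
        # other attribute access falls through to the wrapped object.
        return getattr(self._wrapped, name)

class Agent:
    name = "demo"

    def __init__(self):
        self.calls = 0

    def run(self, prompt):
        self.calls += 1  # pretend this is an expensive LLM call
        return prompt.upper()

agent = Agent()
proxy = CachingProxy(agent)
proxy.run("hi")
proxy.run("hi")
print(proxy.name, agent.calls)  # demo 1
```

Because __getattr__ is only a fallback, the proxy is indistinguishable from the original object for any attribute or method the adapter does not override.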

Choosing the Right Adapter

| If you use... | Use this adapter | Why |
| --- | --- | --- |
| langchain LLMs / chat models | LangChainCacheAdapter | Implements BaseCache -- set it globally via set_llm_cache() |
| langgraph state graphs | LangGraphCacheAdapter | Implements BaseCheckpointSaver -- pass to compile(checkpointer=...) |
| pyautogen 0.2.x agents | AutoGenCacheAdapter | Wraps generate_reply() with caching |
| autogen-agentchat 0.4+ agents | AutoGenCacheAdapter | Wraps run() / arun() with caching |
| crewai crews | CrewAICacheAdapter | Wraps kickoff() / kickoff_async() |
| agno agents | AgnoCacheAdapter | Wraps run() / arun() |
| Custom A2A / inter-agent messaging | A2ACacheAdapter | Wraps any handler via process() or @wrap decorator |
| Custom LLM functions (no framework) | Middleware | Use LLMMiddleware or AsyncLLMMiddleware directly |
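For the no-framework row, the middleware approach boils down to memoizing an LLM call by prompt. The real LLMMiddleware API is not shown in this section, so the decorator below is a rough, hypothetical illustration of the idea in plain Python, not its actual interface:

```python
import functools

def cache_llm(fn):
    """Toy stand-in for an LLM caching middleware: memoize by prompt."""
    store = {}

    @functools.wraps(fn)
    def wrapper(prompt):
        if prompt not in store:
            store[prompt] = fn(prompt)
        return store[prompt]

    wrapper.cache = store  # expose the store for inspection
    return wrapper

calls = []

@cache_llm
def my_llm(prompt):
    calls.append(prompt)  # pretend this is an expensive API call
    return f"echo: {prompt}"

my_llm("hello")
my_llm("hello")
print(len(calls))  # 1
```

A production middleware would additionally build namespaced keys (as CacheKeyBuilder does), support async callables, and delegate storage to a pluggable backend rather than an in-process dict.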

Next Steps