
Kayba Logo

Agentic Context Engine (ACE)

GitHub stars Discord Twitter Follow PyPI version Python 3.11+ License: MIT

AI agents that get smarter with every task 🧠

Agentic Context Engine learns from your agent's successes and failures. Just plug in and watch your agents improve.

Star ⭐️ this repo if you find it useful!


🤖 LLM Quickstart

  1. Point your favorite coding agent (Cursor, Claude Code, Codex, etc.) to the Quick Start Guide
  2. Prompt away!

✋ Quick Start

1. Install

pip install ace-framework

2. Set API Key

export OPENAI_API_KEY="your-api-key"

3. Run

from ace import ACELiteLLM

agent = ACELiteLLM(model="gpt-4o-mini")

answer = agent.ask("What does Kayba's ACE framework do?")
print(answer)  # "ACE allows AI agents to remember and learn from experience!"

🎉 Done! Your agent learns automatically from each interaction.


🎯 Integrations

ACE provides four ready-to-use integrations:

→ Integration Guide | → Examples

1. ACELiteLLM - Simplest Start 🚀

Create your self-improving agent:

Click to view code example
from ace import ACELiteLLM

# Create self-improving agent
agent = ACELiteLLM(model="gpt-4o-mini")

# Ask related questions - agent learns patterns
answer1 = agent.ask("If all cats are animals, is Felix (a cat) an animal?")
answer2 = agent.ask("If all birds fly, can penguins (birds) fly?")  # Learns to check assumptions!
answer3 = agent.ask("If all metals conduct electricity, does copper conduct electricity?")

# View learned strategies
print(f"✅ Learned {len(agent.skillbook.skills())} reasoning skills")

# Save for reuse
agent.save_skillbook("my_agent.json")

# Load and continue
agent2 = ACELiteLLM.from_skillbook("my_agent.json", model="gpt-4o-mini")

2. ACELangChain - Wrap ACE Around Your Existing Agent ⛓️

Wrap any LangChain chain/agent with learning:

Best for: Multi-step workflows, tool-using agents

Click to view code example
from ace import ACELangChain

ace_chain = ACELangChain(runnable=your_langchain_chain)
result = ace_chain.invoke({"question": "Your task"})  # Learns automatically
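
For context, here is a slightly fuller sketch of what the wrapped runnable might look like. The chain below is illustrative (a simple prompt piped into an OpenAI chat model via langchain-openai); any LangChain runnable should work the same way.

from ace import ACELangChain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Illustrative LangChain runnable: prompt -> chat model
prompt = ChatPromptTemplate.from_template("Answer concisely: {question}")
llm = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | llm

# Wrap the runnable so ACE learns from every invocation
ace_chain = ACELangChain(runnable=chain)
result = ace_chain.invoke({"question": "Which planet has the most moons?"})
print(result)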

3. ACEAgent - Enhance Browser-Use Agents with Self-Optimization 🌐

Self-improving browser agents with browser-use:

Features: Drop-in replacement for browser_use.Agent, automatic learning, reusable skillbooks → Browser Use Guide

Click to view code example
pip install ace-framework[browser-use]
from ace import ACEAgent
from browser_use import ChatBrowserUse

# Two LLMs: ChatBrowserUse for browser, gpt-4o-mini for ACE learning
agent = ACEAgent(
    llm=ChatBrowserUse(),      # Browser execution
    ace_model="gpt-4o-mini"    # ACE learning
)

await agent.run(task="Find top Hacker News post")
agent.save_skillbook("hn_expert.json")

# Reuse learned knowledge
agent = ACEAgent(llm=ChatBrowserUse(), skillbook_path="hn_expert.json")
await agent.run(task="New task")  # Starts smart!

4. ACEClaudeCode - Claude Code CLI 💻

Self-improving coding agent using Claude Code:

Features: Claude Code CLI wrapper, automatic learning, task execution traces → Claude Code Loop Example

Click to view code example
from ace import ACEClaudeCode

agent = ACEClaudeCode(
    working_dir="./my_project",
    ace_model="gpt-4o-mini"
)

# Execute coding tasks - agent learns from each
result = agent.run(task="Add unit tests for utils.py")
agent.save_skillbook("coding_expert.json")

# Reuse learned knowledge
agent = ACEClaudeCode(working_dir="./project", skillbook_path="coding_expert.json")

Why Agentic Context Engine (ACE)?

AI agents make the same mistakes repeatedly.

ACE enables agents to learn from execution feedback: what works, what doesn't, and how to keep improving.
No training data, no fine-tuning, just automatic improvement.

Clear Benefits

  • 🧠 Self-Improving: Agents autonomously get smarter with each task
  • 📈 20-35% Better Performance: Proven improvements on complex tasks
  • 📉 Reduce Token Usage: Demonstrated 49% reduction in the browser-use example

Features

  • 🔄 No Context Collapse: Preserves valuable knowledge over time
  • ⚡ Async Learning: Agent responds instantly while learning happens in the background
  • 🚀 100+ LLM Providers: Works with OpenAI, Anthropic, Google, and more
  • 📊 Production Observability: Built-in Opik integration for enterprise monitoring
  • 🔄 Smart Deduplication: Automatically consolidates similar skills

Demos

🌊 The Seahorse Emoji Challenge

A challenge where LLMs often hallucinate that a seahorse emoji exists (it doesn't).

Seahorse Emoji ACE Demo

In this example:

  • Round 1: The agent incorrectly outputs 🐴 (horse emoji)
  • Self-Reflection: ACE reflects without any external feedback
  • Round 2: With learned skills from ACE, the agent successfully realizes there is no seahorse emoji

Try it yourself:

uv run python examples/litellm/seahorse_emoji_ace.py

🌐 Browser Automation

Online Shopping Demo: ACE vs baseline agent shopping for 5 grocery items.

Online Shopping Demo Results

ACE Performance:

  • 29.8% fewer steps (57.2 vs 81.5)
  • 49.0% token reduction (595k vs 1,166k)
  • 42.6% cost reduction (including ACE overhead)

→ Try it yourself & see all demos

💻 Claude Code Loop

Continuous autonomous coding: Claude Code runs a task, ACE learns from execution, skills get injected into the next iteration.
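
A minimal sketch of such a loop, using only the ACEClaudeCode calls shown above (the task list and skillbook filename are illustrative):

from ace import ACEClaudeCode

SKILLBOOK = "translation_skills.json"
tasks = [
    "Translate src/utils.py to TypeScript",
    "Translate src/parser.py to TypeScript",
    "Fix remaining type errors until the build passes",
]

skillbook_path = None
for task in tasks:
    # Reload skills saved by the previous iteration (skipped on the first run)
    if skillbook_path:
        agent = ACEClaudeCode(working_dir="./my_project",
                              ace_model="gpt-4o-mini",
                              skillbook_path=skillbook_path)
    else:
        agent = ACEClaudeCode(working_dir="./my_project",
                              ace_model="gpt-4o-mini")
    result = agent.run(task=task)
    agent.save_skillbook(SKILLBOOK)
    skillbook_path = SKILLBOOK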

Python → TypeScript Translation:

  • ⏱️ Duration: ~4 hours
  • 📝 Commits: 119
  • 📏 Lines written: ~14k
  • ✅ Outcome: Zero build errors, all tests passing
  • 💰 API cost: ~$1.5 (Sonnet for learning)

→ Try it yourself


How does Agentic Context Engine (ACE) work?

Based on the ACE research framework from Stanford & SambaNova.

ACE uses three specialized roles that work together:

  1. 🎯 Agent - Creates a plan using learned skills and executes the task
  2. 🔍 Reflector - Analyzes what worked and what didn't after execution
  3. 📝 SkillManager - Updates the skillbook with new skills based on reflection

Important: The three ACE roles are different specialized prompts using the same language model, not separate models.
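
Conceptually, one learning cycle looks like the sketch below. This is not the library's internal API; it is a self-contained illustration of how the three roles share a single LLM (passed in as any prompt-in, text-out callable).

# Conceptual sketch of one ACE learning cycle (not the ace-framework API)
def ace_cycle(query: str, skills: list[str], llm) -> tuple[str, list[str]]:
    """llm is any callable mapping a prompt string to a completion string."""
    skill_context = "\n".join(f"- {s}" for s in skills)

    # 1. Agent: answer the query, conditioned on the current skillbook
    answer = llm(f"Skills learned so far:\n{skill_context}\n\nTask: {query}")

    # 2. Reflector: judge what in the attempt was helpful or harmful
    reflection = llm(f"Task: {query}\nAnswer: {answer}\n"
                     "What worked, what failed, and why?")

    # 3. SkillManager: distill the reflection into a reusable skill
    new_skill = llm(f"Reflection:\n{reflection}\n"
                    "State one concise, reusable strategy for similar tasks.")

    # Incremental update: append the new skill (the real framework also
    # revises and deduplicates existing skills rather than only appending)
    return answer, skills + [new_skill.strip()]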

ACE teaches your agent to internalize:

  • ✅ Successes → Extract patterns that work
  • ❌ Failures → Learn what to avoid
  • 🔧 Tool usage → Discover which tools work best for which tasks
  • 🎯 Edge cases → Remember rare scenarios and how to handle them

The magic happens in the Skillbook: a living document of skills that evolves with experience.
Key innovation: all learning happens in context through incremental updates. No fine-tuning, no training data, and complete transparency into what your agent learned.

---
config:
  look: neo
  theme: neutral
---
flowchart LR
    Skillbook[("`**📚 Skillbook**<br>(Evolving Context)<br><br>• Strategy Skills<br>✓ Helpful skills<br>✗ Harmful patterns<br>○ Neutral observations`")]
    Start(["**📝 Query**<br>User prompt or question"]) --> Agent["**⚙️ Agent**<br>Executes task using skillbook"]
    Agent --> Reflector
    Skillbook -. Provides Context .-> Agent
    Environment["**🌍 Task Environment**<br>Evaluates answer<br>Provides feedback"] -- Feedback +<br>Optional Ground Truth --> Reflector
    Reflector["**🔍 Reflector**<br>Analyzes the attempt and flags what was helpful or harmful"]
    Reflector --> SkillManager["**📝 SkillManager**<br>Produces improvement updates"]
    SkillManager --> UpdateOps["**🔀 Merger**<br>Merges the updates into the skillbook"]
    UpdateOps -- Incremental<br>Updates --> Skillbook
    Agent <--> Environment

Installation

# Basic
pip install ace-framework

# With extras
pip install ace-framework[browser-use]      # Browser automation
pip install ace-framework[langchain]        # LangChain
pip install ace-framework[observability]    # Opik monitoring
pip install ace-framework[all]              # All features

Configuration

ACE works with any LLM provider through LiteLLM:

# OpenAI (assuming LiteLLMClient is exported from the ace package)
from ace import LiteLLMClient

client = LiteLLMClient(model="gpt-4o")

# With fallbacks for reliability
client = LiteLLMClient(
    model="gpt-4",
    fallbacks=["claude-3-haiku", "gpt-3.5-turbo"]
)
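
Because routing goes through LiteLLM, switching providers is just a different model string plus the matching API key. The snippet below is illustrative and assumes ACELiteLLM accepts any LiteLLM model string, as the quick start suggests:

from ace import ACELiteLLM

# Requires ANTHROPIC_API_KEY in the environment (set it like OPENAI_API_KEY above);
# model names follow LiteLLM's provider/model convention
agent = ACELiteLLM(model="anthropic/claude-3-5-sonnet-20240620")
answer = agent.ask("Summarize what the skillbook stores.")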

Production Monitoring

ACE includes built-in Opik integration for tracing and cost tracking:

pip install ace-framework[observability]
export OPIK_API_KEY="your-api-key"

Automatically tracks: LLM calls, costs, skillbook evolution. View at comet.com/opik


Documentation


Contributing

We love contributions! Check out our Contributing Guide to get started.


Acknowledgment

Based on the ACE paper and inspired by Dynamic Cheatsheet.

If you use ACE in your research, please cite:

@article{zhang2025ace,
  title   = {Agentic Context Engineering},
  author  = {Zhang et al.},
  journal = {arXiv preprint arXiv:2510.04618},
  year    = {2025}
}

⭐ Star this repo if you find it useful!
Built with ❀️ by Kayba and the open-source community.
