
Building Your Own AI Agent Stack: What I Learned From 10 Open Source Projects
- Stephen Jones
- February 16, 2026
I spent the last week falling down a rabbit hole. Not the productive kind where you emerge with a working solution and a sense of accomplishment. The kind where you save ten GitHub repos and then sit back and realise they’re all telling you the same thing.
The AI agent ecosystem is moving faster than most of us can track. And the people building the most interesting stuff aren’t waiting for enterprise vendors to package it up. They’re stitching together their own stacks from open source components, Claude Code extensions, MCP servers, and a healthy amount of duct tape.
Here’s what I found, and what I think it means for anyone working in cloud infrastructure and consulting.
The Repos That Caught My Attention
Let me walk through ten projects I bookmarked this week. Individually they’re interesting. Together they paint a picture of where AI development is heading.
1. Everything Claude Code
A complete collection of Claude Code configs from an Anthropic hackathon winner. Production-ready agents, skills, hooks, commands, and MCP configurations. Not a toy demo. Someone’s actual daily workflow, open sourced.
This is the one that got me thinking. We’re past the “hello world” phase with AI coding assistants. People are building serious infrastructure around these tools: custom agent personalities, hook systems that fire on specific events, and MCP servers that extend what the AI can reach.
2. TinyClaw
A minimal multi-channel AI assistant that wraps Claude Code with WhatsApp integration and a queue-based architecture. The interesting part isn’t the messaging layer. It’s the architecture: a lightweight wrapper that routes conversations from multiple channels into a single AI backend.
If you’ve built microservices, this pattern should feel familiar. The AI model is becoming just another service in your stack.
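To make the pattern concrete, here’s roughly what that routing layer looks like in my head. This is a minimal sketch, not TinyClaw’s actual code; the channel names, the queue wiring, and the stub functions are all placeholders.

```python
import queue
import threading

# One shared queue: every channel (WhatsApp, Slack, CLI, ...) pushes
# messages here instead of talking to the model directly.
inbox: "queue.Queue[dict]" = queue.Queue()

def enqueue(channel: str, user: str, text: str) -> None:
    """Called by each channel adapter when a message arrives."""
    inbox.put({"channel": channel, "user": user, "text": text})

def call_ai_backend(prompt: str) -> str:
    ...  # wrap the CLI tool or an API call here

def send_reply(channel: str, user: str, text: str) -> None:
    ...  # each channel gets its own delivery mechanism

def worker() -> None:
    """Single consumer: routes every conversation into one AI backend."""
    while True:
        msg = inbox.get()
        reply = call_ai_backend(msg["text"])
        send_reply(msg["channel"], msg["user"], reply)
        inbox.task_done()

threading.Thread(target=worker, daemon=True).start()
```

The point is the shape: adapters in, one queue, one backend, adapters out.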
3. Clawe
Multi-agent coordination where each agent has its own identity, workspace, and scheduled heartbeats. Think of it as container orchestration but for AI agents. Each one runs independently, checks in on a schedule, and coordinates with the others.
This is where things start getting genuinely interesting for enterprise use. Not one agent doing everything. A team of specialised agents, each with a clear responsibility.
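Here’s the shape of it as a toy sketch. The agent names, workspaces, and intervals are invented, and Clawe’s real coordination does far more than print a line, but the identity-plus-heartbeat pattern is the part worth internalising.

```python
import time
from dataclasses import dataclass

@dataclass
class Agent:
    name: str            # identity
    workspace: str       # where this agent reads and writes its files
    interval: int        # heartbeat period in seconds
    last_beat: float = 0.0

    def heartbeat(self) -> None:
        """Check in: pick up queued work, report status, hand off to peers."""
        print(f"[{self.name}] checking {self.workspace}")
        self.last_beat = time.time()

agents = [
    Agent("researcher", "/work/research", interval=300),
    Agent("reviewer", "/work/reviews", interval=600),
]

# A toy scheduler: fire each agent's heartbeat when its interval elapses.
while True:
    now = time.time()
    for agent in agents:
        if now - agent.last_beat >= agent.interval:
            agent.heartbeat()
    time.sleep(1)
```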
4. Claude Code API Gateway
An OpenAI-compatible API gateway for Claude Code CLI. Takes the local CLI tool and exposes it as a standard API endpoint. If you’ve ever needed to integrate Claude Code into a larger system, this is the bridge.
The “OpenAI-compatible” part matters. It means you can plug Claude Code into any tool or workflow that already speaks the OpenAI API format. No custom integration needed.
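In practice that means pointing a stock OpenAI client at a local URL. A minimal sketch, assuming the gateway is running on localhost; the port and model name are placeholders, not something I’ve pulled from the project’s docs.

```python
from openai import OpenAI

# Point any OpenAI-compatible client at the local gateway
# instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="claude-code",  # whatever model name the gateway exposes
    messages=[{"role": "user", "content": "Summarise the open TODOs in this repo."}],
)
print(resp.choices[0].message.content)
```

Any existing tooling that already speaks this API format works unchanged; only the base URL moves.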
5. Google Researcher MCP Server
An MCP server that gives your AI agent access to Google’s research capabilities. MCP (Model Context Protocol) is becoming the standard way to extend what AI agents can access. This is one example, but the pattern is what counts: need a new capability? Build an MCP server.
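The shape of an MCP server is smaller than it sounds. Here’s a minimal sketch using the FastMCP helper from the official Python MCP SDK; the `search` tool below is a stub I made up, not the actual Google Researcher implementation.

```python
from mcp.server.fastmcp import FastMCP

# A minimal MCP server exposing a single tool to whatever agent attaches it.
mcp = FastMCP("research")

@mcp.tool()
def search(query: str) -> str:
    """Run a web search and return a short summary of the results."""
    return f"(stub) results for: {query}"

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so an agent can connect to it
```

Write the tool, register the server with your agent, and the capability just shows up.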
6. TLDR
Code analysis specifically designed for AI agents. Takes your 100K-line codebase and extracts just the structure, cutting token usage by 95% while preserving everything the AI needs to understand and edit code correctly.
This solves a real problem. Context windows are large but not infinite. The bottleneck isn’t what the model can process. It’s what you can efficiently feed it.
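TLDR’s implementation is its own, but the core idea fits in a few lines: parse the code, keep only the declarations, and feed the model the skeleton instead of the bodies. A sketch using Python’s built-in `ast` module, with a placeholder filename:

```python
import ast

def outline(source: str) -> list[str]:
    """Extract just the structure: class names and function signatures."""
    lines = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}")
    return lines

with open("big_module.py") as f:  # placeholder path
    print("\n".join(outline(f.read())))
```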
7. Dictate
Push-to-talk voice dictation for macOS that runs entirely on-device using Apple Silicon MLX models. No cloud, no API keys, no subscriptions. Hold a key, speak, release. Clean text appears wherever your cursor is.
Nothing to do with agents directly, but everything to do with the broader trend: AI capabilities moving to the edge, running locally, with zero cloud dependency. I’ve personally been running WhisperFlow for this, and I’m sold on it, though it takes a little time to get used to. Voice as an input method has a natural advantage in the agentic era. Describing what you want in plain speech maps directly to how these tools expect instructions.
8. EdgeQuake
A Rust implementation of the LightRAG algorithm that builds knowledge graphs for multi-hop reasoning over documents. Traditional RAG retrieves chunks of text based on vector similarity. EdgeQuake decomposes documents into entities and relationships, then traverses the graph at query time.
That’s a fundamental shift. Instead of “find me the paragraph most similar to this question,” it’s “trace the relationships between these concepts across my entire document set.” If you’re running RAG workloads on AWS, this is worth watching.
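A hand-built toy makes the difference obvious. This isn’t EdgeQuake’s algorithm, and the graph below is invented, but it shows why following relationships answers questions that chunk similarity tends to miss:

```python
# Toy knowledge graph: entities as nodes, relationships as labelled edges.
# EdgeQuake builds this automatically from documents; here it is hand-written.
graph = {
    "Service A": [("depends on", "Queue X")],
    "Queue X":   [("hosted in", "Region EU")],
    "Region EU": [("governed by", "GDPR")],
}

def multi_hop(start: str, depth: int = 3) -> list[str]:
    """Follow relationships outward from an entity, collecting facts along the way."""
    facts, frontier = [], [start]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for relation, target in graph.get(node, []):
                facts.append(f"{node} {relation} {target}")
                next_frontier.append(target)
        frontier = next_frontier
    return facts

# "Why does Service A care about GDPR?" takes three hops. A similarity search
# over isolated text chunks has no reason to connect those chunks at all.
print(multi_hop("Service A"))
```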
9. Polymarket Autopilot
Automated paper trading on prediction markets with custom strategies. An AI agent that monitors Polymarket for arbitrage opportunities and executes trades autonomously.
I’m not suggesting you go automate financial trading. The interesting bit is the pattern: an autonomous agent with a defined strategy, running continuously, making decisions without human intervention. That’s the direction all of this is heading.
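Strip away the trading and the skeleton is just a loop. Everything below is a hypothetical sketch, paper-trading only, with the data source and strategy left as stubs:

```python
import time

def fetch_state() -> dict:
    ...  # placeholder: poll whatever data source the strategy watches

def decide(state: dict):
    """The strategy: return an action, or None to sit still."""
    ...

def execute(action) -> None:
    ...  # paper-trade only: log the decision rather than placing real orders

# The pattern in miniature: observe, decide, act, repeat, with no human in the loop.
while True:
    state = fetch_state()
    action = decide(state)
    if action:
        execute(action)
    time.sleep(60)
```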
10. Daemon
Described as “a personal API for human connection.” The concept is building a digital interface that represents you, your knowledge, your communication patterns, and making it accessible through an API.
Sounds abstract until you realise this is what we’re all building towards. Your personal AI infrastructure isn’t just tools you use. It’s an extension of how you think and work.
The Pattern Nobody’s Talking About
Look at those ten repos again. They aren’t ten random projects. They’re components of the same stack:
- A core AI engine (Claude Code, extended via configs and hooks)
- Tool access layer (MCP servers for research, data, external services)
- Agent coordination (multi-agent orchestration with identities and schedules)
- Intelligence layer (knowledge graphs, code analysis, context optimisation)
- Interface layer (voice input, messaging channels, API gateways)
- Autonomy layer (agents that run independently and make decisions)
This is a full stack. Not a web stack. Not a data stack. An AI agent stack.
And here’s the part that should make you pay attention: nobody is buying this stack from a vendor. People are assembling it themselves from open source components. The same way we assembled LAMP stacks in 2005 and container orchestration in 2015.
What This Means If You Work in Cloud
I spend my days thinking about AWS infrastructure. Here’s where this connects.
The compute layer is shifting. These agent stacks need to run somewhere. Some of it runs locally (Dictate on Apple Silicon). Some of it needs cloud compute (multi-agent coordination, knowledge graph processing). The infrastructure question isn’t “do we need cloud?” anymore. It’s “what runs where, and how do we orchestrate across both?”
MCP servers are the new microservices. Every capability your agent needs becomes an MCP server: a small, focused service with a clear interface. If you’ve built and deployed microservices on AWS, you already understand the patterns. The implementation is different but the architecture is familiar.
RAG on AWS needs rethinking. If EdgeQuake’s knowledge graph approach becomes mainstream (and I think it will), the standard “chunk, embed, store in OpenSearch” RAG pattern isn’t enough. You’ll need graph databases (Neptune), more sophisticated indexing, and different query patterns. The infrastructure bill changes shape.
Agent orchestration is the next Kubernetes. Multi-agent coordination systems like Clawe are primitive today. Give it two years. Enterprise teams will need to deploy, monitor, scale, and secure fleets of AI agents the same way they manage container workloads today. AWS will build managed services for this. The question is whether you understand the patterns before the managed services arrive.
The Honest Assessment
Let me be clear about something. Most of these repos are early stage. Some have rough edges. A few might not survive six months. That’s not the point.
The point is that the people building these tools are solving real problems. Problems that enterprise AI deployments will face at scale. Context window management, multi-agent coordination, capability extension, edge inference.
If you wait for AWS or Azure or Google to package these patterns into managed services, you’ll understand how to click buttons in a console. If you experiment with the raw components now, you’ll understand why those services work the way they do.
That’s the difference between using a tool and understanding a tool.
The Bigger Picture
We’re in the “build your own” phase of AI agent infrastructure. It’s messy, it’s fast-moving, and half the tools that exist today will be replaced by something better next quarter.
But the patterns are solidifying. Core AI engine, tool access, agent coordination, intelligence layer, interface layer, autonomy layer. That architecture isn’t going away even if every individual component gets swapped out.
If you work in cloud infrastructure, now is the time to understand these patterns. Not because you need to deploy a personal AI stack in production. Because the enterprise version of this stack is coming, and the people who understand the architecture will be the ones designing it.
I hope someone else finds this useful.
Cheers.


