How We Built Race Radio Control in a Week With Claude Code and Airia

Last week, Chris Porter and I entered the Airia F1 Atlassian Williams Racing virtual hackathon as team JoPoCo (Jones + Porter + Co). We had seven days to build an AI-powered fan experience for remote motorsport viewers. What we shipped was Race Radio Control: a second-screen companion that lets fans tune into four AI-powered radio feeds — Carlos Sainz, Alex Albon, team principal James Vowles, and stats host Max Folds — all grounded in live telemetry data.

We won by building fast. And the thing that let us build fast was Claude Code.

The Brief

Airia ran a virtual hackathon with $20,000 in prizes. The challenge: build AI agents that enhance the fan experience for Atlassian Williams Racing. Three tracks: trackside companion, home Grand Prix, or hybrid. We picked Track 2 — the home Grand Prix experience for remote fans.

The question we started with was simple: what if every fan watching at home had their own personal pit wall?

What We Built

Race Radio Control is a real-time AI companion that personalises the race experience based on who you are. A casual fan gets simple, exciting updates. A hardcore fan gets stint pace deltas and tyre degradation curves. Same race, completely different experience.

But the feature that became the centrepiece was the radio feeds. Fans can switch between four channels — two drivers (Albon and Sainz), a strategy feed from James Vowles, and a stats and trivia feed from Max Folds — and hear them respond in first person, in character, grounded in live race data. Ask Alex “How are the tyres?” and you get back something like:

“Yeah mate, the tyres are starting to go off a little — 14 laps on the mediums now and the rears are sliding around in the slow corners. But honestly the car’s been mega through sector one and three, we’ve gained five places! Let’s keep pushing while we’ve got the pace.”

That’s not a canned response. It’s generated from live lap-by-lap telemetry data, filtered through a persona prompt that captures each driver’s actual radio style. Carlos is measured and analytical. Alex is upbeat and uses “mate” in every other sentence. The AI doesn’t just generate text; it embodies the driver.

We ran the audio through ElevenLabs TTS with a radio bandpass filter and bookended it with authentic radio crackle effects. It genuinely sounds like team radio.

The Architecture

Here’s what the stack looked like:

Fan (Browser)
    |
    v
Astro + React Frontend (Cloudflare Pages)
    |
    |-- Onboarding --> localStorage (client-side, zero latency)
    |
    |-- Feed Selection --> /api/chat/:pipelineId --> Airia Agent
    |                                                 |-- Router --> Albon Radio LLM
    |                                                 |-- Router --> Sainz Radio LLM
    |                                                 |-- Router --> James Vowles (Strategy) LLM
    |                                                 |-- Router --> Max Folds (Trivia) LLM
    |                                                 '-- Router --> Default LLM
    |
    '-- TTS --> /api/tts/:voiceId --> ElevenLabs API --> Audio back to browser

All proxied through a single Race Feed Worker (Cloudflare Worker):
    |-- /api/chat/*    --> Airia API proxy (CORS + API key)
    |-- /api/tts/*     --> ElevenLabs TTS proxy
    |-- /mcp           --> Race Feed MCP (get_current_race_state, set_race_start_time)
    |-- /lap/*.json    --> Static race feed data (56 laps)
    '-- /api/*         --> Race control REST endpoints

OpenF1 MCP Server (separate Cloudflare Worker):
    '-- /mcp           --> 18 tools for historical race data

One Airia agent. Two Cloudflare Workers. Two custom MCP servers. Four radio feeds. All serverless on Cloudflare.

The key architectural decision was consolidating everything behind a single Race Feed Worker. The Airia chat proxy, ElevenLabs TTS proxy, Race Feed MCP server, and static lap data all live in one Cloudflare Worker. The frontend only talks to one backend URL — the Worker handles routing internally. This kept API keys server-side and eliminated CORS issues in one shot.
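To make the consolidation concrete, here is a minimal sketch of how that internal routing might look as a pure function. The paths come from the diagram above; the route names and handler wiring are assumptions for illustration, not the actual Worker code.

```typescript
// Hypothetical sketch of the Race Feed Worker's internal path routing.
// Paths match the architecture diagram; route names are illustrative.
type Route =
  | "airia-proxy"    // /api/chat/* --> Airia API
  | "tts-proxy"      // /api/tts/*  --> ElevenLabs
  | "mcp"            // /mcp        --> Race Feed MCP
  | "lap-data"       // /lap/*.json --> static race feed
  | "race-control"   // /api/*      --> race control REST endpoints
  | "not-found";

function routeRequest(pathname: string): Route {
  // More specific /api/ prefixes must be checked before the /api/* catch-all.
  if (pathname.startsWith("/api/chat/")) return "airia-proxy";
  if (pathname.startsWith("/api/tts/")) return "tts-proxy";
  if (pathname === "/mcp") return "mcp";
  if (pathname.startsWith("/lap/") && pathname.endsWith(".json")) return "lap-data";
  if (pathname.startsWith("/api/")) return "race-control";
  return "not-found";
}
```

The ordering matters: the chat and TTS proxies share the `/api/` prefix with the race control endpoints, so the catch-all has to come last.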

Airia’s agent router is LLM-driven. It reads the fan’s message and classifies it to the right specialist. Strategy questions go to James Vowles. Trivia goes to Max Folds. Anything that mentions a driver name gets routed to the corresponding driver radio persona. The router uses GPT-4o-mini for fast classification with chat history enabled so it understands context from previous messages.

Every route LLM has Airia’s Memory Store and Memory Lookup tools attached. When a fan is chatting with Carlos about tyre degradation and then switches to Alex, Alex already knows what was discussed. The memory persists across feed switches so every persona has context from previous conversations.

How Claude Code Made This Possible

Here’s where I want to be specific, because “AI helped us build it” is vague to the point of being useless. Let me tell you exactly what Claude Code did.

MCP Servers in Hours, Not Days

We needed two custom MCP servers:

  1. Race Feed MCP — embedded as a Durable Object inside the Race Feed Worker. Two tools: get_current_race_state (auto-calculates the current lap from elapsed time and returns full positions, tyres, gaps) and set_race_start_time (lets us control when the simulated race begins). It reads from static lap JSON files and a KV namespace for the start time override.
  2. OpenF1 MCP — a separate Cloudflare Worker exposing 18 tools across five categories (session info, race actions, telemetry, standings, context) for historical race data.

Claude Code built the Race Feed MCP from a natural language description of what we needed. I described the data model — lap-by-lap JSON with driver positions, tyre compounds, degradation levels, sector times — and Claude Code generated the Worker, the KV binding, the Durable Object MCP class, and the tool definitions. The whole thing was functional in under two hours.

The OpenF1 MCP was similar. I pointed Claude Code at the OpenF1 API documentation and asked it to wrap the relevant endpoints as MCP tools. Eighteen tools across five categories. It handled the parameter mapping, error handling, and response formatting. I reviewed the output, tweaked a few edge cases, and deployed.

Without Claude Code, each of these MCP servers would have been a full day of work. With it, both were done in an afternoon.

Frontend in a Day

The frontend was an Astro site with React components, styled to match the Williams Racing dark theme. Claude Code scaffolded the project, built the onboarding flow (driver selection, knowledge level, interest picker), the chat interface with streaming responses, and the Driver Radio mode with its audio pipeline.

The audio pipeline alone was interesting. Claude Code wired up the ElevenLabs TTS call through a Cloudflare Worker proxy (to keep API keys server-side), added the radio crackle sound effects, and implemented a bandpass filter on the audio output. The result sounds like you’re listening to actual team radio.

One Worker to Rule Them All

We consolidated the Airia chat proxy, ElevenLabs TTS proxy, Race Feed MCP, and static lap data into a single Cloudflare Worker: CORS headers, API key management, request routing, the kind of plumbing that’s tedious but essential. Claude Code generated this Worker with the right route matching, error handling, and retry logic (including a single retry on 401s for transient ElevenLabs auth failures). One wrangler deploy and the entire backend was live.
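The single-retry-on-401 behaviour is simple enough to sketch in isolation. The function name and the `fetchFn` indirection are assumptions for illustration; in the real Worker this would wrap the upstream `fetch` call to ElevenLabs.

```typescript
// Hypothetical sketch of the retry-once-on-401 policy described above.
// `fetchFn` stands in for the Worker's upstream fetch to ElevenLabs.
async function fetchWithAuthRetry(
  fetchFn: () => Promise<{ status: number }>,
): Promise<{ status: number }> {
  const first = await fetchFn();
  if (first.status !== 401) return first;
  // Exactly one retry for transient upstream auth failures, then give up
  // and let the original error response propagate to the caller.
  return fetchFn();
}
```

A single retry is a deliberately conservative policy: it papers over a transient auth hiccup without masking a genuinely revoked key behind a retry loop.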

The Compound Effect

The real power wasn’t any single generation. It was the compound effect. Claude Code understood the context of what we were building across the entire session. When I asked it to add the Driver Radio feature, it already knew the architecture, the data model, the Airia agent structure, and the Cloudflare deployment pattern. It could make changes that were consistent across the stack.

This is the difference between using AI as a code completion tool and using it as a development partner. Claude Code wasn’t autocompleting lines. It was reasoning about the architecture and making coherent changes across multiple files and services.

The Data Problem

The 2026 racing season started after our deadline. We couldn’t use live race data. So we built a simulation layer: historical race data from the 2025 season reframed as a 2026 scenario and served as a deterministic lap-by-lap feed.

This was actually an advantage for the demo video. Because the feed was deterministic, we knew exactly which dramatic moments would appear at which timestamps. We could script the video around Albon’s charge from P10 to P1, Sainz’s pit stop decision, the late-race tyre drama. We set the race start time 12 minutes before filming to land on the most exciting moments.

What Airia Brought to the Table

Airia is a visual agent builder with 100+ LLM models, MCP integrations, memory systems, and an LLM-driven router. For this project, the platform did three things particularly well:

The Agent Router classified fan messages to the right specialist without us writing a single line of routing logic. We just described each route in natural language (“Route for live race updates, driver positions, incidents, and what is happening right now”) and the router figured out where to send each message.

Memory Objects as Tools gave every route LLM the ability to load and store fan context. The LLM decides when to check memory (on the first message of each conversation) and when to store (after onboarding). This eliminated the need for a separate user management backend.

MCP Integration let us register our custom MCP servers directly in the Airia canvas. Every route LLM calls get_current_race_state as its first action on every message, so responses are always grounded in what’s happening right now.

What I’d Do Differently

If I were doing this again, I’d spend more time on evaluation. We tested the routing accuracy manually by feeding sample queries and checking they hit the right specialist. That worked, but automated evals would have caught edge cases faster.

I’d also invest more in the onboarding personalisation. We had three knowledge levels (casual, intermediate, hardcore) but the differentiation in responses wasn’t always dramatic enough. The system prompts could have been more opinionated about how much jargon to use at each level.

And honestly, I’d start with the Driver Radio feature from day one. We built it on day three as an addition to the Second Screen Strategist concept, and it immediately became the centrepiece. If we’d led with it, the overall architecture would have been cleaner. Thanks to Chris for that idea!

The Takeaway

Seven days. Two people. One Airia agent with four radio feeds, two MCP servers, an Astro frontend, two Cloudflare Workers, ElevenLabs TTS integration, and a five-minute demo video.

The tools exist now to build things at a pace that would have been absurd even a year ago. Claude Code isn’t replacing developers. It’s removing the friction between having an idea and having a working prototype. The bottleneck isn’t writing code anymore. It’s knowing what to build.

If you haven’t tried building with Claude Code and MCP servers yet, you’re leaving speed on the table. This hackathon was a proof of concept for how fast a small team can move when the tooling gets out of the way.


Team JoPoCo: Steve Jones and Chris Porter. Built for the Airia F1 Atlassian Williams Racing Virtual Hackathon, February 2026.
