
The Real Skill Isn't Coding Anymore. It's Describing What You Want.
- Stephen Jones
- February 10, 2026
You’ve Got the Tools. So Why Are You Still Slow?
If you’re building on AWS right now, you have access to more managed services, more abstraction layers, and more AI-assisted tooling than at any point in computing history. CDK, SAM, Amplify, Bedrock, Kiro, Claude Code. The list keeps growing.
And yet. Delivery is still painful.
I keep seeing the same pattern play out. Engineers spin up a Claude Code session, describe something vaguely, get back something half-right, iterate in circles for an hour, and end up with a codebase that works but nobody fully understands. Context drifts. Quality degrades. What started as “vibe coding” turns into vibe debugging.
The tools aren’t the bottleneck anymore. You are. Specifically, your ability to describe what you actually want.
The Problem Has a Name: Context Rot
Here’s something I hadn’t put a label on until recently. When you’re working with an AI coding assistant on a long session, quality drops the further in you get. The model fills its context window with accumulated decisions, corrections, dead ends, and half-finished ideas. By task 30, it’s forgotten the architectural decisions you made at task 5.
The community calls this context rot. And if you’ve felt that creeping sense of “this was going well an hour ago, what happened,” that’s what you experienced.
It’s not a model problem. It’s a workflow problem. You’re asking a single session to hold too much state for too long.
Enter GSD: Get Shit Done
I came across GSD recently thanks to a colleague and it clicked with something I’ve been thinking about for a while. GSD is a meta-prompting and context engineering framework that sits on top of Claude Code (and now OpenCode and Gemini CLI). It has over 12,000 stars on GitHub and a fast-growing community.
Let’s be clear about what it is and isn’t. GSD is not an infrastructure-as-code tool. It doesn’t compete with Terraform or CDK. It’s a workflow orchestration layer that changes how you work with AI coding assistants.
The core idea: instead of one long session that rots, GSD breaks your project into phases, creates structured plans, then spawns fresh AI subagent instances for each task. Every task gets a clean 200K token context window loaded with precisely the information it needs. Task 50 runs with the same clarity as task 1.
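To make that concrete, here’s a rough TypeScript sketch of the pattern. It’s not GSD’s actual code (the types and function names are mine); it just shows the shape of the idea: each task gets its plan and the decisions it depends on, and nothing else.

```typescript
// Illustrative only: not GSD's implementation. `TaskPlan` and
// `runWithFreshContext` are hypothetical names for the concept.
interface TaskPlan {
  id: string;
  objective: string;
  dependsOnDecisions: string[]; // only the prior decisions this task needs
  doneCriteria: string[];
}

// Stand-in for "spawn a subagent with a clean context window".
async function runWithFreshContext(context: string[]): Promise<void> {
  console.log(`Fresh context, ${context.length} items:`, context);
}

async function executePhase(plans: TaskPlan[]): Promise<void> {
  for (const plan of plans) {
    // No accumulated session history: task 50 starts as clean as task 1.
    const context = [plan.objective, ...plan.dependsOnDecisions, ...plan.doneCriteria];
    await runWithFreshContext(context);
  }
}
```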
That’s a fundamental shift in how you work.
How It Actually Works
The workflow is deceptively simple. Five slash commands cover the entire lifecycle:
- /gsd:new-project asks you questions until it properly understands your idea. Not a requirements template. Not a form. A genuine conversation that probes until the ambiguity is gone.
- /gsd:discuss-phase captures your preferences for each implementation phase before any code is written. Layout decisions, error handling approaches, technology choices.
- /gsd:plan-phase creates 2-3 atomic task plans with XML-structured actions, verification criteria, and done conditions.
- /gsd:execute-phase runs those plans in parallel waves with fresh subagent instances. Every task gets its own atomic git commit.
- /gsd:verify-work walks you through testable deliverables one by one. Not “does the file exist” verification. Substantive checks: is it wired up, does it handle edge cases, are there hardcoded values that shouldn’t be there.
Behind the scenes there are 11 specialised agents: planners, executors, researchers, verifiers, debuggers. The orchestrator stays thin. Heavy lifting happens in subagents with clean context.
The AWS Angle: Why This Matters for Cloud Delivery
Here’s where it gets interesting for AWS practitioners.
Think about what your last CDK project looked like. You probably started with a rough idea of the architecture, opened Claude Code, started describing resources, hit a few wrong turns, made corrections, and ended up with a stack that works but grew organically rather than by design.
Now picture the same project through GSD:
Phase 1: Foundation. You describe the networking layer you want. VPC, subnets, NAT gateways, Transit Gateway attachments. GSD’s questioning system pushes back: “How many availability zones? What’s the CIDR strategy? Are you connecting to on-premises?” By the time planning starts, the ambiguity is gone.
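For illustration, a Phase 1 task might land on something like this. It’s a minimal CDK sketch, not GSD output; the three AZs, the /16 CIDR, and one NAT gateway per AZ are hypothetical answers to those planning questions, not recommendations.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { Construct } from 'constructs';

export class NetworkStack extends Stack {
  public readonly vpc: ec2.Vpc;

  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Hypothetical answers from the planning conversation:
    // 3 AZs, a /16 CIDR, one NAT gateway per AZ.
    this.vpc = new ec2.Vpc(this, 'CoreVpc', {
      ipAddresses: ec2.IpAddresses.cidr('10.0.0.0/16'),
      maxAzs: 3,
      natGateways: 3,
      subnetConfiguration: [
        { name: 'public', subnetType: ec2.SubnetType.PUBLIC, cidrMask: 24 },
        { name: 'private', subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS, cidrMask: 24 },
      ],
    });
  }
}
```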
Phase 2: Compute. Fresh context. The executor knows the networking decisions from Phase 1 but isn’t carrying the baggage of that implementation session. Clean CDK constructs, proper typing, sensible defaults.
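A sketch of what a Phase 2 task could produce, assuming the networking stack exposes its VPC. The handler name, asset path, and sizing are placeholders.

```typescript
import { Stack, StackProps, Duration } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

interface ComputeStackProps extends StackProps {
  vpc: ec2.IVpc; // carried over from the networking phase
}

export class ComputeStack extends Stack {
  constructor(scope: Construct, id: string, props: ComputeStackProps) {
    super(scope, id, props);

    // Placeholder function: sensible defaults, explicit subnets, proper typing.
    new lambda.Function(this, 'ApiHandler', {
      runtime: lambda.Runtime.NODEJS_20_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/api'),
      vpc: props.vpc,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
      timeout: Duration.seconds(10),
      memorySize: 256,
    });
  }
}
```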
Phase 3: Security. Another clean slate. IAM roles with least privilege, security groups, WAF rules. Each gets verification: “Does this role grant broader permissions than needed?”
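For example, a Phase 3 task might scope a role to one action on one (made-up) bucket rather than reaching for s3:* on a wildcard resource:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

export class SecurityStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const readerRole = new iam.Role(this, 'ReportReaderRole', {
      assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
    });

    // Least privilege: a single read action on a single (hypothetical) bucket.
    readerRole.addToPolicy(new iam.PolicyStatement({
      actions: ['s3:GetObject'],
      resources: ['arn:aws:s3:::example-reports-bucket/*'],
    }));
  }
}
```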
Phase 4: Observability. CloudWatch dashboards, alarms, X-Ray tracing. Built against the actual resources from prior phases, not against a stale mental model.
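And a Phase 4 sketch, alarming on the actual function from the compute phase instead of a hand-typed metric name. The props interface is my own wiring, not something GSD generates:

```typescript
import { Stack, StackProps, Duration } from 'aws-cdk-lib';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

interface ObservabilityStackProps extends StackProps {
  apiHandler: lambda.IFunction; // the real resource from the compute phase
}

export class ObservabilityStack extends Stack {
  constructor(scope: Construct, id: string, props: ObservabilityStackProps) {
    super(scope, id, props);

    // Alarm built against the function's own error metric.
    new cloudwatch.Alarm(this, 'ApiErrorsAlarm', {
      metric: props.apiHandler.metricErrors({ period: Duration.minutes(5) }),
      threshold: 1,
      evaluationPeriods: 1,
    });
  }
}
```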
Each phase is researched, planned, executed, and verified independently. The quality improvements compound: every phase builds on verified output instead of a degraded context.
The Secret Nobody Talks About
Here’s the take I keep coming back to.
The engineers I’ve seen get the most out of AI-assisted development aren’t the best coders. They’re the best describers. They can articulate what they want with enough precision that the system doesn’t have to guess.
GSD’s creator, Lex Christopherson, frames project initialisation as “dream extraction, not requirements gathering.” The questioning system is designed to help you sharpen a fuzzy idea into something concrete. It asks things like:
- “Walk me through using this.”
- “‘Simple’ means what, specifically?”
- “‘Users’ means who?”
This isn’t just good prompt engineering. It’s the skill that separates people who ship from people who iterate endlessly. If you can clearly describe what you want, the tools to build it already exist. The bottleneck was always the description.
Think about how you write a CDK stack today. You don’t struggle with the TypeScript. You struggle with the decisions: which construct, what configuration, how should these resources connect, what are the security boundaries. Those are description problems, not coding problems.
What It Doesn’t Solve
I’m not going to pretend this is a silver bullet.
GSD adds overhead upfront. One user described it as “measure twice, cut once.” The planning and questioning phases take time. For a quick Lambda fix or a minor config change, it’s overkill. The /gsd:quick command exists for ad-hoc tasks, but the full workflow is designed for substantial builds.
It’s also young. Version 1.18 shipped in February 2026, barely two months after the initial release. The release cadence is fast (sometimes multiple versions in a single day), which suggests rapid evolution but also potential instability. I’ve only just started using it today, so I’ll report back on how it holds up in practice. I can already see some personal optimisations I’d like to make, such as adding MCP servers to the researcher agents.
And the model cost question is real. Running multiple Opus subagents in parallel isn’t cheap. GSD offers model profiles (quality, balanced, budget) to manage this, but a comprehensive project with 8-12 phases will burn through tokens.
The Bigger Picture
The shift happening right now in cloud development is subtle but significant. The value is moving from “can you write the code” to “can you describe the system.” That’s true whether you’re building CDK stacks, SAM templates, Terraform modules, or application code.
Tools like GSD are early signals of where this is heading. The developers who invest in getting better at describing intent, decomposing problems into clear phases, and thinking about verification criteria upfront will have an outsized advantage. Not because the AI does the work for them, but because they’ve learned to direct it with precision.
Your ability to clearly articulate what you want built, how it should behave, and what success looks like is becoming the most valuable skill in the room. The code writes itself. The description doesn’t.
I hope someone else finds this useful.
Cheers

