The Hidden Cost of Code Reviews: 22 Hours to First Review
Median time from PR opened to first review is 22 hours. That is not review effort — it is system latency. Here is what it costs, why it happens, and what elite teams do differently.
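The metric behind that claim is simple to compute yourself. A minimal sketch in Python, assuming you have pulled `(opened_at, first_review_at)` timestamp pairs from your Git host's API (field names here are illustrative, not any specific API's):

```python
from datetime import datetime
from statistics import median

def median_hours_to_first_review(prs):
    """prs: list of (opened_at, first_review_at) datetime pairs.

    PRs that never received a review (first_review_at is None) are
    skipped; returns None if nothing was ever reviewed.
    """
    latencies = [
        (review - opened).total_seconds() / 3600
        for opened, review in prs
        if review is not None
    ]
    return median(latencies) if latencies else None

# Hypothetical sample: three PRs with 22h, 4h, and 36h latency.
prs = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 2, 7, 0)),
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 14, 0)),
    (datetime(2024, 5, 2, 8, 0), datetime(2024, 5, 3, 20, 0)),
]
print(median_hours_to_first_review(prs))  # → 22.0
```

Using the median rather than the mean keeps one abandoned, week-old PR from distorting the picture.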
Most founders evaluate engineering vendors on story points and velocity. Neither metric predicts whether software reaches customers. Here are the four questions — no technical background required — that reveal everything.
Every team with a legacy codebase hits the same wall: you cannot add tests without refactoring, and you cannot refactor without tests. Characterisation tests are the specific tool — invented by Michael Feathers, amplified by AI — that breaks the loop.
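A characterisation test records what the code observably does today, however odd, so a refactor can proceed under its protection. A minimal sketch in Python, with a hypothetical legacy function standing in for real code:

```python
def legacy_discount(total, code):
    # Quirky behaviour we must preserve for now: unknown codes silently
    # mean 0% off, and "VIP" stacks a flat 5 off after the percentage.
    pct = {"SAVE10": 0.10, "VIP": 0.20}.get(code, 0.0)
    result = total * (1 - pct)
    if code == "VIP":
        result -= 5
    return round(result, 2)

def test_characterise_legacy_discount():
    # Each assertion pins an observed output verbatim, including the
    # silent fallback for unknown codes. No judgement about whether the
    # behaviour is *right*; if a refactor changes any value, the test
    # fails and flags the behaviour change.
    assert legacy_discount(100, "SAVE10") == 90.0
    assert legacy_discount(100, "VIP") == 75.0
    assert legacy_discount(100, "TYPO") == 100.0
```

With these pins in place you can restructure `legacy_discount` freely, then decide separately which quirks to fix as deliberate, tested behaviour changes.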
Your PM writes prose. Your engineer translates it to code. QA finds the gaps. Your AI agent just compressed this broken loop from weeks into hours — without fixing any of the misalignment. Specification-driven development is the contract layer both sides can read, write, and execute. And it's the practice that separates teams who get leverage from AI agents from teams who ship the wrong thing, faster.
Most developer tools fail adoption not because they're poorly built, but because they're designed from the wrong starting point. Here's how Jobs-to-be-Done thinking explains why engineering tools succeed or sit unused — and what it reveals about AI adoption.
Every new model release prompts the same question — is this the one that finally makes AI coding agents reliable? It's the wrong question. What keeps single-agent workflows from scaling to production is architectural, not a matter of model quality. Here's the pattern the teams shipping real code keep converging on.
AI coding agents are the most powerful tools engineering has ever had. But the teams getting unreal results aren't just prompting — they're combining AI with practices that unlock outcomes nobody else can explain. Hypothesis-driven development is the first unlock.
Harness engineering gives coding agents guides and sensors to make output reliable — the downstream controls that ensure things are built right. Shape is the upstream control that ensures the right thing is being built. Here's what the shape discipline looks like — and why AI teams that skip it ship well-built fragments of the wrong product.
Gene Kim and Steve Yegge argue that vibe coding works for production — if you have preventive, detective, and corrective controls. We agree. We built the platform that enforces those controls structurally. Here's what we've learned about why practices alone aren't enough.