
Essential Reading for Agentic Engineers - July 2025

Published: July 2025 • 5 min read

New perspectives on AI-assisted development from the field.

This July edition features four compelling pieces that showcase the evolving landscape of agentic engineering: a detailed experience report from a team successfully integrating Claude Code into production workflows, a thought-provoking analysis of how AI tools are reshaping developer career paths, a candid look at AI automation experiments that didn’t work as expected, and a technical deep-dive challenging conventional wisdom about MCP limitations.

Six Weeks of Claude Code

Read the article by Orta Therox (@orta) • July 2025 • 12 min

Orta shares his experience integrating Claude Code into daily development work at Puzzmo, providing one of the most detailed real-world productivity assessments available. His team completed 15+ significant engineering tasks in just six weeks, demonstrating measurable impact on technical debt resolution and feature development.

Claude Code has fundamentally changed how we approach technical debt and side projects, enabling rapid exploration and implementation that seemed impossible before.

Full-Breadth Developers

Read the article by Justin Searls (@searls) • July 2025 • 15 min

Justin explores how AI tools are enabling a new archetype of “full-breadth developers” who can work effectively across the entire technology stack, fundamentally challenging traditional specialization models and career development paths.

We’re moving from an era where depth was king to one where breadth plus AI might be the winning combination for creating software that truly matters.

Things That Didn’t Work

Read the article by Armin Ronacher (@mitsuhiko) • July 2025 • 18 min

Armin offers a candid retrospective on AI coding experiments that failed, with valuable lessons for developers navigating the AI-assisted development landscape. His honest analysis of what didn’t work brings essential balance to the enthusiasm around AI automation.

The key lesson is that AI is incredibly powerful for execution but still needs human guidance for strategy and quality assurance—automation should amplify human decision-making, not replace it.

MCPs are Boring (or: Why we are losing the Sparkle of LLMs)

Watch the video by Manuel Odendahl (@programwithai) • July 2025 • 32 min

Manuel presents a provocative technical argument that MCPs (Model Context Protocol servers) artificially limit LLM capabilities by forcing structured tool calls instead of leveraging the models' superior code generation abilities. His presentation challenges the entire foundation of current agentic development practices with concrete performance data and working implementations.

LLMs are absolute magic and we should think recursively—if you ask the LLM to do something, ask it to write code to do something, then ask it to write code to write code. They create words that create more words, and ultimately make things happen in the real world.
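
To make the contrast concrete, here is a minimal sketch of the two styles, not taken from Manuel's talk or any real MCP server; the tool name, schema, and generated script are hypothetical illustrations. A structured tool call executes one rigid operation per round trip, while generated code can compose several operations in a single pass.

```python
import json
import os

# Style 1: structured tool calls. The model emits one rigid JSON payload
# per operation; the host dispatches it and returns the result.
# "list_files" is a made-up tool for illustration.
tool_call = json.loads('{"tool": "list_files", "args": {"path": "."}}')

def run_tool(call: dict) -> list[str]:
    if call["tool"] == "list_files":
        return sorted(os.listdir(call["args"]["path"]))
    raise ValueError(f"unknown tool: {call['tool']}")

# Style 2: code generation. The model writes a small script that filters,
# counts, and formats in one round trip instead of three separate calls.
generated_script = """
import os
py_files = [f for f in sorted(os.listdir(".")) if f.endswith(".py")]
result = f"{len(py_files)} Python files: {', '.join(py_files)}"
"""

scope: dict = {}
exec(generated_script, scope)  # a real host would sandbox this, of course

print(run_tool(tool_call))  # one operation per structured call
print(scope["result"])      # arbitrary composition in one generated script
```

The recursive step in Manuel's quote is this same idea applied again: the generated script can itself ask a model to generate more code.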


This builds on my original Essential Reading collection with fresh insights from the field.
