
New perspectives on AI-assisted development from the field.
This July edition features four compelling articles that showcase the evolving landscape of agentic engineering: a detailed experience report from a team successfully integrating Claude Code into production workflows, a thought-provoking analysis of how AI tools are reshaping developer career paths, a candid look at AI automation experiments that didn’t work as expected, and a technical deep-dive challenging conventional wisdom about MCP limitations.
Six Weeks of Claude Code
Read the article by Orta Therox (@orta) • July 2025 • 12 min
Orta shares his experience integrating Claude Code into daily development work at Puzzmo, providing one of the most detailed real-world productivity assessments available. His team completed 15+ significant engineering tasks in just six weeks, with clear impact on technical debt resolution and feature development.
- Workflow innovations: Introduced a “Write First, Decide Later” approach for rapid prototyping, along with a parallel development strategy using multiple git clones opened in different VS Code profiles (see the sketch after this list)
- Quantitative insights: While commit/PR metrics didn’t change dramatically, perceived productivity increased significantly; recreating an Adium theme, for example, took ~2 hours instead of the much longer time it would normally require
- Practical applications: Claude Code excelled at React Native to React conversions, system migrations, infrastructure updates, and exploring experimental features across diverse technical domains
- Team perspective: Treated Claude as a “pair programming buddy with infinite time and patience,” running it with minimal permission prompts for maximum flexibility
- Philosophy: Compared AI coding to the “introduction of photography” moment for programming: a fundamental shift that requires new approaches but does not replace core engineering skills
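A minimal sketch of that multi-clone setup, assuming Node.js with the git and VS Code (code) CLIs on PATH; the repository URL, directory layout, and profile names below are hypothetical, and the --profile flag requires a reasonably recent VS Code build:

```typescript
// parallel-clones.ts: a sketch of one-clone-per-agent, not Orta's actual tooling.
import { execSync } from "node:child_process";

const repo = "git@github.com:your-org/your-app.git"; // hypothetical URL
const sessions = ["claude-a", "claude-b", "claude-c"]; // hypothetical profile names

for (const name of sessions) {
  const dir = `../your-app-${name}`;
  // A full clone per session gives each Claude Code run an isolated
  // working tree, index, and branch state.
  execSync(`git clone ${repo} ${dir}`, { stdio: "inherit" });
  // Opening each clone in its own VS Code profile keeps settings,
  // extensions, and terminals separate across parallel sessions.
  execSync(`code --profile ${name} ${dir}`, { stdio: "inherit" });
}
```

The point of full clones (rather than branches in one checkout) is that each agent can build, test, and commit without stepping on the others.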
Claude Code has fundamentally changed how we approach technical debt and side projects, enabling rapid exploration and implementation that seemed impossible before.
Full-Breadth Developers
Read the article by Justin Searls (@searls) • July 2025 • 15 min
Justin explores how AI tools are enabling a new archetype of “full-breadth developers” who can work effectively across the entire technology stack, fundamentally challenging traditional specialization models and career development paths.
- Paradigm shift: AI enables developers to work competently across multiple domains without years of specialization in each, with Justin completing “two months worth of work on Posse Party” in just two days using Claude Code
- Career evolution: Traditional role segregation between engineering and product is becoming obsolete—successful developers now need to be results-oriented, experiment rapidly, and identify opportunities others miss
- Cognitive transformation: AI handles syntax, configuration, and boilerplate complexity, freeing developers to focus on higher-level design and product thinking
- New skill requirements: Success requires strong prompt engineering, system thinking, and the ability to verify AI-generated solutions rather than deep technical specialization
- Democratization: Complex tasks that once required specialists become accessible to generalists with AI assistance, creating opportunities for adaptable, multi-skilled developers
We’re moving from an era where depth was king to one where breadth plus AI might be the winning combination for creating software that truly matters.
Things That Didn’t Work
Read the article by Armin Ronacher (@mitsuhiko) • July 2025 • 18 min
Armin provides a candid retrospective on AI coding experiments that failed, offering valuable lessons for developers navigating the AI-assisted development landscape and an essential counterweight to the enthusiasm around AI automation.
- Automation failure modes: Documents specific failed experiments with slash commands, hooks, and print-mode automation; most pre-built commands went unused due to limitations like unstructured argument passing and the lack of file-based autocomplete (see the command-file sketch after this list)
- Over-automation dangers: Warns that elaborate automation leads to disengagement and actually degrades AI performance, with the critical insight that “LLMs are already bad enough as they are, but whenever I lean in on automation I notice that it becomes even easier to disengage”
- Context over complexity: Demonstrates that “simply talking to the machine and giving clear instructions outperforms elaborate pre-written prompts”—flexibility and adaptability matter more than sophisticated workflows
- Human engagement imperative: Emphasizes the need to maintain active mental engagement and avoid becoming passive consumers of AI-generated solutions
- Practical principles: Only automate consistently performed tasks, manually evaluate automation effectiveness, and be willing to discard ineffective workflows
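For context on the argument-passing complaint: a custom Claude Code slash command is a Markdown prompt file under .claude/commands/, and everything typed after the command name arrives through a single $ARGUMENTS placeholder. A hypothetical command file (the name and wording are illustrative, not one of Armin’s):

```markdown
<!-- .claude/commands/fix-issue.md (hypothetical example) -->
Fix the issue described below. Reproduce it with a failing test first,
then make the test pass and summarize what changed.

Issue: $ARGUMENTS
```

Invoking /fix-issue the login form drops the session on refresh dumps the whole tail into $ARGUMENTS as one untyped string: no named parameters and no file-path completion, which is exactly the limitation described above.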
The key lesson is that AI is incredibly powerful for execution but still needs human guidance for strategy and quality assurance—automation should amplify human decision-making, not replace it.
MCPs are Boring (or: Why we are losing the Sparkle of LLMs)
Watch the video by Manuel Odendahl (@programwithai) • July 2025 • 32 min
Manuel presents a provocative technical argument that MCPs artificially limit LLM capabilities by forcing structured tool calls instead of leveraging their superior code generation abilities. His presentation challenges the entire foundation of current agentic development practices with concrete performance data and working implementations.
- Tool calling inefficiency exposed: Traditional MCPs waste massive resources—20,000 tokens, $0.50, and 5 minutes for queries that code generation handles in 500 tokens with deterministic results
- Dynamic tool creation paradigm: Demonstrates how LLMs can generate exactly the tools needed in real-time rather than being constrained by predefined schemas, with live examples showing SQL query optimization and API creation
- Recursive development potential: Introduces “ask LLM to write code that writes code” methodology, enabling infinite tool creation loops where generated code creates libraries, views, and reusable functions
- Concrete implementation: Shows a JavaScript sandbox with SQLite and web server libraries that grows from a single eval tool into a full CRM application with REST endpoints and a web interface (a minimal sketch of the eval-tool core follows this list)
- Performance metrics: Quantifies the improvements: 15 tool calls reduced to 1, significant token savings, and 2–3 second execution versus traditional multi-minute workflows
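To make the single-eval-tool idea concrete, here is a minimal TypeScript sketch assuming Node.js and the better-sqlite3 package; the schema and the “model-generated” script are hypothetical, and this is not Manuel’s implementation:

```typescript
// eval-tool.ts: one sandboxed eval tool instead of many schema-bound tools.
import Database from "better-sqlite3";
import * as vm from "node:vm";

const db = new Database(":memory:"); // hypothetical CRM-style table
db.exec("CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT, email TEXT)");

// The single tool an MCP server would expose: run model-generated JS
// in a sandbox that already holds the database handle.
function runJs(code: string): unknown {
  const sandbox = vm.createContext({ db, result: undefined });
  vm.runInContext(code, sandbox, { timeout: 2_000 });
  return sandbox.result;
}

// Instead of one tool call per step (insert, insert, select), the model
// emits a single script that performs the whole task deterministically.
const generated = `
  const insert = db.prepare("INSERT INTO contacts (name, email) VALUES (?, ?)");
  insert.run("Ada", "ada@example.com");
  insert.run("Grace", "grace@example.com");
  result = db.prepare("SELECT name FROM contacts ORDER BY name").all();
`;
console.log(runJs(generated)); // [ { name: 'Ada' }, { name: 'Grace' } ]
```

Collapsing a chain of structured tool calls into one generated script is where reductions like 15 calls to 1 come from, and the same sandbox can persist generated helpers as reusable functions, which is the recursive “code that writes code” loop described above.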
LLMs are absolute magic and we should think recursively—if you ask the LLM to do something, ask it to write code to do something, then ask it to write code to write code. They create words that create more words, and ultimately make things happen in the real world.
This builds on my original Essential Reading collection with fresh insights from the field.