Agentic Coding Trap: Cognitive Debt Hits Hard

Theo - t3.gg · go watch the original →

AI coding agents deliver massive productivity but erode core skills and rack up cognitive debt, as Theo unpacks Lars Faye's viral critique.

Agentic Coding's Productivity Mirage

Theo dives into Lars Faye's article 'Agentic Coding is a Trap,' reacting live while acknowledging his own heavy reliance on AI tools. He celebrates the fun and output—CEOs cranking 37,000 lines daily—but warns of the slot-machine addiction: endless reruns hoping for a win, costing money and atrophying syntax skills. Faye argues the hyped workflow (human plans meticulously, agents implement, human reviews) creates distance from code, demanding expert oversight that's ironically undermined by the tools themselves. Theo admits his editor now shows prompt files and CSV run data, not code; he's mastering Git and SSH more than ever, but core coding feels rusty.

"AI does the coding and the human in the loop is the orchestrator"—Faye quotes the industry hype, which Theo mocks as an Inspect-Driven Development (IDD) fantasy. Real workflows involve micro/macro requirements, plan generation, and iterative agent runs across multiple instances. Tradeoffs emerge: surging system complexity to tame AI non-determinism, skill decay from juniors through vets, vendor lock-in (cloud outages halt teams), and token costs spiking with demand.

Cognitive Debt Trumps Tech Debt

Faye coins 'cognitive debt' as distinct from tech debt—losing the mental map of a codebase, struggling to link decisions back to intent. Simon Willison and Martin Fowler echo this: shipping gets faster while deeper sense-making gets foggy. Theo flips it: AI crushes tech debt (e.g., applying lint rules across codebases, or GraphQL migrations breaking 8,000 TypeScript files—once manual Google Sheets hell at Twitch). But cognitive load explodes for vibe coders like him, who shipped 5+ projects yearly pre-AI via the T3 stack, gaining rapid confidence without full mastery.

"If you weren't already experiencing a bit of it you weren't shipping fast enough"—Theo's hot take. He was the traded 'bargaining chip' engineer who unblocked teams; now AI democratizes that role, but without his foundational knowledge of the pieces. Humans crave pain avoidance: learning hurts (feeling dumb feels bad), while AI delivers the reward instantly. Skateboarding analogy: most people quit the ollie before mastery, driven off by shin-bangs and the shame of incompetence; coding's pain was mental, and now it's optional.

"I kind of miss feeling dumb. I haven't gotten to as much lately"—Theo confesses; he now intentionally tackles Rust or crypto challenges, exhausting AI's answers first so his questions to friends are sharper. The fear lurks: AI disincentivizes learning the primitives, turning devs into prompt gamblers.

Cost Nuances and Abstraction Myths

Faye flags rising token bills (fixed dev salaries vs. ballooning Cursor tabs: $100k to $500k yearly). Theo adds nuance: raw costs are up because usage is up, but the cost per unit of intelligence is plummeting. Artificial Analysis benchmarks: GPT-4o (57 pts, $2.8k), o1-medium (60 pts equivalent, $1.2k), o1-low (~46 pts, Sonnet-level, $500)—roughly 8x cheaper per point of capability. o1-preview tops out at $3.3k, but prior tiers have halved in price. Companies eat the cost short-term, while long-term efficiencies grow.
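The "intelligence-per-dollar" point can be made concrete with the figures quoted above. A quick sketch, using the video's approximate scores and eval-run costs (treat the numbers as illustrative, not authoritative benchmark data):

```python
# Illustrative only: scores (pts) and eval-run costs (USD) as quoted in the
# video's reading of Artificial Analysis data -- approximate, not authoritative.
models = {
    "gpt-4o":    {"score": 57, "cost_usd": 2800},
    "o1-medium": {"score": 60, "cost_usd": 1200},
    "o1-low":    {"score": 46, "cost_usd": 500},
}

def points_per_kusd(m: dict) -> float:
    """Benchmark points delivered per $1,000 of eval spend."""
    return m["score"] / (m["cost_usd"] / 1000)

for name, m in models.items():
    print(f"{name:10s} {points_per_kusd(m):5.1f} pts/$1k")
```

By this crude measure, the cheaper tier delivers several times more capability per dollar than the older frontier model, which is the efficiency trend Theo highlights.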

The abstractions debate: markdown-to-JS is not like JS-to-C++. Historical shifts (punch cards → assembly → C → Python) rewarded learning the adjacent layer; great JS devs grok V8. AI's ambiguity ≠ abstraction—there's no incentive to go deeper. Reddit evidence: a 9-year vet whose company mandates AI-only coding reports feeling dumber, skills eroding after a year of reliance.

"A high level of ambiguity is not a higher level of abstraction"—Faye nails why it's no stack evolution.

Agents fail when curling JS-heavy sites or hitting CAPTCHAs; Browserbase's fetch API (POST /fetch with a URL and API key) delivers HTML/JSON/Markdown via cloud browsers, handling JS, logins, and search. Theo geeks out over the tool-call integration and plugs it for unblocking agents.
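The fetch pattern described above can be sketched in a few lines. Caveat: the summary only says "POST /fetch with a URL and API key returns HTML/JSON/MD," so the base URL, header scheme, and field names below are assumptions, not Browserbase's documented API; check their docs before relying on any of it.

```python
import json
import os
import urllib.request

# Hypothetical endpoint -- the real base URL and path are assumptions.
API_BASE = "https://api.browserbase.example/fetch"

def build_fetch_request(target_url: str, fmt: str = "markdown") -> dict:
    """Assemble the POST body: the page to render plus the desired output format."""
    assert fmt in {"html", "json", "markdown"}
    return {"url": target_url, "format": fmt}

def fetch_page(target_url: str, api_key: str) -> str:
    """Render target_url in a cloud browser (JS executed server-side,
    unlike plain curl) and return the response body as text."""
    req = urllib.request.Request(
        API_BASE,
        data=json.dumps(build_fetch_request(target_url)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # auth scheme assumed
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.read().decode()

if __name__ == "__main__" and os.environ.get("BB_API_KEY"):
    print(fetch_page("https://example.com", os.environ["BB_API_KEY"]))
```

Wired in as an agent tool-call, this replaces the curl step that flops on JS-heavy pages: the cloud browser does the rendering and the agent gets back clean Markdown or HTML.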

Key Takeaways

  • Balance AI productivity with deliberate 'dumb-feeling' learning: pick unfamiliar langs/tools, exhaust docs before prompting.
  • Prioritize fundamentals over agent outputs—know building blocks to infer fixes, avoid pure slot-pulls.
  • Track cognitive debt: revisit codebases actively; AI amplifies fast-shipping pitfalls for non-vets.
  • Costs drop per intelligence level (8x in months)—optimize models by tier, not always chase frontier.
  • Cultivate 'say no' willpower: resisting AI bailout is the new superpower for expertise.
  • Use tools like Browserbase for reliable web access in agents—curl alone flops.
  • AI excels at tech debt (migrations, lints); wield for grunt, humans for architecture.
  • Vibe-coding scales via AI, but without piece-knowledge, it's fragile velocity.
  • #rant
  • #ai
  • #dev-tooling
  • #commentary

summary by x-ai/grok-4.1-fast. probably wrong about something. check the source.