Run Claude Code in Codex for Dual AI Coding Power
Chase AI · go watch the original →
the gist
Reject choosing between Claude Code and Codex—run Claude's terminal inside Codex's desktop app to combine their strengths for better planning, execution, and review.
Rejecting the False Choice Between Coding Agents
Claude Code has dominated AI coding discussions thanks to its early lead, but Codex, powered by GPT-5.5 or GPT-5.5 Pro, has narrowed the gap significantly. The speaker argues against AI tribalism: "you are hamstringing yourself if you are trying to choose between claude code or codex." Instead, use both; their feature sets are a "99% overlap" Venn diagram, so mastering the second tool is straightforward. Loyalty to either company is unnecessary: "you owe these companies zero loyalty." OpenAI's plans ($20-$200/month) offer more generous limits than Anthropic's Max, even after Anthropic doubled them, with GPT-5.5 Pro beating Mythos in benchmarks and using fewer tokens despite similar per-million-token pricing.
Codex's 258K context window (vs. Claude's 1M) forces better context management via new chats, avoiding context-bloat issues. Pricing favors OpenAI for volume; start on the cheap plan to test before upgrading.
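The context discipline the video describes can be sketched as a tiny helper: decide to open a new chat before the 258K-token window forces auto-compaction. This is an illustrative sketch only, not anything the apps expose; the chars-per-token heuristic and the safety margin are assumptions.

```python
# Illustrative sketch (not a real Codex API): start a new chat before the
# ~258K-token window triggers auto-compaction. The 4-chars-per-token
# heuristic and the 80% margin are assumptions, not app behavior.
CONTEXT_WINDOW = 258_000   # tokens, per the video
SAFETY_MARGIN = 0.80       # assumed: start fresh at 80% usage

def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English/code."""
    return len(text) // 4

def should_start_new_chat(conversation: str) -> bool:
    return estimate_tokens(conversation) > CONTEXT_WINDOW * SAFETY_MARGIN

print(should_start_new_chat("x" * 1_000))    # False: tiny chat
print(should_start_new_chat("x" * 900_000))  # True: ~225K tokens over budget
```

The point is the habit, not the math: treat a fresh chat as the equivalent of Claude's /clear rather than letting compaction degrade context silently.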
Codex Desktop App: Intuitive Features for Power Users
The Codex desktop app (install from openai.com/codex) outshines the CLI for integration, featuring an in-app browser, visual 'pets' that surface task status (e.g., a streaming progress overlay), a plan-mode toggle, permission levels (bypass/auto/full access), intelligence/effort sliders, and model selection. Settings cover work modes (technical detail level for coding), follow-up behavior (Q vs. steer; Q is the default), sandboxing, dependencies (enable the Codex ones), agents.md (Codex's equivalent of Claude.md), memory (disable for non-personal use), browser and computer use (the latter Mac-only), and git/worktrees.
Plugins (one-click installs from providers like Supabase for DB ops) and skills import automatically from Claude Code/Open Code, blurring the lines between MCPs and skills. Automations mirror Claude's routines and can be created visually or via natural language. Projects organize folders and chats; multiple chats per project mimic running multiple terminals. The UI shows branch details, diffs, git actions, and file previews: convenience the terminal alone doesn't offer.
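Conceptually, an automation is just a name, a schedule, and a natural-language prompt for the agent to run. The sketch below is a hypothetical representation to make that concrete; the field names and cron-style schedule are assumptions, not the app's actual schema.

```python
# Hypothetical sketch of what a Codex automation captures (field names and
# schedule format are assumptions, not the app's real schema).
from dataclasses import dataclass

@dataclass
class Automation:
    name: str
    schedule: str  # cron-style, e.g. "0 9 * * *" = daily at 09:00
    prompt: str    # natural-language instruction the agent runs

daily_scan = Automation(
    name="morning-trend-scan",
    schedule="0 9 * * *",
    prompt="Scan the last 24h of AI news feeds and summarize new trends.",
)
print(daily_scan.name)  # morning-trend-scan
```

Whether you build it visually or describe it in natural language, this triple is all the app needs to recover.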
""the desktop app honestly has some really nice quality of life stuff" like in-app browser and pets for workflow hooks."
Seamless Integration: Claude Code in Codex Terminal
Setup takes seconds: open the Codex terminal (top-right toggle) and run claude; now both agents share the project directory. Bounce tasks between them: plan in one, copy-paste to the other for critique, and have each review the other's code. Skills invoke via slash commands (e.g., /front-end-design) or @ (e.g., @spreadsheets); natural language works too.
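The invocation syntax above (slash for skills, @ for plugins, plain text as fallback) can be sketched as a tiny dispatcher. This is illustrative only; the real parsing lives inside the apps, and the function name here is hypothetical.

```python
# Illustrative dispatcher for the invocation syntax described above
# (the real apps parse this internally; classify_prompt is hypothetical).
def classify_prompt(prompt: str) -> tuple[str, str]:
    """Return (kind, payload) for a user prompt."""
    if prompt.startswith("/"):
        return ("skill", prompt[1:].split(" ", 1)[0])    # /front-end-design ...
    if prompt.startswith("@"):
        return ("plugin", prompt[1:].split(" ", 1)[0])   # @spreadsheets ...
    return ("natural-language", prompt)                  # plain instructions

print(classify_prompt("/front-end-design make the hero bolder"))
# ('skill', 'front-end-design')
print(classify_prompt("@spreadsheets sum column B"))
# ('plugin', 'spreadsheets')
```

Because the syntax is this simple and near-identical across both tools, switching between them costs almost nothing.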
This dual setup provides a "second set of eyes," vital for non-technical users: a single AI's plan can have gaps, but cross-review catches them (e.g., Claude flags that the Codex plan lacks trend signals and competitor checks).
Demo: Building an AI Trend Planner Web App
Task: a single-page Next.js/TypeScript/SQLite app that scans the last 24h of AI news (RSS/YouTube/Twitter), synthesizes a report, generates content ideas (title/outline/hook), and schedules them on a mini kanban board. No paid APIs; generation runs locally.
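The core scan step of that spec can be sketched in a few lines: filter feed items to the last 24 hours, then turn each survivor into a content-idea stub. This is a minimal sketch under assumptions; the item field names and the idea format are invented for illustration, not taken from the demo's code.

```python
# Minimal sketch of the demo's scan step: keep feed items from the last
# 24h and draft an idea stub (title/hook) per item. Field names and the
# stub format are assumptions, not the actual app's schema.
from datetime import datetime, timedelta, timezone

def recent_items(items: list[dict], now: datetime) -> list[dict]:
    cutoff = now - timedelta(hours=24)
    return [it for it in items if it["published"] >= cutoff]

def idea_stub(item: dict) -> dict:
    return {
        "title": f"What {item['title']} means for builders",
        "hook": f"Everyone missed this in '{item['title']}'...",
    }

now = datetime(2025, 1, 2, tzinfo=timezone.utc)
items = [
    {"title": "GPT-5.5 ships", "published": now - timedelta(hours=3)},
    {"title": "Old news", "published": now - timedelta(days=2)},
]
fresh = recent_items(items, now)
print([idea_stub(it)["title"] for it in fresh])
# ['What GPT-5.5 ships means for builders']
```

Everything else in the demo (report synthesis, kanban scheduling) layers on top of this filter-then-transform shape.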
Codex (5.5 Pro, extra-high effort, plan mode) proposes a greenfield dashboard. Claude critiques the gaps (e.g., it needs ranking and saturation checks). Codex iterates, adding trend signals and competitor checks, then executes in 23m21s: key files land (the README verifies), and the dev server spins up in the in-app browser. The UI is a brutalist scan/report/ideas/kanban layout (drag-and-drop still TBD). Claude reviews the first pass: it spots Llama issues and praises the structure.
Annotate in-app for feedback (e.g., "make this prettier"). Iterate: Codex builds, Claude audits; consensus between the two gives real confidence in the result.
"Codex even says 'I agree with the critique's main diagnosis... requires trend signals ranking and competitor saturation checks.'"
Pricing and Model Nuances
GPT-5.5 (base) is solid on the $20 plan; Pro ($100+) unlocks the superior model, which is slower than Opus on heavy tool calls but snappier in plain chat. Auto-compaction at 258K is manageable via new chats.
Key Takeaways
- Install the Codex desktop app, toggle the terminal, and run claude for an instant dual-agent setup in a shared directory.
- Skills and plugins import automatically; the 99% overlap eases the transition.
- Cross-review plans/code: Plan in one, paste to other for blind spots (e.g., trends over raw ingestion).
- Use visual aids (pets, in-app browser, diffs) to track without terminal limits.
- Start $20 OpenAI plan; upgrade if hooked—better limits/value than Anthropic.
- Invoke skills: / for skills, @ for plugins, natural lang fallback.
- Non-tech users: Dual AIs simulate expert consultation on plans/executions.
- Context: 258K forces discipline; new chats = /clear equivalent.
- Pets and notifications keep handed-off tasks from being forgotten mid-run.
- Tool-agnosticism scales: Apply to any workflow stage.
""the best play isn't to sit here and try to choose which one of these two good options is better the best play is to use both." (Opening thesis on integration benefits.)"
""if you have no idea what right is supposed to look like you're kind of just like sick dude thumbs up go for it and it could be missing a whole lot." (Value of cross-AI review for novices.)"