Trigger.dev: Async Infra Powers 90% of AI Agent Usage

Y Combinator
go watch the original →

Trigger.dev evolved from a Zapier-for-developers background-jobs framework into a reliable SDK for executing AI agents, hitting PMF with v3's hosted execution and checkpoint-resume primitives, perfectly timed for the agent era. Over 90% of usage now comes from agents.

Product Evolution: Three Pivots to AI Agent PMF

Trigger.dev's co-founders Matt and Eric describe launching v1 in early 2023 as "Zapier for developers"—an async background jobs framework for back-office automation like GitHub workflows or marketing tasks. It gained traction with a strong Hacker News launch, praised for its design-first approach: code snippets on the landing page, thoughtful SDK UX to minimize developer failure modes. Pete Koomen notes their obsession with the first 5 seconds of developer experience, prioritizing simple, code-first onboarding.

V2 shifted to embedding async tasks in customer products (e.g., document processing, video encoding), moving from internal tools to user-value paths. Adoption was decent but lacked PMF—serverless gaps existed, but their solution still required messy customer-side execution. A customer poll revealed 60% assumed Trigger handled execution, prompting v3 in June 2024: full SDK + hosted infrastructure for long-running tasks. Revenue surged 30%+ MoM immediately, confirming PMF as AI demand aligned perfectly with their two-year async bet.

"We decided to actually start executing code... that's when stuff started to really take off," Matt explains. Over 90% of usage now powers agent workflows, with cloud pricing on compute making monetization straightforward despite Apache 2.0 open-source core (Kubernetes layer proprietary for scale).

Agent Use Cases: Context Building and Human-in-the-Loop Workflows

Early adopters pulled Trigger toward AI agents. Icon.com uploads product assets, Trigger classifies/processes them into context, then generates/refines video ads via user prompts—real-time visibility, pausing for human feedback before resuming. "There's kind of two parts... context that you need and then the actual moment where you're doing something with that context," Eric details.

Magic School (edtech) runs teacher/student agents for lesson planning, homework grading—all execution in Trigger. Scrappy Bar's coding agent pulls GitHub repos, iterates code with LLMs, commits changes—full machine access (shell, Python, FFmpeg, Puppeteer) via TypeScript SDK. Developers get a customizable machine image, interfacing via AI-friendly TS framework.
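The "full machine access" pattern above can be sketched as an ordinary TypeScript task shelling out to tools on its machine image. This is an illustrative sketch, not Trigger.dev's actual API; the `run` and `describeRepo` helpers are hypothetical names.

```typescript
// Hypothetical sketch: an agent step using full machine access
// (shell tools like git) from a TypeScript task. Helper names are
// illustrative, not part of the Trigger.dev SDK.
import { execFileSync } from "node:child_process";

// Run a shell tool on the task's machine and capture its stdout.
function run(cmd: string, args: string[]): string {
  return execFileSync(cmd, args, { encoding: "utf8" }).trim();
}

// Example agent step: inspect a repo checkout before editing it.
function describeRepo(dir: string): { branch: string; files: number } {
  const branch = run("git", ["-C", dir, "rev-parse", "--abbrev-ref", "HEAD"]);
  const files = run("git", ["-C", dir, "ls-files"]).split("\n").length;
  return { branch, files };
}
```

Because the task is just TypeScript on a real machine, any CLI on the image (git, Python, FFmpeg, Puppeteer) is reachable through the same pattern.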

These workflows highlight Trigger's strength: long-running, stateful execution with pauses for feedback (human or agent), avoiding rehydration pitfalls.

"You get hundreds of adverts spat out and then you can give feedback... we will pause the compute until the feedback is received," Matt says of Icon.
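The pause-until-feedback flow Matt describes can be sketched in-process as a wait token: the task suspends on a promise, and a separate handler resolves it when the user responds. This is a minimal stand-in for illustration only; Trigger.dev actually freezes the whole machine state rather than holding a promise in memory, and these names are hypothetical.

```typescript
// Hypothetical in-process sketch of pause-for-feedback. A promise
// stands in for the checkpoint so the control flow is visible;
// Trigger.dev instead checkpoints the full machine.
type Token<T> = { wait: Promise<T>; resume: (value: T) => void };

function createToken<T>(): Token<T> {
  let resumeFn: (value: T) => void = () => {};
  const wait = new Promise<T>((resolve) => { resumeFn = resolve; });
  return { wait, resume: (v: T) => resumeFn(v) };
}

const feedback = createToken<string>();

// The agent task: generate ads, pause, then refine with feedback.
async function generateAds(prompt: string): Promise<string> {
  const draft = `ads for: ${prompt}`;        // stand-in for generation
  const note = await feedback.wait;          // "pause the compute"
  return `${draft} (revised per: ${note})`;  // resume with user input
}

// Elsewhere (e.g. an API route), user feedback resumes the task:
// feedback.resume("make it shorter");
```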

Checkpoint-Restore: Future of Stateful Compute

Trigger's core innovation—programmatic checkpoint and restore—freezes/resumes full machine state (CPU, memory, filesystem) on demand, like an OS scheduler over containers. Eric positions it as "the future of computing," democratizing low-level tech for agents needing intermittent execution.

Built for async gaps in serverless, it excels in agent loops: build context, pause, inject user input, resume seamlessly. No traditional state rehydration hacks required. TypeScript-first suits AI apps, with extensibility for any tool.

"What you really want to do is freeze the compute... the whole state of the compute... and then say whenever something else happens, please resume," Eric argues.

Open Source and LLMs as Users: Marketing to Agents

The open-source footprint aids agent adoption—LLMs like Claude reference docs and tests directly, even generating PRs for bugs. Vibe coders (non-traditional developers) once struggled but now thrive in the post-o1 era ("Opus 4.5"), helped by Trigger's LLM-friendly docs (MCP server, agent skills). Support quality rose as humans defer to AI first; growth in customers, executions, and revenue now diverges favorably from support load.

Eric: "There's kind of two users now: the human user... and the LLM is a user of Trigger." Future vision: end-to-end agent onboarding, where Claude spins accounts autonomously. Humans still approve initially, but open-source amplifies discoverability.

"Being open source is a massive advantage... we have a much bigger footprint on the internet," Matt notes.

Hiring, Shipping, and Founder Advice

Post-o1, hiring shifted: vibe coders blurred with professional developers, enabling faster iteration. The team maintains shipping quality through agent-assisted code reviews and PRs. When Pete probes about non-developer builders, the founders note the bifurcation has faded as AI leveled the skill gap.

Advice for founders: design SDKs obsessively ("hardest conversations... how to design this specific SDK function"); lead with code on landing pages; pivot boldly—two years of async work positioned them for agents through luck and persistence. Value design holistically: visual plus DX. For AI builders, prioritize checkpoint primitives over hype.

"The best developer tools actually care about design... designing the experience so that it's easy to succeed," Matt emphasizes.

Key Takeaways

  • Build async infrastructure early—it future-proofs for AI agents, as Trigger's two-year head start captured 90% agent usage.
  • Lead with code on landing pages: Show simple SDK snippets first to hook developers in 5 seconds.
  • Offload execution to your infra—developers assume/expect it; v3's hosted model drove instant PMF.
  • Embrace checkpoint-resume for stateful agents: Pause/resume full machine state beats rehydrating context.
  • Open-source for LLMs: Agents reference your repo/docs/tests, slashing support and boosting adoption.
  • Design DX ruthlessly: Make SDK functions failure-proof through iteration.
  • Pivot to product-embedded use cases over back-office; user-value paths win.
  • Hire post-o1: AI collapses vibe coder/pro dev gaps—leverage for speed.
  • Monetize compute in open-core: Infrastructure management justifies cloud pricing.
  • Future: Agent onboarding—let LLMs spin accounts autonomously.
  • #open-source
  • #saas
  • #startups
  • #ai-automation

summary by x-ai/grok-4.1-fast. probably wrong about something. check the source.