
Not everything needs an agent

In 1931, Rube Goldberg published his most famous cartoon: a machine that accomplished a simple task through a spectacular chain of unnecessary steps. A self-operating napkin apparatus involving a parrot, a lit candle, a swinging pendulum, and seven other components.

May 5, 2026
By Tensor Labs

Introduction

The joke was not that the machine was badly designed. It was that it was brilliantly designed for the wrong problem. The napkin got wiped. The complexity was the real output. The AI industry spent most of 2024 in the same place.

The Problem That Didn’t Need an Agent

A client came to us with a document extraction task. Twelve fields. The same twelve fields, in the same positions, on every submission of the same form. Structured data, consistent source, zero variation.

We built an agent. Memory, tool use, a supervisor node, retry logic. It worked reliably in production, handled edge cases gracefully, and made a good demo. Eight months later we replaced it with a GPT-4o call, a Pydantic response model, and four lines of logic. Faster, cheaper, and when it failed, the failure was immediately obvious rather than buried three nodes into an execution graph.
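
For concreteness, here is roughly the shape of that replacement. This is a sketch rather than the client’s code: the field names and prompt are invented, and it assumes the OpenAI Python SDK’s structured-output parse helper with a Pydantic model.

    from openai import OpenAI
    from pydantic import BaseModel

    # The twelve fields never change, so the schema never changes.
    # Field names here are invented stand-ins.
    class Submission(BaseModel):
        applicant_name: str
        policy_number: str
        submission_date: str
        # ... nine more fixed fields

    client = OpenAI()

    def extract(document_text: str) -> Submission:
        # One structured-output call, validated against the schema on the way out.
        completion = client.beta.chat.completions.parse(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "Extract the form fields from this submission."},
                {"role": "user", "content": document_text},
            ],
            response_format=Submission,
        )
        parsed = completion.choices[0].message.parsed
        if parsed is None:
            # The failure mode is a traceback, not a state replay.
            raise ValueError("extraction failed")
        return parsed

No graph, no supervisor, no retry topology. When it breaks, the traceback points at the one call that broke.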

The agent had been solving a problem that didn’t require one. We built it anyway because agents were what we were thinking about. The napkin got wiped. The complexity was the real output.

When the Costume Fits

Agents earn their complexity when the path through the problem is genuinely unknown in advance. When the right tool call depends on what a previous tool call returned. When a human given the same task would need to make judgment calls mid-process. “Find the most recent filing for this entity, extract the relevant disclosures, cross-reference against our internal database, and flag anything that changed”: the steps depend on what you find. The path branches at runtime. That’s an agent problem.

“Extract these twelve fields from this form” is not. If you can draw a complete flowchart of the task before you start, and the flowchart has no branches that depend on runtime outputs, you need a pipeline.
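
A toy sketch of the difference, with stand-in tools and a stand-in planner rather than any real framework. The pipeline-shaped function has its steps written down before the first call; the agent-shaped one chooses its next step in a loop, based on what has come back so far. Only the shape of the control flow is the point here.

    # Pipeline-shaped: the complete flowchart exists before the first call.
    def extract_twelve_fields(doc: str) -> dict[str, str]:
        # Stand-in for the single structured-output call shown earlier.
        return {"field_1": doc[:10]}

    def process_submission(doc: str) -> dict[str, str]:
        return extract_twelve_fields(doc)  # one step, no runtime branching

    # Agent-shaped: the next step is picked at runtime from what has come back.
    TOOLS = {
        "fetch_filing": lambda: "filing",
        "extract_disclosures": lambda: "disclosures",
        "cross_reference": lambda: "changes",
    }

    def toy_planner(history: list[str]) -> str:
        # Stand-in for the model deciding what to do next.
        for step, output in [("fetch_filing", "filing"),
                             ("extract_disclosures", "disclosures"),
                             ("cross_reference", "changes")]:
            if output not in history:
                return step
        return "done"

    def investigate(entity: str) -> list[str]:
        history = [entity]
        while (step := toy_planner(history)) != "done":  # the path only exists at runtime
            history.append(TOOLS[step]())
        return history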

The question is not “can an agent do this?” Everything can be framed as an agent problem. The question is whether the path actually changes.

Why We Build Agents Anyway

Part of it is the tooling. LangGraph and its relatives make it easy. The tutorials show agents. The demos are agents. When the primary abstraction is agents, everything looks like it needs one. (There’s also the pitch: “we built you an AI agent” lands differently than “we built you a function that calls an API,” and everyone in the room knows it.) The cost shows up months later. An agent that fails produces a state replay problem. A pipeline that fails produces a traceback. That difference matters at two in the morning when something breaks in production.

Most “agentic AI” projects are deterministic pipelines wearing a costume. The costume adds complexity, not capability.

Rube Goldberg’s machines worked. The parrot released the string, the pendulum swung, the napkin wiped. The complexity wasn’t a flaw. It was what made them drawings rather than engineering. When you’re building for a client, that distinction matters.