GadaaLabs
AI Automation
Lesson 1

Agents vs Chains

12 min

"Agent" and "chain" are two different approaches to orchestrating LLM calls, and conflating them is one of the most common mistakes in AI system design. Chains are deterministic, predictable, and testable. Agents are flexible, powerful, and expensive to debug. Choosing the wrong one early costs weeks.

Chains: Deterministic Pipelines

A chain executes a fixed sequence of steps. The control flow is defined at development time, not by the LLM:

python
from anthropic import Anthropic

client = Anthropic()

def summarise_then_classify(document: str) -> dict:
    # Step 1: summarise the full document
    summary_resp = client.messages.create(
        model="claude-opus-4-5",
        max_tokens=300,
        messages=[{"role": "user", "content": f"Summarise in 3 sentences:\n\n{document}"}],
    )
    summary = summary_resp.content[0].text

    # Step 2: classify the summary (the output of step 1 feeds step 2)
    class_resp = client.messages.create(
        model="claude-opus-4-5",
        max_tokens=50,
        messages=[{"role": "user",
                   "content": f"Classify as LEGAL, TECHNICAL, or GENERAL:\n\n{summary}"}],
    )
    return {"summary": summary, "category": class_resp.content[0].text.strip()}

Chains are ideal when you know the required steps, the order, and the inputs at design time.

Agents: The ReAct Loop

Agents let the LLM decide which tools to call and in what order, using the ReAct (Reason + Act) pattern:

Thought: I need to find the current stock price of AAPL.
Action: get_stock_price({"ticker": "AAPL"})
Observation: {"price": 189.42, "currency": "USD"}
Thought: Now I have the price. The user also asked for the P/E ratio.
Action: get_pe_ratio({"ticker": "AAPL"})
Observation: {"pe_ratio": 28.5}
Thought: I have all the data I need.
Final Answer: AAPL is trading at $189.42 with a P/E ratio of 28.5.

The LLM drives the loop. It can call tools in any order, recover from errors, and take paths the developer did not anticipate.
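The loop above can be sketched in code. This is a minimal, self-contained simulation: the scripted turns stand in for real LLM completions, and the tool names (`get_stock_price`, `get_pe_ratio`) and their canned outputs are illustrative assumptions, not a real market-data API. The structural point is the parse-dispatch-observe cycle that the model, not the developer, drives.

```python
import json
import re

# Hypothetical tools with canned outputs, standing in for real data sources.
TOOLS = {
    "get_stock_price": lambda args: {"price": 189.42, "currency": "USD"},
    "get_pe_ratio": lambda args: {"pe_ratio": 28.5},
}

# Scripted model turns standing in for real LLM completions.
SCRIPTED_TURNS = [
    'Thought: I need the current price.\nAction: get_stock_price({"ticker": "AAPL"})',
    'Thought: Now the P/E ratio.\nAction: get_pe_ratio({"ticker": "AAPL"})',
    "Final Answer: AAPL is trading at $189.42 with a P/E ratio of 28.5.",
]

def react_loop(turns, max_steps=5):
    """Run the Reason + Act cycle: parse an Action, run the tool, record the Observation."""
    transcript = []
    for turn in turns[:max_steps]:
        transcript.append(turn)
        match = re.search(r"Action: (\w+)\((.*)\)", turn)
        if match is None:
            # No Action line means the model produced its final answer; stop.
            return turn.removeprefix("Final Answer: "), transcript
        tool_name, raw_args = match.groups()
        observation = TOOLS[tool_name](json.loads(raw_args))
        transcript.append(f"Observation: {json.dumps(observation)}")
    raise RuntimeError("max steps exceeded without a final answer")

answer, transcript = react_loop(SCRIPTED_TURNS)
```

A production loop would feed each Observation back into the next LLM call and enforce the `max_steps` cap to guard against runaway loops; both concerns disappear in a chain, where the step count is fixed.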

Decision Framework

| Criterion | Use Chain | Use Agent |
|---|---|---|
| Steps known at design time | Yes | No |
| Step order is fixed | Yes | No |
| Predictable latency required | Yes | No |
| Tool choice depends on prior results | No | Yes |
| Budget per request is tightly bounded | Yes | No (risk of runaway loops) |
| Debugging ease matters most | Yes | No |

python
# Quick heuristic
def choose_approach(steps_known: bool, order_fixed: bool, budget_sensitive: bool) -> str:
    if steps_known and order_fixed:
        return "chain"
    if budget_sensitive:
        return "chain with fallback"
    return "agent"

Chain-of-Thought vs Tool Invocation

These are often confused. CoT is the model thinking in text before answering — no external call is made. Tool invocation is the model triggering an external function:

Chain-of-thought:
  "Let me think step by step. 15 × 24 = (15 × 20) + (15 × 4) = 300 + 60 = 360."
  → No API call. Model uses its own computation.

Tool invocation:
  Action: calculator({"expression": "15 * 24"})
  Observation: 360
  → External function runs and returns a result.

For simple arithmetic, CoT suffices. For fetching live data, writing files, or calling APIs, you need tool invocation.
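As a concrete sketch of the tool side of this distinction, here is a minimal calculator of the kind an agent would invoke instead of reasoning through arithmetic in text. It evaluates the expression with Python's `ast` module rather than `eval`, so only plain arithmetic is accepted; the function name and its single-string signature are illustrative choices, not a standard interface.

```python
import ast
import operator

# Map AST operator nodes to their arithmetic functions.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calculator(expression: str) -> float:
    """Safely evaluate a basic arithmetic expression, rejecting anything else."""
    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

result = calculator("15 * 24")  # the same 360 the model reached via CoT
```

The key contrast: the CoT version can silently get the arithmetic wrong, while the tool version either returns a correct result or raises an error the loop can observe and recover from.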

When Chains Beat Agents

  • Customer support tier-1: always run sentiment → category → response template. No tool selection needed.
  • Document processing: always OCR → extract fields → validate → store. Fixed pipeline.
  • High-volume batch jobs: 10,000 documents/hour. Agent overhead (multiple LLM calls per item) makes this prohibitively expensive.
  • Regulated industries: every step must be auditable and reproducible. Agent non-determinism creates compliance risk.
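The batch-job bullet above can be made concrete with a back-of-envelope cost model. The token counts, per-million-token price, and the assumption that an agent loop averages six LLM calls per item are all illustrative figures, not real pricing.

```python
def batch_cost(items: int, calls_per_item: int,
               tokens_per_call: int = 1_000,
               usd_per_million_tokens: float = 15.0) -> float:
    """Rough spend for a batch: items x calls x tokens, at a flat token price."""
    total_tokens = items * calls_per_item * tokens_per_call
    return total_tokens / 1_000_000 * usd_per_million_tokens

# Fixed 2-step chain vs an agent loop averaging 6 calls per document.
chain_cost = batch_cost(10_000, calls_per_item=2)  # 300.0 USD
agent_cost = batch_cost(10_000, calls_per_item=6)  # 900.0 USD
```

Even with these rough assumptions the agent's per-item call overhead tripled the bill, and unlike the chain, its call count has no design-time upper bound.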

Summary

  • Chains are deterministic pipelines where the developer defines every step at design time.
  • Agents use the ReAct loop to let the LLM choose tools dynamically based on intermediate results.
  • Prefer chains when steps are known, order is fixed, and budget is tight.
  • Use agents only when the optimal path genuinely cannot be determined without seeing intermediate tool outputs.
  • Chain-of-thought is internal reasoning; tool invocation triggers external function calls — they are not interchangeable.