Module 1 · Lesson 1.3

The loop: prompt → reason → tool call → response → repeat.

By the end of this lesson, you will be able to name the five steps of one agent-loop iteration in order, walk through a worked multi-turn trace, name the four ways a loop can terminate, and open a real trace and locate any turn’s reasoning, tool call, and result.

Stage 1 of 3

Read & Understand

5 concept blocks

Why the loop is the central pattern CORE

In Lesson 1.1 you met the four parts of an agent: model, loop, tools, state. In Lesson 1.2 you learned what the model is really doing inside each step. This lesson is about the loop itself — the part that turns a chatbot into something that can actually get work done.

Here is the one idea to take from this lesson: an agent is a small repeating cycle, and almost every failure you will ever see is a problem with one specific step of that cycle. Good directors watch the loop, not just the final answer. When the output is bad, they can point at the turn where it went wrong. When a new task is hard, they ask what the loop will need to succeed.

Almost every “advanced” technique in this course — multi-agent orchestration, scheduled tasks, custom skills — is some variation on this single pattern. If the loop is solid in your head, the rest of the course is elaboration.

Anatomy of one turn CORE

One iteration of the loop has five pieces, in order:

  1. Prompt goes in. The model receives the state so far (the original task, previous reasoning, previous tool results) plus its instructions.
  2. The model reasons. Next-token prediction, Lesson-1.2-style. It produces a response — which might be internal “thinking” text, a decision to call a tool, or a final answer to the user.
  3. Something decides what happens next. If the model produced a final answer, the loop ends. If it produced a tool call, the loop dispatches the call.
  4. A tool runs. The tool executes — reading a file, searching the web, sending a message, running a shell command. It returns a result.
  5. The result is added to state. The new information becomes part of the context for the next iteration. Go to step 1.

That is the whole pattern. Every agent you will meet in this course is some variant of this five-step cycle. More sophisticated systems add parallel tool calls, nested loops, or internal planning steps — but the core anatomy does not change, and the places where things can go wrong do not change either.
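The five-step cycle can be written down in a few lines of code. This is a minimal sketch under invented names — `call_model`, `run_tool`, and the message shapes are illustration only, not any real agent framework's API.

```python
# A minimal sketch of the agent loop. call_model, run_tool, and the
# dict shapes are invented for illustration, not a real framework's API.

def run_agent(task, call_model, run_tool, max_turns=10):
    state = [{"role": "user", "content": task}]           # 1. prompt/state in
    for _ in range(max_turns):
        step = call_model(state)                          # 2. model reasons
        state.append(step)                                # reasoning itself becomes state
        if step["type"] == "final_answer":                # 3. decide what happens next
            return step["content"]                        # loop ends: task complete
        result = run_tool(step["tool"], step["args"])     # 4. tool runs
        state.append({"role": "tool", "content": result}) # 5. result added to state
    return None                                           # turn budget exhausted
```

Notice that `state` grows on every turn — that growing list is exactly what the worked trace in the next section walks through in plain English.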

A worked trace: the Dust Bowl research agent CORE

Let’s walk through the example from Lesson 1.1 one turn at a time. Same task:

“Find the three most commonly cited causes of the Dust Bowl across at least five academic sources. For each, give me a one-paragraph explanation with a direct quote and citation. Save it as dust-bowl-brief.md on my desktop.”

Each turn below is written in plain English. When the agent “calls a tool,” that's the agent asking one of its abilities (search, file writer, weather, etc.) to do something specific. When the tool “returns a result,” that's the tool reporting back. You'll see traces written in code-style notation in some other places online; we won't use that style here.

Turn 1. Prompt in: task plus instructions. Model reasons: “I need academic sources. I'll start with a search.” Tool call: web search for “Dust Bowl causes peer-reviewed.” Result: 15 search hits. State grows by one search-result list.

Turn 2. Prompt in now includes Turn 1's reasoning and search results. Model reasons: “Three of these look like academic sources. Let me read the first.” Tool call: fetch the first article. Result: full article text. State grows by one article.

Turns 3–6. Same pattern. The model reads four more sources, one per turn. After Turn 6, state contains five article texts plus the original task.

Turn 7. No tool call. Model reasons: “Five sources gathered. The three most commonly cited causes across them are over-plowing of marginal land, severe drought, and federal land-use policies of the 1910s–20s.” This reasoning is itself written into state — the model is “thinking on paper” so that the next turn has its synthesis available.

Turn 8. Tool call: write a file called dust-bowl-brief.md on the desktop with the formatted brief inside. Result: file written successfully.

Turn 9. Model reasons: “Task complete.” Produces final response to the user: “Saved dust-bowl-brief.md to your desktop. Three causes covered, each with a source and direct quote.” Loop ends.

Nine turns. Seven tool calls. Each turn used the state from the previous turn. The model never knew the answer at Turn 1 — it built the answer through the loop. That building process is what separates an agent from a chatbot, and it is what you are learning to direct.

How the loop ends CORE

An agent loop terminates in exactly one of four ways:

  1. Task complete. The model decides it has satisfied the task and produces a final response with no tool call. This is the happy ending.
  2. Turn budget exhausted. Most agent systems have a maximum number of turns as a safety limit. Once hit, the loop stops and returns whatever it has — finished or not. This is why you sometimes see an agent cut off mid-work.
  3. Unrecoverable error. A tool returns an error the model does not know how to handle. The loop gives up or asks the user what to do.
  4. User stops it. You hit the stop button because the agent is going sideways and you do not want to pay for or wait through the rest.

Termination is not just a safety mechanic — it is a directing lever. A vague task like “help me with my homework” often fails to terminate well, because the model has no way to tell when it is done. A well-scoped task like “summarize this PDF in 300 words and save the summary to summary.md” terminates cleanly: when the file exists and the word count fits, the task is obviously complete.
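The four exits can be made concrete in a short sketch. Every name here — the function, the reason strings — is invented for illustration; real agent runtimes each report termination in their own way.

```python
# A toy loop showing the four termination types as four distinct exits.
# Names are invented for illustration, not any real runtime's API.

def run_loop(do_turn, max_turns, stop_requested=lambda: False):
    for turn in range(1, max_turns + 1):
        if stop_requested():
            return "user_stop"             # 4. you hit the stop button
        try:
            finished = do_turn(turn)       # one full five-step iteration
        except RuntimeError:
            return "unrecoverable_error"   # 3. an error the model can't handle
        if finished:
            return "task_complete"         # 1. final answer, no tool call
    return "turn_budget_exhausted"         # 2. safety limit hit mid-work
```

A well-scoped task is one where `do_turn` can actually return `True` — a vague task never satisfies its own done condition and always falls through to the budget exit.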

Part of being a good director is writing tasks that have an unambiguous “done” condition. If you cannot tell, from the outside, when the agent should stop — the agent cannot either.

Reading the trace when something goes wrong CORE

Every serious agent tool shows you a trace — a visible record of each turn: what the model reasoned, what tool it called, what the result was. Learning to read traces is the single highest-leverage skill in this course. When an agent produces a bad result, the diagnostic question is almost always which turn went wrong, and at which step?

Common failure patterns you will learn to spot:

  • Bad reasoning step. The model misread the task on Turn 1 and every subsequent turn inherited the mistake.
  • Silent tool failure. A tool returned an empty result or an error, and the model pressed on without noticing.
  • Ignored tool result. The tool returned the right answer, but the model hallucinated a different one and used that instead. (Rarer than it sounds, but it happens — and it is the Lesson-1.2 mechanism at work.)
  • Early termination. The task needed twelve turns; the turn budget was ten.

You will rarely need to fix the model itself. You will often need to fix the task specification, the tool selection, the context you provided up front, or the termination condition. The trace is the diagnostic instrument that tells you which. The sidebar below shows you where to find one in the tool you will use most in this course.
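The “silent tool failure” pattern above is mechanical enough that you could even check for it in code. The trace format here — a list of per-turn dicts — is invented for this sketch; every real tool has its own trace schema, so treat this as an illustration of the idea, not a linter for any product.

```python
# Toy checker for the "silent tool failure" pattern: a tool errored or
# returned nothing, yet the agent pressed on. The per-turn dict format
# is invented for this sketch.

def silent_failures(trace):
    flagged = []
    for i, turn in enumerate(trace, start=1):
        result = turn.get("tool_result")
        bad = turn.get("tool") and (not result or "error" in str(result).lower())
        if bad and i < len(trace):   # a later turn exists: the agent pressed on
            flagged.append(i)
    return flagged

example = [
    {"tool": "search",     "tool_result": "15 hits"},
    {"tool": "fetch",      "tool_result": "Error: URL not found"},
    {"tool": "write_file", "tool_result": "ok"},
    {"tool": None,         "tool_result": None},  # final-answer turn
]
print(silent_failures(example))  # → [2]
```

An error on the very last turn is deliberately not flagged — that is an unrecoverable-error termination, a different pattern from the list above.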



Stage 2 of 3

Try & Build

1 recipe + activity

Reading an agent trace in the Cowork tab RECIPE

You haven't installed the Claude desktop app yet — that's fine. Module 2 walks you through installing the app and signing in with Anthropic Pro; the Cowork tab inside the desktop app is what this recipe is about. For now, just read this section so you know what you're looking for. You can come back and run the “Try this right now” exercise after Module 2 is done. Don't stop the lesson to install anything.

About “Recipe” sections. This is the first one in the course. Recipe sections are tool-specific step-by-step instructions, dated and refreshed every quarter. The table below tells you when it was last verified. If your screen looks different from what's described here, the underlying concept hasn't changed — only the UI has — and a fresher version of the recipe lives in the Recipe Book.

Tool: Claude desktop app — Cowork tab
Version tested: April 2026 build
Last verified: 2026-04-17
Next review: 2026-07-17

When you give Cowork a multi-step task, it exposes the loop to you as it runs. Here is what to look at.

Once Cowork is installed after Module 2, come back and run a real trace — the muscle memory only sticks once you've seen one.

The main chat pane shows the final message Cowork produced for you. The loop happens in the Activity panel — often to the right of, or underneath, the chat pane. Exact placement depends on your build and window size; look for a panel labeled “Activity,” “Steps,” or “Trace.”

Inside the Activity panel, each turn of the loop appears as a row. Three things to locate on any row:

  1. The model turn. Shows the reasoning text the model produced and, if applicable, the tool call it made. Click it open to see the full reasoning, not just the headline.
  2. The tool call. Labeled with the tool's name (for example Read a file, Write a file, WebFetch for grabbing a webpage, or a connector name like Calendar). Expand it to see what Cowork sent to the tool and the result the tool returned.
  3. The final answer. The last row, with no tool call — the message you see in the chat pane.

Try this right now, if you have Cowork set up. Give it this prompt:

“Find three interesting facts about the city I live in, cite a source for each, and save them to facts.md in my working folder.”

When the task finishes, open the Activity panel and read every row in order. You should see the five-step pattern from the previous section play out four or five times — a search, a read, another read, a write, a final answer. The top-to-bottom shape of a real trace is the muscle memory you are building.

If your version of Cowork looks different from this description: the concept has not changed, only the UI. Look for any view that shows “steps,” “activity,” “turns,” or “trace.” If you cannot find one, check the Cowork documentation or the Recipe Book entry dated closest to today.

Try it CORE

Trace annotation

Goal. Build the habit of reading a trace the way a mechanic reads an engine. You will be given a transcript of an agent working on a task, and you will label every turn and every step.

The transcript (run it in your head). A user asks an agent: “Look at today’s weather in Salt Lake City and recommend an outdoor activity for this afternoon. Save the recommendation to today-plan.md.”

  1. Model turn: “I need today's weather. I'll use a weather tool.” → Tool call: weather lookup for Salt Lake City, today. → Result: 72°F, sunny, 8 mph wind.
  2. Model turn: “72°F and sunny is great for anything outdoors. A hike would fit. Let me suggest a specific trail.” → Tool call: web search for “best short hikes Salt Lake City.” → Result: list of 10 trails.
  3. Model turn: “Ensign Peak is a short 1-mile hike with city views. Good match.” → Tool call: write a file called today-plan.md with the weather plus the recommendation. → Result: file written.
  4. Model turn: “Done.” → Final answer to user: “Saved today-plan.md. Suggested a short hike at Ensign Peak given the 72°F sunny weather.”

What to do. On paper or in a document:

  1. For each numbered turn, label the five steps (prompt-in, reason, decide, tool-run, result-to-state) and identify which ones happened in that turn.
  2. Mark which turn terminates the loop, and why it terminated (which of the four termination types).
  3. Identify one thing that could have failed on each turn and what the trace would show if it had.
  4. Rewrite the original user task so that the “done” condition is even more unambiguous. One sentence only.

Deliverable. Four annotated turns, a termination type, four failure sketches, and a one-sentence task rewrite. Keep this — you will reuse the same agent in Lesson 1.4 when we put the context-window stress test on it.

Done with the hands-on?

When the recipe steps and any activity above are complete, mark this stage to unlock the assessment, reflection, and project checkpoint.

Stage 3 of 3

Check & Reflect

key concepts, quiz, reflection, checkpoint, instructor note

Key concepts CORE

  • An agent-loop turn has five steps: prompt in → model reasons → decide next → tool runs → result added to state.
  • Every agent you will meet in this course is a variation on that five-step cycle.
  • A loop terminates in one of four ways: task complete, turn budget exhausted, unrecoverable error, or user stop. Writing tasks with an unambiguous “done” condition is a directing decision.
  • A trace is the visible record of the loop — your primary diagnostic instrument.
  • You will rarely fix the model. You will usually fix the task spec, the tool selection, the up-front context, or the termination condition.

Quick check CORE

Four questions. Pick the best answer, then reveal the explanation — the why matters more than the letter.

Q1. In one iteration of an agent loop, which of the following correctly orders the five steps?
  • A Tool runs → model reasons → prompt in → result to state → decide next.
  • B Prompt in → model reasons → decide next → tool runs → result added to state.
  • C Model reasons → prompt in → tool runs → decide next → result to state.
  • D Prompt in → tool runs → model reasons → result to state → decide next.
Show explanation

Answer: B. The ordering matters because it tells you where to look when something breaks. The model reasons before it decides what to do; the tool runs after the decision; the result goes into state so the next iteration can see it. Memorizing this sequence pays off every time you read a trace.

Q2. A student gives an agent a vague task — “help me study for my chemistry test” — and the agent runs for its full 20-turn budget without ever producing a final answer. Which of the four termination types is this?
  • A Task complete.
  • B Turn budget exhausted.
  • C Unrecoverable error.
  • D User stop.
Show explanation

Answer: B. The loop ran out of turns because the task had no unambiguous “done” condition. The model kept finding one more thing to do — more review questions, more topic summaries — because nothing in the task told it when to stop. This is why a good task specification names its own done condition (e.g., “make a 10-question practice quiz and save it to practice.md”). A requires a final answer with no tool call. C requires a tool error. D would have shown an interruption in the trace.

Q3. You look at an agent trace after a failed task. On Turn 3, the tool call returned an error (“URL not found”), but on Turn 4 the model wrote a confident summary as if the page had been read successfully. What is the most accurate description of what went wrong?
  • A The loop terminated too early.
  • B The model ignored the tool result and hallucinated from the pattern “I read a page, here is the summary.”
  • C The tool is broken and needs to be replaced.
  • D The turn budget was too small.
Show explanation

Answer: B. This is an “ignored tool result” failure — the Lesson-1.2 mechanism at work inside an agent loop. The model kept producing confident output because its pattern for “summarize a retrieved page” is dense in training data, regardless of whether the page actually arrived. The fix is usually in the task specification or in how errors are surfaced to the model, not in the tool itself. Trace reading is what lets you tell the difference.

Q4. A friend complains that an agent “isn’t doing what I want” on a multi-step task. Given this lesson, what is the first thing you should ask to look at?
  • A The final answer the agent produced.
  • B The agent’s trace — which turn went wrong, and at which step.
  • C A screenshot of the model’s settings panel.
  • D Your friend’s original prompt.
Show explanation

Answer: B. The final answer tells you something is wrong; the trace tells you where. D — the original prompt — is often the actual problem (vague task, no done condition), but you will make a much better guess about how the prompt was wrong after seeing what the loop did with it. Starting with the trace is the highest-signal diagnostic move. A and C give you almost no diagnostic information.

Reflection prompt

Sketch a recent trace.

In 5–7 sentences: Think of one recent time you used an AI tool for a multi-step task — it does not matter whether it went well or badly. Describe the task briefly. Then sketch what the loop probably looked like: roughly how many turns, and what tools it likely used. Finally, name the turn that was most likely to go wrong, and say what you would check in the trace next time.

If you cannot recall the trace because you were not looking at it at the time, that is the point of the exercise. From now on, look.

Project checkpoint

Tools the agent will need, and what “done” looks like.

Open the note you started in Lesson 1.1 — by now it has the candidate task plus the “confidence ≠ correctness” risk from Lesson 1.2. Underneath, add two more lines:

“The tools this agent will probably need are ____.”
“The task is done when ____.”

Do not worry yet about whether the tools exist or how to hook them up. Name them in plain English — “a calendar reader,” “a file writer,” “a search,” “an email drafter.” Same for the done condition: state it in one short sentence that a reader could test. You are practicing a key directing move — making the loop’s success measurable — before you touch any tool in Module 2.

Next in Module 1

Lesson 1.4 — Context windows, memory, and state.

Three layers of agent memory, why context windows silently truncate, and the four distinct ways an agent “forgets” — each with its own fix.

Continue to Lesson 1.4 →