Lessons in this module CORE
Work through these in order. Each lesson ends with a project checkpoint that carries forward into the capstone.
- 1.1 What is an AI agent, really?
CORE · The four parts — model, loop, tools, state — and how they combine to turn a chatbot into something that can do real work. Chatbot vs. agent through a concrete Dust Bowl research example.
Open Lesson 1.1 → (also available free, no signup)
- 1.2 How the model thinks (and why confidence ≠ correctness).
CORE · What the model is really doing under the hood, why fluent output can still be wrong, and the three directing moves that follow. Lays the groundwork for trace-reading in Lesson 1.3.
Open Lesson 1.2 → (also available free, no signup)
- 1.3 The loop: prompt → reason → tool call → response → repeat.
CORE + RECIPE · The five-step loop, a worked Dust Bowl trace, the four ways a loop terminates, and a first recipe sidebar on reading a real agent trace in Cowork. Culminates in the Trace Annotation activity.
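If you like seeing ideas as code, the loop this lesson teaches can be sketched in a few lines of Python. Everything below (the message shapes, the tool registry, the `model` callable, the function names) is a hypothetical illustration for intuition, not Cowork's actual API:

```python
# A minimal sketch of the prompt -> reason -> tool call -> response loop.
# All names and message shapes here are illustrative assumptions.

def run_agent(prompt, model, tools, max_turns=10):
    state = [{"role": "user", "content": prompt}]    # the working context
    for _ in range(max_turns):                       # one way a loop ends: turn budget
        reply = model(state)                         # reason over the full context
        state.append(reply)
        if reply.get("tool") is None:                # another way: a final answer
            return reply["content"]
        tool_fn = tools.get(reply["tool"])
        if tool_fn is None:                          # another way: an unrecoverable error
            raise KeyError("unknown tool: " + reply["tool"])
        observation = tool_fn(**reply.get("args", {}))
        state.append({"role": "tool", "content": observation})
    return None                                      # budget exhausted without an answer
```

A toy "model" that first requests a weather tool and then answers would drive this loop through exactly two turns, which is the shape of the trace you annotate in the activity.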
- 1.4 Context windows, memory, and state.
CORE · Three layers of agent memory (working context, persistent project state, long-term skills), why context windows silently truncate, and the four distinct ways an agent “forgets” — each with its own fix. Pairs with the Context Overload sandbox activity.
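The silent truncation this lesson covers can be felt in a toy model. Counting "tokens" as whitespace-separated words is a deliberate simplification (real tokenizers work differently), and the function name is made up for this sketch:

```python
# Toy illustration of context-window truncation: keep the newest messages
# that fit the budget; everything older is silently dropped.
# Word counting stands in for real tokenization here.

def fit_context(messages, token_budget):
    kept, used = [], 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > token_budget:
            break                        # older turns vanish with no warning
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order
```

The key behavior to notice is that nothing signals the loss: the agent simply stops "remembering" the oldest turns, which is exactly what the Context Overload sandbox lets you watch happen.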
- 1.5 The directing mindset.
CORE · The shift from “AI user” to “director.” The seven directing moves, a working definition of responsibility that scales with leverage, and the my-first-loop.md activity — the first artifact that will travel with you to your capstone.
Module wrap-up
End-of-module check.
Ten questions — six multiple choice, two short-answer, two applied — pulling from all five lessons. A scoring summary for parents makes it straightforward to document this as a module assessment on a transcript.
Activities in this module CORE
Two browser-based activities run alongside the lessons that need them. Open each one in a separate tab when the lesson tells you to — they are integral to the work, not optional extras.
- Trace Annotation — a four-turn Salt Lake City weather-agent trace you annotate turn by turn. Pairs with Lesson 1.3.
- Context Overload Sandbox — fill a simulated context window and watch what gets dropped when you exceed its limit. Pairs with Lesson 1.4.
Optional bonus. If you want to feel the next-chunk prediction mechanism more directly, the Next-Token Roulette activity lets you guess the model's most-likely continuation on six prompts. Skip it unless you're curious — the core work for Lesson 1.2 is the “Catch a hallucination in the wild” exercise inside the lesson.
What you should have when this module is done CORE
By the time you close out Module 1, you should be able to point to four concrete artifacts in your project folder:
- The first-loop note you started in Lesson 1.1 and added to across Lessons 1.2–1.4 — candidate task, confidence risk, tools, done condition, and a line about state. Five short lines, in any text file or note.
- Completed Trace Annotation worksheet (from Lesson 1.3’s activity).
- Completed Context Overload reflection (from Lesson 1.4’s activity).
- my-first-loop.md — the one-page directing plan from Lesson 1.5, expanded from your first-loop note. This is the first capstone artifact.
If any of these is missing, go back to that lesson's checkpoint and finish it before moving on. Module 2 will assume you have all four.
Coming next
Module 2 — Your AI workstation.
The hands-on setup module. Install the Claude desktop app, sign in with Anthropic Pro, and verify the three tabs (Chat, Cowork, Code) you'll use through the rest of the course. Optional local LLM on your own hardware (Ollama + Llama 3 or Mistral), dev tools, and safe defaults for privacy and cost. Heavy on recipe content — every step dated and versioned.
Module 2 opens when Module 1 is complete in your portfolio.