Recipe Book status for this module
Module 2 is heavy on RECIPE content — tool-specific install steps that change as vendors update. Every recipe in this module was last verified 2026-04-17 and is on the next-review queue for 2026-07-17. If you find a step that has drifted from current reality (e.g., a renamed setting, a moved button), flag it to your instructor or to support so we can refresh the recipe before the quarterly cycle.
CORE blocks (the four-axis framing in 2.1, the four key-hygiene rules in 2.3, the cost-and-privacy posture in 2.5) are tool-agnostic and do not need refreshing — they are the part of Module 2 that will still be true in three years.
Lessons in this module CORE + RECIPE
Work through these in order. Each lesson ends with a project checkpoint that adds at least one concrete line to your my-first-loop.md file.
---
2.1 The workstation decision: local, cloud, or both.
CORE + small RECIPE. Conceptual only, no installs. The four axes that decide every workstation question (capability, cost, privacy, reliability), worked routing calls for sample tasks, why most students should default to a hybrid setup, and the “default model + switching condition” rule that ends most second-guessing. Pairs with the Local vs. cloud decision tree printable.
---
2.2 Your dev toolkit: the foundations.
CORE + RECIPE. Install VS Code, your terminal, Python, and git — the four foundation tools every later lesson assumes are working. Five terminal commands you actually need (cross-platform), one-time git config, and a verification activity that lands four version numbers in your loop file.
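The verification activity boils down to a handful of terminal commands. A minimal sketch — the exact command names are assumptions and vary slightly by OS (`python` vs. `python3` on Windows, and VS Code's `code` launcher has to be added to PATH from inside the editor first):

```shell
# Print the version numbers the loop-file checkpoint asks for.
git --version                                        # e.g. "git version 2.x.y"
python3 --version 2>/dev/null || python --version    # Windows usually has `python`
if command -v code >/dev/null; then
  code --version | head -n 1                         # first line is the version
else
  echo "VS Code 'code' command not on PATH yet"
fi
echo "$SHELL"                                        # which shell your terminal runs
```

Paste each version string into `my-first-loop.md` as the checkpoint describes.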
---
2.3 Running a local LLM on your own machine.
Mostly RECIPE. Install the software that runs an AI model on your own computer (Ollama or LM Studio), download a small open-source model, and ask it your first question — all without an internet connection. The CORE blocks explain what a local model is and what makes one usable on a typical laptop. Recipes cover macOS and Windows; a printable worksheet helps you measure how fast the model responds on your machine. This is the only lesson with heavier hardware requirements (about 8 GB of RAM, 15 GB of free disk space) — students whose laptops don't meet that bar can take this lesson read-only, skip the install, and pick up at Lesson 2.4. Nothing downstream breaks.
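The speed measurement on the worksheet is simple arithmetic once you have two stopwatch readings. A hedged sketch — the token count and timings below are invented for illustration, not real benchmarks:

```python
def tokens_per_second(n_tokens: int, first_token_s: float, total_s: float) -> float:
    """Generation throughput, measured after the first token arrives."""
    return n_tokens / (total_s - first_token_s)

# Hypothetical run: 180 tokens, first token at 1.2 s, finished at 13.2 s.
rate = tokens_per_second(180, first_token_s=1.2, total_s=13.2)
print(f"{rate:.1f} tok/sec")  # 180 tokens / 12.0 seconds = 15.0 tok/sec
```

Subtracting the first-token wait matters: it isolates generation speed from model load-up, which is what the worksheet's verdict column is asking about.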
---
2.4 Cloud AI accounts, API keys, and coding agents.
CORE + heavy RECIPE. Stand up an Anthropic console account (and optionally OpenAI), set a real monthly cost cap, generate an API key, store it correctly in .env, verify it with a short Python test, and install Cowork and Claude Code as your coding agents. Pairs with the API-key intake checklist and the first-directed-edit walkthrough.
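The .env pattern at the heart of this lesson can be sketched with nothing but the standard library (real projects often use the python-dotenv package instead; the key string below is a placeholder, not a real key):

```python
import os
import tempfile
from pathlib import Path

def load_env(path: Path) -> None:
    """Read KEY=value lines from a .env file into os.environ.

    Never print the values: the whole point is keeping the key
    out of code, chat logs, and screenshots."""
    for line in path.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo against a throwaway file; in the lesson your real .env sits in the
# project root and is listed in .gitignore before the key ever lands in it.
env_file = Path(tempfile.mkdtemp()) / ".env"
env_file.write_text("ANTHROPIC_API_KEY=sk-ant-placeholder\n")
load_env(env_file)
print("key present:", "ANTHROPIC_API_KEY" in os.environ)  # True, value never shown
```

Code then reads `os.environ["ANTHROPIC_API_KEY"]` at runtime, so the key never appears in a file you'd commit.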
---
2.5 Safe defaults: cost, privacy, and a posture you’ll actually keep.
Heavy CORE. Write the posture you'll carry forward through the rest of the course. A four-line cost posture (ceiling, warning, response, high-volume rule) and a four-line privacy posture (default placement, always-local, fine-in-cloud, when-unsure), the four-part new-tool checklist you copy verbatim, and a frozen v1 of the posture saved into your capstone folder. The reflection prompt — which line are you least likely to actually keep? — is the load-bearing step.
Module wrap-up
End-of-module check.
Ten questions across 15 points — six multiple choice, two short-answer, two applied. Closed-book for the recall sections; open-workstation for the applied pair, where you grade your own posture document and route five real-week tasks by axis. Passing bar: 11.5 / 15 with full credit on at least one applied. The parent scoring summary makes it straightforward to document as a module assessment on a transcript.
Resources for this module RECIPE
Five printable companions run alongside the lessons. Each is a print-to-PDF page that prints cleanly on letter paper. Use them in the order their lessons reference them.
- Local vs. cloud decision tree — eight-task printable worksheet for Lesson 2.1. Routes each task through the four axes and lands on a local / cloud / hybrid call with a one-line defense.
- Local latency & cost worksheet — paper companion for Lesson 2.3. Three prompt rows (short / medium / long) for first-token, tok/sec, and verdict; a 7-row break-even box comparing local vs. cloud cost.
- API-key intake checklist — two-part printable for Lesson 2.4. A 10-item intake checklist (account → cap → key → .env → .gitignore → verified test) and 10 paper-drill scenarios with an answer key.
- First directed edit walkthrough — 30–45 minute companion for Lesson 2.4. Same starter repo through both Cowork and Claude Code with capture grids for prompt / first-try / surprise, then a side-by-side comparison.
- Workstation posture template — the printable that becomes Lesson 2.5’s deliverable. Six parts: cost posture, privacy posture, default-model rule, installed stack, four-part checklist, sign + date.
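The break-even box on the latency & cost worksheet reduces to one division. A sketch with invented prices — plug in your real hardware cost and the provider's current per-token price:

```python
def breakeven_tokens(hardware_cost_usd: float, cloud_usd_per_mtok: float) -> float:
    """Tokens you'd have to generate locally before a one-time hardware spend
    costs less than paying a cloud provider per token."""
    return hardware_cost_usd / cloud_usd_per_mtok * 1_000_000

# Hypothetical: a $300 hardware upgrade vs. cloud output billed at $3 / M tokens.
print(f"{breakeven_tokens(300, 3.0):,.0f} tokens")  # 100,000,000 tokens
```

A number that large is the worksheet's point: for light usage, cost alone rarely justifies local hardware, so the local case usually rests on the privacy and reliability axes instead.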
An interactive HTML latency calculator for Lesson 2.3 is on the roadmap; until it ships, the latency & cost worksheet is the canonical version of that activity.
What you should have when this module is done CORE
By the time you close out Module 2, you should be able to point to seven concrete things on your machine and in your project folder:
- A working local model — if your hardware allows — with a measured tok/sec verdict you can defend.
- A live cloud API key stored in .env (not in code, not in chat, not in a screenshot), with a real monthly cost cap set in the provider console.
- An editor open on a project folder, with the integrated terminal working.
- At least one coding agent installed and authenticated (Cowork, Claude Code, or both) and a default named in your loop file.
- An updated my-first-loop.md with all seven Module-2 lines filled in: model / local runner / local throughput / cloud provider / key storage / monthly cap / editor + coding agents + default.
- A frozen workstation-posture-v1.md in your /capstone/ folder — the second capstone artifact, paired with my-first-loop-v1.md from Module 1.
- The completed end-of-module check in your portfolio, scored at 11.5 / 15 or better with full credit on at least one applied question.
If any of these is missing, go back to the checkpoint that produced it and finish before moving on. Module 3 will assume your workstation is real and your posture is written.
Coming next
Module 3 — AI coding partners.
With the workstation standing and the posture written, the next module turns Cowork and Claude Code into real collaborators: reading what they produce, directing them toward the tasks they do well, catching them on the tasks they do badly, and folding their output back into your own work.
Module 3 opens when the Module 2 portfolio is complete — the seven items above, including the end-of-module check.