AI Architect Academy · Module 9

Security, privacy & responsibility.

Module 9 is the first module that is not about what you can do — it's about what you have to protect. You name the threats, harden your workstation and pipeline, write the one document that says in your own words what your system is allowed to see, where its data goes, what happens when something goes wrong, and who hears about it. You ship one frozen artifact: /capstone/security-posture.md.

Recipe Book status for this module

Module 9 has a focused RECIPE spine — the concrete commands for environment variables, OS keychain entries, provider-console settings, and the key-rotation drill. Last verified 2026-04-20, next review 2026-07-20.

Lessons in this module CORE + RECIPE

Work through these in order. Each lesson writes or finishes one section of /capstone/security-posture.md; Lesson 9.5 runs the incident drill and freezes the document — the ninth capstone artifact of the course.

  1. 9.1   The threat model: three adversaries, one student.

    Mostly CORE

    The framing lesson of Module 9. You name the three adversaries who actually threaten a one-student system — the careless self (the misplaced key, the over-scheduled task, the draft sent because audience-equals-you was posture and not habit), the hostile internet (any text your agents read from the open web, from an inbound email, from a stranger's PDF), and the supply chain (every plugin, MCP, and skill running third-party code inside your session). You commit to the audience-equals-you rule as the reason this threat model is tractable at all — a system that publishes to no one has a smaller, more legible attack surface than one that does. You create /capstone/security-posture.md with six section placeholders and fill in Section 1 — Threat model — naming which adversaries are realistic in your context and which are not, and ending with the one adversary you are most underprepared for.

    Open Lesson 9.1 →

  2. 9.2   Prompt injection and the trust boundary.

    CORE + RECIPE · headline technical insight

    The module's headline technical insight. You install the trust-boundary mental model — trust is a property of text, not of sources — and name the four injection surfaces on your system (web pages a research agent summarizes, emails an inbox agent reads, shared-state files upstream pipeline stages write, responses third-party MCPs return). You install the three defenses: segregation (untrusted text is fenced, labeled, and never concatenated into the instruction block), refusal (the agent does not act on instructions found inside untrusted text), and containment (the audience-equals-you rule that makes the worst case of a successful injection — a draft only you read — survivable). You run the Spot the injection HTML drill against five realistic examples and begin Section 3 — Trust boundaries — which Lesson 9.4 finishes for your own pipeline.

    Open Lesson 9.2 →

  3. 9.3   Secrets, API keys, and credentials across agents.

    Mostly RECIPE

    The recipe-heavy lesson. Three non-negotiable rules govern every secret on your system: (a) no secret ever appears in a conversation — not in a Claude Code CLI prompt, a Cowork-tab chat, a committed skill, or a scheduled-task definition; secrets live only in environment variables, OS keychains, or provider vaults; (b) every key has a scope and a cap — a key that can do everything is a key that, when leaked, does everything, so your cloud-provider key is scoped to the projects it needs and capped at a monthly dollar amount you pick deliberately; (c) a leaked key is rotated, not patched. You run the full rotation drill against your Module 2 cloud-provider key — the one with the largest blast radius and the one most likely to have been pasted into a transcript at some point — using freshness-tabled recipes for your shell, your OS keychain, and the provider console. You produce Section 4 — Secrets posture — with rotation cadence (90 days plus immediately on suspicion), per-key dollar cap, and the one secret manager you will use going forward.

    Open Lesson 9.3 →

  4. 9.4   Local vs. cloud: routing data with intent.

    CORE + RECIPE

    The routing lesson. You install the three-class data taxonomypublic (already on the open internet; any model, local or cloud), personal (about you or your household, unpublished; a vetted cloud model whose terms you have read), sensitive (medical, financial-account, immigration/legal, anyone else's credential; local only or not at all) — and derive the routing rule mechanically from it: class in, model out, no negotiation against convenience. You walk every folder in your capstone (/capstone/pipeline-v1/, my-first-loop.md, the Module 5 inbox/calendar posture, the Module 7 plugin register) and label each artifact. As the drill, you re-route one sensitive-data flow to a local model on Ollama or LM Studio during the lesson. You produce Section 2 — Data classification — and finish Section 3 — Trust boundaries — for your own pipeline.

    Open Lesson 9.4 →

  5. 9.5   Cost, responsibility, and the incident loop (capstone).

    Mostly CORE

    The responsibility lesson and the capstone freeze. Cost is a security topic — a quietly-consumed \$200 of cloud spend is the same class of loss as a leaked key — and you install the three cost horizons: the pre-flight estimate from Module 8, the monthly budget you pick deliberately, and the hard cap set in the provider billing console below which the provider refuses to serve more requests. You install the four-step incident loopstop (kill the pipeline; rotate the key; revoke the plugin), assess (what did it touch, spend, produce, who saw it), repair (update the posture; tighten the rail that failed; rerun cleanly), tell one human — and you name the one human (parent, mentor, instructor) who will hear about every incident you cannot fully explain within 24 hours. You run the incident drill end-to-end against one of four scripted scenarios, write the after-action note, fill in the next-review date (≤ 90 days out), and freeze /capstone/security-posture.md with all six sections. Ninth capstone artifact.

    Open Lesson 9.5 →


Resources for this module RECIPE

Six printable companions and one interactive activity run alongside the lessons. Each printable is a print-to-PDF page that prints cleanly on letter paper. Use them in the order their lessons reference them.

  • Threat-model sketch — printable for Lesson 9.1. A one-page worksheet that walks you through the three adversaries (the careless self, the hostile internet, the supply chain) and rates your current exposure to each in your own context, ending with the single adversary you are most underprepared for. The raw material for Section 1 of the posture document.
  • Trust-boundary map — printable for Lesson 9.2. A block for each of the four injection surfaces (web summary, inbound email, shared-state file, MCP response) with space for the text source, the hardening in place, and the failure mode if the boundary is crossed. Used alongside the Spot the injection drill; becomes the scaffold for Section 3 of the posture document.
  • Key-rotation register — printable for Lesson 9.3. One row per key (provider, scope, dollar cap, rotation cadence, last rotated, next rotation) with room for the rotation drill's before-and-after lines. Preserved as the secrets appendix to the posture document and updated on every rotation.
  • Data-classification table — printable for Lesson 9.4. The three-class taxonomy (public / personal / sensitive) laid out as a table with one row per artifact in your capstone, a classification column, and a routing column (which model this class goes to). The worksheet you fill in while walking your capstone folders and the raw material for Section 2.
  • Incident-drill scripts — printable for Lesson 9.5. Four realistic scenarios (leaked key, runaway pipeline, a draft the agent almost sent, a successful prompt injection), each with a narrative setup and a blank four-step response template (stop / assess / repair / tell one human). You pick one, run it end-to-end, and append the after-action note to the posture document.
  • Security-posture template — the capstone freeze scaffold. A six-section markdown template (threat model / data classification / trust boundaries / secrets / cost / incident loop) with the one-human block, the next-review date line, and the five safety-norms footer ready to be written in your own words. The template you copy into /capstone/security-posture.md in Lesson 9.1 and freeze in Lesson 9.5.
  • Spot the injection — interactive HTML activity for Lesson 9.2. Five realistic passages (a search result, an email body, a shared-state file, an MCP response, a web page), each with or without an injection attempt hidden in the text. For each, you mark inject / clean and name the surface; the drill scores on first submission and reveals the hardening that would have contained the attempt. The rep that installs the trust-boundary reflex. Students who score below 4/5 redo the exercise after re-reading the Lesson 9.2 content block on injection patterns.

What you should have when this module is done CORE

By the time you close out Module 9, you should be able to point to six concrete things on your machine and in your capstone folder:

  • A frozen /capstone/security-posture.md with all six sections complete (threat model, data classification, trust boundaries, secrets, cost, incident loop), the one human named in Section 6, and a next review date no more than 90 days out on the last line. The document is two to four pages, written by you, and reads as your own commitments — not a template with the blanks filled in.
  • A rotation register with at least one real rotation logged from the Lesson 9.3 drill — the key that was rotated, the scope it now has, the dollar cap it now carries, the date rotated, and the next rotation date. The drill is the evidence that the secrets posture is a practiced muscle, not a paragraph.
  • A cost register with the monthly budget you picked deliberately, the hard cap set in the provider billing console (with a screenshot of the cap field), and the alert threshold configured to fire at a percentage of the budget. The cap is the difference between a bad week and a survivable mistake; the register is where you prove it is in place.
  • At least one logged incident-loop walk-through — the scripted drill from Lesson 9.5 counts — with an after-action note appended as an appendix to the posture document. The note says what happened, what the loop caught, which rail failed, and what you tightened. An incident loop you documented without walking is not an incident loop you can run when you need it.
  • The Spot the injection drill scored 4/5 or better, with a screenshot of the result saved alongside the trust-boundary map. If your first attempt scored below 4/5, the screenshot is of the passing retry after you re-read the Lesson 9.2 content block on injection patterns.
  • The completed end-of-module check in your portfolio, scored at 11.5 / 15 or better with full credit on at least one applied question.

If any of these is missing, go back to the checkpoint that produced it and finish before moving on. Module 10 assumes the posture document is already frozen — the capstone documentation you ship in Module 10 will reference security-posture.md more than any other artifact. A student who arrives at Module 10 without this document has a hole at the center of their capstone.

Coming next

Module 10 — Ship your agentic system.

Module 10 is the capstone. You build and ship a personally useful agentic system with at least three integrated agentic components — your posture document is the security specification, your pipeline from Module 8 is often one of the three components, and the documentation you ship cites Modules 1–9 as its architecture inheritance.

Open Module 10 →