Weekly is a different job, not a longer brief CORE
It is tempting to think of a weekly report as a morning brief with more input — seven days’ inbox instead of one, seven days’ calendar, seven days of research notes. That framing produces a bad artifact. A seven-day version of a morning brief is a wall of categorized messages and events; nobody reads it.
A weekly report answers three specific questions the morning brief cannot:
- What did I actually get done this week? Not “what was on my calendar” — what moved. Shipped commits. Finished research notes. Replies sent. Drafts closed. Things that left a mark.
- What did I say I would do that I didn’t do? Carried commitments. Open threads still waiting on me. Drafts that sat in the drafts folder for seven days. A weekly report is where the honest accounting of not-done lives.
- What changed in the world of the thing I’m working on? If the student has a research topic, a project, a school unit, a competition — the weekly report summarizes changes in that world over the past seven days. New arguments, new sources, new deadlines.
That framing sets up the whole lesson. A weekly report is a synthesis and accountability artifact; a morning brief is a routing artifact. They compose different inputs, use different section shapes, and ask the student for different kinds of review attention.
One consequence: the weekly report’s audience is still you, but your reviewing posture is different. A morning brief is read at 7 AM, scanned in two minutes, used as routing. A weekly report is read at the start of a planning block on Monday, read slowly, used to decide what the next seven days will look like. The artifact is longer — three to four pages, not one — because the reviewer is slower.
The weekly report’s structure and its three inputs CORE
A weekly report that follows a predictable structure is cheaper to audit later. The recommended five-section shape:
- Header line. Run timestamp, week ID (week-2026-17), the date range the report covers (2026-04-20 through 2026-04-26).
- Shipped this week. A numbered list, max ten items. Each item is a link or path to evidence — a commit hash, a saved research note, a thread URL, a capstone file name — plus one sentence of what it is. The link-to-evidence constraint is the integrity rail: the agent cannot claim shipped work without a pointer to it.
- Said-I-would, didn’t. A numbered list of carried commitments. Each item: what was supposed to happen, what blocked it, whether it carries to next week or drops. This is the section students want to skip; it is the section the report is for.
- World changes in the topic. A synthesis (no more than eight sentences) of what changed in the research topic the student is tracking. This section is produced by the scheduled research refresh (Block 3) and pasted into the weekly report at compose time.
- Health footer. Same shape as the morning brief. What the agent could not do this week. What sources were unavailable. What went silent.
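Concretely, a filled-in report might open like this (an illustrative skeleton; the items, hash, and filenames are invented, not from a real run):

```markdown
# Weekly report: week-2026-17 (2026-04-20 to 2026-04-26)
Run: 2026-04-26T18:02 local. Audience: me only.

## Shipped this week
1. a1b2c3d (commit) — capstone: rewrote the loop-exit check
2. notes/oral-argument-summary.md — finished research note

## Said-I-would, didn't
1. Reply to mentor thread. Blocked on the missing draft. Carries to next week.

## World changes in the topic
(pasted from the research refresh's latest.md)

## Health footer
Could not reach the docket mirror this week. Calendar source went silent Thursday.
```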
The three inputs the prompt names, and the recipe wires up:
- Commit history for the student’s project repos. git log output for the past seven days across the student’s active repos, scoped to their own authorship. The prompt tells the agent to attribute commits specifically — do not claim commits it did not see, do not invent messages, do not round up.
- The capstone folder’s changes. Any file under /capstone/ modified in the past seven days. Diff the my-first-loop.md file against its state seven days ago and report the sections that changed.
- The research refresh’s latest diff. The output of a scheduled research refresh (defined in Block 3) becomes section 4 of the weekly report. This couples the two tasks — the refresh runs on its own schedule; the weekly report composes its latest result into the synthesis.
If the student has no active repos (common for younger students early in the course), the commit-history input is replaced by the capstone folder input alone. That is fine. The report still reads as a report.
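The first two inputs can be gathered mechanically. A minimal sketch, assuming a local repo checkout and a capstone folder on disk (the helper names are invented for illustration; the recipe wires the real calls for you):

```python
from datetime import datetime, timedelta
from pathlib import Path

def git_log_args(author: str, repo: str) -> list[str]:
    """Build the git log invocation for the past seven days,
    scoped to the student's own authorship (hypothetical helper)."""
    since = (datetime.now() - timedelta(days=7)).strftime("%Y-%m-%d")
    return [
        "git", "-C", repo, "log",
        f"--since={since}", f"--author={author}",
        "--oneline", "--no-merges",
    ]

def recent_capstone_files(capstone: Path, days: int = 7) -> list[Path]:
    """Files under the capstone folder modified in the past `days` days."""
    cutoff = datetime.now().timestamp() - days * 86400
    return sorted(
        p for p in capstone.rglob("*")
        if p.is_file() and p.stat().st_mtime >= cutoff
    )
```

The `--author` scoping is what keeps the report from rounding up: commits the student did not write never enter the candidate list in the first place.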
Scheduled research refresh: watching a topic across runs CORE
Module 4 taught directed research — a one-time or multi-session investigation into a topic, producing a research brief. A scheduled research refresh is a research agent rigged to re-run the same investigation on an interval and report only what changed since last time.
Examples of the shape:
- A student tracking a specific court case wants to know weekly whether any new filings, decisions, or commentary appeared.
- A student tracking a legislative bill wants to know weekly whether new hearings, amendments, or votes happened.
- A student doing a research paper on a fast-moving topic (a recent technology, a current policy debate) wants to know weekly whether new peer-reviewed sources or reputable news analyses appeared.
The refresh framing changes the job. A one-time Module 4 research agent summarizes a topic. A scheduled refresh summarizes the delta — what’s new and, equally importantly, what stayed the same but moved in emphasis. The artifact is a short run note: three to eight sentences, explicit about what changed and what did not.
Why does this belong in a module on automation rather than in Module 4? Two reasons. First, refresh is an automation pattern — it is defined by the schedule and the idempotency key, not by the depth of the investigation on any one run. Second, the refresh is the thing you feed into the weekly report; wiring them together is the point of the lesson.
The refresh’s output should be small. A run note that’s two pages long is not a refresh; it’s a re-run of the initial brief. If the agent cannot summarize this week’s changes in eight sentences, it probably found too many things and is including everything instead of deciding. The prompt will put a ceiling on output length and instruct the agent to rank by relevance, not count.
The diff contract: what counts as meaningful change CORE
The hardest part of a scheduled research refresh is deciding what counts as a change worth reporting. A naive refresh that reports every new URL it found will produce twenty sentences of noise a week; the student will stop reading it, and the automation becomes worse than no automation.
The diff contract is a short paragraph in the prompt that defines “meaningful change” for this specific refresh. It has three parts:
- What to include. Concrete types. “New primary-source filings, decisions, or hearings. New analyses from recognized legal scholars or national news outlets. New official statements from named parties.”
- What to exclude. Concrete types. “Opinion columns in regional news. Reposts or syndications of items already reported last week. Commentary on commentary.”
- What counts as ‘new.’ Specifically. “Published after [date of last run]. If a piece has been updated since last run, include it only if the update itself is material — a new section, a substantive correction, a reversal.”
The contract is not universal. The examples above fit a court-case refresh; the contract for a technology-announcement refresh or a policy-debate refresh will look different. The research-refresh prompt template provides three contract examples the student can adapt.
One more discipline: the diff contract also names a zero-state behavior. If a refresh run finds nothing meaningful, the artifact should say so explicitly — “No meaningful changes this week” — and not fabricate items to fill space. This is the same anti-confabulation rail you met in Lesson 6.2. It is more important here, because the agent has a bias toward producing something from a research task, and an empty refresh is a valid and useful output.
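The contract behaves like a filter plus a zero-state renderer. A toy sketch, assuming each found item carries a type and a publication date (these field names are invented for illustration; the real contract lives in the prompt, not in code):

```python
from datetime import date

def apply_diff_contract(items, last_run: date, excluded_types: set) -> list:
    """Keep only items that are 'meaningfully new' per the contract:
    published after the prior run, and not an excluded type."""
    return [
        it for it in items
        if it["published"] > last_run and it["type"] not in excluded_types
    ]

def render_refresh(items) -> str:
    """Zero-state behavior: an empty refresh says so explicitly
    rather than fabricating items to fill space."""
    if not items:
        return "No meaningful changes this week."
    return "\n".join(f"- {it['title']} ({it['published']})" for it in items)
```

Note that the zero state is a first-class output, not an error: an honest "nothing" is exactly what the anti-confabulation rail protects.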
The idempotency keys for both tasks CORE
Weekly report: key is the ISO week number in the canonical time zone. week-2026-17.md is the week of April 20–26, 2026, which is ISO week 17. A re-run in the same week overwrites. Why ISO weeks instead of “Sunday-to-Saturday” or “Monday-to-Sunday”? Because ISO week 17 is the same week number in every tool — your calendar, your git log, any service that reports weeks — and you want a label that is stable across tools.
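In code, the key is one call to `isocalendar()`. A sketch in Python (the function name is ours, not the recipe's):

```python
from datetime import date

def week_key(d: date) -> str:
    """Idempotency key for the weekly report: ISO year + ISO week."""
    iso = d.isocalendar()
    return f"week-{iso.year}-{iso.week}.md"

# Any day in the week of 2026-04-20 maps to the same key:
# week_key(date(2026, 4, 20)) == week_key(date(2026, 4, 26)) == "week-2026-17.md"
```

Using the ISO year (`iso.year`) rather than the calendar year matters at year boundaries, where late-December days can belong to week 1 of the following ISO year.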
Scheduled research refresh: key is the query ID plus run timestamp. research/q-supreme-court-patel/run-2026-04-22T0800.md, with a latest.md symlink in the same folder pointing to the most recent run. The refresh is fundamentally re-runnable within the same interval — maybe you want to pull a refresh twice on a busy day — so the timestamped filename is the key, and the latest.md symlink is what the weekly report reads from.
The subtle bit is how the diff is computed. The refresh’s prompt must know where to look for the prior run’s content so the agent can compute the delta. The simplest form: the prompt tells the agent “read research/q-supreme-court-patel/latest.md (if it exists) as the prior-run summary; your job is to produce a new summary and a short diff paragraph.” The recipe wires this in. The student does not have to write the logic, but should understand that the refresh’s “state” — the thing that makes it scheduled rather than one-time — lives in that one file.
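A minimal sketch of that state handling, following the layout the lesson describes (timestamped run files plus a latest.md symlink; the function names are invented, and the recipe implements the real version):

```python
from datetime import datetime
from pathlib import Path
from typing import Optional

def write_refresh_run(folder: Path, summary: str) -> Path:
    """Write this run's note under a timestamped name and repoint
    latest.md at it."""
    folder.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%dT%H%M")
    run_file = folder / f"run-{stamp}.md"
    run_file.write_text(summary)
    latest = folder / "latest.md"
    if latest.is_symlink() or latest.exists():
        latest.unlink()
    latest.symlink_to(run_file.name)  # relative link within the folder
    return run_file

def prior_summary(folder: Path) -> Optional[str]:
    """Read the prior run (if any) so the agent can compute the delta."""
    latest = folder / "latest.md"
    return latest.read_text() if latest.exists() else None
```

The entire "memory" of the refresh is that one symlink: delete latest.md and the next run becomes a baseline again.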
One failure mode to watch for: the agent reads latest.md, decides nothing has changed, and quietly writes “no meaningful changes” even when the underlying world has moved a lot. This is a silent failure because the artifact still has the expected shape. The defense is the same one you met in the morning brief: a health footer, every run, that reports what the agent searched and how many sources it read, so the student can detect a run that was lazy rather than uneventful.
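One crude but effective version of that check, assuming the health footer records how many sources the run actually read (the footer fields here are invented for illustration; your footer's shape may differ):

```python
def looks_lazy(current: dict, prior: dict) -> bool:
    """Flag a 'no changes' run that searched far less than the prior run.
    Assumes footers carry 'claimed_no_changes' and 'sources_read' fields."""
    if not current.get("claimed_no_changes"):
        return False
    # Suspicious: under half the prior run's source count, yet "nothing new".
    return current.get("sources_read", 0) < max(1, prior.get("sources_read", 0) // 2)
```

The threshold is arbitrary; the point is that "uneventful" and "lazy" footers look different run-to-run, and a comparison, even a rough one, makes the difference visible.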
Weekly report setup — primary path (Cowork tab) RECIPE
| Tool | Claude desktop app — Cowork tab's scheduled tasks (primary). Optional advanced: Claude Code CLI + cron / launchd / Task Scheduler. |
| Last verified | 2026-04-17 |
| Next review | 2026-07-17 |
| Supported OSes | macOS, Windows (Claude desktop app) |
- Open the Claude desktop app, switch to the Cowork tab, and start a session titled Weekly report — setup.
- Paste the weekly-report prompt template. Fill in: the repo paths the agent should git log against; the capstone folder path; the research-refresh folder path (research/q-<your-id>/latest.md).
- Confirm the output contract in chat. The agent should name the five sections, the idempotency key (week-YYYY-WW.md using ISO week numbers), the audience (me only), and the evidence-link rail (every shipped item must have a commit hash or file path).
- Save as scheduled task: Weekly report, schedule Sundays 18:00 local.
- Run an on-demand dry run now against the past seven days. Read the artifact. If section 2 (Shipped) lacks evidence links on any item, the evidence rail is not strict enough — rewrite it as “you may not include an item without a specific path, hash, URL, or filename.”
- File register entry 2.
Optional advanced — Weekly report via the Claude Code CLI + a local OS scheduler (scriptable) RECIPE
For terminal-comfortable students who want the weekly report checked into the same repo as the Module 3 project so commits, prompt, and report all live next to each other. The Cowork-tab path above is sufficient; skip this section if scriptable automation isn't a goal.
| Tool | Claude Code CLI + cron (macOS / Linux), launchd, Task Scheduler |
| Last verified | 2026-04-17 |
| Next review | 2026-07-17 |
| Supported OSes | macOS, Linux, Windows |
- mkdir -p ~/ai-architect-academy/automation/weekly-report/; add prompt.md and run.sh (or .ps1).
- Output path: ~/ai-architect-academy/automation/reports/week-$(date -v-sun +%G-%V).md on macOS (ISO year + ISO week); Get-Date -UFormat "%G-%V" in PowerShell on Windows.
- Hand-run the script. Read the output. Confirm evidence links.
Schedule:
- cron: 0 18 * * 0 cd ~/ai-architect-academy/automation/weekly-report && ./run.sh >> ./log.txt 2>&1
- launchd: per the recipe, every Sunday at 18:00
- Windows Task Scheduler: weekly trigger, Sunday 18:00
- File register entry 2.
Scheduled research refresh RECIPE
| Tool | Claude desktop app — Cowork tab's scheduled tasks (primary). Optional advanced: Claude Code CLI + cron / launchd / Task Scheduler. |
| Last verified | 2026-04-17 |
| Next review | 2026-07-17 |
| Supported OSes | Cowork tab: macOS, Windows. Optional advanced: macOS, Linux, Windows. |
- Pick a real topic you are tracking. Give it a short query ID: q-supreme-court-patel, q-ohio-bill-hb-42, q-electric-aircraft-certification. Make the ID specific enough to not collide with future queries.
- Create the folder: ~/ai-architect-academy/automation/research/q-<your-id>/. Inside it, create a latest.md containing only the header line “Prior run: none yet — this is the first refresh.”
- Paste the research-refresh prompt template. Fill in: the topic (one sentence); the diff contract (include/exclude/new-from-when); the source cadence (how the agent should go looking); the ceiling (eight sentences).
- Save as scheduled task: Research refresh — <query ID>, schedule Fridays 17:00 local (or your chosen cadence; weekly is the default).
- Run an on-demand dry run. The first run produces a baseline — there is no prior to diff against — and writes to run-<timestamp>.md and updates latest.md.
- Run a second on-demand dry run the next day, to exercise the diff path. The second run should say something like “Since the prior run (yesterday), no meaningful changes” or name specific new items.
- File register entry 3.
Troubleshooting
If the second run fabricates a change that is not in any real source, the diff contract’s “what counts as ‘new’” line is not strict enough — rewrite it with explicit publication-date conditions and re-run.
Try it — Weekly report + research refresh, first two runs each RECIPE CORE
Over the course of one week (most of the time is between runs, not during them).
Setup.
- Pick the research refresh’s topic. Write the query ID. Write the one-sentence topic statement. Draft the diff contract (include/exclude/new-from-when). Save the topic and contract in your filled-in copy of the refresh prompt template.
- Set up the research refresh per the recipe. Dry-run the baseline. Confirm latest.md was written and has the expected shape.
- Set up the weekly report per the recipe (Cowork or scriptable path). Do not run it yet.
First scheduled refresh (Friday evening, say).
- The refresh fires. Open the produced run file. Read it. Verify the diff section names what changed since the baseline and that the health footer reports what the agent searched.
- If the refresh invented a change (low probability after a one-day gap, but possible), tighten the diff contract and re-run.
First scheduled weekly report (Sunday evening).
- The weekly report fires. Open week-YYYY-WW.md. Read every section.
- Section 2 (Shipped): verify every item has an evidence link (commit hash, file path, thread URL). Any item without a link means the evidence rail is not strict enough; rewrite the prompt now.
- Section 4 (World changes) should contain the refresh’s latest delta. Verify it is the refresh’s Friday output, not invented prose.
- Section 5 (Health footer) should name what the report could not do. If empty, re-read Lesson 6.2’s block on honesty rails.
Deliverable. Register entries 2 and 3 complete. Artifacts: one week-YYYY-WW.md in /capstone/automation-artifacts/reports/ and two dated refresh run files (baseline + first scheduled) in /capstone/automation-artifacts/research/q-<id>/. Each artifact carries a three-sentence student review at the bottom.
Done with the hands-on?
When the recipe steps and any activity above are complete, mark this stage to unlock the assessment, reflection, and project checkpoint.
Quick check
Five questions. Tap a question to reveal the answer and the reasoning.
Answer: B. The “shipped / didn’t-ship / world-changes” frame is the structural reason the weekly report is a different job from the morning brief. A is a triage-shaped question more suited to a brief. C is a planning-shaped question — valid, but a different artifact. D is the daily brief’s job, not the weekly report’s.
Answer: B. The rail is specifically against confident claims of shipped work with no pointer, which is the most common weekly-report failure mode: the agent writes “finished research paper draft” with no link, and the student nods and files the report, and nothing was actually finished. A path, hash, or URL forces the claim to be auditable. A, C, D are unrelated citation conventions.
Answer: C. The contract is the anti-noise rail specific to research refreshes. A is the schedule, set on the scheduler, not in the prompt. B is model choice, set elsewhere. D is a cost ceiling, named in the register entry. Without a diff contract, refreshes produce either too much (every URL the agent found) or fabricate change to justify the run.
Answer: C is the most common cause and is the failure this module specifically teaches you to watch for. A is possible but unlikely on a topic you’ve noticed moving. B is possible but shows up in the footer; you would see real items listed as “excluded.” D shows up as missing artifacts, not wrongly-empty ones. Fix C by re-stating the “search all sources every run, report what you searched” instruction in the prompt and verifying the footer changes run-to-run.
Answer: B. The point is cross-tool consistency, which makes the filename portable and the task’s idempotency verifiable by any tool that can compute ISO weeks. A is trivially true and not the reason. C is true but not the reason. D is false.
Reflection prompt
Name the event that would retire this refresh.
In 6–8 sentences, answer: Your weekly report’s “Said-I-would, didn’t” section is the one most students want to skip. Be honest with yourself — is it the section you want to skip too? Why? Describe what you think will happen after six weeks of reading the “didn’t” section carefully: how will it change what you carry week-to-week, and what will you do when the same item shows up in “didn’t” four weeks in a row? For your research refresh, what is the one real-world event — named specifically — that, if it happened, would cause you to retire this refresh?
The last question is why this section exists. Most refreshes retire themselves; the student stops caring about the topic. Naming, in advance, the event that retires the refresh keeps it honest.
Project checkpoint
File register entries 2 and 3, and archive the first real runs.
Open automation-register-v1-draft.md and add two entries.
Entry 2: Weekly report.
Schedule: Sundays 18:00 local. Idempotency key: ISO week (week-YYYY-WW.md). Scope: your repos + /capstone/ + research/q-<id>/latest.md. Audience: me only. Cost ceiling: [$X/run, $Y/month]. Last success: first Sunday’s run. Next review: 60 days from today.
Retirement trigger: “I have not acted on the Said-I-would, didn’t section for three consecutive weeks.”
Entry 3: Research refresh — q-<your-id>.
Schedule: Fridays 17:00 local. Idempotency key: query ID + timestamp, latest.md symlink. Scope: public sources relevant to the topic. Audience: me only. Cost ceiling: [$X/run, $Y/month]. Last success: the baseline run. Next review: 60 days from today.
Retirement trigger: [the specific event you named in the reflection].
Copy the first week’s artifacts into /capstone/automation-artifacts/ in the appropriate sub-folders.
Do not proceed to Lesson 6.4 until entries 2 and 3 are complete and at least one real scheduled run (not just a dry run) has produced an artifact for each.
Next in Module 6
Lesson 6.4 — Alerts and watchers.
Most runs say nothing. One run in twenty makes you stop what you’re doing. Build a conditional watcher with a precise threshold, an event-ID idempotency key, and a quiet-run behavior that still writes a log line. Register entry 4 and the Signal-vs-noise drill. Three failure modes to avoid: false positives, false negatives, alert fatigue.