S1 · AI Landscape Quickstart
A fast, source-linked orientation to the AI landscape and Anthropic’s place in it. Use this to open briefings and to set shared vocabulary.
This page is your map. It ties together the client brief (S1–S6), Foreseeable Misuse, the Penumbral Privacy Spine, policy overlays, the reading stack, and the tools & templates.
Use S1–S6 when you are telling the high-level story to boards, executives, and regulators. Each segment has a specific job; together they form a coherent counsel narrative.
Map the key actors in the ecosystem and highlight Anthropic’s distinctive governance and technical posture (ASL-3, RSP, transparency work).
Connect Anthropic’s posture to concrete recommendations and the acceptance matrix. Use these segments to structure decisions and record trade-offs.
These bundles extend S1–S6 into scenarios, doctrine, and supporting material. Treat them as overlays on top of Anthropic’s live policies and technical reports.
Scenario library and acceptance table for risky uses of Claude. Helps you decide what to accept (with controls), defer, or decline.
Organises key privacy and autonomy cases into interpretive packets, linked to an acceptance table and counsel usage guide.
Use the policy overlays, tools & templates, reading stack, and welfare handout as your reference shelf when turning S1–S6 into concrete client work.
Policies & overlays · Tools & templates · Reading stack · Model evals & welfare
Treat these pages as a map and a set of overlays, not an autopilot. Use the steps below when you turn S1–S6, Foreseeable Misuse, the Penumbral module, or the welfare handout into live legal work product.
Begin here, then click into the bundle that matches your task: S1–S6 for the story, Foreseeable Misuse for risky scenarios, the Penumbral module for doctrine, the welfare handout for model behaviour, or the tools/templates when you are ready to structure a decision.
Each high-stakes bundle is wired to an RPE watchpoints JSON file. Skim the relevant bullet points (for example, client_brief_rpe_watchpoints_v0_1.json, fm_rpe_watchpoints_v0_1.json, welfare_rpe_watchpoints_v0_1.json, penumbral_rpe_watchpoints_v0_1.json) before you rely on that bundle in opinions, board packs, or redlines.
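A minimal sketch of what a pre-reliance skim over one of those files could look like. The filenames above are real; the schema here (a `watchpoints` list with `id`, `claim`, and `invalidated_if` fields) is an illustrative assumption, not the actual file format, so adapt the field names to whatever the live files contain.

```python
import json

# Hypothetical schema: the real watchpoint files may differ. This sketch
# assumes each file holds a list of watchpoints, each pairing a claim the
# bundle relies on with the condition that would invalidate it.
SAMPLE = """
{
  "bundle": "client_brief",
  "watchpoints": [
    {"id": "WP-1", "claim": "ASL-3 safeguards are active",
     "invalidated_if": "an RSP revision changes the ASL-3 trigger"},
    {"id": "WP-2", "claim": "cited transparency report is current",
     "invalidated_if": "a newer report supersedes the cited version"}
  ]
}
"""

def skim_watchpoints(raw: str) -> list[str]:
    """Return one-line summaries suitable for a quick pre-reliance skim."""
    doc = json.loads(raw)
    return [
        f"{wp['id']}: {wp['claim']} (watch: {wp['invalidated_if']})"
        for wp in doc["watchpoints"]
    ]

for line in skim_watchpoints(SAMPLE):
    print(line)
```

The point of the skim is the `invalidated_if` column: it tells you what to re-check before citing the bundle in an opinion or board pack.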
Use the Policies & Overlays and Reading Stack indexes to find the live RSP, ASL-3 docs, evaluation reports, and key cases. Cite those directly in your work product, including version or date, rather than treating this pack as the primary source.
Ask what would have to change in Anthropic’s policies, product behaviour, or law for your advice to become misleading. Pay special attention when you are making claims about AI welfare, rights, or penumbral privacy holdings; keep those tethered to the underlying texts and uncertainty notes.
When you make an accept/mitigate/defer/decline call, use the S5/S6 tools to capture the decision, the upstream sources and bundle pages you relied on, and any RPE watchpoints that were especially salient. Treat that record as part of your institutional memory.
The Risk-Prediction Engine (RPE) runs in the background for this project. For S1–S6, it surfaces a small set of watchpoints you should keep in view when you rely on the brief in opinions, board packs, or redlines.