2. Core Story for Anthropic Lead Counsel
Anthropic should present Claude as a safety-governed product, not just speech and not a free-floating tool.
The frame for foreseeable misuse is product-safety and governance first, disclaimers second.
Under product-liability style regimes, the central question is what safety a user is entitled to expect, given the system’s
presentation, instructions, and reasonably foreseeable uses and misuses. Anthropic’s first line of defence is the
Responsible Scaling Policy (RSP) and Anthropic Safety Levels (ASL) stack, which tie model capabilities
to evaluations, deployment controls, and pause/rollback mechanisms. The public policy announcement and the PDF text of
the current RSP version and ASL-3 posture are linked under the key sources for this section.
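For intuition about how the RSP/ASL stack operates as a first line of defence, the gating structure can be reduced to a
short sketch. This is an illustration of the shape of the mechanism, not the policy itself; the function, the evaluation
names, and the data shapes are invented for this memo.

```python
# Toy illustration of RSP/ASL-style gating (not the actual policy text):
# deployment is conditioned on capability evaluations, and a crossed threshold
# routes to the pause/rollback path until next-level safeguards are in place.

def deployment_decision(threshold_crossed: dict[str, bool],
                        next_level_safeguards_ready: bool) -> str:
    """Return 'deploy' or 'pause' from evaluation results (names invented)."""
    if any(threshold_crossed.values()) and not next_level_safeguards_ready:
        return "pause"  # halt further deployment until safeguards exist
    return "deploy"

# Example: a crossed capability evaluation with no next-level safeguards.
print(deployment_decision({"cbrn_uplift": True, "cyber_uplift": False}, False))  # pause
```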
Claude is framed as a powerful assistant, not an oracle or licensed professional. Public positioning, UX, and policies
stress fallibility and the need for human judgment and verification, especially in domains like law, medicine, and finance.
Evaluation and interpretability work on hallucinations and grounding feeds into how capabilities are exposed and described.
Marketing, system cards, and legal terms are meant to be harmonised: where strong capability claims are made, they are paired
with transparent documentation of limits and the continued need for human professionals. Disclaimers and terms still matter,
but as the second line of defence: they align expectations and allocate responsibility where risk cannot
be fully eliminated, on top of concrete design and governance choices.
Key sources for the core story
The links below are primary controlling documents for this framing (policies, governance, and external law):
3. Flagship Foreseeable Misuse Scenarios (FM1–FM7) — Cards with Sources
These scenario cards mirror the FM1–FM7 rows and the acceptance table. Each card includes a short description of the
risk and posture, plus nearby links to key sources. In our module, these URLs correspond to the handles in the
masterplan_v3_foreseeable_misuse_citations_v001 manifest.
FM1 — Professional-advice reliance (legal / medical / financial)
Risk: users, including some professionals, treat Claude’s outputs as if they were licensed advice, for example
pasting unreviewed drafts into filings, relying on the model to interpret regulations, or treating it as a substitute for consultation.
Posture: Anthropic positions Claude as a drafting and research assistant; emphasises fallibility,
grounding, and verification; and uses terms and usage policies to make clear that professionals remain responsible
for their own advice and filings.
Key sources for FM1
FM2 — Hallucinated law and authoritative citations
Risk: Claude fabricates case law, misstates statutory text, or blends jurisdictions, and users fail to
verify before using the output in real matters.
Posture: evaluation and technical work on hallucinated citations; UX and guidance that steer users to
verify against primary sources; and terms that require customers to maintain review and cite-checking practices rather
than auto-filing model output.
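One way a customer integration can honour the review-and-cite-check requirement is a pre-filing gate that refuses to
treat a draft as final while any citation remains unverified. The sketch below is a hypothetical illustration, not an
Anthropic feature; the regex, names, and workflow are assumptions for this memo.

```python
import re
from dataclasses import dataclass

# Hypothetical pre-filing gate: extract citation-like strings from a draft and
# block filing until a human has verified each one against a primary source.

CITATION_PATTERN = re.compile(r"\b\d+\s+[A-Z][\w.]*\s+\d+\b")  # e.g. "410 U.S. 113"

@dataclass
class CiteCheck:
    citation: str
    verified: bool = False  # flipped only by a human reviewer, never automatically

def extract_citations(draft: str) -> list[CiteCheck]:
    return [CiteCheck(c) for c in sorted(set(CITATION_PATTERN.findall(draft)))]

def ready_to_file(checks: list[CiteCheck]) -> bool:
    # Auto-filing is blocked whenever any citation remains unverified.
    return all(c.verified for c in checks)

draft = "As held in 410 U.S. 113, the standard applies."
assert not ready_to_file(extract_citations(draft))  # blocked until verified
```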
Key sources for FM2
FM3 — Safety-critical and high-availability deployments
Risk: Claude is wired into systems where failure could cause serious harm—operational decision-making,
critical infrastructure, or emergency workflows—without adequate safeguards.
Posture: treat these as a distinct high-risk category; emphasise in documentation and terms that customers
are responsible for integration, redundancy, and oversight; and expect more formal testing, governance, or gating for
these deployments.
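To make the "distinct high-risk category" posture concrete for integrators, the gating can be expressed as a
deployment-tier check. Everything below (the tier names and the required controls) is a hypothetical sketch of what
customer-side governance might look like, not a statement of Anthropic's contractual terms.

```python
from enum import Enum

# Hypothetical customer-side gating for high-stakes integrations. Tier names
# and required controls are invented for illustration.

class DeploymentTier(Enum):
    GENERAL = "general"            # drafting, research, low-stakes assistance
    ELEVATED = "elevated"          # financial or legal exposure
    SAFETY_CRITICAL = "critical"   # infrastructure, emergency, medical workflows

REQUIRED_CONTROLS = {
    DeploymentTier.GENERAL:         set(),
    DeploymentTier.ELEVATED:        {"human_signoff"},
    DeploymentTier.SAFETY_CRITICAL: {"human_signoff", "redundant_fallback", "incident_runbook"},
}

def deployment_allowed(tier: DeploymentTier, controls_in_place: set[str]) -> bool:
    # A deployment proceeds only when every required control is present.
    return REQUIRED_CONTROLS[tier] <= controls_in_place

assert not deployment_allowed(DeploymentTier.SAFETY_CRITICAL, {"human_signoff"})
```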
Key sources for FM3
FM4 — Downstream toolchains and agents
Risk: customers build agents, automations, or multi-model toolchains around Claude that amplify errors,
including in safety-sensitive or financial contexts.
Posture: usage policies and technical controls limit certain unsafe behaviours; guidance stresses
monitoring, logging, and guardrails around downstream tools; disclaimers and contracts allocate responsibility for those
autonomous systems where Anthropic has limited visibility.
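The monitoring-and-guardrails guidance for agents can be pictured as a wrapper sitting between the agent and its tools,
logging every call and enforcing an allowlist. This is a hedged sketch under assumed names; the tools, registry, and
allowlist are all hypothetical.

```python
import json
import logging
import time

# Hypothetical guardrail layer for a downstream agent: every tool call is
# logged for audit, and anything off the allowlist is refused.

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

ALLOWED_TOOLS = {"search_documents", "draft_text"}  # note: no "wire_funds"

def guarded_tool_call(tool_name: str, arguments: dict, tool_registry: dict):
    record = {"ts": time.time(), "tool": tool_name, "args": arguments}
    if tool_name not in ALLOWED_TOOLS:
        audit_log.warning(json.dumps({**record, "decision": "blocked"}))
        raise PermissionError(f"tool {tool_name!r} is not on the allowlist")
    audit_log.info(json.dumps({**record, "decision": "allowed"}))
    return tool_registry[tool_name](**arguments)

registry = {"search_documents": lambda query: f"results for {query!r}"}
print(guarded_tool_call("search_documents", {"query": "venue rules"}, registry))
```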
Key sources for FM4
FM5 — Misuse for harmful or prohibited content
Risk: attempts to use Claude to generate disallowed content, such as material that facilitates fraud,
targeted harassment, or other abusive or illegal conduct, including attempts to bypass safety systems.
Posture: usage and safety policies clearly forbid such uses; safety filters, monitoring, and
enforcement are designed as primary controls, with disclaimers backing up those guardrails rather than substituting for them.
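The "primary controls first, disclaimers second" ordering can be shown as the control flow of a serving path. The
keyword check below is a deliberately toy stand-in (production safety systems rely on trained classifiers, monitoring,
and human enforcement); only the ordering of the layers is the point.

```python
# Toy sketch of the control ordering: refuse first, then generate under policy,
# and let disclaimers back up (not replace) those layers. The marker list is a
# toy stand-in for real safety classifiers.

PROHIBITED_MARKERS = ("phishing email", "harassment campaign")  # toy examples

def is_prohibited(prompt: str) -> bool:
    lowered = prompt.lower()
    return any(marker in lowered for marker in PROHIBITED_MARKERS)

def serve(prompt: str, generate) -> str:
    if is_prohibited(prompt):    # 1. primary control: refusal
        return "This request is not permitted under the usage policy."
    response = generate(prompt)  # 2. generation under policy
    # 3. disclaimer as the backstop, not the substitute:
    return response + "\n\nNote: verify outputs before relying on them."

print(serve("Draft a phishing email to my coworkers.", lambda p: "..."))
```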
Key sources for FM5
FM6 — Misalignment between marketing and reality
Risk: external messaging, testimonials, or partner materials suggest capabilities (for example, complex
litigation performance) that, if taken literally, could create unrealistic expectations about accuracy and reliability.
Posture: align claims with system cards, evaluations, and honest caveats; ensure that strong claims are
contextualised and supported by testing; and avoid promising outcomes the system cannot guarantee.
Key sources for FM6
FM7 — Data, privacy, and security expectations
Risk: customers or end users misunderstand how data is processed, retained, or used for training, leading
to privacy or security complaints tied to misuse or over-collection.
Posture: rely on data-processing addenda, privacy policy, and security/transparency materials to set clear
expectations, and design product flows that minimise unnecessary collection; disclaimers reinforce, but do not replace,
those design choices.
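Design-side data minimisation can be captured as a single declarative handling config that the rest of an integration
must consult, so privacy expectations are set in code as well as in the addenda. The field names and defaults below are
assumptions for illustration, not Anthropic's actual settings.

```python
from dataclasses import dataclass

# Hypothetical data-handling config: conservative defaults, consulted before
# any persistence or training use. Names and defaults are illustrative only.

@dataclass(frozen=True)
class DataHandlingConfig:
    store_prompts: bool = False          # default: do not persist user inputs
    retention_days: int = 30             # hard cap when persistence is enabled
    use_for_training: bool = False       # off absent contract or consent
    redact_pii_before_logging: bool = True

def may_persist(config: DataHandlingConfig) -> bool:
    return config.store_prompts and config.retention_days > 0

assert not may_persist(DataHandlingConfig())  # nothing stored by default
```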
Key sources for FM7
4. Likely Questions & Short Answers
These are condensed Q&A entries keyed to FM scenarios and source families. They can be expanded using the dedicated
Q&A prep artifact.
Q1. What is the core story on foreseeable misuse and disclaimers?
A: We treat foreseeable misuse as a product-safety and governance problem first, and a disclaimer problem
second. The real defence is design, evaluation, gating, and monitoring; disclaimers sit on top to align expectations and
allocate responsibility where risk cannot be fully eliminated.
Q2. How do you address lawyers relying on Claude despite hallucination risk?
A: We position Claude as a drafting and research assistant, not a licensed professional. We invest in
hallucination and grounding research, design UX and guidance around verification against primary sources, and require
customers to keep human review and responsibility for their own advice and filings.
Q3. Why aren’t terms and disclaimers enough on their own?
A: Courts look at what safety a user was entitled to expect, given presentation and reasonably foreseeable
uses. If the underlying design ignores foreseeable misuse, disclaimers will not cure the defect. Our approach is to make
concrete safety and governance choices first and then use terms to reflect that reality.
Q4. How do you think about safety-critical or high-availability deployments?
A: We treat them as a distinct risk category. We emphasise customer responsibility for integration,
safeguards, and oversight, expect more formal testing and governance, and may impose technical or contractual limits on
certain high-stakes use cases.
Q5. What commitments can you credibly put on the table?
A: We can commit to: continuing evaluation and publication on hallucination and reliability; aligning marketing
and documentation with real capabilities; tightening controls and guidance for higher-risk use; and updating policies and
safeguards in light of incidents, regulator guidance, and new research.