P3.10 · D5 + D4 · Process

Conversational AI Patterns.

Think of this as the long-running version of the support agent. The one that keeps making sense on turn 15 just like it did on turn 1. When a customer comes back five messages later, this agent still knows who they are, what they decided, and what they explicitly asked it not to do. It exists because most production conversations are not three turns; they are fifteen, and a system that quietly forgets the customer ID by turn eight is worse than no system at all. Everything below is how to keep a conversation feeling like a conversation, even after the underlying context has been compressed.

24 min build · 5 components · 8 concepts

A multi-turn conversational pattern that survives context compression. The harness pins a CASE_FACTS block at the top of every system-prompt iteration (immutable, re-read every turn), summarizes turns 2-14 into 3 lines at turn 15, gates clarification through a PreToolUse hook (not the prompt), respects explicit human-handoff requests immediately (not sentiment), and keeps the agent reading stop_reason rather than message text. Confirmed on the real exam by two independent test-takers as one of the highest-leverage scenarios outside the published guide.

35% exam weight
Source: Beyond-guide scenario · empirically witnessed on the real CCA-F exam
What do the colours mean?
Green
Official Anthropic doc or API contract
Yellow
Partial doc / inferred
Orange
Community-derived
Red
Disputed / changes frequently
Stack
Python or TypeScript SDK · CRM · session store
Needs
Tool calling · stop_reason · case-facts pattern
Exam
35% of CCA-F (D5 + D4). 15% D5 · 20% D4. Highest-weight scenario on the test. Master this one and you've covered most of it.
Loop — the ACP mascot — illustrated as a calm customer-support agent at a walnut desk with headset, notebook, and a small speech-bubble holding an inbound question.
End-to-end flow · 35% of CCA-F (D5 + D4)
01 · Problem framing

The problem

What the customer needs

  1. Pick up the conversation at turn 15 and still see the customer ID, decision, and contact preference from turn 2.
  2. Be honored immediately when they say 'I want to speak to a human'. No negotiation, no 'let me try first'.
  3. Not be re-asked the same clarifying question three turns after they already answered it.

Why naive approaches fail

  1. Single-block message history hits lost-in-the-middle by turn 9; the agent loses the order ID and re-asks.
  2. Prompt-only clarification language ('don't repeat questions') leaks 8% of cases; the agent re-asks anyway.
  3. Sentiment-triggered escalation creates 50% false positives: angry-but-valid customers get escalated unnecessarily.
Definition of done
  • Turn-15 retrieval of customer_id + prior decision = 100% (case-facts pinned, never summarized)
  • Repeat-clarification rate < 1% (programmatic prerequisite block, not prompt language)
  • False-escalation rate < 5% (policy-gap + explicit-request triggers only, sentiment ignored)
  • p95 turn latency < 5s including hook overhead and history compression
02 · Architecture

The system

03 · Component detail

What each part does

5 components; each owns one concept.

Case-Facts Block

immutable customer state, top of prompt

Pinned at the very top of every system-prompt iteration. Holds customer_id, decision_made, contact_preference, escalation_requested, policy_cap. Survives history compression and is re-read every turn. That is the entire point.

Configuration

system: f"CASE_FACTS:\n customer={cust_id} · decision={decision} · contact={pref} · escalated={escalated}". Updated by hooks after any state-changing tool call. Never paraphrased; always exact.

Concept: case-facts-block

Session State Manager

decisions + flags between turns

Tracks the structured state that case-facts cannot: which clarification questions have been answered, which tool results are still in play, whether the customer has explicitly asked for a human. Updated post-each-tool-call. Read by the hook before any subsequent tool dispatch.

Configuration

state: {clarifications_answered: [order_id, refund_or_credit], last_tool_result, escalation_requested: false, contact_preference: 'email'}. Persisted in session store, loaded into prompt as a serialized block.

Concept: session-state

History Summarizer

turns 2-14 → 3 lines at turn 15

Watches conversation length. When the message list exceeds 15 entries, replaces turns 2-N-1 with a single 3-line summary preserving decisions, not transcripts. Case-facts stays untouched at the prompt top. The summary lives in the message list, not in case-facts.

Configuration

if len(messages) > 15: summary = compress_to_3_lines(messages[1:-1]); messages = [messages[0], {'role': 'user', 'content': summary}, messages[-1]]. Keeps token count flat while preserving decision continuity.

Concept: context-window

Clarification Gate Hook

PreToolUse · prerequisite block

Sits between Claude's tool_use request and tool execution. If a downstream tool needs verified_id and case-facts.verified_id is null, exits 2 with a deterministic message routing Claude to call get_customer first. This is the difference between probabilistic prompt language and 100% prerequisite enforcement.

Configuration

Hook fires before process_refund / update_account / escalate_to_human. Reads case_facts.verified_id and conversation flags. Exit 2 with stderr message routes Claude back; no leakage, no exceptions.

Concept: hooks

Stop-Reason Loop Control

branch on the field, not the text

Reads stop_reason after every API response. end_turn → exit cleanly. tool_use → execute, append result, continue. max_tokens → save partial state and escalate (never silently truncate). Never branches on response text containing 'done' or 'goodbye'.

Configuration

while True: resp = client.messages.create(...). if resp.stop_reason == "end_turn": return. if resp.stop_reason == "tool_use": dispatch + append. if resp.stop_reason == "max_tokens": persist + escalate.

Concept: agentic-loops
04 · One concrete run

Data flow

05 · Build it

Eight steps to production

01

Define the case-facts anchor block

Pin the immutable customer facts at the very top of the system prompt. These survive compression, are re-read every turn, and are never paraphrased. The block is the single load-bearing pattern of the whole scenario. Get this wrong and turn 15 forgets turn 2.

Define the case-facts anchor block
from anthropic import Anthropic
client = Anthropic()

def build_system_prompt(case_facts: dict) -> str:
    return f"""You are a conversational support agent.

CASE_FACTS (immutable; re-read every turn; never paraphrased):
- customer_id: {case_facts['customer_id']}
- decision_made: {case_facts.get('decision_made', 'none')}
- contact_preference: {case_facts.get('contact_preference', 'unset')}
- escalation_requested: {case_facts.get('escalation_requested', False)}
- policy_cap: ${case_facts.get('cap', 500)}

Constraints:
- Branch on stop_reason. Never on response text.
- If escalation_requested is True: route to human queue, no negotiation.
- If a clarifying question was already answered, do not re-ask (state below)."""
↪ Concept: case-facts-block
02

Build the session-state structure

Case-facts holds immutable customer state; session-state holds the conversational state. Answered clarifications, last tool result, escalation flag. Together they replace the lost-in-the-middle problem with structural retrieval. Loaded into the prompt as a serialized block right after CASE_FACTS.

Build the session-state structure
def build_session_block(state: dict) -> str:
    return f"""SESSION_STATE (updated post-each-turn):
- clarifications_answered: {state.get('clarifications_answered', [])}
- last_tool: {state.get('last_tool')} → {state.get('last_tool_result_summary')}
- escalation_requested: {state.get('escalation_requested', False)}
- contact_preference: {state.get('contact_preference', 'email')}
"""

# After every tool call, update + persist
state['clarifications_answered'].append('order_id')
state['last_tool'] = 'lookup_order'
state['last_tool_result_summary'] = 'order_status=delivered'
session_store.save(case_facts['customer_id'], state)
↪ Concept: session-state
03

Wire the PreToolUse clarification hook

Programmatic prerequisite enforcement. Before any account-modifying tool, the hook checks case-facts + session-state. Missing prerequisites → exit 2 with a structured stderr message; Claude reads it and routes to the prerequisite tool first. Prompt language alone leaks 8%; this hook is 100%.

Wire the PreToolUse clarification hook
# .claude/hooks/clarification_gate.py
import sys, json

def main():
    payload = json.loads(sys.stdin.read())
    tool_name = payload["tool_name"]
    case_facts = payload.get("case_facts", {})
    session = payload.get("session_state", {})

    # Account-modifying tools require verified identity
    if tool_name in ("process_refund", "update_account", "escalate_to_human"):
        if not case_facts.get("verified_id"):
            print("verified_id missing. Call get_customer first", file=sys.stderr)
            sys.exit(2)

    # Honor explicit escalation request. No further tool calls
    if session.get("escalation_requested") and tool_name != "escalate_to_human":
        print("user requested human; route to escalation queue, no other tools", file=sys.stderr)
        sys.exit(2)

    sys.exit(0)

if __name__ == "__main__":
    main()
↪ Concept: hooks
04

Run the loop on stop_reason, not text

The single most-tested distractor in this scenario is parsing response text for 'done'. Claude can return text + tool_use in the same message; the structured stop_reason field is the only authoritative termination signal. Branch on it. Always.

Run the loop on stop_reason, not text
def run_conversation_turn(user_msg: str, case_facts: dict, state: dict, max_iter: int = 12):
    messages = load_history(case_facts['customer_id']) + [{"role": "user", "content": user_msg}]
    system = build_system_prompt(case_facts) + "\n\n" + build_session_block(state)

    for _ in range(max_iter):
        resp = client.messages.create(
        model="claude-sonnet-4-5",
            max_tokens=2048,
            system=system,
            tools=tools,
            messages=messages,
        )
        if resp.stop_reason == "end_turn":
            persist_history(case_facts['customer_id'], messages, resp)
            return extract_text(resp)
        if resp.stop_reason == "tool_use":
            tool_uses = [b for b in resp.content if b.type == "tool_use"]
            results = [execute_tool(t, case_facts, state) for t in tool_uses]
            messages.append({"role": "assistant", "content": resp.content})
            messages.append({"role": "user", "content": results})
            update_state(state, tool_uses, results)
            continue
        if resp.stop_reason == "max_tokens":
            persist_partial(case_facts, state, resp)
            return {"status": "partial_escalate", "text": extract_text(resp)}
    return {"status": "iteration_cap"}
↪ Concept: agentic-loops
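The loop above assumes an extract_text helper that the scenario never defines. A minimal sketch, under the assumption that resp.content is a list of typed blocks as in the Python SDK:

```python
def extract_text(resp) -> str:
    # Concatenate the text blocks of a response, skipping tool_use blocks.
    # Assumes resp.content is a list of objects with .type (and .text on text blocks).
    return "".join(block.text for block in resp.content if block.type == "text")
```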
05

Compress conversation history at turn 15

When the message list exceeds 15 entries, replace turns 2 through N-1 with a single summary that preserves decisions, not transcripts. Case-facts stays at the prompt top, untouched. The summary lives in the message list. This frees ~40% of tokens with zero decision loss.

Compress conversation history at turn 15
def compress_history(messages: list, threshold: int = 15) -> list:
    if len(messages) <= threshold:
        return messages

    # Preserve the original user message + the most recent exchange.
    first = messages[0]
    last = messages[-1]
    middle = messages[1:-1]

    # Summarize: extract decision points only (not full transcript)
    decisions = extract_decisions(middle)  # e.g. ["user_asked_for_refund", "agent_verified_customer", "policy_allowed_$50"]
    summary = "CONVERSATION SUMMARY (turns 2-" + str(len(messages) - 1) + "):\n" + "\n".join(f"- {d}" for d in decisions)

    return [first, {"role": "user", "content": summary}, last]

# Usage in the turn loop
messages = compress_history(messages, threshold=15)
↪ Concept: context-window
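compress_history leans on an extract_decisions helper that is left undefined. One hedged sketch using a cheap keyword pass; a production version would more likely make a small summarization call:

```python
# Hypothetical marker list; tune per workload.
DECISION_MARKERS = ("refund", "credit", "verified", "policy", "escalat", "prefer")

def extract_decisions(middle: list) -> list[str]:
    # Keep only lines that look like decision points, dropping chit-chat.
    decisions = []
    for msg in middle:
        content = msg["content"] if isinstance(msg["content"], str) else ""
        for line in content.splitlines():
            if any(marker in line.lower() for marker in DECISION_MARKERS):
                decisions.append(line.strip())
    return decisions
```

The keyword pass is deterministic and free; the trade-off is recall on decisions phrased without any marker word.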
06

Honor explicit human-handoff requests immediately

When the customer says 'speak to a human', the agent does not negotiate. The hook flips session_state.escalation_requested → true. The next tool dispatch is escalate_to_human; everything else is blocked. Sentiment is orthogonal. Angry customers with valid requests still get the answer first.

Honor explicit human-handoff requests immediately
# Detection lives in the harness (deterministic phrase match); latching lives in state
EXPLICIT_HUMAN_PHRASES = [
    "speak to a human",
    "talk to a person",
    "i want a human",
    "transfer me",
    "give me a human",
]

def detect_explicit_handoff(user_msg: str) -> bool:
    msg = user_msg.lower()
    return any(phrase in msg for phrase in EXPLICIT_HUMAN_PHRASES)

def handle_user_turn(user_msg: str, case_facts: dict, state: dict):
    if detect_explicit_handoff(user_msg):
        state['escalation_requested'] = True
        # The next agent loop will see this in session_state and the hook
        # will block any tool except escalate_to_human.
    return run_conversation_turn(user_msg, case_facts, state)

# Sentiment is intentionally NOT consulted here.
↪ Concept: escalation
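A quick sanity check of the phrase latch, restating the list above so the snippet stands alone. Note that a plainly angry message without an explicit request does not latch:

```python
EXPLICIT_HUMAN_PHRASES = [
    "speak to a human",
    "talk to a person",
    "i want a human",
    "transfer me",
    "give me a human",
]

def detect_explicit_handoff(user_msg: str) -> bool:
    msg = user_msg.lower()
    return any(phrase in msg for phrase in EXPLICIT_HUMAN_PHRASES)

assert detect_explicit_handoff("Please TRANSFER ME to billing")
# Angry but no explicit request: the latch stays off, sentiment stays orthogonal.
assert not detect_explicit_handoff("my order never arrived and I'm furious")
```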
07

Cache the system prompt + tools

System prompt + tool definitions are stable across turns; only case-facts and session-state change. Mark the stable parts with cache_control: ephemeral and pay ~90% less for those bytes on every turn after the first. With 5-min TTL on continuous traffic, hit rate stays above 70%.

Cache the system prompt + tools
# Split system into stable (cached) + dynamic (fresh) blocks
resp = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=2048,
    system=[
        {
            "type": "text",
            "text": STABLE_SYSTEM_PREAMBLE,  # role + constraints, never changes
            "cache_control": {"type": "ephemeral"},
        },
        {
            "type": "text",
            "text": build_case_facts_block(case_facts) + build_session_block(state),  # changes per turn
        },
    ],
    tools=tools,  # tools precede system in the cache prefix, so the breakpoint above covers them
    messages=messages,
)
# Inspect resp.usage.cache_creation_input_tokens / cache_read_input_tokens to verify hit rate.
↪ Concept: prompt-caching
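Verifying the ≥70% hit-rate target from usage data can be scripted. A sketch over the usage counters named in the comment above, assuming input_tokens reports only the uncached portion when caching is active:

```python
def cache_hit_rate(usage_records: list[dict]) -> float:
    # Fraction of prompt tokens served from cache across a set of turns.
    read = sum(u.get("cache_read_input_tokens", 0) for u in usage_records)
    created = sum(u.get("cache_creation_input_tokens", 0) for u in usage_records)
    fresh = sum(u.get("input_tokens", 0) for u in usage_records)
    total = read + created + fresh
    return read / total if total else 0.0

# Illustrative turns: the first writes the cache, the rest read it.
turns = [
    {"cache_creation_input_tokens": 2000, "input_tokens": 300},
    {"cache_read_input_tokens": 2000, "input_tokens": 320},
    {"cache_read_input_tokens": 2000, "input_tokens": 310},
]
```

The rate climbs toward the target as more turns reuse the warm prefix; a three-turn window like this sits well below 70%, which is expected.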
08

Audit-log the conversation arc

Every closed conversation writes a structured row: customer_id, turn_count, tool_calls_in_order, escalation_reason (if any), elapsed_ms_total, csat. Skip the full transcript. The structured trace is enough to replay any failure and is 50× smaller. Store for 90 days minimum.

Audit-log the conversation arc
from datetime import datetime

def audit_conversation(case_facts: dict, state: dict, agent_path: list, elapsed_ms: int, csat: int | None):
    db.audit.insert({
        "ts": datetime.utcnow(),
        "customer_id": case_facts["customer_id"],
        "turn_count": len(agent_path),
        "tool_calls": [c["name"] for c in agent_path if c.get("type") == "tool_use"],
        "stop_reasons": [c.get("stop_reason") for c in agent_path],
        "escalation_reason": state.get("escalation_reason"),
        "compression_fired_at_turn": state.get("compression_at_turn"),
        "elapsed_ms": elapsed_ms,
        "csat": csat,
    })
↪ Concept: evaluation
06 · Configuration decisions

The four decisions

Decision: Multi-turn customer state
  Right answer: case-facts block at top of prompt + session-state block
  Wrong answer: progressive summarization of customer_id + amount
  Why: Transactional values must be pinned, never paraphrased. Summarization erodes precision; case-facts is structural.

Decision: Customer says 'speak to a human'
  Right answer: set escalation_requested → block all tools except escalate_to_human
  Wrong answer: negotiate ('let me try once more') or suggest alternatives
  Why: Explicit user requests are non-negotiable. The cost of overriding a stated preference (churn, complaint escalation) exceeds any benefit of one more attempt.

Decision: Long conversation fills the context
  Right answer: compress turns 2 through N-1 into 3 lines at turn 15; case-facts stays untouched
  Wrong answer: keep all messages OR summarize the case-facts block
  Why: Conversation history has diminishing returns; case-facts are structural. Compress history, never facts.

Decision: Angry customer with valid request
  Right answer: process the request normally; sentiment does not trigger escalation
  Wrong answer: escalate on negative sentiment
  Why: Sentiment is orthogonal to escalation need. Angry-but-valid customers should get the answer; only policy gaps, tool limits, and explicit requests warrant escalation.
07 · Failure modes

Where it breaks

Five failure pairs. Each one is one exam question. The fix is always architectural: deterministic gates, structured fields, pinned state.

Context loss after compression

By turn 9, agent has summarized cust_4711's order ID to 'a recent order'. Treats turn 10 as a new conversation.

AP-35
✅ Fix

Pin CASE_FACTS at top of system prompt. Re-read every turn. Never paraphrased. Compression only touches the message list, never the case-facts block.

Prompt-only clarification gate

System prompt says 'do not re-ask answered questions'. Agent re-asks 'which order?' on turn 4 and turn 8. 8% leakage.

AP-02
✅ Fix

Track answered clarifications in session_state. PreToolUse hook checks state and blocks downstream tools if a prerequisite clarification is unanswered. Deterministic, not probabilistic.

Conversation history inflation

50 turns fill the context window. Lost-in-the-middle effect drops the order ID. Agent makes contradictory recommendations.

AP-03
✅ Fix

At turn 15, summarize turns 2 through N-1 into 3 lines preserving decisions only. Case-facts stays at prompt top. Frees ~40% tokens with zero decision loss.

No session-state tracking

User said 'I do not want to be contacted by phone' on turn 3. Agent suggests phone callback on turn 9.

AP-04
✅ Fix

Persist session_state with contact_preference + decision flags. Read into prompt every turn alongside case-facts.

Sentiment-based escalation

Angry customer with a valid refund request is escalated because tone is negative. 50% false-positive rate, customers learn that anger = faster service.

AP-22
✅ Fix

Escalation triggers only on (a) policy gap, (b) tool limit, (c) explicit user request. Sentiment is logged for reporting but never gates escalation.
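The escalation criteria above reduce to a tiny predicate. A sketch, with illustrative names; the point is that the sentiment argument exists only so it can be logged, never consulted:

```python
def should_escalate(policy_gap: bool, tool_limit: bool, explicit_request: bool,
                    sentiment_score: float = 0.0) -> bool:
    # Sentiment is accepted for logging symmetry but deliberately ignored.
    del sentiment_score
    return policy_gap or tool_limit or explicit_request

assert should_escalate(False, False, True)             # 'I want a human' → escalate
assert not should_escalate(False, False, False, -0.9)  # angry but valid → keep solving
```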

08 · Budget

Cost & latency

Per-conversation tokens
~4,800 input · 1,800 output (avg 12 turns)

12 avg turns × (cached system + tools + dynamic case-facts/session-state + accumulating history). Cache hits ~70% on stable preamble + tools.

Per-conversation cost
~$0.022 (Sonnet 4.5)

Pre-cache: ~$0.05. With ephemeral cache on stable system + tools: ~$0.022. ~55% reduction on long conversations.

p95 turn latency
~4.6 seconds

Streaming first token in ~150ms. Tool round-trips 1.5-2s each. Average 1.5 tool calls per turn + hook (<100ms) + compose.

History compression saving
~40% token reduction post-turn-15

12 verbose message blocks (≈8K tokens) → 3-line summary (≈250 tokens). Frees the long-tail conversations from lost-in-the-middle and OOM-on-history.

Cache hit rate
≥ 70% on stable system + tools

5-min ephemeral TTL on stable preamble + tool definitions. Continuous chat traffic keeps cache warm. Per-turn case-facts/session-state stays fresh, as it should.
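The per-conversation cost figures can be reproduced with a small calculator. The prices below are illustrative assumptions, not current list prices; check current pricing before relying on the output:

```python
def convo_cost(input_tokens: int, output_tokens: int, cached_frac: float,
               in_price: float = 3.0, out_price: float = 15.0,
               cache_read_mult: float = 0.1) -> float:
    # Prices in $ per million tokens; cache reads assumed at 10% of input price.
    cached = input_tokens * cached_frac
    fresh = input_tokens - cached
    return (fresh * in_price
            + cached * in_price * cache_read_mult
            + output_tokens * out_price) / 1_000_000

uncached = convo_cost(4800, 1800, 0.0)   # no cache hits
warm = convo_cost(4800, 1800, 0.7)       # 70% of input served from cache
```

Actual savings depend on the real hit rate and on how much of the input is the stable prefix.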

09 · Ship gates

Ship checklist

Two passes. Build-time gates verify the code; run-time gates verify the system in production.

Build-time

  1. Case-facts block pinned at top of every system-prompt iteration · case-facts-block
  2. Session-state block serialized into prompt after case-facts · session-state
  3. PreToolUse clarification gate hook (deterministic prerequisite block) · hooks
  4. Loop branches on stop_reason, never on response text · agentic-loops
  5. History summarizer fires at turn 15; case-facts left untouched · context-window
  6. Explicit human-handoff phrase detection latches escalation_requested = true · escalation
  7. Sentiment is logged but never gates escalation
  8. Stable system preamble cached with cache_control: ephemeral · prompt-caching
  9. Tool definitions cached (unchanged across turns) · tool-calling
  10. Audit log per closed conversation (structured, not transcript)
  11. Iteration cap (max_iter=12) as a safety net, not the primary control

Run-time

  • Unit tests for case-facts block construction (immutable, ordered, exact-string)
  • Integration test: 20-turn conversation with case-facts assertion at every turn
  • Hook test: missing verified_id → exit 2; escalation_requested + non-escalate tool → exit 2
  • Compression test: at turn 15, message list shrinks; case-facts unchanged; decisions preserved
  • Explicit-handoff test: 'I want a human' → escalation_requested latches true → hook blocks other tools
  • Latency monitor: alert if p95 turn-latency > 6s for ≥ 5 min
  • Cost monitor: alert if per-conversation cost > $0.04 (signals cache hit rate dropped)
  • False-escalation monitor: alert if sentiment-only escalations exceed 1% of total
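The two hook gates in the checklist can be unit-tested without spawning the hook process by factoring the decision into a pure function. A sketch mirroring step 03 (the function name is a stand-in):

```python
def gate_decision(tool_name: str, case_facts: dict, session: dict) -> int:
    # Pure-function mirror of the PreToolUse hook's two gates:
    # returns the exit code the hook would use (0 = allow, 2 = block).
    if tool_name in ("process_refund", "update_account", "escalate_to_human"):
        if not case_facts.get("verified_id"):
            return 2  # identity prerequisite missing
    if session.get("escalation_requested") and tool_name != "escalate_to_human":
        return 2  # human requested; only the escalation tool may run
    return 0

# Checklist cases: missing verified_id blocks; latched escalation blocks other tools.
assert gate_decision("process_refund", {}, {}) == 2
assert gate_decision("lookup_order", {}, {"escalation_requested": True}) == 2
assert gate_decision("escalate_to_human", {"verified_id": "c1"},
                     {"escalation_requested": True}) == 0
```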
10 · Question patterns

Five exam-pattern questions

By turn 8 of a long conversation, your agent has lost the customer's order ID and refund amount. The agent treats turn 9 as if it were turn 1. What is the architectural fix?
Pin a CASE_FACTS block at the very top of every system-prompt iteration. The block holds customer_id, order_id, refund_amount, policy_cap, decision_made, contact_preference, escalation_requested. It is immutable and re-read every turn. Transactional values (IDs, amounts) must never be summarized; only conversation reasoning chains can be paraphrased. When the message list grows past 15, compress turns 2 through N-1 into a 3-line summary. The case-facts block stays untouched at the prompt top. Tagged to AP-35.
Your agent asks 'which order?' on turn 4 and again on turn 8, even though the customer specified the order ID on turn 2. The system prompt says 'do not re-ask answered questions'. What's leaking?
Prompt-only clarification gates leak ~8% in production because the model is probabilistic about following instructions. The fix is structural: track answered clarifications in session-state (clarifications_answered: ['order_id', 'refund_or_credit']). A PreToolUse hook reads session-state before any downstream tool dispatch and exits 2 if a prerequisite clarification is unanswered. The hook is deterministic; the prompt is probabilistic. For business-critical guarantees, structural beats linguistic. Tagged to AP-02.
A 15-turn conversation has filled 60% of the context window. The agent begins making contradictory recommendations (suggests escalation on turn 12, then re-engages with solving on turn 14). What approach minimizes token waste while preserving conversation quality?
Separate case-facts (immutable, pinned to prompt top) from conversation history (the message list). At turn 15, replace turns 2 through N-1 in the message list with a single 3-4 line summary that preserves decisions only: 'user requested refund · agent verified customer · policy allows full refund · customer chose refund over credit'. Discard the verbose back-and-forth. Case-facts is untouched. It was never going to be summarized. This frees ~40% of tokens with zero decision loss.
On turn 5, the customer says 'I want to speak to a human'. Your agent says 'Let me see if I can solve this for you first' and continues the conversation. Why is this wrong, and how do you fix it architecturally?
Explicit customer requests are non-negotiable. The cost of overriding a stated preference (churn risk, support complaint, regulatory friction) exceeds any benefit of 'one more attempt'. Architecturally: detect explicit human-handoff phrases on the user turn, latch state.escalation_requested = true, and configure the PreToolUse hook to block all tools except escalate_to_human when that flag is set. The agent's only legal next move is the structured handoff to the human queue. No negotiation, no alternatives.
An angry customer is requesting a refund that the policy clearly allows. Your agent escalates because tone analysis flagged the message as 'distressed'. What's the failure mode, and what should the escalation criteria actually be?
Sentiment is orthogonal to policy. Distress alone is not an escalation trigger; otherwise angry-but-valid customers learn that anger = faster service (perverse incentive), and you generate ~50% false-positive escalations. Correct escalation criteria are structural: (1) the agent lacks a tool to solve the request, (2) policy explicitly blocks the request, (3) the customer explicitly asks for a human, or (4) confidence falls below a threshold on a high-stakes decision. Tone is logged for analytics but never gates routing. Tagged to AP-22.
11 · FAQ

Frequently asked

Why pin case-facts in the system prompt instead of passing them as a tool result?
System prompt content is re-read and weighted highest by the model. Tool results live in the message list, which can be compressed, summarized, or fall victim to lost-in-the-middle as the conversation grows. Case-facts must survive every single turn unchanged, so it lives at the structural top of the prompt, the only place that's immune to summarization and attention drift.
What's the right threshold for triggering history compression?
~15 messages is a defensible default. Below that, full history fits comfortably and the cost of compression isn't justified. Above it, lost-in-the-middle starts degrading recall on details from turns 4-8. Tune per workload: customer-support averages 8-12 turns and rarely needs compression; technical-support can run 30+ and benefits from compressing earlier (turn 10).
If case-facts is immutable, how do I update it when the customer changes their decision?
Case-facts is immutable per-iteration, not immutable forever. After every state-changing tool call (e.g., customer switches from refund to store credit), the harness re-builds case-facts with the updated values and pins the new version on the next turn. Within one turn, it's fixed; between turns, it's the deliberate, hook-controlled write path.
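That between-turn write path can be sketched as a copy-and-rebuild, never an in-place mutation. Tool names here are illustrative:

```python
def rebuild_case_facts(old_facts: dict, tool_name: str, tool_result: dict) -> dict:
    # Copy first: within a turn the pinned block is frozen; the new dict only
    # becomes visible when the next turn's system prompt is built from it.
    facts = dict(old_facts)
    if tool_name == "get_customer":
        facts["verified_id"] = tool_result["customer_id"]
    if tool_name == "update_decision":
        facts["decision_made"] = tool_result["decision"]
    return facts

before = {"customer_id": "cust_4711", "decision_made": "refund"}
after = rebuild_case_facts(before, "update_decision", {"decision": "store_credit"})
```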
Why a PreToolUse hook for clarification, instead of putting the rule in the prompt?
Prompt-only enforcement is probabilistic (~92% adherence in this scenario). Hooks are deterministic. They read structured state, exit 2, and route Claude back. For business-bearing guarantees (don't re-ask answered questions, don't process unverified accounts), the 8% leak from prompt-only is unacceptable. Use prompts for tone and persona; use hooks for hard guarantees.
Should I cache the case-facts block?
No. Case-facts changes whenever the customer makes a decision or a tool updates state. Caching it kills hit rate. Split the system prompt into a stable preamble (role + constraints, cached with cache_control: ephemeral) and a dynamic block (case-facts + session-state, fresh every turn). You get ~70% hit rate on the cached portion and zero staleness on the dynamic portion.
What if the customer switches topics mid-conversation (refund → tech support)?
Re-route through triage. Push current case-facts (customer_id + decisions made so far) to the new specialist's task string, and either context-switch this agent or spawn a sub-agent for the new intent. Trying to handle multi-intent in a single specialist erodes accuracy and pollutes the case-facts block. The original intent's decisions get tangled with the new intent's evidence.
How do I test that conversation continuity is actually working?
Two-step regression: (1) On turn 1, customer says 'My order is #123, I want a refund.' Verify case-facts is updated. (2) On turn 8, after compression has fired and verbose history is summarized, ask 'Which order was that again?'. The agent must NOT re-ask; it must read case-facts and answer immediately. If it re-asks, your case-facts block isn't being re-read every turn. That's the bug to chase.
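That regression can be scripted against a stubbed harness. A minimal sketch of the turn-8 assertion, where prompt-building and compression are simplified stand-ins for the real functions:

```python
def build_prompt(case_facts: dict) -> str:
    # Stand-in for build_system_prompt: pins the transactional values verbatim.
    return f"CASE_FACTS: customer={case_facts['customer_id']} · order={case_facts['order_id']}"

def compress(messages: list, threshold: int = 15) -> list:
    # Stand-in for compress_history: keep the edges, summarize the middle.
    if len(messages) <= threshold:
        return messages
    return [messages[0],
            {"role": "user", "content": "SUMMARY: refund discussion"},
            messages[-1]]

case_facts = {"customer_id": "cust_4711", "order_id": "#123"}
history = [{"role": "user", "content": f"turn {i}"} for i in range(20)]
history = compress(history)

# Compression shrank the message list...
assert len(history) == 3
# ...but the pinned block still carries the order ID on turn 8 and beyond.
assert "#123" in build_prompt(case_facts)
```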
What goes in the audit log for a long conversation?
Per closed conversation: customer_id, turn_count, ordered list of tool_calls (just names + timestamps), per-turn stop_reasons, compression_fired_at_turn (if any), escalation_reason (if any), elapsed_ms_total, csat (if surveyed). Skip the full transcript. The structured trace is 50× smaller and replays any failure path. Retain 90 days minimum for production debugging and exam-style retrospectives.
P3.10 · D5 · Context + Reliability

Conversational AI Patterns, complete.

You've covered the full breakdown for this scenario: definition, mechanics, code, false positives, comparison, decision tree, exam patterns, and FAQ. One scenario down on the path to CCA-F.

Share your win →