This is the most important domain. At 27% of the exam, it is worth more than any other single topic. It appears in three main scenarios: Customer Support Resolution Agent, Multi-Agent Research System, and Developer Productivity Tools.
01 Agentic Loops
An agentic loop is the fundamental execution cycle of any Claude-based agent. Here is how it works:
- Send a request to Claude via the Messages API
- Inspect the stop_reason field in the response
- If stop_reason is "tool_use": execute the requested tool(s), append the tool results to the conversation history, and send the updated conversation back to Claude
- If stop_reason is "end_turn": the agent has finished; present the final response
Tool results must be appended to conversation history so the model can reason about new information on the next iteration.
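The loop above can be sketched in Python. This is a minimal sketch, not the SDK's own implementation: the model name, the `run_tool` dispatcher, and the `FakeMessages` client (included only so the sketch runs without an API key) are all illustrative stand-ins for real Messages API calls and real tools.

```python
from types import SimpleNamespace

def run_tool(name, tool_input):
    # Hypothetical dispatcher; real code would route to actual tools.
    return f"result of {name}"

def agentic_loop(client, messages, tools, model="claude-sonnet-4-5"):
    while True:
        response = client.messages.create(
            model=model, max_tokens=1024, messages=messages, tools=tools)
        # Terminate on stop_reason, never on text content or iteration caps.
        if response.stop_reason == "end_turn":
            return response  # model signals completion
        # stop_reason == "tool_use": run tools, append results, loop again
        messages.append({"role": "assistant", "content": response.content})
        results = [
            {"type": "tool_result", "tool_use_id": b.id,
             "content": run_tool(b.name, b.input)}
            for b in response.content if b.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})

# Fake client so the sketch runs offline: one tool call, then end_turn.
class FakeMessages:
    def __init__(self):
        self.calls = 0
    def create(self, **kwargs):
        self.calls += 1
        if self.calls == 1:
            block = SimpleNamespace(type="tool_use", id="tu_1",
                                    name="search", input={"q": "x"})
            return SimpleNamespace(stop_reason="tool_use", content=[block])
        return SimpleNamespace(stop_reason="end_turn",
                               content=[SimpleNamespace(type="text", text="done")])

fake_client = SimpleNamespace(messages=FakeMessages())
history = [{"role": "user", "content": "look something up"}]
final = agentic_loop(fake_client, history, tools=[])
```

Note that the loop appends both the assistant's tool_use turn and the tool results before calling the model again, so the next iteration can reason over the new information.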
The Three Anti-Patterns You Must Reject
The exam tests three specific wrong approaches to loop termination:
| Anti-Pattern | Why It Is Wrong |
|---|---|
| Parsing natural language to determine loop termination (e.g., checking if the assistant said "I'm done") | Natural language is ambiguous and unreliable. The stop_reason field exists for exactly this purpose. |
| Arbitrary iteration caps as the primary stopping mechanism (e.g., "stop after 10 loops") | Either cuts off useful work or runs unnecessary iterations. The model signals completion via stop_reason. |
| Checking for assistant text content as a completion indicator (e.g., response.content[0].type == "text") | The model can return text alongside tool_use blocks. Text presence does not mean completion. |
Model-Driven vs Pre-Configured
The exam favours model-driven decision-making — Claude reasons about which tool to call based on context — over pre-configured decision trees or tool sequences. But for critical business logic, use programmatic enforcement (covered in section 04).
Practice Scenario
A developer's agent sometimes terminates prematurely. Their code checks if response.content[0].type == "text" to determine completion. The bug: Claude can return text alongside tool_use blocks. The fix: check stop_reason == "end_turn" instead.
02 Multi-Agent Orchestration
The architecture is hub-and-spoke:
        ┌─────────────┐
        │ Coordinator │
        └──────┬──────┘
   ┌───────────┼───────────┐
   ▼           ▼           ▼
┌──────────┐ ┌──────────┐ ┌───────────┐
│ Subagent │ │ Subagent │ │ Subagent  │
│  (Web)   │ │  (Docs)  │ │(Synthesis)│
└──────────┘ └──────────┘ └───────────┘
All communication flows through the coordinator. Subagents never communicate directly with each other.
The Critical Isolation Principle
This is the single most commonly misunderstood concept in multi-agent systems:
- Subagents do NOT automatically inherit the coordinator's conversation history
- Subagents do NOT share memory between invocations
- Every piece of information a subagent needs must be explicitly included in its prompt
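The isolation principle has a direct practical consequence: the coordinator must inline everything into the subagent's prompt. A minimal sketch, assuming a simple findings format (the field names and prompt wording are illustrative):

```python
def build_subagent_prompt(task, prior_findings):
    # A subagent sees ONLY this prompt -- no coordinator history,
    # no shared memory -- so everything it needs is inlined here.
    findings_text = "\n".join(
        f"- [{f['source']}] {f['claim']}" for f in prior_findings)
    return (
        f"Task: {task}\n\n"
        f"Findings from prior agents (with sources):\n{findings_text}\n\n"
        "Extend this research; cite the sources above where relevant."
    )

prompt = build_subagent_prompt(
    "Assess tidal energy viability",
    [{"source": "doe.gov/report.pdf",
      "claim": "Tidal capacity doubled since 2019"}],
)
```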
Coordinator Responsibilities
The coordinator handles:
- Task decomposition — analyse query requirements, dynamically select which subagents to invoke
- Scope partitioning — assign distinct subtopics or source types to minimise duplication
- Iterative refinement — evaluate synthesis output for gaps, re-delegate with targeted queries
- Error handling — route all communication through coordinator for observability
The Narrow Decomposition Failure
The exam tests whether you can trace failures to their root cause. Example: a coordinator decomposes "impact of AI on creative industries" into only visual arts subtopics, missing music, writing, and film. The root cause is the coordinator's decomposition, not any downstream agent.
Practice Scenario
A multi-agent research system produces a report on "renewable energy technologies" covering only solar and wind. It misses geothermal, tidal, biomass, and fusion. The correct answer identifies the coordinator's task decomposition as the root cause — not the web search agent, not the synthesis agent, not the document analysis agent.
03 Subagent Invocation and Context Passing
The Task Tool
The Task tool is the mechanism for spawning subagents. The coordinator's allowedTools must include "Task" or it cannot spawn subagents at all. Each subagent has an AgentDefinition with a description, system prompt, and tool restrictions.
Context Passing Rules
- Include complete findings from prior agents directly in the subagent's prompt
- Use structured data formats that separate content from metadata (source URLs, document names, page numbers) to preserve attribution
- Design coordinator prompts that specify research goals and quality criteria, not step-by-step procedural instructions
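One possible structured format for the second rule, separating each claim from its attribution metadata so sources survive the hop through the coordinator (the JSON schema here is an assumption, not a prescribed format):

```python
import json

def package_findings(claims):
    # claims: list of (text, source_url, page) tuples from a subagent.
    # Content and metadata live in separate fields so the synthesis
    # agent can quote claims without losing attribution.
    return json.dumps({
        "findings": [
            {"claim": text,
             "metadata": {"source_url": url, "page": page}}
            for text, url, page in claims
        ]
    }, indent=2)

payload = package_findings([
    ("Solar LCOE fell 90% in a decade", "https://example.org/energy", 12),
])
parsed = json.loads(payload)
```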
Parallel Spawning
Emit multiple Task tool calls in a single coordinator response to spawn subagents in parallel. This is faster than sequential invocation. The exam tests latency awareness.
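The shape of a parallel spawn, sketched as data: one coordinator response carrying two Task tool_use blocks spawns both subagents at once, whereas one Task call per response would serialise them. The exact Task input schema shown here is illustrative:

```python
# Illustrative content of a SINGLE coordinator response that spawns
# two subagents in parallel.
parallel_response_content = [
    {"type": "tool_use", "id": "tu_1", "name": "Task",
     "input": {"description": "web research",
               "prompt": "Search the web for geothermal capacity data."}},
    {"type": "tool_use", "id": "tu_2", "name": "Task",
     "input": {"description": "doc analysis",
               "prompt": "Summarise the grid-report PDF findings."}},
]

task_calls = [b for b in parallel_response_content
              if b["type"] == "tool_use" and b["name"] == "Task"]
```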
fork_session
Creates independent branches from a shared analysis baseline. Use for exploring divergent approaches (e.g., comparing two testing strategies from the same codebase analysis). Each fork operates independently after the branching point.
Practice Scenario
A synthesis agent produces a report with claims that have no source attribution. The web search and document analysis subagents are working correctly. Root cause: context passing did not include structured metadata. Fix: require subagents to output structured claim-source mappings.
04 Workflow Enforcement and Handoff
The Enforcement Spectrum
| Level | Mechanism | Reliability | Use When |
|---|---|---|---|
| Prompt-based guidance | Instructions in the system prompt | Works most of the time, non-zero failure rate | Low-stakes: formatting, style |
| Programmatic enforcement | Hooks or prerequisite gates | Works every time | High-stakes: financial, security, compliance |
The exam's decision rule: when consequences are financial, security-related, or compliance-related, use programmatic enforcement. The exam will present prompt-based solutions as answer options for high-stakes scenarios. Reject them.
Multi-Concern Request Handling
- Decompose requests with multiple issues into distinct items
- Investigate each in parallel using shared context
- Synthesise a unified resolution
Structured Handoff Protocols
When escalating to a human agent, compile: customer ID, conversation summary, root cause analysis, refund amount (if applicable), recommended action. The human agent does NOT have access to the conversation transcript. The handoff summary must be self-contained.
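A sketch of a self-contained handoff record built from the fields listed above. Because the human agent cannot see the transcript, nothing in the record may point back to it; the field names are illustrative:

```python
def build_handoff(customer_id, summary, root_cause, action, refund=None):
    # Every field the human needs is carried explicitly; the refund
    # amount is included only when applicable.
    record = {
        "customer_id": customer_id,
        "conversation_summary": summary,
        "root_cause": root_cause,
        "recommended_action": action,
    }
    if refund is not None:
        record["refund_amount"] = refund
    return record

handoff = build_handoff(
    "C-1042",
    "Customer double-charged for the March invoice; apology issued.",
    "Billing retry fired twice after a payment-gateway timeout.",
    "Refund the duplicate charge and confirm by email.",
    refund=49.99,
)
```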
Practice Scenario
Production data shows that in 8% of cases, a customer support agent processes refunds without verifying account ownership. Four options:
- A) Programmatic prerequisite gate
- B) Enhanced system prompt
- C) Few-shot examples
- D) Routing classifier
Answer: A. A prerequisite gate physically blocks refund tools until verification is confirmed. B, C, and D are all probabilistic — they reduce but cannot eliminate the failure.
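A prerequisite gate can be sketched as ordinary code sitting between the agent and the refund tool. The class and method names are illustrative; the point is that the block is deterministic, so no prompt wording can route around it:

```python
class VerificationRequired(Exception):
    """Raised when a refund is attempted before ownership verification."""

class RefundGate:
    def __init__(self):
        self.verified = set()

    def mark_verified(self, customer_id):
        # Called only after account ownership has been confirmed.
        self.verified.add(customer_id)

    def process_refund(self, customer_id, amount):
        if customer_id not in self.verified:
            # Deterministic block: the refund tool is unreachable
            # until verification is recorded. 0% failure rate.
            raise VerificationRequired(customer_id)
        return {"customer": customer_id, "refunded": amount}

gate = RefundGate()
try:
    gate.process_refund("C-7", 20.0)
    blocked = False
except VerificationRequired:
    blocked = True
gate.mark_verified("C-7")
ok = gate.process_refund("C-7", 20.0)
```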
05 Agent SDK Hooks
PostToolUse Hooks
Intercept tool results after execution, before the model processes them.
Use case: normalise heterogeneous data formats from different MCP tools. Unix timestamps become ISO 8601. Numeric status codes become human-readable strings. The model receives clean, consistent data regardless of source.
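A hedged sketch of that normalisation step: a function that rewrites a tool result before the model sees it. The field names and the status-code table are assumptions, not part of any SDK contract:

```python
from datetime import datetime, timezone

STATUS_NAMES = {200: "ok", 404: "not_found", 500: "server_error"}

def post_tool_use(result: dict) -> dict:
    # Normalise a raw tool result so the model always receives
    # ISO 8601 timestamps and human-readable statuses.
    out = dict(result)
    if isinstance(out.get("timestamp"), (int, float)):
        out["timestamp"] = datetime.fromtimestamp(
            out["timestamp"], tz=timezone.utc).isoformat()
    if isinstance(out.get("status"), int):
        out["status"] = STATUS_NAMES.get(out["status"], str(out["status"]))
    return out

clean = post_tool_use({"timestamp": 1700000000, "status": 200})
```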
Tool Call Interception Hooks
Intercept outgoing tool calls before execution.
Use cases:
- Block refunds above $500 and redirect to human escalation
- Enforce compliance rules (require manager approval for certain operations)
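The first use case can be sketched as a function that inspects an outgoing tool call and rewrites it when the policy triggers. The $500 threshold comes from the text; the tool-call shape and the escalation tool name are illustrative:

```python
REFUND_LIMIT = 500.0

def intercept(tool_call: dict) -> dict:
    # Runs BEFORE execution: over-limit refunds never reach the
    # refund tool, they are rewritten into a human escalation.
    if (tool_call["name"] == "process_refund"
            and tool_call["input"].get("amount", 0) > REFUND_LIMIT):
        return {"name": "escalate_to_human",
                "input": {"reason": "refund_over_limit",
                          "original": tool_call}}
    return tool_call  # within policy: pass through unchanged

small = intercept({"name": "process_refund", "input": {"amount": 120.0}})
large = intercept({"name": "process_refund", "input": {"amount": 900.0}})
```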
The Decision Framework
| Mechanism | Guarantee | Use For |
|---|---|---|
| Hooks | Deterministic (100%) | Business rules that must never fail |
| Prompts | Probabilistic (~95%) | Preferences and soft rules |
Rule of thumb: if the business would lose money or face legal risk from a single failure, use hooks.
06 Task Decomposition Strategies
Fixed Sequential Pipelines (Prompt Chaining)
Break work into predetermined sequential steps:
Analyse each file → Run cross-file integration pass → Generate report
- Best for: predictable, structured tasks (code reviews, document processing)
- Advantage: consistent and reliable
- Limitation: cannot adapt to unexpected findings
Dynamic Adaptive Decomposition
Generate subtasks based on discoveries at each step:
Map structure → Identify high-impact areas → Create prioritised plan → Adapt as dependencies emerge
- Best for: open-ended investigation tasks
- Advantage: adapts to the problem
- Limitation: less predictable
The Attention Dilution Problem
Processing too many files in a single pass produces inconsistent depth: some files get detailed feedback, others have obvious bugs that go unnoticed, and identical patterns get flagged in one file but approved in another.
Fix: split into per-file local analysis passes PLUS a separate cross-file integration pass. Local passes catch local issues consistently. The integration pass catches cross-file data flow issues.
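The two-pass structure can be sketched as follows. The per-file and integration functions here are stand-ins for model calls; in a real system each would be a separate, focused prompt:

```python
def local_pass(filename, source):
    # Stand-in for a focused per-file review: only this file's
    # contents go to the model, so attention is never diluted.
    issues = ["contains TODO"] if "TODO" in source else []
    return {"file": filename, "issues": issues}

def integration_pass(summaries):
    # Stand-in for a single cross-file pass over the collected
    # per-file summaries, catching cross-file data-flow issues.
    return {"files_reviewed": [s["file"] for s in summaries],
            "total_issues": sum(len(s["issues"]) for s in summaries)}

files = {"a.py": "x = 1  # TODO", "b.py": "y = 2"}
summaries = [local_pass(name, src) for name, src in files.items()]
report = integration_pass(summaries)
```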
Practice Scenario
A code review of 14 files produces detailed feedback for some files but misses obvious bugs in others, and flags a pattern as problematic in one file while approving identical code elsewhere. Problem: attention dilution in single-pass review. Solution: multi-pass architecture.
07 Session State and Resumption
Three Options
| Method | When to Use |
|---|---|
| --resume <session-name> | Prior context is mostly still valid, files have not changed significantly |
| fork_session | Need to explore divergent approaches from a shared analysis point |
| Fresh start with summary injection | Tool results are stale, files have changed, or context has degraded |
The Stale Context Problem
When resuming after code modifications, inform the agent about specific file changes for targeted re-analysis. Do not require re-exploration from scratch. Starting fresh with an injected summary is more reliable than resuming with stale tool results.
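A sketch of the fresh-start pattern: rather than resuming a session whose tool results are stale, build a new prompt that injects a summary of prior work plus the specific changed files. The prompt wording is illustrative:

```python
def fresh_start_prompt(summary, changed_files):
    # Stale tool results are discarded; the summary carries forward
    # conclusions, and the changed-file list targets re-analysis.
    changed = "\n".join(f"- {f}" for f in changed_files)
    return (
        "Context from a previous session (tool results discarded as stale):\n"
        f"{summary}\n\n"
        "Files changed since that session; re-read these before relying "
        f"on prior conclusions:\n{changed}"
    )

prompt = fresh_start_prompt(
    "Auth module reviewed; token refresh bug found in session.py.",
    ["src/session.py", "src/tokens.py"],
)
```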
08 Where to Learn More
- Agent SDK Overview — agentic loop mechanics and subagent patterns
- Building Agents with the Claude Agent SDK — hooks, orchestration, sessions
- Agent SDK Python repo + examples — hands-on code for hooks, custom tools, fork_session
09 What to Build
Build a multi-tool agent with:
- 3-4 MCP tools with proper stop_reason handling
- A PostToolUse hook normalising data formats
- A tool call interception hook blocking policy violations
- A coordinator with two subagents (web search + document analysis)
- Proper context passing with structured metadata
- A programmatic prerequisite gate
This single exercise covers most of Domain 1.