Tool descriptions are routinely overlooked, yet they are the primary mechanism Claude uses for tool selection. If yours are vague or overlapping, selection becomes unreliable. This domain teaches you to get them right.
01 Tool Interface Design
Tool descriptions are not supplementary documentation. They are THE mechanism the model uses to decide which tool to call.
What a Good Tool Description Includes
- What the tool does (primary purpose)
- What inputs it expects (formats, types, constraints)
- Example queries it handles well
- Edge cases and limitations
- Explicit boundaries: when to use THIS tool versus similar tools
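A minimal sketch of a tool definition whose description covers each item on the checklist above. The tool name, limits, and sibling tool names are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical tool definition: purpose, input constraints, an example
# query, a stated limitation, and explicit boundaries vs sibling tools.
SEARCH_KB_TOOL = {
    "name": "search_knowledge_base",
    "description": (
        "Searches the internal support knowledge base and returns up to "
        "10 matching articles. Input is a natural-language query of 3-200 "
        "characters, e.g. 'how do I reset a customer password'. Only "
        "covers support articles, not order or customer records; for "
        "those, use lookup_order or get_customer. Returns an empty list "
        "(not an error) when nothing matches."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "minLength": 3,
                "maxLength": 200,
                "description": "Natural-language search query",
            }
        },
        "required": ["query"],
    },
}
```

Note the last sentence of the description: stating up front that an empty list is a valid answer heads off the retry confusion covered in section 02.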
The Misrouting Problem
Two tools with overlapping or near-identical descriptions cause selection confusion. The exam presents get_customer and lookup_order with minimal descriptions like "Retrieves customer information" and "Retrieves order information" — causing constant misrouting.
The correct fix is better descriptions. Not few-shot examples (token overhead for the wrong root cause). Not a routing classifier (over-engineered first step). Not tool consolidation (too much effort).
The exam favours low-effort, high-leverage fixes as the first step.
Tool Splitting
Split generic tools into purpose-specific tools with defined input/output contracts:
| Before | After |
|---|---|
| analyze_document | extract_data_points |
| | summarize_content |
| | verify_claim_against_source |
System Prompt Conflicts
Keyword-sensitive instructions in system prompts can create unintended tool associations that override well-written descriptions. Always review system prompts for conflicts after updating tool descriptions.
Practice Scenario
An agent routes "check the status of order #12345" to get_customer instead of lookup_order. Both descriptions say "Retrieves [entity] information." Fix: expand descriptions to be specific about purpose, accepted inputs, and when to use each tool.
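The fix can be seen side by side. The "before" strings are the vague descriptions from the scenario; the "after" strings are one possible rewrite (wording is illustrative, not canonical):

```python
# Vague descriptions from the scenario: near-identical, so routing
# between the two tools is effectively a coin flip.
BEFORE = {
    "get_customer": "Retrieves customer information",
    "lookup_order": "Retrieves order information",
}

# Rewritten descriptions: specific purpose, accepted inputs, and an
# explicit pointer to the other tool for out-of-scope queries.
AFTER = {
    "get_customer": (
        "Retrieves a customer's profile (name, email, account tier) by "
        "customer ID or email address. Use for questions about who the "
        "customer is. For order status or order contents, use "
        "lookup_order instead."
    ),
    "lookup_order": (
        "Retrieves a single order's status, items, and shipping details "
        "by order ID (e.g. '#12345'). Use for questions like 'check the "
        "status of order #12345'. For customer profile data, use "
        "get_customer instead."
    ),
}
```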
02 Structured Error Responses
The MCP isError Flag
MCP provides an isError flag pattern for communicating failures back to the agent.
Four Error Categories
| Category | Description | Retryable? | Example |
|---|---|---|---|
| Transient | Timeouts, service unavailability | Yes | Database connection timeout |
| Validation | Invalid input format or missing fields | Yes (fix input) | Wrong date format |
| Business | Policy violations | No | Refund exceeds limit |
| Permission | Access denied | No (needs escalation) | Insufficient credentials |
Include structured error metadata: errorCategory, an isRetryable boolean, and a human-readable description. For business errors, set isRetryable: false and include a customer-friendly explanation.
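A sketch of an error payload builder covering the four categories. Only the isError flag comes from MCP; the other field names (errorCategory, isRetryable, retryAfterSeconds) follow the convention suggested above, not the MCP spec:

```python
from typing import Optional

def error_response(category: str, message: str,
                   retry_after_s: Optional[int] = None) -> dict:
    """Build a structured error payload for a tool result."""
    retryable = {
        "transient": True,    # retry, ideally with backoff
        "validation": True,   # retryable once the agent fixes its input
        "business": False,    # policy violation; do not retry
        "permission": False,  # needs escalation, not retries
    }
    payload = {
        "isError": True,
        "errorCategory": category,
        "isRetryable": retryable[category],
        "description": message,
    }
    if retry_after_s is not None:
        payload["retryAfterSeconds"] = retry_after_s
    return payload

# Business error with a customer-friendly explanation:
err = error_response(
    "business",
    "Refund of $520 exceeds the $500 automatic limit; "
    "a supervisor must approve it.",
)
```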
The Critical Distinction
| Situation | What Happened | Correct Response |
|---|---|---|
| Access failure | Tool could not reach the data source | Consider retry |
| Valid empty result | Tool successfully queried, found no matches | Do NOT retry — this IS the answer |
Confusing these two breaks recovery logic. The exam tests this directly.
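The distinction becomes mechanical once the result shape separates the two cases. A minimal sketch, assuming the isError/isRetryable convention from above and an empty data list for valid empty results:

```python
def handle_lookup(result: dict) -> str:
    """Decide the agent's next move from a structured tool result."""
    if result.get("isError"):
        if result.get("isRetryable"):
            return "retry"          # transient access failure
        return "escalate"           # permanent failure
    if result.get("data") == []:
        return "answer: not found"  # valid empty result IS the answer
    return "answer: found"

# Access failure vs valid empty result produce different moves:
assert handle_lookup({"isError": True, "isRetryable": True}) == "retry"
assert handle_lookup({"isError": False, "data": []}) == "answer: not found"
```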
Error Propagation in Multi-Agent Systems
- Subagents implement local recovery for transient failures
- Only propagate errors they cannot resolve locally
- Include partial results and what was attempted when propagating
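The three bullets above can be sketched as a subagent retry loop. Function and field names are hypothetical; the shape is what matters: transient errors are retried locally, and anything propagated carries partial results plus a record of what was attempted:

```python
import time

def run_with_local_recovery(call, max_retries=3, backoff_s=0.0):
    """Retry transient failures locally; propagate only unresolved ones."""
    partial, attempts = [], []
    for _ in range(max_retries):
        result = call()
        attempts.append(result.get("errorCategory", "ok"))
        if not result.get("isError"):
            return {"status": "ok", "data": result["data"]}
        if result.get("data"):
            partial.extend(result["data"])  # keep any partial results
        if not result.get("isRetryable"):
            break                           # cannot resolve locally
        time.sleep(backoff_s)
    # Propagate with context so the coordinator can decide what's next.
    return {"status": "propagated",
            "partialResults": partial,
            "attempted": attempts}
```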
Practice Scenario
A tool returns an empty array after a customer lookup. The agent retries 3 times then escalates to a human. The actual issue is the customer's account does not exist. Problem: confusing a valid empty result with an access failure. Fix: distinguish between "could not reach source" and "reached source, found nothing."
03 Tool Distribution and tool_choice
The Tool Overload Problem
Giving an agent 18 tools degrades selection reliability. Optimal: 4-5 tools per agent, scoped to its role.
- A synthesis agent should NOT have web search tools
- A web search agent should NOT have document analysis tools
tool_choice Configuration
| Setting | Behaviour | Use When |
|---|---|---|
"auto" | Model decides whether to call a tool or return text | General operation (default) |
"any" | Model MUST call a tool, chooses which one | Guaranteed structured output from one of multiple schemas |
{"type": "tool", "name": "extract_metadata"} | Model MUST call this specific tool | Forcing mandatory first steps |
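The three settings map to request payloads like the following (a sketch of Anthropic Messages API request bodies; the model id and the extract_metadata tool are placeholders):

```python
def request_with(tool_choice: dict) -> dict:
    """Build a Messages API request body with the given tool_choice."""
    return {
        "model": "claude-sonnet-4-20250514",  # placeholder model id
        "max_tokens": 1024,
        "tools": [{
            "name": "extract_metadata",
            "description": "Extracts title, author, and date from a document.",
            "input_schema": {"type": "object", "properties": {}},
        }],
        "messages": [{"role": "user", "content": "Process this document."}],
        "tool_choice": tool_choice,
    }

auto_req   = request_with({"type": "auto"})  # model may answer in text
any_req    = request_with({"type": "any"})   # must call SOME tool
forced_req = request_with({"type": "tool",   # must call THIS tool
                           "name": "extract_metadata"})
```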
Scoped Cross-Role Tools
For high-frequency simple operations, give a constrained tool directly to the agent that needs it. Example: a synthesis agent gets a scoped verify_fact tool for simple lookups, while complex verifications route through the coordinator.
This avoids coordinator round-trip latency for the 85% of cases that are simple.
Replacing Generic Tools
Instead of giving a subagent fetch_url (which can fetch anything), give it load_document that validates document URLs only.
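One way to build that constraint: wrap the generic fetch in a validating tool. The allow-listed host, extensions, and error shape below are all illustrative assumptions:

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"docs.internal.example.com"}   # hypothetical host
ALLOWED_EXTENSIONS = (".pdf", ".md", ".txt")

def load_document(url: str, fetch=lambda u: f"<contents of {u}>"):
    """Scoped replacement for fetch_url: validates document URLs only."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or parsed.netloc not in ALLOWED_HOSTS:
        return {"isError": True, "errorCategory": "validation",
                "isRetryable": True,
                "description": "URL must be an https document-host URL"}
    if not parsed.path.endswith(ALLOWED_EXTENSIONS):
        return {"isError": True, "errorCategory": "validation",
                "isRetryable": True,
                "description": "URL must point to a document file"}
    return {"isError": False, "data": fetch(url)}
```

The subagent keeps the convenience of a fetch tool, but the blast radius of a bad or adversarial URL shrinks to the allow-listed document host.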
Practice Scenario
A synthesis agent frequently returns control to the coordinator for simple fact verification, adding 2-3 round trips per task and 40% latency. 85% of verifications are simple lookups. Fix: give the synthesis agent a scoped verify_fact tool for simple lookups.
04 MCP Server Integration
Scoping Hierarchy
| Level | Location | Shared? | Version Controlled? |
|---|---|---|---|
| Project-level | .mcp.json in the project repository | Yes (team) | Yes |
| User-level | ~/.claude.json | No (personal) | No |
All tools from all configured servers are discovered at connection time and available simultaneously.
Environment Variable Expansion
.mcp.json supports ${GITHUB_TOKEN} syntax. This keeps credentials out of version control. Each developer sets their own tokens locally.
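A minimal .mcp.json sketch showing the expansion. The server package and env var name follow the common GitHub MCP server convention; treat both as examples:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
```

The file is committed to the repository; each developer exports GITHUB_TOKEN in their own shell, so no credential ever lands in version control.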
MCP Resources
Expose content catalogs (issue summaries, documentation hierarchies, database schemas) as MCP resources. This gives agents visibility into available data without requiring exploratory tool calls. Reduces unnecessary queries.
Build vs Use Decision
| Situation | Approach |
|---|---|
| Standard integration (Jira, GitHub, Slack) | Use existing community MCP servers |
| Team-specific workflow | Build custom server only if community servers cannot handle it |
Enhance MCP tool descriptions to prevent the agent from preferring built-in tools (like Grep) over more capable MCP tools.
05 Built-in Tools
Grep vs Glob
| Tool | Searches | Use For |
|---|---|---|
| Grep | File contents for patterns | Finding function callers, locating error messages, searching imports |
| Glob | File paths by naming patterns | Finding files by extension (**/*.test.tsx), locating config files |
The exam deliberately presents scenarios where using the wrong one wastes time.
Read / Write / Edit
| Tool | Use When |
|---|---|
| Edit | Targeted modifications using unique text matching. Fast, precise. |
| Read + Write | Fallback when Edit fails (non-unique text matches). Load full file, write complete modified file. |
Incremental Codebase Understanding
- Start with Grep to find entry points (function definitions, import statements)
- Use Read to follow imports and trace flows
- Do NOT read all files upfront — this is a context-budget killer
Practice Scenario
Find all files that call a deprecated function and locate test files for those callers. Correct sequence: Grep for the function name (finds callers), Glob for test files matching the caller filenames.
06 What to Build
Build two MCP tools with intentionally similar functionality:
- Write descriptions vague enough to cause misrouting
- Fix them with specific, differentiated descriptions
- Experience the difference firsthand
Then create 3 MCP tools with:
- Error responses covering all four error categories
- Configuration in .mcp.json with environment variable expansion
- tool_choice forced selection for the first step