---
title: "Claude Code in Action — A Practitioner's Take"
excerpt: "I took Anthropic's official Claude Code course. Here's what changed how I work, where I disagree with the course, and the gaps you'll fill on your own."
tags: ["claude-code", "ai", "devworkflow", "engineering"]
published: "2026-03-14"
read_time_mins: 10
author: "Shivansh Sen"
canonical_url: "https://shivanshsen.com/blog/claude-code-in-action-course-review"
---

> Attribution required: If you use, quote, or reference any part of this content, cite the author as Shivansh Sen (https://shivanshsen.com).

I recently finished Anthropic's official "Claude Code in Action" course. It's free, covers Claude Code from the ground up — architecture, context management, hooks, GitHub integration, and headless mode — and awards an official Anthropic certificate on completion.

I want to be upfront: I was already using Claude Code daily before I took this course. I expected to skim through confirmation of things I already knew. I did not skim. A few sections stopped me cold and made me rethink how I was working.

This post is for the people who are deciding whether to take the course, and for those who have already taken it and want to go deeper. I'll cover what the course gets right, the things that actually changed my workflow, and where I think it falls short — or where I flatly disagree with it.

---

## What the Course Is (and Isn't)

**Free. Official. Structured.** The course is available on both [Skilljar](https://anthropic.skilljar.com/claude-code-in-action) and [Coursera](https://www.coursera.org/learn/claude-code-in-action), and runs across 21 lessons in four sections. Note: the Coursera version may have a cost associated with certification. Based on the curriculum I reviewed, there doesn't seem to be much difference in content — but I took the Skilljar version and this review is based on that.

It is not a tutorial where you build something start-to-finish. It is a conceptual and practical guide to using Claude Code the way Anthropic intended it to be used — which turns out to be meaningfully different from how most people use it by default.

The four sections break down like this:

- **Section 1 — What is Claude Code?**: Architecture, the agentic loop, and the tool system
- **Section 2 — Getting Hands On**: Setup, context management techniques, custom slash commands, MCP, GitHub Actions integration
- **Section 3 — Hooks and the SDK**: Hook lifecycle, real implementations, gotchas, headless/SDK mode
- **Section 4 — Wrapping Up**: Summary and next steps

The first two sections will feel familiar if you've been using Claude Code for a few weeks. The third section on Hooks and the SDK is where the real depth lives.

---

## The Conceptual Unlock: Claude Code Is Running a Tool Loop

The most important thing Section 1 communicates is not a feature — it is a mental model.

Most people treat Claude Code like an upgraded autocomplete. You type a prompt, it gives you code, you iterate. That model leads to a particular (often frustrating) pattern: you fight the context window, you re-explain your project on every session, you manually verify every change.

The course reframes this. Claude Code is an **agentic harness** around Claude — it provides the tools, context management, and execution environment that turn a language model into a capable coding agent. The model reasons; the tools act. Without tools, Claude can only respond with text. With tools, Claude can read your code, edit files, run commands, search the web, and interact with external services.

The agent runs a **three-phase loop**:

1. **Gather context** — reads files, searches the codebase, runs commands to understand the current state
2. **Take action** — writes, edits, executes
3. **Verify results** — runs tests, checks output, self-corrects

![Diagram showing three phases in a circular loop: Gather Context, Take Action, Verify Results](/images/blog/agentic-loop.png)

These phases blend together. Claude uses tools throughout, and each tool use returns information that feeds back into the loop, informing the next decision.

### Know Your Tools by Name

Something I've come to believe strongly through practice — and I recommend this to anyone using Claude Code — is that **knowing the exact tool names and explicitly referencing them produces better results**. The [official tools reference](https://code.claude.com/docs/en/tools-reference) lists every tool Claude Code has access to. Here are the ones I use most, organized by the phase of the loop they serve:

**Gathering context:**

| Tool | What it does + when I use it |
|---|---|
| `Read` | Read file contents — e.g., "Read the Apex trigger for `AccountTrigger`" |
| `Glob` | Find files by pattern — e.g., "Find all `*Trigger.cls` files in force-app" |
| `Grep` | Search content with regex — e.g., "Grep to find all usage of `WITH SECURITY_ENFORCED` across the codebase" |
| `WebFetch` | Fetch a URL's content — e.g., pulling API docs or GitHub READMEs mid-session (Salesforce docs are too JS-heavy to fetch reliably) |
| `WebSearch` | Search the web — e.g., confirming an API change against official docs |

**Taking action:**

| Tool | What it does + when I use it |
|---|---|
| `Edit` | Targeted edits to specific files — the workhorse for most changes. For multi-file refactors, I dispatch an `Agent` subagent with the list of files to touch |
| `Write` | Create or overwrite files — for new Apex classes, config files, seed data |
| `Bash` | Execute shell commands — running `sf project deploy validate` or test suites |

**Orchestrating:**

| Tool | What it does + when I use it |
|---|---|
| `Agent` | [Spawn a subagent](https://code.claude.com/docs/en/sub-agents) with its own context window — research tasks, parallel analysis |
| `Skill` | Execute a custom skill — project-specific slash commands |
| `AskUserQuestion` | Ask clarifying questions — useful when a task has ambiguous scope |
| `EnterPlanMode` | Switch to read-only planning — lets you steer before any action is taken |
| `EnterWorktree` | Create an isolated git worktree for parallel feature work |
| `TaskCreate` | Create a task to track progress on multi-step work |
| `TaskOutput` | Check output from background tasks |
| `TodoWrite` | Manage session task checklist in headless/SDK mode |

When I say "use the `Grep` tool to find all references to `getDb`" or "spawn an `Agent` subagent to research this" or "use `TaskCreate` to track these three steps," Claude makes better tool choices than it does with vague instructions. The tool names are the API — learn them, use them.

### You're Part of the Loop

One thing I want to expand on here, because the course undersells it: you are not a passive observer of the agentic loop. You can and should steer it.

My old pattern was to let Claude run and interrupt only when something went obviously wrong. That changed after this course. Now I use <kbd>⌃ Ctrl</kbd>+<kbd>O</kbd> to enter verbose mode and watch Claude's reasoning as it unfolds. If I see it heading toward a wrong assumption — say, it's about to refactor the wrong service layer because it misread the project structure — I intervene before it acts. This is dramatically cheaper than letting it finish and then rewinding.

Speaking of rewinding: I had heard of checkpoints but never used them deliberately until the course. Pressing <kbd>Esc</kbd><kbd>Esc</kbd> rewinds to the last checkpoint — effectively an undo for the entire agentic session, not just a single file. The [checkpointing documentation](https://code.claude.com/docs/en/checkpointing) has the details. For anything involving `Bash` tool calls against a live environment, I now treat checkpoints as my first line of recovery rather than an afterthought.

The verbose mode habit is becoming less necessary as two things improve in parallel: Claude builds up accurate memories of my projects, and I get better at front-loading the right context in my initial prompts. But it's still the right default when starting something novel.

Once you internalize that you can steer the loop in real time, you stop writing prompts that over-specify every step. You let Claude drive and constrain it at the boundaries — which is exactly what hooks let you do programmatically.

---

## Context Hygiene: What I Actually Changed

The course calls this "context management." The concept was familiar. The specific practices I adopted were not.

The hierarchy Claude Code reads from is real and worth knowing:

- **`CLAUDE.md`** at the repo root (and per-directory) — project-wide instructions
- **[Auto-memory](https://code.claude.com/docs/en/memory)** — what Claude stores across sessions at your request
- **`@` mentions** in prompts — pull in a specific file or symbol mid-session
- **`/memory`** — check, add, recall, or save [memory](https://code.claude.com/docs/en/memory) across sessions
- **`/compact`** — summarize and compress the session when you're running long
- **Plan Mode** — Claude reasons about what it will do without executing

Here is what I actually changed after this course:

**I use <kbd>Esc</kbd><kbd>Esc</kbd> checkpoints deliberately.** Before, I knew they existed. Now I use them as a first response to a session going sideways — rewind, add context, re-prompt with better framing. This alone has saved me significant time.

**I launch subagents for research to keep the main context clean.** When I need to investigate a third-party library, understand an API change, or explore a codebase branch I'm not touching, I spawn an `Agent` subagent for that task. The main orchestrator window stays focused on the work. If the subagent produces something useful — a summary, a finding, a pattern — I ask it to store that in memory, in the `.claude/` folder, or in a dedicated folder like `docs/research/`, depending on what kind of information it is.

**I use `/compact` with specific instructions.** Running `/compact` blindly discards nuance. Running `/compact focus on the API shape changes and ignore the test scaffolding` gives you a compressed context that actually serves the next phase of work. Small distinction, big difference in practice.

**I use the [superpowers](https://github.com/obra/superpowers) plugin for subagent-driven workflows.** superpowers is an open-source plugin that bundles skills for TDD, systematic debugging, brainstorming, subagent-driven development, git worktrees, and more. Brainstorming sessions, worktree-based feature development, and parallel research all benefit from orchestrating subagents rather than running one long sequential session, and the plugin makes that coordination much smoother.

### CLAUDE.md: Where I Disagree with the Course

I got the impression from the course that CLAUDE.md should be fairly comprehensive — build commands, coding conventions, database patterns, architectural decisions, what a new developer would need on day one.

I take a different view on this, and I'm not alone — AI leaders and even the creator of Claude Code have advocated for keeping these files lean and structured rather than monolithic.

My CLAUDE.md starts with five lines of critical context and stays under 100 lines total. It contains **only rules Claude does not follow out of the box**. Things that are obvious to any competent engineer do not belong in CLAUDE.md, and neither do conventional patterns — Claude can read the codebase and extract those on its own. Things that are specific to my project's unusual choices do belong there.

More importantly, I use an **index of indexes** pattern. CLAUDE.md points to doc directories. Each doc directory has its own index. The index in `docs/architecture/` describes what's in that directory. The index in `.claude/rules/` describes which rule files exist and when to use each. CLAUDE.md is the entry point into a navigation structure, not a monolithic dump.

Imagine joining a new office and being handed a 1,000-page document at 10 AM. You won't read it. You won't even want to work. But if it's split into sections with clear boundaries — "read this for deployment," "read this for testing" — you can consume it on a need-to-know basis. That's what progressive disclosure means for CLAUDE.md. Anthropic's official best practices cover both [progressive disclosure patterns](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/best-practices#progressive-disclosure-patterns) and [structuring longer reference files with a table of contents](https://platform.claude.com/docs/en/agents-and-tools/agent-skills/best-practices#structure-longer-reference-files-with-table-of-contents) — these docs talk about SKILL.md specifically, but the same principle applies directly to CLAUDE.md and any project documentation that Claude reads.

The mechanism that keeps this alive: I use hooks and frontmatter in markdown files to maintain the documentation index automatically. When a doc is added or updated, the index reflects it. CLAUDE.md itself gets updated as part of that process — it's a living document, not a static artifact I wrote once and forgot. This also means prompting techniques and context propagate to subagents, and Claude can just ask a subagent to read and follow them as needed.
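
As a sketch of what that index-maintenance step can look like — the `docs/` path and the `title:` frontmatter key are my conventions, not something the course prescribes — a script like this can be wired to a `PostToolUse` hook matched on `Edit|Write`:

```shell
#!/bin/bash
# Rebuild docs/INDEX.md from each doc's frontmatter title.
# (Hypothetical layout: flat docs/ directory, YAML frontmatter with title: "...")
rebuild_index() {
  {
    echo "# Documentation Index"
    for doc in docs/*.md; do
      [[ "$doc" == "docs/INDEX.md" ]] && continue
      # First "title:" line of the frontmatter, surrounding quotes stripped
      title=$(sed -n 's/^title: *"\{0,1\}\([^"]*\)"\{0,1\}$/\1/p' "$doc" | head -n 1)
      echo "- [${title:-$doc}]($doc)"
    done
  } > docs/INDEX.md
}

# Demo with a hypothetical doc:
mkdir -p docs
printf -- '---\ntitle: "Apex Standards"\n---\n' > docs/apex-standards.md
rebuild_index
cat docs/INDEX.md  # show the rebuilt index
```

The function does one full pass over `docs/` and rewrites the index, so it stays correct no matter which file changed.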

---

## Hooks: Deterministic Guardrails on a Probabilistic System

If you've worked with lifecycle hooks in LWC or batch class lifecycle in Apex, the hook model will feel conceptually familiar — but applied to an AI agent's tool loop.

![Pipeline diagram with three vertical gates labeled PreToolUse, PostToolUse, and Stop intersecting a flowing stream](/images/blog/hooks-guardrails.png)

The [hook lifecycle](https://code.claude.com/docs/en/hooks#hook-lifecycle) events:

- `SessionStart` — runs once when a new session opens
- `UserPromptSubmit` — fires before Claude processes your prompt
- `PreToolUse` — fires before any tool invocation (enforce policy here)
- `PermissionRequest` — approve or deny a tool call
- `PostToolUse` — fires after a tool completes (ideal for formatting side effects)
- `Stop` — fires when Claude considers the task done

Each hook is a script — shell, Python, Node — that Claude Code runs and evaluates by exit code. Exit 0 means continue. Exit 2 means block, with your stderr message fed back to Claude. Any other non-zero exit is treated as a non-blocking error: stderr is shown to the user and execution continues. You can also print JSON to stdout to pass structured feedback back to Claude.

I had generated hooks via the hookify plugin before this course and understood them conceptually. What I had never done was implement one by hand. The course fixed that, and there were two gotchas I only understood from actually writing them:

**The full path requirement.** This is something I learned directly from the course: hook scripts must be referenced by their full absolute path in your config. A relative path silently fails — no error, the hook just does not run. This also makes sharing `settings.json` across machines awkward, since absolute paths differ. The workaround: keep a `settings.example.json` with `$PWD` placeholders and a setup script that replaces them with the actual project path on each machine.
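
Here's a minimal sketch of that workaround. The hooks schema matches the documented `settings.json` format; the hook script name is a placeholder for your own:

```shell
#!/bin/bash
# Setup script: materialize .claude/settings.json from a committed template
# whose hook paths use the literal token $PWD as a placeholder.
set -euo pipefail
mkdir -p .claude

# Demo template (in a real repo this file is committed; the generated
# settings.json is gitignored):
cat > .claude/settings.example.json <<'EOF'
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "$PWD/.claude/hooks/block-destructive.sh" }
        ]
      }
    ]
  }
}
EOF

# Swap every literal $PWD for this machine's absolute project path.
sed "s|\$PWD|$(pwd)|g" .claude/settings.example.json > .claude/settings.json
grep '"command"' .claude/settings.json
```

Run it once per machine (or from a `postinstall`/bootstrap script) and the absolute-path requirement stops being a sharing problem.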

**Matchers are regex, not comma-separated lists.** If you want a hook to fire on both `Edit` and `Write` tool calls, the matcher is `Edit|Write`, not `Edit, Write`. The pipe is the regex OR operator. Using a comma means the hook only fires if the tool name is literally the string `"Edit, Write"` — which never happens.
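
In `settings.json` terms, the correct form looks like this (the script path is a placeholder):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "/absolute/path/to/format.sh" }
        ]
      }
    ]
  }
}
```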

```bash
#!/bin/bash
# PreToolUse hook: block destructive commands.
# The hook payload arrives as JSON on stdin; the Bash tool's command
# string lives under .tool_input.command.
COMMAND=$(jq -r '.tool_input.command // empty')
if echo "$COMMAND" | grep -qE 'sf org delete|sf data delete|sf project delete'; then
  # Exit code 2 blocks the tool call; the stderr message is what gets surfaced.
  echo "Destructive command blocked by policy" >&2
  exit 2
fi
```

```bash
#!/bin/bash
# PostToolUse hook: auto-format Salesforce source files.
# Edit/Write payloads carry the target path under .tool_input.file_path.
FILE=$(jq -r '.tool_input.file_path // empty')
if echo "$FILE" | grep -qE 'force-app/main/.*(classes|triggers|lwc)/'; then
  npx prettier --write "$FILE"
fi
```

Other gotchas worth memorizing from the course:

- **Hooks are snapshotted at session start.** Edit a hook file mid-session and the change does not take effect until you start a new session.
- **Exit code 2 ignores your JSON output.** Use stderr for messages you want surfaced when blocking.
- **`~/.bashrc` print statements break JSON parsing.** If your shell profile prints anything to stdout on load, it corrupts the JSON Claude Code expects. Test hooks in a clean shell.
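
A habit that follows from these gotchas: exercise a hook outside a live session before trusting it. This sketch writes a minimal blocking hook, then drives it with a fake payload under `env -i` so nothing from your shell profile can interfere (the hook deliberately uses only bash builtins, and the payload shape is the stdin JSON described above):

```shell
#!/bin/bash
# Write a minimal PreToolUse-style hook that blocks "sf org delete".
mkdir -p .claude/hooks
cat > .claude/hooks/block-destructive.sh <<'EOF'
#!/bin/bash
IFS= read -r input   # hook payload: one line of JSON on stdin
if [[ "$input" == *"sf org delete"* ]]; then
  echo "Destructive command blocked by policy" >&2
  exit 2
fi
exit 0
EOF
chmod +x .claude/hooks/block-destructive.sh

# Drive it with a fake payload in a clean environment:
echo '{"tool_input":{"command":"sf org delete scratch"}}' |
  env -i bash .claude/hooks/block-destructive.sh
echo "exit code: $?"   # expect 2 for a blocked command
```

If your `~/.bashrc` prints anything, the `env -i` run will behave differently from an interactive shell — which is exactly the failure mode you want to catch here rather than mid-session.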

Hooks can also be defined directly in skills and subagents using frontmatter. These hooks are scoped to the component's lifecycle and only run when that component is active.

The core insight: hooks solve a fundamental tension in agentic AI. Claude is probabilistic. "Usually does the right thing" is not an acceptable reliability bar for production code or compliance contexts. Hooks let you define invariants that must always hold and enforce them deterministically, regardless of what Claude decides.

---

## Headless Mode: Claude Code in Your Pipeline

The `-p` flag is the least-understood feature in the tool, and the course gives it the treatment it deserves. To be clear: this is not the Agent SDK. This is the CLI's non-interactive mode — you pass a prompt, Claude uses its tools, and exits when done.

When you invoke Claude Code with `-p`, it runs headlessly. Combined with `--output-format json` and `--allowedTools`, you have a scriptable AI component you can drop into any pipeline:

```bash
# Run Claude Code as a CI step, output structured JSON
claude -p "Review the changed files for Apex governor limit violations.
  Output a JSON array of findings with file, line, and severity." \
  --allowedTools Read,Glob,Grep \
  --output-format json \
  > findings.json
```

I was already using `claude -p` daily before the course — for batch analysis, for generating structured output from code changes, for scripted documentation passes. If you find yourself running the same `claude -p` invocation repeatedly, consider wrapping it as a shell alias or a custom command. That part was not new.

What was new was thinking clearly about `--allowedTools`. I had been specifying it without thinking deeply about why. After the course module, it clicked: this runs in headless mode, so who will approve tool permissions? Nobody. You must grant them upfront. If you omit `--allowedTools`, tool calls that would normally prompt for approval are denied, and the run fails silently or behaves unexpectedly depending on your permission settings. Always specify exactly which tools you're permitting.

---

## Where I See This in Enterprise Salesforce Work

Let me be direct about the translation layer, because it is not obvious.

Salesforce development has constraints that most web-app tutorials ignore: metadata-driven deployments, sandbox refresh cycles, org-specific governor limits, CI pipelines through the sf CLI, and codebases where a single bad deploy can affect thousands of users.

The concepts from this course map — but onto a different tool surface.

**Hooks map to pre-deploy validation and stopping destructive changes.** A `PreToolUse` hook can validate any Apex class Claude generates against your org's naming conventions and check that test coverage stubs are present. It can also block destructive metadata operations — deleting a custom field, removing a permission set assignment — the same way you'd block `rm -rf`. You can run `sf code-analyzer` as a `Stop` hook at session end or as a `PreToolUse` hook that fires before any `sf project deploy` command. The hook contract does not care what's inside the script.

I keep a user-level hook that blocks any command containing our production org URL, alias, or username. Claude does not touch production automatically. Metadata operations get reviewed per-command; data operations are blocked entirely.
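
A sketch of that guard, written as a function so the policy is easy to exercise inline — the org alias, domain, and username below are placeholders for your own values:

```shell
#!/bin/bash
# Hypothetical user-level PreToolUse hook body. In the real hook script the
# JSON payload arrives on stdin; here it's passed as an argument for the demo.
guard_production() {
  local payload="$1"
  local marker
  for marker in "--target-org prod" "mycompany.my.salesforce.com" "admin@mycompany.com"; do
    if [[ "$payload" == *"$marker"* ]]; then
      echo "Blocked: command references production ($marker)" >&2
      return 2
    fi
  done
  return 0
}

# Simulate a deploy aimed at production:
guard_production '{"tool_input":{"command":"sf project deploy start --target-org prod"}}'
echo "exit: $?"   # a blocked command returns 2
```

The substring matching is deliberately coarse — tune the marker list to your own aliases and usernames, erring on the side of blocking too much rather than too little.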

**The SDK maps to a layered code review process.** In my CI pipeline, Salesforce Code Analyzer runs first as a static analysis pass. Then Claude Code via GitHub Actions reviews the changes. Then the human reviewer. Three layers.

**CLAUDE.md as index of indexes, not a standards dump.** CLAUDE.md points to where the standards live, organized by type. Apex naming conventions in `docs/apex-standards/`. Test class generation patterns in `docs/testing/`. CLAUDE.md is the entry point, not the encyclopedia. Subagents can read and follow the relevant sections as needed.

**MCP integration maps to org-aware tooling.** MCP servers give Claude Code access to external data sources. For Salesforce, this points toward live org schema access, field metadata, and sandbox state. I use sf CLI directly and prefer it over MCP for now. But if I needed to make org operations available to a wider group — or scope what commands are available — MCP would be the convenient choice.
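
For reference, a project-scoped `.mcp.json` entry follows the standard MCP server-config shape — the server script and env var here are hypothetical:

```json
{
  "mcpServers": {
    "salesforce-org": {
      "command": "node",
      "args": ["./tools/sf-mcp-server.js"],
      "env": { "SF_TARGET_ORG": "dev-sandbox" }
    }
  }
}
```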

---

## Topics to Explore After the Course

These are not gaps so much as natural boundaries. The course covers what it sets out to cover well. These topics exist in other courses and in the official documentation — worth knowing where to look next.

**Multi-agent orchestration depth.** The course mentions that Claude Code can spawn subagents but does not go deep on coordinating multiple agents in parallel or in sequence. This is the frontier of agentic AI workflows — the [sub-agents documentation](https://code.claude.com/docs/en/sub-agents) and the superpowers plugin are good starting points.

**Building MCP servers.** The course shows you how to connect to existing MCP servers. It does not teach you to build one. For Salesforce developers, a Salesforce-aware MCP server (org schema, Apex class index, deployment status) would be extremely valuable. The [MCP documentation](https://modelcontextprotocol.io) covers server development.

**Enterprise deployment and security hardening.** How do you deploy Claude Code in a team setting? Manage API keys, enforce hook policies across developers, audit what Claude did in a session? The course assumes a solo developer context throughout. Team deployment is a design problem the course leaves to you.

**Testing hooks.** The course shows how to write hooks but not how to test them. Hook scripts run in a live session, and debugging a hook that corrupts JSON output or silently fails is painful. A testing framework or at minimum a recommended testing approach would have been useful.

---

## Verdict

Take the course. It's free, and the hooks section and context management practices alone are worth it — not because they teach obscure features, but because they give you the mental model for thinking about agentic AI reliability. That model applies regardless of which AI coding tool you end up using long-term.

**If you're a Salesforce developer** who has never touched Claude Code: start here, then spend an afternoon getting it connected to an active project. Get your project's conventions into a CLAUDE.md that points to structured documentation rather than dumping everything in one place. You'll see an immediate quality improvement.

**If you're a senior engineer** already using Claude Code: skim Section 1 and 2, spend your time in Section 3. The hooks gotchas section — especially the full path requirement and the regex matcher syntax — has things you will not have encountered unless you've written production hooks already. Also: start using <kbd>Esc</kbd><kbd>Esc</kbd> checkpoints deliberately if you're not already.

**If you're a technical lead evaluating team adoption:** the course will give you vocabulary for an informed conversation about guardrails, context policy, and CI integration. It won't give you an enterprise deployment playbook — you'll need to build that — but it gives you the right questions to ask. Pay particular attention to what the course says about headless mode and `--allowedTools`, then read the [official documentation](https://code.claude.com/docs/en/github-actions) for GitHub Actions integration.

---

**What to study next:** After the course, spend time with the official documentation on [memory](https://code.claude.com/docs/en/memory), [sub-agents](https://code.claude.com/docs/en/sub-agents), and [hooks](https://code.claude.com/docs/en/hooks) — they have significantly more depth than the course on all three. If you're interested in the multi-agent angle, the [how Claude Code works](https://code.claude.com/docs/en/how-claude-code-works) deep-dive is the right starting point.