Claude Code's Monitor tool streams background process output directly into the agent session, replacing polling loops with event-driven reactions that cost nothing during idle periods.

Most AI coding agents are polling machines. They check something, wait, check again, wait, check again. It works until you notice how many tokens you are burning on "nothing changed yet."
Claude Code's Monitor tool, shipped in v2.1.98 on April 9, 2026, flips this. Instead of Claude repeatedly asking whether your build finished, Claude watches the output stream and reacts the moment something interesting appears. The difference sounds modest. In practice it changes how you think about what an AI agent can observe.
Before Monitor, the standard approach to having Claude watch a long-running process was /loop. You would set an interval — check every two minutes, check every five minutes — and Claude would re-run the prompt on that schedule. Each cycle consumed tokens whether the CI pipeline had advanced or not.
Anthropic's PM for Claude Code, Noah Zweben, summarized the problem at launch: "Big token saver and great way to move away from polling in the agent loop."
For a two-hour deployment pipeline checked every minute, that is 120 polling cycles. If your deploy fails at minute 90, you have paid for 90 cycles that told you nothing. The Monitor tool pays nothing for silence.
The Monitor tool launches a shell command and treats its stdout as an event stream. Each line printed to stdout becomes one notification delivered to the active session. Claude reacts to that notification immediately — or, if several lines arrive within 200ms of each other, they batch into a single notification so related output groups naturally.
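A contrived way to picture the batching, assuming the 200ms window works as described: the first three lines below print in a single burst and should arrive as one batched notification, while the fourth, printed after a quiet second, should arrive on its own.

printf 'line one\nline two\nline three\n'; sleep 1; echo 'line four'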
Stderr does not trigger events. It goes to an output file you can read later. If you need stderr failures to trigger Claude, merge them explicitly:
python train.py 2>&1 | grep --line-buffered "ERROR\|WARN\|FATAL"
The flag people miss: grep --line-buffered. Without it, grep's stdio layer block-buffers its output whenever it writes to a pipe rather than a terminal, flushing only in multi-kilobyte chunks. Your "real-time" log watcher might delay events by minutes. This is the most common mistake when first using Monitor, and it is an easy one to make because the failure is silent: the monitor just seems slow.
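You can see the difference without Monitor at all. The trailing | cat below stands in for the pipe Monitor reads from; grep line-buffers automatically when writing straight to a terminal, so without it the demo would hide the problem:

while sleep 1; do echo "tick $(date +%s)"; done | grep tick | cat
while sleep 1; do echo "tick $(date +%s)"; done | grep --line-buffered tick | cat

The first pipeline sits silent until grep's buffer fills, then dumps a block of ticks at once; the second prints one tick per second, as it happens.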
Watching a dev server. Your Next.js or Vite dev server prints a predictable error pattern when it crashes or encounters a module resolution failure. Instead of alt-tabbing to the terminal, set Monitor to watch that output:
npm run dev 2>&1 | grep --line-buffered "error\|Error\|failed\|FAILED"
Claude gets notified the moment the server surfaces an error and can start diagnosing before you notice the page has stopped responding.
Tailing CI output. GitHub Actions and most CI systems emit structured log output. You can pipe gh run watch into a filter and have Claude notified only when a job fails or completes. Supply a run ID, since there is no terminal for gh's interactive run picker:
gh run watch <run-id> --exit-status 2>&1 | grep --line-buffered "fail\|error\|success\|completed"
This is genuinely useful for long test suites. Claude can start writing a fix for a failing test while the remaining tests are still running.
Tracking a build or training job. Long-running jobs often emit progress lines with predictable patterns — epoch numbers, percentage completions, checkpoint saves. You can filter for the lines you care about:
python train.py 2>&1 | grep --line-buffered "Epoch\|loss\|checkpoint\|ERROR"
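If the progress output is chattier than you want, you can thin it further. A sketch with awk, forwarding every error line immediately but only every tenth epoch line; the fflush() calls play the same role as grep's --line-buffered:

python train.py 2>&1 | awk '/ERROR/ { print; fflush(); next } /Epoch/ && ++n % 10 == 0 { print; fflush() }'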
PR status monitoring. If you want Claude to know when a PR gets reviewed or its CI changes state, you can write a small polling script that emits output only when the status actually changes:
while sleep 30; do gh pr view 123 --json statusCheckRollup,reviewDecision; done | stdbuf -oL uniq | grep --line-buffered "FAILURE\|APPROVED\|CHANGES_REQUESTED"
The uniq suppresses consecutive identical polls (gh prints the JSON on one line), so a line reaches Claude only when the status actually changes; stdbuf -oL keeps uniq from block-buffering in the pipe.
Two parameters control how long a monitor lives.
persistent: true keeps the monitor alive for the duration of your session. Use this for watchers you want running continuously — a dev server observer, a long-running log tailer, a PR monitor you want active all day.
timeout_ms auto-kills the monitor after a specified duration. Use this for bounded tasks: watching a specific test run, following a deployment window, observing a single training job. Without a timeout, monitors for finite tasks continue running after the task completes, consuming resources quietly.
Getting this wrong in the other direction — using persistent: true for a monitor tied to a specific deploy — means you have a zombie watcher running after the deploy finishes. Not catastrophic, but worth being deliberate about.
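To make the two modes concrete, here is a sketch of both configurations. The parameter names persistent and timeout_ms are the documented ones; the invocation shape and the deploy.sh script are illustrative assumptions, not official syntax:

Monitor:
  command: npm run dev 2>&1 | grep --line-buffered "error\|Error"
  persistent: true

Monitor:
  command: ./deploy.sh 2>&1 | grep --line-buffered "FAIL\|SUCCESS"
  timeout_ms: 2700000   (45 minutes: the expected deploy window plus buffer)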
Piping without --line-buffered. The monitor appears to work but events arrive in batches or with significant delay. Always add --line-buffered to any grep in your pipeline, and reach for stdbuf -oL or an explicit flush when the filter is something other than grep.
Streaming unfiltered output. If your command emits thousands of lines per second, every line becomes a notification. Always pipe raw output through a filter. The goal is for Claude to see meaningful signals, not noise.
Using Monitor as a replacement for proper alerting. Monitor is session-scoped. If your Claude Code session ends, the monitor ends. It is not a production monitoring solution — it is a developer-session tool. For alerts that need to survive outside your terminal session, use your actual monitoring infrastructure.
Forgetting to merge stderr. If the process you are watching crashes with a stderr message, Monitor will not see it unless you add 2>&1 to the pipeline.
Running Monitor in the wrong direction. Monitor is for reacting, not for automating actions without visibility. If Claude is configured to auto-fix errors it observes, make sure you understand what "fix" means before you let it run unattended.
Keep your filter command as a one-liner. Complex filtering logic inside the monitor command is hard to debug when it silently fails. If you need sophisticated filtering, write a small script that emits clean output and pipe that instead.
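For instance, a minimal wrapper along these lines (the script name, log path, and patterns are hypothetical) that collapses a noisy log into one clean line per event worth waking Claude for:

#!/usr/bin/env bash
# filter_events.sh (hypothetical): emit one clean line per meaningful event
tail -F app.log | while IFS= read -r line; do
  case "$line" in
    *FATAL*|*ERROR*)     echo "error: $line" ;;
    *"deploy complete"*) echo "done: $line" ;;
  esac
done

Then point Monitor at bash filter_events.sh instead of the raw pipeline.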
Use timeout_ms for anything with a natural endpoint. Deploys finish. Test runs complete. Training jobs converge. Set a generous timeout that covers the expected duration plus a buffer.
Test the filter command in your terminal first, with one caveat: tack | cat onto the end while testing, because grep line-buffers automatically when writing to a terminal and will otherwise hide the buffering problem Monitor would see. If tail -f app.log | grep --line-buffered "ERROR" | cat does not produce output in your terminal when an error occurs, it will not work in Monitor either.
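A quick way to run that check: leave the pipeline running, and from a second terminal append a line that should match:

echo "$(date '+%H:%M:%S') ERROR: simulated failure" >> app.log

If the matching line appears within about a second, both the filter and its buffering are sound.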
Log what Claude does in response to events. If Monitor triggers a sequence of actions, you want a record of what happened and why. This matters especially when Monitor is running in a longer automated workflow.
Use Monitor for anything where you are currently alt-tabbing to check progress. Dev server crashes, test failures, build completions — these are exactly the bounded, verifiable signals it handles well.
Be more careful about chaining Monitor to automated actions. Watching logs and alerting is low risk. Watching logs and auto-applying changes is a different category. Make sure you understand the action being triggered before you leave it running unattended.
Skip Monitor for production alerting. It is the right tool for a development session, not for infrastructure you need to watch 24/7.
The shift from polling to event-driven is not just an efficiency improvement. It changes the granularity at which an agent can respond to a running system. Claude Code was already useful for writing code alongside a terminal. Monitor makes it useful for observing what the code actually does when it runs. That is a meaningful step toward workflows where the agent and the running system stay in tighter contact without you having to manually relay information between them.
The --line-buffered flag is not optional. Neither is a sensible filter. But once those are in place, Monitor is one of the more immediately practical additions Claude Code has shipped this year.