
Systematic Debugging

4-phase root cause debugging: understand bugs before fixing.

Source: Bundled (installed by default)
Path: skills/software-development/systematic-debugging
Version: 1.1.0
Author: Hermes Agent (adapted from obra/superpowers)
License: MIT
Tags: debugging, troubleshooting, problem-solving, root-cause, investigation
Related skills: test-driven-development, writing-plans, subagent-driven-development

The following is the complete skill definition that Hermes loads when this skill is triggered. This is what the agent sees as instructions when the skill is active.

Random fixes waste time and create new bugs. Quick patches mask underlying issues.

Core principle: ALWAYS find the root cause before attempting fixes. Fixing symptoms is failure.

Violating the letter of this process is violating the spirit of debugging.

NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST

If you haven’t completed Phase 1, you cannot propose fixes.

Use for ANY technical issue:

  • Test failures
  • Bugs in production
  • Unexpected behavior
  • Performance problems
  • Build failures
  • Integration issues

Use this ESPECIALLY when:

  • Under time pressure (emergencies make guessing tempting)
  • “Just one quick fix” seems obvious
  • You’ve already tried multiple fixes
  • Previous fix didn’t work
  • You don’t fully understand the issue

Don’t skip when:

  • Issue seems simple (simple bugs have root causes too)
  • You’re in a hurry (rushing guarantees rework)
  • Someone wants it fixed NOW (systematic is faster than thrashing)

You MUST complete each phase before proceeding to the next.


Phase 1: Root Cause Investigation

BEFORE attempting ANY fix:

1. Read Error Messages Carefully

  • Don’t skip past errors or warnings
  • They often contain the exact solution
  • Read stack traces completely
  • Note line numbers, file paths, error codes

Action: Use read_file on the relevant source files. Use search_files to find the error string in the codebase.
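For example, a minimal sketch (the error string and paths are placeholders, and read_file is assumed to take a file path):

# Find every occurrence of the error text in the codebase
search_files("ValueError: invalid state", path="src/", file_glob="*.py")
# Read the implicated file with line numbers for precise analysis
read_file("src/state_machine.py")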

2. Reproduce the Issue

  • Can you trigger it reliably?
  • What are the exact steps?
  • Does it happen every time?
  • If not reproducible → gather more data, don’t guess

Action: Use the terminal tool to run the failing test or trigger the bug:

Terminal window
# Run specific failing test
pytest tests/test_module.py::test_name -v
# Run with verbose output
pytest tests/test_module.py -v --tb=long

3. Check Recent Changes

  • What changed that could cause this?
  • Git diff, recent commits
  • New dependencies, config changes

Action:

Terminal window
# Recent commits
git log --oneline -10
# Uncommitted changes
git diff
# Changes in specific file
git log -p --follow src/problematic_file.py | head -100

4. Gather Evidence in Multi-Component Systems


WHEN system has multiple components (API → service → database, CI → build → deploy):

BEFORE proposing fixes, add diagnostic instrumentation:

For EACH component boundary:

  • Log what data enters the component
  • Log what data exits the component
  • Verify environment/config propagation
  • Check state at each layer

Run once to gather evidence showing WHERE it breaks. THEN analyze evidence to identify the failing component. THEN investigate that specific component.
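A minimal sketch of boundary instrumentation in Python (the component, function names, and PAYMENT_URL are hypothetical; adapt to your stack):

import logging
import os

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("diagnostics")

def process(order):
    # Stand-in for the real downstream call being diagnosed
    return {"status": "ok", "order": order}

def handle_order(order):
    # Log what data enters the component
    log.debug("ENTER handle_order: order=%r", order)
    # Verify environment/config propagation at this layer
    log.debug("PAYMENT_URL=%r", os.environ.get("PAYMENT_URL"))
    result = process(order)
    # Log what data exits the component
    log.debug("EXIT handle_order: result=%r", result)
    return result

handle_order({"id": 42})

Run once and compare the ENTER/EXIT lines layer by layer; the first boundary where the data stops looking right is the failing component.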

5. Trace Data Flow

WHEN error is deep in the call stack:

  • Where does the bad value originate?
  • What called this function with the bad value?
  • Keep tracing upstream until you find the source
  • Fix at the source, not at the symptom

Action: Use search_files to trace references:

# Find where the function is called (parenthesis escaped: patterns are regexes)
search_files("function_name\\(", path="src/", file_glob="*.py")
# Find where the variable is set
search_files("variable_name\\s*=", path="src/", file_glob="*.py")

Phase 1 complete when:

  • Error messages fully read and understood
  • Issue reproduced consistently
  • Recent changes identified and reviewed
  • Evidence gathered (logs, state, data flow)
  • Problem isolated to specific component/code
  • Root cause hypothesis formed

STOP: Do not proceed to Phase 2 until you understand WHY it’s happening.


Phase 2: Pattern Analysis

Find the pattern before fixing:

1. Find Working Examples

  • Locate similar working code in the same codebase
  • What works that’s similar to what’s broken?

Action: Use search_files to find comparable patterns:

search_files("similar_pattern", path="src/", file_glob="*.py")

2. Read Reference Implementations Completely

  • If implementing a pattern, read the reference implementation COMPLETELY
  • Don’t skim — read every line
  • Understand the pattern fully before applying

3. Compare Working vs. Broken

  • What’s different between working and broken?
  • List every difference, however small (see the diff sketch after this list)
  • Don’t assume “that can’t matter”

4. Identify Dependencies

  • What other components does this need?
  • What settings, config, environment?
  • What assumptions does it make?
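One mechanical way to list every difference (a sketch; the file names are placeholders):

import difflib

# Compare the broken module against the similar working one, line by line
with open("src/working_client.py") as f:
    working = f.readlines()
with open("src/broken_client.py") as f:
    broken = f.readlines()

# unified_diff surfaces every difference, however small
for line in difflib.unified_diff(working, broken,
                                 fromfile="working_client.py",
                                 tofile="broken_client.py"):
    print(line, end="")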

Phase 3: Hypothesis Testing

Scientific method:

1. Form a Single Hypothesis

  • State clearly: “I think X is the root cause because Y”
  • Write it down
  • Be specific, not vague

2. Test It Minimally

  • Make the SMALLEST possible change to test the hypothesis (see the sketch after this list)
  • One variable at a time
  • Don’t fix multiple things at once

3. Evaluate the Result

  • Did it work? → Phase 4
  • Didn’t work? → Form NEW hypothesis
  • DON’T add more fixes on top

4. If You’re Stuck, Say So

  • Say “I don’t understand X”
  • Don’t pretend to know
  • Ask the user for help
  • Research more
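A minimal hypothesis test might look like this (the bug, module, and names are hypothetical; the point is one variable, smallest possible change):

# Hypothesis: parse_record breaks on Windows line endings ("\r\n"),
# because the working fixture uses "\n" and the failing one uses "\r\n".
from myapp.parser import parse_record  # hypothetical module under test

def test_parse_record_handles_crlf():
    # Identical input except the single variable under test: the line ending
    assert parse_record("widget,42\n") == {"name": "widget", "value": 42}
    assert parse_record("widget,42\r\n") == {"name": "widget", "value": 42}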

Phase 4: Implementation

Fix the root cause, not the symptom:

1. Create a Regression Test

  • Simplest possible reproduction
  • Automated test if possible
  • MUST have before fixing (see the sketch below)
  • Use the test-driven-development skill

2. Implement the Minimal Fix

  • Address the root cause identified
  • ONE change at a time
  • No “while I’m here” improvements
  • No bundled refactoring
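A regression test is just the reproduction, automated (a sketch with hypothetical names):

# tests/test_regression.py
from myapp.billing import apply_discount  # hypothetical function with the bug

def test_discount_not_applied_twice():
    # Fails before the fix (RED), passes after (GREEN),
    # and guards against the regression from then on.
    order = {"total": 100.0, "discount": 0.10}
    assert apply_discount(order) == 90.0
    assert apply_discount(order) == 90.0  # a second call must not discount again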
3. Verify the Fix

Terminal window
# Run the specific regression test
pytest tests/test_module.py::test_regression -v
# Run full suite — no regressions
pytest tests/ -q

4. If Fix Doesn’t Work — The Rule of Three

  • STOP.
  • Count: How many fixes have you tried?
  • If < 3: Return to Phase 1, re-analyze with new information
  • If ≥ 3: STOP and question the architecture (step 5 below)
  • DON’T attempt Fix #4 without architectural discussion

5. If 3+ Fixes Failed: Question Architecture


Pattern indicating an architectural problem:

  • Each fix reveals new shared state/coupling in a different place
  • Fixes require “massive refactoring” to implement
  • Each fix creates new symptoms elsewhere

STOP and question fundamentals:

  • Is this pattern fundamentally sound?
  • Are we “sticking with it through sheer inertia”?
  • Should we refactor the architecture vs. continue fixing symptoms?

Discuss with the user before attempting more fixes.

This is NOT a failed hypothesis — this is a wrong architecture.


Red Flags

If you catch yourself thinking:

  • “Quick fix for now, investigate later”
  • “Just try changing X and see if it works”
  • “Add multiple changes, run tests”
  • “Skip the test, I’ll manually verify”
  • “It’s probably X, let me fix that”
  • “I don’t fully understand but this might work”
  • “Pattern says X but I’ll adapt it differently”
  • “Here are the main problems: [lists fixes without investigation]”
  • Proposing solutions before tracing data flow
  • “One more fix attempt” (when already tried 2+)
  • Each fix reveals a new problem in a different place

ALL of these mean: STOP. Return to Phase 1.

If 3+ fixes failed: Question the architecture (Phase 4 step 5).

Excuse → Reality

  • “Issue is simple, don’t need process” → Simple issues have root causes too. The process is fast for simple bugs.
  • “Emergency, no time for process” → Systematic debugging is FASTER than guess-and-check thrashing.
  • “Just try this first, then investigate” → The first fix sets the pattern. Do it right from the start.
  • “I’ll write the test after confirming the fix works” → Untested fixes don’t stick. Test-first proves it.
  • “Multiple fixes at once saves time” → You can’t isolate what worked, and it causes new bugs.
  • “Reference too long, I’ll adapt the pattern” → Partial understanding guarantees bugs. Read it completely.
  • “I see the problem, let me fix it” → Seeing symptoms ≠ understanding root cause.
  • “One more fix attempt” (after 2+ failures) → 3+ failures = architectural problem. Question the pattern, don’t fix again.

Phase summary:

  • Phase 1, Root Cause → read errors, reproduce, check changes, gather evidence, trace data flow. Success: understand WHAT and WHY.
  • Phase 2, Pattern → find working examples, compare, identify differences. Success: know what’s different.
  • Phase 3, Hypothesis → form theory, test minimally, one variable at a time. Success: confirmed or new hypothesis.
  • Phase 4, Implementation → create regression test, fix root cause, verify. Success: bug resolved, all tests pass.

Use these Hermes tools during Phase 1:

  • search_files — Find error strings, trace function calls, locate patterns
  • read_file — Read source code with line numbers for precise analysis
  • terminal — Run tests, check git history, reproduce bugs
  • web_search/web_extract — Research error messages, library docs

For complex multi-component debugging, dispatch investigation subagents:

delegate_task(
    goal="Investigate why [specific test/behavior] fails",
    context="""
Follow systematic-debugging skill:
1. Read the error message carefully
2. Reproduce the issue
3. Trace the data flow to find root cause
4. Report findings — do NOT fix yet

Error: [paste full error]
File: [path to failing code]
Test command: [exact command]
""",
    toolsets=['terminal', 'file']
)

Pair this with the test-driven-development skill when fixing bugs:

  1. Write a test that reproduces the bug (RED)
  2. Debug systematically to find root cause
  3. Fix the root cause (GREEN)
  4. The test proves the fix and prevents regression

Typical results from debugging sessions:

  • Systematic approach: 15-30 minutes to fix
  • Random fixes approach: 2-3 hours of thrashing
  • First-time fix rate: 95% vs 40%
  • New bugs introduced: Near zero vs common

No shortcuts. No guessing. Systematic always wins.