Superpowers: a skills framework that pulls coding agents back into engineering process

A summary of obra/superpowers: positioning, installation targets, base workflow, skills library, and boundaries. It combines brainstorming, planning, TDD, code review, worktrees, and subagents into a coding-agent methodology.

obra/superpowers is both a skills framework for coding agents and a software development methodology. Its goal is not to add another universal prompt, but to make agents follow a process: clarify goals, produce a design, write a plan, implement through TDD, then review and finish.

Project: https://github.com/obra/superpowers

At the time of writing, the GitHub API shows more than 190,000 stars, an MIT license, and recent activity. The README describes it plainly: "An agentic skills framework & software development methodology that works."

What problem it solves

Many AI coding tools are not weak at writing code; they are too eager to write code.

A user says something vague, the agent edits files, and the result looks finished while boundaries, tests, and architecture remain unclear. Small tasks may survive this. Complex projects turn it into rework and technical debt.

Superpowers makes the agent enter a workflow before touching code:

  1. When the user wants to build something, ask about the goal first.
  2. Turn the conversation into a spec and confirm it in sections.
  3. After design approval, write an implementation plan.
  4. After the user says “go”, begin implementation.
  5. During implementation, emphasize TDD, YAGNI, DRY, and code review.

This is not new software engineering. It is important because fast agents need stronger guardrails.

Supported tools

Superpowers is not tied to a single agent. The README lists installation paths for Claude Code, Codex CLI, Codex App, Factory Droid, Gemini CLI, OpenCode, Cursor, and GitHub Copilot CLI.

That makes it more like a workflow layer across harnesses than a model-specific trick.

The base workflow

The base workflow moves through seven stages.

First is brainstorming. Before implementation, the agent turns rough ideas into an executable design and confirms it with the user.

Second is using-git-worktrees. After design approval, it creates an isolated worktree and branch, then checks that install and test baselines are clean.
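The underlying git operations are ordinary worktree commands. A hedged sketch of what this stage corresponds to, as a Python helper that only builds the command lists (the helper name and directory layout are assumptions, not from Superpowers):

```python
# Sketch of the git commands behind the using-git-worktrees step.
# Builds argv lists only; run them with subprocess inside a real repo.
# Helper name and sibling-directory layout are illustrative assumptions.

def worktree_commands(branch: str, base: str = "main") -> list[list[str]]:
    path = f"../{branch}"  # isolated sibling directory for the new branch
    return [
        ["git", "worktree", "add", "-b", branch, path, base],  # new branch + dir
        ["git", "-C", path, "status", "--short"],              # confirm clean tree
    ]

for cmd in worktree_commands("feature-login"):
    print(" ".join(cmd))
```

The clean-baseline check matters: if install or tests already fail before the agent starts, every later failure is ambiguous.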

Third is writing-plans. It decomposes the design into small tasks, each with file paths, code scope, and validation steps. The plan should be clear enough for someone without context to execute.
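One way to picture the shape of a plan task is as a small record with exactly those fields. The field names are an assumption based on the description above, not the skill's actual format:

```python
# Illustrative shape of one plan task: explicit paths, a bounded scope,
# and a verification step, so someone without context could execute it.
# Field names are assumptions, not Superpowers' actual plan format.
from dataclasses import dataclass

@dataclass
class PlanTask:
    title: str
    files: list[str]       # exact paths the task may touch
    scope: str             # what to change, and what to leave alone
    validation: list[str]  # commands or checks that prove it worked

task = PlanTask(
    title="Add login rate limiting",
    files=["src/auth/login.py", "tests/test_login.py"],
    scope="Limit to 5 attempts/minute per IP; do not touch session handling.",
    validation=["pytest tests/test_login.py"],
)
print(task.title)
```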

Fourth is execution. subagent-driven-development can dispatch tasks to subagents, while executing-plans runs them in batches. Each task should be reviewable and verifiable.
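The batch structure can be sketched with ordinary thread workers standing in for subagents; the key property is that each batch finishes (and can be reviewed) before the next begins. This is an illustration of the pattern, not how Superpowers dispatches subagents internally:

```python
# Sketch of batched execution: independent tasks go to workers
# (standing in for subagents), one batch at a time. Illustrative only.
from concurrent.futures import ThreadPoolExecutor

def run_batches(tasks, worker, batch_size=2):
    results = []
    for i in range(0, len(tasks), batch_size):
        batch = tasks[i:i + batch_size]
        with ThreadPoolExecutor(max_workers=batch_size) as pool:
            # each batch completes before the next starts, so review
            # checkpoints fall between batches rather than mid-flight
            results.extend(pool.map(worker, batch))
    return results

done = run_batches(["task-a", "task-b", "task-c"], lambda t: f"{t}: done")
print(done)
```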

Fifth is test-driven-development: true RED-GREEN-REFACTOR. Write a failing test, confirm failure, implement minimally, confirm pass, refactor.
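The cycle in miniature, using a hypothetical `slugify` function as the example:

```python
# RED-GREEN in miniature. The test comes first, before any implementation:
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# RED: running test_slugify() here fails with NameError -- the failure
# must be observed, not assumed.

# GREEN: the minimal implementation that makes the test pass, nothing
# speculative added (YAGNI):
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

test_slugify()  # now passes; refactoring happens only after this point
print("green")
```

Confirming the RED step is the part agents most often skip: without a witnessed failure, a passing test proves nothing about the new code.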

Sixth is requesting-code-review. Reviews happen between tasks; critical findings block progress.

Finally, finishing-a-development-branch validates tests and offers choices such as merge, PR, keep, or discard the worktree.

What is in the skills library

The skills library can be grouped by purpose.

Testing centers on test-driven-development.

Debugging includes systematic-debugging and verification-before-completion. They focus on reproduction, minimization, hypotheses, validation, and not claiming completion before verification.

Collaboration skills include:

  • brainstorming
  • writing-plans
  • executing-plans
  • dispatching-parallel-agents
  • requesting-code-review
  • receiving-code-review
  • using-git-worktrees
  • finishing-a-development-branch
  • subagent-driven-development

Meta skills include writing-skills and using-superpowers.

Together they give the agent engineering habits: when to ask, when to plan, when to test, and when to stop for review.

How it differs from a prompt

A normal prompt often piles rules into one system message: do not over-edit, think first, test, explain, be concise. As rules accumulate, complex tasks make the model forget or ignore some of them.

Superpowers splits rules into phase-specific workflow modules. Each skill is shorter and focused. The agent knows the current phase, complex processes become checkable, and teams can turn their own practices into reusable skills.
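The difference is easy to see as a lookup: instead of one giant system message, the agent loads only the rules for its current phase. The skill names mirror the library above; the loader itself is an illustration, not Superpowers' mechanism:

```python
# Sketch of phase-scoped instructions. Skill names mirror the library
# described earlier; the loader is illustrative, not the real mechanism.
SKILLS = {
    "design": "brainstorming: ask about the goal; confirm the spec in sections.",
    "plan": "writing-plans: small tasks with paths, scopes, validation steps.",
    "implement": "test-driven-development: RED-GREEN-REFACTOR, minimal code.",
    "review": "requesting-code-review: critical findings block progress.",
}

def active_instructions(phase: str) -> str:
    # the agent sees one short, focused skill at a time instead of
    # every rule for every phase at once
    return SKILLS[phase]

print(active_instructions("implement"))
```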

The lesson is not just “use a smarter model”. Give the model a repeatable way to work.

Who should use it

Superpowers is most useful for developers already using coding agents on real projects, especially when:

  • The task spans multiple files.
  • The agent should design before implementation.
  • TDD or validation matters.
  • Multiple branches or worktrees are common.
  • Subagents can help with implementation or review.
  • A team wants to encode its workflow as skills.

For a one-line config change, it may feel heavy. For multi-step development, the constraints are valuable.

Notes before using it

Do not treat it as full autopilot. It gives the agent process, but humans still own requirements, tradeoffs, and final acceptance.

TDD and review add upfront cost. For small tasks they may slow things down; for complex tasks they reduce rework.

Parallel subagents are not always better. They work when boundaries and write scopes are clear. If the requirement is still fuzzy, parallelism only multiplies confusion.

Teams must maintain skill quality. Outdated processes, vague instructions, and conflicting rules can also hurt agents.

Summary

Superpowers is valuable because it pulls coding agents away from “receive request, edit code” and back into software engineering process.

AI coding often lacks not generation speed, but clarification, planning, verification, review, and closure. The stronger the model becomes, the less these steps should be skipped.

If you use Codex, Claude Code, Cursor, or Gemini CLI on real projects, Superpowers is worth studying. Even if you do not install it, its skill decomposition is a good reference for designing your own agent workflow.
