mattpocock/skills is a public collection of AI coding agent skills by Matt Pocock.
It is not a full application or a new chat client; it is a set of working skills for AI coding assistants. The idea is practical: break common AI coding problems into small skills that an Agent can invoke for the task at hand, instead of relying on one huge prompt every time.
If you often use Claude Code, Codex, Cursor, or similar AI coding tools, a skills collection like this is worth watching. What shapes the AI coding experience is often not whether the model can write code, but whether it can move through a task in your preferred working style.
What Problem It Solves
AI coding assistants are powerful, but they can easily go wrong.
Common situations include:
- Starting code changes before understanding the requirement
- Modifying too many files at once
- Producing lots of explanation but little useful action
- Blindly trying things after errors
- Not running tests or checks in time
- Ignoring existing project patterns
- Introducing unnecessary abstractions to finish a task
- Writing code without truly reviewing risks afterward
These problems are not always caused by weak model capability. Often, the workflow is not constrained well enough.
The value of mattpocock/skills is that it turns these common failure modes into reusable working methods, so the Agent behaves more like an experienced engineering collaborator across different scenarios.
What Are Skills
In the AI Agent context, a skill can be understood as a reusable task instruction, working method, or professional workflow.
It does not have to be a code plugin, and it does not always need to call an external service. In many cases, a skill is simply a clear set of rules:
- When to use it
- What to do first
- What not to do
- What output is required
- How to judge task completion
This is somewhat like a normal prompt template, but the granularity is closer to a single task-level capability.
Normal prompt templates are usually copied and pasted manually by the user. Skills work better as part of the Agent's toolbox, letting it choose the right workflow for the task at hand.
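For concreteness, here is a minimal sketch of what such a skill file might look like, using the SKILL.md frontmatter convention that Claude Code reads. The skill name and rules are illustrative, not taken from the repository:

```markdown
---
name: plan-before-editing
description: Use when a task touches more than one file or changes existing behavior. Produce a short plan and wait for confirmation before editing.
---

<!-- illustrative skill, not taken from mattpocock/skills -->

# Plan Before Editing

When to use: any task that touches more than one file or changes existing behavior.

Before doing anything else:
1. Restate the goal in one or two sentences.
2. List the files you expect to change and why.
3. State explicitly what you will NOT do.

Do not edit any file until the plan is acknowledged.

Done when: every change in the confirmed plan is applied and verified.
```

Each of the five points above corresponds to a concrete line in the file, which is what makes a skill checkable rather than aspirational.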
Why Small and Composable Matters
The README emphasizes that these skills are small and composable.
This direction matters.
If one skill tries to handle everything, it quickly becomes a new giant prompt: long, vague, and hard to maintain. The advantage of small skills is clear boundaries.
For example, one skill can focus on:
- Planning first
- Fixing TypeScript errors
- Running tests and fixing based on results
- Doing code review
- Summarizing project conventions
- Improving prompts
- Removing unnecessary abstractions
These skills can be combined according to the task. A simple task may need only one skill, while a complex task can chain several together.
This is closer to real engineering work. You do not use the same workflow for every problem; you choose tools according to the situation.
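In file terms, narrow boundaries show up mostly in the frontmatter description, which tells the Agent when a skill applies. Two hypothetical skills, assuming Claude Code's `.claude/skills/` layout (names and paths invented for illustration), might be scoped like this:

```markdown
<!-- .claude/skills/fix-ts-errors/SKILL.md (hypothetical) -->
---
name: fix-ts-errors
description: Use when the type checker reports errors. Fix one error at a time and re-run the check after each fix.
---

<!-- .claude/skills/remove-dead-abstractions/SKILL.md (hypothetical) -->
---
name: remove-dead-abstractions
description: Use during cleanup when a wrapper or helper has only one caller. Inline it unless there is a recorded reason to keep it.
---
```

Neither skill knows about the other, yet a single cleanup task can trigger both in sequence.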
Keeping the Engineer in Control
One important direction of this repository is keeping the engineer in control.
AI coding can easily slide into two extremes.
The first is fully manual. AI only helps write a few lines of code, while all context, planning, and verification still depend on you.
The second is fully hands-off. You throw a task to an Agent, let it change a lot of things, and then face a diff that is hard to review.
Skills help find a stable middle ground.
They let AI take on more of the repetitive workflow while still constraining it with rules:
- Understand the task before acting
- Read relevant files before editing
- Keep the modification scope controlled
- Report uncertainty
- Verify after changes
- Do not refactor unrelated code just to show off
This does not weaken AI. It makes AI actions easier for humans to review and take over.
Alignment Problems
The first kind of AI coding failure is often alignment failure.
The user wants a very specific change, but the Agent interprets it as a larger refactor. The user only wants a bug fixed, but the Agent also changes styling along the way. The user wants the existing architecture followed, but the Agent introduces a new pattern.
Skills can help the Agent do several things at the start of a task:
- Restate the goal
- Identify the impact scope
- Recognize existing implementation patterns
- Provide a plan
- Clarify what will not be done
This step is like an engineer’s self-check before starting work.
If the Agent starts writing code before it can clearly state the task boundary, the task can easily drift.
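A skill can enforce this self-check by fixing the shape of the Agent's first reply. A hypothetical alignment template:

```markdown
<!-- hypothetical output template a skill could require -->
## Task Alignment

Goal: <the request restated in one sentence>
Scope: <files and modules expected to change>
Existing patterns: <how the codebase already solves similar problems>
Plan:
1. <step>
2. <step>
Out of scope: <what will NOT be changed>
```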
Feedback Loop Problems
AI should not write code through one-shot generation alone.
In real development, feedback loops matter:
- Change a small piece
- Run tests or type checks
- Read the errors
- Fix them
- Verify again
Many Agents fail because they skip the middle of this loop. They change many things at once and then conclude from intuition that “it should work.”
Skills can make the feedback loop explicit. For example, they can require the Agent to:
- Run relevant checks after modification
- Read error messages first if checks fail
- Avoid blindly changing unrelated files
- Re-verify after each round of fixes
- Report final verification results
This makes AI coding more like real debugging and less like one-shot writing.
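Written as a skill, the loop might look like the following sketch (illustrative, not taken from the repository):

```markdown
---
name: verify-after-edit
description: Use after any code change. Run the project's checks, read the failures, fix, and re-verify before reporting done.
---

<!-- illustrative skill, not taken from mattpocock/skills -->

# Verify After Edit

After every change:
1. Run the type check and the tests relevant to the touched files.
2. If a check fails, read the full error message before editing anything.
3. Fix only what the error points at; do not touch unrelated files.
4. Re-run the same checks after each fix.
5. Report the actual final command output, never a guess that it should work.
```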
Architecture Control Problems
AI is good at generating abstractions, and also good at over-generating abstractions.
To complete a small requirement, it may create a service layer, helper functions, configuration objects, type wrappers, and adapters, making the code much more complex than the requirement itself.
This is especially dangerous in large projects. AI-generated abstractions often look “professional,” but they may not match the existing project style and can increase maintenance cost.
Good skills remind the Agent to:
- Prefer existing patterns
- Avoid unnecessary new abstractions
- Avoid refactoring unrelated areas
- Match the change to the size of the task
- Understand the code before designing structure
This reduces output that looks engineered but is actually harder to maintain.
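A hypothetical skill along these lines:

```markdown
---
name: match-existing-patterns
description: Use before adding any new layer, wrapper, or abstraction. Check how the codebase already solves similar problems first.
---

<!-- illustrative skill, not taken from mattpocock/skills -->

# Match Existing Patterns

Before designing structure:
1. Find at least one existing implementation of a similar feature and follow its shape.
2. Add a new abstraction only if the task itself requires it, and say so in the summary.
3. Leave unrelated code untouched, even if it could be "improved".
```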
Why Review Skills Matter
Writing code and reviewing code are different states.
When an Agent writes code, it tends to prove that its implementation works. It may explain why the change should work, but it does not always actively look for risks.
The purpose of a review skill is to switch the Agent’s role:
- Find potential bugs
- Find behavior regressions
- Find missing tests
- Find edge cases
- Find increased complexity
- Find inconsistencies with existing conventions
This matters for AI coding because AI generates code quickly. Without review, users can easily be overwhelmed by large diffs.
A good review output should list issues first, not praise the implementation first. It should help the engineer decide whether the change can be merged.
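A review skill can hard-code that ordering. A hypothetical sketch:

```markdown
---
name: review-diff
description: Use when asked to review a change. Act as a skeptical reviewer, not as the author defending the change.
---

<!-- illustrative skill, not taken from mattpocock/skills -->

# Review Diff

List findings first, most severe first:
- Bugs or behavior regressions
- Missing or weakened tests
- Unhandled edge cases
- New complexity the task did not require
- Departures from existing conventions

End with a verdict: merge, merge with fixes, or rework.
Do not praise the implementation before listing the findings.
```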
Difference from Normal Rules Files
Many AI coding tools support rules, instructions, or memory.
These files usually record long-term rules, such as:
- Project tech stack
- Naming conventions
- Test commands
- Directories not to modify
- Answer style preferences
Skills are more focused on task workflow.
Rules tell the Agent “how to behave in the long term,” while skills tell the Agent “how to execute this kind of task.”
The two work best together.
For example, rules can say the project uses `pnpm test`, while a review skill requires checking test coverage after changes. Then the Agent knows not only the command, but also when to use it.
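Assuming Claude Code's conventions, the pairing might look like this, with the long-term fact in CLAUDE.md and the task workflow in a skill (names and paths invented for illustration):

```markdown
<!-- CLAUDE.md: long-term rules -->
Run tests with `pnpm test`.
Do not modify anything under src/generated/.

<!-- .claude/skills/check-coverage/SKILL.md: task workflow (hypothetical) -->
---
name: check-coverage
description: Use after changing source files. Run the project's test command and confirm the new code paths are exercised.
---
```

The skill deliberately does not repeat the command; it relies on the rules file for it, so the two layers stay independent.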
Suitable Scenarios
Repositories like mattpocock/skills are a good fit if you are:
- Using AI coding tools frequently
- Running Agents on real codebases
- Trying to reduce out-of-scope AI edits
- Making the Agent verify results more actively
- Turning your own engineering habits into skills
- Learning how others design agent workflows
- Turning temporary prompts into a maintainable skill collection
If you only occasionally ask AI to write a small function, you may not need to maintain skills.
But if you already treat AI as a long-term development partner, skills become increasingly important. They are like a reusable working method for the Agent.
How to Learn from This Repository
Even if you do not use every skill directly, you can learn several things from this repository.
First, write down failure modes.
Do not only complain when AI makes a mistake. Turn the patterns it often gets wrong into rules, so a skill can prevent them next time.
Second, keep skills short.
One skill should solve one clear problem. The shorter it is, the easier it is to call correctly and maintain.
Third, make output format clear.
If you want the Agent to list a plan first, execute next, and summarize verification results at the end, write that structure clearly. Vague requirements usually produce vague results.
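For example, a skill can pin the response shape in a few lines (hypothetical snippet):

```markdown
Respond in exactly three sections:
1. Plan: the steps you will take, before any edit.
2. Changes: each file touched and why.
3. Verification: the commands you ran and their actual output.
```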
Fourth, keep human handoff points.
A good skill should not let AI run too far on its own. When there is uncertainty, an expanding impact scope, a failing test, or a product decision to make, it should stop and explain the situation.
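In skill form, handoff points read as explicit stop conditions, for example:

```markdown
Stop and ask the user before continuing if:
- The requirement can be read in more than one way
- The change is spreading beyond the files named in the plan
- A test fails for a reason unrelated to your change
- The decision is about product behavior, not code
```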
Notes for Use
First, do not turn everything into a skill.
Too many skills make the system complex, and the Agent may not know which one to choose. Start with the highest-frequency and most painful scenarios.
Second, skills need iteration.
The first version of a skill may not be good. Watch how AI actually executes it, then gradually delete, add, and rewrite.
Third, do not let skills replace engineering judgment.
Skills can improve workflow, but they cannot guarantee correct implementation. Tests, review, build checks, and human judgment still matter.
Fourth, pay attention to differences between Agents.
Claude Code, Codex, Cursor, and Copilot support instructions, skills, and rules differently. The same idea can be reused, but the specific format should be adjusted for each tool.
Final Thought
What makes mattpocock/skills worth watching is not one magic prompt inside it, but the practical AI coding idea it demonstrates: break engineering experience into small skills, then let the Agent combine them by scenario.
As AI coding moves from occasional assistance into daily workflow, skills become important tools for constraining Agents, keeping engineers in control, and improving feedback quality.