A GitHub project about AI coding has been getting a lot of attention recently. Its core is not a complex codebase but a roughly 65-line CLAUDE.md file, and it earned its stars not through technical sophistication but by capturing problems many people repeatedly run into when using AI to write code.
The background starts with Andrej Karpathy’s observations on AI coding. Karpathy is an influential educator and engineer in AI: a Stanford PhD, an early OpenAI contributor, and a former Tesla AI leader responsible for Autopilot’s vision system. He has continued to share his views on large models, education, and AI tools, so his comments on changes in programming workflows tend to draw a lot of attention from developers.
He once said that after using Claude Code for a few weeks, his programming style changed noticeably. Previously it was roughly 80% handwritten code and 20% AI assistance; now it is closer to 80% code written by AI and 20% his own edits. He described it as “programming in English”: telling an LLM what to write through natural language.
But he also pointed out several recurring problems in AI coding.
01 Wrong Assumptions
The first problem is that models easily make assumptions on behalf of the user, then keep writing along that path. They do not always manage their own confusion, and they do not always stop to ask questions when the requirement is ambiguous.
For example, if the user only says “add a user export feature”, the model might assume it should export all users, output JSON, write to a local file, and skip any confirmation around permissions or fields. Only after the code is done does the user discover that the model’s understanding does not match the real scenario.
A better approach is to list the uncertainties first: should it export all users or filtered results? Should it trigger a browser download or run as a background job? Which fields are needed? How large is the data set? Are there permission constraints? If these questions are not clarified, writing faster only means drifting farther.
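One way to make those hidden decisions visible is to surface them as explicit parameters instead of silent defaults. The sketch below is purely illustrative (none of these names come from the project): each argument of the hypothetical `export_users` corresponds to one of the questions above, so the caller has to answer it rather than let the model guess.

```python
# Hypothetical sketch: the silent assumptions behind "add a user export
# feature" turned into explicit parameters. All names are illustrative.
import csv
import io
import json

def export_users(users, *, fields, fmt="json", filtered=None):
    """Export `users` with an explicit field list and format.

    fields   -- which columns to include ("which fields are needed?")
    fmt      -- "json" or "csv" ("what output format?")
    filtered -- optional predicate ("all users or a filtered subset?")
    """
    rows = [u for u in users if filtered is None or filtered(u)]
    rows = [{f: u.get(f) for f in fields} for u in rows]
    if fmt == "json":
        return json.dumps(rows)
    if fmt == "csv":
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()
    raise ValueError(f"unknown format: {fmt}")
```

The point is not this particular signature; it is that every ambiguity the model might otherwise resolve on its own becomes a visible choice in the interface.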
02 Over-Complexity
The second problem is that models often turn simple problems into complex ones. A task that could be handled with one function might receive abstract classes, strategy patterns, factory patterns, configuration layers, and a pile of extension points that may never be needed.
This kind of code can look well-engineered, but in practice it increases maintenance cost. AI is especially good at quickly generating large structures, but it does not always judge whether those structures are necessary. The result is that a task solvable in 100 lines balloons into 1,000.
The test is straightforward: would a senior engineer look at the change and think it is over-designed? If the answer is yes, remove the extra layers and solve the current problem with the least code needed.
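A compressed, hypothetical illustration of the same contrast (the discount rule and all names here are invented for the example, not taken from the project): both versions below answer the same question, but one buries it under pattern machinery that nothing in the task asked for.

```python
# Illustrative only: two ways to "pick a discount rate for an order".

# --- over-designed: strategy machinery for a one-off rule ---
class DiscountStrategy:
    def applies(self, total): raise NotImplementedError
    def rate(self): raise NotImplementedError

class BulkDiscount(DiscountStrategy):
    def applies(self, total): return total >= 100
    def rate(self): return 0.10

class NoDiscount(DiscountStrategy):
    def applies(self, total): return True
    def rate(self): return 0.0

STRATEGIES = [BulkDiscount(), NoDiscount()]

def discount_via_strategies(total):
    # Walk the registry until a strategy claims the order.
    for s in STRATEGIES:
        if s.applies(total):
            return s.rate()

# --- minimal: the same behavior, stated directly ---
def discount(total):
    return 0.10 if total >= 100 else 0.0
```

The strategy version only pays off if new discount rules actually arrive; until then, the two-line function is easier to read, test, and delete.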
03 Collateral Damage
The third problem is that models sometimes modify or delete code they do not fully understand. While fixing a small bug, they may casually change comments, reformat nearby code, clean up imports that look unused, or even touch logic unrelated to the current task.
These “drive-by improvements” are risky because they expand the change scope and make review harder. The user may only want to fix a validator crash caused by an empty email, but the model may also enhance email validation, add username validation, and rewrite docstrings. In the end, it becomes hard to tell which line changed behavior.
A safer rule is: only change what must be changed, and only clean up issues caused by your own change. Existing dead code, formatting problems, or historical baggage should not be touched unless the task explicitly asks for it. At most, mention it.
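Applied to the empty-email example above, a precise change might look like the sketch below. This is an assumed, minimal validator (not code from the project): the fix guards only the crashing path, and deliberately leaves the existing loose rule alone rather than "improving" it as a side effect.

```python
# Hedged sketch of a precise change. The reported bug: the validator
# crashes on an empty or missing email. The fix touches only that path.
def validate_email(email):
    # Fix: empty/None input is invalid, not a crash.
    if not email:
        return False
    # Pre-existing (loose) rule, intentionally left untouched: tightening
    # it, adding username checks, or rewriting docstrings would be
    # exactly the kind of drive-by change the rule warns against.
    return "@" in email
```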
04 Turning Complaints Into CLAUDE.md
After Karpathy’s comments spread widely, developer Forrest Cheung did something clever: he organized these complaints into executable behavior rules and put them into a CLAUDE.md file.
The project does not contain complicated code. Its key idea is to turn the most failure-prone parts of AI coding into clear working rules. They can be summarized as four principles.
The first is to think before writing. Do not silently assume. Do not hide confusion. If a requirement has multiple interpretations, list them. If there is a simpler approach, say so. Ask when clarification is needed, and push back when needed.
The second is to keep things simple. Do not add features that were not requested. Do not abstract one-off code. Do not add unnecessary configuration. Do not write large amounts of defensive code for extremely unlikely scenarios. If 50 lines can solve it, do not write 200.
The third is to make precise changes. Every changed line should trace directly back to the user’s request. Do not improve nearby code as a side quest. Do not refactor something that is not broken. Match the existing project style as much as possible.
The fourth is goal-driven execution. Do not give the model only a vague instruction. Give it a verifiable success criterion. For example, “fix the bug” can become “write a test that reproduces the bug, then make it pass”; “add validation” can become “write invalid-input tests and make them pass”. The clearer the success criterion, the easier it is for the model to loop toward completion.
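The "add validation" case can be sketched concretely. Everything below is hypothetical (a made-up `parse_age` helper, pytest-style test functions), but it shows the loop: write invalid-input tests first, then implement until they pass, and passing tests are the explicit "done" signal.

```python
# Sketch of a verifiable success criterion for "add validation".
# parse_age and these tests are invented for illustration.
def parse_age(value):
    """Return the age as an int, or None if the input is invalid."""
    try:
        age = int(value)
    except (TypeError, ValueError):
        return None
    return age if 0 <= age <= 150 else None

# These tests define "done": written before the implementation,
# they fail first, and the task is complete when they pass.
def test_invalid_inputs_are_rejected():
    assert parse_age("abc") is None
    assert parse_age(None) is None
    assert parse_age("-3") is None
    assert parse_age("200") is None

def test_valid_inputs_pass():
    assert parse_age("42") == 42
    assert parse_age(0) == 0
```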
05 Why It Took Off
This project became popular not because the content is mysterious, but because it is close to real development work.
Many people using AI for coding have seen similar scenarios: the model confidently misunderstands the requirement, the code grows more complex as it goes, or it touches places it should not. The value of CLAUDE.md is that it turns those experiences into collaboration rules that can live inside a project.
The entry cost is also low: one file can start making a difference, with no complicated integration. Combined with Karpathy’s influence and the project’s practical comparison examples, it naturally spread through the Claude Code user base and the broader AI coding community.
More importantly, these rules are not only for Claude Code. No matter which AI coding tool you use, the underlying issues are similar: the model needs to know when to ask, when to simplify, when to stop, and how to decide that the task is complete.
06 What Developers Can Take Away
The lesson for ordinary developers is simple: AI coding is not about throwing one sentence at a model and waiting for a miracle. The effective approach is to give the model boundaries.
When the requirement is unclear, ask it to expose its assumptions first. When the implementation starts getting complicated, ask it to return to the smallest viable solution. When changing code, keep it focused on the task goal. When finishing work, use tests, commands, or explicit checkpoints to verify the result.
AI is already very capable at writing code, but it still needs good collaboration constraints. The fact that a short CLAUDE.md can attract so much attention shows that developers do not only need smarter models. They also need more reliable ways of working.
In short:
- Think before writing to reduce wrong assumptions.
- Keep things simple to avoid over-design.
- Make precise changes to control change scope.
- Work toward goals with verifiable success criteria.
These four rules are not complicated, but they are practical. The prerequisite for AI coding to truly improve efficiency is not making the model write more. It is making it write more accurately, with less code, and under better control.