Peter Steinberger’s career is a useful lens for understanding what is changing in AI software development.
He is not a newcomer who suddenly became visible because of AI. Before OpenClaw, he was already the founder of PSPDFKit, a company focused on PDF rendering, document processing, and developer tools. Products like that cannot succeed on clever positioning alone. They have to deal with performance, compatibility, API design, enterprise customers, and long-term maintenance.
So when Steinberger later built OpenClaw with AI tools and shared views around AI agents, personal automation, and AI coding, the point was not simply that “one person wrote a lot of code.” The more interesting part is how he combined years of software engineering experience with a new generation of AI coding agents and rethought the development process.
AI coding is not a magic button
Discussions about AI coding often fall into two extremes.
One side says AI can already write code, so programmers are almost obsolete.
The other side says AI-generated code is unreliable, so real engineering still has to be hand-written by people.
Steinberger’s experience points to a third view: AI changes the unit of operation in software development, but it does not remove engineering judgment.
In the past, a developer's work centered on editing code: requirements breakdown, architecture decisions, implementation, testing, and bug fixing all revolved around manual code changes.
Once AI coding agents enter the workflow, developers increasingly manage an execution system:
- Explain the goal.
- Provide context.
- Set boundaries.
- Let the agent modify code.
- Run tests and checks.
- Iterate based on results.
This is not simply handing the keyboard to a model. It is moving humans from “typing every line” toward “defining direction, designing feedback, and judging results.”
Why he is skeptical of calling it vibe coding
One phrase that often appears around Steinberger is vibe coding.
The term originally described a new style of development: developers describe ideas in natural language, let AI generate large amounts of code, then keep adjusting based on runtime results and feedback.
But Steinberger is not entirely sold on the phrase. Public coverage has noted that he sees vibe coding as potentially dismissive, implying that AI-assisted development is just “generating by feel” while ignoring the skill, judgment, and experience behind it.
That criticism makes sense.
Effective AI coding is not about typing a casual sentence and trusting the model’s output. It requires:
- Breaking vague requirements into executable tasks.
- Detecting when the model misunderstands the goal.
- Designing tests and acceptance criteria.
- Judging whether the code structure will remain maintainable.
- Knowing when to stop generating and switch to human review.
In other words, AI reduces the friction of writing code, but it does not reduce the responsibility of understanding the system.
The loop is the key
One idea often associated with Steinberger’s interviews and writing is the importance of the loop.
Letting AI generate code is open-loop.
Letting AI generate code, run it, read errors, fix problems, and run tests again is closer to closed-loop development.
That difference matters.
Open-loop generation easily creates software that looks usable on the surface. The page opens, features appear to exist, and there is plenty of code. But once it enters a real environment, problems with state management, permissions, exception handling, edge cases, and deployment quickly appear.
Closed-loop development means output must be constrained by feedback. The simplest loop is:
- Write down the goal clearly.
- Let AI modify the code.
- Automatically run tests, type checks, lint, or a build.
- Feed errors back to AI.
- Repeat until it passes.
- Let a human review the critical path.
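The loop above can be sketched in a few lines of Python. Everything here is illustrative: `agent_edit` stands in for whatever coding agent is in use, and the validation command is whatever the project already runs.

```python
import subprocess

def run_checks(cmd: list[str]) -> tuple[bool, str]:
    """Run a validation command (tests, lint, build) and capture its output."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def closed_loop(goal: str, agent_edit, check_cmd: list[str], max_rounds: int = 5) -> bool:
    """Let the agent edit, validate, and feed errors back until checks pass.

    `agent_edit` is a hypothetical callable that modifies the codebase
    given the goal and the latest validation feedback.
    """
    feedback = ""
    for _ in range(max_rounds):
        agent_edit(goal, feedback)            # the agent changes code
        passed, output = run_checks(check_cmd)
        if passed:
            return True                       # hand off to human review of the critical path
        feedback = output                     # errors become the next round's context
    return False                              # loop exhausted; escalate to a human
```

The key design point is that the agent never gets a second turn without seeing the validation output from its first.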
This is where AI software development can truly improve efficiency. Not because the model gets everything right the first time, but because it can participate quickly in a cycle of generation, validation, and repair.
More experience makes AI more useful
One of the easiest misconceptions about AI coding is that experience no longer matters.
Steinberger’s case suggests the opposite: experience becomes more important, but its role changes.
An experienced engineer is better at deciding:
- Which tasks are suitable for an agent.
- Which modules need tests first.
- Which changes are too risky for broad AI refactoring.
- Which generated code merely looks plausible.
- Which problems should be solved through architecture rather than more patches.
AI can generate many candidate solutions. The more candidates you have, the more judgment you need. An inexperienced person may be impressed by “it runs.” An experienced engineer asks: can it be maintained? Can it scale? Does it break a security boundary? Can we debug it when something goes wrong?
That is why AI coding agents do not turn software engineering into pure chat. They outsource part of the execution work while amplifying planning, review, validation, and trade-off decisions.
OpenClaw matters beyond the project itself
OpenClaw did not draw attention only because it is an open-source AI agent, or only because it grew quickly.
It is also a signal: developers increasingly want AI to do more than answer questions. They want it to connect to real tools and perform real actions.
Traditional chatbots stay inside the chat box. They can explain code, write drafts, and give advice, but people still need to copy, paste, open software, and run commands.
The agent direction connects models to tools:
- File systems.
- Browsers.
- Terminals.
- Email.
- Calendars.
- Third-party services.
- Project repositories.
Once models can use those tools, the boundaries of software development shift. AI is no longer just code completion. It can participate in project reading, task decomposition, file editing, test execution, PR preparation, and workflow automation.
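In practice, "connecting a model to tools" often reduces to routing a requested action to an ordinary function. The registry and tool names below are invented for this sketch, not any particular agent's API:

```python
import subprocess
from pathlib import Path

def read_file(path: str) -> str:
    """Tool: return the contents of a file."""
    return Path(path).read_text()

def run_command(cmd: str) -> str:
    """Tool: execute a shell command and return its combined output."""
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

# Hypothetical registry the model's tool calls are routed through.
TOOLS = {"read_file": read_file, "run_command": run_command}

def dispatch(tool_name: str, **kwargs) -> str:
    """Route a model-requested action to a concrete tool, rejecting unknown names."""
    if tool_name not in TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)
```

Real agent frameworks add schemas, sandboxing, and streaming on top, but the shift in capability comes from exactly this step: the model's text output now triggers actions.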
That is also why Steinberger’s move to OpenAI drew attention. He represents not just a single developer story, but a product direction: personal agents moving from demos into everyday work.
What this means for ordinary developers
For ordinary developers, Steinberger’s experience is not something everyone can copy directly.
Not everyone can manage multiple agents at once. Not every project is suited to heavy AI generation. Not every team accepts a workflow of “generate first, iterate quickly.”
But several lessons are useful.
First, write tasks clearly.
AI is sensitive to vague goals. If you say “optimize this,” it may change style, structure, features, and logic. If you say “change the login failure message from English to Chinese without altering the authentication flow,” the result is usually more controllable.
Second, standardize validation commands.
If a project has no tests, no build command, and no lint, AI has trouble forming a loop. Even basic commands like `npm test`, `go test ./...`, `pytest`, or `hugo` are better than relying only on visual inspection.
Third, control the scope of changes.
Having AI handle one module, one bug, or one page at a time is usually more reliable than asking it to “refactor the whole project.”
Fourth, keep human review.
For authentication, payments, permissions, data deletion, deployment scripts, database migrations, and security configuration, do not lower the review bar just because the code was generated by AI.
Fifth, review prompts and failure patterns.
If AI often misunderstands a certain type of task, write those constraints into project rules, agent instructions, or skill files. AI coding capability comes not only from the model, but also from the work environment you build around it.
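As a hypothetical example, such constraints might live in a short rules file the agent reads before working. The file name and every rule below are illustrative, not taken from any real project:

```markdown
# Project rules for coding agents (excerpt)

- Run `npm test` after every change; never leave tests failing.
- Do not modify anything under `auth/` without asking for human review.
- User-facing error messages live in `locales/`; edit those files, not inline strings.
- Prefer small, single-module changes over cross-cutting refactors.
```

The point is not the specific format but that recurring misunderstandings get written down once instead of being re-explained in every prompt.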
Where AI software development is going
Steinberger’s story suggests that AI software development is moving from “helping write code” toward “organizing software production workflows.”
Early AI coding tools were mainly useful for function completion, error explanation, and template generation. The shift now is that agents can work across files, call tools, run checks, and continue fixing based on feedback.
This points to several trends.
First, the productivity ceiling for individual developers will rise.
One person can push more prototypes, scripts, internal tools, and small products. But higher output does not automatically mean higher quality. The faster code is generated, the more validation matters.
Second, project structure becomes more important.
The clearer the code, tests, and documentation, the easier it is for AI to make correct changes. Messy projects are hard for humans and hard for AI.
Third, software engineers will look more like workflow designers.
In the future, what matters will not only be whether someone knows a programming language, but whether they can organize requirements, context, tools, tests, deployment, and permissions into a controlled loop.
Fourth, security boundaries become more sensitive.
If an agent can do things, it can also do the wrong things. If it can read files, run commands, and access services, then permissions, audit, and rollback become infrastructure for AI development environments.
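One minimal form of such a boundary is a command allowlist with an audit trail. The allowed command set and the log format here are invented for the sketch:

```python
import shlex

# Hypothetical allowlist: programs an agent may run without extra approval.
ALLOWED_COMMANDS = {"pytest", "npm", "go", "git"}

audit_log: list[str] = []

def guarded_run(command: str) -> bool:
    """Check an agent-proposed command against the allowlist and record the decision."""
    program = shlex.split(command)[0]
    allowed = program in ALLOWED_COMMANDS
    audit_log.append(f"{'ALLOW' if allowed else 'DENY'}: {command}")
    return allowed
```

A production setup would also sandbox execution and support rollback, but even this much turns "the agent can run commands" into something reviewable after the fact.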
Summary
The most valuable part of Peter Steinberger’s view of AI software development is not how much code AI generated. It is the development posture he demonstrates.
Humans are no longer only typing line by line inside an editor. They are designing goals, managing agents, building feedback loops, reviewing results, and adjusting the system. Code remains important, but it is no longer the only center of labor.
If traditional software development emphasized “writing the code correctly,” AI software development increasingly emphasizes “making the system continuously produce verifiably correct results.”
This is not just about lowering the engineering barrier. It changes the shape of engineering ability: from manual implementation toward task decomposition, context management, tool orchestration, automated validation, and final judgment.