I’ve made this point before, but let me reiterate it: Software has always advanced by raising the level of abstraction. We moved from assembly to high-level languages, from handwritten SQL to ORMs, and from managing servers to calling cloud APIs. Each step made creation faster and shifted the hard work elsewhere.
Andrej Karpathy captured the latest leap neatly when he wrote that “the hottest new programming language is English.” Now that we no longer have to tell computers what to do line by line, the level of abstraction has risen to a point where not only the tools have changed, but also the people who use them.
Dan Shapiro recently framed this evolution as a spectrum—from AI as “spicy autocomplete” (Level 0) all the way to fully autonomous software factories (Level 5). What’s striking about his argument isn’t the endpoints, but the middle. He suggests that for the foreseeable future, most real teams will live in Levels 3 and 4: systems where AI can generate substantial portions of software, but humans still decide what actually runs.
I agree, and think it shifts the core problem. If humans remain in the approval loop—but AI is doing most of the writing—the question stops being “Can we build faster?” and becomes something more fundamental: “Can we clearly see what we’re approving?”
This is why recent debates about the future of no-code feel so timely. Some argue that no-code’s moment is fading—that if AI can generate production-ready systems directly from natural language, visual builders are simply an unnecessary middle layer. My good friend JJ Englert’s recent article reflects that growing sentiment: when prompting becomes the fastest path from idea to implementation, why bother assembling flows in a UI at all? If we really are moving into a world where humans approve AI-generated systems rather than authoring every line themselves—as Shapiro’s middle levels suggest—that question becomes unavoidable.
In this post, I’d like to present a different way of looking at the value of “no-code” that takes Shapiro’s point into account. Put simply, JJ’s perspective rests on one important assumption: that the primary role of visual development is to let people build software without knowing how to write code. Maybe that used to be true. But what if the role of visual development has changed? What if it’s no longer about creation, but about validation?
What I mean by visual validation
When I talk about visual validation, I’m not talking about decorative diagrams or dashboards layered on top of opaque systems. I mean something much more concrete: the ability to inspect how business rules are applied, how data moves through a system, how permissions are enforced, where decisions branch, and which services ultimately hold authority over an outcome—all through representations that humans can grasp at a glance.
Traditional software practices gave us guardrails for human-written systems: code reviews, automated tests, staging environments, deployment pipelines. Those remain essential. But as AI begins to generate large portions of application logic, they become insufficient on their own. Models can produce interconnected systems far faster than any person can read line by line, creating a new kind of risk—not just syntax errors, but structural opacity. Systems that technically work, yet violate business policy, compliance requirements, or product intent.
Visual validation is the interface between AI-authored logic and human judgment. In a Level 3 or Level 4 world—where humans still govern what runs, but no longer hand-write every path—that interface becomes the deciding factor between informed oversight and blind sign-off. It exposes structure. It makes decision paths explicit. It lets teams ask not only “Does this run?” but “Is this how we want our company to operate?”
No-code, reframed
JJ Englert is a friend and fellow podcaster (you can check out our chats on Xano’s Futureproof podcast and JJ’s AI + No-Code podcast), and I don’t disagree with his point that many of no-code’s original advantages—speed, accessibility, abstraction—have been overtaken by AI’s ability to generate raw code directly.
The diagnosis is right, but incomplete. What’s fading isn’t the need for visual systems. It’s the old premise that their primary purpose was to help people who couldn’t program. In a world where AI can draft schemas, APIs, and services on demand, the scarce resource isn’t the ability to write code; it’s confidence in the code that’s written. Most teams won’t jump straight to fully autonomous software factories. They’ll live in the messy middle: approving, editing, constraining, and shipping systems largely written by machines.
The emerging role of no-code is to make distributed, AI-authored business logic governable. A visual layer provides a way to inspect where rules live, how decisions propagate, which policies are being enforced, and whether those behaviors align with organizational intent.
Why visual validation becomes non-negotiable
AI increases the surface area for failure, and it often does so subtly. Research has already shown that machine-generated code frequently introduces bugs and security weaknesses when accepted uncritically, which mirrors what many teams are seeing in practice. Models excel at producing plausible-looking logic, and plausibility is precisely what makes errors harder to detect with traditional review processes.
That’s why visual validation becomes non-negotiable. In Levels 3 and 4 of AI-driven development, the most dangerous failure mode isn’t runaway autonomy. It’s humans approving systems they don’t fully understand. When teams can see how rules are composed, where data is transformed, and how control flows across services, they gain a new form of early-stage governance. They can reason about blast radius, compliance boundaries, and edge cases before those issues reach production. They can move fast without relying on blind trust. In the future, authority is going to increasingly live in the backend—and visual validation is how humans are going to exercise that authority.
The builder’s role is shifting
All of this adds up to a subtle but profound change in what it means to build software. Historically, builders spent much of their time translating intent into code. Increasingly, that translation is handled by machines. The human role is moving toward defining goals and constraints—and then validating, governing, and understanding what emerges.
Visual validation is becoming the connective tissue between generation and production, between experimentation and enterprise control, between velocity and responsibility. It is how teams maintain a single, intelligible picture of system behavior even as AI floods the stack with new logic.
No-code, in the traditional sense of the term, isn’t dead. But it has evolved. If the first era of no-code was about making software possible for more people, the next era is about making AI-authored software safe for organizations.
Like this take on the future of software development in the AI era? Get the latest posts straight in your inbox by subscribing to the Futureproof newsletter on LinkedIn.