Opinion

Bubble Just Bolted AI Onto Its Platform — But Is It Enough to Stay Relevant?

Bubble's May 2026 AI Agent update brings multi-turn edits, image uploads, and impressive activation metrics. But is bolting AI onto a legacy no-code platform enough to stay relevant when AI-native tools are rewriting the rules?

Bubble shipped a big AI update this month. Multi-turn edits, image uploads for UI generation, full expression support in workflows, a UX overhaul with plan and success states. They switched to Anthropic's Claude back in March and are reporting a 2x increase in first-week user activation and a 30% bump in satisfaction. Anthropic even featured them in a case study.

On paper, this is impressive work. Bubble's engineering team clearly put serious effort into making their AI Agent feel less like a bolted-on chatbot and more like a collaborator. Credit where it's due.

But here's what I keep coming back to: does any of this change what Bubble fundamentally is?

What is Bubble?

For those who haven't come across it: Bubble is one of the original no-code platforms, launched in 2012. It lets you build web applications visually — dragging and dropping UI elements, configuring workflows through a point-and-click editor, and connecting to databases without writing code. It's powerful, but notoriously complex. Think of it as the Salesforce of no-code: capable of almost anything, but with a learning curve that puts most people off before they ship anything useful.

At its peak, Bubble was *the* platform serious no-code builders chose for complex apps — marketplaces, SaaS products, internal tools. They raised $100M in 2021 on that promise. But the world has moved fast since then.

The metrics need context

A 2x increase in first-week activation sounds great until you ask: activation from what baseline? Bubble has always had a brutal learning curve. Their own community members talk openly about spending months getting comfortable with the editor, the workflow system, privacy rules, responsive design, the API connector. One long-time user on the Bubble forum put it bluntly in March: the AI gap is "compounding, not closing."

If your previous first-week activation was terrible (and for a platform this complex, it probably was), doubling it might still leave you behind competitors where activation is near-instant because there's nothing to learn.

The 30% satisfaction increase tells a similar story. Thirty percent better than frustrated is... less frustrated. It's not the same as delighted. And satisfaction with what, exactly? With the AI Agent specifically? With the overall editing experience? Bubble doesn't break this down, and the distinction matters. A user can be satisfied with the AI's suggestions while still finding the platform overwhelming.

I don't want to be unfair here. These are real improvements for real users. But the question isn't whether Bubble got better. It's whether Bubble got better fast enough.

Squeezed from below: the prompt-and-ship crowd

While Bubble was building multi-turn edit suggestions, a different category of tool was making the whole concept of a visual editor feel optional.

Lovable lets you describe an app in plain English and generates a full-stack working application. Bolt.new runs your entire dev environment in the browser, with chat, code editor, and live preview side by side. v0 by Vercel turns natural language into production-ready React components you can deploy in seconds.

These tools don't have learning curves. They don't have workflow editors or privacy rule configurations or workload unit pricing. You talk to them. They build. You iterate by talking more.

For a surprising number of use cases, that's enough. Internal tools, MVPs, client portals, landing pages with logic, simple marketplaces. The stuff that used to be Bubble's bread and butter. And these tools are improving at a pace that makes quarterly updates look glacial. Lovable and Bolt are shipping weekly, sometimes daily, because they don't have a decade of backwards compatibility to protect.

So who's still choosing to spend weeks learning Bubble's visual programming paradigm when they could describe what they want and have it running in fifteen minutes? The answer: fewer people every month.

*Bubble squeezed from both sides: AI-native platforms pressing from above, prompt-and-ship tools pressing from below.*

Squeezed from above: AI-native platforms

The pressure from below is obvious. The pressure from above is more interesting.

Platforms like Stacker went live with a fully AI-powered builder at the start of this year. Not an AI assistant bolted onto an existing editor. A platform where the primary interface *is* the conversation. You tell it who your portal is for and what they need to do. It asks clarifying questions. It builds. You steer.

The difference is architectural, not cosmetic. When AI is the foundation rather than a feature, the entire product can be designed around that interaction model. There's no legacy editor to maintain backwards compatibility with. No decade of visual programming abstractions to work around. The AI doesn't need to translate your intent into Bubble's proprietary workflow language because there is no proprietary workflow language to translate into.

Bubble's AI Agent, by contrast, has to operate within the constraints of a platform designed in 2012. It can suggest next steps after making an edit, sure. But it's still making edits to a visual editor that was built for humans to click through manually. The AI is a layer on top. Not the thing itself.

This isn't a knock on Bubble's engineers. It's a structural problem. You can't retrofit a fundamentally different interaction paradigm onto a product without eventually hitting the ceiling of what the original architecture allows.

The mobile-first analogy writes itself

We've seen this pattern before. When mobile became the dominant computing platform, companies split into two groups: those that built mobile-first products, and those that made their desktop products responsive.

The responsive crowd did fine for a while. Their apps worked on phones. They checked the box. But over time, the mobile-first products won because every decision, from navigation to data loading to interaction patterns, was optimised for the actual context of use. The adapted products always felt slightly wrong. Slightly too complex. Slightly too slow.

Bubble is building a responsive app in a mobile-first world. Their AI features work. They're getting better. But the underlying product was designed for a different era of interaction, and no amount of Claude integration changes that foundation.

What Bubble still has (and whether it matters)

I'll be honest about what Bubble does well. For complex applications with sophisticated data models, custom logic, and specific business rules, Bubble still offers a level of control that prompt-based generators can't match. If you need granular permissions, complex API integrations, and multi-step workflows with conditional branching, Bubble's depth is real.

The problem is that this depth is also its weight. Every feature that makes Bubble powerful for complex apps is a feature that makes it harder for AI to abstract away. The AI Agent can help you build a form or wire up a simple workflow, but it can't (yet) architect a complex multi-tenant SaaS with role-based access control from a single prompt. And if it could, you'd still be locked into Bubble's hosting, Bubble's pricing tiers, Bubble's workload units.

The users who need that complexity are a real market. But it's a shrinking one, because the definition of "too complex for AI" keeps moving. Six months ago, generating a working backend from a prompt felt like a party trick. Now it's table stakes.

The takeaway

Bubble's AI update is good engineering solving the wrong problem. The problem isn't "how do we make our existing platform easier to use with AI assistance." The problem is that the market is splitting, and Bubble is on the wrong side of the split.

On one side: platforms where AI is the interface, where you describe outcomes and the system figures out implementation. On the other: legacy builders adding AI features to products designed before AI was a factor.

I've watched enough technology transitions to know how this plays out. The bolt-on approach buys time. It keeps existing users happy. It makes for good case studies and impressive-sounding metrics. But it doesn't change the trajectory.

If you're starting a new project today, I'd pick an AI-native platform or a prompt-based generator over Bubble. If you're already deep into a Bubble app, these AI improvements will make your life better, and that's worth something. But I wouldn't start a new long-term bet on a platform that's playing catch-up with the paradigm shift rather than leading it.

Bubble built something impressive for its era. This AI update proves they're still capable of shipping. The question is whether shipping faster on the old model is enough when the model itself is changing. I don't think it is.
