Generative UI: how AI agents are building their own interface
More and more AI agents decide for themselves when to render buttons, forms or status messages. What is generative UI, why is it breaking through now and what does it mean for your customer conversations?
Author: Heyloha Team
The new generation of AI agents is taking the UI back from designers
Until recently, every chat interface was designed up front: a designer decided which buttons appeared when, a developer built the flow, and the visitor followed it. The agent itself was nothing more than a text box with a blue send button.
That is changing. With generative UI, the AI agent decides for itself, situation by situation, whether a button, a form or a status message best keeps the conversation moving. No pre-built menus, no flow builder, just the right UI component at the right moment.
What is generative UI exactly?
Generative UI ('gen UI') is a design principle where the user interface is assembled on demand by an AI model instead of being defined in advance. The moment the model asks a question or uses a tool, it picks the UI component to send along: choice buttons, a form, a table, a chart or a status label.
In practice it works like this: alongside text output, the agent has a set of components available. For a question with four possible answers, it serves four buttons. For a request to qualify a lead, it renders a form with the right fields. For a tool call that takes a few seconds, it shows in real time what it is doing.
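The pattern can be sketched in a few lines. This is a hypothetical, simplified model, not any specific SDK's API: the agent's reply carries an optional UI spec alongside its text, and the chat client dispatches on the component kind the model chose. All type and field names here are invented for illustration.

```typescript
// Hypothetical sketch: the agent's reply carries an optional UI spec
// alongside text. Names and fields are illustrative, not a real SDK's API.
type UIComponent =
  | { kind: "buttons"; options: string[] }
  | { kind: "form"; fields: { name: string; required: boolean }[] }
  | { kind: "status"; label: string };

interface AgentMessage {
  text: string;
  ui?: UIComponent;
}

// The renderer dispatches on whichever component the model chose.
function render(msg: AgentMessage): string {
  if (!msg.ui) return msg.text;
  switch (msg.ui.kind) {
    case "buttons":
      return `${msg.text}\n[${msg.ui.options.join("] [")}]`;
    case "form":
      return `${msg.text}\n<form: ${msg.ui.fields.map((f) => f.name).join(", ")}>`;
    case "status":
      return `⏳ ${msg.ui.label}`;
  }
}

const reply: AgentMessage = {
  text: "What kind of property is it?",
  ui: { kind: "buttons", options: ["Buy", "Rent", "Investment", "Other"] },
};
console.log(render(reply));
```

The key point is the last field: the model emits the component choice as structured output, so the same renderer handles a four-button question, a lead form or a status label without any pre-scripted flow.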
The underlying shift is fundamental. The UI is no longer a static script but an output of the model.
Why this is breaking through now
Generative UI is taking off now for three reasons.
First, frontier models (like Claude 4 and GPT-5) have become reliable enough at tool use to consistently pick the right component. A year ago, models would still render a form at random moments, or not at all; now they pick the UI that fits the context.
Second, developer tooling has matured. The Vercel AI SDK, OpenAI's Canvas, Anthropic Artifacts and the tool-calling frameworks from LangChain and LlamaIndex make building generative UI realistic for product teams.
Third, user expectations have shifted. ChatGPT and Claude have shown rich content (images, code blocks, charts) inside chat for years. End users now expect a chatbot to do more than text, and they abandon interfaces that offer nothing but a typing box.
Examples in the wild
OpenAI Canvas opens a second pane where code or text is edited side by side, instead of pasting everything into one long chat thread.
Anthropic Artifacts renders standalone documents, presentations and mini-apps generated by the model, with live preview.
Vercel v0 writes React components from a prompt and shows them directly as a usable interface, not as code.
In customer conversations the same pattern is taking off. More and more AI agents in healthcare, finance and B2B services render forms inside the chat, present buttons for booking appointments and show real-time status of tool calls.
What this means for your customer conversations
For businesses that work with customer contact, generative UI brings three concrete benefits.
Lower drop-off. Tapping a button is faster and less work than typing a sentence. For multi-field lead capture, visitors prefer filling in a compact form to answering question by question.
Cleaner data. Buttons produce standardised answers. Required form fields prevent typos in email addresses and missing phone numbers. The data that lands in your CRM is usable straight away.
More transparency during waits. Tool calls that take a few seconds used to feel like a black box. With a real-time status label ('Checking availability...') the visitor knows what they are waiting for and drops off less often.
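Mechanically, this transparency is cheap to add. A minimal sketch, with invented names and a stand-in for the real lookup: wrap the slow tool call so a status label reaches the UI before the work starts.

```typescript
// Minimal sketch (invented names): emit a status label to the chat UI
// before a slow tool call runs, instead of leaving the visitor waiting.
type Emit = (label: string) => void;

function withStatus<T>(label: string, emit: Emit, toolCall: () => T): T {
  emit(label);       // the chat UI renders this immediately
  return toolCall(); // the actual lookup runs behind the label
}

// Usage, with a stand-in for a real calendar lookup:
const shown: string[] = [];
const slots = withStatus(
  "Checking availability...",
  (l) => shown.push(l),
  () => ["10:00", "14:30"] // hypothetical result of the tool call
);
console.log(shown[0], slots.join(", "));
```

In a production chat widget the emit callback would stream the label to the visitor over the existing connection; the point is simply that the label is sent before, not after, the wait.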
How Heyloha applies it
We have built generative UI into Heyloha's AI chatbot in three ways.
Reply buttons appear when the agent asks a question with a finite set of answers. For example 'What kind of property is it?' (Buy, Rent, Investment or Other), or available time slots from a connected Microsoft Calendar.
Forms inside the chat appear when several fields are needed at once: lead capture, callback requests, valuation requests for an estate agency. The visitor fills it all in at once, no redirect, no pop-up, in your branding.
A live action indicator shows in real time what the agent is doing: checking a postcode, calculating a valuation, scheduling an appointment. No five-second black box.
The agent decides on its own when to show a button, a form or an indicator. No scripts, no flow builder, just the right UI component for the situation.
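The "cleaner data" benefit follows directly from forms like these. A hedged sketch of the idea, with a hypothetical callback-request form (field names and patterns are illustrative): the form spec doubles as a validator, so only complete, well-formed records reach the CRM.

```typescript
// Hypothetical sketch: an in-chat form spec that doubles as a validator,
// so incomplete or malformed submissions never reach the CRM.
interface Field {
  name: string;
  required: boolean;
  pattern?: RegExp;
}

// Illustrative callback-request form, not Heyloha's actual schema.
const callbackForm: Field[] = [
  { name: "name", required: true },
  { name: "email", required: true, pattern: /^[^@\s]+@[^@\s]+\.[^@\s]+$/ },
  { name: "phone", required: true, pattern: /^[0-9+ ]{6,}$/ },
];

function validate(form: Field[], values: Record<string, string>): string[] {
  const errors: string[] = [];
  for (const f of form) {
    const v = values[f.name] ?? "";
    if (f.required && v === "") errors.push(`${f.name} is required`);
    else if (f.pattern && !f.pattern.test(v)) errors.push(`${f.name} is invalid`);
  }
  return errors;
}
```

A submission with a typo'd email or a missing phone number is bounced back into the conversation for correction, which is exactly why button and form answers arrive in the CRM usable straight away.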
The next step
Generative UI is not a visual upgrade of existing chatbots. It is a shift in how interfaces come into being: from defined-in-advance to generated-in-context. For customer conversations it means fewer drop-offs, cleaner data and conversations that actually keep moving.
Want to see it for yourself? Try Heyloha free for 14 days on your own website, or watch the live demo and see the agent render its own UI during a conversation.