
The Canvas, Not the Form: Rethinking AI Interfaces

What a sales exec learned building his own AI forecasting tool

I run enterprise sales for a living. But I've been coding as a hobby since I was a teenager—databases, websites, the occasional mobile app. AI tools reignited that spark, and I started building things again.

One of those things: an AI-assisted forecasting tool for my own GAM pipeline—runs locally, data anonymized before any API calls, the whole proper setup.

And at some point I noticed something: the biggest risk in my pipeline was buried in the "Risks" section alongside five minor concerns. Same font. Same box. Same visual weight.

The AI knew it was a problem. I could see it in the data. But my beautiful template treated every risk the same.

That's when I realized: I'd built a form when I should have built a canvas.

The Template Trap

Google's Generative UI research articulated what I was missing: "explaining the microbiome to a 5 year old requires different content and a different set of features than explaining it to an adult."

The same applies to data. A healthy pipeline needs different presentation than a pipeline in crisis. The AI sees the data. The AI should decide the form.

Google calls this Generative UI. Salesforce calls it Generative Canvas. The idea is simple: instead of making AI fill slots in a fixed layout, you give it a vocabulary of components and let it decide how to compose them.

Vocabulary, Not Grammar

The solution isn't arbitrary interface generation. That way lies inconsistency and security nightmares. Instead: give the AI a vocabulary of components and let it compose.

Think of it as the difference between asking someone to write a sentence (grammar constrains, but expression is free) and asking them to fill in a form (slots constrain, and expression is limited).

My vocabulary includes structural components, emphasis signals, and domain-specific elements. The AI composes these based on what the data demands. Three blocked deals? Lead with them, expanded, highlighted. Strong coverage? Summary with confident indicators. Mixed signals? Side-by-side comparison.

The styling is mine. The structure is mine. The composition is the AI's.
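To make the idea concrete, here is a minimal sketch of what a component vocabulary might look like as a typed union, with the renderer owning all presentation. The block names, fields, and text markers below are my invention for illustration, not the author's actual types.

```typescript
// Hypothetical component vocabulary: a closed set of block types the AI
// may compose. The model picks blocks, order, and emphasis; styling and
// rendering stay in application code.
type Block =
  | { kind: "summary"; text: string; tone: "confident" | "cautious" }
  | { kind: "risk"; title: string; severity: "minor" | "major"; expanded: boolean }
  | { kind: "comparison"; left: string; right: string };

// Render a composition (the ordered block list returned by the model).
// Plain strings stand in for real UI components here.
function render(blocks: Block[]): string {
  return blocks
    .map((b) => {
      switch (b.kind) {
        case "summary":
          return `[SUMMARY ${b.tone}] ${b.text}`;
        case "risk":
          // A major risk is visually escalated instead of sharing equal
          // weight with minor concerns.
          return `${b.severity === "major" ? "!! " : ""}[RISK] ${b.title}${b.expanded ? " (expanded)" : ""}`;
        case "comparison":
          return `[COMPARE] ${b.left} vs ${b.right}`;
      }
    })
    .join("\n");
}
```

Because the union is closed, an AI response that names an unknown block fails type-driven validation before it ever reaches the renderer.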

What I Learned

Letting AI control form sounds simple. Rebuilding my tool this way took experimentation—and reminded me why I stopped coding for years before AI made it fun again.

The AI invents things. I documented the data-path syntax carefully, yet the API kept returning non-compliant syntax that broke the renderer. Explicit examples of wrong patterns helped more than documentation of right ones.
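One way to stop invented syntax from reaching the renderer is to validate every path before rendering. The post doesn't show the tool's actual path grammar, so the sketch below assumes a hypothetical one: dot-separated identifiers with optional numeric indices, like "deals.0.amount".

```typescript
// Hypothetical data-path grammar: identifier segments separated by dots,
// where a segment may also be a numeric array index.
const DATA_PATH = /^[a-zA-Z_][a-zA-Z0-9_]*(\.(\d+|[a-zA-Z_][a-zA-Z0-9_]*))*$/;

// Partition AI-supplied paths into compliant and rejected, so a single
// invented pattern (e.g. bracket indexing) can't break the renderer.
function validatePaths(paths: string[]): { ok: string[]; rejected: string[] } {
  const ok: string[] = [];
  const rejected: string[] = [];
  for (const p of paths) (DATA_PATH.test(p) ? ok : rejected).push(p);
  return { ok, rejected };
}
```

The rejected list doubles as feedback: those concrete wrong examples can be fed back into the prompt as "never do this" patterns.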

Editorial choices can be wrong. Early versions buried important information instead of surfacing it. I went through nine prompt versions before things stabilized. User override is still on the roadmap.

Debugging gets harder. When every response has different structure, you can't diff against expected output. I store full prompts with responses. Investigating issues means understanding what the AI saw and why it chose what it chose.
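Storing the full prompt alongside each response is simple to sketch. The file layout and field names below are my assumptions, not the author's actual scheme.

```typescript
import * as fs from "fs";
import * as path from "path";

// One logged exchange: everything the AI saw (prompt) and everything it
// chose (response), so layout decisions can be investigated after the fact.
interface Exchange {
  timestamp: string;
  prompt: string;
  response: string;
}

// Write the exchange as a timestamped JSON file and return its path.
function logExchange(dir: string, prompt: string, response: string): string {
  const entry: Exchange = { timestamp: new Date().toISOString(), prompt, response };
  fs.mkdirSync(dir, { recursive: true });
  const file = path.join(dir, `${Date.now()}.json`);
  fs.writeFileSync(file, JSON.stringify(entry, null, 2));
  return file;
}
```

With no fixed expected output to diff against, this kind of log is the only reliable way to reconstruct why the model composed a given layout.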

Vocabulary design is harder than template design. Adding a new component means updating types, rendering logic, documentation, and prompts. The barrier to change is higher. This is probably good: it forces intentionality.

The Trade I Made

Templates are predictable. You know exactly what you're getting. Testing is straightforward. Debugging is simple.

Component vocabulary is adaptive. The AI can emphasize what matters. But testing is harder, debugging requires more context, and prompt engineering becomes a critical skill.

I made the trade because predictability was costing me value. The AI had insights the template wouldn't let it express.

This is a personal tool, not a product. I don't have systematic data on whether the adaptive layouts are objectively better—I built it to scratch my own itch. What I can say: I notice things I missed before. For my purposes, that's enough. If you're building something similar at scale, instrument it properly.

Beyond Fill-in-the-Blank

Not every interface needs this. Chat is fine as a chat. Search results work as a list. But data analysis—synthesizing complex information into something actionable—needs a canvas, not a form.

Salesforce is going the same direction with Generative Canvas in Agentforce. Dynamically generated layouts, grounded in your org's Lightning components, respecting your data permissions. Same philosophy, enterprise trust built in.

The shift from templates to component vocabulary reflects a broader principle: stop treating AI as a function that maps inputs to outputs and start treating it as a collaborator that can make judgment calls within constraints.

I define the vocabulary. I control the styling. I set the boundaries. Within those boundaries, the AI has agency to communicate effectively.

Once it worked, I started noticing things I'd missed before. That felt like progress.


For implementation details, failure modes, and code examples, see the companion piece: From Templates to Component Vocabulary