🥞PancakeJS

The New UI Paradigm

The way humans interact with software is undergoing a fundamental transformation. For decades, the pattern was simple: users navigate to a website, interact with a UI built by developers, and that UI communicates with a backend. Now, AI agents are becoming the primary interface. This changes everything.

The Traditional Model

In conventional web applications, the flow is straightforward:

┌──────────────────────────────────────────────────────────────────┐
│  Traditional Web Application                                      │
│                                                                   │
│    User  ──→  Website UI  ──→  Backend API  ──→  Database        │
│      ↑           │                                                │
│      └───────────┘                                                │
│         (renders response)                                        │
└──────────────────────────────────────────────────────────────────┘

Users learn your interface. They click buttons, fill forms, and navigate menus you've designed. The UI is the primary contract between your service and the user.

The AI-First Model

With AI agents like ChatGPT, Claude, and others becoming primary interfaces, the paradigm shifts dramatically:

┌──────────────────────────────────────────────────────────────────┐
│  AI-First Application                                             │
│                                                                   │
│    User  ──→  AI Agent  ──→  Your App (MCP Server + Widget)      │
│      ↑           │                  │                             │
│      │           │                  ▼                             │
│      │           │            Backend API  ──→  Database          │
│      │           │                  │                             │
│      │           ▼                  │                             │
│      └────  Widget UI  ←────────────┘                             │
│         (rendered in chat)                                        │
└──────────────────────────────────────────────────────────────────┘

The AI agent becomes the orchestrator. It interprets user intent, decides which tools to call, and determines when to surface a UI. Your interface is no longer the entry point. It's a component the AI pulls in contextually.

This isn't about replacing traditional UIs entirely. It's about creating a new class of applications that are AI-native, designed from the ground up to work within conversational contexts.

What Changes for Developers

This shift introduces several fundamental changes in how we think about building applications:

1. Intent Over Navigation

Users don't navigate to features. They express intent. Instead of clicking through "Flights → Search → Enter Details," a user says "Find me flights to Tokyo next week." The AI interprets this and invokes the right tools.

// The AI decides when to call this based on user intent
app.tool({
  name: 'searchFlights',
  description: 'Search for available flights between two cities',
  inputSchema: z.object({
    origin: z.string().describe('Departure city'),
    destination: z.string().describe('Arrival city'),
    date: z.string().describe('Travel date'),
  }),
}, async (input) => {
  const flights = await flightAPI.search(input);
  return renderWidget('flight-results', { data: { flights } });
});

2. Contextual UI Rendering

Your UI appears only when needed. Instead of showing everything at once and letting users find what they need, the AI surfaces exactly the right interface at the right moment.

  Traditional                          AI-First
  ──────────────────────────────────────────────────────────────────────────
  User navigates to dashboard          AI shows relevant data on demand
  Full-page forms with many fields     Focused widgets with just needed inputs
  Static navigation menus              Contextual tool invocation
  User learns your interface           AI learns user intent
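
In practice, a tool can answer in plain text when a sentence is enough and surface a widget only when there is something visual to interact with. A minimal sketch in the style of the searchFlights example above (getOrderStatus, orderAPI, and the plain-text return shape are illustrative assumptions, not confirmed API):

// The same tool answers in text or renders a widget, depending on context
app.tool({
  name: 'getOrderStatus',
  description: 'Look up the current status of an order',
  inputSchema: z.object({
    orderId: z.string().describe('The order to look up'),
  }),
}, async (input) => {
  const order = await orderAPI.get(input.orderId); // illustrative backend client
  if (!order.inTransit) {
    // A simple answer needs no UI; let the AI phrase the response
    return { text: `Order ${order.id} is ${order.status}.` };
  }
  // A shipment in transit benefits from a visual tracker
  return renderWidget('order-tracker', { data: { order } });
});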

3. Bidirectional Communication

In this new model, data flows in both directions between the AI and your widget. The widget can:

  • Receive initial data from tool invocation
  • Send user actions back to the AI
  • Request the AI to call other tools
  • Trigger follow-up messages in the conversation

function FlightBookingWidget() {
  const { data } = useToolInvocation(); // Data from AI
  const callTool = useCallTool('bookFlight'); // Call back to AI
  const sendMessage = useSendMessage(); // Continue conversation

  const handleBook = async (flight) => {
    await callTool({ flightId: flight.id });
    sendMessage('Show me hotel options near the airport');
  };

  return <FlightList flights={data.flights} onBook={handleBook} />;
}

4. Multi-Step Workflows Become Natural

Complex workflows that traditionally required careful UX design (wizard forms, multi-page checkouts, guided onboarding) become natural conversations. The AI maintains context across steps while your widgets handle the visual interactions.
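
For example, each leg of a trip-booking flow can be its own focused tool; the AI chains them as the conversation unfolds, so no step needs to re-ask for information the user already gave. A sketch in the same style as the earlier examples (searchHotels and hotelAPI are illustrative assumptions):

// A sibling tool to searchFlights; the AI carries context (the booked
// flight's destination and dates) from the previous step into this one
app.tool({
  name: 'searchHotels',
  description: 'Search for hotels near a city or airport',
  inputSchema: z.object({
    location: z.string().describe('City or airport to search near'),
    checkIn: z.string().describe('Check-in date'),
    checkOut: z.string().describe('Check-out date'),
  }),
}, async (input) => {
  const hotels = await hotelAPI.search(input); // illustrative backend client
  return renderWidget('hotel-results', { data: { hotels } });
});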

Why This Matters

Massive Distribution

AI assistants have hundreds of millions of users. Building for these platforms means accessing audiences you couldn't reach through traditional app stores.

Lower Friction

Users don't need to find, download, or learn your app. They just ask for what they need and the AI handles the rest.

Contextual Value

Your UI appears exactly when users need it, pre-loaded with relevant data. No more cold starts or empty states.

Compound Workflows

Users can combine multiple apps in a single conversation. Book a flight, then a hotel, then restaurant reservations, all through natural dialogue.

The New Mental Model

Think of your application not as a destination, but as a capability the AI can invoke. Your job shifts from:

  • Building complete experiences  →  Building focused capabilities
  • Designing user flows  →  Designing for AI understanding
  • Creating navigation  →  Creating tool descriptions
  • Handling all states  →  Handling the moment of need

This doesn't mean less work. It means different work. You're optimizing for discoverability by AI, clarity of purpose, and seamless integration into conversational contexts.
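
Tool descriptions carry much of that weight: they are what the AI reads when deciding whether your capability matches the user's intent. A hypothetical before-and-after (both objects are illustrative, not part of any confirmed API):

// Too vague: the AI cannot tell when this tool applies
const vague = { name: 'flights', description: 'Flight tool' };

// Clear purpose, inputs, and output: the AI knows exactly when to invoke it
const clear = {
  name: 'searchFlights',
  description:
    'Search for available flights between two cities on a given date. ' +
    'Returns a list of bookable flights with prices and departure times.',
};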

Where PancakeJS Fits

This paradigm shift brings a challenge: different AI hosts implement these patterns differently. ChatGPT Apps, MCP Apps, and future platforms each have their own APIs, rendering models, and communication protocols.

PancakeJS exists to abstract these differences. You define your tools and widgets once, and the SDK handles:

  • Platform-specific rendering
  • Host-guest communication protocols
  • Capability detection and fallbacks (sketched after the example below)
  • State management across hosts

// This works everywhere: ChatGPT, Claude Desktop, future hosts
app.widget({
  name: 'flight-search',
  description: 'Search and book flights',
  inputSchema: flightSearchSchema,
  ui: { entry: 'src/widgets/flights.tsx' },
}, async (input) => {
  return renderWidget('flight-search', { data: await searchFlights(input) });
});
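
On the widget side, capability detection might look like a hook that reports what the current host supports, so one component can degrade gracefully everywhere. This is a hypothetical sketch: useHostCapabilities and supportsToolCalls are assumptions in the spirit of the hooks shown earlier, not confirmed API:

function FlightSearchWidget() {
  const { data } = useToolInvocation(); // data from the tool invocation
  const caps = useHostCapabilities(); // hypothetical: what this host supports

  return (
    <FlightList
      flights={data.flights}
      // Fall back to a read-only list on hosts without tool callbacks
      interactive={caps.supportsToolCalls}
    />
  );
}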

Next Steps

Now that you understand the paradigm, dive deeper into the technical architecture.
