The Chatform

I built an AI interviewer for Play Into Being this week. It's a tool that guides creative founders through a structured conversation about their relationship with their project — where they are with it, how they're relating to the work, what's stuck, what's at stake. At the end, it generates a report and sends it to our team.

It's not a chatbot. It's not a form. It's something in between that I've been calling a chatform — a conversational interface with a bounded purpose, a specific set of questions, and a structured output. It knows what it's trying to learn. It asks one question at a time. It follows the thread when something interesting surfaces. And when it's done, it wraps up and hands you a summary.

I think this pattern has legs beyond what I built it for.

Why not a form?

Forms are efficient but dead. You fill in boxes. The boxes don't respond to what you said in the previous box. A form can ask "How are you feeling about your project?" but it can't follow up with "Say more about that" when you write something surprising.

For intake, this matters. The most useful thing a new client can give you isn't a list of facts — it's the texture of how they talk about their work. Where they light up. Where they go vague. What they avoid. A form can't capture that. A conversation can.

Why not a chatbot?

Most chatbots are open-ended. They're waiting for you to drive. "How can I help you today?" is the wrong energy for an intake. You don't want the person figuring out what to say — you want to lead them through something.

A chatform is bounded. It has a job. It has a question structure. It has a personality tuned to the context. And it has an exit — when the work is done, it stops. No infinite loop. No "Is there anything else I can help you with?"

What I built

Project Relationship Assessment — a conversational intake tool for creative founders joining our residency or hiring us for production support.

The flow is simple. You land on a page that explains what's about to happen. You tap a button. An interviewer greets you and starts asking questions. You can type or speak — there's a mic button that opens a recording bar with a live waveform, and your voice gets transcribed and sent as text. The interviewer works through four sections: where the project is, how you're relating to it, what's at the edge, and who's supporting you. It follows threads, presses gently on surface answers, and wraps up when the conversation feels complete. Then you get a structured report — project overview, relationship quality, key quotes, edges and tensions, patterns observed.

The whole thing is shaped by a single file called soul.md. That's the system prompt — it defines the interviewer's voice, pacing, question structure, and behavioral rules. Want the interviewer to be warmer? Edit the file. Want it to push harder on avoidance? Edit the file. Want to use the same architecture for a completely different intake? Write a new soul.

The pattern

Here's what makes a chatform a chatform:

Bounded purpose. It knows what it's trying to learn and when it's done. The conversation has an arc — beginning, middle, end. Not a help desk.

A soul file. The personality, pacing, and question structure live in one editable file outside the codebase. You can tune the interviewer without touching code. This is the most leveraged piece of the whole thing.

One question at a time. Don't stack. Let each answer breathe. This is what makes it feel like a conversation instead of a survey.

Follow the thread. If someone says something alive, go there before moving to the next scripted question. The structure is a guide, not a rail.

Structured output. The conversation is freeform but the output is formatted. You get a report you can actually use — not a raw transcript you have to wade through.

Voice as a first-class input. People say different things when they speak than when they type. Richer, less guarded, more alive. Making voice easy (not just possible) changes what you get back.

Under the hood

For the builders reading this:

The interviewer runs on Claude Sonnet via the Anthropic API. Each turn sends the full conversation history with the soul.md as the system prompt. The model decides when the interview is complete and signals it with a tag in its response.
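Here's roughly what one turn looks like. This is a sketch, not the production code: the completion tag name, the model id, and the file layout are assumptions on my part.

```typescript
// One interview turn, sketched as a raw call to the Anthropic Messages API.
// COMPLETE_TAG and the model id are illustrative assumptions.
import { readFileSync } from "fs";

const COMPLETE_TAG = "[INTERVIEW_COMPLETE]"; // hypothetical sentinel

type Turn = { role: "user" | "assistant"; content: string };

// The model appends the tag when it decides the interview is done;
// strip it before showing the reply to the user.
function splitCompletionTag(reply: string): { done: boolean; text: string } {
  const done = reply.includes(COMPLETE_TAG);
  return { done, text: reply.replace(COMPLETE_TAG, "").trim() };
}

async function nextTurn(history: Turn[]): Promise<{ done: boolean; text: string }> {
  const soul = readFileSync("soul.md", "utf8"); // the entire personality lives here
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-5", // assumed model id
      max_tokens: 1024,
      system: soul,               // soul.md is the system prompt, verbatim
      messages: history,          // full history resent every turn
    }),
  });
  const data = (await res.json()) as { content: { type: string; text?: string }[] };
  const reply = data.content.find((b) => b.type === "text")?.text ?? "";
  return splitCompletionTag(reply);
}
```

The in-band tag is the simplest way to let the model own the exit: no turn counter, no separate classifier call, just a sentinel the UI strips before rendering.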

Voice input uses the Web Audio API for real-time waveform visualization — an AnalyserNode feeding RMS amplitude data to a canvas at 60fps. The recording bar replaces the text input (same footprint, no layout shift) and captures audio via MediaRecorder. When you tap send, the audio goes to Whisper for transcription and auto-submits as your message.
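The waveform math is small. A sketch of the analyser loop, with function names of my own invention; only the RMS helper is load-bearing:

```typescript
// Root-mean-square of one time-domain frame: 0 for silence, toward 1 at full scale.
function rmsAmplitude(samples: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  return Math.sqrt(sum / samples.length);
}

// Browser-side wiring, assuming a mic MediaStream and a canvas already exist.
function startWaveform(stream: MediaStream, canvas: HTMLCanvasElement): void {
  const audioCtx = new AudioContext();
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 2048;
  audioCtx.createMediaStreamSource(stream).connect(analyser);

  const frame = new Float32Array(analyser.fftSize);
  const g = canvas.getContext("2d")!;

  const draw = () => {
    analyser.getFloatTimeDomainData(frame);       // raw samples, not FFT bins
    const level = rmsAmplitude(frame);
    g.clearRect(0, 0, canvas.width, canvas.height);
    const h = Math.max(2, level * canvas.height); // keep a visible baseline
    g.fillRect(0, (canvas.height - h) / 2, canvas.width, h);
    requestAnimationFrame(draw);                  // repaint-synced, ~60fps
  };
  draw();
}
```

`requestAnimationFrame` rather than `setInterval` is what keeps the bar smooth: the browser schedules the draw against its own repaint cycle.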

The summary is generated by a separate API call after the interview closes. It takes the full transcript and produces the structured report. This call fires immediately when the interview ends — not when the user navigates to the results page. By the time they tap "Show results," it's usually ready.
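The handoff can be sketched like this. The report generator and the storage key are stand-ins, and both are injected so the logic runs outside a browser:

```typescript
type Transcript = { role: string; content: string }[];
type WritableKV = { setItem(k: string, v: string): void };

// Kick off summary generation the moment the interview closes, not when
// the user reaches the results page. In the browser, pass localStorage
// as the store; the key name is an illustrative assumption.
async function finishInterview(
  transcript: Transcript,
  generateReport: (t: Transcript) => Promise<string>, // the second API call
  store: WritableKV,
): Promise<void> {
  const report = await generateReport(transcript);
  store.setItem("chatform:summary", report); // bridge to the results page
}
```

Starting the call early is what buys the "usually ready" experience: the summary generation overlaps with the seconds the user spends reading the completion screen.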

State bridges between pages via localStorage. The completion page polls every 500ms until the summary arrives. No database, no accounts, no persistence beyond the session. The email to our team is the permanent record.
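The polling side, sketched with an injectable store and an assumed key name:

```typescript
type ReadableKV = { getItem(k: string): string | null };

// Resolve once the summary lands in storage. Pass window.localStorage in
// the browser; injectable here so the loop is testable without one.
function waitForSummary(store: ReadableKV, intervalMs = 500): Promise<string> {
  return new Promise((resolve) => {
    const tick = () => {
      const report = store.getItem("chatform:summary"); // assumed key
      if (report !== null) resolve(report);             // summary arrived, stop polling
      else setTimeout(tick, intervalMs);                // check again shortly
    };
    tick();
  });
}
```

At a 500ms interval the worst case is half a second of spinner after the summary lands, which is below anyone's threshold of noticing.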

Built with Next.js, TypeScript, and Tailwind. Deployed on Netlify. The mobile chat interface was its own adventure — Safari's virtual keyboard behavior is hostile to fixed-position inputs, and 100vh lies about viewport height when the keyboard is open. The fix is dvh (dynamic viewport height), flexbox layout instead of fixed positioning, and testing on a real iPhone because simulators don't replicate the behavior.
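The resulting layout skeleton, with illustrative class names (the real build uses Tailwind utilities, but the CSS is the same idea):

```css
/* dvh tracks the visible viewport as the iOS keyboard opens; 100vh does not. */
.chat-screen {
  height: 100dvh;           /* dynamic viewport height, not 100vh */
  display: flex;
  flex-direction: column;   /* messages grow, input stays in flow */
}
.chat-messages {
  flex: 1;
  overflow-y: auto;         /* scroll the message log, not the page */
}
.chat-input {
  flex-shrink: 0;           /* in-flow footer instead of position: fixed */
}
```

Keeping the input in normal flow is the real trick: Safari can relocate fixed-position elements unpredictably while the keyboard animates, but a flex child just rides the shrinking container.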

Where this goes

The chatform pattern works anywhere you need structured information from a conversation rather than a form. Onboarding. Discovery calls. Research interviews. Client intake. Creative briefs. Any context where the quality of what someone shares depends on how they're asked.

The soul file is the key. It's what makes the same architecture reusable across completely different contexts. A hiring intake and a creative assessment and a customer discovery interview are all the same shape — a bounded conversation with a personality, a question structure, and a formatted output. Different souls, same bones.

I'm going to keep building on this. If you make something with it, I'd like to see it.
