CASE STUDY: SHAPING AI CONVERSATIONS
Context
Omada is an app that helps people manage chronic health conditions. The company was exploring how AI could support members in real time in two key areas:
– Nutrition education: Answering questions with evidence-based guidance
– Motivational interviewing: Supporting behavior change through conversation
The problem
Support at the time was spread across lessons, resources, and coaching. It worked in some ways, but when members had immediate questions, they often either waited for a coach or turned to Google or TikTok for faster but less reliable answers.
The opportunity
AI created a new opportunity: we could give members help in the moment, not just static content. We could also ground those experiences in Omada’s clinical knowledge and expertise to differentiate our AI from other options in the space.
Goals
From a metrics standpoint, our goal was to increase engagement and retention. From a product perspective, we aimed to make AI experiences feel genuinely useful in the moment, grounded in Omada’s clinical expertise, and available whenever members needed support.
My role and process
I led content design across both AI experiences. I defined how the systems should communicate and, in partnership with clinical leads, how they should behave to deliver the right care to members. I partnered with the team on system prompts and evaluation frameworks to assess quality, safety, and brand fit. I also helped create the in-app entry points for both experiences, shaping the UX and writing UI content (headers, value props, etc.).
Step 1
I started by grounding both AI experiences in a shared voice. Omada's existing brand voice was defined by four characteristics:
For brand consistency, I wanted to adopt these voice characteristics, but I made a key decision not to stop there. The two use cases needed noticeably different tones depending on member intent. Building on the foundation of the voice characteristics, I added tone guidance for each experience:
– Nutrition education: clear, supportive, and easy to scan
– Motivational interviewing: more reflective, restrained, and conversational
This gave us consistency at the brand level, while still allowing each experience to feel appropriate to its context — without defaulting to a generic AI personality.
I documented these voice and tone decisions to align cross-functional partners and ultimately translate them into system prompts that shaped the AI experience.
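One way voice-and-tone documentation like this can be translated into system prompts is to layer an experience-specific tone overlay on top of a shared brand voice. The prompt wording below is purely illustrative, not Omada's actual prompt text:

```python
# Illustrative sketch: one shared brand-voice block, plus a tone overlay
# per experience. All prompt copy here is hypothetical.

BRAND_VOICE = (
    "You are a supportive health companion for Omada members. "
    "Be clear, warm, and grounded in evidence-based guidance."
)

TONE_OVERLAYS = {
    "nutrition_education": (
        "Tone: clear, supportive, and easy to scan. "
        "Prefer short paragraphs and concrete, practical guidance."
    ),
    "motivational_interviewing": (
        "Tone: reflective, restrained, and conversational. "
        "Ask open questions; avoid lecturing or long lists."
    ),
}

def build_system_prompt(experience: str) -> str:
    """Compose the shared voice with the experience-specific tone."""
    return f"{BRAND_VOICE}\n\n{TONE_OVERLAYS[experience]}"
```

Keeping the brand voice in one place means both experiences stay consistent while only the overlay varies.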
Step 2
Next, I focused on turning content principles into something the system could consistently execute.
This meant defining how the AI should behave across interactions, including:
I also defined clear boundaries:
I aimed to create strong conversational principles that were repeatable at the system level.
Step 3
I then moved into writing and iterating on system prompts for both experiences, reviewing outputs to understand how the AI behaved in practice.
The engineering team set up a space in LangSmith where I could input a prompt, test the results in a simulated experience, and make adjustments as needed.
This iterative process was key: the prompts evolved rapidly as I reviewed outputs and made adjustments.
I ran this loop across both Nutrition Education and Motivational Interviewing, using output review as the main way to test and improve system behavior.
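The review loop above can be sketched as running a candidate system prompt against a fixed set of test questions and collecting the outputs for manual review. The `call_model` function and the test questions are stand-ins for whatever the team actually used:

```python
# Minimal sketch of the prompt-iteration loop: run one candidate system
# prompt against a fixed question set, then review the outputs by hand.
# call_model is a placeholder for the real LLM endpoint.

def call_model(system_prompt: str, user_message: str) -> str:
    # Placeholder: a real implementation would call the production model.
    return f"[draft answer to: {user_message}]"

TEST_QUESTIONS = [
    "Is intermittent fasting safe for someone with type 2 diabetes?",
    "What's a good snack before a workout?",
]

def review_round(system_prompt: str) -> list[tuple[str, str]]:
    """Collect (question, output) pairs for one round of manual review."""
    return [(q, call_model(system_prompt, q)) for q in TEST_QUESTIONS]
```

Each adjustment to the prompt gets re-run against the same questions, so changes in behavior are easy to spot side by side.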
Step 4
Because clinical safety and brand consistency were extremely important, the team used a second behind-the-scenes LLM to judge the output of the user-facing LLM powering the two AI experiences. To train this LLM judge, I created a rubric for grading AI output. The clinical team created a similar rubric to grade the clinical quality of the AI output.
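An LLM-as-judge setup like this typically turns each rubric item into a question the judge model scores per response. The criteria and the `judge_model` stub below are illustrative, not the production rubric:

```python
# Illustrative LLM-as-judge sketch: a second model grades each user-facing
# response against rubric criteria. Criteria names and the scorer are
# hypothetical stand-ins for the real rubric and judge model.

RUBRIC = {
    "brand_voice": "Does the response match the defined voice and tone?",
    "clarity": "Is the response clear and easy to scan?",
    "safety": "Does it stay within the approved clinical scope?",
}

def judge_model(criterion_question: str, response: str) -> int:
    # Placeholder: a real judge would be an LLM call returning a 1-5 score.
    return 5 if response else 1

def grade(response: str) -> dict[str, int]:
    """Score one AI response against every rubric criterion."""
    return {name: judge_model(q, response) for name, q in RUBRIC.items()}
```

Aggregating these per-criterion scores across a test set is what made it possible to track whether prompt changes moved quality in the right direction.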
I also partnered closely with clinical and legal teams to design how the AI handled guardrail moments. I created the messaging for these scenarios, ensuring responses were clinically appropriate, legally sound, and still clear and supportive for members.
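Guardrail moments like these are often handled by routing flagged scenarios to pre-approved messaging instead of a free-form model response. The scenario names and copy below are invented for illustration, not the clinically and legally approved content:

```python
# Hypothetical guardrail routing: when a scenario is flagged, return
# pre-approved copy rather than the model's free-form output. Scenario
# keys and messaging are illustrative only.

GUARDRAIL_RESPONSES = {
    "medication_dosage": (
        "I'm not able to advise on medication dosing. Your care team or "
        "pharmacist is the best resource for that question."
    ),
    "out_of_scope_diagnosis": (
        "That's a question for a clinician. Would you like help connecting "
        "with your care team?"
    ),
}

def respond(flagged_scenario, model_response: str) -> str:
    """Return pre-approved copy when a guardrail fires, else the model output."""
    if flagged_scenario in GUARDRAIL_RESPONSES:
        return GUARDRAIL_RESPONSES[flagged_scenario]
    return model_response
```

Keeping this messaging outside the model's free-form generation is what makes it possible to guarantee the exact wording clinical and legal partners signed off on.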
We iterated against both sets of criteria until the system consistently scored highly, giving us confidence to release the experiences to members. I also created the in-app content members saw before starting a session, setting expectations and clearly communicating the key value props upfront.
Impact
This work helped move both AI experiences from concept to launch as the company’s first major step into AI ahead of its IPO.
For Nutrition Education, over half of members who logged a meal engaged with the experience, contributing to an overall 4% engagement lift attributed to the AI.
Motivational Interviewing was more challenging: only 43% of members who started a chat completed a full session, and discovery was a key issue. Unlike Nutrition Education, which was embedded in the food tracker, MI was surfaced as a home screen tile, an interaction model that probably didn't fit the timing or headspace needed for a reflective conversation.
Hearing from members
It was very rewarding to see members use the features and find value in them!