How behavioral research shaped an AI tool for stakeholder communication

Why do skilled communicators still struggle with stakeholder alignment? Is the bottleneck a communication problem or a cognitive one?

This case study includes the following research and creative methods:

Problem identification

Qualitative research

Behavioral insight synthesis

Tool design strategy

Workflow development


My Role:

This is an independent project that I initiated, lead, and own. I identified the behavioral problem through self-directed qualitative research, designed the tool's strategic logic and workflow, and am building it directly using ChatGPT. I lead a small team supporting the project and hold final decision-making authority on all design, workflow, and strategic direction. Every research, design, and product decision originates with me.

The Challenge

Teams managing multi-tenure stakeholders were spending disproportionate time recalibrating communication for every touchpoint, slowing projects and diluting message clarity.

The Approach

Qualitative research across 18 conversations to identify the behavioral root cause, translated into the design logic of an AI coaching tool.

The Outcome

[TBD]

[Add photo of the tool in ChatGPT]



In complex project environments, communication breakdowns rarely happen because people don't know how to write. They happen because teams are constantly switching between audiences: adjusting depth, tone, and framing for every seniority level and every stage of a project's lifecycle. The cognitive cost of that constant recalibration rarely gets named as the problem. It gets blamed on the communication itself. Identifying the real bottleneck was the foundation of this work. Building a tool around it was the outcome.

Why this mattered

#

Time saved per communication

#

Testers reporting reduced friction

#

[Add third metric here]


The research began informally, after I noticed a recurring pattern in team conversations about presentation preparation and stakeholder communication. To test whether the pattern was consistent, I conducted 3 structured conversations with colleagues, exploring how they approached communication across different stakeholder tenures and project stages.

In parallel, approximately 15 observational and casual conversations with team members surfaced consistent language around the same friction points: the time cost of tailoring, the uncertainty about how much detail to include, and the mental effort of re-modeling the audience before every touchpoint.

How the research worked

Questions Asked:

The bottleneck isn’t communication skill — it’s the cognitive tax of re-modeling your audience every single time before you can write a single slide or send a single message.
— What the research revealed

Key Insights

1. The real problem was audience-switching cost, not communication ability.

Across every conversation, the frustration wasn't that people didn't know what to say. It was that they had to mentally rebuild their understanding of who they were talking to — their seniority, their context, their level of project involvement — before they could begin. That rebuilding process happened repeatedly across every email, every slide deck, every direct message throughout a project's lifecycle. The communication itself wasn't the drain. The constant recalibration was.

2. Seniority alone didn't determine how communication needed to be tailored. Project involvement did.

A senior stakeholder who had been present throughout a project needed fundamentally different framing than a senior stakeholder being briefed for the first time. Most people were making this adjustment intuitively, without a framework to guide it, which meant the calibration was inconsistent and time-consuming.

[Add a specific example from conversations that illustrated this clearly]
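To make the two-axis idea concrete, here is a small hypothetical sketch of how framing could be keyed to project involvement, with seniority adjusting depth rather than driving the frame. The labels and guidance strings are illustrative assumptions, not the tool's actual logic.

```python
# Hypothetical illustration of the research finding: the frame is set by
# project involvement, not seniority. Labels and guidance text are examples.

FRAMING_BY_INVOLVEMENT = {
    "embedded": "Lead with what changed since the last touchpoint; skip background.",
    "intermittent": "Open with a one-line status recap, then the decision at hand.",
    "first_brief": "Start with project purpose and current state before any detail.",
}

def framing_hint(involvement: str, seniority: str) -> str:
    """Return a framing hint: involvement sets the frame, seniority adjusts depth."""
    base = FRAMING_BY_INVOLVEMENT[involvement]
    depth = (
        "Keep detail at the decision level."
        if seniority == "senior"
        else "Include the operational detail they may need to act on."
    )
    return f"{base} {depth}"

# Two senior stakeholders, different involvement, different framing.
print(framing_hint("embedded", "senior"))
print(framing_hint("first_brief", "senior"))
```

The point of the sketch is the shape of the lookup: two stakeholders at the same seniority get different framing because their involvement differs.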

3. The cognitive load showed up differently depending on the communication channel.

Slide decks required a different kind of recalibration than emails or direct messages — but the underlying audience-modeling problem was the same across all three. People were solving it three separate times, in three separate contexts, without a shared mental model to anchor any of them.

[Add any channel-specific frustrations testers described during usability sessions]

4. People had workarounds, and the workarounds revealed what the real need was.

[Add the informal workarounds people described during conversations — e.g. keeping a mental stakeholder profile, copying past emails as templates, asking a senior colleague to review before sending. Workarounds are your strongest qualitative finding.]

  • [verbatim quotes from a testing session]

    —User


The insight reframed the design challenge entirely. The tool shouldn't teach people how to communicate; it should externalize the audience-modeling process so communicators can focus on what they're saying rather than who they're constantly recalibrating for. This meant the tool needed to lead with audience context before content. Rather than asking "What do you want to say?", the workflow begins with "Who are you saying it to?", "What is their relationship to this project?", and "What do they need to do with this information?"
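As a minimal sketch of that sequencing, the snippet below shows how an audience-context-first intake might be externalized in code. The production tool is being built directly in ChatGPT, so the dataclass, field names, and prompt wording here are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AudienceContext:
    # Collected before any content is drafted; mirrors the intake questions above.
    who: str                   # who the communication is for
    project_relationship: str  # their involvement: embedded, intermittent, first brief
    intended_action: str       # what they need to do with the information
    channel: str               # slide deck, email, or direct message

def build_coaching_prompt(audience: AudienceContext, draft_intent: str) -> str:
    """Assemble a coaching prompt that leads with audience context, not content."""
    return (
        "You are a communication coach. Before improving the draft, "
        "calibrate to this audience:\n"
        f"- Audience: {audience.who}\n"
        f"- Relationship to the project: {audience.project_relationship}\n"
        f"- What they need to do with this information: {audience.intended_action}\n"
        f"- Channel: {audience.channel}\n\n"
        "Now coach the author on framing, depth, and tone for this audience.\n"
        f"Author's intent: {draft_intent}"
    )

# Example usage with invented values.
ctx = AudienceContext(
    who="VP of Operations",
    project_relationship="being briefed for the first time",
    intended_action="approve the revised timeline",
    channel="email",
)
print(build_coaching_prompt(ctx, "Explain why the rollout slipped two weeks."))
```

The design choice the sketch mirrors is ordering: the model receives the audience context before it ever sees the author's intent, which is what keeps the interaction in coaching mode rather than generation mode.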

Strategy direction


[Complete after testing — describe how the tool works in practice, the user experience, how it coaches rather than just generates, and what adjustments were made based on tester feedback]

Execution

Timeline

Discovery: [date]

Workflow design: [date]

User testing: [date]

Iteration: ongoing

Reflections

This project reinforced something the research made visible early: the most persistent workplace communication problems are rarely about skill. They're about cognitive infrastructure, or the lack of it. When people are given a framework that externalizes the thinking they were already doing intuitively, the communication gets clearer almost automatically.

The most important design decision wasn't what the tool does. It was what it asks first. Leading with audience context before content mirrors how human communication actually works psychologically, and that sequencing is what makes the tool feel like coaching rather than generation.

[Add after testing — what you learned from watching real users interact with the tool that you didn't anticipate from the research alone. The gap between what research predicted and what testing revealed is often the most honest and compelling part of a case study.]