AI Has Given Medical Affairs Everything Except the Answer

By Briana Belford, Practice Leader, Integrated Intelligence

If you talked to me at all in Q1, you probably heard about all the travel, events and AI-related conversations I was obsessing over. I probably used a lot of hand gestures and gushed over the *transformative moment* underway in insights, in medical communications, and in professional corporate life in general. As I've settled back in, applied new learnings to ongoing projects and simply gotten a chance to breathe before a spring travel surge, I've finally processed the common threads across Reuters Pharma USA, Pharmabrands Age of AI Europe, and various insights & innovation workshops across Real Chemistry's Medical Affairs clients. Each had different audiences, formats and goals, but all circled surprisingly similar questions.

At Age of AI, the conversation was sweeping and expansive: multi-agentic workflows, synthetic environments, AI-generated tools spun up on demand with magic wands (at least that's how it feels, right?). At Reuters Pharma USA, the discussion was more grounded: physicians' shrinking attention, segmentation models that don't quite work. And in my daily work with medical insights teams looking to show impact, one stubborn reality persisted: the outcomes we care most about take a year or more to show up, despite our having access to more data and technology than ever.

AI has dramatically expanded what Medical Affairs can do—without resolving the harder question of what we should prioritize. Here are my takeaways from a Q1 spent in 8 cities across 3 continents.

Language Models Are Quickly Becoming World Models

As a linguist who first learned about "large language models" in a dark college classroom (why were all the linguistics dept rooms in the basement??) decades before ChatGPT existed, I found that one of the most provocative assertions at Age of AI Europe was the idea that AI is no longer just predicting the next word; it's starting to predict the next physical state. What looks today like an immersive video game with no relevance to healthcare could tomorrow become simulated patient journeys, clinic dynamics or health system behavior. This very real potential explains why AI experimentation is now everywhere in pharma, yes, even in Medical Affairs.

Which leads to an uncomfortable but necessary question, especially for those of us who support the sector: If content, analyses and even tools can be generated on demand, where does real value now come from in Medical Affairs?

Personalization Fails When It Ignores the Decision

At Reuters Pharma USA, I moderated a panel on “orchestrating personalized scientific exchange at scale,” and the same tension kept surfacing: Many personalization efforts aren’t failing because we lack data or technology. They’re failing because they are optimizing for past behavior.

Segmentation models still lean on historical behavior, like what was clicked, downloaded or requested, because it’s clean and available. But as Janine Gaiha Rohrbach of Biogen put it, that can create a “scientific echo chamber,” where we optimize for familiarity and end up serving yesterday’s questions back to clinicians who are facing new ones.

AI doesn’t fix this. Trained on backward-looking data, it can reinforce the same patterns at scale. The opportunity here is shifting from targeting people to understanding the decision context – what question is in front of the HCP right now, and what evidence would actually change what they do next?

And this is probably part of why HCPs are increasingly taking decision-related questions to LLMs. They face a new, specific question every day, and an AI assistant will generate an answer on demand, regardless of whether the underlying evidence is easy to find, consistent and correctly framed.

That brings me to my next takeaway, the point where the AI conversation became very tangible and very urgent.

GEO Is an Evidence Strategy, Not a Buzzword, and Yes, It’s Medical’s Job, Too!

GEO, also referred to as AEO, AIO, or [insert this month’s latest acronym], is Generative Engine Optimization: the practice of ensuring your content, data and critical information is readily accessible, readable and citable by large language models.

As AI assistants and LLM-powered platforms become physicians’ first touchpoint for medical information, it’s crucial that medical content be discoverable, interpretable and consistent across sources. If your core evidence lives in gated PDFs, inconsistently formatted decks, or fragmented repositories, it’s effectively invisible to the systems clinicians increasingly rely on. That’s not a communications failure. It’s an evidence readiness problem.

GEO forces uncomfortable but necessary alignment between study design and downstream use, between publications and digital architecture, and, importantly, between medical, commercial and regulatory precision.
AI doesn't absolve us of those responsibilities; rather, it magnifies the consequences of operating in functional silos while expecting one strategy to shape the public narrative around your science, your company or your latest data.

The Metrics We Trust Come Too Late. So What Do We Track?

One of the most difficult realities in Medical Affairs measurement has always been that the outcomes we actually care about, such as better clinical decisions, more guideline-directed therapy, and fewer patients left untreated, arrive after a long delay, often a year or more. AI has made this lag more visible, and our willingness to wait for feedback has all but disappeared.

Leaders are expected to make weekly decisions using signals that won’t validate for months. So we reach for proxies such as engagement, activity counts and altmetrics. They’re not wrong, but they’re incomplete unless they’re connected to what’s happening in the field and across the broader scientific ecosystem.

What’s missing isn’t another dashboard. It’s integration: pulling together content performance, medical information trends, congress intelligence, KOL and field insights, and external signals into one coherent view that supports judgement rather than just reporting. In a world of infinite outputs, trust, verification and decision clarity become the scarce currency.

AI has empowered everyone to produce content and quickly analyze thousands of data points. But the ability to interpret uncertainty and translate mixed signals into the next best action is what will actually move the needle.

This is where Medical Affairs (and the partners that support it) still matter most:

  • Interpreting ambiguity without overreacting to noise
  • Pressure-testing assumptions (and the data behind them)
  • Connecting disparate inputs into insights that change decisions

AI can accelerate synthesis, surface patterns and simulate scenarios, but it can’t decide what to optimize for. That still takes purpose, judgement and a relentless focus on decision quality.

Where This Leaves Us

Wrapping up my quarter on the road, I felt a mix of excitement and unease. After 15 years in this industry, I recognize that signal as evidence that I’m in the right rooms and that we’re on the precipice of true transformation.

It will surprise no one: AI is not a side project for Medical Affairs anymore. It's reshaping how evidence is generated, accessed, interpreted and trusted. That means we must help shape what AI has to say.