[00:00:00] We can plug into the wall and get electricity on demand and use it to power machines that can do physical work. And AI is like the same thing, but for cognitive labor. And so anywhere where intelligence would be beneficial, you can plug into this cognitive
[00:00:15] labor and get that really on demand. And so what part of an organization doesn't need electricity? Well, pretty much every part of an organization needs electricity for something. And it's the same thing with intelligence on demand.
[00:00:29] Welcome to the very first episode of the very first season of the PharmaBrands Podcast. I'm your host, Neil Follett. We started this podcast because we wanted to share the stories of the people, the brands, the organizations and the businesses that are making a difference in Canadian healthcare marketing.
[00:00:46] This is an industry full of those stories. Personal, impactful and, we hope, interesting enough to keep you coming back every few weeks for a little more. We're excited to announce that this season is brought to you in part by PaperCurve.
[00:00:59] PaperCurve helps life science companies streamline their content creation, review and sharing with fast setup, high team adoption and real human support. We are thrilled to have them supporting the show. Today we're speaking with Simon Smith, EVP of Generative AI at Klick Health. Let's get started.
[00:01:20] Simon, thanks for joining us today. I was going to introduce you, but you have such a long and storied background at the intersection of digital and health that I think it's best that you handle your own introduction. Yeah, so hi. I'm Simon Smith.
[00:01:36] I am currently the Executive Vice President of Generative AI at Klick Health, which I assume many of your listeners will know. This is a life science commercialization partner. I have been focused on AI and generative AI professionally for about the last seven years, and as a hobby
[00:01:58] for many years before that, judging by my journal entries. I rejoined Klick this January. I was there from 2012 to 2017 leading digital strategy for one of their portfolios. Then I really wanted to get into AI.
[00:02:12] So I left to join a company in Toronto called BenchSci, where I was the Chief Marketing Officer, and there we were doing AI for biomedical R&D. I was the Chief Marketing Officer for about five years and then I asked to move into an engineering role.
[00:02:27] I've always been a hobbyist programmer and I wanted to really get my hands dirty with generative AI. I'd been playing with it, and I saw ChatGPT and I felt like this was a definitive
[00:02:37] moment, and I really wanted to understand how the technology worked and how it could be applied to biomedical R&D. So I spent about a year and a half there doing that. Then came back to Klick because there was a real belief that aligned with mine about
[00:02:52] the disruptive potential of the technology. I was very fortunate to find a place where I could explore this potential full time. You mentioned seven years ago you started dabbling. This world must have looked very, very different.
[00:03:07] What were you doing from an AI standpoint at least seven years ago? What were you exploring? What was going on in the space back then? Very different and I should say again, I've always been a hobbyist programmer.
[00:03:20] I worked professionally as an engineer for about a year and a half. People listening to this might laugh at some of the things I was doing. But before that, say around 2017, I was experimenting with very early generative models.
[00:03:39] For example, recurrent neural networks. I remember one of the projects I wanted to try was training a model on drug names and then trying to get it to generate novel drug names, because naming drugs can be pretty fraught.
[00:03:56] I've worked on drugs before that have gone to market and then after they're in market you have to change the name because it turns out that the way physicians are writing it looks confusing to the pharmacist and people get the wrong drugs.
[00:04:09] I've literally worked on a very large brand where the name had to be changed post-launch. I was experimenting with that. That was one example: how can we get these very primitive neural networks to generate novel brand name ideas?
[00:04:26] That's one example of something that I was playing with, and there were huge limitations with it. It was character-based, not even token-based at the time, almost letter by letter, and you would get a lot of names that were interesting
[00:04:44] but potentially nonsensical. You could even see then that the models would learn things like the sequence of consonants and vowels. After you trained it enough you wouldn't typically get outputs that were just a bunch of random strings. There would be things that look kind of like words.
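The kind of letter-sequence learning described here can be illustrated with an even simpler stand-in than the recurrent networks Simon used: a character bigram model that counts which letter tends to follow which. This is only a minimal sketch of the idea, not his actual setup, and the training names below are invented for the example.

```python
import random
from collections import defaultdict

# Invented, illustrative training names -- not a real dataset.
NAMES = ["lipitor", "zoloft", "prozac", "ambien", "xanax",
         "celebrex", "nexium", "plavix", "crestor", "humira"]

def train_bigrams(names):
    """Count character-to-character transitions; '^' and '$' mark start/end."""
    counts = defaultdict(lambda: defaultdict(int))
    for name in names:
        chars = ["^"] + list(name) + ["$"]
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, rng, max_len=12):
    """Sample one character at a time from the learned transition counts."""
    out, cur = [], "^"
    while len(out) < max_len:
        choices = list(counts[cur])
        weights = [counts[cur][c] for c in choices]
        cur = rng.choices(choices, weights=weights)[0]
        if cur == "$":  # end-of-name marker sampled
            break
        out.append(cur)
    return "".join(out)

counts = train_bigrams(NAMES)
rng = random.Random(0)
samples = [generate(counts, rng) for _ in range(5)]
print(samples)  # word-like but often nonsensical strings; exact output depends on the seed
```

Even this toy picks up consonant-vowel alternation from the counts alone, which is why the outputs look word-like rather than random, while an RNN captures longer-range structure the same way at greater depth.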
[00:05:02] There was even less reasoning than you would see today in some of the language models out there. That's the kind of thing I was playing around with at the time. It feels like things are moving so quickly.
[00:05:14] A span of seven years has been compressed into seven days; things are launching moment by moment. As I was getting ready for the interview, I took a little glance at your LinkedIn content, and you're posting sometimes twice a day, and they're not just ruminations.
[00:05:30] They're actually fairly significant advancements, announcements, launches. Maybe orient us in your world right now, trying to stay in what feels like the eye of a kind of AI release tornado. What is it like right now to try to stay on top and make sense of everything that's
[00:05:49] going on? It's a great question. It's actually something I struggle with because, well, you know what I'm going to come back to that. Let me tell you about this morning as one great example of this.
[00:06:00] So on Tuesday next week I am going to deliver a presentation to a pharma company, to a team there about generative AI and strategies and tactics and so on. Before I give these presentations, I always schedule in time for myself to go through
[00:06:17] the entire presentation and update everything based on anything that's changed. This morning as just one example, I saw a McKinsey report that came out that talked about generative AI adoption in businesses. And 10 months ago it was, I believe 33%. And in the latest report, 10 months later it's 65%.
[00:06:40] So that is the kind of thing where I now have to go into my presentation and update a bunch of stats because- That's not 33 to 37. Like we're talking about- No, it's a doubling, right? So that's clearly a meaningful thing that I'm going to want to talk to people
[00:06:55] about next week. This is my life. So my life is a good chunk of my time is spent monitoring developments. And so I've sort of developed some systems for that, tried to find the best places to capture all this information.
[00:07:12] But as of right now, I monitor probably multiple hundreds of different sources of information, using some tools to make that a little bit more manageable. That cuts across mainstream publications, the individual companies themselves, key influencers, research institutions and so on, to try to stay up to date.
[00:07:32] And I struggle with it because, based on my previous work, let's say as a strategist, where so much of your time is spent on the work itself, or even as an engineer, where you're working so hard on the problems, it feels like a bit of an indulgence
[00:07:48] that a lot of my time is spent basically researching to make the right decisions. But then literally every day, I have those moments where I get this huge dopamine hit because I've discovered something new that now allows us to be much more
[00:08:06] strategically aligned with where the technology is going and also finding new tools that allow us to be more impactful. So I do struggle with it and I have these moments where I'm like, ah, you know, is this really like the best use of my time?
[00:08:20] But then like this morning, knowing that the adoption has been so fast now allows me to talk in a way that is much more, I think, strategically valuable to our customers because I can say, you know, hey, this is not standing still.
[00:08:34] Here's a very clear sign that if you're not part of that 65%, you're probably going to be in trouble. How much of this sort of daily dopamine hit do you think is around advancements or launches or announcements that are going to stick?
[00:08:53] There's so much noise in the Gen AI space that I feel that, you know, you hit refresh and the thing that was really meaningful becomes obsolete in 36 hours and, you know, thinking a year ahead or nine months ahead when you're updating that presentation again.
[00:09:10] How much of what's going on is foundational building blocks that AI of nine months from now is going to stand on the shoulders of? And how much of it is hype that is essentially just going to sort of go
[00:09:24] by the wayside as the industry sort of consolidates a little bit? Yeah, that's a great question. And I think that that's probably part of the challenge in my role is to try to make that determination.
[00:09:40] I think I would say first and foremost that you do have to try to peer through the media narratives. And maybe I should mention here that I began my career as a journalist. That was my background. I did my undergrad in journalism.
[00:09:57] And so I have a pretty good sense of how that profession works. And one of the phrases that people use in journalism is, you know, to comfort the afflicted and afflict the comfortable. And so you see this narrative where AI becomes popular
[00:10:14] and media helps to drive that narrative. It's so exciting. Look at how great the technology is. It's amazing. And then that narrative has become the standard. And so now it's time to afflict the comfortable. So OpenAI used to be the underdog.
[00:10:28] Now let's flip the narrative and put them on a defensive and attack everything that they do. Now, Anthropic is the underdog. Let's like go to their side. So I think there's a lot of that where you have to try to peel back,
[00:10:41] you know, what the media narrative is versus the reality. And I think a lot of the reality is based around fundamental technological advances and trends that you can forecast out with reasonable confidence because they've been around a long time. So let's start with that last one.
[00:11:01] Yeah, yesterday there was a report that came out that looked at the history of the amount of compute used to train various machine learning models. And this was going back many years, I think over a decade. And they were forecasting out that right now,
[00:11:18] I believe it's every year the amount of compute used to train the frontier models is increasing about four to five X. And because of the scaling laws, we know that for as long as those scaling laws hold, you know, more compute, more data leads to better models.
[00:11:32] So that's a trend where we can forecast out and we know that, you know, Microsoft, for example, has been building a hundred billion dollars worth of these massive, massive data centers so we can anticipate that that four to five X is going to continue to hold.
[00:11:45] And so you can look out like, OK, if the frontier models are trained with four to five X more compute a year from now, what does that mean? So I think that's one where we can be pretty confident.
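For what it's worth, the arithmetic behind that "four to five X per year" trend is simple compounding, and it is easy to sketch what it implies a few years out. The multipliers below are just the figures quoted above; the three-year horizon is arbitrary.

```python
# Simple compounding: if training compute grows by a fixed factor each year,
# compute relative to today's baseline is multiplier ** years.
def projected_compute(multiplier: float, years: int) -> float:
    return multiplier ** years

for m in (4, 5):
    print(f"{m}x/year -> {projected_compute(m, 3):.0f}x today's compute in 3 years")
# 4x/year compounds to 64x in three years; 5x/year to 125x.
```

That two-order-of-magnitude gap after just three years is why the trend, for as long as the scaling laws hold, makes the forecast feel reasonably safe.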
[00:11:58] And then on the technological advances side, you see something like the recent multimodal models that have come out where instead of having a model that takes audio and turns that into text and then takes the text
[00:12:12] and reasons over it and then takes the text and turns it into output of sound. Now you just have one model that takes sound in and outputs sound and does the reasoning in the middle. That's a fundamental technological improvement that is not going anywhere.
[00:12:26] It opens up a bunch of new use cases. So I think there are things that are very clearly going to have significant impact. These fundamental technological advances and then the trend lines that that seem to be holding that allow you to forecast the future
[00:12:40] with some degree of confidence. Yeah, that "some degree of confidence," I think, is probably a function of the hundreds of platforms that you're monitoring, because those of us who get our news from your LinkedIn feed, as opposed to our own hundreds of sources,
[00:12:56] have a lower degree of confidence. And I think the folks that I talk to, especially in the health care space, feel, I think, somewhat overwhelmed. I think there's a spectrum on all technology, right? Where you've got the adopters and the anxious.
[00:13:11] And I feel like there's a really big cohort in that anxious that doesn't know where to start, doesn't know what to do, doesn't know what they're allowed to do. And it's interesting when you're saying, you know, you updated that report
[00:13:21] and 60-plus percent of businesses have adopted or integrated AI in some way. I feel like with AI, more than really any other technology (and I don't even know what you would call it: a productivity technology, a creative technology, it's a bit of everything), the distance between the folks
[00:13:42] that are doing their work every day, you know, the brand managers or the people who are leading brands, and the people who are integrating AI or standing up pilots, and this is maybe a bit more Canadian-specific, that distance is really quite great, it feels like.
[00:13:59] And I would hypothesize that if you tapped a whole bunch of folks, even within those organizations that say they're adopting, they would say, it hasn't really hit me yet. It hasn't been rolled out yet. I've been told I can't use ChatGPT.
[00:14:16] I've been told I can't use these. It feels like there's a really big gulf between those who are familiar and trying to figure this out and those who are still doing their jobs as though it was 2023. I don't know. What are your thoughts about that?
[00:14:35] Yeah, I think that I don't know what the source of this is, but I feel that at least in part, there's a legacy here of companies feeling like change management is a very difficult process. Innovation requires you to spin up these innovation teams
[00:14:58] and then have these technological diffusers. And there's this model we have in our heads about how difficult it is to roll out new technologies. And I've been through this. I've seen the world in pharma for social media. I've seen the world for mobile.
[00:15:15] I know how long it typically takes for innovations that hit the rest of the world to diffuse out through pharma and through other industries. But from what I see, the misconception here is that it's difficult. It's really not difficult.
[00:15:35] You get a huge amount of productivity benefit by literally just giving everybody access to the tools in a way that they can use them for the use cases that they discover, which nobody from the top down
[00:15:48] is going to be able to figure out, and just letting them use it. They're the ones in it, doing the work today. The idea that the CEO of a company is going to be able to
[00:15:57] or maybe not the CEO, but like some generative AI steering committee of executives is going to be able to figure out the various use cases that some brand manager is going to have for generative AI on a day to day
[00:16:09] basis from summarizing an email to coming up with talking points for an upcoming meeting. I just think that's not the way this technology is going to diffuse. And one of the best examples, I think out there right now in the pharma space, which I talk about a lot.
[00:16:26] So I think I often come across like I am working for open AI. So I am not. I have no financial interest in this whatsoever. But I think one of the best examples that I've seen of pharma adoption
[00:16:40] of generative AI is Moderna, where they rolled out ChatGPT Enterprise to the entire company. And within two months they had over, I believe, 750 custom GPTs that people had just created for individual use cases that they discovered. They had 100% adoption in their legal team, just really reinforcing
[00:17:00] the privacy and security of the tool. And they continue to get good momentum. So we have heard directly from them that this is something that is truly being rolled out throughout the entire company. Not that difficult to do. PwC just announced that they're giving 100,000 people in the company
[00:17:17] access to ChatGPT Enterprise. That's the biggest rollout yet. So I think the problem is, and I'm not sure if it's intentionally done by companies that benefit from making this seem more complicated than it actually is, but a lot of people use ChatGPT in their personal life
[00:17:38] and then are simply not allowed to use it in their business life. And I see this as very similar to the iPhone moment of many years ago. I was going to say, it still reminds me of having lunch with a client
[00:17:49] where they've got multiple phones on the table and like, this is my private phone that I'm not allowed to use. It's like, yeah. Yeah, it doesn't make sense. Right. And eventually companies caved in and they were like, all right.
[00:18:02] Number one, we're not going to force you to use a BlackBerry if you're familiar with the iPhone because we want you to use the tools you're comfortable with. Number two, we're not going to try to build our own iPhones because that's ridiculous.
[00:18:13] So you see a very similar trend happening here, where companies at first are like, oh, well, we're going to build our own internal LLM-based systems that are going to be way inferior to what you can get off the shelf from Anthropic or OpenAI.
[00:18:27] But they meet some sort of criteria, and nobody uses them. And people are like, I don't know what to do with them. Whatever. Meanwhile, they're on ChatGPT asking it for recipes or what have you. And I think something very similar is happening here.
[00:18:41] It's very much an iPhone moment. And I think if companies really leaned into the trend strongly and said, hey, our people are adopting this tool, this is what they're familiar with. We want them to find the use cases that are best for them in their particular jobs.
[00:18:55] We're just going to put in place a system that they can use on a day-to-day basis, that they're already familiar with, so we can get an increase. And you see the data on this: give people these tools and they can get a 40% productivity increase.
[00:19:09] But even if it's a 10% productivity increase, just from being able to summarize long documents or to help them do little things here and there, I still think across an organization that's a pretty big impact. Well, I think at a human level,
[00:19:25] the debate about is AI good or bad? Like we can park that for a second, or maybe just park that for the entire conversation. But I don't talk to any brand manager or frankly anybody in an agency who says to me, man, you know what?
[00:19:41] I am being asked to do less with way more resources than ever. Like it's the opposite, right? Everybody's being asked to do more with less. I mean, I remember when I first started my agency whatever, 18 years ago, you know, you used to have a brand manager
[00:19:56] and two assistant brand managers and probably an intern and you had like a whole brand team for even a mid-sized brand. And fast forward to today, you often have one brand manager who might even be playing double duty. So that 10% gain,
[00:20:10] if you look at it from the business standpoint does not have a huge cost associated with it. And if you look at it at a human level, I think most brand managers would say, boy, if I could just get 10% back,
[00:20:24] it would alleviate so much pressure and stress, right? Like there's two sides of it there. You know, when you're talking about the iPhone, it reminds me a little bit. The other analog to me is the old Digital Center of Excellence, right? Where the Digital Center of Excellence
[00:20:38] wasn't everybody like we are a digital organization and these folks are way ahead of it. It's sort of a, we've baked in digital knowledge over here in a way that indicates that you don't need it over here. And I think that your example of Moderna is,
[00:20:54] we are going to be a Gen AI organization. They also probably have a group that's piloting really advanced stuff, but there's a baseline that they've set, right? Yeah, exactly. I think another analogy in addition to the iPhone that I like to use
[00:21:08] because you mentioned like what kind of technology is this and it's a general purpose technology. It's basically intelligence that we can plug into. The other analogy I like to use is electricity. So electricity is like physical labor we can plug into.
[00:21:21] We can plug into the wall and get electricity on demand and use it to power machines that can do physical work. And AI is like the same thing, but for cognitive labor. And so anywhere where intelligence would be beneficial, you can plug into this cognitive labor
[00:21:36] and get that really on demand. And so what part of an organization doesn't need electricity? Well, pretty much every part of an organization needs electricity for something. And it's the same thing with intelligence on demand. So the idea that this would be controlled
[00:21:51] in some kind of top down way is almost as ludicrous as saying, we're gonna review who gets to plug in what products to the electrical system of our company. And we don't do that because we know that it's important for people to get electricity on demand
[00:22:06] for whatever their particular need might be. And I think it's like that's another analogy that I think can work pretty well. I just wanted to give one concrete example that happened to me recently of where you can just get these quality of life improvements that are productivity improvements
[00:22:26] and quality of life improvements. So recently I had to draft up a proposal or some parts of a proposal. And of course those often have to be in a very structured format and that can be a bit stilted. And they take a long time to write sometimes
[00:22:38] because you sort of have to put on your proposal mindset of like, how am I gonna craft this message? Whatever. So I just opened up the ChatGPT desktop app; we use a business version of ChatGPT. I opened it up, I opened up the microphone.
[00:22:52] I said like, I'm just gonna talk to you about what this proposal should say and do and tell you my ideas and you're gonna turn it into a formal proposal. So it's like, yep, sure. And so I just barfed it out for like five minutes
[00:23:04] of just talking and then got the output that was totally structured. Copy, paste, done. You know, obviously reviewed it but it was exactly right because it basically just took what I barfed out and turned that into something more customer friendly. And I think that's an example of where,
[00:23:20] wow, what a great quality of life improvement and what a huge time saver. Took a task that would have taken me a couple of hours down to a few minutes. And instead of me having to sit there and try to think about the best way to write this,
[00:23:30] I just barfed out what was in my head and let it figure that out. We'll be right back after this message from PaperCurve. Season one of the PharmaBrands Podcast is brought to you in part by our partner PaperCurve. PaperCurve helps you take the pain
[00:23:47] out of your content review and compliance process. With PaperCurve's intuitive workflow, seamless setup and support by real humans, you can reduce content review and approval times by 60%, and who doesn't need a little bit of time back? PaperCurve is a Canadian product designed from the ground up
[00:24:03] for the Canadian healthcare regulatory environment. One-week setup, best-in-class pricing with no hidden fees, dedicated support and high-velocity, high-accuracy reviews. Find out more at papercurve.com. Now, back to the show. It feels to me when I listen to that example,
[00:24:24] like that feels magical. I just want to barf stuff out and have it come back formatted, and that's fantastic, right? An amazing use. But as I sit here, I feel like, for me, the friction between hearing you say that and trying to figure out how the hell
[00:24:41] that would actually work, or how to implement it, is really significant. And I think that's maybe a common feeling: you, as a non-super-user, hear these examples and go, I want that, but then you're lost as to how to get there.
[00:24:58] Is getting to the point where someone can just talk into their mic and have a structured output a lot closer than we all think, and we're all just a bit overwhelmed, those of us in that category who haven't really stood everything up yet? Or
[00:25:13] is it that you're a bit of a super user and you used to be a developer and a journalist? Yeah, I don't think it is. There's nothing technical about what I just said. It is literally a button, a microphone button, that you press
[00:25:27] and then you just tell it what you want it to do, and it does it. I think, for me, the difference between the people I see be successful with these tools and the people that aren't, and here I'm talking about tools like, you know, ChatGPT or Anthropic's Claude,
[00:25:44] or one of these that are designed to be easy to use, is a function of two things. The first one, and maybe these are very related, I don't know which is first or second, but the first one would be a function
[00:25:57] of how often they use it for everything. And the second one is a function of how much of a reflex it is to want to start with it always. So in this case, I just have the reflex action of like, I really don't want to write this proposal.
[00:26:16] I'm not in the mood. I'm just gonna procrastinate. Hey, you know what? I'm just gonna see if ChatGPT can do it, because what do I lose taking five minutes to see if it can do it? I don't really lose anything.
[00:26:26] And so I think if you get enough of those if your instinct is always like I'm gonna start with the AI and I'm gonna see how far I can get with the AI and that will help me A, learn what the limits of the AI are
[00:26:37] and B, potentially get a solution to this. And then for the next time, I'll be much better at doing that. There is no, at this point in a technology, you don't need to be a prompt engineer. You don't need to be a machine learning expert.
[00:26:51] You really just have to be capable of having a conversation with a slightly alien intelligence and trying to figure out what it can do and how you ask it to do things in a way that gets you what you want. Which I think we're all capable of
[00:27:06] because we interact with people all the time. Like some of us have kids and our kids are when they're really young, they're like aliens as well. You try to figure out how do I get them to do what I want?
[00:27:15] I've seen yours and I can still see someone alien. There you go, right? So no, I think it's more of a mindset, and I think unfortunately there is probably a benefit to some parts of the world we live in making people feel like this is really complex
[00:27:33] and they need to pay somebody a lot of money to help them figure it out. And I would argue that the opposite is true, that the more comfortable people get with these things and just try them for everything and treat them like a slightly alien intelligence
[00:27:46] and have conversations with them. Honestly, you'll get further treating it like a person than you will treating it like a machine even though it's not obviously a person. So I just think that they're like forces that are making people feel more trepidation than I think they should.
[00:28:01] And I would say, no, don't do that, just try. You can't break it, just ask it. You've given some use cases on how you use this. You talk about kind of pharma use cases. I thought it was really relevant when you said
[00:28:15] that the CEO or the head of AI isn't going to understand everybody's use case and be able to roll out a platform that addresses everybody, partly because people do different jobs and they do their different jobs differently. In your position, not only as EVP of Gen AI
[00:28:32] but at Klick, which we all know is an organization that has unmatched scale, really, in healthcare, actually unmatched scale in a lot of categories, but I imagine you get access to people and conversations and doors open to rooms that wouldn't be available to the rest of us.
[00:28:55] And without telling state secrets, what are you hearing from those folks who are much, much closer to rolling this out at scale in some of those really big organizations that we know in the healthcare vertical? Sure, yeah. I can definitely talk about this in generalities
[00:29:19] and we have been fortunate in that regard. Klick actually launched this thing called the Klick Prize, where they are giving away a million dollars to employees to come up with ideas for use cases for generative AI, and the judges for those are a client council
[00:29:36] made up of representatives from, I believe it's 13 different pharma and biotech companies and so those are people who are quite senior in those companies who would have an overview of things going on with generative AI. My sense of the market right now is that in the biggest companies,
[00:29:57] so I would say first off, for all companies, this is a top priority, like one of the highest priorities to an extent that again, I haven't seen before with things like mobile and social and so on, there was a recognition that it was important
[00:30:13] but as you mentioned earlier, these were often things that were done through innovation centers or digital centers of excellence. Here, this is central to the organizations. They are fully focused on AI adoption. So I think that's one thing that is very clear
[00:30:30] that the companies are very dedicated to it. The second one is that I see kind of a trifurcation of the market, and I looked that up, that is actually a word. It is the three-way version of bifurcation. So there's a trifurcation.
[00:30:46] The biggest companies are very advanced in their adoption to a level that I wouldn't have predicted would be the case before getting involved here. So very advanced means that they are deep into even rolling things out. I wouldn't say all of the biggest companies
[00:31:07] but most of the biggest companies, they're building out internal capabilities, they are deploying tools, they already have use cases that they've either deployed or are soon to deploy and they tend to have a pretty strong strategic perspective on what they're doing.
[00:31:25] So the big pharma companies are quite sophisticated, more sophisticated than I've seen with previous technological shifts, I would say. And then there's two other sort of parts of the market. There's a part of the market where they are looking, the organization culturally has accepted that this is important
[00:31:49] but there may not be this sort of top-down strategic drive to get things done. So you tend to have these innovators in the organization that are kind of taking things into their own hands and pushing forward with different initiatives within different groups.
[00:32:03] And because the technology is pretty accessible, if you have people who are moderately technically capable, they can do a lot within an organization. So those are sort of like the innovators who are driving things. And then there's this third category which I would say are companies
[00:32:18] that recognize the urgency, feel the urgency, but are a little uncertain about where to start. And they're looking for guidance and kind of quick wins. So that would be my sense of the market right now. I think it's rapidly maturing.
[00:32:34] I think that as examples like the Moderna example I gave start filtering out, people are going to start adopting even more aggressively. One last comment that I would say that surprised me amongst many things that have surprised me, but one particular surprising thing to me
[00:32:52] was how fast generative AI has been adopted for content and creative production. If you listen again to, let's say, some of the media narrative, it's like, oh, these imagery tools are gonna output things that potentially violate copyright, or that you can't trademark,
[00:33:14] and sometimes these things hallucinate, and oh, they're awful and they have no use cases. First of all, that's not true. There are things you can do to address all of those on the IP side as well as the hallucination side. But also within pharma companies
[00:33:31] the benefit to risk ratio is very much tilted in favor of benefit. And there is a tremendous demand and use of generative AI for both imagery production and copy production. And this may not be the case, let's say for a launch brand,
[00:33:54] but for brands that are more mature, where budgets are much tighter, there is tremendous appetite. Well, in brands that are more mature, there's a tremendous amount of content generation, and it's often complex in its nuance rather than in being novel.
[00:34:12] When I think about running the agency, there were some brands where a very large portion of the work that we did was pretty workmanlike. It wasn't sort of capital-C creative, it was an update to a detail aid that had been updated a number of times.
[00:34:27] And you wanna do that well and creatively and in a way that's engaging, but with no new claims, it's really sort of incremental changes that feel very much ripe for efficiency. And you think about that at scale in an organization that has a bunch of brands
[00:34:48] that are mature, what do you think that's gonna do to this sort of client-agency dynamic? I know that there are some agencies standing up as AI-first agencies, and there's a lot of agencies that are wondering how to pivot.
[00:35:02] I talk to a lot of clients who are saying, we're just gonna do a whole bunch of this stuff in-house. We'll use our agencies for some stuff, but there's a whole bunch of stuff we can do in-house. What are your thoughts on what that does
[00:35:15] to the agency-client relationship? Yeah, it's something we obviously think about quite a lot, and we talk about it very openly with our clients to try to understand what the market wants. I think the first thing I would say is that there is stuff that clients,
[00:35:34] or I would say most clients, many clients, really, really value. They really value strategic partnerships, strategic guidance and creative thoughtfulness, ideation and so on. So that strategic value seems to be something that, at least for now and I think for the foreseeable future,
[00:35:56] they still want experienced expert humans to be doing and they value that a lot. I think that also holds for the concepting, the early stage let's say of work on a launch brand coming up with ideas. People will use generative AI as part of that ideation process.
[00:36:18] At this point, I think clients expect that to be fairly table stakes. You will use whatever tools you have available to you to come up with the best ideas but don't inundate us with a thousand things. Use that process internally
[00:36:30] then come forward with your best ideas for us. But again, we still want to have a human involved in that important strategic creative ideation process. So I think there's still an appetite and a value for that. If nothing else, because it gives them an external perspective
[00:36:49] from outside their own organization, and new, fresh ideas that they might not have had, from people who potentially have more expertise across a wider range of brands. Then I think you get into the more derivative stuff, and there it's potentially a different story.
[00:37:09] I think that agencies are gonna have to find a way to deliver more value, and that value, people may think it's cost, but a lot of the time it's also speed, right? So they're gonna have to find a way to meet the increased expectations from clients
[00:37:28] for delivering things in a way that meets the client's demands because now clients know what's possible. And in some cases, I think clients may bring stuff back in-house that they feel confident that they can do now that they have these tools available.
[00:37:46] But I also think that the technology itself is gonna create new market opportunities. So a couple of examples would be we have clients now that would never have a budget to do a photo shoot where they would hire a photographer and get models and something like that.
[00:38:07] They just, their brand isn't big enough or it's at such a mature stage, they don't have the budget. But what we're able to do now is a generative photo shoot where we can do pretty much everything that we would previously have done
[00:38:20] in terms of what does the model look like? What is their outfit gonna be? What is the scenery and so on? But we can do it all through generative AI for brands that can't afford to do that. And there's a case where that's an entirely new line
[00:38:31] of business for us, because previously we would have managed the photo shoot and so on, but we wouldn't have been doing the actual production, actually generating the model out of nothing ourselves. That's one example of something new.
[00:38:45] And I think the other one is what happens when generative AI can produce so much more content? Well, in a pharma company, the first thing that's gonna happen is that the medical regulatory legal teams or compliance teams are gonna get completely overwhelmed.
[00:39:02] So we think there's also an interesting opportunity around the use of generative AI to assist clients with that part of the process as well, because whether they're doing this stuff internally or working with an agency, derivative content still has to go through some sort of compliance review, and
[00:39:20] the more content you produce, the more of a bottleneck you're gonna have. Maybe there are some solutions that we can offer to help alleviate that. If you talk about the photo shoot, even brands that have a budget to do a photo shoot,
[00:39:32] the effort that goes into selecting models and reaching out to talent and all the rest of it, if that could be collapsed down into a morning of work with gen AI to say, okay, here's your models. There's a velocity piece there
[00:39:46] that helps even the brands that have the budgets but brings those brands that don't have the budgets into the game in a way that they weren't able to play before. Yeah, absolutely. Again, I think that what I think generative AI
[00:39:59] is going to do or one of the many things I think it's going to do is help us as humans identify the things that we still want to be human and what we value about humans. So let's say the photo shoot example.
[00:40:13] If you were going to do a photo shoot and the models you were going to use were really not going to be actual patients anyway and you really want to have something that represents what your patients look like, then maybe you're going to choose generative AI
[00:40:27] because you weren't going to use real patients anyway. So why not? But if you were going to do a photo shoot that used real patients or a TV commercial that used real patients, you're still going to use real patients
[00:40:39] because that's not a situation where you want to try and pretend that some generative AI person was an actual patient when you can actually have real patients talk about their experience. So there's an example of where it's helped us define,
[00:40:53] you know what, actually, if we have a real patient and we want to tell a real patient story, let's use a real patient. I think that's important. And I think it's going to be the same with, let's say on the creative side of a project.
[00:41:04] Like, for sure, with generative AI, we've seen enough research now to say that it can actually be more creative than most people. And it can really, with the right direction, give you ideas you might not have thought of before.
[00:41:19] But we probably still want a human to look at those ideas and think about does that really feel right? What has this company done before? Does this brand manager like these kinds of ideas? Is there anything offensive in this that we haven't thought about?
[00:41:32] It probably needs to go through that level of human scrutiny because as of right now, the models don't have sufficient context to do that themselves. But we should sure as hell be using the models to help us ideate. I'm going to pick up on a couple of words
[00:41:46] that you just used in your last sentence, which is as of right now. So not that anybody I think has a crystal ball on this, but when you look forward and it feels like the generational churn on this stuff is measured in kind of days, weeks, months
[00:42:03] and not years, I think about other really, really disruptive technologies. I'll use email as a bit of a benign example. Huge shift in the way everybody works, but then once it was installed, email isn't radically different than email was 20 years ago or 15 years ago.
[00:42:21] I feel like there's both an adoption component to this and an evolution component where it continues to evolve so exponentially. What is it going to look like? Are we going to have this kind of radical pace of change forever on this stuff?
[00:42:39] Is it going to settle out a little bit? Like what is all this going to look like in, I don't know, you picked the timeframe a year, six months, two years. Give me a sense of where you think we're going. Yeah, as you mentioned, this is very fraught.
[00:42:55] People will often say, hey, what do you think it's going to look like in five years? And I'm like, I have no idea. I think I have a reasonably good handle on the next six to maybe 12 months, because there's often enough information leaked,
[00:43:13] and you have enough sense of scale and what's currently training and stuff like that, to be like, I think within the next six to 12 months, it's probably going to look something like this. So I would say in the next six to 12 months,
[00:43:26] we're going to have, most likely, models that are better at reasoning and planning, which is a limitation right now. They don't do very well at long-horizon tasks because they're not particularly good at planning. And part of that is that in their post-training process,
[00:43:42] they haven't to date been properly tuned or reinforced to really understand how to make good long-term plans. So better planning and reasoning is probably coming soon, whether that's OpenAI's next big model or something along the way to its next big model,
[00:43:59] or what have you, that's coming. Agentic behavior, I think we've seen enough of this now from multiple companies to know that the models are going to be able to take actions on your behalf outside of conversations. So it's not just gonna be like, hey, ChatGPT,
[00:44:12] can you blah, blah, blah, sure, here you go. It's gonna be like, hey, can you research this, go there, review this document, write it up for me, email it to this address, or something like that where it can take multiple steps, including steps that happen offline.
[00:44:28] So I think I'd say with pretty good confidence that that's coming in the next six to 12 months. And then I think the big shifts are gonna happen when the models have achieved human-level capability across most tasks, with strong reliability,
[00:44:49] so that you can really tell it to do something that you would have assigned to a colleague and be confident that it will do it. Once you have these coworkers instead of co-pilots, I think a lot of things are gonna start to shift.
[00:45:05] And one of those shifts is gonna be the calculation for a lot of people starting new companies. There's been a trend that new AI companies have fewer and fewer people, and I think that trend is going to percolate out to newer companies
[00:45:20] generally. And my sense is that that is going to shift the market overall. Sam Altman's talked about this idea that he has a bet with some of his friends on when we're gonna see the first one-person company
[00:45:36] that's worth a billion dollars. And I think once we start seeing companies like that or similar to that, even if it's a 10-person company worth a billion dollars but run mostly with AI coworkers, I think that's gonna change people's conception of what's possible and that will be
[00:45:54] the next kind of soul-searching moment. Now again, I can only see the next six to 12 months. I don't think that's gonna happen in the next six to 12 months, but I think in that time we'll see better reasoning and planning, and we'll see more agentic offline behavior
[00:46:07] where models can do things on your behalf. And that's gonna be a significant step towards AI coworkers, not just co-pilots. Well, that's been recorded, so we can play it back in six to 12 months. If people really wanna have
[00:46:22] a good sense of what's possibly coming, prediction markets are a great place to look, because there are people with money or reputation at stake on this. You know, I have maybe a little bit of reputation at stake, but that's my closing tip. Look at prediction markets.
[00:46:36] Amongst a lot of really good tips. I really, really appreciate the time. I always love chatting with you, and this is just a topic where, frankly, I think many people are really interested and excited, some are a little bit scared, and this conversation, a little bit of guidance,
[00:46:52] is really, really helpful. Thank you very much for the time this morning. Thanks, Neil, this was a lot of fun. I love talking about this stuff, so thanks for the opportunity. That's it for episode one. I hope you enjoyed it, because we have a ton more planned.
[00:47:06] We're gonna release an episode every two weeks and it would be great if you hit subscribe. It helps the show get found and it ensures that you don't miss an episode. Coming up on the next episode we'll have Jen Meldremont to chat about Peak Pharma and the OPMA.
[00:47:20] Thanks again to PaperCurve for being our season one partner. Check out papercurve.com to find out more, and when you're there, look for their blog. It's got some great thought leadership. We'll see you in two weeks.