Description
At Retool Summit 2025, we sat down with Kli Pappas, the Global Head of AI at Colgate-Palmolive, for a conversation on the realities of enterprise AI, empowering new builders, and a view of what's next.
👉 Start building in Retool for free: https://retool.com/
Transcript
Announcer: Please welcome to the stage the Global Head of AI at Colgate-Palmolive, Kli Pappas, and Retool's Head of Technical Account Management, Elizabeth Ray.

[Music] [Applause]

Elizabeth Ray: Was I supposed to do that? I don't think so. Welcome, everybody, to a conversation we have all been looking forward to. I'm Elizabeth Ray, Head of Technical Account Management at Retool, and I am thrilled to be joined today by Kli Pappas, Colgate's Global Head of AI and Predictive Analytics. Kli, thank you so much for taking the time to share your experience and insights with everybody here today.

Kli Pappas: Thanks, Elizabeth. Great to be here on this fireside chat. I was promised a fire; I was hoping for at least a tealight, but maybe we'll make it happen. It's great to be underdelivered on that front.

Elizabeth Ray: So we are all here at Summit today exploring the future of enterprise AI app development, and Colgate's journey has been particularly fascinating. When Colgate originally came to Retool, what was it, 11 or 12 months ago in 2024, AI was not even part of our initial conversations. We had a lot of other meaty areas, topics, and challenges to solve for. Fast forward to today, and AI has shifted from a side part of the conversation, a nice-to-have, to front and center and truly a requirement. And Kli, you certainly have been at the center of that movement. So that gets me to that first question, I promise. We're seeing more and more organizations spin up dedicated AI teams and councils, similar to your own. Can you tell us a little more about Colgate's team, its creation, and perhaps its mandate?

Kli Pappas: Yep. So first, great to be here with all of you.
Really quickly, about Colgate, because usually when people see an AI person from Colgate they're just confused about what's going on. So, Colgate: yes, the toothpaste company, as well as other things; not the university. We make all sorts of products, Palmolive and so on, and we're in virtually every country in the world. No, we are not owned by Procter & Gamble, another conglomerate, which is another question I get all the time, although I know I'm joined somewhere in the room by Procter people and others. So yeah, we're a global consumer goods company. We're actually in more than 60% of the households on the planet. One in three of you probably buys one of our products, but since we're in Northern California, maybe most of you buy Sensodyne. I won't hold that against you; it's just more expensive and it's not any better, so it's up to you.

So, my role. I'm the Global Head of AI for Colgate, which is a role that is becoming much more common. When my role was created in mid-2023, about six months after ChatGPT came out, it was a novel thing to have a central person in charge of AI. Now I think it's very common, and probably in the next couple of years every major company is going to have to have someone with that remit. My remit is really AI soup to nuts for the company. I'm the enterprise risk owner, so I report out to the board on risk related to AI. Technology choices go through me. Learning, education, and upskilling go through me. And then strategic builds, the big focus areas where we want to put a lot of company resources, happen on my engineering team.
Elizabeth Ray: And what changed from when you originally started speaking to us, when AI was not front and center, over the past 12 months, to where it truly is a requirement today?

Kli Pappas: What's funny is that many executives, when ChatGPT first came out, had Bitcoin on their mind, and the zeitgeist was, "It'll be like Bitcoin. It'll come and go, and tech people are making a big deal about it." That obviously changed really quickly, especially as the models improved so significantly, so rapidly, that it became apparent to everyone that this was not something you can ignore. And the incredible amount of investment flowing in, and our stakeholders, as a publicly traded company, asking more and more questions about what we're doing, really brought AI front and center.

Elizabeth Ray: Mhm. And speaking to that, there are obviously other players in the space, and you had to be quite deliberate about choosing a product, a platform, that could support your AI strategy and vision. What were some of the key considerations you were keeping in mind?

Kli Pappas: We started talking to Retool about six to twelve months ago. We're a recent customer; we really just signed on with Retool in the past three to six months. A lot of that was at the very beginning of when the agent offering was coming into Retool, and I'd say it was pretty clear by the beginning of this year that the future was going to be all agents. Not everything had settled yet, but it was clear it was going to be agents. And it was clear, looking at Retool's roadmap, that that was something you all were taking very seriously.
And at that time we were already thinking really deeply about enterprise controls and how we actually do this at scale. We have about 30,000 employees at Colgate; about half work in manufacturing and half work at a desk. We're not a tech company, so our technical team is very small. So we have a huge responsibility where, with a small number of people, we have to bring capabilities to a large number of people, in a company where technology is fundamentally mostly back office. Retool seemed like a company that really got where things were going and covered what we needed to cover from a risk and governance standpoint, which I'm sure we'll get more into later. So we really thought the roadmap and direction were where we thought things were headed, and it was enterprise ready, which in many cases was something we didn't find in the other platforms we were looking at: open source and other places, n8n, all the "langs" (Langfuse, LangGraph, LangChain). We looked at everything, including competitors, and felt Retool was the best fit.

Elizabeth Ray: You somewhat mentioned this, but were there any deal breakers?

Kli Pappas: Yeah, deal breakers were things like role-based access control, secrets management, and being able to set up templates and different spaces for different groups. There are just a lot of nitty-gritty details when you get into it, and when you're talking about software that you ultimately want thousands of people to engage with, those details just have to be true. There's a quote, I think it was David who showed it earlier.
I forget what it was, but essentially there are non-negotiable things with enterprise software, and Retool had those out of the box. And then the roadmap was very much where we thought the industry was going.

Elizabeth Ray: Great. So we hear a lot about organizations getting stuck in the pilot and prototype phase, but you have actually made that leap, and a fair amount of time ago. You've shipped an internal AI hub, which is incredible, first of all, and it has really helped create thousands of employee-built agents. Can you talk about how you closed that gap between testing, ideation, and production?

Kli Pappas: Maybe it's worth going into how we view democratization of AI capabilities at Colgate, and the path we went down early on. We built an internal platform, an internal tool called AIHub, about six months after ChatGPT came out, and we let everyone in the company have access to it. We wanted everyone to be able to use ChatGPT, and in mid-2023 that was not the most common step. Most companies were significantly limiting access to IT groups or other smaller groups, and had a lot of concerns about security and other things, which were all easily worked through if you just had your legal people talk to OpenAI's legal people. There's a lot of misunderstanding. So we said we want everyone to have it. And the reason why, and I think this is interesting, was not nominally because we thought it was going to help the business grow and all those things. The reason why was, if some of you remember, a study that LinkedIn and Microsoft ran a couple of summers ago.
They run a work index study every year where they ask people around the world work-related questions, and they asked how many people are using generative AI for work purposes and how many are bringing their own GenAI tools. Something like 70% of people said they were using GenAI tools at work, and 70% of those 70% were using their own tools because their company didn't provide them. So purely from a risk standpoint, giving everyone access to a language model was a critical thing for us to do, leaving aside the benefits it would have for the company. The fact is the technology is available to everyone, it's free, and it's super powerful. So if you weren't putting it in the hands of your employees, they were using it anyway and creating a lot more risk for your organization. So we built this platform called AIHub, we said we want everyone to start using these tools, and we gave training.

And then the second thing, and this sort of gets into where we think the journey with Retool is going: many of you might be familiar with custom GPTs. A custom GPT is something you can build inside ChatGPT; it's basically instructions and some files, maybe a tool call. It's a bot you can build. And OpenAI released a really powerful API, the Assistants API, that lets you build these custom GPTs. What we did was expose that to everyone in the company. We said: everyone in the company is allowed to build their own AI assistant (we called them "assistants" internally), which is just instructions and some files. It does RAG and some other basic things.
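The assistant shape described here, instructions plus some files with basic retrieval (RAG), can be sketched as a minimal data model. This is an illustrative sketch only: the class, method names, and toy keyword-overlap retrieval below are assumptions for clarity, not Colgate's internal system or OpenAI's Assistants API, and a real assistant would use embedding-based retrieval rather than word overlap.

```python
from dataclasses import dataclass, field

@dataclass
class Assistant:
    """Minimal model of an employee-built assistant: instructions plus reference files."""
    name: str
    instructions: str                                     # system prompt the builder writes
    files: dict[str, str] = field(default_factory=dict)   # filename -> text content

    def retrieve(self, question: str, top_k: int = 1) -> list[str]:
        """Toy retrieval step: rank files by keyword overlap with the question."""
        terms = set(question.lower().split())
        scored = sorted(
            self.files.items(),
            key=lambda kv: len(terms & set(kv[1].lower().split())),
            reverse=True,
        )
        return [name for name, _ in scored[:top_k]]

    def build_prompt(self, question: str) -> str:
        """Assemble what a language model would receive: instructions + retrieved context."""
        context = "\n".join(self.files[n] for n in self.retrieve(question))
        return f"{self.instructions}\n\nContext:\n{context}\n\nQuestion: {question}"

# Hypothetical example in the spirit of the plant-maintenance use case
bot = Assistant(
    name="Mixer maintenance",
    instructions="Answer maintenance questions using the attached manuals.",
    files={"mixer_manual_de.txt": "Ventil pruefen und Dichtung ersetzen. valve seal"},
)
```

The point of the shape is that a non-engineer only supplies the two data fields (instructions and files); the platform owns retrieval and prompting.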
With a little bit of governance around it, you can build one. No one's going to ask any questions, and you can share it with up to 20 people; it's all fine. If you want to share it with more than 20 people, with an entire department or an entire line of business, then there's a lightweight governance process you go through, with some approvals, and we check your usage. And that took off like gangbusters. We had 3,000 people build them, and about 300 to 500 got deployed globally, deployed as in rolled out to an entire department, function, or line of business. It was incredible to see how many people wanted to be builders, wanted to engage, had a problem to solve, and really just needed the platform to do it and some guardrails that let them play first and then gave them a pathway to get it approved. So it's always funny when I get these surveys and hear outputs like, "How many production AI use cases do you have? One or two," or none. For us it's hard to define, because in a sense we have several hundred assistants in production being used around the world. Somewhere around 5,000 people a week use one of those assistants, and to me those are production AI use cases: they're out in the world, being used for productive business purposes. They weren't built by a software engineering team with lots of scaffolding around them, and they're limited in what they can do. Chatbots themselves, unless you move into agent territory, are limited, but the set of use cases they can accomplish is huge.
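The share-size rule described above (build and share freely with up to 20 people, lightweight review for anything department-wide) is the kind of guardrail that is easy to encode in a platform. The threshold, scope names, and function below are a sketch of the stated policy, not Colgate's actual governance system.

```python
SELF_SERVE_LIMIT = 20  # per the policy described: share freely with up to 20 people

def required_review(audience_size: int, scope: str = "team") -> str:
    """Decide which approval path an employee-built assistant needs.

    audience_size: how many people the builder wants to share with
    scope: "team" for ad-hoc sharing, or a broader unit such as
           "department" / "line_of_business" (hypothetical labels)
    """
    if audience_size < 1:
        raise ValueError("audience_size must be at least 1")
    if audience_size <= SELF_SERVE_LIMIT and scope == "team":
        return "none"                    # play first: build and share, no questions asked
    return "lightweight-governance"      # approvals + usage checks before wide deployment
```

For example, `required_review(12)` returns `"none"`, while sharing with 200 people, or with any whole department regardless of size, routes through the lightweight governance path.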
So that was something we did early on that really took off and surprised us.

Elizabeth Ray: That's great. Given that you have both risk and AI literally within your job, can you share some advice for folks who may be stuck in the earlier phases?

Kli Pappas: When I talk with other people in my position, I find the most common setup at companies is that someone from legal or compliance is in charge of risk, and someone else, usually from IT, runs the tech side. (I actually don't report up through our IT lineage; I report up through analytics, which is also a little unique.) And usually those two people don't get along, because if you're just in charge of risk, then mitigating risk is easy: you default to no for most things, and then you've done your job and you're safe. And if you're in charge of growing the company, it's really easy to get shut down with any risk-based argument, especially at a large company where you have a lot of equity at stake. The difference for me is that I'm in charge of risk, and I also have a group of people that includes my chief legal counsel and head of IT infrastructure, and we work really closely together.
What that means is we seriously consider the risk of inaction. I truly believe, and hope most of you here today agree, that the world is changing: virtually every business model is changing in how you operate or what you do. And I think a lot of companies are letting the frog get boiled. If you've never done this experiment, I encourage you not to, but if you put a frog in water that's slowly heated, frogs can't detect small temperature changes, so everything seems fine until they're boiled. A lot of companies are stuck in an incrementalism phase where they feel like things will move along and we'll get there. I don't think that's how it's going to play out at all. All the indications, how much investment is coming in, how quickly the technology is changing, how quickly government, of all groups, is getting into this, suggest that things are going to change very fast, and leaders really do need to treat this like a once-in-a-generation change. That creates business model risk. If you look at our risk framework, one of our key risks is business model risk from inaction; we think it's a serious risk if we don't act quickly enough. And if you put that on the table, it balances out a lot of the other risks that come from bringing in AI. How you do human oversight and all the other things you have to worry about are seriously balanced, or outweighed, by the question of whether your business will continue to exist in 10 or 20 years if you don't take this seriously today.
Elizabeth Ray: I love how you're evaluating this as opportunity cost. So let's switch gears a little. We touched on "enterprise-grade" and "enterprise ready," but what does that actually mean in practice? You're a very large enterprise; can you talk about the guardrails and standard security measures that you need to account for?

Kli Pappas: I won't get too wonky because I don't know the crowd, but there are lots of regulations coming out around AI. There's the EU AI Act, which, like most things done in Europe, is a huge overreach: it's nominally just for Europe, but it effectively covers the whole world. The critical thing to know, if you're thinking about this for your companies, is that regulations related to AI, both in Europe and in some US states, are based on the use case, not on the technology. That makes it very difficult to figure out how to manage AI use cases, because it's not a question of "what's the software, and does the software meet GDPR, or does it handle confidential data?" It's the usage of the software. So ChatGPT is not approved or not approved. You can go to ChatGPT and ask it to review a resume, and that is a high-risk regulated use case in Europe. If you're doing it in the US and you think you're safe, and one person from Europe applies to your job, then you're now in trouble. So the way the regulations are shaping up is really challenging to deal with: it's use-case focused.
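Because the regulation described here keys off the use case rather than the tool, an internal AI registry has to record what a model is used for and who it touches, not which product is involved. The toy classifier below illustrates that shape only; the tier labels and the mapping of specific use cases are simplified assumptions for illustration, not legal guidance or the EU AI Act's actual taxonomy.

```python
# Illustrative, simplified mapping in the spirit of use-case-based regulation.
RISK_TIERS = {
    "resume_screening": "high",          # hiring decisions: a regulated high-risk area
    "customer_chatbot": "limited",       # e.g. transparency obligations
    "creative_brainstorming": "minimal",
}

def assess(use_case: str, any_eu_subjects: bool) -> tuple[str, bool]:
    """Return (risk_tier, regulation_in_scope) for a registered use case.

    The same tool (a general chatbot) lands in different tiers depending on the
    use case; a single EU data subject (e.g. one applicant from Europe) is enough
    to bring a US workflow into scope, as described above.
    """
    tier = RISK_TIERS.get(use_case, "unclassified")
    return tier, any_eu_subjects
```

The practical consequence is that approval can't be a one-time check on the software: the registry entry `assess("resume_screening", True)` yields a high-risk, in-scope use case even though `assess("creative_brainstorming", False)` with the very same chatbot needs no such handling.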
And then on top of that, you have all the considerations around bringing in builders who have not gone through whatever training your company has for software developers: training to make sure they build things securely, handle secrets the way they should, and follow SOLID principles. I'm sure many of your companies have training and best practices that your software engineers go through. The rest of the people in your organization, the people who are going to bring scale by using things like Retool or other democratized capabilities, don't know about any of those things. So the mantra has been: we need to set up systems such that you can't do something if you're not supposed to do it. If you're able to do a thing you're not supposed to do, it's not your fault, it's our fault, because we didn't configure it properly. So when we think about enterprise-ready platforms that are going to let us bring technology to the masses: one, can we have a governance process over use cases? We have a separate platform for that, and we push people through an AI registry, but Retool is configured such that we can stage-gate as applications and agents move through environments. Second is user groups. We want different user groups with different capabilities: role-based access control. We use Okta, and we need to be able to pass through Okta tokens so that when an agent queries a data set on a person's behalf, we know whether that person is authorized to use it.
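The pattern described, passing the end user's identity token through so an agent queries data as that user rather than as a privileged service account, can be sketched as a per-user authorization check. The function and claim field names below are hypothetical, and the sketch assumes the token has already been signature-verified against the identity provider (e.g. Okta's published keys) and decoded into a claims dict; real code must do that verification first.

```python
class AgentAuthError(Exception):
    """Raised when the end user behind an agent lacks access to a dataset."""

def run_agent_query(claims: dict, dataset: str, dataset_acl: dict[str, set[str]]) -> str:
    """Execute a data query on behalf of the end user, not a service account.

    claims:      decoded (already-verified) token claims,
                 e.g. {"sub": "jdoe", "groups": ["sales-emea"]}
    dataset_acl: dataset name -> set of groups allowed to read it
    """
    allowed_groups = dataset_acl.get(dataset, set())
    user_groups = set(claims.get("groups", []))
    if not (user_groups & allowed_groups):
        # The agent inherits the user's permissions: no privilege escalation via the agent.
        raise AgentAuthError(f"{claims.get('sub')} may not query {dataset}")
    return f"rows from {dataset} for {claims['sub']}"  # stand-in for the real query

# Hypothetical ACL: which groups may read which dataset
acl = {"emea_sales_db": {"sales-emea", "analytics"}}
```

The design point is that the check happens per request with the caller's own groups, so the same agent is safe to expose to thousands of users with different entitlements.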
And then there's the rest of the hundred-page cybersecurity checklist. I don't know all the items, but I know Retool passed all of them, which is not typical. So we really needed a way to segment populations and have granular enough controls that if we don't want you doing a thing, you can't do it. That's the safest way to set up systems.

Elizabeth Ray: Your take: can vibe coding be brought to the enterprise?

Kli Pappas: Someone at Retool, Brie, gave me this phrase, "vibe coding for the enterprise," which I really love and which stuck with me. Software development, and SaaS as a whole, has been completely transformed. Some companies have figured it out; some SaaS companies and some purchasers of SaaS have, some have not. But the fact is, it is transformed. It's not "going to be" transformed; it's changed, and the rest of the world is trying to figure out what that means. It's changed for two reasons. One, you have coding agents, so the cost to develop software is exponentially less than it was. The determination of whether to build something or buy something is much different because the cost to build is much lower. The business model for SaaS companies is different because their cost to build is lower, and the need to have one platform that fits everybody, because you can only invest your capital once, is getting lifted, because it's easier to develop more things to meet different market niches. And two, these coding agents let people vibe code, so lots of people who have not previously been able to engage are now able to.
So the whole talent landscape of who you can hire to build things is much different. And of course, the canonical issue with vibe coding, or democratizing anything, is "oh my god, then they'll build things," which is so funny. I literally had an executive say to me, when I said we were going to bring this in and we wanted people to develop (this was about assistants), "Oh my god, they're going to build AI assistants." And I said, "That's exactly what I told you they're going to do." So the gut reaction is, "Oh my god, we can't let people do this stuff." And that very much is tied to where vibe coding is now. If you look at things like Firebase Studio, or the many other open source and paid-for vibe coding platforms, they're too open. There's just not enough control over what people can do: how they manage secrets, how they talk to databases, whether they're over-querying or under-querying, whether they're following our enterprise design system guidelines. We have enterprise design systems; we like our things to look generally similar to each other, so it's not a hodgepodge. Those just aren't controls you can easily get through a vibe coding platform. You could hack your way into it, but all of those things are sort of out of the box in Retool.
Some of the things I heard about for the first time this morning that are coming to Retool, like sharable functions and the Figma integration, really bring that to light. I do want people vibe coding, because I want the people who are closest to the problem to be engaged in building the solution. They're the ones who really know, at the lowest level, what the issues are. And if you're thinking about agents in particular that are taking over tasks, those are not tasks that even managers understand. It's the people at the lowest level who do the work, and that work generally is not written down anywhere. We all, especially large companies, have policies and procedures, but most of the work that happens day to day lives between the people who do it, and if the person changes, the nature of the work changes a little. So we felt it's critically important for the people at the lowest level to be able to be builders, and that means they need to be able to build agents and AI-powered applications. Vibe coding makes that possible, but vibe coding within Retool (we were one of the earliest users of Assist) really makes that feasible for people to do.

Elizabeth Ray: I know you have a lot to say on this topic. You wrote an article for The AI Journal on democratizing AI, and you spoke to this, but can you share some more specifics about how this lowers the barrier to entry to build? What are the types of profiles you're seeing at Colgate starting to build that perhaps weren't 12 months ago?

Kli Pappas: For me, "democratization" is sort of a dirty word.
When people hear "democratization," it rubs some of them the wrong way, because the sense is that not everyone should be doing all the things; that's why we have specialization. That's true in some senses. But the latest language models are very, very good. If anyone is following along, there's a benchmark, GPQA Diamond, which is a test of PhD-level skills, and the latest language models do better on it than people with PhDs. I have a PhD in chemistry, and when I ask GPT-5 things, it answers better than I would. So I am fully disabused of the idea that specialists are as special as they were. There is still a place for specialists to come in, but the models are so good. And if you take ChatGPT, there are 700 million monthly users of ChatGPT; that's like almost the country of China. So treating ChatGPT like a thing you're going to control, how people interact with it, what they ask it, and what they get from it, is to me just foolish. We're already not there, and that's not where the world is going. So it's already democratized; that's the net-net of it. These tools are free for everyone to use, and people around the world are using them. I think we just hit a milestone, I forget exactly what it was, where ChatGPT's aggregate usage outside the US is higher than usage inside the US, which is the fastest that's happened for a major tech offering. So it's everywhere, it's global, it is democratized. Language models are democratized.
You have to start there, and then say, okay, what are we going to do about it, and how are we going to make sure it's democratized within our company in the way we think is right? So the piece I wrote in The AI Journal was mostly about our surprise, when we did open up the capability, at the number of people who really wanted to engage with this and spend a significant amount of their time building. When we looked at the functional breakdown, we expected it to mostly be marketing people, where you sort of see tech-first adopters, and that was not the case at all. We had plant managers in different parts of the world loading plant documents written in German and having a language model tell a person who speaks Greek how to fix an instrument whose manual is written in German. Huge use case; we have people around the world doing that. We have people on our sales teams using AI to help them create better sales pitches. It really was everywhere, with very little functional variation. We suspected it was going to be a lot of marketing people, and that broadly wasn't how it panned out. Maybe that was because initially the image-generating models weren't as good, and now they're significantly better. But we really saw everyone across the company engaging in a way that was surprising.

Elizabeth Ray: And speaking to that, you have shared that AI really only changes a business when everybody uses it.
So you've spoken a bit about how folks have adopted, obviously beyond your wildest dreams, but what about education? What advice would you give to leaders like yourself for making sure everybody is empowered and equipped to use it?

Kli Pappas: The "everybody" piece is so important. We're thinking a lot about AI processes, and this maps cleanly to how workflows in Retool are architected. The thing with democratization that you often find is that people take the task they have and build an automation around it, which is great; it makes them able to do the thing they do faster. But if you work in a team and your task is now 10x faster, and then you hand it off to a person you used to email, and it sits in their inbox queue for two weeks because they have too much, you haven't actually solved a company problem; you haven't moved anything faster. So it is critically important for everyone to be engaged; otherwise we don't have processes that get transformed. And especially at complicated companies, everything is a multi-stage, multi-functional process, so everyone really needs to understand how this stuff works, or else we're not really going to transform processes. Some process transformation needs to happen from the top down, but some of it needs to happen from the bottom up. Some people say that language models kill agile, which I haven't decided whether I agree with or not.
Um, 27:28 but it is the case that when you have a 27:30 project team that's formed, if half 27:32 those people, you know, know what they 27:34 can do with an agent or an AI workflow 27:37 and a language model, and half the people 27:39 have no idea, that is a nonfunctional 27:41 team to begin with. 27:43 Like, imagine you have a team, and half 27:44 the people know what Excel is and half 27:47 the people don't know what Excel is. You 27:49 can't even work together, because just 27:51 your conception of how much time it 27:53 takes to do different tasks, and how you 27:54 sequence things, and who does what, 27:57 completely falls apart. Um, so it's 27:59 really important for us that everyone 28:00 knows. So from an education standpoint, um, 28:03 we've done a sort of multi-tiered 28:05 program. So we have the basic, mandatory, 28:08 asynchronous, everyone-must-do-this 28:09 training, and it fulfills our regulatory 28:11 requirements, 28:14 and then we also have asynchronous training 28:15 that's optional. But by and large, we've focused 28:17 on um small-group, what we call community-led, 28:21 training efforts. And this is the 28:23 same way that we're doing the Retool 28:24 rollout, which is: I have about 20 28:27 people situated globally, one per 28:29 function and per line of business. 28:31 They're my uh AI leads. Each one of 28:33 those has about a dozen ambassadors, or 28:36 people on the ground. And all of them, 28:38 there's about 300 globally, have gone 28:40 through train-the-trainer training. So 28:42 they have training materials to be able 28:44 to run a lunch and learn, to be able to 28:46 run a workshop, and they're going 28:48 through Retool train-the-trainer. So 28:50 that way they learn how to build agents 28:51 and then can train everyone else how to 28:53 do it, right? Um, and literally what they 28:55 do is they run sessions around the 28:57 world.
Like, we're close to a 28:59 hundred sessions now that are groups of 29:01 10, 20, up to 50 people sitting with a 29:06 local person who knows what they do, like 29:08 a local salesperson who's been 29:10 through training, and they show them 29:12 actual things that they do. And nothing 29:14 moves the needle better than 29:16 that. It's the most expensive way to go 29:18 about it and the most time-consuming way 29:20 to go about it. But like I said before, 29:22 if you're actually taking this seriously, 29:24 and you actually think that in five years 29:26 your business model will not be the same 29:28 anymore, uh, for you to still be viable, 29:30 then that's the approach that you have 29:32 to take, because you really have to bring 29:33 everybody along. It has the other 29:35 benefit that people are scared about AI 29:39 and what it means for them. And a good 29:42 way to make them more scared is to roll 29:44 out corporate training that comes from 29:46 a big city in the US. And a good 29:49 way to make them less scared is to have 29:50 a local person tell them, "We want you 29:53 to do it, and I'm going to show you how 29:55 to build your own thing so you can 29:57 become an AI person." Actually, our 29:59 training is called SuperU. It's 30:01 called SuperU training. Um, and that's 30:03 the way that we frame the whole thing, 30:05 and it's all framed around: you are in 30:07 control of your own destiny using AI. We 30:10 want to teach you how to do it, and we 30:11 want to give you examples that are 30:13 useful for you, and that has a huge 30:14 impact. 30:15 >> I love that. So we are together 30:18 rebuilding the AI Hub. So, two years 30:20 later, um, what other meaty areas, meaty 30:24 challenges are we going to be addressing 30:26 together? 30:27 >> Yeah. So, um, if you look at Colgate's 30:30 strategic focus areas, one is 30:32 obviously broad-based.
We feel like 30:33 everyone needs to be able to engage and 30:34 needs to be able to bring their own 30:36 value. The strategic pillars that we have 30:38 are, um, innovation, marketing, and 30:40 operations. And if you look at how those 30:42 spread, they're essentially split 30:44 between what we call, uh, you know, what 30:46 would be a growth driver and what would be 30:48 an efficiency or productivity 30:49 driver, right? So innovation is the 30:51 heart of what we do as a company. We 30:54 launch new products to meet new consumer 30:56 needs or new niches, um, or fulfill 31:00 pain points that people have around the 31:01 world, and they vary everywhere around 31:03 the world. Um, the second is marketing, 31:05 which, you know, really is about marketing 31:08 content creation. There's lots of other 31:09 things in marketing, but if you think 31:10 GenAI specifically, where a lot of the 31:13 energy is getting focused is marketing 31:14 content. Uh, it's Ad Week this week, and 31:17 I'm sure 31:18 it's all about AI and 31:20 marketing content. Um, I think there's a 31:23 lot of different people at Cannes having 31:24 existential crises. Um, so that's, you 31:26 know, marketing, which for us is about 31:28 growth and innovation: you know, 31:30 innovating the way that we communicate 31:32 with people and also growing the 31:33 business through marketing. And then 31:34 operations, which is our supply chain 31:36 and R&D, which is really about 31:37 efficiency and productivity. Um, you 31:40 know, so where are we looking to engage? 31:42 One is just our broad-based platform for 31:44 agents. So, we're bringing people onto 31:47 agents in Retool to have them learn 31:49 how to build agents.
Um, a thing that 31:51 we're currently speaking with Retool 31:53 about is a similar idea to AI Hub, which is: 31:56 we need a discoverability platform for 31:58 people to be able to find agents and 32:00 ultimately, um, build towards multi-agent 32:04 architectures where, you know, if I have 32:06 an agent that I've built and I sit in 32:08 R&D, and someone else has an agent that 32:09 they've built and they sit in 32:10 regulatory, the dream is, if those agents 32:13 are good and they can all 32:15 communicate via A2A, which Kent tells me 32:17 is the case, 32:20 then we can start to build these much 32:21 more complex A2A structures that do much 32:24 more complicated things, 32:26 having the people on the ground build 32:27 the building blocks, right, and then we 32:29 can start to architect on top of it. So we're looking at what our AI Hub looks 32:34 like, um, in an agent future where agents 32:36 need to be invoking each other. 32:38 >> And then specifically, you know, around 32:40 our verticals of marketing content, um, 32:42 marketing content in particular we think 32:44 is something that, especially 32:45 now with, um, Nano Banana and 32:49 Sora 2 coming out, um, we 32:52 can orchestrate through 32:53 workflows and take on a significant 32:55 amount of that work. And on 32:56 the operations side, it's just, like, 32:58 limitless opportunities for automation. 33:00 There's just a lot of, um, data work that 33:04 comes in, and uh a lot of 33:07 intelligence-type reporting, um, that we think agents 33:09 can do a significant amount of. 33:12 >> So I feel a lot of people here are 33:14 looking at you as inspiration. Who do 33:16 you look to? Companies, individuals, 33:18 resources, publications? 33:21 >> Yeah. So, um, I'm a big fan of Ethan 33:23 Mollick, if anyone's familiar with Ethan 33:25 Mollick. So, he's a key opinion leader 33:27 here.
He has a 33:30 roundtable group that sometimes I go to, 33:32 and he writes a blog post about every 33:33 other week. So, if you're not familiar 33:34 with Ethan Mollick, I strongly recommend 33:36 Ethan Mollick. He has a framework, actually, 33:38 for running AI, which is Leadership, Lab, 33:40 Crowd. So: Leadership, Lab, Crowd, and 33:42 that's a framework that we've, you know, 33:44 very much internalized in how we think 33:46 about things. I mean, everything that I 33:47 just talked you through is really that: 33:49 how do we get the crowd on board. Um, 33:52 leadership we haven't talked a lot about, 33:53 but having a person in my role is very 33:56 important, for leadership to be able 33:57 to talk, um, directly to leaders. And then 34:00 lab, which is really your advanced tech 34:04 team, like, you know, when Nano Banana 34:07 drops, who are the people who are 34:09 spending that week figuring out what it 34:12 means and bringing it to the business 34:14 before all of your vendors tell you what 34:17 they think it means for you, right? And so 34:18 that's lab. So Leadership, Lab, Crowd came 34:20 from Ethan Mollick. I'm a big fan there. 34:22 Um, there's a daily podcast called The 34:24 AI Daily Brief. Maybe some of you listen 34:27 to it. We're in San Francisco, so 34:28 probably more than when I ask a New York 34:29 crowd. Um, but I listen to that 34:32 religiously every day. I think in 34:34 general the thinking, uh, on that 34:36 podcast is aligned with how I 34:39 think about it, and it's a good way to 34:40 stay up to date. Um, and then I still, 34:42 from the time that I was a data 34:44 scientist, um, get a Medium daily digest, 34:47 and I read Medium religiously. Not 34:50 so much because I think all the articles 34:51 on Medium are right. Like, a lot of 34:53 them, you know, don't make any 34:55 sense; it's not clear; you wouldn't 34:56 follow them.
But it gives you a very good 34:59 feeling for the zeitgeist of the 35:02 community of people, and what they're 35:04 thinking about, and where they're 35:05 developing. So less so from a 35:07 literal standpoint, like, this person gave 35:09 me code and I'm going to send it to my 35:10 team, and then they'll be angry at me 35:12 because I sent them some random article. 35:14 It's more: what are the things that 35:15 people are thinking about in the data 35:17 science and AI engineering community? So 35:19 those are the three places that I go. 35:20 Ethan Mollick as a key opinion leader, the 35:23 AI Daily Brief to find out what's going 35:25 on, and I subscribe to Medium to 35:27 understand, you know, at the practitioner 35:28 level, what the zeitgeist is. 35:30 >> Perfect. And final question, because we 35:31 are wrapping up: um, where do you see the 35:35 biggest potential leap, uh, in our space, 35:37 in our industry, in the next two years? 35:39 >> Um, it's all agents, and it's not a 35:42 surprise, I guess, for me to say that, 35:43 but it's all agents. And I think a 35:46 lot of it is just the way that agents 35:48 will transform your 35:50 business. And um, there was an 35:54 interesting graphic that I saw where 35:55 the y-axis was, um, competitive 35:58 differentiation, the x-axis was time, 36:01 and it was split out by, you know, 36:02 expertise, uh, data, um, and technology, and 36:07 maybe another one. And the basic idea 36:09 was that the, um, importance of 36:14 expertise, 36:16 and 36:18 expertise in particular, over time, 36:21 as a competitive differentiator for 36:23 you, is going down, and the things that 36:25 become more important are, uh, one, the fourth 36:28 one, which was change management: does your 36:31 organization get it, and do they have a 36:33 way to engage with the technology? And 36:35 two, data: is your data in order, and is 36:38 it structured in a way that agents can 36:40 talk to it?
Um, 36:43 I don't think it's 36:44 controversial to say that, 36:47 you know, businesses are going to be 36:49 transformed. Agents are going to be 36:51 running a larger and larger part of the 36:53 business, with people managing agents. No 36:55 one knows exactly how it's going to look, 36:56 or if we're going to have contract agent 36:58 employees. Um, but it's definitely going 36:59 to be the case that agents are going to 37:01 be doing much more. And if you want to 37:05 take advantage of that, you need to have 37:07 your organization on board and have a 37:10 way for them to engage, and engage 37:12 quickly, you know, which for us, this is 37:15 why we brought on Retool. We think 37:17 Retool gives people a way to 37:19 engage, and do it in a way that's safe 37:20 and secure. Um, and, um, you have to have 37:24 your data in a place that agents 37:26 can talk to it. And not surprisingly, 37:29 the people that understand your data, 37:30 like the ontology of your data, the best 37:32 are the people at the lowest level, who 37:34 are the people who you want building 37:36 stuff directly at that lowest level, 37:38 because then they will define the 37:39 ontology of the data naturally, which 37:42 will then allow you to connect other 37:44 agents into it and do, you know, the thing 37:46 that everyone wants to do, which is, 37:47 like, you know, multi-agent or 37:49 ambient-agent, um, stuff. >> Awesome. Well, we're 37:52 wrapping up. This was incredibly 37:54 thoughtful, candid. You are a fantastic 37:56 leader. Um, I hope for everybody here 37:58 today that you have new ideas, um, and 38:02 curiosity sparked. We are all at 38:04 different phases of this journey. The 38:06 team, my incredible team, is all here 38:07 today. So, we'd love to continue that 38:09 discussion. Um, and with that, please, 38:12 please, please, a big round of applause 38:14 for Kli Pappas. 38:18 >> And for over here. Awesome.