Description
Generative AI is changing who can build inside an organization. Tasks that once required engineering resources can now be automated or orchestrated across the enterprise, reshaping how companies modernize and compete. Yet most organizations remain stuck in experimentation mode, unable to translate promising pilots into measurable operational impact.
In this fireside chat, Kevin McCurdy (Global Consumer Goods Partner Leader, AWS) and Abhishek Gupta (Chief Product Officer, Retool) will unpack a practical AI maturity progression that takes you from early experimentation to agent-driven, production-grade workflows. Drawing on hundreds of production AI deployments, they'll share what works when operationalizing AI without sacrificing governance or security.
We’ll cover:
The phases of AI maturity: what it takes to move from experimentation to enterprise-scale outcomes.
How to tie AI to business outcomes: strategies for measuring real impact across capacity, cost, and throughput.
Scaling AI responsibly on AWS: embedding intelligence into core workflows in a way that's governed, measured, and built to last.
Walk away with actionable insights to help your teams build faster, standardize what works, and scale AI-powered internal tools with confidence in 2026.
Transcript

>> Thanks everyone for joining. We're glad you're here. Before the housekeeping notes, we'd love for everyone to share in the chat where you're joining from today.

All right, a couple of housekeeping notes. We are recording this conversation and we'll send it out in an email afterward. If you have any questions throughout, please use the Q&A feature to submit them. We'll save time for a Q&A session at the end of the conversation, and if we don't get to any questions, we'll certainly try to follow up afterwards. So really appreciate everyone joining today. Thanks for the time, and welcome. We're glad you're here.

Before we jump into the fireside chat, I wanted to give some context around the conversation. Retool recently shared our 2026 build-versus-buy shift report. We'll drop the link in the chat so you all can take a look after. I want to start with a number that's really been stuck in my head since that came out, and that's 93%. That's the share of enterprise builders who are already using AI at work, in their models, their workflows, and their tools. It's table stakes for a lot of people already. But here's the number that should give us pause, and that's 19%. That's the share who describe their organization's AI maturity as advanced. So we have near-universal adoption, yet the majority of folks do not feel like their organizations have figured it out, and that gap between access and operationalization is exactly why we're here today.

We surveyed over 800 builders for that report, and what I kept coming back to is that the challenge has shifted. A year ago, the question companies were asking was: how do we get our people using AI? That battle has largely been won, at least among builders. The question now is a little harder: how do we turn all this experimentation into something that actually moves the needle for the business? The data tells a pretty honest story about where most teams are today. 35% of organizations have already replaced at least one SaaS tool with something custom-built. That's happening right now, quietly, across the industry. 78% expect to build even more custom software this year. And perhaps most striking, 35% of organizations still report no measurable AI productivity gains at all. It's not because the technology doesn't work. It's because productivity without measurement doesn't scale, and AI without operational infrastructure stays a pilot forever. So the phrase I keep coming back to is "the productivity trap," and that's really where a lot of smart, well-resourced teams are still stuck today. We're seeing a lot of teams break through this, though, and we'll walk through a number of those examples today.

Today, Kevin and Abhishek are going to give me their thoughts on AI maturity. We're going to start with early experimentation. We'll be honest about where most organizations actually are, what's blocking progress, and what the path forward looks like, based on success stories we've seen and been a part of.
We're going to cover the governance question, which I think is more urgent than people realize. We're going to talk about the pivot from AI productivity to intelligent automation, because those are actually different things. And we'll talk about what it looks like when this works at scale, in production, with real accountability. So we've got a lot of ground to cover. Let's get into it. But first, I'd love to have the panelists introduce themselves. Kevin, let's start with you.

>> Thanks, Pete. Great to be here. Kevin McCurdy. I am the global partner leader for the consumer goods vertical at AWS, so I lead strategy and go-to-market with our partners, like Retool, specific to the consumer goods vertical. Great to be here.

>> Hey everyone, thanks for having me. I am Abhishek Gupta, and I'm Chief Product Officer at Retool. I'm responsible for the delivery and the quality of the platform that Retool offers to customers, and we're delighted to be able to partner with AWS to provide some of the amazing things we've been able to provide to some of our joint customers.

>> Awesome. Well, thanks guys. Kevin, we're going to start with you. We know that AI adoption is widespread, yet only a small percentage of organizations can consider themselves advanced in this area. From AWS's perspective, where do most companies get stuck when they move from AI experiments to production systems?

>> Yeah, it's a good question, and it's been interesting to watch the evolution of this generative AI journey, if you will. I think the biggest impediment has often been around data infrastructure and data quality. We often find fragmented data, data in a variety of silos where it's not easily accessible. But the other area now, as we've moved forward into production and scaling, is that the largest concern we're actually seeing is around the talent and skills gap: the ability to go beyond experimentation, to actually move to production, and then from production to scale across other use cases. So I think that's been the biggest gap. The other that's kind of interesting is the perceived cost and risk, and that's not just the direct cost but the risk of hallucinations and other issues within AI. What I loved about one of your case studies, with Pernod Ricard, was that they cited one of Retool's big benefits as the speed with which they can go from proof of concept to production, because you're doing it on the live system. So there are different examples out there, data quality, the talent and skills gap, but I think there are offerings here that can move quickly to production.

>> Yeah, that's great. Abhishek, once a lot of those foundations do exist, why do teams still struggle to operationalize AI inside their actual workflows?

>> Yeah, that's an important one to dig into.
You cited this right at the beginning of the webinar, where you mentioned the report where we surveyed 800 people. Two stats really stood out to me from that. One was that 93%, the vast majority, already use LLMs at work, but less than 20%, about 19%, would consider themselves mature. And I think that really tells you the problem isn't so much the access part of AI; it's really going from experimentation to production. Where we've seen companies really get stuck here is in three areas: one is the integration front, the second is the trust side, and the third is the people building.

On the first one, the integration front: the models, if you just use them by themselves, typically work well. But the problem is you need to connect them, and real workflow lives in lots of different, messy systems. You've got Salesforce over here, ServiceNow, Databricks, Postgres. Getting it all connected to real production data with the right permissioning is generally where we've seen teams really slow down.

The second big part is usually the trust layer. You want to make sure that you don't have critical workflows without guardrails. In that same report, about a third of people cited unclear ROI or security concerns around this. And we've seen that the teams that typically succeed have humans really involved in making these decisions.

And the third thing I mentioned is the builder side. You want to make sure that you actually have the right people enabled to do this: people who are close to the problem, operations analysts, domain experts, people who have a real challenge to solve. What's great about this builder report is that two-thirds of builders are not actually engineers, which means quite a few people can now come in and build things, which is great.

So what we have seen works here is effectively the opposite: you want to connect your AI to real data systems, you want to add strong governance layers, and you want to make sure there are domain experts who really have access to all these things to be able to do this well.
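To make the permissioning point above concrete, here is a minimal Python sketch of gating an AI tool's data access behind role checks. All names here (User, ACCESS_POLICY, fetch_for_model) are hypothetical illustrations, not Retool or AWS APIs:

```python
# A minimal sketch of "the right permissioning": every AI tool call is routed
# through an access check before it touches a production system.
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)

# Which roles may let the model read which systems (hypothetical policy).
ACCESS_POLICY = {
    "salesforce": {"sales_ops", "admin"},
    "postgres_orders": {"order_mgmt", "admin"},
}

def check_access(user: User, system: str) -> bool:
    """Return True only if one of the user's roles is allowed on this system."""
    return bool(user.roles & ACCESS_POLICY.get(system, set()))

def fetch_for_model(user: User, system: str, query: str) -> str:
    """Gatekeeper the LLM calls instead of querying production data directly."""
    if not check_access(user, system):
        # Deny and log rather than silently handing data to the model.
        raise PermissionError(f"{user.name} may not read {system}")
    return f"[rows from {system} for: {query}]"  # stand-in for a real query

analyst = User("ops_analyst", {"order_mgmt"})
print(fetch_for_model(analyst, "postgres_orders", "late shipments this week"))
```

The structural point is that the model never queries a system directly; every read passes through a check that can deny, scope, and log.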
Before I wrap up this section, I'll give you an example of where we've seen this really done well at scale. One of our mutual customers is Colgate-Palmolive. You know them for toothpaste, but to us they're a really large organization, 34,000-plus employees operating in 100-plus countries, and they basically had to do all these things to successfully get into production. They built it with guardrails, they added role-based access controls, secrets, and templates, and they had an internal hub that all domain experts would come into, connected to all their systems. Their AI leader has said something that I love, which is that the system makes the happy path the easiest path. I think that is super important as you're thinking about shifting from experimentation to production with AI.

>> That's a really good example. Before we go to the next question, we're going to drop a poll. We'd love for everyone to participate and share a little bit more: where is your organization with AI today? Experimenting, you have pilots in production, perhaps you're actually scaling some of those workflows like the examples mentioned here today, or perhaps you're fully operationalized. We'll give it a few seconds for folks to answer. Really eager to see the results. Yeah, some interesting patterns showing up here. That's great.

So, I want to talk a little bit about what I mentioned earlier in my intro, the productivity trap. I'll go to Abhishek first. Can you walk us through a situation where a team was convinced AI would dramatically move the needle for productivity but hit an unexpected wall? What was the real bottleneck, and what did it take for that organization or team to break through?

>> Yeah, such a good, relevant question, Steve. One thing that really stands out to me, and again I highly suggest reading this report, is that about a third of organizations report no measurable productivity gains from AI, and they don't have a lot of metrics around it either. So it's not so much the productivity side; they don't have the operational layer around how you make things more productive. Now, I'll give you a couple of examples where I've seen this play out two different ways. One is that I've seen companies think that deploying AI just solves everything, and the reality was different. We work with a very large global CPG company, about 100k employees, one of the biggest in the world. They actually had ten agents running across their business: employee service, consumer relations, order management, lots of different things. But they hit a wall. The agents were quite technically impressive, but again, the issue here wasn't models. I mentioned this in the last question: models by themselves, in isolation, are good. It's actually around orchestration, which is that other piece. Employees need access to a lot of different things. They need access to SAP, they need all the internal knowledge management systems, Workday, other kinds of systems.
And each agent was running on different systems, built by different teams with different frameworks. So the challenge isn't so much the agent itself; it's how you build the orchestration layer around the agents: how do I route requests to these agents correctly, how do I have the right observability, the right governance, the right escalation paths, how do I make sure I know who has access to what. All of this is what's important, and this is the infrastructure wall we've seen companies typically hit when they don't have it.

Now, there's another version of this where the opposite happened, which is the second example. Rather than trying to do everything, trying to run all these different systems and different agents across all of them, they started with one simple thing. We have another customer, a very large global manufacturer, and their facility managers assumed AI wasn't relevant for them until someone built a really simple tool. Basically, they connected a language model to equipment manuals they couldn't find a translation for from another language, and used AI to translate the documentation right on the floor itself. Now operators could ask questions about the machinery instantly. Nothing super fancy; it was just AI working with the right data, on the right problem, with the right person to drive it. I love that example because it's a really simple use case, but it changed how their organization viewed AI. It changed how they saw that, okay, if we can actually find the right problems, then we can figure out how to bring AI into production. And so they expanded it to expense auditing, they expanded it to marketing workflows, they expanded it to innovation processes. So my takeaway for everyone here is that you don't have to start with the most complex thing. You don't have to say, "Hey, we need to run ten agents in production" like that first example. Start with something simple. It's Tuesday, so start with something that makes your Tuesday afternoon easier. Once you start building momentum there, it catches on super fast if you solve the right problems.
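A rough sketch of the orchestration-layer idea above, routing, an audit trail, and an explicit escalation path, in Python; the agent registry and toy intent matching are hypothetical stand-ins for real classifiers and agent frameworks:

```python
# Hedged sketch of an orchestration layer over several specialized agents:
# route each request, record an audit trail, and escalate when no agent fits.
AGENTS = {
    "employee_service": lambda q: f"[employee-service agent handles: {q}]",
    "order_management": lambda q: f"[order-management agent handles: {q}]",
}
AUDIT: list[tuple[str, str]] = []  # (agent, request) pairs for observability

def route(request: str) -> str:
    # Toy intent detection; a production router might use a classifier model.
    if "order" in request.lower():
        intent = "order_management"
    elif "payroll" in request.lower():
        intent = "employee_service"
    else:
        intent = None
    if intent is None:
        AUDIT.append(("escalation", request))
        return "escalated to a human queue"  # explicit escalation path
    AUDIT.append((intent, request))
    return AGENTS[intent](request)

print(route("Where is order #4127?"))
print(route("Reset my badge"))  # no matching intent -> escalation
print(AUDIT)
```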
>> Yeah, starting small can make all the difference in the world. Kevin, from your lens, you've talked about some slightly different challenges you've experienced in working with customers implementing their AI transformations. We'd love to hear how some of those organizations were able to break through those challenges.

>> Yeah, absolutely. So I'm going to tie back to the earlier discussion. In the case I'm going to talk about, the data foundation was the underlying weak link. We have a customer, US Foods, a food wholesale distributor, that wanted to automate the process their salespeople go through with their customers, i.e. the restaurants: they need to recommend orders, suggest new products, and so on. Sometimes it's a very heavy manual process, where they've got to do a lot of research around new products and new ingredients and understand the menus those customers have. So there's a lot of unstructured as well as structured data available. But what they uncovered is that it was highly siloed in some cases; it was on people's laptops, or they only had manual access to that data. So they actually had to start modernizing their data foundation and pulling that data together. That was probably the biggest gap they had, but that's often one of the biggest parts we find with many of these pilots. And I like what Abhishek said about starting small, starting with a use case, which they tried to do here. They were able to modernize that data structure, pull together some of the unstructured and structured information, spin up a POC within roughly six weeks with one developer to test it out, and then scale. So it wasn't about the AI here; it was more about the data foundation, and then they were able to leverage the AI to really speed the outcome, and they scaled it to 2,500 sellers. The other interesting thing here is that a lot of people fear this AI is going to take work away, and what they identified is that it automated some of the redundant tasks they were doing, but then actually allowed them to spend more time building relationships with their customers, which led to higher sales. So they got a lot of benefit out of it. But again: focus on the data foundation, and then they were able to scale it post-pilot.

>> I think you're on mute.

>> Thank you, Kevin. Appreciate it. So let's shift gears a little bit. I want to talk about the governance side of things here, and Kevin, we'll go back to you first with this question. Right now, and I think maybe we see this a little bit in some of the poll results, we have companies operating under two extremes. One is completely uncontrolled, unfettered experimentation: let everyone rip, let everyone go and see what cool things get built. And on the other side, you see heavy governance, where things tend to get shut down often before they can get started. What does a middle path look like in practice, from your perspective?

>> Yeah, I think the most effective approach we're seeing is combining automated protections with risk-tiered oversight.
And in that, we want to treat governance as an enabler of safe experimentation. When people think about guardrails, I often equate it to brakes on a car: brakes don't actually make you go slower, they allow you to go faster. That's the analogy I often use. So we want to, one, have the right guardrails; Amazon Bedrock Guardrails, as an example, allows you to put the right filters and boundaries in place. Have the right monitoring and alerts at the right levels, and then set the risk-based tiers: high, medium, and low. From a high-risk perspective, anything customer-facing, say content generation, highly personalized invites to customers, or things that might directly impact a supply chain decision affecting a customer, I might want more review and approval processes in place there. At medium risk, I might streamline with some automation. And for more internal, productivity-type things, low risk, I can let that be self-service and not have as many of the guardrails as part of it. So: automated controls, but then also allowing, with the risk-based tiers, some freedom to go on and leverage the different models.

>> Yeah, that's great. I love the high/medium/low risk model, which I think gives everyone the framing to build that framework really quickly and then let people go. Abhishek, would love to hear what you're seeing from your perspective as well.

>> Yeah, I love governance. It's so critical in AI, and I never thought I'd find myself saying that. But it's become so important, and it's not just me saying it; I talk to a lot of CIOs and engineering leaders who all say it, and they basically feel stuck between two bad options. One is what Kevin said: experimentation gone wild, or lock things down. Very similarly, companies who have seen success tend to find the middle path, and I'll talk a little bit about what this middle path is. But I think the mistake a lot of folks make sometimes is that they frame governance as controlling AI. It's really a question of how you make safe AI usage the easiest path. It's very similar to the brake analogy: it is about controlling the car, but it's fundamentally about giving you more speed. It's a very similar system and a really similar framework.
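Kevin's Amazon Bedrock Guardrails example can be sketched with boto3. A minimal, illustrative configuration, assuming AWS credentials are set up; the guardrail name, filter choices, and model ID are examples, not a recommended policy:

```python
# Minimal sketch: create an Amazon Bedrock guardrail, then attach it to a
# model invocation. Names and filter strengths here are illustrative only.
import boto3

bedrock = boto3.client("bedrock")  # control plane: manage guardrails

guardrail = bedrock.create_guardrail(
    name="customer-facing-content",  # hypothetical name
    description="Filters for high-risk, customer-facing generation",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            # Prompt-attack filtering applies to inputs only.
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    blockedInputMessaging="Request blocked by content policy.",
    blockedOutputsMessaging="Response blocked by content policy.",
)

# Data plane: apply the guardrail at invocation time via the Converse API.
runtime = boto3.client("bedrock-runtime")
resp = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Draft a customer invite"}]}],
    guardrailConfig={
        "guardrailIdentifier": guardrail["guardrailId"],
        "guardrailVersion": "DRAFT",
    },
)
print(resp["output"]["message"]["content"][0]["text"])
```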
So I think about three principles as you go down this middle path, and again, some of them may be similar to what Kevin said. The first one is really around tiering the risk: making sure you understand how to assess it. The second is around leveraging governance but making it invisible; it doesn't have to be front and center. And the third is about how you measure effectiveness, which is super important.

So let's talk about the first one. The first principle is to tier risk, not access. One big mistake companies make is blanket policies that apply to everything: either no AI, or everything goes. The reality is it's not that easy, it's not that black and white. Usually we've seen companies tier use cases by risk. For example, one of our customers, again a 100,000-person global CPG company, took this approach. They basically said: for low-risk use cases, we want people to move fast, with limited controls; high-risk workflows, things touching more important elements, require much more additional review and much more scrutiny. That's how they've made sure they tier it. Now, it's not just about tiering. You also want to make sure you've got what we would consider heavy enablement. AI is fundamentally change management in large organizations. So you want baseline training for everyone, and you want the power users, the champions, to be the ones pushing some of the deeper programs; they're effectively ambassadors that each team operates with. What we've seen with this company is that when you take the combination of people who really know what they're doing and who understand the governance tiers, you can get to thousands of employees using tools with better guardrails.
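The tier-risk-not-access principle can be read as a small policy table: the use case's tier, not the individual request, decides which controls apply. A hypothetical sketch; the tiers, use cases, and controls are illustrative examples:

```python
# Hedged sketch of "tier risk, not access": a use case's tier decides the
# controls it ships with.
RISK_TIERS = {
    "high":   {"human_approval": True,  "audit_log": True, "self_service": False},
    "medium": {"human_approval": False, "audit_log": True, "self_service": False},
    "low":    {"human_approval": False, "audit_log": True, "self_service": True},
}

USE_CASE_TIER = {
    "customer_email_generation": "high",    # customer-facing content
    "supply_chain_exception":    "high",    # directly affects customers
    "ticket_triage":             "medium",  # internal but operational
    "meeting_summary":           "low",     # internal productivity
}

def controls_for(use_case: str) -> dict:
    """Look up the controls a new AI use case must ship with."""
    tier = USE_CASE_TIER.get(use_case, "high")  # default to the strictest tier
    return {"tier": tier, **RISK_TIERS[tier]}

print(controls_for("meeting_summary"))
# {'tier': 'low', 'human_approval': False, 'audit_log': True, 'self_service': True}
```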
Now, the second principle is about making governance invisible. The best governance is when you don't have to think about it. These aren't policies so much as capabilities in the platform; the platform should give them to you: role-based access controls, secrets, audit logging, audit trails. All of these things are so much more important than they look, and you don't want to build them from scratch every time you build a new app. You want the platform to offer them out of the box, because if they're built into the system, everything gets them automatically. In some ways, you're effectively telling people they're choosing to be secure, as opposed to having to build it from scratch. And I think that's a much better, stronger place to be, because now security is the default, not a front-and-center policy you have to think about every single time; it's built into the platform.

Okay, the last principle is making sure you really understand how to measure things. We see a lot of companies that will just create policies but have no idea what's going on. Even if they're running at scale, they have no idea how many people are using these tools, what their token usage is, or which provider is more popular than others. Obviously cost is a big element of this too. It's really important that you start measuring these things. You want to measure where AI is making decisions versus making suggestions. You want to measure the top users. This gives you much more information, which then lets you figure out where to implement more governance as necessary. You don't want measurement to be just a thing you have; you want to really understand what people are doing. Ultimately, the thing you're trying to avoid here, and I'm sure a lot of people have heard this term, is shadow IT. I remember I was on a call with a CIO once who told me there was an unsanctioned tool being used with customer data that was going live on the public internet, and he had a meltdown when he saw that. But that's the problem with shadow IT: in the same report, 60% of builders basically built something outside of IT oversight, and that, like that CIO's experience, really worries every single one of them. So if your strategy is to slow things down, that doesn't work, in my opinion. What you really have to do is go down this middle path, which combines the things we talked about: tiering access, making sure you've got governance capabilities built into the platform, and of course measuring all of it. We've seen companies like Pernod Ricard do this well at scale, the same kind of example Kevin mentioned. When they do this at scale, because they've been able to deploy some of these capabilities, that's where we see them really start running toward solving these things at scale.

>> Really good examples. The shadow IT point is fascinating. That initial reaction, trying to de-risk the entire organization by slowing the team down, means those early adopters are going to innovate the way they want to anyway, and now all of a sudden you've exposed yourself to risk.
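The measurement principle Abhishek describes suggests a thin logging layer around every model call. A minimal sketch with hypothetical field names, capturing user, provider, token usage, and whether the AI decided or only suggested:

```python
# Hedged sketch of AI usage measurement: wrap each model call so user,
# provider, token counts, and decision-vs-suggestion are always recorded.
# Field names and the in-memory store are hypothetical stand-ins.
import time
from collections import Counter

USAGE_LOG: list[dict] = []

def record_usage(user, provider, tokens_in, tokens_out, mode):
    """mode is 'decision' (AI acted) or 'suggestion' (a human decided)."""
    USAGE_LOG.append({
        "ts": time.time(), "user": user, "provider": provider,
        "tokens": tokens_in + tokens_out, "mode": mode,
    })

# Simulated traffic.
record_usage("ops_analyst", "bedrock", 900, 250, "suggestion")
record_usage("ops_analyst", "bedrock", 1200, 400, "decision")
record_usage("marketer", "openai", 300, 150, "suggestion")

# The questions raised above: top users, provider popularity, decision share.
top_users = Counter(e["user"] for e in USAGE_LOG)
providers = Counter(e["provider"] for e in USAGE_LOG)
decision_share = sum(e["mode"] == "decision" for e in USAGE_LOG) / len(USAGE_LOG)
print(top_users.most_common(1), providers, f"{decision_share:.0%} decisions")
```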
So, one of the outcomes we've talked a little bit about so far today is really getting to the point of intelligent automation. Kevin, we'll start with you, and maybe take a supply chain, consumer goods focus again here. When you think about the workflows worth automating, what criteria should leaders start with to decide where to focus those efforts first?

>> Yeah, that's a good question. We very much say start with the use case, or start with a use case, and it goes to what Abhishek said earlier: don't try to do a bazillion workflows or boil the ocean at once. I love the manufacturing automation example around predictive maintenance and quality control; that's one we've seen a lot of our organizations start with. But you really want to, one, tie to business outcomes and the ROI potential; labor productivity and workflow automation often rank highly there. The other is the pain point severity, looking at high-volume tasks. In the manufacturing case we saw with several customers, plants doing predictive maintenance have folks who have been around for a very long time, while new talent coming in has no idea what the sounds the machines make mean when something is about to fail. So a great use case many of our manufacturing customers started with was simply taking the manuals, and the other information that was in the heads of people who have been there for 50 years, so the new line could easily query it and quickly identify what task they needed to do and what predictive maintenance was expected or upcoming. So start with that business outcome and pain point. And lastly, I'll say implementation feasibility. That really comes in when I want to take a proof of concept to scale: can we effectively implement this, scale it, go to other use cases, and what are the risks in there? So then really understanding what resources I need to have in place, whether there are other data considerations, and whether there are other guardrails I need to put in place with regard to how we're using the AI. Those are the three main things: business outcomes, pain point, and implementation feasibility.

>> That's great. Abhishek, how are you seeing teams pivot toward automation in their businesses?

>> Yes, it's a great and important question, because this is actually happening in different phases, and we're starting to see a lot of teams really running toward it now. One thing that's important here is that it's actually a pretty consistent evolution in how we're seeing teams adopt AI. There are really three stages, which I think of as points on the maturity curve.
The first stage is building AI-assisted applications. The second stage is building AI workflows. And the third is what I would call more complex, fully autonomous agents.

That first stage, AI-assisted apps, is where most companies start, and where I think a lot of CPG companies are today. And that's okay; it's a great starting point. It's basically taking an existing internal tool, say a support dashboard, something for order management or inventory, something for reviewing data, and adding AI for specific steps. Maybe you want it to summarize a ticket, classify a request, extract fields from a document, or summarize a document. This is still highly relevant and really useful, because it saves a ton of time. The human drives the workflow, and AI is helping at certain points. It's a great starting point, and if you're not thinking about this, I highly recommend it as a really great place to get started.
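A stage-one example in code: one AI-assisted step (ticket summarization) added to an existing tool. This sketch uses Bedrock's Converse API as an example model backend; the prompt, model ID, and function name are illustrative, not a Retool API:

```python
# Hedged sketch of "stage one": one AI step bolted onto an existing internal
# tool. The human still owns the workflow; the model just saves time.
import boto3

runtime = boto3.client("bedrock-runtime")

def summarize_ticket(ticket_text: str) -> str:
    """Single AI-assisted step rendered next to the raw ticket in a dashboard."""
    resp = runtime.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{
            "role": "user",
            "content": [{"text": f"Summarize this support ticket in 2 sentences:\n{ticket_text}"}],
        }],
        inferenceConfig={"maxTokens": 200},
    )
    return resp["output"]["message"]["content"][0]["text"]

# The human agent still decides what to do with the summary.
print(summarize_ticket("Order #4127 arrived with 3 of 12 cases damaged..."))
```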
Now, the second stage is what I would call AI-assisted workflows: really, orchestration with AI as the backbone. This is where you can start to think of AI as driving the workflow, as opposed to the human. A good example: say you've got a support ticket coming in and you want to triage it to the right place, or you've got supply chain exceptions you want to route to the right team. Do you want to draft a response? Do you want to look up information to help identify the right answer to that ticket? You can see AI here doing more of the orchestration, and the human only steps in in a couple of ways: maybe it's to approve something, or the decision is low confidence, or by design they want the human to come in above a certain threshold. The point is that humans are now part of a step, as opposed to driving the workflow; AI is driving the workflow.
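The stage-two pattern, AI driving the workflow with a human pulled in below a confidence threshold or for approval-required routes, might look like the following; the routes, threshold, and classifier stub are hypothetical:

```python
# Hedged sketch of stage two: AI drives triage, humans step in only below a
# confidence threshold or on approval-required routes.
from dataclasses import dataclass

APPROVAL_REQUIRED = {"refund", "supply_chain_exception"}
CONFIDENCE_THRESHOLD = 0.85  # by design, humans review anything below this

@dataclass
class Triage:
    route: str
    confidence: float

def classify(ticket: str) -> Triage:
    """Stand-in for a model call that returns a route and its confidence."""
    return Triage(route="refund", confidence=0.91)

def handle(ticket: str) -> str:
    t = classify(ticket)
    if t.confidence < CONFIDENCE_THRESHOLD:
        return f"queued for human review (confidence {t.confidence:.2f})"
    if t.route in APPROVAL_REQUIRED:
        return f"routed to {t.route}, awaiting human approval"
    return f"auto-routed to {t.route}"  # AI drives; no human step needed

print(handle("Customer requests refund for damaged cases on order #4127"))
```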
The third stage is what I would call the frontier. We have a couple of great examples here: companies like Colgate that I would consider at the frontier of this, running complex multi-agent systems in the background. They tend to have multiple specialized agents in production, doing things for employee services, for support, for managing orders and inventory. The question here becomes: can these agents work together? You've got an agent doing something, but is it actually able to work with the others to give you that 10x productivity? That requires the whole concept of orchestration layers: are you routing requests, how are you coordinating these agents, how are you monitoring performance, plus the guardrails we talked about earlier. Those are the stages of the evolution.

My suggestion, as I mentioned before, is that it's not worth trying to jump to the last stage; it's really important to start with the beginning stages. Don't try to automate everything at once. Especially for CPG companies, you want to pick workflows that are easier to start with. These might be workflows that are high volume and heavily rules-based, but still require a little bit of judgment. With customers, I've seen that be things like supply chain exception handling, promotional analysis you want to approve, documentation around quality control, or procurement workflows where you want to approve individual steps. But the point is: just pick one workflow, do it well, and start building momentum from there. That's what I mentioned earlier with Colgate: they started with that example of translating the manuals on the factory floor, were able to take that forward, and are now running complex multi-agent systems in the background. That's how it wins, and that's how you can succeed. The one thing I'll mention before I wrap up this bit: the platforms that are working well in this space, the best in the space, tend to support this entire progression. You don't want to be stuck with a platform that only does one stage, or only the most advanced stuff. You want a platform that supports you anywhere you are in your journey and meets you where you are. Start by adding AI to your internal tools, automate workflows, and eventually bring in more complex agents. The key is making sure you have one platform, so you're not replatforming every time you move up this maturity curve. Hopefully that helps answer the question, and I'm happy to dig in more during the live Q&A.

>> Yeah, absolutely. And that's actually a perfect segue to the next topic. You gave some great examples of joint customers who are reaching, or are at, that full maturity stage, where AI is embedded, governed, measured, production grade, and live; people are using it, and agents are interacting together. Abhishek, when you walk in to engage with an organization, what are the signals telling you that they're actually scaling AI versus just in that experimentation phase?

>> Yes, great question. When I walk into an organization, it's pretty obvious what the difference between AI experimentation and AI at scale looks like, and there are a few signals I look for right away, which hopefully folks listening in can look for too.

The first one is what I would call production versus prototype. Is the AI running in the background, or is it only in a demo, or is it solving something so low value that nobody even cares if it's not working? In experimentation, what you typically see is AI living in a prototype, in a notebook, or in a slide deck; maybe it's a proof of concept someone built that people keep citing, but there's no version of it actually running at scale. In scaling organizations, it's different: it's running inside real workflows. Even if it's an AI app, it's there, it's processing data, it's making decisions. A simple test you can run: if a real AI workflow in your organization breaks at 2 a.m., or whenever during the day, and someone gets notified, it's in production; that means it's important for your business. If it stops working and nobody really cares, it was either a demo or it didn't solve something meaningful. That's a really simple test as you think about building this.

The second signal is what I call measurement. Obviously the thing to look for is metrics, but as I mentioned earlier, very few companies actually do this well or measure AI impact.
We saw this in our report, where about 37% of organizations haven't established any AI productivity metrics yet. The question to ask yourself is: if you're investing all this time and resources and money into AI, why aren't you measuring productivity around it? The scaling companies, though, can tell you much more interesting things. They can tell you what they've automated. They have clear examples, clear stories they can point to about the ROI they've gotten. They know how much they've saved, whether in time or money. They have a sense of the accuracy of the AI system running, and they know what it's good at and what it's not great at today. The point is, they're tying it closer to their outcomes as a business, closer to their operations. That helps you understand how far along they are. And I know I mentioned metrics, but another underlying aspect is the qualitative side: it's something people in the organization cite and point to in stories. That's when you know an organization is really starting to scale.

Okay, the last signal is really around the platform, and how much they're reusing that platform over and over again. A great signal, honestly, is how easy it is for other teams to build on the same platform. In experimenting organizations, like I cited earlier, teams build their own stacks. Everyone's doing different things, different models, different frameworks; this team's not talking to that team; they're connecting to this data and that data; it's all dispersed. But scaled organizations have figured out they have to centralize things to some degree. They have shared platforms and shared patterns, and that governance piece becomes important. What ends up happening is that as one team builds a workflow, another team can reuse that AI workflow, or build something else on top of it. Now they're shipping things that historically may have taken months or weeks, and they can literally do it in less than a day. That's when you know AI is not really a project but an operating model for the business, and when you're seeing things at that level, that's when you know you're succeeding. So those are the signals I look for: platform, measurement, and production versus prototype. Like I said, real maturity isn't just how many workflows you're running; it's the fact that using something the 50th time is easier than the fifth. That's when you know it's really embedded in your organization.

>> That's great. Kevin, turning it over to you.
AWS historically has had a lot of frameworks, say for how an infrastructure team deploys their cloud, things like the Well-Architected Framework, and has kind of pioneered the notion of centers of excellence. How are you and the broader AWS team thinking about these signals, and how do you get that early indicator that a customer is really approaching that level?

>> Yeah. Well, the biggest part there is the data: what's their data foundation, and how are they managing their data infrastructure? It really ties to that, because it's so critical to going live. And I love Abhishek's example: if it's just a demo, and it fails, and no one screams, it's not in production. You've got this demo environment there, but if something fails in the middle of the night and it's going to affect a customer, or affect some workflow in the supply chain, then you are at deployment and scale. So it's seeing things that are actually in production. The other is that we've started to see organizations with, potentially, an AI center of excellence, and in that there's a cross-functional team. So it's not just IT, but also the business users who are applying this AI, and potentially legal or compliance teams where customer data is coming into play or where how data is being used matters. You start to see some sort of steering committee that they want to have discussions with. And the last part is that they've given thought to governance and measurement. They've got rules around governance of how they use data: where can you put customer data, in what tools, and what data can be used in certain models versus other models within their organization. When they've got that under consideration as part of the process, we start to know they're starting to deploy, that they're at a higher level of maturity, if you will, on their AI journey.

>> Love that. We're going to go to one quick final question here. Before that, just a reminder that we'll open up to Q&A in a few minutes. We've got a lot of great questions, so if there's anything top of mind, please drop it into the Q&A now so we have time to cover it. So, one final lightning question for each of you; we'll start with Kevin. Give us the top one or two practical recommendations the audience here today can walk away with that will help them make meaningful progress when they go back to their business, back to their organization, to accelerate their AI transformation.

>> Yeah. I'll say the biggest thing is: if you're on the fence or you're hesitating, don't overthink it.
And to Abhishek's point, don't try to boil the ocean with 100 workflows. Identify a use case in your organization, or one that may impact your customer, where you can get some benefit from automating that particular task. Tie it back to a business outcome where you can actually show some kind of result, whether it's improved performance, speed, accuracy, those kinds of things. But I'll say: don't hesitate. It's time to get started on this journey. Start small, start with a use case, and tie it to a business outcome you can use to justify and scale.

>> Wonderful. Abhishek?

>> Yeah, I would really echo what Kevin said: start with one problem to solve. Don't try to do too many things. And as with any new technology, it sounds scary at first, but the reality is it's probably not as scary as you think. The best thing to do is, if you thought you were going to get started next week, get started today, get started tomorrow. Don't wait, because the technology is moving so fast. If this is important for you, whether for your business, for your team, or for your career, it's better that you just get started now. Don't wait; that's the easiest way to learn. So: start small, but start now.

>> Great. Love it. Really good, practical guidance, and I think a perfect segue: we're going to jump into Q&A. Again, thanks for the great questions. First question: AI is still in the adoption stage, but we're seeing an expectation from industries that those involved should have X number of years of experience, or Y number of projects under their belt. How do you see this, and what's your take on this question around experience, and whether it's important for getting started?

>> Yeah, I'm happy to take it. Kevin, maybe you want to jump in second. Look, I think the truth is nobody has ten years of production AI experience. It's just not reality, no matter what people may put on their public resumes. There is no one who has that many years of experience, and the reason is that the field is moving too fast for traditional experience requirements to apply. What matters most is whether you're someone who can figure out how to use AI in a real business workflow, obviously with all the things we talked about, like proper governance, and someone who can help take a POC they've run into something at scale. And as I mentioned earlier, at Retool, the best builders on our platform,
the people we've seen bring the best results, just tend to be domain experts who deeply understand the problem. They're close to the problem, and they learn the product, they learn the AI capabilities, they learn the tooling, and they're motivated to go and solve it. They're not necessarily AI specialists with ten years of experience. They're people who are close to the problem, and they go figure out how to solve it. So that's really how I would think about it: the top builders don't have to have some perfect background. It's what I said earlier: if you have a problem to solve, go solve it, starting today.

>> Yeah, that's great. Kevin, any extra thoughts?

>> No, I concur totally with Abhishek. I love the response.

>> Wonderful. Next question, asked by Juan: what specific criteria or decision gates do you use to decide whether an AI pilot should be scaled, redesigned, or shut down? Kevin, Abhishek, which one of you would like to take it first?

>> I guess a couple of areas. One is, as you're doing those pilots, there are quantitative metrics: are you actually seeing the time savings expected? Back to our use case and our business outcomes, was there a time saving you were trying to achieve? Velocity improvement, that kind of thing. There may be qualitative feedback: is the tool actually usable, what's the learning curve, what's the builder satisfaction with the output? And then, what's it going to take to scale, and are there data dependencies or other factors as part of that? Those are some of the things I'd take into consideration.

>> Yeah, that's great. Anything to add?

>> I'd probably think of three gates. First, I would look for whether this is something that is measurably saving time or money in some sense, and not just in a demo. Two, another gate is whether it can run reliably without someone having to babysit it all the time; I think that's an important gate as well. And three, does it pass your governance bar for production data access and decision-making authority? Honestly, if it hits all three, it's saving you time, it can run on its own, and it meets your production data access bar, I would scale it. If it fails on reliability or governance, you can go back and redesign how the operational aspects are done. If it fails on driving any measurable value, then I think you should probably not do it and think about a new problem to go solve. Those are some of the things I would think about with these gates.
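Abhishek's three gates can be written down as a small decision function; the fields and outcomes below are an illustrative reading of his answer, not a formal framework:

```python
# Hedged sketch of the three pilot gates: scale, redesign, or shut down.
from dataclasses import dataclass

@dataclass
class PilotReview:
    measurable_value: bool   # saving time or money outside a demo
    runs_reliably: bool      # no constant babysitting
    passes_governance: bool  # production data access + decision authority

def decide(p: PilotReview) -> str:
    if not p.measurable_value:
        return "shut down: pick a new problem"         # no value -> stop
    if not (p.runs_reliably and p.passes_governance):
        return "redesign: fix reliability/governance"  # value, weak operations
    return "scale"                                     # all three gates pass

print(decide(PilotReview(True, True, False)))  # -> redesign
print(decide(PilotReview(True, True, True)))   # -> scale
```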
48:28 >> Yeah, love it. Next question: how do leading companies standardize the evaluation, governance, and security of internal AI tools without slowing down the teams building them? Abhishek, I think you've spoken a lot to this, and you're architecting our platform to solve for some of these areas; would love your first take.

48:50 >> Yeah, I think it's similar to what I mentioned earlier: the companies getting this right are the ones that make governance invisible, because it's part of the platform. Role-based access controls, audit logging, secrets management, approved model lists: these should all be defaults that exist out of the box, not afterthoughts you bolt onto the product. You want builders basically never thinking about compliance. That example I cited, Colgate-Palmolive, I think is the right model: you want to tier use cases by risk level, low risk versus high risk; you don't want to apply one-size-fits-all to everything. And you want to measure whether shadow IT is happening. As I said earlier, around 60% of what's built is basically shadow IT. The reason you want to measure it is that if the strategy you're putting in place is slowing people down, and you see that number going up, you obviously have some work to do. But I would start with what I mentioned: a platform that bakes it in for you.
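One way to picture "governance as a platform default" is to route every model call through a single wrapper that enforces the approved-model list, checks roles, and writes an audit record. This is a toy sketch with invented names and policies, not Retool's actual API:

    # Toy sketch: one chokepoint that enforces governance defaults on every call.
    # POLICY contents, role names, and model IDs are illustrative only.

    import json
    import time

    POLICY = {
        "approved_models": {"anthropic.claude-sonnet", "amazon.nova-pro"},
        "roles_allowed": {"builder", "admin"},
    }

    def call_model(user_role: str, model_id: str, prompt: str) -> str:
        if user_role not in POLICY["roles_allowed"]:
            raise PermissionError(f"role {user_role!r} may not call models")
        if model_id not in POLICY["approved_models"]:
            raise PermissionError(f"{model_id!r} is not on the approved list")

        # Audit entry is written before the call, so every attempt is recorded.
        print(json.dumps({"ts": time.time(), "role": user_role, "model": model_id}))

        return f"[stubbed response from {model_id}]"  # real model call would go here

Because the checks live in the platform layer, builders get "yes, but safely" without ever filing a compliance ticket.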
50:12 >> On that note, we've got a number of questions, so I'm going to hop between the two of you. Abhishek, the next question is from Karen: can you provide an example of a successful AI platform? I think I know which one you'll recommend first, but how do you think about that?

50:33 >> Well, I'm a little biased in what I'd recommend; obviously I would love for everyone to use Retool. But maybe what I'll do is talk about why I think it makes sense, and it's really because of the governance piece. As you're building applications, automations, workflows, and agents for your systems, Retool can obviously support you there and do it really well, but what we do extremely well is that governance is not an afterthought. It's built into our system; it's something you get out of the box. You can manage your AI at scale and deploy it at scale. And you don't have to take our word for it; you can look at some of the companies using us. Colgate is a great example, Pernod Ricard is a great example, Holland & Barrett, all great companies in CPG, among the others we talked about earlier. They're all leveraging Retool in the ways I described: as the layer that brings AI into production at scale. Think about a thousand employees building with AI. So, obviously I'm biased, but I would say it's a really successful platform you should think about using, Karen.

51:58 >> Yeah, that's the answer I expected, and I'm certainly supportive of it. Kevin, Retool is built on AWS, and we leverage a lot of the wonderful platform infrastructure you provide. What's exciting about the world we're in is that agents can talk to each other; there are common protocols like A2A. So whether an agent is built in Retool or in, say, Bedrock AgentCore or some other tool, they can still engage and talk to each other. You see this multi-platform world evolving, much like we've seen in other parts of technology. From the AWS perspective, I'd love your thoughts as well.

52:40 >> Yeah, I think we've seen it. Even today, organizations have a variety of applications they're talking to and data sources they're pulling from. In many cases they've got multiple ERP systems and multiple manufacturing systems to integrate with. So having a platform like Retool that can integrate with and pull from a variety of systems, but also orchestrate across different agents or different calls that need to be made, is going to be critical as we go forward and want to scale these into production.
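The interoperability idea is simply that agents on different platforms can exchange structured tasks over a shared protocol. The sketch below is a generic JSON-over-HTTP toy, not the actual A2A wire format, and the endpoint URL is hypothetical:

    # Toy illustration of cross-platform agent hand-off: one agent delegates a
    # task to a peer over plain JSON/HTTP. NOT the real A2A message schema.

    import json
    from urllib import request

    def delegate_task(peer_url: str, task: str) -> dict:
        """POST a task to a peer agent's endpoint and return its structured reply."""
        payload = json.dumps({"task": task, "reply_format": "json"}).encode()
        req = request.Request(peer_url, data=payload,
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:
            return json.load(resp)

    # e.g. delegate_task("https://agents.example.com/inventory-agent",  # hypothetical
    #                    "check stock for SKU 1234")

Because both sides agree only on the message shape, the Retool-built agent never needs to know how the peer agent is implemented.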
53:22 >> Yeah, that's great. I'm going to hop around some of the questions here, because some patterns are emerging. David asked the team: would you not recommend governance first, rather than a middle ground, given that uncontrolled shadow IT is a worry? We don't want to stifle innovation, but surely control is important. Abhishek, how do we think about that, maybe from our lens of establishing the platform first and then giving it to teams to go build?

53:52 >> It's a great question, David, and I actually agree with you more than it might sound: governance absolutely has to come first. But the real question is what kind of governance. The mistake you can make as a company is to build governance as a gate that effectively says no; that's very different from governance as a platform that says yes, but safely. If your governance is just a review board and a long approval process, your people are going to do the shadow IT thing I mentioned earlier; 60% of people in our survey said as much. So, as I said, the best option goes back to the idea of the platform: stand one up where, on day one, you get security, access controls, permissioning, audit logging, and approved models by default. That way you're not choosing between control and speed; you're not saying no, you're saying yes, as part of a controlled path to production. That's exactly why, as I quoted earlier, the head of AI at Colgate said to make the happy path the easiest path. It sounds like a lot of work, but the reality is it's much simpler than you think, because the platforms have this baked in. That's how I would think about it.

55:21 >> Yeah, and I agree with you there, Abhishek. I go back to that earlier question, I think it was the third one, about risk-based tiers. We're applying governance; it's just a matter of determining high, medium, or low risk, at least from our perspective, and how much governance to put in place accordingly.
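A minimal sketch of that risk-tiering idea, in Python: classify a use case, then attach proportionate controls rather than one-size-fits-all. The tier criteria and control lists here are invented placeholders, not either company's actual policy:

    # Sketch of risk-based governance tiers: heavier controls only where the
    # use case warrants them. Criteria and controls are illustrative only.

    def risk_tier(touches_customer_data: bool, makes_decisions: bool) -> str:
        if touches_customer_data and makes_decisions:
            return "high"
        if touches_customer_data or makes_decisions:
            return "medium"
        return "low"

    CONTROLS = {
        "low":    ["audit logging"],
        "medium": ["audit logging", "human review of outputs"],
        "high":   ["audit logging", "human approval before action", "formal review"],
    }

    tier = risk_tier(touches_customer_data=True, makes_decisions=False)
    print(tier, "->", CONTROLS[tier])   # medium -> logging + human review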
55:40 >> That's great. We've got time for one more question, so I'm going to go back up to the top. This one is from Soul, I believe: what do you think about the idea that an organization making its systems and operations dependent on AI is essentially making its entire operation dependent on, one might argue subservient to, the AI providers and platforms? We've already seen what happens when payment systems or the internet go down. Why is this never discussed as a legitimate risk factor? I think Abhishek and Kevin will both have similar thoughts on this, given the platform approaches we take around giving customers flexibility and being the Switzerland of tools and models. Abhishek, we'll start with you.

56:28 >> Yeah, this is a really important question, Soul, and honestly I think it probably doesn't get discussed enough; I think you're right. Our philosophy has been to mitigate this by being model agnostic: you can bring your own key, you can swap models, and you're not locked into a single provider. We are Switzerland, to Steve's point, and we also offer self-hosted deployment, so your data infrastructure stays on your terms, not ours. The real risk isn't using an AI platform; it's using closed ones where you can't switch providers, can't audit the stack, and can't run independently if things go down. That's the kind of architecture decision you should be pressure-testing. Our philosophy is really quite the opposite.

57:25 >> Yeah. Kevin, AWS, and Amazon more broadly, have some wonderful approaches to this: Amazon Bedrock as, perhaps too simply put, a model garden, which of course we support in our platform. How do you think about this from an AWS perspective, in terms of giving customers that choice and flexibility?

57:46 >> Yeah, absolutely. It's the foundation of our approach to AI. With Bedrock, you, the customer, get the choice of models based on the use case and what you're trying to do. You can select a model, evaluate multiple models against one another, and change models over time, so I think that's key. The other part is guardrails: making sure you've got automated controls in place, so if something violates a particular guardrail or rule, you're alerted before it actually impacts a customer, as an example. And then backup, recovery, and redundancy: if something does go down, other regions are available. All of those things are important to take into consideration.
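To illustrate the model-choice point, here is a short sketch using boto3's Bedrock Converse API, where the model is just a parameter, so swapping providers is a one-line change, and a guardrail rides along on the same call. The model and guardrail IDs are placeholders you would replace with your own; AWS credentials are assumed to be configured:

    # Sketch: model-agnostic calls via Bedrock Converse, with a guardrail
    # attached. IDs below are examples/placeholders, not recommendations.

    import boto3

    bedrock = boto3.client("bedrock-runtime")

    def ask(model_id: str, question: str) -> str:
        resp = bedrock.converse(
            modelId=model_id,
            messages=[{"role": "user", "content": [{"text": question}]}],
            guardrailConfig={                       # checked before output is returned
                "guardrailIdentifier": "my-guardrail-id",   # placeholder
                "guardrailVersion": "1",
            },
        )
        return resp["output"]["message"]["content"][0]["text"]

    # Same question, two models -- the "evaluate against one another" step:
    # ask("anthropic.claude-3-5-sonnet-20240620-v1:0", "Summarize this ticket...")
    # ask("amazon.nova-pro-v1:0", "Summarize this ticket...")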
58:41 >> Well, we are at time. Kevin, Abhishek, really great conversation; appreciate it. And audience, we really appreciate the fantastic questions. It was great to see where everyone is coming from and the challenges you're all facing. We did not get to every question today, but we will certainly follow up with each of you individually to see how, collectively, Retool and AWS can support your AI transformation journey and your initiatives, and help your business innovate. One of the last questions that came in asked about industries beyond consumer goods, like financial services; we certainly have some great resources there as well. So, thanks everyone. Abhishek and Kevin, again, thanks for your time and for the thoughtful conversation and Q&A. Thanks, everyone, for joining.

59:34 >> Thank you, everyone.

59:35 >> Thank you.