Description
Most organizations have the LLM part of AI implementation figured out. The real challenge is building AI systems that can reliably access the right data, make decisions based on complete context, and take action without creating new problems.
Teams getting real value from AI aren’t just analyzing data—they’re building systems that act on it. Support tickets that automatically route based on customer context. Forms that pull relevant information from your CRM. Sales processes that qualify and prioritize leads based on behavior patterns.
Analytics Engineer Sam Garfield sits down for a candid conversation about the messy reality of making data work for AI—and the practical steps that actually move the needle.
Transcript

[Music]

>> Hi everyone, welcome to our webinar: "Is Your Data Holding Back Your AI? Here's How to Fix It." That was some vibey elevator music. Loved it.

>> I was feeling it.

>> My name is Alicia Harper. I'm a technical PMM here at Retool, and today we're sitting down with Sam Garfield, a staff analytics and data engineer at Retool. We're going to have a candid conversation about some of the messy realities of making your data work for AI, and some of the practical steps to get you there. So whether you're an IC, a data leader, or just interested in this really cool intersection of data and AI, I hope you can all walk away with more confidence in your data strategy, and hopefully some concrete next steps you can take back to your team and initiatives.

Before we jump in, a little housekeeping. This webinar is recorded, so we'll send a recording to everyone at the end of the session. We're also going to hold a Q&A with Sam afterwards, so please share your questions in the Q&A feature down in Zoom. And if you're having any technical difficulties, please reach out to Daniel Kim, who I see just sent you all a little welcome. So, Sam, ready to get started?

>> Absolutely.

>> Okay. Before we jump in, can you tell me a little bit about yourself? What was your journey into data analytics? What drew you into this field?

>> Yeah. I was actually really interested in data in university, but I hadn't done the prep work to major in a degree that would get me a job in data, or at least so I thought at the time. I ended up studying economics, where I took as many applied stats classes as possible. After college I actually became a high school teacher, but between school years I needed something to do over the summer to make some extra money, and I got an internship at Rocket Mortgage on the data analytics team. By the end of the summer it was, "This is what I want to do," and I switched careers from there. This is my 14th year in data.

>> Wow. So you've always had your head in statistics and analytics, it sounds like.

>> Yeah.

>> Very interesting. I've also seen you described as someone who is always willing to tread unknown territory, which lines up with your history. What role do you see yourself, or data teams generally, playing in this new AI territory?

>> Yeah. We'll get into much more detail later, but the bottleneck for getting value out of AI in the enterprise, or a nonprofit, or wherever you're trying to do it, is usually actually the data, not the models or the prompt engineering. So there's an opportunity for data teams to really seize the moment, open up that bottleneck, and make AI something the enterprise gets value from.
>> Was there a moment when you realized AI was the next challenge you wanted to take on? Was it a personal interest, or did it come from a business challenge you needed to solve?

>> Yeah. It's something that has been part of my career and interests for a long time: the general theme of algorithms and how data relates to algorithms. I got my start doing econometric forecasting, which is all about algorithms. After university I started working in data, and I spent a while as a statistician and a machine learning engineer, and I've seen how algorithms can add value or create risks. That only increased with the advent of LLMs, which are a new category of system. So I've been cautiously optimistic and interested, but it was in the last year or year and a half that I really became convinced there's a lot of value from LLMs waiting to be realized.

>> Interesting. It's interesting that you were in ML and are now seeing the adoption of AI at a higher level and a greater scale.

>> Yeah. We're seeing a similar commoditization and democratization to what we had with ML over time.

>> Yes. And with that comes a lot of challenges in adoption, so we want to dive into the deeper questions. It seems like everyone is dealing with data connectivity issues nowadays. Data lives everywhere: it's in spreadsheets, it's in warehouses, it's in other business software. What actually happens to AI when your data is scattered like that? And are there steps you can take to start fixing it?

>> Yeah. A friend of mine, who may be in the audience, said something that I stole and will now steal from him again, which is that there's no AI without IA, IA standing for information architecture. What that means is, like I said earlier, the bottleneck isn't generally the models or the prompts; it's the data, because you need information and data in order to get value out of the LLM that's specific to the task you're trying to accomplish or to your business. And the process of architecting that data so it's accessible and consumable by an LLM is not always straightforward.

So when you have data that's scattered, it has a couple of impacts on AI in the enterprise. One is that it makes development, or reaching business value, much slower. It's much easier to rapidly develop valuable AI applications that have business context if you have a well-defined set of data, or a small number of resources to connect to, to understand what data is available for answering a question or exploring a domain. This is probably even more true the more advanced your AI application is. At the most basic level, that might be an enterprise GPT instance that has access to your internal documents.
But if you're building an AI agent that you expect to take some action on your behalf, those agents need to be able to efficiently fetch and analyze data as needed to make decisions for you. And the better the data is, the better the decisions it will make.

Scattered data also exacerbates one of the biggest risks of enterprise AI, which is debugging when things go wrong, or even identifying when things go wrong. The more data sources you have to worry about, the more places there are for failures to occur, and the more likely it is that you won't have the right instrumentation or observability in place to identify when something changed and your LLM or agent is now making erroneous decisions. We've seen outcomes like agents giving people imaginary coupons that don't exist for discounts on a company's products, for example.

So I think the most straightforward solution, at least on its face, is to centralize your data sources into as few places as possible. One way of approaching that is a traditional data warehouse, where you can take in raw data from a variety of sources, then clean it, transform it, maintain it, and monitor it all in one place, and then serve that to an LLM or an agent. This is particularly useful if you want broad business context, like what the trends in the business have been recently, or across customers. But you can also use solutions like Retool that allow you to connect to different data sources in one place and then standardize your governance and access policies as well. Or, even better, you can combine the two and get even better governance.

>> Right. So we're hearing a lot about using enterprise data, and security is probably really top of mind. Is there anything you think teams should consider when adopting AI, unifying their data, and getting to an AI-ready data ecosystem?

>> Yeah. This is a big question. I'm a big first-principles guy; I like to think in terms of the first principles you need to understand a system or make good decisions about it. And I'd say there are two key principles for AI. One is that AI is not a destination, it's a tool. You aren't just generically "AI ready." You have to have a sense of what you're trying to accomplish with your AI applications, and usually it's best to start with a relatively narrow range of applications so you can make sure you stick the landing.

The other key principle is a phrase I'll steal from one of my favorite writers, Max Read, which is epistemic humility. In other words, being careful about being overconfident about what you know, and there are two kinds of it here.
If you have epistemic humility, you assume that you don't know a lot of things. So one, you have to have humility about what these systems know: what can they know, and how correctly do they interpret the information I provide them? And two, you have to have humility about what you yourself understand about how the system behaves. These are highly complicated systems; even the foundation model developers like OpenAI don't have great tools for understanding what's happening under the hood.

Another way of framing this, which I've mentioned elsewhere, is to think of the AI as a well-meaning CFO, or even CEO, who is reasonably proficient with SQL. They can write pretty basic queries. They can follow a link to documentation to understand the data. They can even look at examples to get a sense of how the data is related and how people have solved problems in the past. But you wouldn't expect your CEO to understand all of the nuance of every aspect of your business, or to dive deep into really complicated, convoluted data models and come up with correct answers by writing their own SQL.

What that means in practice is that to be AI ready, your data has to have a number of facets. One is unified data, ideally, which we discussed above: making sure it's easy for you to manage, observe, transform, and document it all in one place. Another is data governance, which helps you make sure you control access to data at scale. One of the most dangerous things about LLMs, potentially, is regulatory risk: imagine an LLM that has access to private data it shouldn't have, or that exposes private data to end users. One of the best ways to mitigate those risks is good governance, and systems you can use to manage access and revoke it where needed and appropriate.

You also need quality. The importance of the quality of your data is higher than ever, because an LLM is unlikely to look at a result and say, "Huh, that doesn't align with my understanding of how the business works; this metric is clearly incorrect." So you really need a human in the loop somewhere who's notified when there are problems, so you can intervene when an AI system is drawing incorrect conclusions.

And then finally, documentation. It's been said that, with the advent of AI-powered coding, the most valuable thing a software engineer can do is document their code rather than write it. And I think the same is true for the code that data engineers, analytics engineers, and data teams own, but even more so for the data itself. If an LLM has access to really high-quality documentation to understand the data, then it can make really good decisions about how to use it.
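To make the "quality plus a human in the loop" point concrete, here is a minimal sketch of a scheduled check that flags an anomalous metric to a person before any AI pipeline consumes the table. The statistics, threshold, and alert mechanism are purely illustrative assumptions, not anything Sam prescribes.

```python
# Minimal sketch: flag an anomalous metric to a human before AI pipelines use it.
import statistics

def looks_normal(history: list[float], latest: float, z_thresh: float = 3.0) -> bool:
    """Return True if `latest` is within z_thresh standard deviations of history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1e-9  # guard against zero variance
    return abs(latest - mean) / stdev <= z_thresh

daily_revenue = [102.0, 98.5, 101.2, 99.8, 100.4]
if not looks_normal(daily_revenue, latest=12.3):
    # In production this might page on-call or post to Slack instead of printing.
    print("ALERT: daily_revenue looks anomalous; pause AI pipelines until reviewed")
```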
>> Right, that makes sense. So can you talk a little bit about unstructured data and how it interacts with AI?

>> Yeah. Unstructured data is really interesting, because there's a tension at the heart of LLMs and getting value out of unstructured data through them. On the one hand, the highest-value add an LLM can provide is automating the labor of getting insights from unstructured data; LLMs make things possible that were never possible before in terms of crawling through and summarizing huge reams of information. On the other hand, the more extraneous information you throw at an LLM, the worse it performs on a given task. So these two things are in extreme tension in practice, especially if you're trying to develop very high-reliability business applications.

So in practice, what I usually have to do, especially with really large unstructured data, is walk a tightrope between the two sides of that tension. One, you should generally experiment with extracting key pieces of information from the unstructured data. For example, if you're building applications on top of AI summaries of call transcripts, you might want to pre-extract information from the transcript, like the key topics that were mentioned, or which speakers are from within your company and which are not.

You also generally want to transform the data you expose into a more LLM-friendly format. A vector store, for example, means the LLM can load more context into its memory at once and access it more efficiently. And finally, having worked on problems where we get insights from unstructured data at scale, I've found it's really useful to provide an LLM with very clear examples and particularly well-documented data, because that helps the LLM avoid getting overloaded by the barrage of unstructured data and interpret it correctly.

>> So it's almost like you're building an ETL, but now it's specifically for LLMs, giving them the right context they need.

>> Yeah, exactly. Sometimes you can build an ETL for both humans and LLMs, but sometimes you just need to build one for the LLMs.

>> We serve the LLMs nowadays.

>> Yeah.
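As a sketch of the pre-extraction step Sam describes, the snippet below distills a call transcript into a few structured fields before any downstream task sees it. `llm_complete` is a hypothetical stand-in for whatever chat-completion client you use; the prompt and field names are illustrative only.

```python
# Sketch: pre-extract structured fields from a transcript before downstream use.
import json

EXTRACTION_PROMPT = """Return only JSON with two keys:
"key_topics": up to five short strings naming topics discussed on the call,
"internal_speakers": names of speakers who work at our company.

Transcript:
{transcript}
"""

def pre_extract(transcript: str, llm_complete) -> dict:
    raw = llm_complete(EXTRACTION_PROMPT.format(transcript=transcript))
    # Downstream prompts receive this small dict instead of the full transcript.
    return json.loads(raw)
```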
>> Okay. I want to go back to that centralized data warehouse topic we talked about. This sounds like the ideal state, but not all teams are able to iterate that quickly, or maybe they're working with a lot of different APIs, like Zendesk or Salesforce or whatever; the tech ecosystem is insane nowadays. Do you have any advice for teams that are still working to centralize their data?

>> Yeah. I think the key principle here is that you need to identify which aspects of your data you need to have the most confidence in, and which need to be the most reliable, in order to build the applications you need. When you're working on centralizing your data, there are a million things to do, and you really need to prioritize. So again, think about the end state you're trying to achieve. For example, if your end state starts with creating a support chatbot, that's going to have you prioritize very different data sources than if your first application is going to be a sales analysis agent.

You don't necessarily need to store everything in one place, but you do need to be able to trust your data, so you can find other ways of creating governance and data quality checks around it. It's hard to talk in too-abstract terms because it really depends on the data source and what's available; Salesforce might have way more tools to help you do that than, say, some other integration.

I'd also say it's worth noting that if you want to build high-quality agent applications that really take action on your behalf, integrating directly with external systems is actually a key aspect of that. So you might actually be giving yourself a boost toward developing good agents that can write to or take actions in external systems by setting up those connections early. You just need to make sure they're reliable and well governed.

>> Right. So having everything under one hood, one platform, can be really helpful in getting started with AI and giving it that whole ecosystem.

>> Yeah. It's worth noting that Retool can be very useful in this situation, because even if you don't have something like a centralized data warehouse, Retool's connection library and query library can be used to quickly develop more of that governance and centralization.

>> Right, and security is a big part of the platform as well. Speaking of agents, have you worked with agents? Can you describe for the audience some of the key characteristics of an agent, and why having AI-ready data is really important?

>> Yeah, definitely. I worked on the launch of Retool's AI agent product, actually, so I think I'm at least somewhat familiar with agents. I think the key idea is that agents are types of AI applications that can act.
And there's potential for agents to be one of the keys to unlocking enterprise AI value, because if you set things up so that AIs can act safely and securely, you can unlock all sorts of new possibilities: you can make a lot of people's jobs easier, automate more types of tasks, that kind of thing.

In my view, an agent has several aspects that make it distinct from a regular LLM. Usually an agent is a particular interface to an LLM that runs on a special server. One, it's going to have some kind of goal orientation: you tell the agent what it must accomplish before it finishes executing, whether that's resolving a support ticket or finishing going through some decision tree. Two, it needs reasoning capabilities so it can make decisions, including when to terminate its process. Three, it needs tools, which are the ability to ask the server it's running on to run code for it; that includes fetching data from external sources, or, say, making POST requests that hit an external API, or writing data to another database. For example, if you're creating a customer support automation bot, it might need the ability to write to your Zendesk to update the status of a ticket. Fourth, I think agents need knowledge and memory: the ability to fetch information to help them make a decision using their reasoning capabilities, and in some cases the ability to store information permanently so that future agent runs can reference it. And then finally, you need an evaluation framework, so you can test different prompts and different models and identify which ones are most efficient or effective at accomplishing the task you need, and which are most cost-effective.

Now, tools, knowledge, and memory all depend, potentially in some part, on having AI-ready data. Knowledge is functionally a special kind of tool, right? It's a tool you provide the LLM that allows it to fetch information that helps it make a decision. If you don't have well-organized, centralized, well-governed data, the effectiveness of the tools you can provide your agent to build knowledge is much more limited. Again, in Retool, for example, you can help manage this better by using our connectors and query libraries, but you can also centralize your data in a data warehouse, which makes it easier to write concise queries that fetch the exact information an agent needs to make a decision. It's also easier to create effective and safe tools if the data is well governed.
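For illustration, here is what one such tool definition might look like, sketched in the JSON-schema "function tool" style used by OpenAI-compatible chat APIs. The Zendesk-flavored name and fields are hypothetical; the agent's server, not the model, would run the actual API call when the model requests this tool.

```python
# Hypothetical agent tool definition in the JSON-schema "function" format.
update_ticket_tool = {
    "type": "function",
    "function": {
        "name": "update_zendesk_ticket_status",
        "description": "Set the status of an existing Zendesk ticket "
                       "after the customer's issue has been handled.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticket_id": {"type": "integer", "description": "Zendesk ticket ID"},
                "status": {
                    "type": "string",
                    "enum": ["open", "pending", "solved"],
                    "description": "New status for the ticket",
                },
            },
            "required": ["ticket_id", "status"],
        },
    },
}
```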
Again, if you're talking about an AI system that takes actions on your behalf (believe it or not, some companies have even configured agents with access to Stripe to make refunds or payments automatically), you really want to make sure the data being used to make that decision is extremely well governed and looked at very closely.

Memory can be related to the data team's work as well, whether that's storing information an agent needs for future reference in a vector store or in a database you maintain. It's worth noting that with Retool you get Retool DB and the Retool vector store for free, which is one of the reasons I've done a lot of my AI work in Retool. And then finally, evals can also integrate into how a data team works. Ideally, you want to track the evaluations for your agents over time so you detect drift from the original performance and can take corrective action as needed.

>> Right. It's a bit of experimentation, almost.

>> Yes.
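To make the drift-tracking idea concrete, here is a minimal sketch. The in-memory list stands in for a warehouse table, and the tolerance and alert are illustrative assumptions, not a prescribed setup.

```python
# Sketch: log each eval run's score and flag drift from a launch baseline.
from datetime import datetime, timezone

eval_history: list[dict] = []  # stand-in for a persistent table

def record_eval(agent: str, score: float, baseline: float, tolerance: float = 0.05) -> None:
    eval_history.append({
        "agent": agent,
        "score": score,
        "ts": datetime.now(timezone.utc).isoformat(),
    })
    if baseline - score > tolerance:
        # Notify a human so prompts/models can be retested before users notice.
        print(f"DRIFT: {agent} scored {score:.2f} vs. baseline {baseline:.2f}")

record_eval("support_bot", score=0.82, baseline=0.90)  # triggers the drift alert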
>> You've really been at the forefront of a lot of this; you've introduced a lot of AI solutions to Retool itself as a company. Do you think that, as a data expert on the data team, you're best positioned to lead this change?

>> Yeah. Well, obviously I'm biased. But as I said earlier, there's no AI without IA, information architecture. The bottleneck generally is the data, and the context you're able to feed the LLM. Given that, I see data teams across the world sitting at the mouth of this bottleneck that's preventing the realization of AI value in the enterprise. If they can upskill and adopt tools that empower them to translate their data expertise into AI applications, they can really seize the opportunity, open up that bottleneck, and have a lot more influence within the enterprise, and a lot more impact. So yes, but I'm biased.

>> Right. We are seeing a lot of companies adopting AI, but I know there's a challenge in actually getting to a mature point with AI. I've seen your apps, so can you walk me through one of these AI-powered applications, an actual mature AI solution you've come up with while using Retool as the platform?

>> Yeah, definitely. I wish I could talk about all of them; some are under NDA, given some of the cool stuff I'm working on. But one example: throughout my time at Retool, we've spent a lot of time developing the enterprise's ability to find the data someone needs for an application. We didn't have a professional data team until the company was about four years old, so there's a lot of data, and it's very hard to figure out whether what you need exists and where it is. We had an existing Retool application (one of the best things about Retool is that it can build frontends really quickly and cheaply) that was a data search tool. It allowed people to search through the data in our Unity Catalog, which is Databricks' system for managing metadata about your data warehouse, to see if they could find tables that matched the use case they were looking for.

What I realized is that we could unlock new types of searches by turning it into a RAG app, RAG standing for retrieval-augmented generation, where you fetch data in real time, feed it to an LLM, and have it help you make some decision. So what I did was set it up so the user could enter a semantic query, like "I need data on self-serve ARR metrics." We would fetch all of the column descriptions, table descriptions, and other metadata in Unity Catalog, and then for each data asset we would feed that to an LLM and ask it, "Hey, rate this on a scale of 0 to 100 in terms of how likely it is to answer the question being asked by this query string," and then return a sorted list with some context on how highly each asset was rated. This increased people's ability to find data effectively using the application, and made it much easier for less technical people, who don't think as much in terms of the framings that would have worked with the original app's design, to access data.

However, this does illustrate the limitations of your enterprise data, in the sense that some of our earlier datasets were less well documented. Some had been built by people at the company before there was a professional data team, and unfortunately those data assets were less likely to appear in the search results, because the LLM didn't have access to that documentation when it searched Unity Catalog.

>> Right. So you were essentially able to let people self-serve data and ask creative questions about it, but you still needed that data expertise to fill in the gaps, the unclean data bits.

>> Yeah, exactly.

>> And there was a front end to that as well, so people could do this themselves?

>> Yep. The front end was a Retool app: a big search box where you enter a search query, and it returns a table with metadata about the tables in our Databricks lakehouse in response to the query.
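A sketch of the rating loop Sam describes is below: score each catalog asset's metadata against the user's semantic query, then return a sorted list. `llm_complete` and `fetch_catalog_metadata` are hypothetical stand-ins for a chat-completion client and a Unity Catalog metadata query.

```python
# Sketch: rate every data asset 0-100 against a semantic query, then sort.
def rank_assets(query: str, llm_complete, fetch_catalog_metadata) -> list[tuple[int, str]]:
    scored = []
    for asset in fetch_catalog_metadata():  # dicts with "name" and "description"
        prompt = (
            f"On a scale of 0 to 100, how likely is this data asset to answer "
            f"the question {query!r}? Reply with the number only.\n\n"
            f"Asset metadata:\n{asset['description']}"
        )
        score = int(llm_complete(prompt).strip())
        scored.append((score, asset["name"]))
    return sorted(scored, reverse=True)  # highest-rated assets first
```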
>> That's cool. Had you built front ends before, or was that new?

>> No, I'd never built a front end of any kind, except front ends using BI tools, and those are a little different from the full-fledged app development framework you have with Retool.

>> Interesting. Very cool. Okay, in the interest of time, I want to pause here. We've finished our prepared questions for the day, but we do have Q&A, so we can jump into some audience questions.

Okay, Sam, could you please elaborate a bit on multimodal unstructured data, so images and text, for fine-tuning an LLM? What are the inherent problems in training with these types of data?

>> That's a really good question. Sorry, can you repeat that? Multimodal... what data, did you say?

>> Unstructured data, so images and text.

>> Unstructured, yeah. That's actually a great question. I personally have never done much heavy-duty data development work on images and text, so I wish I could provide better advice. But you can connect with me afterwards on LinkedIn and I can see if I can provide you some help.

>> Okay. Next question. When building agents with complex qualitative data, should we build one agent to do a variety of steps, or multiple agents that work together? For example, one agent to clean the data to make it more useful, one agent to look for themes, one agent to edit the text, and so on.

>> Yeah. From what I've gathered, this is kind of an open question in the industry. We're in the early days of identifying best practices and how to get the most out of agents. From what I've seen, there are cases where breaking what would be one agent out into a variety of sub-agents that handle more granular tasks actually performs much better than having one master agent responsible for executing all of the tasks itself. And this makes sense on its face. Generally speaking, as I said earlier, the less extraneous information you throw at an LLM, the better it is at making the right decision. That also applies to scope: the narrower the range of tasks you ask the LLM to be responsible for, the more likely it is to make the right decision.

Now, I'd say the main countervailing force against breaking things out into sub-agents is that it becomes more difficult to manage, maintain, and scale. In one sense it might be more scalable, because some sub-agents you can treat like an agentic API; you might have multiple workflows that query the same sub-agent. But there's also an inherent problem in troubleshooting and monitoring these systems, and that can become multiplied the more sub-agents you create. So if you prioritize performance, you should definitely experiment with building sub-agents that handle narrower tasks.
But if you have fewer resources for maintenance, I'd say you should probably start by sticking with one super-agent.

>> Right. And you've worked a lot with agent evals. Did you develop the evals for our agents feature?

>> No, I didn't develop them; I've only worked somewhat with evals. I built a lot of the data we use to measure how customers are using evals. And evals are critical for effectively building and managing agents.

>> Right. Okay, so that can help whether you're managing multiple agents or building even just one mega-agent.

>> Yeah, exactly. Evals are going to be the key to verifying that breaking things out into smaller sub-agents is actually more effective.

>> Okay, this might lead into the next question, which is: can you speak to the memory an agent can access to improve its results over time, and how can we do this using Retool?

>> Yeah. My impression is that this is a fledgling area of agent development and best practices as well, but in principle, I'll give a more straightforward example from a tool many people might be familiar with. Claude Code is a tool some people use for developing software. One aspect of Claude Code is that you can create memory for it by adding a file called CLAUDE.md that provides permanent context the agent uses every time it makes a decision. For example, if you're working in a data engineering repository, you might provide context on your testing standards, how to decide which tests to add to a dataset, your naming conventions, that kind of thing. What's more, if you encounter a shortcoming in Claude Code's ability to write code the way you want, you can actually have Claude Code itself update its CLAUDE.md file, and therefore you have this kind of self-improving, self-reinforcing agent.

Now, you can apply that... oh, sorry, what did you say, Alicia?

>> It's so meta.

>> Yeah, it gets super meta very quickly. Now, what you can do is apply that to, say, a business context with another type of agent. One example might be that you're developing a customer support automation agent, and when the agent ingests a ticket, or triggers because a new ticket was submitted by a customer, there might be things you want future agent invocations to remember about how to interact with that customer effectively. Perhaps this customer doesn't like being contacted via email, stuff like that. So what you'd want to do first is identify what system you're going to use to track this memory. You might set up a database where you store some summary string, on a per-customer level, about how to interact with that customer effectively, and then you create a tool for the agent so that every time it receives a ticket, it checks: do I have anything in my memory store that pertains to this customer? If so, fetch it and load it into my context; if not, proceed as normal. There are a lot of ways you can architect this, but it's the same basic idea: identify something the agent needs to know the next time it runs, create a system where it can fetch that context when it runs again, and then help it decide what to do with that information.
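Here is a minimal sketch of that per-customer memory pattern. The dict stands in for a persistent store (a database table or vector store), and the helper names are hypothetical tools you would register with the agent.

```python
# Sketch: per-customer memory the agent checks on every ticket run.
customer_memory: dict[str, str] = {}  # stand-in for a persistent table

def load_memory(customer_id: str) -> str:
    """Tool the agent calls when a ticket arrives; empty means proceed as normal."""
    return customer_memory.get(customer_id, "")

def save_memory(customer_id: str, note: str) -> None:
    """Tool the agent calls when it learns a durable preference."""
    existing = customer_memory.get(customer_id, "")
    customer_memory[customer_id] = (existing + " " + note).strip()

save_memory("cust_42", "Prefers not to be contacted via email.")
print(load_memory("cust_42"))  # -> "Prefers not to be contacted via email."
```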
>> Right. So it's about building those tools around your agent's workflows.

>> Yeah.

>> Do you have any advice on how to reduce LLM token consumption?

>> That's an interesting question. That's actually not an aspect of this I've looked at in a lot of depth. One general approach is a concept called compaction, which is functionally a kind of memory process, where you periodically ask the LLM to summarize the text, or other tokens, in its context window in a much more concise format, then start a new chat where the context provided is the summary of your previous discussions. But there are a lot of different techniques for this. I'd say that's traditionally something that, at Retool for example, is more the focus of the product engineers who are building our AI products and really want to get the best value for our customers.
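A rough sketch of the compaction idea Sam outlines: once the running context gets long, ask the LLM for a concise summary and seed a fresh conversation with it. `llm_complete` is a hypothetical chat client, and the character budget is an arbitrary illustration.

```python
# Sketch: compact a long conversation into a summary that seeds a new chat.
def compact_if_needed(history: list[str], llm_complete, max_chars: int = 8000) -> list[str]:
    if sum(len(msg) for msg in history) < max_chars:
        return history  # still fits; keep the full context
    summary = llm_complete(
        "Summarize the conversation below as concisely as possible, keeping "
        "decisions, constraints, and open questions:\n\n" + "\n".join(history)
    )
    return [f"Summary of prior discussion: {summary}"]
```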
>> Right. Which I think is why we make sure to expose a lot of that information, so you can keep an eye on it.

>> Yeah. Shout-out to the Retool Agents product, which has really cool observability features you can use to track token usage by your agents.

>> Yes. If any of you have been to a webinar before, I'm sure you know who Kent is. We all love Kent.

Okay. Can you elaborate a little more on implementing data governance for AI apps? Are there any established tools or patterns we should be aware of?

>> Yeah. Again, I think this is a place where best practices are still being figured out. There are a couple of different perspectives, or ways to think about this question. One question is how we think about governing the data that's input into our LLM from an external system, for example through the tools an agent uses to fetch information, or the vector stores a RAG app uses to answer questions. I think in this situation the tools and techniques are generally pretty similar to a traditional data governance system. You might think about the LLM, or the service accounts the LLM uses to access this data, as the accounts you pay the most attention to. The principle of least privilege is going to be even more important when you're talking about a system that can be kind of like a polymath toddler: it knows how to write whatever code it wants or fetch whatever data it wants, but it doesn't understand enough about the world to know which information it shouldn't divulge, or which actions are inappropriate to take given the powers it has. It's the one account where you really want to make sure you get the permissioning exactly right, and even the permissioning on who can make changes to the application itself.

Another aspect is thinking about how we manage the data that's input by humans, or users, to an AI system. This comes with potentially a lot of privacy and other concerns that can make it different from other types of data. If you're talking about data coming from, say, your data warehouse, you're generally going to make sure, hopefully, that PII or other sensitive information isn't being sent to your LLM. However, you can't stop a human from just telling your AI chatbot their address, their birth date, or their Social Security number. I'm less of an expert in this area, but there are emerging tools like Braintrust that allow you to walk that fine balance: tracking what data humans have entered into or introduced to your systems and how it has caused your systems to behave, while also ensuring that only people who should have access to that information have access, and that sensitive information is removed before it's exposed to people.

>> Right. Okay, we've got a lot of questions coming in the chat. Do you have any opinions on MCP, the Model Context Protocol?

>> Yeah. It's interesting: at Retool, we have a lot of in-house tools we've built for our agents platform, so I've had a lot of experience looking at and working with those tools. However, we also made sure we added MCP support to our Agents product, because it's clearly, at the very least, a strong candidate for the leading way to manage the tools you provide to an LLM. I think MCP is very promising in terms of being open source, very auditable, easy to customize or fork if needed, and a really impressive collaboration between competing companies in the space to create a shared toolkit anyone can use. Actually, before I proceed, to make sure we're all on the same page: MCP stands for Model Context Protocol.
It's open-source software you can use to define tools for agents. It comes out of the box with a large number of connectors you can configure to connect to your systems; for example, I believe it comes with a Salesforce connector out of the box. And it's standardized between, say, OpenAI, Anthropic, maybe xAI; it's portable between models, which makes it super powerful.

Now, I have seen some concern about potential limitations of the MCP implementation. The tools you get out of the box for each connector might be lowest-common-denominator and might not meet all of your business use cases. And I believe I might have seen some concerns about security or governance in some of the MCP connectors, though I don't remember the details. But it's very promising in principle.
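For a sense of what defining a tool over MCP looks like in code, here is a minimal sketch assuming the official `mcp` Python SDK's FastMCP helper, roughly the shape of its quickstart. The server name, tool, and return value are placeholders for a governed lookup.

```python
# Minimal sketch of serving one tool over MCP (assumes the `mcp` Python SDK).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("customer-data")

@mcp.tool()
def get_customer_summary(customer_id: str) -> str:
    """Return a short, governed summary of a customer for the agent."""
    return f"Customer {customer_id}: enterprise plan, 2 open tickets"  # placeholder

if __name__ == "__main__":
    mcp.run()
```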
>> Very nice. And do you think the future of AI lies in querying raw data, or in working with a more curated knowledge base?

>> It's really tough to say, because the future of the underlying technologies is very uncertain. Let's say we achieve superintelligence in the next several years, like some people forecast. At that point, maybe it doesn't matter how you architected your data; the superintelligence will rewrite it to its own ends. But the underlying idea I'm getting at is this: if these systems get to a place where they're able to independently make reliably good decisions despite having a lot of information to parse, then querying less curated, more raw data might become the most effective way of running these systems. You might be able to just say, "Hey, here's the database, have at it; here's the documentation, figure it out." But for the time being, and for the foreseeable future, curated data is the name of the game, in my opinion.

Here's an example. I had a stakeholder who really wanted to ask an LLM broad questions, asking it to analyze really large amounts of unstructured data. I set it up, and the AI wasn't actually able to provide meaningful insight, because it was too overwhelmed with context; it just ended up producing basic platitudes about the data that didn't really add any insight. I can't get into too much detail about exactly what I was working on, but suffice it to say that these systems really don't perform well if you provide them a lot of context. For them to be valuable, and to make it past that proof-of-concept stage to a real production application, you need to give them a very well-defined, narrow task, and very well-defined, narrow information to use in executing or making decisions about that task.

>> Working with that problem, did you approach it from more of a prompt angle, or from more of a data angle, as in "how can I scrape and grab the data that I need"? How did you approach that?

>> Yeah, I approached it from a data angle. My role was: okay, how do we architect a dataset, and then architect the system we use to pass this context to an LLM, so that someone can ask questions of the LLM? So it's functionally kind of like a RAG app built on some custom architecture. There are really interesting questions, if you get into architecting data for AI, about how you efficiently summarize this unstructured information and pass it to an LLM while staying within your context window, not providing too much context, and not wasting tokens by repeating the same information over and over.

>> Nice. I'm also looking at the chat here. Okay, here's an interesting question: what kind of company is best positioned to benefit from your offerings? I might be able to answer this one. Supposing a company uses real-time, data-heavy workflows and in-house solutions, compared to the more standard tools like Zendesk and Salesforce, what kind of support can Retool give to help integrate such workflows, if that's supported?

So, we like to say that Retool is data agnostic. If you're exposing those workflows through an API, or storing the information in a database, that's something you can connect to and use within Retool. As for what kinds of companies are best positioned: anywhere from people trying to go from prototype to production application, to enterprise customers who need built-in security, governance, and even source control and auditability. There's a range of needs Retool can fit, and whatever data you're working with, we can connect to it and you'll have governance over it. We talked a lot about governance; I think that's one of the main benefits of using Retool. Instead of having to manage all of these API keys in separate tabs, and I know we all have crazy tab windows, you can do it within the Retool platform, and it makes it really easy to keep under control who's using what, who has access to what, and which APIs you're working with. So yes, even custom workflows are supported.

>> Yeah, I would add that in some ways you might be a better candidate for Retool, or you might get benefits that other types of Retool customers don't, because you own the system you're connecting to entirely. You can tailor the affordances of, say, the API or whatever system you need to connect to, to match whatever you need in Retool. So that could actually be a really great fit.
>> Yes. Okay, going to wrap up really quickly. Oh, before that, my favorite question. Sam, what specific brand and shade of black nail polish will maximize my ability to data science this hard?

[Laughter]

>> Huh. I wonder if the person who asked this question is named Mitch. Great question. I don't know; my wife and I got manicures last week, and I don't know the brand. She actually kind of chose for me. I can find out for you, though, if you need it. I'm on LinkedIn; message me and just say you need information about black nail polish.

>> That was not Mitch's question, to my surprise.

>> Love it.

>> Okay, one more question: what is something that excites you the most about this new world of AI? And then we'll wrap up.

>> That's an interesting question. I'd say the most exciting thing is the possibility of blowing open what data teams can do and accomplish, and, in some ways, maybe muddying the waters between traditional job titles. It might be that software engineers need to become more like data engineers and data analysts to become truly effective at building the best AI applications. Or it might be that data scientists start becoming more like back-end engineers in certain ways in order to deliver AI applications. It will be very interesting to see how that goes.

>> Right. Yeah, the industry is shifting a lot. Okay, well, Sam, thank you so much for your time and your expertise. You are so full of knowledge; it's just amazing to hear you talk.

>> Well, thank you. Glad to sit down and chat.

>> It's great. Thank you everyone for coming. Also, for those of you who don't know, we're hosting a live summit on October 7th in San Francisco, which is super exciting. Daniel's going to drop a link in the chat, so if anyone is interested in joining and attending, we would love to see you there. There are going to be a lot of great sessions, a lot of great conversations, good people. We'll follow up with a link to this video recording, and please feel free to connect with both Sam and me on LinkedIn; reach out if you have any questions. And Sam, everyone loves the nail polish.

>> Love it. I'll make sure my wife knows, and that my manicure place knows next time I go.

>> Write a good Yelp review, for sure. Okay, thank you everyone, and I look forward to connecting soon.

>> Have a good day, guys.