Description Transcript
The agentic future isn't coming—it's here, and it’s changing everything. In this session, recorded live at SFJAZZ Center in San Francisco on October 7, 2025, Retool Head of Product Design Paco Vinoly and Solutions Engineer Tom Konewka discuss what’s possible when agents can reason, plan, execute, and iterate autonomously.
We’ll walk through agentic apps built on Retool, discuss the philosophical and practical implications of software that truly works for you, and preview a future where human and machine work in harmony, not conflict.
🔗 Resources mentioned in this session:
https://retool.com/agents
👉Start building in Retool for free:
https://retool.com/
0:04 Please welcome Retool head of design, 0:07 Paco Vinoly. 0:12 >> All right. Thank you. Thank you. 0:16 Hi everyone. Welcome back. I am Paco 0:19 Vinoly. I'm head of design, and today 0:23 our talk is Our Agentic Future. It 0:28 sometimes feels like these slides get 0:30 outdated from one day to the next with 0:32 all the announcements that we're getting 0:33 in the field, but I'm excited to 0:37 share with you how we think about agents 0:40 and our agentic future in terms of 0:43 Retool. 0:45 So imagine a world 0:48 where software anticipates your needs. 0:52 It can make decisions and act on your 0:54 behalf across every part of your 0:58 business, every corner of the 1:00 enterprise. 1:02 It's not that hard to imagine that 1:03 world, right? It's upon us, 1:07 and it's made possible by these LLMs, 1:11 the large language models. You saw 1:14 David and Abhishek talk about it. 1:17 And it is a generational technological 1:21 shift on the scale of the internet, 1:24 the smartphone. 1:26 So let's get into it. You've seen this 1:29 slide, and I want to hammer this 1:31 again. I know you've heard it from 1:33 Sean and Mads in the first session 1:36 today, but it's important, and we're 1:38 going to put this through the lens of 1:40 agents in this session. Again, we've 1:43 talked about these AI demos, 1:47 and Sean was saying we can't use the 1:51 magic word, and I'm like, dude, I have it 1:53 on my slide. Don't say that in your 1:55 session, because it's not going to work 1:57 for me. But they're incredible, right? 2:00 And then you have production 2:03 reality, which is so much more 2:07 complicated and requires so many of 2:09 those things that we talked about 2:11 today to make them real. That gap 2:16 is something that we're going to talk 2:18 about during the session, again 2:20 through the lens of agents.
2:23 The other thing we're going to talk 2:24 about is why the future of work isn't 2:28 about AI replacing humans. 2:32 The future is about humans and AI 2:38 collaborating, working together. And 2:40 it's important, as we go through the 2:43 different types of workflows and agents, 2:46 how both humans and AI play roles 2:50 that depend on each other. 2:53 So, I was going to go a little 2:56 bit off script here, and I had a question 2:58 for the audience. I can see some of you 2:59 out there. And my question wasn't about 3:02 how many of you have seen AI demos or 3:06 been blown away by them, but has anyone 3:09 in the audience built or tried to build 3:12 an AI agent that didn't quite make it, 3:16 that failed in some way? 3:19 Any show of hands? 3:21 Wait. Did I see one in the middle? 3:23 No. Oh, I got one here. 3:27 Okay, here we go. 3:31 Phil, 3:33 do you want to come up and share what 3:34 happened? 3:37 Let's go. John, can I borrow a mic? 3:40 Hey, everyone. Welcome, Phil. He 3:42 has not prepared for this. 3:44 >> I have not. 3:44 >> Come on in. 3:47 >> So, you've built an AI agent. Yes. 3:51 >> And you know my next slide is probably 3:53 going to say how hard it is to bring 3:55 these into production. 3:57 >> What did you experience? 3:58 >> Yeah. I mean, basically, this was when I 4:01 was learning Retool, and we built as a 4:04 class our first AI agent, and I thought, 4:07 oh, this is super intuitive, it's 4:09 using common language. I am not 4:12 an engineer, for those who do not 4:14 know. And I thought, okay, this 4:16 is something I could definitely 4:18 try on my own and come up with 4:20 one that I could then have my colleagues 4:22 use and see all the great work that I 4:24 did. And about the third 4:28 prompt in, it just started really going 4:30 off the charts and went really, really 4:34 bad. It just kept processing and 4:36 spiraling, and I had to stop it.
I kept 4:38 trying to do reprompts, and what I 4:41 thought was really intuitive 4:42 definitely shook my confidence. Yeah. 4:45 Awesome. Well, I'm sure there are lots of 4:47 stories like these, so thank you for 4:49 sharing yours. You bet. 4:50 >> Sorry to put you on the spot and go 4:52 off script. I'm going to hand this 4:55 back to John. Thanks, John. Thanks, 4:56 Phil. Again, I think there are a 5:00 lot of you that might have that story 5:03 to tell, or may experience it in the 5:06 near future. 5:08 And that's the trick to some of 5:11 this talk. Again, I think you also 5:12 heard David talk about this. The reality 5:14 is that 95% 5:17 of AI pilots never make it to 5:20 production. 5:23 This is what the MIT researchers are 5:25 calling the learning gap. And it's not a 5:28 technology problem. It's not about the 5:32 models and their capabilities. It's 5:34 about how we design the systems and 5:36 these workflows. 5:38 Most organizations don't know how to fit 5:40 AI into human workflows in a way that's 5:44 safe, reliable, 5:46 and ultimately valuable for the company. 5:52 And that's the real challenge before us. 5:55 In this talk, we're going to focus on 5:57 how we build systems where AI 6:02 and people work together, not just 6:04 coexist, but collaborate, 6:08 amplify each other's strengths. 6:11 Okay, so let me set a little bit of the 6:13 context. 6:15 What you can see in the slide here is a 6:17 little bit about our tech stack. It's a 6:21 little bit genericized, but hopefully you can see 6:24 some of your technology here. 6:27 What we're focusing on today is the 6:30 automate layer, which traditionally has 6:34 the workflows, and now we are adding 6:37 agents to it. This is where most of the 6:41 human-AI collaboration happens. 6:45 But the work that happens in automate 6:48 depends on everything below it as well.
6:51 It's those foundational elements that 6:53 Gabriella and Todd talked about. It's 6:56 about governance. It's about connecting 6:58 to the data layer. It's about deploying 7:02 securely. 7:04 All of these things 7:10 are critical to ensuring the success of 7:13 what's happening in Automate. So, 7:17 I'm going to do a little bit of a more 7:20 theoretical pass, and then Tom is going 7:22 to come in and share how the product 7:25 really comes to life. 7:27 What we're seeing in our customers 7:28 that are power users is that they are 7:32 evolving across this maturity 7:36 curve. 7:38 On the left we have these GenAI 7:41 workflows. They're relatively consistent 7:44 in terms of the output you get. They're 7:46 not very flexible, right? But they 7:50 can accept a ton of inputs, and 7:52 they're pretty predictable and easy to 7:54 work with. Probably most of you are 7:57 working in this space already, and have 8:00 been, maybe, for a few years. And then on 8:03 the top right 8:05 you have these AI agents, fully 8:08 autonomous: 8:10 a much greater variety of inputs you can 8:12 put into the system, 8:15 but the consistency of the outputs 8:18 can change dramatically, and they can 8:20 be much harder to work with. 8:23 I want to make this a little bit more 8:24 concrete and go through each one of 8:27 these, and I'm doing this 8:31 for the purpose of setting the stage, not 8:33 only for what Tom is going to talk about, 8:35 but to think about 8:37 agents, AI, and human interaction, and the 8:40 different versions of this that we have. 8:43 So for GenAI workflows: this is one of 8:48 the most deterministic workflows and 8:52 the simplest one that we have. 8:54 In this case, the human has defined 8:57 every step of the workflow, and the LLM 9:02 is in the middle, adding this 9:05 generative power. And we take an example 9:08 here of a sales call.
A sales call comes 9:12 in, a transcript is created, and 9:15 it goes into the LLM, which at this 9:18 point can summarize, highlight 9:21 objections, create action items, 9:24 create a recap of the call, 9:28 and then send an email to the 9:32 sales rep. The workflow is always 9:36 the same, but the LLM can create 9:40 different outputs within it. 9:43 It's a powerful tool, but it doesn't 9:46 have a ton of flexibility. So again, 9:49 bottom left, GenAI workflows, the 9:51 simplest version that we have. 9:55 And then we move a little bit further 9:56 up, one step further, and these are 10:00 agentic workflows. In this 10:04 workflow, the model begins 10:07 to make decisions that affect how the 10:09 workflow runs. 10:12 We take the previous example, right? 10:15 The transcript comes in, the LLM 10:18 picks it up and does the work of 10:21 summarizing objections, action items, 10:24 creating its email, but it 10:28 has a point where it needs to make a 10:30 decision. Now, it's not just generating 10:33 output. It needs to decide: 10:36 am I ready to insert data into the CRM? 10:39 Am I ready to send this off to the sales 10:41 rep? Or do I need more clarity, and I 10:46 have to reach out to a human for 10:48 some help? 10:50 Now the LLM will make that decision 10:53 depending on how it's thinking, and 10:58 begins to chart the route through the 11:02 workflow. The workflow is predefined. 11:05 The human has set the steps, but now the 11:09 LLM decides which path to take. 11:13 Okay. So that's agentic workflows. And 11:17 finally we get to the AI agents, 11:20 fully autonomous, 11:23 and this is when 11:26 we don't have the clarity between input 11:28 and output. There's a lot of 11:31 variability. 11:33 In this case the agent has the ability 11:36 to check context, can decide, can 11:40 choose to reason further, can choose to 11:42 call tools,
11:44 even a human if necessary. There are 11:48 no fixed steps, 11:51 and the path is charted differently each 11:54 time. 11:55 For example, here, 11:58 it's hard to take the same example all 12:01 the way through, but the input comes in. 12:03 Let's say it's a call transcript, 12:05 and the LLM is starting to do its thing. 12:08 Now, it may say, "I'm ready. I got 12:10 everything I need," and output. Or it 12:14 may say, 12:16 well, you know what, I am going to check 12:18 the CRM, or I need to do a web search 12:21 to validate something that I heard in 12:23 the transcript. And it can then generate 12:26 loops that go back as many times as 12:29 necessary 12:30 to get to the point where it feels 12:32 confident 12:34 to end the workflow and move forward. 12:38 So these are the three 12:40 types of workflows that we're seeing, and 12:43 they all leverage the LLM in different 12:46 ways and at greater levels of 12:48 complexity. 12:52 But each one of these systems faces the 12:54 same trade-off: 12:56 control versus delegation. How much 12:59 control do we keep with the humans, and 13:03 how much do we choose to delegate to the 13:06 LLM? 13:08 And it's not a fixed or one-time 13:10 decision. It's a dynamic boundary, and it 13:15 shifts depending on context, on risk, on 13:19 the capabilities of the LLM. 13:23 Okay, so I just wanted to set a 13:25 little bit of the theoretical framework. 13:29 But now I think it's important to 13:31 see how the product 13:34 functions and how our customers are 13:36 implementing it in the real world. And 13:38 for that I'd like to bring Tom on stage 13:41 to walk us through some of those 13:42 examples. 13:44 >> Please welcome Retool solutions engineer 13:47 Tom Konewka. 13:55 Thank you everyone. 13:57 Feels like I have my own Netflix special 13:59 here. Just maybe 1% of the pyro 14:03 budget. We really need a candle for 14:05 our fireside chats next year. 14:08 Anyways, agents.
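Before moving to the applied portion: the three workflow types walked through above can be sketched, very roughly, in code. This is a toy illustration, not Retool's API; `call_llm`, the string-based "unclear"/"confident" checks, and the escalation messages are all stand-ins for a real model and a real confidence signal.

```python
# Toy sketch of the three patterns: GenAI workflow, agentic workflow,
# autonomous agent. All names here are illustrative stand-ins.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call."""
    return f"[llm output for: {prompt[:30]}...]"

# 1. GenAI workflow: every step is human-defined; the LLM only generates.
def genai_workflow(transcript: str) -> str:
    summary = call_llm(f"Summarize this sales call: {transcript}")
    return f"email to rep: {summary}"

# 2. Agentic workflow: the steps are predefined, but the LLM decides
#    which path to take (insert into CRM and send, or escalate).
def agentic_workflow(transcript: str) -> str:
    summary = call_llm(f"Summarize: {transcript}")
    needs_human = "unclear" in transcript  # stand-in for the LLM's decision
    if needs_human:
        return f"escalated to a human: {summary}"
    return f"CRM updated, email sent: {summary}"

# 3. Autonomous agent: no fixed steps; it loops, optionally calling tools
#    (CRM lookup, web search), until it feels confident or hits a budget.
def autonomous_agent(transcript: str, max_iterations: int = 5) -> str:
    context = transcript
    for _ in range(max_iterations):
        if "confident" in context:  # stand-in confidence check
            return call_llm(f"Final output: {context}")
        # "Tool call": gather more context before deciding again.
        context += " " + call_llm(f"Gather more context for: {context}")
        context += " confident"  # toy convergence so the sketch terminates
    return "escalated: iteration budget exhausted"
```

The structural difference is where control lives: in the first function the human owns every branch, in the second the model picks among human-written branches, and in the third the model owns the loop itself.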
Why are we actually 14:11 here? So, what Paco was talking about a little 14:13 bit earlier was the theoretical 14:15 application of these agents and why 14:17 they're important. My part here is 14:20 actually going to be the applied 14:21 portion, seeing it in action in our 14:24 product. All right, so stay with me 14:26 here. You've made it. This is the last 14:28 product content session and I'm the last 14:30 speaker, but the product is actually 14:32 very, very important. So with agents, 14:36 right, what does it actually take to win 14:39 with agents? It's not just about the 14:41 underlying LLM. It's the system. 14:44 It is the product. It's the platform. 14:47 It's Retool. So building agents 14:51 with Retool actually requires three critical things. The 14:54 first one is tools. You're going to 14:56 learn that tools are effectively the 14:58 lifeblood of agents. They're the 15:00 mechanisms that can take action on your 15:03 data and your systems. Tools are great, 15:06 but if you let them loose, they need 15:09 guardrails. That's the second point 15:10 right there. Guardrails are important 15:12 because of the power of these agents. 15:14 They need to be constrained in a way 15:16 that is safe, secure, governed, and 15:19 auditable. And finally, the third point 15:22 is the quirks. Quirks, what are they? 15:25 I'm sure we've all tried to build 15:28 something with an LLM and did not get 15:31 the same outcomes every single time. 15:33 These are the quirks, the edge cases. We 15:36 need abilities and mechanisms to 15:38 control for those quirks within a 15:40 platform when building agents, to give 15:42 you the best shot at a deterministic 15:44 outcome for what really is 15:47 non-deterministic software. 15:49 But when these three align, agents stop 15:52 being demos. They become dependable 15:55 business partners. 15:57 So let me show you the platform that 15:59 brings these three pieces together.
16:02 So we're going to start with the 16:03 narrative here. Meet Pam. She's an 16:06 account executive for a Fortune 500 16:08 company. She's talking to stakeholders, 16:11 customers, prospects, internally, 16:14 externally, all day, every day. She's 16:15 very busy. She's got back-to-backs all day 16:18 long. Now, she has an important meeting 16:21 coming up with one Tim Cook. I wonder 16:23 what he does. And she actually needs to 16:26 prep. She needs a very informed 16:29 and intentional meeting preparation 16:31 document. She just doesn't have the time 16:33 or the effort or the energy to do so. 16:36 So, enter Retool Agents. We're going 16:39 to play this so you can see it in 16:41 action. Pam is actually using a 16:43 meeting prep agent to prep for this 16:46 upcoming meeting with Tim Cook. 16:50 So, the prompt has been sent. You can 16:52 now see that the meeting prep agent is 16:54 searching the web using the search web 16:56 tool, right? And within the platform, 16:59 you can actually see the thinking and 17:01 the reasoning that's occurring 17:03 for each successive step, each tool use. 17:06 And that's important for us as operators, 17:08 to build trust within the system. We 17:11 don't necessarily need to have 17:12 all of this exposed, but as we're 17:14 building, as we're really shifting our 17:16 human psychology around the construct of 17:19 agents, it's important to have this so 17:20 we can actually build trust. So you can 17:23 see we're using another tool to create a 17:25 Google doc, update a Google doc, and 17:27 then finally send email. So we've 17:29 context switched maybe three or four 17:31 times now. We've all been in the room, 17:34 we've all been at home, we've all been 17:36 at meetings where we're getting constant 17:37 Slack pings, and context switching is 17:40 productivity poison, right?
As 17:44 humans, our performance heavily declines 17:47 and degrades every time we 17:50 need to context switch. But agents, 17:53 they're built for it. So, as you can see 17:55 here, we actually got the digest at the 17:57 end for the research being complete. And 17:59 then the email is sent to Kenan, 18:01 one of Pam's associates, for review. And 18:04 that was just done within minutes, 18:06 right? This could normally take hours, 18:09 if not days, if you're factoring in 18:10 everything else that you have going on 18:11 in your life. And yes, this is live in 18:15 Retool today. So now that you've seen 18:17 the system in action, let's actually go 18:20 under the covers and dig into the 18:22 mechanisms behind making it reliable. 18:26 So over here, 18:29 tools. Why are tools important? Tools, 18:32 tools, tools. Conveniently, our 18:34 company's name is Retool, but they're 18:37 super important because this is what 18:38 actually gives agents the power to 18:42 reason and make change in your business. 18:45 So you can see over here there's a bunch 18:46 of core tools. These are the 18:48 out-of-the-box tools that we have available 18:50 in the platform for you today. Things 18:52 like sending email, creating calendar 18:54 events, creating docs, doing web 18:56 searches, all the things that you would 18:58 expect from an agent platform to help 19:00 you with these business processes that 19:02 you're trying to automate or trying to 19:05 create for the very first time. Now, 19:08 every business is unique. Every business 19:10 is really non-deterministic in nature. So 19:13 we need to provide a mechanism to 19:16 actually account for that. That's 19:18 where you can create your own 19:19 custom tools. What is a custom tool?
19:22 Well, if you've used Retool Workflows 19:24 before: it provides you with a workflow-like 19:25 canvas to design and iterate and 19:29 provide your agent with a very specific 19:32 job and function with that tool that is 19:34 actually unique to your business. 19:37 Other things you can do: you can use 19:39 other agents, you can import from an 19:41 agent, you can even leverage the 19:43 existing workflows that you've built in 19:44 the Retool platform today. And now you 19:47 can actually also connect to MCP 19:49 servers. And we're also working 19:52 very heavily on A2A becoming a de facto 19:55 standard just like MCP has. 19:59 So in our previous example, our agent 20:02 could search the web, do research, send 20:05 an email, share the meeting prep doc 20:07 with key attendees. Now what sets this 20:10 system apart from other AI architectures 20:12 is that the LLM is deciding when to use 20:15 each of these tools that the human has 20:18 equipped it with. This separation of 20:21 capabilities versus decisions is what 20:23 sets it apart. The result is that there are 20:26 fewer prompts into an LLM that you have 20:28 to do, there's more autonomy that you're 20:31 granting the agent, and you get 20:33 consistent outcomes. 20:36 All right. However, giving these agents 20:40 capabilities through tools means you 20:42 also need to take measures to prevent 20:43 them from acting in an unauthorized or 20:46 even malicious way. 20:48 On the one hand, that means giving you 20:51 powerful, detailed observability like you 20:53 see on screen over here. We have things 20:56 like token usage and estimated costs. 20:58 Great for those business-line 20:59 stakeholders to know: how much are these 21:02 transformations actually affecting my 21:04 bottom line? The total runtime and total 21:07 runs: performance, for other technology 21:09 stakeholders to know: how good is this 21:12 agent? How quickly is it solving the 21:13 problems?
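The separation just described, where humans equip capabilities and the model decides when to invoke them, can be sketched as a small tool registry. This is an illustrative pattern, not Retool's implementation; the decorator, the tool names, and `run_step` are hypothetical.

```python
# Illustrative tool registry: the human registers capabilities; the model
# may only invoke what was registered. All names here are hypothetical.

from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    """Register a function as a capability the agent is allowed to call."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return wrap

@tool("search_web")
def search_web(query: str) -> str:
    return f"search results for {query!r}"

@tool("send_email")
def send_email(body: str) -> str:
    return f"email sent: {body[:40]}"

def run_step(requested_tool: str, argument: str) -> str:
    # In a real system, `requested_tool` comes from the LLM's tool-call
    # output; the registry acts as a guardrail, constraining the model to
    # capabilities a human explicitly granted.
    if requested_tool not in TOOLS:
        raise ValueError(f"agent requested an unregistered tool: {requested_tool}")
    return TOOLS[requested_tool](argument)
```

The key property is that the decision of *when* to call a tool belongs to the model, while the set of *what* it can call stays under human control.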
How many times does it need to 21:15 run to solve that problem? You can see a 21:17 graph here with all the different tool 21:19 usage. So these things are coming out of 21:21 the box. It's actually quite trivial to 21:24 build an agent today without any sort of 21:27 platform. There are a lot of tools out 21:29 there, but actually putting one into 21:30 production in a secure, governed, and 21:32 observable way is something that Retool 21:35 is very, very unique at. 21:38 So, 21:40 let's actually go to 21:43 the next screen here to 21:44 configure an actual agent. Now, LLMs, 21:47 they're inherently non-deterministic 21:50 systems. This means that you need a 21:52 different set of controls compared to 21:54 other software paradigms that have 21:56 existed in the past. Oftentimes that 21:58 involves creating a detailed, specific 22:00 prompt, like you can do today in Retool 22:02 in the instructions panel there, where 22:05 you actually guide what this agent's 22:07 role is. Selecting the model as well is 22:10 important. With Retool, you can actually 22:12 select any model that your organization 22:14 has a key for. We can provide models for 22:17 you, or you can bring your own. 22:20 Next is temperature: 22:22 adjusting for the creativity of the 22:25 agent itself. You align that 22:27 with the task that's actually at hand. 22:30 So, if there's, let's say, a 22:31 financial analysis agent, you'd want the 22:33 temperature to be basically zero, because 22:35 you're dealing with numbers. If there is 22:38 a marketing agent to create brand copy, 22:41 you might want to slide the temperature 22:42 up. Also, iterations: controlling cost 22:45 and performance is important. Sometimes, 22:48 I know we've all been there, we've used 22:49 an LLM and then all of a sudden it's 22:52 stuck in a loop, right? That loop does, 22:54 unfortunately, cost money.
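The controls just listed (instructions, model, temperature, iteration cap) can be summarized in a small configuration sketch. The field names mirror the talk, not Retool's actual configuration schema, and `"your-org-model"` is a placeholder.

```python
# Hedged sketch of the agent controls discussed above; field names and
# values are illustrative, not a real platform schema.

from dataclasses import dataclass

@dataclass
class AgentConfig:
    instructions: str    # the detailed role prompt guiding the agent
    model: str           # any model your organization has a key for
    temperature: float   # ~0 for numeric work, higher for creative copy
    max_iterations: int  # caps looping, and therefore runaway cost

# Financial analysis: temperature basically zero; we're dealing with numbers.
finance_agent = AgentConfig(
    instructions="Analyze quarterly figures; flag anomalies for review.",
    model="your-org-model",  # placeholder name
    temperature=0.0,
    max_iterations=3,
)

# Brand copy: slide the temperature up for more creative output.
copy_agent = AgentConfig(
    instructions="Draft on-brand marketing copy for the campaign brief.",
    model="your-org-model",  # placeholder name
    temperature=0.9,
    max_iterations=5,
)
```

The point of the iteration cap is exactly the loop problem described above: a stuck agent burns tokens, so bounding its attempts both limits cost and produces a clear failure signal to iterate on.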
So, we want to 22:56 limit the possibilities and limit how 22:58 many times an agent tries to solve a 23:00 problem. And that gives us good 23:02 feedback to iterate and improve those 23:04 agents. 23:06 So, what happens now when we take 23:09 some actual unique steps when dealing 23:12 with these LLMs, by providing tools and 23:15 providing documentation? How can we 23:17 further build upon this? 23:21 All right, 23:22 this slide. What even is this slide? I'm 23:26 like two feet away from it and I can't 23:28 even read the text. 23:32 Digital transformation information. 23:35 This is business process. This is all 23:38 the things that you've probably done in your 23:40 organization over the last 5-10 years. 23:44 Different versions of this, different 23:45 iterations of this, but it's all the 23:48 same. It's a static process that you've 23:50 all come up with, and this is the best 23:52 way to implement it. At least over here 23:54 with the Retool workflows, it's actually 23:56 diagrammed, and you can follow along and 23:58 make sense of it. I'm not going to ask you to 24:00 raise your hand here, but I know a lot 24:02 of you have some version of this in your 24:04 organization that's probably not even 24:06 diagrammed. It just exists in the 24:08 digital ether, right? And what happens 24:11 when someone leaves your team, leaves 24:13 your organization, someone new comes, 24:15 right? Your business changes by really the month 24:20 and the year. It's very 24:23 difficult to have an actual five- to 24:25 ten-year business plan at the rate 24:26 things are improving and changing every 24:28 single day. 24:30 So now, what if we could refactor this? 24:34 What if we could actually change 24:36 how we approach solving a 24:38 business process? 24:41 Enter an agent.
So that previous 24:44 workflow was a security inbox triage 24:47 workflow, where there are different 24:49 branches based on whether it's an email 24:51 asking for this or an email asking 24:53 for that, right? Whether we 24:56 know the organization or not. Now, 24:58 instead, we can refactor this whole thing 25:01 and put it into just three simple 25:03 blocks: a starting point, the invocation 25:06 of the agent, and a response, right? 25:11 Much simpler than, you know, I'll just 25:12 go back for a second, much simpler than 25:14 this. 25:16 So, I actually want to do a thought 25:18 exercise with you right now. Let's 25:21 travel back in time. Circa 25:23 2022. 25:25 LLMs are now starting to take the main stage. 25:28 I want you to imagine 25:33 I came up to you in 2022, even early 25:35 2023, and asked you: 25:39 what if we could actually give you this 25:42 agent, and it can solve your problem for 25:44 you 50% of the time, roughly the 25:48 capability around that time frame? So, if 25:51 you are willing and able to, please lift 25:54 your hand if you would not trust that 25:57 agent around that time. Everyone's hand 26:00 should be lifted up. Keep it up. 26:02 Please keep it up. 26:04 All right, good. Getting audience 26:06 participation is always fun. Now, let's 26:10 fast forward a couple years: 2024, 2025 26:14 in this timeline, in this multiverse. 26:16 Reasoning has come out, and now I 26:18 tell you that this agent can solve your 26:22 problem 26:24 90% of the time. I'll even give you 95. 26:27 Who would still not trust this agent? 26:31 Okay, some hands going down, some hands 26:33 remaining up. This is good. This is like 26:36 the prototyping/POC phase of the 26:39 wonderful world of agents this year, 26:41 past year, and next year. Now, if you 26:46 listened to Cle's talk, the fireside chat 26:48 with Elizabeth Ray earlier, you know 26:50 he's all bought in on agents.
You saw 26:53 Burger's presentation with Uber as well. 26:56 Now, let's fast forward three years, four 26:58 years. What if I told you that this 27:00 agent is accurate 99.9999% 27:07 of the time? Who would trust it then? 27:11 Okay, everyone's raising 27:12 their hand up. Great. I guess I should 27:14 have said, who would not trust it then? 27:15 Everyone, hands down. 27:18 All right. So, we have something where, 27:21 in three years' time, 27:24 basically for every 1 million 27:25 interactions that you're going to have 27:27 with an agent, every 1 million customer 27:29 support tickets, every 1 27:32 million calls, every 1 million legal 27:34 requests, there will only be one error 27:38 that actually requires human 27:40 review. So now you have to ask yourself: 27:43 three years is a short amount of time. 27:44 I'll even give you five years, to the end 27:46 of the decade. You're probably still 27:48 going to be at the same companies. Maybe 27:50 you'll join a new company in a year or 27:52 two where you'll actually be tasked with 27:53 figuring this out. So now you have to 27:56 ask yourself: what is the actual shelf 27:59 life of your business processes today, 28:02 if in the next 3 to 5 years you'll have 28:05 an agent that can do it quicker, 28:07 cheaper, more accurately, 28:10 99.9999% 28:12 of the time? It really becomes the march 28:14 of nines, right? For certain businesses, 28:16 maybe two or three nines is sufficient. 28:19 But then the march of nines: six nines 28:21 is one error for every million transactions, right? 28:24 Seven nines for every 10 million, eight 28:25 nines for every 100 million, nine nines 28:27 for every billion transactions, billion 28:29 interactions, billion iterations. 28:32 So what does that actually mean? It 28:35 means that your business process flow 28:37 diagrams that existed in the past 28:39 will now look like this underneath the 28:42 covers. We have a security inbox triage 28:44 agent.
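The march-of-nines arithmetic above is worth making explicit: n nines of accuracy means one expected error per 10^n interactions. A two-line check:

```python
# March of nines: n nines of accuracy -> one expected error per 10**n
# interactions (99.9999% accurate = six nines = 1 error per million).

def expected_errors(interactions: int, nines: int) -> float:
    """Expected failures for a given number of nines of accuracy."""
    return interactions / (10 ** nines)

# Six nines: one error per million interactions, as stated in the talk.
assert expected_errors(1_000_000, 6) == 1.0
# Seven nines: one per 10 million; nine nines: one per billion.
assert expected_errors(10_000_000, 7) == 1.0
assert expected_errors(1_000_000_000, 9) == 1.0
```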
They use tools. They use 28:47 reasoning. They're able to actually get 28:49 the job done and adjust as your business 28:52 requirements change. Give it a new tool. 28:55 Give it new instructions. Give it more 28:57 context. 28:58 So, the most interesting thing that we've 29:01 observed thus far, and really are predicting 29:03 going forward, is that the boundary 29:06 between what should be automated and 29:08 what needs human input is constantly 29:10 shifting. Right? 29:12 Everyone kind of cringes, but 29:17 I'm willing to actually plan my travel 29:19 through Claude, through ChatGPT. 29:23 I just don't have the time or the 29:24 cognitive cycles to do it anymore. So, 29:26 not only is the technology changing, our 29:28 human psychology is changing, and Retool 29:31 actually gives you the platform to help 29:33 you with that change. 29:36 And one example, I just mentioned Claude 29:39 4.5: they recently changed how 29:42 they approach their thinking and 29:44 reasoning. Whereas before, you actually 29:46 had to select a specific model, among 29:48 other model providers as well, that would 29:50 pin your level of reasoning, your level 29:52 of chain of thought, to actually get a 29:55 job done. But now that's actually 29:57 determined by the model itself, based on 29:59 the input and context that you provide. 30:02 So you need a platform that can adjust 30:04 to those new realities in real time. Not 30:06 in a quarter, in several months: in 30:09 real time, today, tomorrow, next week. 30:14 So, 30:15 when you put all of this together, you 30:18 stop thinking of them as separate tools: 30:21 human oversight, agents, and workflows. 30:24 You start seeing them as building blocks 30:26 of a single automation platform. So, 30:29 here's where this gets powerful. The 30:31 future of automation isn't just smarter 30:34 agents.
It's the orchestration of those 30:38 agents and workflows and the structured 30:40 human tasks on a single surface, powered 30:43 by Retool. Workflows give us durable, 30:46 deterministic execution. 30:48 Agents give us adaptive, non-deterministic 30:50 reasoning. And humans? They give us 30:53 accountability and judgment. So, you can 30:55 see I'm now phrasing this as 30:58 digital co-workers. It's a concept that 31:01 we're going to have to get familiar 31:02 with. It may seem a little 31:05 "does not 31:07 compute," but by the end of this decade it 31:09 will be common. Digital labor, digital 31:11 co-workers; there will be different 31:13 labels for it, but it is a reality that's 31:15 coming. 31:17 So, let's continue the 31:20 narrative, right? On the left-hand 31:23 side, we now have 31:26 a process that we need to create. So, 31:28 let's go back to Pam's scenario. Pam 31:31 actually sells a lot of product for her 31:34 company, right? Thousands of customer 31:37 interactions. Now, one customer 31:39 unfortunately wants a refund. There's a 31:42 bit of a dispute. They didn't see eye to eye 31:43 on something. It happens. You know, law 31:45 of large numbers type stuff. Now, 31:48 strangely enough, Pam's company, let's 31:51 just for argument's sake say it's Dunder 31:53 Mifflin, has never had to do a 31:57 return before. They actually have 31:58 no return or dispute resolution process. 32:02 So, Pam grabs Dwight, grabs Jim, even 32:06 Michael. They go to a meeting room, and 32:08 then, after some colorful commentary, they 32:12 come up with an SOP for a dispute 32:15 process. Great. Now, typically, what would 32:19 follow is that they would reach out to 32:20 their IT team, maybe in-house, 32:23 maybe contract. There would be 32:24 lots of back and forth. There'd be lots 32:25 of committees. But that unfortunately 32:28 does not scale in this new world.
What 32:31 actually happens is that Pam goes to the 32:33 customer success team. Let's say it's 32:36 Stanley and Creed. And from there, they 32:40 actually put this into Retool. They drop 32:42 the SOP into Retool. And now Retool is 32:45 actually able to parse out the diagram 32:47 and the requirements and create that dispute 32:50 resolution process for the company, for 32:52 the team. You can see the task plan, the 32:55 core workflow blocks, the user tasks for 32:57 the human layer, and then finally the 32:59 execution decision branches. On the 33:01 right side, what you see here is all of 33:04 the different agents that can come into 33:06 play. There's this kind of orchestrator 33:08 producer agent that we have at the top. 33:10 There's a fraud workflow. There's an 33:12 evidence agent. There's a risk agent. 33:14 There are lots of agents available at this 33:17 business's disposal to create this new 33:19 process. And finally, we get our 33:22 response and a human review at the 33:23 bottom. 33:26 So, let's actually go through 33:28 this dispute resolution process in 33:30 real time. We see a new case has 33:33 actually been triggered, 33:35 and now the dispute producer agent 33:38 starts reaching out to all the other 33:40 available agents that it has. First, 33:42 it goes to the evidence collector. It's 33:44 going to all the different data sources 33:46 that are available in the organization, 33:48 just to try and reconcile what's 33:50 actually going on. Next, it goes to the 33:52 Zendesk agent, through A2A, to actually 33:56 initiate a refund request and log it in 33:58 Zendesk, and also reconcile some other 34:01 information within Salesforce on 34:03 account metrics, etc. Then it finally sends 34:06 it off to the customer success team to 34:08 be approved through Slack. So now, what 34:12 we actually have is that the customer 34:13 success team was able to actually 34:16 process this refund on behalf of Pam.
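The orchestration just walked through can be reduced to a toy sketch: a producer agent that delegates to sub-agents in turn, then routes to a human approver. The agent functions, the $500 threshold, and the case shape are all illustrative inventions, not the actual Retool agents from the demo.

```python
# Toy version of the dispute flow: producer delegates to sub-agents, then
# a human approves. Every name and threshold here is illustrative.

def evidence_agent(case: dict) -> dict:
    # Stand-in for reconciling data sources across the organization.
    case["evidence"] = f"records gathered for order {case['order_id']}"
    return case

def ticketing_agent(case: dict) -> dict:
    # Stand-in for the ticketing step: log a refund request.
    case["ticket"] = f"refund-ticket-{case['order_id']}"
    return case

def human_approval(case: dict) -> dict:
    # For now, the delegated authority is a human; in the future it could
    # be an agent, or both, depending on your risk tolerance.
    case["approved"] = case["amount"] <= 500  # illustrative threshold
    return case

def dispute_producer(case: dict) -> dict:
    # The orchestrator reaches out to each available agent in turn.
    for step in (evidence_agent, ticketing_agent, human_approval):
        case = step(case)
    return case

result = dispute_producer({"order_id": 4117, "amount": 120})
```

The useful property is that the final approval is just another step in the chain: swapping the human for an agent later means replacing one function, not redesigning the process.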
34:18 She did not have to make a call to them. 34:20 She didn't have to message them. She 34:22 could just initiate the request, and then 34:25 it can be completed by someone that has 34:27 the delegated authority to accept it. 34:30 For now, that authority is a human. In 34:33 the future, it could be an agent. It 34:35 could be both, depending on what 34:37 your thresholds are and what your risk 34:39 tolerance is in your business. 34:42 So now we went from making an 34:43 individual more productive, to a team, to 34:47 a business. And that's what really 34:49 matters. And it's all built on Retool's 34:51 centralized agent platform.

34:56 So before... 35:00 sorry, went one slide ahead. I want to 35:03 leave you with a comparison. 35:07 You can tell I love TV shows. I love 35:09 movies. So what this reminds me of... 35:11 I don't know if anyone's ever watched 35:12 The Big Short. Who here has watched The 35:14 Big Short? Curious. Wow, actually 35:17 more than half the hands went up. 35:18 Okay, you'll get this. So, do you 35:21 remember that scene where Jared Vennett, 35:24 played by Ryan Gosling, is 35:28 in the room selling to Michael Scott at FrontPoint 35:31 Partners? He's selling credit 35:33 default swaps on the housing market. 35:35 Why? What is that? Well, he was 35:37 betting that the housing market was going 35:39 to go through massive volatility and 35:41 change. Well, the same thing is 35:43 happening with your business processes 35:45 right now. They're going through massive 35:48 volatility and change. And 35:51 what he was doing was selling fire 35:53 insurance on that volatility and change 35:55 that was already underway. 35:58 So what is Retool? Retool is the credit 36:01 default swap against your business 36:03 processes. It is the fire insurance. 36:06 It's available today, right now.
And now 36:09 everyone around you, 36:12 whether it's your partners, 36:13 whether it's your friends, they have no 36:15 idea what's going on, especially if 36:17 they're not in tech. Maybe this isn't a 36:19 good sample size because we're all in 36:20 the Bay Area; really, everyone's 36:23 circle of friends here is somewhat adjacent 36:24 to or directly involved with tech. But there are a lot of people that really 36:27 don't know what's going on. You do, 36:30 and you can take advantage of 36:32 it today. So be that agent of change 36:35 within your business, and you'll really 36:38 help transform it and get it ready for 36:40 the next decade. So with that, I do want 36:44 to bring Paco back to the stage 36:47 so he can close us out and get 36:48 us ready for the next session.

36:55 >> Thanks, Tom. Thanks, Tom. We are the last 36:59 product session of the day. I 37:02 think it's the best one. But I also 37:05 feel like, 37:07 if we closed out product this year, 37:11 agents is probably the keynote for next 37:13 year. So you may see Tom and me next 37:18 year doing the main one, hopefully. 37:20 But this is so cool, and one thing: 37:25 I want to take us back to where we 37:26 started. We started with, how do we 37:29 bridge that 95% failure gap? And as I 37:33 was backstage listening to Tom, I 37:36 realized maybe next year it's not going 37:39 to be a 95% failure gap. It's going 37:42 to be a 0% failure gap, or, optimistically, 37:46 maybe 5%. And what we saw today 37:49 is what is going to ensure the closing 37:52 of that gap. So I'm so excited for the 37:55 year to come. So, two things. 37:59 We talked about embracing the human- 38:03 AI boundary. We talked about not 38:06 eliminating it, and about building the tools 38:09 that make that boundary visible 38:12 and adjustable.
38:15 And we recognize that the future of AI 38:18 isn't about perfect autonomy; 38:22 it's about perfect collaboration 38:26 between human judgment and machine 38:28 capabilities. Humans don't disappear. 38:31 They become first-class participants, 38:34 and their role shifts to oversight, 38:36 approvals, 38:37 and judgment calls. 38:40 All of this to make sure that the work 38:42 we do and the agents we create are 38:46 safe, compliant, and aligned with our 38:49 business goals.

38:51 So, with that, we are going to close 38:53 out, but we encourage all of you to 38:56 take a quick break and come back, because 38:58 now the fun really begins with Patrick 39:00 and David, and then Harry's coming back 39:03 to do a show that is going to blow our 39:05 minds. So, thank you all very much. 39:08 >> Thanks, everyone.