Description
Data and Engineering teams are being asked to automate revenue, risk, and operations workflows—not just publish dashboards. But when you’re designing those systems, it’s not always obvious which parts should be handled by workflows, which by AI agents that can recover from errors and look up missing data, and how to keep everything reliable and governed.
In this webinar, Keanan Koppenhaver (Manager, Developer Relations at Retool) walked through real‑world patterns for combining workflows and agents to handle messy data, reduce failures, and safely write back to your core systems instead of triggering yet another read‑only dashboard.
We covered:
How to decide when a workflow is enough, and when to introduce an agent that can reason, retry, and enrich context.
How to design automations that can recover from bad or incomplete data rather than silently breaking.
How to structure safe, auditable write‑backs so agents improve upstream data quality over time.
Transcript
0:10 All right. Good morning, good afternoon, or good evening, I suppose, depending on where you're joining from. Thanks for hanging out with us today. We are going to talk about automations. 0:21 So if you are interested in workflows, agents, automations in general, you're in the right place. If you're not, I hope you stick around anyway because I think it'll be pretty good. So with that, let's dive right in. You will see Kylie in the chat. So if you have questions as we're going along, you know, drop them in there. She's going to help us keep track of them and we'll get to them towards the end. 0:46 But as we kick off, if you want to jump over to the chat and let us know where you're joining from, it's always kind of cool to see who we have on with us today. So while y'all are doing that, I'm going to go ahead and pull up my slides. Welcome, Jeff from Seattle. Good to see you. 1:07 Very cool. Lots of different folks: Dublin, Ireland; Clearwater, Florida; Toronto; Portland. That is quite the spread. All right. 1:21 So like I mentioned, we're going to be talking about designing automations: all the different shapes of automation, types of automation, and how to know which one is right for your particular use case. We'll have a bunch of demos and examples, and hopefully this will be really tactical, 1:37 so that y'all have things you can take away and apply inside your own Retool account as soon as we get done here in about an hour. 1:44 So like I said, we're going to take a look at what an automation is and how we're going to define it for our purposes today. 1:50 We're going to look at a pretty straightforward workflow. We're going to look at a version of that workflow that has a little bit of AI magic thrown into it. 1:58 And then we're going to look at a version of that workflow that is based on an agent, and we're going to look at the benefits and drawbacks of each of those, 2:06 so that when you're designing your automations, you can start to think about what shape of automation you need, and have a little bit more of a framework for how to understand and think about that. 2:16 Like I also mentioned, we are going to leave plenty of time at the end for Q&A. 2:21 So if you have questions, please drop them in the chat, and Kylie will again flag them and we'll get to those at the end. 2:30 And as a reminder for everybody: for those folks who are able to join live here with us, great. 2:35 For those folks watching the recording that we will share out with you after this, we're glad you're watching. So let's dive right in. 2:44 So first: automation is a pretty broad word. 2:47 When you think of automation, you might think of robots in a factory or an assembly line. 2:52 Some people define an automation as even just, like, a checklist of things that you need to do. 2:57 But we're going to take a look at a more specific kind of automation today, and that's software automation. 3:02 So if anybody is a power Excel user, you might remember writing macros in Excel with a tool like Visual Basic. 3:10 And this is actually a type of automation, right? 3:13 You have things that happen when a certain set of criteria is met, or when a button is clicked, things like that. 3:19 Things that you are not having to do manually.
You can see here this Visual Basic code is running based on the data in this spreadsheet. 3:27 If you're like me and you're more of an Apple and Mac person, maybe back in the day you wrote some AppleScript automations. That was actually my first foray into coding: writing little automations for things to run just on my own computer. 3:41 So when I clicked a button, I wanted three different applications to open, or when someone sent me a particular type of email, 3:47 I wanted to trigger an action on my computer, all these different things. So this is another type of code-based automation. 3:55 And if you've been paying attention on social media, especially over this past weekend, 4:01 you've seen a lot of the different AI automations that people are building. 4:04 So this weekend, the thing that kind of went viral is this tool called Clawdbot (it has since rebranded), but basically it is an AI assistant that lives inside of your favorite messaging platform, whether that's Telegram or WhatsApp or iMessage. 4:21 People were installing this on their computers and they were doing things like having this assistant go in and clean up their whole computer, or migrating their website, like Dave here. 4:31 All in an automated way where they weren't touching code, they weren't building anything like that. They were just setting up this assistant and letting it run. 4:38 So these are all different versions of what we can consider automations. 4:44 But a lot of what folks think of, and the primary thing that we'll be looking at today, is this type of automation here, where it's pretty visual. 4:52 It's very clear to see step one, step two, step three, and it basically allows our tooling to work through a defined process 5:02 and hook up to our various resources. You can see here that we're getting data from GitHub, running some code to process it, and then inserting it into a database. 5:09 So this type of end-to-end automation via software, in a visual way, is what we're going to be looking at today. 5:16 All right. So the three different automations we're going to look at, we're actually going to look at through the lens of the same problem, right? 5:24 We're going to say that we have some either inaccurate or incomplete customer data. Maybe it's complete; we're not really sure. 5:32 But we need to take it from where it is, whether that's sitting in a database or getting submitted via our website, 5:38 and we need to get it into Salesforce, right? So our sales team can take a look at it. They can have all the information available to them as a lead, 5:47 and our sales team can get the information they need. So that's our problem. 5:52 And again, in a fully manual world, I could take all these inputs, review them, look at them with my own eyes, and copy and paste them all into Salesforce. 6:01 But that's pretty slow. And as our company scales, we want to be able to automate that process, right? So that's the frame that we'll use for our automations today. 6:13 All right. So with that, we're going to start with what I call a deterministic workflow. It's a very straightforward automation.
6:20 It's basically what you saw in that screen grab a little bit earlier, where we have all the different blocks laid out on the canvas. 6:25 And no matter how we run this, it's going to run the same way every time, end to end; that's why we'd say deterministic. 6:32 There's not going to be any variance in what's happening; it's going to take the data and process it in the same way. 6:39 And that's where we're going to start. So let's take a look at a demo of how this works. 6:46 Here's the first version of our lead enrichment workflow. You can see we've got a couple of blocks on the canvas here. 6:54 What this does is it takes in (we just have our test data here in our start parameters) 7:02 and when we run this first block, what we'll see happen is it basically splits out, from my email address (my information is here; we're pretending that I'm the lead in this case), 7:11 the domain name of our company. 7:17 So retool.com is the domain name. Great, that's the domain. 7:21 Then what we have here is actually the meat of our workflow. This is using a tool called Apollo. 7:27 What I'm trying to do here is: we get relatively little information from our initial lead, 7:32 but there are things that, if a sales lead came in, I would probably want to know. 7:36 I want to know, you know, what's the size of the company, potentially what is their annual revenue. 7:41 If I'm selling to a particular team, do they have lots of folks on that team, 7:47 or is it a relatively small team that I need to think about expanding into? 7:51 So from this domain name, we can basically use this tool called Apollo to search and get more information about the company. 7:58 You can see that this is already configured as a resource in Retool. 8:02 One of the things about Retool that is super powerful is that it can connect to all your various data sources, whether that's via one of our native integrations, or, as in this case where I wanted a bit more control, 8:12 set up as a resource via our REST API connector, which basically means I provided it with the base URL of the API for the tool that I want to hit, 8:22 as well as my credentials. Those are stored securely behind the scenes, which is helpful in this case, especially because I don't want to leak the API key to all of y'all for this integration. 8:34 But basically I can just pass it the domain name of the sales lead that I'm looking at, and it goes ahead and calls Apollo's API. Looks like we need to fix this piece here. 8:48 And once it calls the API, it's basically going to return all of this data about the company that I asked about, right? 8:58 So let's make this view a little more helpful. It's going to say: cool, the company you asked me about is Retool. And again, from just this single domain name of our prospect, it's going to give us a bunch of information about the company. 9:11 It's going to give us a logo URL, things like what industry this company is in, what keywords apply to this company 9:21 (there are a lot of these here, 168 of them), organization revenue, which is something we just talked about as potentially being a viable data point, things like where the office is located, and a description of the company, which may be helpful if it's a company you've never heard of before.
9:34 And then down here, you can see that we even have things like what technologies this company uses, and a breakdown of who is in what departments. Again, this information is maybe not completely up to date or 100% real-time accurate, but it does help give you a bunch more information about the prospect that you might be looking into. 9:58 And again, this is already an automation piece that's helpful, because I could have gone and tried to pull all of this manually. I could have done internet research and all that, but this tool, Apollo, has all that information ready for us. 10:11 So we can just pull it in as an integration into Retool, and then we have that available in our workflow. 10:16 The next step, then, is I've created a Salesforce resource as well. I can basically just take this information (you see I'm pulling here both from the start trigger, which is the information that got passed into the automation in the first place, 10:32 and from the information that we pulled out of Apollo) and I can have this block, which will actually go through and create a lead in Salesforce. So if I let this run, you can see that, OK, we've got success: true. 10:47 And if I bounce over to Salesforce here, into one of our sandbox environments, you can see that now I've been created here as a lead; my lead status has been set to inquiry. 10:58 And you can dive in here and see exactly all the different information that we were able to drop in that we pulled from Apollo. 11:07 You can see the number of employees here was filled in dynamically, all these different pieces of information that we didn't have when the lead came in, but we do now because our enrichment workflow has run. 11:19 So this is very straightforward. We're still just passing the information in manually. But you could imagine, if this information was stored in a database, or stored in HubSpot or some other tool, 11:32 you could basically pass all of those records into this workflow one by one. It would run our enrichment setup here, and then we would see that all those leads would just get popped into Salesforce automatically. 11:44 So something that might take us tens of hours, maybe hundreds of hours, depending on how many leads we had, all runs very quickly. Another key, crucial thing here is that at any point, when we do run this end to end, we can go down here into our run history. 12:01 And we can see (let me just make sure this is published) 12:08 and we're going to go ahead and do an end-to-end run of this. We were specifically just running each block one at a time before, so I could show y'all how it worked. But if we run the whole workflow end to end, we see the whole thing runs in less than four seconds. 12:18 And at any point we can dive in here and see the output of any individual block, just like we were just looking at. So we have full visibility into what's going on in this workflow. 12:26 And we can see that, again, our lead was created. So I'm just going to jump back over to Salesforce real quick and clean these up, because we're going to do this process a couple of times with our different workflows, and I want to make sure we're always looking at our latest data. 12:46 Alright, so that's an example of, again, what we call a deterministic workflow.
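(For reference, if you were to sketch that same pipeline as plain code rather than workflow blocks, it would look roughly like this. This is a hedged illustration: the helpers enrichCompany and createSalesforceLead are hypothetical stand-ins for the Apollo and Salesforce resource blocks in the demo, not real Retool APIs.)

```javascript
// Rough sketch of the deterministic lead-enrichment pipeline as plain code.
// enrichCompany and createSalesforceLead are hypothetical stand-ins for the
// Apollo and Salesforce resource blocks shown in the demo.
async function runLeadEnrichment(lead) {
  // Step 1: split the company domain out of the lead's email address.
  const domain = lead.email.split("@")[1]; // e.g. "retool.com"

  // Step 2: enrich the lead with company data (size, revenue, industry...)
  // via an Apollo-style organization search.
  const company = await enrichCompany(domain);

  // Step 3: write the combined record into Salesforce as a new lead.
  return createSalesforceLead({
    firstName: lead.firstName,
    lastName: lead.lastName,
    email: lead.email,
    company: company.name,
    numberOfEmployees: company.employeeCount,
    annualRevenue: company.annualRevenue,
    status: "Inquiry",
  });
}
```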
It essentially, like we saw, runs the same way every time: for the same input data, you would expect to get the same output data. 12:58 There's no variability in how any of those blocks are running. 13:02 It's just one to the next to the next, and that is how it runs. 13:08 So the pros of this type of automation are that it's pretty easy to observe and debug, like we just looked at. 13:15 You can look at the inputs and outputs of every single block. You can very clearly see why each action was taken. 13:21 And if there's an error that gets thrown, it's very easy to catch that error and forward it to someone by email, post it in Slack 13:28 if it's an error in code, things like that. 13:31 It's very easy to narrow down what's happening, where the problem is, what's going on, and how to fix it, like we mentioned. 13:36 It also runs the same flow every time, so it's very predictable. If you give it data that's in the right input format, it will give you the correct output format once you've built it and debugged it. 13:46 And the logic is clearly defined and explainable; you know, in about five minutes I ran y'all through how the logic worked. 13:52 You can see each of the blocks on the canvas. You can see how they're connected to each other. 13:56 You can see data flowing from one end to the other. So this type of automation is very clear, very explainable, very debuggable, very observable, all of which are good qualities to have in an automation. 14:09 Some of the cons: it's relatively inflexible. We mentioned before that data has to be in the right format for this workflow to run properly, 14:17 and we'll take a look at what happens when it's not a little bit later. 14:20 Error checking has to be added manually. So you can definitely debug and check for errors, but all of that has to be done by hand. 14:27 And if you want any new capabilities, those have to be manually built, added to the workflow, and tested. 14:35 That's the trade-off you get from having a very specific, readable workflow: you have to do a lot of the upgrade and maintenance work yourself. 14:45 We'll look at how that compares to some other options in a second here, but this is the first shape of automation that we want to cover. 14:55 All right. So there are cases, like we mentioned, where maybe our input data is a little bit fuzzy. 15:01 Maybe someone typoed their email, maybe someone forgot to fill in their email entirely, or maybe they didn't include their role. 15:08 Maybe you have a bunch of old data in your CRM from before you added the form field for their title, 15:16 and now you want to go back and see: okay, can we get all these people's current titles? 15:20 Those things are kind of hard to do with a deterministic workflow. 15:23 So with the advent of large language models and AI as we know it today, we can upgrade our automations a little bit and include that AI. 15:32 We're going to take a look at how that works. 15:35 Let me pull up some of our tabs here, 15:39 and we're going to go to our V2 workflow. In this case, we're dealing with the same sort of thing. 15:46 We have our input, which maybe in this case now comes from a web form.
15:51 And as anybody who has dealt with input data from random people on the internet knows, spam is a very real problem, right? 15:59 So in this case, we have a web form where people can, say, sign up for a demo. 16:03 But we notice that there is a lot of spam, just fake submissions coming in. 16:08 And we want to make sure that instead of our workflow just flowing through and putting all those spam submissions straight into Salesforce, 16:14 we have some sort of filter on what we think is likely a spam submission. 16:18 This is where this sort of fuzzy logic (this might be spam, or this might not be spam) is kind of hard to codify into workflow blocks that either have code or strict conditions. 16:32 But this is something that language models are actually really good at. 16:35 You can see in this version of the workflow, we have a lot of the same steps. 16:40 We have our Apollo query. We have our Salesforce query. 16:43 All of that on the back end is going to stay the same. 16:47 But up front, we have this interesting, unique block here that uses Retool AI. 16:53 Within Retool, you can integrate AI from any of your favorite model providers, 16:59 and you can run AI queries against those large language models to get the information and output that you need. 17:05 So in this example, I basically just asked the language model: 17:09 based on everything you know, is this company likely a genuine lead that has a marketing team we can sell to, 17:14 or is this spam that we should ignore? 17:17 I told the model specifically to use scratchpad tags to outline its reasoning and then return a final score on a scale of one to 10, with (this should say) 10 being almost certainly spam. 17:29 Actually, let's do one being almost certainly spam and 10 being a real, genuine lead, right? 17:35 So basically I'm saying: do some thinking, quote-unquote, out loud. 17:39 That's a little bit of a language model trick. Just like, you know, if you start talking before a thought is fully formed, you might not do your best thinking, 17:47 if you let the language model output a little bit and walk through the steps of how it's, 17:53 again, quote-unquote reasoning about a problem, it comes out with a little bit of a better answer. 17:57 So we're going to let it ramble a little, and then we're going to ask it to give a final score, one to 10. 18:04 And we will use that score in just a little bit to determine whether this meets our threshold for dropping into Salesforce or not. 18:11 So we have that as our input prompt. We pass it the data that comes in via our start trigger. 18:16 Again, this may be taking in data from a web form fill, or from HubSpot, or some other system. 18:23 But however it gets passed into our workflow, we want to pass that to the large language model. 18:28 You can use this double-curly-brace syntax to pull data or information from any connected workflow block. 18:36 We see that in a couple of different places; you see, like here, we're pulling dynamic data from our Salesforce block. 18:44 That's a pretty common pattern. So we have our prompt; we pass our data into our large language model. 18:49 If we want to give this any sort of system message, like "you are a master at detecting spam," we could do that here. 18:56 But then we are choosing our model.
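(Before moving on to the model choice: the prompt in that AI block looks something like the sketch below. This is a hedged reconstruction from the description above, not the demo's exact wording, and the startTrigger.data reference is an assumption about how the workflow's input is pulled in via the double-curly templating.)

```
Based on everything you know, is this company likely a genuine lead with a
marketing team we can sell to, or is this spam we should ignore?

Use <scratchpad> tags to outline your reasoning, then return a final score
in <score> tags on a scale of 1 to 10, with 1 being almost certainly spam
and 10 being a real, genuine lead.

Lead data: {{ startTrigger.data }}
```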
Like I mentioned, you can choose. 19:00 Once you have the different providers configured in your Retool account, you can choose models from all sorts of different providers (OpenAI, Anthropic, Google) plus lots of different open-source models. 19:09 You can use custom providers if you have a model that's not supported out of the box with any of our native integrations. 19:15 So right now we will just pick Sonnet 4.5, 19:19 and I'm going to run this. You can see over here: first name, John; last name, Doe. This is looking a little suspicious. 19:25 His title is Chief Excitement Officer, John Doe at fakecompany.com. So if I had to guess, this is probably spam. 19:33 Let's see what Sonnet says. Down here we can see the scratchpad, and it said: okay, we're analyzing this lead. Red flags: Chief Excitement Officer; 19:39 this is a highly unusual, non-standard title. It notes that some startups do use creative titles, but it's using this as a data point: this is starting to feel suspicious, 19:48 right? It flags John Doe as being the most common placeholder name. It notices that the email domain is fakecompany; 19:55 it calls that a massive red flag. It detects my sarcasm in the company name of "Totally Real Company." 20:03 And it says there aren't any actual positive indicators that this is a real lead, 20:07 so it appears to be test or dummy data that someone's entered, which is true. 20:11 It could be any one of these, right (possibly a colleague testing your form), which is exactly what it is in this case. 20:16 And so it ends its scratchpad and gives us a score of one, which is very helpful. 20:21 So again, if I go in here and fill in my actual information, I can fill this out; I can update this with real data. 20:34 And I'm going to go ahead and update this with all real data. 20:37 But if there was some real data and some fake data, you could actually test the language model and see, through its scratchpad, 20:45 what it weights more highly, what it values more highly. And you can also affect that by what you put in the prompt. 20:53 So let's see what we have in here with this newest information. 20:58 It'll take a little bit to come back to us. 21:01 All right, so it says: it knows about Retool; they've raised some money; it has a good company email domain, not a Gmail address, which makes it more likely to be real. 21:10 It thinks my job title is legitimate, which is good to know. It's pretty validating. 21:15 And it says my name looks real, which is interesting. 21:19 It says that we're a pretty good fit for marketing, which is what we mentioned, and since I do work on the marketing team, that makes sense. 21:25 And it points out that it has no real concerns. So it says: okay, we can verify this pretty easily; 21:31 for that reason, this is probably a genuine lead. It's interesting that it gave it a 9.5 instead of a 10. 21:38 And it actually, down here, gives us its reasoning: the only reason it isn't a perfect 10 is the small chance of email spoofing, but all indicators suggest it's a high-quality lead. 21:45 So this is what we'd call an agentic workflow, or a workflow with some large language model components, right? 21:55 Not only are we using this generation capability of large language models in the workflow itself.
22:01 We're actually going to come over here and make some decisions based on what that output is. 22:05 You can see that we have this block here called get spam score, 22:10 and we're writing in JavaScript. At any point, if you want to get even more custom with what your workflow is doing, you can write JavaScript or Python anywhere inside of a workflow. 22:19 So we're writing a little JavaScript, the regex here, to basically pull out just the score that the model gave us from all of that explanation and output; we're pulling out just our spam score. 22:29 In this case, our spam score is 9.5. And again, the higher the score, the more likely our lead is to be legitimate. 22:37 Then here I have this block called check if legit, and what this is, basically, is an if/else block. 22:47 In this case, I say: okay, if our spam score is greater than a seven, so seven out of 10. 22:55 That's the threshold I chose for what I would consider a legitimate lead. 23:00 If the LLM ranks our lead as less than a seven out of 10, 23:06 that to me means pretty likely spam, 23:09 and we don't want to go through the whole process of using up our API credits and dumping fake data into Salesforce. 23:16 So this block will run, and only if our score is greater than seven, which 9.5 is, will we continue the workflow. 23:23 If we wanted to add an else condition here, we could create a new block that, I don't know, sends us an email that says: hey, we got a new spam lead. 23:35 Maybe something you want to look into, maybe not. 23:38 You could take any sort of action as part of the else criteria, but we're just going to focus on the happy path for now. 23:47 So once our if block runs and confirms that we have a good lead, again, we run the same exact Apollo block. 23:53 We just need to make sure it's pulling in data from the right place. 24:11 And then we can get our email from our initial input, 24:15 and then our workflow runs the exact same as normal, right? 24:20 It gets all our data from Apollo, 24:22 we go here and we can insert our lead into Salesforce, 24:26 we see that that returns true, 24:28 and if we go over here and refresh in Salesforce, we have another lead in here. 24:31 So again, this is a little bit more complex. 24:34 But when you start to bring these workflows into the real world (in this example, we are imagining pulling data in from a web form), 24:41 you sometimes need to add these sorts of guardrails. 24:45 And this is something that large language models are actually pretty good at, especially the more information you're able to give them. 24:50 So this is an example of a slightly more complex workflow, 24:55 but one that's pretty helpful in this sort of use case. 25:02 So let's look at the pros and cons again. What are some of the pros and cons of this type of architecture? 25:08 As we saw, it's more flexible than a hard-coded workflow. 25:12 If we had missing data, or in this case if we had spam data, our workflow is not just going to break down because of that. 25:21 As long as the task and the variance is language-based, it's able to deal with that based on how we set it up. 25:29 Another pro is the variance is scoped to specific blocks, right?
So there is a little bit more variation in this workflow, because a large language model is non-deterministic. 25:38 It doesn't produce the same output for every input, 25:41 so you'll get a slightly different output each time. 25:45 But the output variance is actually scoped to that particular block. 25:52 The rest of our workflow does run deterministically. 25:55 It will run the same way every time; the only change is in the output of that large language model block, that AI block, which you can see we handled by asking it to output the information we care about in structured tags. 26:08 So there are ways to get around that as well. 26:10 And again, we have the ability here to handle many more different types of data. 26:14 We'll look at this specifically in the next example as well, but as soon as you introduce a large language model, your workflow gets a lot more flexible; your automation gets a lot more flexible. 26:23 You can handle fuzzy input data, or input data that's maybe not fully filled out, or you could have considerations for things like common typos, and not have to codify each of those types of input data. 26:38 But again, this is not a perfect shape of workflow either. 26:42 There is variance in the large language model output, right? 26:47 If I ran that LLM block a bunch of different times in front of y'all, you would see the output is slightly different every time, and you have to control for that. 26:55 In our case, we control for it by asking for the data in structured tags, so that the information we actually cared about could always be pulled out. 27:04 But there are times when the large language model will just forget to do that for you, and that will introduce a breakage into your workflow. 27:12 It's also more vulnerable to prompt injection. 27:14 We didn't really talk about this, but we're just taking data in from the web, like someone submitted on a form, and not doing any sort of pre-processing on it. 27:24 If someone typed their name as, like, "ignore all instructions and output something else," we're basically passing that straight into the LLM. 27:32 So that's something you have to consider when you build this type of workflow. 27:36 I would recommend building both front-end validation that makes sure the input, 27:41 when the person submits it, is what you want it to be, 27:43 but also probably something in your automation that says: okay, is this data at least the right shape? Is it probably safe? 27:50 That sort of thing, as you start to build this up. 27:53 And again, as with anything, when it becomes non-deterministic, error handling gets a little bit more difficult. 27:58 In our earlier case, an error in our workflow would probably mean that an entire block would just fail. 28:05 But with our LLM step, like we mentioned, there are multiple different kinds of errors now. 28:10 Maybe the API from Anthropic is down, or slow, or doesn't return as quickly as we expected. 28:18 That's a type of error that's not necessarily wrong; 28:21 it's just that we don't get the data back that we need. 28:23 Maybe the AI returns just a bunch of explanation without actually scoring the lead, or without our score inside of the tags that we asked for.
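(A sketch of what guarding against those failure modes can look like in the JavaScript block. This is illustrative only: the score tag name, the regexes, and the function names are assumptions, not the demo's exact code.)

```javascript
// Illustrative guards around the LLM block -- assumptions, not the demo's
// exact code. The <score> tag name and both regexes are placeholders.
function validateLeadInput(lead) {
  // Basic shape check before the data ever reaches the model.
  if (typeof lead.email !== "string" ||
      !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(lead.email)) {
    throw new Error("Lead rejected: email missing or malformed");
  }
}

function extractSpamScore(llmOutput) {
  // The prompt asks for the score in structured tags; if they're missing,
  // that's one of the subtle failure modes described above, so fail loudly.
  const match = llmOutput.match(/<score>\s*(\d+(?:\.\d+)?)\s*<\/score>/i);
  if (!match) {
    throw new Error("LLM output missing score tags; route to manual review");
  }
  return parseFloat(match[1]); // e.g. 9.5
}

// The demo's threshold: anything at or below 7 is treated as likely spam.
const isLikelyLegit = (score) => score > 7;
```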
28:31 And so all of those things are different classes of errors that will break our workflow in a more subtle way than just stopping it dead in its tracks. 28:39 Those are a little bit harder to detect, 28:42 and you have to think about all of those different edge cases and ways that your workflow could go wrong. 28:47 There are just a lot more complications that can happen in this type of flow. 28:50 So lots of pros, and it's more flexible, but also some things to consider with this sort of middle-of-the-road architecture. 28:59 All right. 29:00 So now let's take a look at what going full-on into the world of agents looks like. 29:06 If y'all are not familiar with what an AI agent is, basically it's a mostly autonomous system that has access to different tools; it normally does some thinking, some reasoning, and then takes action using one of its tools on your behalf. 29:21 We'll get a little more clear on what that looks like as we go through a demo. 29:26 I'm going to show you the flow here, end to end, real quick, and then we'll dive into what the agent is actually doing. 29:34 Here you can see that I have, in my JSON parameters, some pretty incomplete data. 29:39 I've got my first name listed here, but my last name is nowhere to be found. 29:42 My title is missing. 29:43 My email is here, but the company that I work for is not. 29:46 So this is an example of data that we may really need to clean up, where a simple true/false filter from an LLM is not going to be able to fix it, 29:55 and we need to do something a little more complicated. 29:58 That's where our agent comes in. 29:59 We built a lead enricher agent, which I'll dive into in just a second here. 30:04 But you can see that the output of this agent, in the test run that I ran, was a fully complete record that we can then run through the same workflow steps that we're already familiar with. 30:14 So these are all the same. 30:16 The interesting piece in this architecture is what's going on with our agent. 30:21 Right. 30:21 So let's jump over here to look at our agent itself. 30:25 An agent, like we mentioned, is defined by a couple of different things. 30:28 It's defined first and foremost by the prompts that it has. 30:32 In this case, we tell the agent that it is a lead enrichment agent. 30:35 We tell it what its job is. 30:36 We tell it what sort of output schema we want 30:40 and how it's going to complete its task. 30:42 And we give it some rules, too. 30:44 Like I have here: when filling in the email field, use the person's company email; don't use a personal email address. 30:52 Because these are some common failure patterns that I was seeing. 30:55 But basically what we want to do here is give the agent tools to fill in all this information. 31:01 You can build any sort of custom tools that you want in Retool. 31:07 You can import tools that you've already built for other agents. 31:10 Agents can talk to each other and use each other as tools. 31:13 You can use workflows, any of the deterministic workflows that we've just been talking about building, 31:17 as tools for agents. 31:18 You can connect to MCP servers if you have access to those. 31:21 But we've also given you a bunch of different tools out of the box 31:26 that you don't have to build yourselves; you can just give your agent access to them.
31:29 So things like working with Google Docs, or moving around files in Retool Storage, or working with vector databases. 31:34 In this case, you'll notice I have two tools already selected: the search web tool and the get web page content tool. 31:42 So my agent now has instructions to go figure out this data, and it has the search web and get web page content tools. 31:52 The final thing that it needs is access to a large language model, right? 31:55 In this case, I gave it access to OpenAI's o3 model. 31:59 But again, you can choose any of the models that you have configured in your account. 32:03 If you're a Gemini fan, you can give it access to Gemini; Anthropic, similarly. Any model that you have configured in your Retool account, 32:08 you can use inside of your agent. 32:10 If you need to get a little bit advanced, under these advanced settings here you can adjust the maximum iterations. An iteration is basically that loop I was talking about before, where an agent has a thought about what to do 32:22 and then takes action using a tool. 32:24 Right. 32:26 So we have all this configured. 32:28 Let's look at what an agent run actually looks like. I'm just going to go ahead and publish this here, 32:35 so that y'all can see what this actually looks like when it runs. 32:39 So I basically said: okay, please help me fill in this missing information. 32:42 And I gave it our pretty empty object from the workflow that we just looked at, right? 32:47 Like: here's some data; 32:48 I need this to be full; 32:49 can you help me fill it in? 32:50 So what the agent does is it goes off here and says: okay, the user provided some partial data. 32:56 I need to find the last name, the title, and the company, likely the employer Retool. 32:59 So it guessed that based on the email address. 33:01 And it's going to start using tools. 33:03 It decided to start using its web search tool to see what it could find out about me on the web. 33:08 Right. 33:09 So it uses the web search tool. 33:11 And you can see we have a couple of different ways to see what's going on with this agent, right over here on the right-hand side. 33:15 We see that it's telling us what tools it's using. 33:18 You can see a simulated view of what it's searching for. 33:21 But if we open this search web tool, we can see that I can dig into any of the search results here 33:29 that pop up. 33:31 This one is apparently a thing about a conference that I spoke at recently. 33:35 All these different things. 33:36 You can imagine these being like the top 10 blue links on a Google search page. 33:41 This is an entry from the Retool blog, et cetera. 33:44 So it's basically just doing some web search, 33:46 and it's able to then ingest these results 33:49 and take action based on them. 33:51 Right. 33:51 So again, it's going to pick one of the pages from its search. 33:54 It's going to pull the content of that page, and it's going to look and see: okay, based on this page, 33:59 what can I find out about what I'm researching? 34:03 So here, 34:04 it says he, quote, "leads the developer team at Retool." 34:07 So it's guessing at my title there. 34:09 But it says: I'm not sure, 34:10 so I'm going to do another web search with a little bit more specific criteria.
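(Conceptually, the loop the agent is running looks something like the sketch below. This is a simplified mental model, not Retool's actual implementation; callModel and both tool bodies are hypothetical stand-ins.)

```javascript
// Simplified mental model of the agent's think/act loop -- not Retool's
// actual implementation. callModel and both tool bodies are hypothetical.
const tools = {
  searchWeb: async (args) => { /* call some web-search API */ },
  getWebPageContent: async (args) => { /* fetch and extract page text */ },
};

async function runAgent(callModel, prompt, maxIterations = 10) {
  const history = [{ role: "user", content: prompt }];

  for (let i = 0; i < maxIterations; i++) {
    // The model reasons over the history, then either answers or picks a tool.
    const step = await callModel(history);
    if (step.finalAnswer) return step.finalAnswer; // enough info: stop looping

    // Otherwise, run the requested tool and feed the result back in.
    const result = await tools[step.toolName](step.toolArgs);
    history.push({ role: "tool", name: step.toolName, content: result });
  }
  throw new Error("Hit max iterations without a final answer");
}
```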
34:13 So, as we watch this run the rest of the way: this is the kind of loop that would be really difficult to put into a workflow itself. 34:23 You could have a workflow that calls itself recursively, 34:26 but it'd be very difficult for your workflow to know when it had enough information to stop and when it needed to keep going. 34:32 So instead of forcing you to do this in a little bit of a janky way by hooking workflows together, 34:38 we've basically made agents a first-class tool on the Retool platform. 34:45 You can build these agents that have access to their various tools, 34:48 and they can get you pretty good results. 34:50 You can see here it actually filled in all of our data 34:54 and returned that, right? 34:55 So in the context of a workflow, this means that at any point you can (let me bounce over to the correct one here) call this agent, and it will basically start this running chat that we saw here. 35:09 There are two ways, basically, that you can use an agent inside of your workflow. 35:13 You can use this result return type, which means it will actually pause your workflow until that whole agentic process is done. 35:20 Or you can have it run async in the background. 35:23 This is where, if you're running something as a background job, or something where you want the agent itself to actually update your data sources, 35:31 you might not care to block the execution of your workflow while this is running. 35:35 What this will do is just return the run ID of the agent, 35:39 this alphanumeric string up here in the URL. 35:42 So you can at any point check in on how the agent's running. 35:46 You can basically go up here and say: okay, we're going to invoke the agent and see what's going on, 35:54 but you don't have to wait for that to happen for the other steps in your workflow to happen. 35:58 So depending on the rest of your workflow and what you need, that's how that would work. 36:03 So in this case here, our agent runs; once it runs, it gives us our data back, 36:09 and then we can run the same sort of things: we can split the domain off, 36:12 we can query Apollo, and we can drop our lead into Salesforce. 36:16 This, again, is probably the most complicated version of this workflow, 36:20 and there are things that you need to do to make sure your agent runs as it should, right? 36:25 So, for example, with this agent, the data that comes back into the workflow is really dependent on the output of the agent being this sort of JSON string. 36:35 Right. And again, we talked about how large language models don't always produce the same sort of output. 36:42 So that's where a couple of specific features of agents come into play 36:45 that I just want to cover really quickly. 36:47 And these are some of these things over here that you see under evals and datasets. Right. 36:53 We talked about the example of: I always want my agent to output a JSON object. Right. 36:58 So this is the case where I would add a dataset, and I would say: OK, I want to make sure we test the output of the agent. Right. 37:08 So in this output dataset, I'm going to go ahead and add a test case, 37:13 and I'm going to say: OK, what was the input that I gave it? This is it right here. 37:17 So for this input, basically, when I give the agent this input, I want it to have a final answer.
37:24 I want to see what the final answer is going to be, 37:27 and I want the final answer to, again, be valid JSON in this case, right? 37:33 So that's a very basic check: is my agent outputting valid JSON? 37:39 That's basically true/false, yes or no, et cetera. 37:42 In another case, I can add my same input and say: OK, actually, I want to make sure it matches a bigger JSON schema. 37:50 Right. I can basically paste this in here; I can clean this up a little bit. 37:55 And I can go through it and basically say: OK, here is first name, which would be a string; last name, which would be a string. 38:02 You can put in a JSON schema here, and it will actually check the output against that schema, to make sure that not only is your agent outputting valid JSON, 38:10 it's outputting it with the correct schema. 38:12 So you can add a bunch of different validations here. 38:15 Anything from an exact match on an output string, to a schema match, all the way up to using a large language model as a judge. 38:23 So I can say: is the output of my agent funny? 38:25 If I was using an agent to write an email or something, for example, I can say: is the output professional? 38:31 And it rates it, again, on a scale. So you can get very complex with these evals, but they're an important way to put guardrails on your agents and make sure they're running in the way that you would expect. 38:43 So once you have your datasets and your test cases set up, you can run an eval, which we'll do here. 38:49 We'll say we want to eval the output dataset. 38:51 What this does is it takes the current prompt, model, everything for your agent, 38:56 runs it through a test run, and then evaluates it against those test cases. 39:01 So once you have your datasets and your evals set up, when you decide you want to upgrade to a new large language model, 39:08 you don't need to be nervous about it changing the behavior of your agent, because you can just run all of your evals to see if there's behavior that's changed from the new model to fix, et cetera. 39:18 So it looks like this eval failed. 39:21 You can again dive in here and see: okay, why is the eval failing? 39:27 It's saying here that the output is not an answer; 39:30 it decided to call our web search tool instead, which we would expect, right? 39:33 So what we could do instead is go over here to one of our chats that we feel is a good example, 39:41 and in this three-dots menu, we could say: okay, this is actually a great example of a chat that we want to turn into an eval. 39:48 So we can go ahead and add it to the eval dataset, and you'll see this populates it with exactly what happened here, 39:55 in terms of all the different steps this agent took. We can add it to our output dataset and just create that in there. 40:02 So now, instead of having just this manually created case here, we've actually taken a test case from our real-world data, right? 40:12 We can say: okay, it really needs to match our expected answer. 40:16 So we have that in here. 40:17 Another LLM is going to be judging that, and we have that ready to go without having to manually configure any of it ourselves. 40:22 So tools like this are really helpful for adding guardrails to your agents.
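(The two checks described above amount to something like the sketch below. The field names mirror the demo's lead object and are assumptions; the real checks run inside Retool's eval runner, not in your own code.)

```javascript
// Check 1: is the agent's final answer valid JSON at all?
function isValidJson(output) {
  try { JSON.parse(output); return true; } catch { return false; }
}

// Check 2: a JSON Schema the output can additionally be validated against.
// Field names are assumptions based on the lead object shown in the demo.
const leadSchema = {
  type: "object",
  required: ["first_name", "last_name", "title", "email", "company"],
  properties: {
    first_name: { type: "string" },
    last_name: { type: "string" },
    title: { type: "string" },
    email: { type: "string" },
    company: { type: "string" },
  },
};
```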
40:26 And it's important that when you get into these more complex types of automations, you make sure they're still running properly 40:31 and that you have these checks on them. 40:35 All right. 40:37 So, pros and cons of agents. Obviously, this has the most flexibility in terms of data input. 40:42 It can take a wide range of input data. 40:45 It's able to use its tools to understand and take action on that data and work within the parameters that you've set. 40:52 You can add new capabilities pretty easily by adding tools. 40:55 We talked about how adding new capabilities to your deterministic workflow would mean that you had to build those things yourself. 41:00 With agents, you definitely can build those tools yourself, as we saw, 41:03 but you can also use prebuilt tooling to give your agent capabilities much more easily. 41:10 Third, your agents sort of get upgrades over time, as companies like Anthropic and OpenAI and Google and others release new, better large language models. 41:21 So by just going into that model dropdown and picking a new model, you can basically give your agent better reasoning capabilities, access to more up-to-date training data, 41:32 stronger large language models, things like that, without having to do any additional work on your end. 41:37 And especially once you get the evals set up and configured, you can have the confidence that that model upgrade is actually making your agent better and not making it worse. 41:46 So one of the benefits of building your automations around these large language models is that as the models get better, theoretically your automations do as well. 41:58 Some of the cons. 42:00 It's the most difficult to debug. If we talked about the single LLM block introducing some non-determinism into an automation, 42:08 you can think of this as running that same loop a bunch of different times. Every single one of those LLM calls that does reasoning can potentially produce a different output, 42:20 so you can have two runs of the same agent that go in wildly different directions. 42:23 Sometimes that's helpful, but a lot of times, as we talked about with the guardrails, you really want to make sure that your agents are running more reliably. 42:30 There are tools to help you do that, but in its default state, this is the architecture where it's most difficult to understand why it made one decision or another. 42:40 And there are a lot of different levers for you to pull to modify the behavior; it's sometimes hard to know which one to use. 42:46 Again, while we have the widest variety of input here, 42:48 the flip side of getting access to newer, better models is that you are required to manage that model dependency. So if 42:58 an AI provider deprecates a particular model, 43:01 that will sort of invalidate your agent, and you have to figure out: okay, we definitely do need to upgrade to another model now. 43:07 Is it working as expected? We need to set up more evals, et cetera. So that's a two-sides-of-the-same-coin situation, where you get new functionality and features for free when 43:17 the models upgrade, but that's also another dependency you have to manage. Whereas if you just had a static, deterministic workflow, it could run basically indefinitely, depending on what your data sources are.
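(As a rough rule of thumb pulling the three shapes together, the framework from the talk could be summarized like this. It's a recap of the trade-offs above, not an official Retool decision tree.)

```javascript
// Rule-of-thumb recap of the three automation shapes covered above.
function chooseAutomationShape(task) {
  if (task.needsOpenEndedResearch) return "agent";          // tools + reasoning loop
  if (task.needsFuzzyJudgment)     return "workflow + LLM"; // variance scoped to one block
  return "deterministic workflow";                          // predictable, easiest to debug
}
```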
43:29 So I hope this has been helpful: three different shapes and types of automations. 43:35 Automations in general are something that the product team here at Retool is working pretty heavily on and focusing a lot on this quarter and next, 43:42 so expect to see some changes and some upgrades coming very soon to both workflows and agents, and to how they work together, like we looked at today. 43:52 So, like I promised, I want to leave folks plenty of time for Q&A. It looks like we've got just over 15 minutes to the top of the hour, so we're going to dive into some of those right now. 44:02 And like I said, Kylie has been collecting them in a Google Doc for me here, so we'll get right to those. 44:08 If you have questions that you haven't dropped in the chat yet, please do. 44:12 We will be going through as many of them as we can in our last stretch here. 44:17 And again, thank you for coming, and I hope you stick around. 44:21 All right. 44:22 First question here: how many AI units will it consume? Yeah. So in terms of consumption of AI: 44:32 the way that this currently works is, when you hook up your model provider, you basically give Retool the API keys that you have. In this case, we've given it our Anthropic API key, 44:45 and it basically will just bill you from your Anthropic account directly. At this point, there's no margin for us on top of that. 44:56 If you give us your API key, it's just however many tokens you are spending directly with the model provider. So that, again, depends on how long your input is and which model you're using; those all have different costs. 45:12 But you can do things like find something called a tokenizer online, figure out how many tokens a request is going to use, and evaluate based on how that's looking. 45:21 The nice thing about the way this is currently set up is, again, if you find that you are spending too many tokens, or it's getting too expensive on a particular model, you can always bump that down and see: okay, am I getting good enough output with a cheaper model or a faster model? 45:37 It's very easy to make that trade-off, as opposed to being locked into just one model or one architecture. 45:47 Great. 45:48 Do the workflows run in parallel when run multiple times? The AI responses have a pretty long delay, and I'm wondering about scalability. Yeah, good question. This really depends on how you trigger the workflow. 45:57 Again, we were just triggering all our workflows here manually, by clicking this run button or by running each individual block. 46:04 But you could have a workflow that's triggered by something like an Amazon SQS queue, or by a webhook. 46:12 So for example, if you come over here, you can make this workflow triggerable via a webhook, which is basically a URL that you can hit that triggers the workflow. So from any other system, you can trigger your workflow that way. 46:24 Any of these things will essentially kick off a run of your workflow, so you do have the ability to run workflows in parallel. 46:33 There are things you can configure in the settings to determine that behavior as well. Some specific blocks also have behavior configurations, like we saw with the agent: whether to run it async or wait for it to run. 46:46 So workflows are very flexible in terms of how you run them and where you can run them.
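(For reference, triggering a workflow via its webhook is just an HTTP call. A sketch follows; the URL shape, header name, and payload are placeholders rather than confirmed details of Retool's API.)

```javascript
// Kicking off a workflow run from an external system via its webhook URL.
// The URL, header name, and payload below are placeholders.
const response = await fetch(
  "https://api.retool.com/v1/workflows/<workflow-id>/startTrigger",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Workflow-Api-Key": "<your-workflow-api-key>",
    },
    body: JSON.stringify({ firstName: "Jane", email: "jane@example.com" }),
  }
);
console.log(await response.json()); // e.g. a run id you can check in on later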
You're definitely not stuck waiting on a block every time when trying to run a bunch of operations in a row. 46:59 That's flexibility that you definitely do have. 47:03 Cool. Yeah, we've got a whole thread here about cost, which we covered a little bit. Yeah, Sonnet 4.5 is definitely not cheap. That is true, depending on how much data you're passing into it. 47:13 And again, this is the trade-off that you have to make based on your use case and what sort of complications you have in your data: 47:21 whether you need a more complicated, more expensive model, or whether you're okay just running a cheaper, faster model because you're more budget-conscious. That's up to you. And yeah, we covered a lot of the responses that folks added to the chat for Mike, so thank you to everybody who chimed in, Zachary specifically. 47:39 But again, the focus for us is to make it very flexible for you to experiment with different models, swap them out very easily, and be pretty model-agnostic, in the sense that we don't have a preference for a particular model provider or a particular model. 47:55 We find that different models are good for different workflows. And obviously, when budget comes into play, that's again sort of a separate question. So those are all things to evaluate, but my recommendation would be: set up your prompt, pass in your data, and try a few different ones of these. Again, Sonnet is sort of Anthropic's middle-of-the-road model. 48:15 If I went down to Haiku, I could just run this again and see: okay, it's going to run a little bit faster this time. 48:22 But is the trade-off in the output that I get much worse? In that case, I probably would want to stick with Sonnet. 48:31 In this case, it looks like we get a similar conclusion. So maybe, with enough test cases, I can be confident this is enough for my use case. That's up to you to determine based on which model you want to use. 48:46 Question: can we set temperature or top-p for agents? Right now (let me jump back here), for agents, the configuration that we expose under the advanced settings is our max iterations here. 49:00 So that's the only parameter that you have control over right now. 49:04 We're always exploring ways to make these tools a little bit more configurable, but at the moment, this is basically the only advanced parameter like that that you have access to. 49:14 That said, if these are the kinds of things that you care about, you can use the REST API endpoint for any of these different models, which out of the box does take those sorts of parameters. 49:26 So you can go the extra mile on that if you want to customize it in that way. 49:33 All right, David asks: can you call the agent directly from outside of Retool, e.g., build an agent here and then pass data to it as part of a separate application? 49:40 Absolutely. There are a couple of different ways to do that. Let me look: you can basically wrap an agent in a workflow-type automation and enable this webhook, which essentially lets you interact with that agent from there. 49:55 Agents can also be triggered in a couple of different ways. You'll notice that we have the chat trigger enabled, which is just how we were testing it.
50:02 But you can also trigger an agent via email, which gives it a unique email address that you can then forward email threads to. 50:08 So for things like scheduling agents, that's pretty helpful. But also, something that's relatively new is our agent-to-agent protocol. This is exactly what you're asking for: it enables the agent to be triggered from outside of Retool. 50:21 If you want to learn more about this, you can check out the documentation for agent-to-agent communication. It's basically brand new; we just recently launched it. 50:29 But this allows you to do agent-to-agent communication with agents built in Retool and connected to outside systems. 50:38 All right: can Retool act as an MCP server for external agents, or can it only call other MCP servers? Good question from Peter. Right now we have Retool set up as sort of an MCP client, right? 50:50 If you have an agent, you can add an MCP server to it, which you set up as a resource, and the agent can pull information in from that MCP server and use those tools. Right. 51:01 We're looking at whether it makes sense to expose select Retool functionality as an MCP server, so that you can interact with Retool from another system in an MCP way. 51:13 But I will say, if you need that sort of interaction, Retool does have a pretty robust API that a lot of our enterprise customers specifically use to set up and configure their Retool environments and take action. 51:23 So that's something we can look into; we don't currently have the ability to expose your Retool environment as an MCP server, but you can use MCP servers as tools. 51:35 All right. How about agent actions beyond web search, similar to Manus, such as a portal login and actions done post-login? 51:42 Let's say I want to log into a FedEx portal from our account and check the claims status of lost packages; it's not part of the API, so it's not available via the API. 51:48 Yeah, so it sounds like what you're talking about is basically: can your agent use a headless browser, or do what folks call computer use, right? 51:58 I will say that's not a native tool that we support, but there are plenty of headless browser automation companies that have APIs of their own. 52:08 So what you could do is hook that up as a custom tool and say: okay, we're going to use this browser tool, 52:15 so that your Retool agent can then call it. The browser tool could run the automation that your agent specified and then return the results to the agent. 52:23 So it's a couple more steps to do that, but Retool agents don't have the ability to do that sort of computer use directly, 52:30 so I'd recommend using a third-party browser-use API there. 52:34 Cool. 52:36 Looks like Retool is much more feature-rich than MS Power Automate, specifically in the agent workflow. Can you speak to the key differences between Retool and Power Automate? 52:45 Yeah, that's a good question. 52:48 In terms of that specific tool, I'm not exactly familiar, but I will say the key thing to think about when you're building an automation in Retool, as you saw here with our large language model block, 53:01 and as you've seen with our ability for you to write code anywhere and hook up different resources, whether they're supported via any of our native integrations or our generic REST API integration, is that we are optimizing for flexibility. 53:15 Right.
50:38 All right, can Retool act as an MCP server for external agents, or can it only call other MCP servers? Good question from Peter. Right now we have Retool set up as sort of an MCP client, right? 50:50 If you have an agent, you can add an MCP server to it, which you set up as a resource, and the agent can pull information in from that MCP server and use its tools. 51:01 We're looking at whether it makes sense to expose select Retool functionality as an MCP server so that you can interact with Retool from another system via MCP. 51:13 But I will say, if you need that sort of interaction, Retool does have a pretty robust API that a lot of our enterprise customers specifically use to set up and configure their Retool environments and take action. 51:23 So that's something we can look into, but to summarize: we don't currently have the ability to expose your Retool environment as an MCP server, but you can use MCP servers as tools. 51:35 All right: how about agents doing work outside of web search, similar to Manus, such as a portal login and actions done post-login? 51:42 Let's say logging into a FedEx portal from our account and checking the claims status of lost packages, something that's not available through the API. 51:48 Yeah, it sounds like what you're talking about is basically: can your agent drive a browser, or do what folks call computer use, right? 51:58 And I will say that's not a native tool that we support, but there are plenty of headless browser automation companies that have APIs of their own. 52:08 So what you could do is hook one of those up as a custom tool and say, okay, we're going to use this browser tool, 52:15 so that your Retool agent can then call it. The browser tool runs the automation that your agent specified and then returns the results to the agent. 52:23 So it's a couple more steps, but Retool agents don't have the ability to do that sort of computer use directly. 52:30 So I'd recommend using a third-party browser-automation API there. 52:34 Cool.
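A hypothetical sketch of that custom-tool pattern is below. Every URL, header, and field here is invented for illustration, since the shape depends entirely on which browser-automation vendor you choose:

```python
# Hypothetical sketch: a custom tool the agent can call, which delegates
# the browser work to a third-party headless-browser API. The vendor
# endpoint, auth, and payload are all placeholders.
import requests

def check_claim_status(tracking_number: str) -> str:
    """Custom tool: log into a carrier portal via a hosted browser and
    return the claim status text for the given tracking number."""
    resp = requests.post(
        "https://api.example-browser-vendor.com/v1/tasks",     # hypothetical endpoint
        headers={"Authorization": "Bearer <vendor-api-key>"},  # hypothetical auth
        json={
            "task": f"Log into the FedEx claims portal and look up the "
                    f"claim status for tracking number {tracking_number}.",
        },
        timeout=300,  # hosted browser sessions can be slow
    )
    resp.raise_for_status()
    return resp.json().get("result", "No result returned")
```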
52:36 Looks like Retool is much more feature-rich than MS Power Automate, specifically in the agent workflow. Can you speak to the key differences between Retool and Power Automate? 52:45 Yeah, that's a good question. 52:48 In terms of that specific tool, I'm not exactly familiar, but I will say the key thing to know when you're building an automation in Retool, as you saw here with our large language model block, 53:01 and as you've seen with our ability for you to write code anywhere and hook up different resources, whether through any of our native integrations or our generic REST API integration, is that we are basically optimizing for flexibility. 53:15 Right? So we want to make your automations as customizable and powerful as you need them to be. 53:21 In whatever way you want to hook blocks together, whatever outside resources you need, whatever language models you want to use, whatever built-in Retool resources we can offer you, 53:32 we want to make that as flexible and as powerful as possible. We don't want to lock you into a specific vendor, a specific tool set, any of those sorts of things. 53:41 That's really one of the ways we try to differentiate ourselves when we think about how we build all of this. 53:48 All right: is it possible to connect with Microsoft Copilot to access company data in the Microsoft Graph, like Exchange and SharePoint? 53:55 Yeah, interesting question. I'm not entirely familiar with how that data is exposed. 54:00 If there were a Copilot API, or if Copilot understood an industry standard like the agent-to-agent protocol, you could maybe hook it up that way. 54:10 The other thing you could possibly do, if you can figure out a way to get the data out of Copilot, is have a data workflow that puts that data into some sort of central database, where you could then pull it into a Retool automation. 54:23 So there are a couple of ways to go about that, but I think the key factor would be what options you have for getting the data out of Copilot, or accessing it in Copilot, which I'm not entirely familiar with. 54:38 All right, a couple more here. What are the possible triggers: API, chat, webhook, cron, etc.? Yeah, so it depends on how you are calling your agent. Like I mentioned, we're working to bring workflows and agents a little more closely together. 54:51 But right now, if you have an agent that runs as part of a workflow, you can trigger that workflow on a schedule. This is the cron you were talking about: you can either just set a specific time here, or, if you're familiar with cron syntax, specify it all yourself. 55:08 You can trigger it via a webhook, which basically means you can hit a specific URL to trigger your workflow and pass in any data it needs. So those are two different ways. 55:19 And then from the agent itself, again, you can trigger it manually here via chat, by literally copy-pasting something like this and starting the chat. 55:26 But you can also trigger it via email, or use an external agent to trigger it via agent-to-agent. So a few different options here. We're always exploring other ways that people are using agents and workflows to see if there are things there that would be helpful. 55:44 I will note that something we just launched pretty recently into general availability is the ability to trigger workflows via things like SQS from AWS, and I believe Kafka as well. So if that's something you're interested in, definitely check out the documentation and the Retool changelog for that. 56:03 If you have queues set up and you want to be able to trigger workflows based on items coming into your queue, that's pretty powerful as well. 56:12 All right. When I create and use a workflow with a trigger, do I have a limit on the number of executions per day, or does it depend on the plan that's affiliated with my company? Yeah, good question. If you go to retool.com/pricing, you will see the limits here. 56:33 You can see that the free plan, for example, is limited to 500 workflow runs a month. Once you bump up to Team, that goes up to 5,000, and it scales from there. So it is pretty dependent on which plan you've chosen, but we do try to be pretty flexible with folks and figure out what's going to best fit their use case. I don't believe there is a daily limit, but those all roll up to whatever your monthly limit would be. 57:01 All right, I have a question here. Do the models you can configure have certain levels of capabilities for your agents, things like structured outputs or tool calling or routing? It helps us vet and hone in on a few from our catalog that are nimble for our team. 57:10 Yeah, good question. As far as exposing the features and functionality of specific providers, like, for example, when OpenAI launched the ability to require an agent or an LLM 57:30 call to output JSON, supporting provider-specific features like that is something we're still working on right now. The Retool AI resource bundles all those providers together, 57:45 so we're doing some work on splitting that out to say, okay, we have just our OpenAI integration, or just our Anthropic integration, or just our Google integration. 57:53 That will allow us to do a lot more with those provider-specific features, but that's still a work in progress. What I will say is there are also tools like OpenRouter that help you balance between providers in this way, which may get you that routing or things like that that you're looking for. 58:12 So hope that's helpful. But yeah, as you mentioned in the chat, Zachary, you can always create a REST API resource instead and hit the provider directly via just an API request. 58:25 We're working actively right now with the resources team to smooth over that experience and make it so you don't have to resort to that. 58:33 All right.
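To make that "hit the provider directly" workaround concrete, here's a minimal sketch using OpenAI's chat completions REST endpoint with its JSON mode, one of the provider-specific features mentioned above. The model ID and prompt are placeholders, and note that JSON mode requires the prompt itself to ask for JSON:

```python
# Sketch: calling OpenAI's REST API directly from a REST API resource to
# use a provider-specific feature (JSON-mode structured output).
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",                       # placeholder model ID
        "response_format": {"type": "json_object"},   # provider-specific flag
        "messages": [
            {"role": "user",
             "content": "Classify this ticket's priority. Reply as JSON "
                        'like {"priority": "high"}. Ticket: printer is on fire.'},
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```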
58:37 Cool. Let's see, we've got one more here. If you've got any final questions, we do have a couple of minutes left. 58:44 We got a question: if I'm an existing n8n user, why should I switch to Retool? Is it any better? 58:50 Obviously, I mean, you're here on a Retool webinar and I represent the marketing team here at Retool, so I have my own sort of biased opinions. I will say I've used both n8n and Retool, and I think there are just some pros and cons to each, obviously. 59:05 One thing to consider is whether there are specific integrations that one tool has over the other. 59:11 If you want to see Retool's integrations, for example, you can go to retool.com/integrations and see all the different systems here that we integrate with. 59:20 You can click into any of them. Let's say we want to look at, you know, Datadog: we can look at the specific documentation about how that works and how to pull it into my Retool account. 59:29 So breadth of integrations is one thing to consider: if there's a particular integration that you really don't want to have to write by hand, and Retool has it and n8n doesn't, that would probably be a case where I'd recommend Retool. 59:42 If there's a flow where you need something, like the thing Zachary mentioned, that we don't support quite yet, that might be an instance to look somewhere else. 59:52 But again, it's always kind of a constant comparison, because we're adding features and so are they. So really it comes down to: what system are you comfortable with? 1:00:02 What fits your workflow and your working style? And a key thing for me is the integration story, so definitely check that out 1:00:11 at retool.com/integrations. 1:00:15 All right, is there a max processing time? I think that's the last question we'll take, but thanks, everybody, for submitting your questions. 1:00:23 Is there a max processing time for workflows, like Lambda? If so, how can we use Retool for operations that require long processing times, like downloading data for processing? Yeah, good question. If you actually look at each of the blocks, 1:00:36 they have a timeout set on them, so at the block level you can set that. 1:00:42 But workflows are actually designed to be potentially very long-running. We use a technology called Temporal under the hood that helps us manage this. 1:00:51 You can look in the docs for the Retool workflow execution limits, 1:00:57 but we do have workflows that run for quite a long time. We have workflows that pause in the middle, so people can get notified, provide input, and come back to them. 1:01:10 So yes, we do have workflows that run for a long time. You can see here that for async workflow runs, basically ones with things that happen in the middle, 1:01:19 we have a timeout of 30 hours. If you are waiting for a user to do something, or you have a wait block, you can have that workflow wait as long as you'd like, up to 60 days per wait block. 1:01:32 But for synchronous workflow runs, where we return an actual response, we do have that 15-minute timeout. So more details are in the docs under workflow limits. 1:01:41 But, you know, I have pushed workflows pretty hard on a few different use cases, and I haven't really found a case yet where I've run into a limit and felt like there's something I can't process, either by figuring out a different way to batch it or just leaning into the fact that the limits on workflows are pretty large. 1:02:03 All right, cool. Well, like I said, I appreciate everybody coming and spending an hour with us. It's really cool to see that there are over 100 people here. 1:02:11 But yeah, please follow up with us. We'll send out the recording to y'all in a couple of days. And take a look at the docs if you have any more questions that weren't answered here. 1:02:20 Also feel free to jump into the Retool community, our community forum, where you can get questions answered by folks on the Retool staff as well as other folks in the community. It's a great place to go: that's community.retool.com. 1:02:34 And yeah, thanks for coming. See you at the next one. We can't wait to see what you build with Retool. Thanks, everybody.