Transcript
0:01 All right. 0:02 Hi everyone. Hey Kenan. 0:05 >> Hey, good morning. 0:06 >> Good morning. Welcome back. This is the 0:10 final day of AI build week. It is day 0:13 four. Uh we did have our day five on 0:16 Friday, so the schedule was a bit 0:19 shuffled there, but welcome back for the 0:22 final day of AI build week. All of these 0:25 sessions are being recorded. You can 0:27 find them on our YouTube channel and in 0:30 the community. Uh they will all be 0:32 posted there, as well as all of the Q&A. 0:37 Really awesome questions everyone has 0:39 been asking throughout the week. All of 0:40 the questions that we got to and all of 0:42 them that we didn't have time for, 0:45 they're all posted in the day's thread in 0:47 the community. So you can find them 0:49 there. You can follow up. You can ask us 0:51 more questions. We're in the community 0:53 and um we're there to help you. So, 0:57 there's still more time to win swag. A 0:59 few of you have already won swag 1:01 throughout the week. So, uh just engage 1:03 here, engage in the community. Um answer 1:06 the poll and you might win something 1:08 cool. So, I'll be following up with 1:10 those swag winners later 1:12 this week. And um also toward the 1:16 end of the session, you will see a poll. 1:19 We really want to know your feedback. 1:21 Did you like these sessions? What do you 1:23 think? Um, this is the first time we 1:25 have done something like this. We've 1:27 never done a multi-day webinar. Um, so 1:30 massive shout out to Kenan, to 1:33 Angelique, the other presenter, and also 1:35 to Daniel Kim, who is behind the scenes 1:38 making sure that this runs as smoothly 1:40 and seamlessly as possible. So, thank 1:42 you so much, Daniel. 1:44 Um, and lastly, we have a webinar 1:47 happening tomorrow. If you really like 1:49 this content, and you'd like more build-along 1:51 content, we actually have one tomorrow 1:54 at 2:00 p.m.
Pacific Standard Time, uh 1:57 it is about building a customer support 2:00 agent. So, um you can sign up on our 2:03 events page. We will be posting it in 2:05 the chat here. And you can also see all 2:08 of our builder events in the community. 2:12 There's a tab on the left hand side that 2:14 says upcoming events. So, click there. 2:17 Uh we will be adding the event tomorrow 2:19 shortly. So you'll see that soon. But 2:21 that is where you can stay updated on 2:23 all of these upcoming events, similar 2:26 to this webinar, or yeah, different things 2:28 we have going on. So anyway, I will pass 2:31 it over to Kenan uh for our final day. 2:34 So thank you everyone for tuning in and 2:37 I hope you enjoy today's session. All 2:39 right. 2:40 >> Awesome. Thanks Sarah. And yeah, like 2:42 Sarah was saying, welcome everybody. 2:44 uh thanks for joining us on our last 2:46 day of AI build week. Uh moved from our 2:49 Thursday session to today. So 2:50 thanks for everybody's uh flexibility. 2:52 We're going to dive in and get started. 2:55 Um and if you're here to learn about AI 2:58 in workflows, then you are absolutely in 3:00 the right place. So today, you know, 3:03 we've covered uh how to choose 3:04 different models. We've covered looking 3:06 at how vectors work. We've looked at an 3:08 example of how to put that together in 3:09 an application, uh when we talked about 3:11 how we built Retool GPT to work on our 3:13 data internally. So today we're going to 3:15 talk about the actual workflows that 3:17 work kind of behind the scenes to make 3:18 these sorts of applications work, um 3:20 things like when to use a workflow 3:22 versus when to use an AI agent, uh which 3:24 as Sarah mentioned we discussed on 3:26 Friday, and uh just help you hopefully 3:28 wrap your head around this particular 3:30 architecture uh and when it's useful in 3:32 the context of building tools with AI.
3:35 So, just a little bit of a preview of 3:37 what we're going to talk about. Uh, 3:38 we're going to look at some common 3:39 patterns of how different AI workflows 3:42 sort of show up and when you might want 3:44 to use this particular architecture in 3:45 your building with AI. Uh, then we're 3:47 going to dive right in. This is AI build 3:49 week after all. So, we're going to build 3:50 a workflow from beginning to end uh that 3:52 basically watches Bluesky for 3:55 mentions of a certain term, uses AI to 3:57 determine if they're relevant or not, 3:59 uh, and then posts into Slack. So, 4:00 that'll be hopefully pretty cool to see. 4:02 Um we're going to talk about a few cases 4:05 uh where workflows can be used to 4:07 integrate into larger systems as opposed 4:08 to just being standalone things. Uh and 4:10 then we will uh preview kind of how that 4:13 fits into you know the brand new world 4:14 of agents. So um as always as we're 4:17 going through please drop your questions 4:19 in either the Zoom Q&A or the chat. Uh 4:21 we've got a bunch of Retool folks in there 4:22 who will be collecting those up and uh 4:24 we will get to them at the end. We'll 4:26 have plenty of time for questions. 4:27 So uh bring as many as you've got. 4:31 All right. So let's take a look at some 4:33 common patterns of how workflows with AI 4:35 in them sort of show up. And uh the 4:38 three that we're going to look at are 4:39 called prompt chaining, routing, and 4:41 parallelization, which is a very uh hard 4:43 tongue twister that I had to practice a 4:45 little bit this morning. So uh I just 4:47 wanted to call out that all of these uh 4:49 graphics that we have here are from the 4:50 Anthropic blog post called Building 4:52 Effective Agents. Um and in the chat 4:55 very shortly, we'll be posting a link to 4:57 a Google Drive folder with resources 4:59 from today's talk. And uh this blog post 5:01 is linked in that resources list.
So go 5:03 check that out if you're interested uh 5:05 in more content like this. But one of 5:07 the common uses for a workflow that 5:09 contains different AI pieces is when you 5:11 want to do something called prompt 5:12 chaining. So for example, if you use a 5:14 tool like ChatGPT, you can of course 5:17 prompt it, wait for it to have feedback, 5:20 and prompt it again. You can give it 5:21 these multiple prompts in a row. But 5:23 when you want to do that in the context 5:24 of an automation that runs without human 5:26 intervention, you need a bit of a 5:28 different architecture to allow these 5:29 prompts to flow one to the other, right? 5:31 And so this is an example of how this 5:33 might work. We'd have some sort of 5:34 input. We would feed that input into a 5:36 large language model call. We would get 5:38 the output from that call and do 5:40 something else with it, which we'll see 5:41 in a little bit what that something else 5:42 might be. Sometimes that's feeding it 5:44 into code. Sometimes that's feeding it 5:46 into, you know, a database call or 5:48 query, something like that. Um, and then 5:50 we would just have some sort of failure 5:52 condition, because the output from a 5:54 large language model is 5:55 non-deterministic. So maybe uh you don't 5:58 get a valid output here, uh, and your 6:00 kind of next step in the workflow 6:02 fails, right? So you'd have some way to 6:03 say, okay, this isn't what we want. 6:04 We'll have to try again at some point. 6:06 But then you would pass all of that 6:08 information to, you know, a second 6:09 prompt, which you could chain to a third, 6:12 etc. And you could have multiple steps 6:13 in this workflow, however many you need. 6:16 Uh but the core idea here is you're 6:18 basically taking the output from one 6:19 large language model and feeding it into 6:21 another.
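A minimal sketch of the prompt-chaining pattern just described, in plain Python. Here `call_llm` is a placeholder for whatever model API you actually use (a Retool AI action, the OpenAI SDK, etc.), and the three prompts are illustrative, not the ones from the session:

```python
def run_prompt_chain(call_llm, topic):
    """Chain three LLM calls: draft -> editorial notes -> revision.

    `call_llm` is any function that takes a prompt string and
    returns the model's text output.
    """
    draft = call_llm(f"Write a first draft of a blog post about: {topic}")

    # Gate: LLM output is non-deterministic, so check it before
    # feeding it into the next step (retry or abort on failure).
    if not draft or not draft.strip():
        raise ValueError("Draft step returned no usable text")

    notes = call_llm(
        "Act as an editor. Point out redundancy and unclear passages "
        f"in this article:\n\n{draft}"
    )
    final = call_llm(
        f"Apply these editorial notes:\n{notes}\n\n"
        f"to this article:\n{draft}\n\nReturn the revised article."
    )
    return final
```

Because each step is just an output fed into the next prompt, you can swap in a different model per step, which is exactly the mix-and-match idea discussed below.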
Whether that's generated output 6:23 or whether you're using a large language 6:25 model to generate a prompt for another 6:27 large language model. Um there are 6:29 multiple ways to kind of go about this, 6:30 but this is an example of one sort of 6:32 workflow architecture. And you can 6:34 imagine, like, if you had a workflow that 6:35 was trying to write uh a first draft of an 6:38 article or a blog post for example, you 6:40 might have your first LLM call actually 6:42 write the first draft of the article 6:44 itself. You might then pass that article 6:46 to a second LLM call that says, okay, 6:49 act as an editor, figure out where this 6:50 article is a little redundant or where 6:52 we could, you know, explain 6:54 more, things like that. Um then the third 6:56 call might be okay, find me some places 6:58 where I can insert relevant images here, 7:00 or take the actions that the editor step 7:03 recommended and actually, you know, make 7:05 those changes to the article text, uh and 7:07 kind of go through. So you can see where, 7:09 instead of just having a single sort of 7:11 one-shot prompt, uh this sort of prompt 7:13 chaining flow uh is really powerful to 7:16 kind of get improved output from your 7:17 large language models. And if you 7:19 remember, uh on Monday of last week 7:21 we saw how different models have 7:24 different strengths, and this is a good 7:25 way that you can also compare: uh you 7:28 know, you can use one model for article 7:30 generation for example. If you find 7:32 another model is just a much better 7:34 editor in your kind of style, you can 7:36 use multiple different models in these 7:38 multiple different LLM calls. Uh and that 7:40 makes this architecture particularly 7:41 powerful, especially for more 7:42 complicated use cases. 7:46 The second uh example here is routing. 7:49 So again, this often comes into play 7:50 when you're working with multiple 7:51 different models. Um and in this case, 7:54 we have some input.
We have a large 7:56 language model call up front that uh is 7:58 kind of making this routing decision. So 8:00 imagine we have three different models 8:02 here. We have, you know, a cheap, fast 8:03 model. We have kind of a mid-tier model. 8:05 And then we have, you know, a more 8:07 expensive, like, reasoning model. So for 8:09 any given input, uh if it's just a simple 8:12 sort of question of recall or um just 8:16 very basic generation, you might not 8:18 want to actually uh, you know, pay the 8:20 extra token cost and use the extra time 8:24 for the more expensive reasoning model 8:25 when you don't need it. So that's where 8:27 this initial router up front uh is 8:30 making that decision that okay, this is a 8:31 relatively simple problem, we can 8:33 probably pass it to uh the fast, cheap 8:35 model, right? So um this is the kind 8:38 of decision making that, as your systems 8:40 get more complex, you might need to take 8:41 on. Uh but again, this is not a 8:44 behavior you can necessarily get uh with 8:46 sort of a one-shot call uh to a large 8:49 language model. The kind of exception 8:51 here is, if you've heard of a tool called 8:52 OpenRouter, it kind of does all of this 8:54 under the hood. So we do have a webinar 8:57 we did a while back where we used OpenRouter 8:59 uh and we showed how you can just 9:01 hook it in as a resource to your 9:03 workflows and it will automatically make 9:05 those sort of model routing decisions 9:06 for you. So uh there are tools out there 9:08 that do this, but if you want to do it 9:10 yourself and have kind of total control 9:11 over which models you're picking from, 9:13 what the criteria are you're choosing 9:14 from, etc., this is a way that you can 9:17 create a workflow that does that as 9:19 well. 9:21 And the third is parallelization. That 9:23 tongue twister we talked about before, 9:24 right?
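The routing step described a moment ago can be sketched in a few lines: a cheap up-front classification call picks a tier, and the real work then goes to an appropriately sized model. The tier names and model names here are made up for illustration, not real model identifiers:

```python
# Hypothetical model names, keyed by complexity tier.
MODEL_TIERS = {
    "simple": "fast-cheap-model",
    "standard": "mid-tier-model",
    "complex": "expensive-reasoning-model",
}

def route_request(classify_llm, answer_llm, user_input):
    """Router pattern: `classify_llm` is a small, cheap model call
    that returns a tier name; `answer_llm(model, text)` runs the
    actual request against the chosen model."""
    tier = classify_llm(
        "Classify this request as 'simple', 'standard', or 'complex'. "
        f"Reply with one word only.\n\nRequest: {user_input}"
    ).strip().lower()
    # Fall back to the middle tier if the router gives junk output.
    model = MODEL_TIERS.get(tier, "mid-tier-model")
    return model, answer_llm(model, user_input)
```

The fallback line matters: as with any LLM step, the router's output is non-deterministic, so a safe default keeps the workflow from failing on an unexpected label.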
So in this case uh we have an 9:27 input, we're making kind of three 9:29 different large language model calls 9:30 simultaneously, then we have some sort of 9:32 aggregation step, uh you know, where 9:36 we um need to get all the 9:39 outputs basically together, uh and then 9:41 we have some sort of output. So you can 9:43 imagine in this example, um potentially 9:46 we are, you know, taking a piece of 9:48 content we have and we want to generate 9:50 social media posts for uh Facebook, X, 9:53 LinkedIn, etc. So we would have in this 9:55 case kind of these three different 9:56 prompts and say okay, this is our 9:58 generation step for a Facebook post. Uh 10:01 this is, you know, a LinkedIn one and this 10:03 is our post for X. So we would be able 10:04 to have different prompts for each of 10:06 those um social networks that we wanted 10:10 to um set up. And so our input would 10:13 then run through all three of those 10:15 prompts all at the same time and it 10:16 would generate uh our outputs for each 10:20 of those specific social networks. So um 10:23 this is a case where 10:24 we don't necessarily need to chain 10:26 them together, because um the output from 10:30 our post generated for Facebook, for 10:31 example, uh isn't useful as an input 10:35 to our post for X. But we do get the 10:38 benefit that, because we can generate them 10:39 all at the same time, our workflow is a 10:42 lot quicker um and it's able to, you know, 10:45 accomplish what we need to accomplish. 10:46 So this is an example of a slightly 10:48 different pattern um where we get the 10:52 benefits of speed while also uh being 10:54 able to take advantage of the fact that 10:56 we can call all of these at the same 10:57 time. Cool. 11:01 So now we've looked at a couple of these 11:03 different patterns. Um let's take a look 11:05 and actually build one of these. Again, 11:07 we mentioned this is AI build week. 11:09 So let's just dive in here.
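Before the build, here's a minimal sketch of that parallelization pattern, using Python's standard `concurrent.futures` to fan the same input out across independent prompts and then aggregate. `call_llm` is again a stand-in for your model API, and the prompt templates are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# One prompt per social network, all taking the same input content.
PROMPTS = {
    "facebook": "Write a Facebook post promoting this content:\n{content}",
    "linkedin": "Write a LinkedIn post promoting this content:\n{content}",
    "x": "Write a post for X (under 280 chars) promoting this content:\n{content}",
}

def generate_social_posts(call_llm, content):
    """Run all three prompts simultaneously; total latency is
    roughly one LLM call instead of three, since no call depends
    on another's output."""
    with ThreadPoolExecutor(max_workers=len(PROMPTS)) as pool:
        futures = {
            network: pool.submit(call_llm, template.format(content=content))
            for network, template in PROMPTS.items()
        }
        # Aggregation step: gather every output into one result.
        return {network: f.result() for network, f in futures.items()}
```

Threads (rather than multiprocessing) are the right fit here because LLM calls are network-bound, so the workers spend their time waiting on I/O.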
So I'm going 11:10 to tab over here and we've got basically 11:13 a blank uh workflow canvas, 11:16 and we're going to get started. So, like 11:17 we mentioned, what we're building today 11:19 is uh a workflow that basically monitors 11:23 uh Bluesky, which is a social network 11:25 uh similar to Twitter or X. And um 11:28 anytime, you know, there's a post on 11:30 there that mentions uh retool or 11:32 retool.com or some variation of 11:34 retool, we want to flag it, because we 11:36 have, you know, a dedicated channel in 11:38 Slack that basically pulls together all 11:40 of those mentions. Um but, you know, 11:42 retool is a pretty common word. Um, and 11:46 so we don't want to, you know, have that 11:49 Slack channel fill up with mentions of 11:51 people uh wanting to retool their 11:53 fantasy football team, for example, 11:54 because that's not relevant, 11:56 right? So, that's where the AI piece is 11:57 going to come in. Uh, and we'll see, you 11:59 know, exactly how we do that here in a 12:00 bit. So, we've got our start trigger, 12:02 which we'll just leave as is for now. 12:04 Uh, we're going to do a little bit of a 12:07 cooking show sort of model today, because 12:08 I don't think you all want to sit here 12:10 and watch me type out, you know, 50 12:12 lines of code, for example. So, we're 12:14 going to work uh in some copy paste, but 12:16 I'll walk through and explain exactly 12:17 what we're doing. So, we're going to 12:18 call this uh step here get blue sky 12:22 posts. And what we're going to do is 12:25 we're going to grab this from over here. 12:28 And this is actually Python 12:30 code. We need to swap over to Python. In 12:32 workflows, we can either use uh 12:33 JavaScript or Python. So, in this case, 12:35 I'm going to use Python uh because we 12:37 have this atproto library that's really 12:39 useful for interacting with uh the Bluesky 12:41 API. So, um, that is what we'll use 12:44 there. We'll import it.
Uh, I'm 12:46 going to log in here as myself. And so, 12:49 you'll notice I have this Bluesky API 12:51 key here. Uh, this is an API key that 12:53 I've kind of pre-generated. Uh, because 12:55 I'm live sharing my screen, I've just 12:57 put this as a configuration variable 12:59 that's kind of stored behind the scenes 13:00 in a workflow so that uh, y'all don't go 13:03 ahead and start posting as me. But we 13:05 have that set up and I'm able to kind of 13:07 log in as myself, which uh lets me 13:10 be able to run the sort of search that I 13:12 want to run. So you can see here I'm 13:14 looking for, you know, mentions of 13:16 retool.com. I'm looking for retool 13:18 lowercase. I'm looking for retool 13:19 uppercase. And basically I'm running all 13:21 of these different searches. I'm 13:22 combining them together. And at the end 13:25 of this output I want to get, you know, a 13:27 list of basically the combined results 13:29 of all of those different mentions. So 13:31 at any point in time, once we have this 13:34 uh set up, we can basically run this 13:36 individual workflow block if we want to. 13:37 So that's useful for just making sure 13:39 that our configuration is working 13:40 correctly. Uh oh, no module named 13:43 atproto. Perfect. So this is actually 13:45 something we need to do. Uh because I 13:47 mentioned there's a great library in 13:49 Python for setting this up. Um but you 13:52 can see over here we're not actually 13:53 importing any uh Python libraries yet. 13:55 So we're going to go ahead and say add 13:57 Python library. 13:59 We are going to get our atproto 14:01 library. I think that should be what we 14:02 need. Uh yeah, sounds good. And 14:05 once I add this, I'm 14:07 going to go ahead and add uh our requests 14:10 library as well. This is a pretty common 14:11 library in Python. Um and we will need 14:15 it here in a second. So, let's just grab 14:18 that. And looks like our environment's 14:20 still setting up.
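The search step just described might look roughly like the sketch below. The combining/deduplicating logic is plain Python; the API portion is kept inside a function and hedged, since the exact `atproto` SDK call shape (namespace-style `client.app.bsky.feed.search_posts`) should be double-checked against that library's docs, and the handle/app-password values are placeholders for the configuration variables from the demo:

```python
def dedupe_posts(result_batches):
    """Combine several search-result batches into one list, dropping
    duplicates by URI: a post matching both the 'retool' and 'Retool'
    queries should only appear once."""
    seen, combined = set(), []
    for batch in result_batches:
        for post in batch:
            uri = post.get("uri")
            if uri and uri not in seen:
                seen.add(uri)
                combined.append(post)
    return combined


def search_bluesky(handle, app_password, terms, since=None):
    # Imported lazily so the pure helper above runs without the
    # library installed; in a workflow you'd import at the top.
    from atproto import Client

    client = Client()
    client.login(handle, app_password)  # credentials from config vars
    batches = []
    for term in terms:
        params = {"q": term}
        if since:
            params["since"] = since  # only posts newer than our last run
        res = client.app.bsky.feed.search_posts(params=params)
        batches.append([{"uri": p.uri, "text": p.record.text} for p in res.posts])
    return dedupe_posts(batches)
```

Usage would be something like `search_bluesky(handle, pw, ["retool.com", "retool", "Retool"])`, mirroring the three searches run in the block.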
So, we'll need that in 14:22 one second. But basically uh you can 14:25 include any of these uh libraries that 14:28 are available uh on, you know, the npm 14:31 registry or any of the Python registries 14:32 as well, and that will kind of give you 14:35 that functionality uh that you would 14:37 expect from, you know, writing your own 14:38 sort of Python script and uh including 14:40 those libraries in the exact same way; 14:42 you can do it inside of a workflow. Uh 14:45 cool. So now that we've got these two uh 14:48 libraries in, let's try and run this 14:49 again. We should see that this works as 14:51 we would expect. 14:55 Uh yeah, cool. Get latest query time 14:57 stamp. This is actually something we 14:58 will come back to in a second and we 15:01 will see why this is not working. Uh but 15:03 essentially we're going to add a block 15:04 in here very soon. Uh because when we're 15:07 querying uh Bluesky, obviously we're 15:10 going to have this workflow run every 10 15:11 minutes, for example. Um and we don't 15:14 want to just continually get the same 15:16 posts that we've already mentioned in 15:18 Slack, right? So, we're going to input a 15:20 time stamp that says, "Okay, this is the 15:21 last time we queried it. Only get posts 15:23 since this time." So, we'll come back to 15:25 this once we get that sorted out and 15:27 working. But, let's go on to our 15:29 next step here. 15:31 Cool. So, we need to do two things. 15:33 First of all, it's entirely possible 15:35 that we don't find any uh new posts on 15:38 Bluesky that mention retool. Um so, in 15:41 that case, we just want to kind of exit 15:42 the workflow and uh be done 15:45 with that. So, um, we're going to insert 15:47 a branch here. And we'll just call this if 15:50 posts found. And 15:52 again, uh, naming your blocks is pretty 15:56 important. It just keeps things a lot 15:57 cleaner.
And for anybody who, uh, comes 16:00 into your workflow and is trying to 16:01 figure out what's going on, uh, it gives 16:03 them a little bit more context into what 16:04 each of these blocks does, as opposed to 16:06 having to look at all the code. So, what 16:08 we're going to do is we're going to say, 16:09 okay, cool. If our get blue 16:11 sky posts uh block ran and there is data, 16:15 which means there are, you know, posts 16:17 associated with it, and that data has 16:19 length, which means there's at least one, 16:21 that means, you know, we're going to move 16:22 on with our workflow. 16:25 So uh let's go ahead and drag this out 16:28 to a new block here. And in this case 16:30 what we're going to do is we're going to 16:31 insert another code block, and I'll call 16:34 this uh distill posts. Basically, what 16:37 this does is: there's a whole lot of 16:38 information that comes back from the 16:39 Bluesky API about each particular post. 16:42 Um, and I don't need all of that 16:43 information uh for our workflow to be 16:46 useful, right? So, we're just going to 16:48 go ahead and grab this piece of code 16:50 here. We're going to use Python again, 16:51 just for consistency. 16:54 And what this basically does is just 16:56 loops through all of the posts that we 16:58 have gotten and it gets the important 17:00 pieces of, okay, what's the URL? Um, what 17:04 are any of the kind of embeds that we 17:06 need? And basically it just outputs, you know, 17:08 the text, the URL, any of those sorts of 17:10 things that we will need later. So it 17:12 just cleans up our post array and, um, 17:16 you know, makes it so that uh we only have 17:18 the information we need, and our AI step 17:21 that we insert later will not get 17:23 confused with a bunch of other 17:24 information that we don't actually need. 17:26 Right?
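A distill step along those lines might look like this sketch. It assumes the post dicts carry an `at://` URI and a `text` field; the `at://` → `bsky.app` URL mapping shown here is an assumption worth verifying against Bluesky's current URL scheme:

```python
def to_web_url(at_uri):
    """Convert an at:// URI to a bsky.app link that Slack can unfurl.
    Assumed shape: at://<did>/app.bsky.feed.post/<rkey>."""
    parts = at_uri.removeprefix("at://").split("/")
    if len(parts) == 3 and parts[1] == "app.bsky.feed.post":
        return f"https://bsky.app/profile/{parts[0]}/post/{parts[2]}"
    return at_uri  # fall back to the raw URI if the shape is unexpected


def distill_posts(raw_posts):
    """Keep only what later steps need (text plus a usable URL),
    dropping author info, counts, facets, and other noise so the
    AI step isn't distracted by irrelevant fields."""
    return [
        {"url": to_web_url(p.get("uri", "")), "text": p.get("text", "")}
        for p in raw_posts
    ]
```

Trimming the payload this way also keeps the later prompt short, which saves tokens on every run.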
So 17:28 now, once we have our post list 17:30 distilled, we have a list of all of the 17:32 posts that have used either retool.com, 17:34 the lowercase word retool, or the uppercase word 17:36 Retool. Now is when we need to start 17:38 making decisions about, okay, are these 17:40 posts relevant and useful? Do we want to 17:42 push them all the way through to Slack, 17:43 or are these definite 17:45 mentions that we just want to discard 17:46 because they're not really 17:47 relevant? And that's where we have our 17:49 AI action or decision-making step coming 17:52 in here. So we're going to call this 17:53 decide relevance. 17:56 We're going to use Retool AI. We're 17:58 going to use our generate uh text action, 18:00 because we just want it to generate 18:02 an updated array of uh our posts 18:05 with a score assigned to them. Right? 18:07 So, I'm going to grab my prompt here and 18:09 pull it over. And we'll just go through 18:11 what this prompt says very quickly. So, 18:13 it says, "Your task is to return valid 18:14 JSON that has a new score field attached 18:17 to each item in the array. And the JSON 18:20 array is provided at the end of this 18:21 message." Right? So based on the text 18:23 field for the post in the original 18:24 array, add a score with a value from one 18:28 (low likelihood with high confidence) to 18:30 10 (high likelihood with high confidence). 18:32 This score indicates whether a post that 18:34 mentions retool is likely to be about 18:35 the software company that builds a 18:36 platform for software developers to 18:37 quickly build tools. Right? So we're 18:39 saying okay, on a 1 to 10 scale, how sure 18:41 are you that this is about Retool the 18:43 company? Right? And so then we add some 18:45 specific rules here based on, like, posts 18:47 that we've observed. Right?
We say okay, 18:48 if a post is clearly about a sports team, 18:50 uh it's not about Retool and should be 18:51 ranked a one. If the post contains 18:54 retool, like a specific mention, right, 18:55 that should be ranked a 10, uh 18:58 except in cases where there are more 18:59 than eight mentions, because we've 19:01 detected that that's usually spam. You 19:02 know, there are a lot of folks that just 19:03 kind of uh include a lot of mentions in 19:05 their post and that's not 19:06 something we want to include. Um, things 19:09 like usage of the word retool as a verb 19:11 is unlikely to be the company, right? So 19:13 things like that, uh we, you know, just 19:15 give it some rules for uh how we want it 19:17 to work, and we ask it to return a new 19:21 valid JSON array with each post text and 19:23 URL along with the corresponding score. 19:25 Return only the JSON, do not return any 19:27 other text. So again, just some, like, 19:28 prompting rules here to say okay, we only 19:30 want JSON. Um, if you were calling the 19:33 OpenAI API directly, for example, via 19:35 our REST API integration, you could ask 19:37 for, you know, structured output here. 19:38 And so, you could enforce that sort of 19:40 JSON uh output. But in this case, we're 19:43 just using the AI integration uh that's 19:44 built in. And so, we just give it a lot 19:46 of feedback that that's what we want. 19:48 Uh, as far as the model goes, again, 19:50 this is where it probably makes sense to 19:52 experiment a bit. I've got some OpenAI 19:53 models here. Uh, I'm going to use Claude 19:55 Sonnet 4. Uh, in my testing, that's kind 19:58 of been my favorite for this type of 20:00 task. So, we're going to go ahead and 20:01 use that uh for our AI step. So, when we 20:04 get here now, we should basically have 20:07 uh an array of posts that has a score 20:09 of, you know, between one and 10, for how 20:11 relevant they are uh to Retool.
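Putting the scoring step into code form, a sketch might build the prompt by dumping the distilled array into it, then defensively parse the reply. The rules string paraphrases the prompt from the session, and the fence-stripping helper is an assumption: since the plain generate-text action can't enforce structured output, the model may wrap its JSON in a code fence despite the instructions:

```python
import json

SCORING_RULES = """Your task is to return valid JSON that adds a
"score" field (1 = unlikely, 10 = very likely to be about Retool,
the software company) to each item in the array below.
Posts clearly about a sports team score 1. Usage of "retool" as a
verb is unlikely to be the company. Return ONLY the JSON array,
no other text."""

def build_scoring_prompt(distilled_posts):
    # json.dumps turns the post array into a string we can drop
    # straight at the end of the prompt.
    return SCORING_RULES + "\n\n" + json.dumps(distilled_posts)

def parse_scored_posts(model_reply):
    """Strip a possible ```json fence, then fail loudly if the
    model ignored the 'JSON only' instruction."""
    text = model_reply.strip()
    if text.startswith("```"):
        text = text.strip("`").removeprefix("json").strip()
    posts = json.loads(text)
    if not isinstance(posts, list):
        raise ValueError("Model did not return a JSON array")
    return posts
```

This parse-and-validate step is the "failure condition" from the prompt-chaining diagram earlier: a bad output raises instead of silently corrupting the downstream steps.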
20:16 And this is, if you remember, our, like, 20:17 prompt chaining example from before. 20:19 This is where we're going to chain our 20:20 large language model output into a code 20:23 step again, which is just going to do a 20:24 little bit of cleanup for us. So, we're 20:26 going to call this filter relevant 20:29 posts. And again, I mentioned before 20:31 that uh we have uh basically an array 20:34 with all of our posts scored from 1 to 20:36 10. And so, basically what this is going 20:38 to do here is just say, okay, look at 20:40 all of these different posts and, you 20:43 know, if the score on any given post is 20:45 greater than a four, we've decided 20:47 that's relevant enough to include and 20:49 push into Slack. So 20:52 we do get some 20:54 false positives here, because four is not that 20:56 high on a scale of 1 to 10. Uh but we 20:58 basically decided that we would rather 20:59 see more posts that are relevant than 21:02 potentially miss uh some relevant 21:04 posts. Right? So we've decided that kind 21:06 of four is our threshold there. But uh 21:08 you could adjust this basically as 21:10 needed. Right? So that's our kind of 21:12 filtering step. And then basically that 21:14 will output an array of any posts that 21:17 it thinks are relevant. And so what we 21:20 want to do then is we want to do a 21:21 resource query here, because we've talked 21:23 about, you know, actually pushing these 21:24 into Slack. So we're going to go ahead 21:28 and grab our Retool Slackbot integration 21:31 that we already have set up as a 21:32 resource. We are going to uh post a chat 21:36 message 21:38 and we'll pick a channel and text. But 21:40 before we do that, I mentioned that we 21:43 need to potentially handle more than one 21:46 post at a time, right?
This array 21:48 might have more than one post, and we 21:49 want to post each uh post that mentions 21:51 retool as an individual message in Slack, 21:53 so that people can respond to each one, 21:55 etc. And so instead of uh doing this 21:58 Slack query as a standalone block, what 22:00 we actually want to do is put it inside 22:03 of a loop block. So let's take a look at 22:05 what that looks like. We can do loop 22:07 here. Our input is our filter relevant 22:09 posts. So basically this takes the 22:10 output of our relevant post block, puts 22:12 it into a loop. We'll call this post to 22:15 Slack. 22:18 And we can then decide, do we want to 22:19 run these loops in parallel, one at a 22:21 time, or as a batch? Because we're kind 22:23 of putting each post into Slack, we'll 22:25 just call this sequential. And we'll 22:27 just put like a 500 millisecond delay 22:28 here, just so we don't sort of spam the 22:30 Slack API. For our loop runner, this is 22:33 where we actually need to put our Slack 22:34 uh resource query. Now, this is where 22:36 you can decide, okay, do you want to run 22:37 some code over this loop? Do you want to 22:39 use any of your other resources? Uh, and 22:41 we do. We want to use our Retool 22:43 Slackbot. So, let me just type that 22:45 here, grab it from the list. And now we 22:48 can do all of the kind of operations we 22:50 were talking about before. So we do want 22:51 to post a message. For the text, we're 22:54 going to say value, because that's 22:56 each item in the 22:58 array that's coming from this block. 23:00 That's how the loop block works. And 23:02 we'll say URI. I know that's the key 23:04 that we're going to need, which we will 23:05 be able to see in a minute. And as far 23:07 as our channel, this will basically pull 23:08 up all of our various uh Slack channels 23:11 here. But in this case, we want to uh 23:14 use our FX tool.
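The filter-then-loop step can be sketched as below. `post_message` stands in for the Slack resource query (it just takes a channel and text here), and the channel name in the comment is a made-up example, not the demo's actual channel:

```python
import time

RELEVANCE_THRESHOLD = 4  # score must be strictly greater to pass

def filter_relevant(scored_posts, threshold=RELEVANCE_THRESHOLD):
    """Keep posts the AI scored above the threshold. A low bar like
    4 deliberately allows some false positives: better to see an
    irrelevant post than to miss a real mention."""
    return [p for p in scored_posts if p.get("score", 0) > threshold]

def post_all_to_slack(post_message, relevant_posts, delay_s=0.5):
    """Sequential loop with a small delay between iterations, like
    the workflow's loop block, so we don't spam the Slack API.
    Each post becomes its own message so people can reply to it."""
    for post in relevant_posts:
        post_message("retool-mentions", post["url"])  # channel name is illustrative
        time.sleep(delay_s)
```

Usage: `post_all_to_slack(slack_send, filter_relevant(scored))`, where `slack_send` wraps whatever Slack client or resource you have.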
So we can just type in 23:16 uh the name of our channel, because we 23:18 already know what it is. So in this 23:21 example, I'm just double checking, I 23:23 built a channel for us that is called AI 23:26 build week 23:28 of B Sky. Perfect. 23:31 So that is going to be our channel 23:33 there. And we're going to post uh the 23:35 actual URL for the Bluesky post. 23:37 Slack will uh do what it needs to do to 23:39 expand that, etc. And uh we should be 23:42 good to go. So I'm going to go ahead and 23:43 hit deploy on this just to uh get 23:46 everything saved. And like we mentioned, 23:49 the only kind of final step we need to 23:51 do here is make sure we're not just 23:52 posting the same posts into Slack over 23:54 and over. So that's where our uh time 23:57 stamp comes in. So this is a pretty 23:59 common pattern and actually something 24:00 that uh we had a couple questions about 24:02 last week, where uh if you're running uh 24:04 a workflow (in our example, we're 24:06 going to run it every 10 minutes), um how 24:08 do you make sure that uh the workflows 24:09 aren't overlapping or, you know, it's not 24:12 working on duplicate data, etc.? And that's 24:14 where uh this timestamp tool comes in. So 24:17 Retool has a database uh built in, we 24:20 call it Retool DB, um and it's a 24:23 Postgres database that basically works 24:25 uh like a spreadsheet in the browser. So 24:27 you can insert data here, uh you can 24:29 upload a CSV, but it's fully Postgres 24:31 under the hood, which is very useful in a 24:32 lot of cases. In this case I've created a 24:35 blue sky latest query table, and we'll 24:37 zoom this in just a bit so you can see 24:38 what's going on here. Uh, and this table 24:41 only has one row and basically it just 24:43 has this uh date field, this time stamp 24:45 here. So, that's useful in a couple 24:48 different ways. What we can do is we can 24:50 actually uh expand this out a little 24:53 bit.
We need to kind of adjust our 24:55 blocks over here slightly. 24:59 And instead of just directly getting the 25:01 Bluesky posts right away, what we can 25:03 do is we can insert uh another block 25:07 here. We'll make it a resource query. 25:09 And instead of anything like Slack or 25:11 REST API, we're just going to call 25:13 Retool Database. We're going to do a 25:14 Retool Database query. We'll call this 25:16 query 25:17 get latest query timestamp. 25:21 And again, that's going to just allow us 25:23 to move this down a little bit here just 25:25 to give us a little more room. And it's 25:27 going to allow us to uh get the 25:29 information out of that table. Right? So 25:32 we can see if we run this, the latest 25:34 time stamp is July 14th at 6:10 a.m. 25:37 This is UTC. So uh around, you know, 25:41 midnight my time last night was when 25:43 this last ran. So now we can kind of 25:46 parse that output into our get blue sky 25:49 posts field. And you see here we have 25:51 this since parameter. And so what this 25:54 is doing is, when we query the uh Bluesky 25:57 API, we're able to say, "Okay, cool. 26:00 Uh we want to only query posts since our 26:03 last given timestamp so that we're not 26:06 uh doing a bunch of redundant queries 26:07 and things." So now when we run this, 26:10 our get latest query time stamp should 26:11 no longer be undefined, because we just 26:14 defined it in our previous uh database 26:16 block. So we can see here it's calling. 26:18 I think my time stamp is probably going 26:20 to be a little long. Uh which means that 26:22 it's going to time out. So 26:24 we'll fix that in a second, because again 26:25 this time stamp hasn't been updated uh 26:28 since, you know, early this morning UK 26:29 time, which, if there are any folks 26:31 joining from the UK, they know it's much 26:32 later than 6:10 uh a.m. on the 14th of 26:35 July. So we are now querying the time 26:37 stamp, which is good.
But what we want 26:40 to do is, once we get Bluesky posts 26:43 successfully, we also need to then 26:45 update our timestamp so that, you 26:48 know, it knows that it has completed 26:50 another successful run and query of 26:52 the API. So what I'm going to do here is 26:55 actually just shift some of my workflow 26:57 blocks over. We have just a little bit 26:59 more room to work with here. 27:03 So we're distilling our posts here. 27:06 We're doing our if posts found. Okay, 27:09 perfect. So we'll now pull this up 27:10 here. The only other thing we need 27:13 now is to say, okay, if, you know, 27:16 we've run a successful query of our 27:19 Bluesky API, we want to update our 27:22 timestamp, right? So let's go ahead and 27:25 do that. After this Bluesky post 27:28 query runs, we're going to run another 27:29 resource query. Zoom in just so you can 27:31 see a little bit here. We are going to 27:34 make a database query, and instead of our 27:37 select like before, we're going to 27:38 actually just run this update query. So 27:41 it says update Bluesky latest query, set 27:43 date to, you know, the current timestamp, 27:45 and make sure that's updated. 27:47 We'll call this update latest date time. 27:53 And now this is able to run after the 27:56 get Bluesky posts runs. Right? So we'll 27:58 play this block so that it runs the 28:00 update query. See, it doesn't return 28:02 anything. But if we go over here now, we 28:05 should see, okay, we now have our latest 28:08 updated timestamp. It's currently 5:28 28:10 p.m. UTC. And you can see over here, our 28:13 timestamp in our database has refreshed 28:15 as well. Right? So now we shouldn't get 28:17 this timeout anymore over here. But 28:20 also, we won't actually get any posts, 28:22 because we literally just updated the 28:24 timestamp and there haven't been any 28:27 posts that mention Retool yet. So 28:30 let's go ahead and fix that.
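The checkpoint pattern just demonstrated, read the last-run timestamp, fetch only newer posts, then advance the timestamp after a successful fetch, can be sketched roughly like this in JavaScript. The table shape, timestamps, and helper names are illustrative stand-ins for the Retool DB query blocks shown in the demo, not the exact code:

```javascript
// Sketch of the timestamp-checkpoint pattern (illustrative names, not the
// actual workflow blocks).

// 1. Read the single-row checkpoint table
//    (conceptually: SELECT date FROM the latest-query table).
function getLatestQueryTimestamp(db) {
  return db.latestTimestamp; // stand-in for the Retool DB query block
}

// 2. Only keep posts newer than the checkpoint, like the "since" parameter does.
function fetchPostsSince(allPosts, since) {
  return allPosts.filter((p) => new Date(p.createdAt) > new Date(since));
}

// 3. After a successful fetch, advance the checkpoint
//    (conceptually: UPDATE the latest-query table SET date = now()).
function updateLatestQueryTimestamp(db, now) {
  db.latestTimestamp = now;
}

// Simulated run with made-up data:
const db = { latestTimestamp: "2024-07-14T06:10:00Z" };
const posts = [
  { text: "old post", createdAt: "2024-07-14T05:00:00Z" },
  { text: "new post", createdAt: "2024-07-14T08:00:00Z" },
];
const fresh = fetchPostsSince(posts, getLatestQueryTimestamp(db));
updateLatestQueryTimestamp(db, "2024-07-14T08:05:00Z");
```

Because the checkpoint only advances after a successful query, overlapping or repeated runs never reprocess posts the workflow has already seen.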
Actually, 28:32 we'll just go over here. I'm just going 28:34 to go ahead and create a new post that 28:36 just says testing testing, here is a 28:40 Retool post from AI build week. Right. 28:44 So we're going to go ahead and hit post 28:46 on this one. And if I go here, we see 28:50 that our post is here. It's Retool in 28:52 capital letters. So that is one of the 28:54 conditions we're checking for. Now if 28:56 we go here and we run our get Bluesky 28:59 posts, we're going to run this with the 29:01 previous blocks, so that it also runs 29:03 this block as well. If we do that, 29:05 we can see this block spins. It gets our 29:07 timestamp, and then this block spins. 29:09 And we should see that we do have, you 29:11 know, a single post here. And that's the 29:12 one we just saw, right? So then, 29:15 just to kind of test as we go, we can 29:16 run each of these blocks because we've 29:18 run the one before it. It will be able 29:20 to use the output. So we see here that 29:21 our if condition is successful. We can 29:24 run our distill posts test, and we can 29:28 see that it pulls down our one post 29:30 from our data here into just the actual 29:34 underlying information. Looks like that's 29:37 timing out for some reason. Let's try 29:38 that again. 29:40 Cool. There it is. And now this is our 29:44 AI step. So let's see what score it 29:45 gives this post. I would think, because 29:48 it mentions AI build week, it should 29:50 probably be a pretty high score. But 29:52 let's see. We'll go ahead and run this. 29:54 And again, we're just stringifying, 29:56 via the JSON library, all of the data 29:58 from our distilled posts step. So 30:01 that's just taking this and dumping it as a 30:03 string into the prompt. If you have 30:05 more data than this to use in a 30:07 particular AI call, that's where you 30:09 may want to use something like a vector, which was 30:11 our session from Tuesday.
But in this 30:14 case, we know we're going to have at 30:15 most, you know, a few posts here, because 30:17 this is running every 10 minutes. And so 30:20 we're just going to dump that all in the 30:21 prompt; these models have a big 30:22 enough context window. So it looks like 30:24 in this case it gave it a score of 10, 30:26 which is exactly what we'd expect. It's 30:28 very relevant to Retool. We're going to go 30:30 ahead and filter our relevant posts. 30:33 It looks like here it did include 30:37 these kind of triple backticks with the 30:39 JSON, and so it's having a little bit of 30:41 a hard time parsing that. And this is 30:44 an example of where it can be really 30:46 difficult to mesh the AI 30:50 block with the code block, just 30:52 because different models will handle 30:53 this entirely differently. Right? So we 30:55 asked it for just the JSON array, no 30:58 other text. But if we see this as a 31:00 failure, we could also ask it to say, you 31:03 know, do not include backticks or any 31:09 other formatting other than the raw JSON 31:14 array. Right? So that's a little bit 31:17 more guidance for our prompt here. We'll 31:19 see how it handles that. And it looks 31:21 like it did it for us that time, which 31:23 is great. If it didn't, this would be a 31:24 good chance to experiment with a GPT 31:27 model maybe from OpenAI, or one of 31:29 Google's Gemini models, or, you know, a 31:31 custom model that you have set 31:33 up in Retool in some other way. But 31:35 in this case, you know, just being a 31:37 little bit more specific with our prompt 31:38 and telling it please don't include any 31:39 backticks was able to get us where 31:42 we need to go. So then we have this 31:44 filter relevant posts step, which now, 31:46 because it's just JSON coming out of 31:48 our prompt, it is able to do for us, and 31:52 that's great. And so that works 31:55 very well.
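A defensive version of that filter step could also handle the fenced output in code rather than relying purely on the prompt. This is a hypothetical helper, not the exact code block from the workflow: it strips a leading ```json fence and a trailing fence before parsing, then applies a score threshold like the filter described here:

```javascript
// Hypothetical cleanup-and-filter for the AI block's output.
// Strips markdown code fences the model sometimes adds, parses the JSON,
// and keeps only posts scored above a threshold.
function parseAndFilter(raw, threshold = 4) {
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, "") // leading ```json fence, if present
    .replace(/```\s*$/, "");          // trailing ``` fence, if present
  const posts = JSON.parse(cleaned);
  return posts.filter((p) => p.score > threshold);
}

// Fenced output like the failure seen in the demo (made-up post data):
const fenced =
  '```json\n[{"uri": "at://example/1", "score": 10}, {"uri": "at://example/2", "score": 2}]\n```';
const relevant = parseAndFilter(fenced);
```

With this kind of guard, the workflow keeps working whether or not the model obeys the "no backticks" instruction.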
And so now you can see we 31:56 have this URI field here. And that, 32:00 if we run this Slack step, is 32:03 what's going to go into Slack. So we'll 32:05 hit play on this. We see that it 32:07 actually did post there. And I will just 32:10 hop over real quick to our Slack 32:14 channel, which I need to actually share 32:17 my screen with y'all. 32:21 One second. This is a separate 32:23 window from the Chrome 32:25 window we were just looking at. 32:29 But you can see here our post actually 32:31 did come into Slack, which is 32:32 pretty cool. So this is an example of a 32:35 workflow that's now running end to end, 32:37 which is pretty cool. And we just 32:40 need to take a few more steps to 32:42 set this up and actually get it running 32:44 in production. So let's go ahead and 32:46 do that real quick. 32:49 So the one thing we mentioned that we 32:50 haven't actually done yet is getting 32:53 it to run on a schedule. So I was 32:56 running each of these workflow blocks 32:57 one by one. But, you know, we want this 33:00 to be a thing that just runs in 33:01 the background, right? So first of all, 33:02 I'm going to go ahead and click deploy 33:04 on this so that 33:06 everything's saved. Our latest version 33:07 is all set to go, because now we like how 33:09 this is running. Our timestamp, 33:12 also, because we ran these workflow 33:13 blocks one by one, we want to make sure 33:15 that that's updated. But if we run our 33:17 workflow end to end, that will 33:18 automatically update for us. So now 33:20 we're in a pretty good spot with this. 33:22 This is where we come over to our 33:23 triggers tab of our Retool workflows 33:25 here. 33:27 And we can trigger this workflow via a 33:29 webhook, which means, you know, a POST 33:30 request, or someone asking for it in a 33:32 Slack message or something. But in this 33:34 case, we want to use the schedule 33:35 trigger.
And what this will do is allow 33:38 us to run this workflow on a 33:41 schedule. So in this case, we can say, 33:43 okay, UTC is the time zone. We want to 33:45 run this, you know, every minute, every 33:47 hour, every day, etc. So 33:50 we can say, okay, we'll run this every minute 33:52 in production. We want to only have 33:54 one of these running at a time. And 33:57 that will run, you know, every minute 33:58 for us. Let's just say every hour for 34:00 the sake of, you know, not 34:01 completely spamming this. 34:03 >> We can hit save on that. 34:05 >> Hey, Kenan, jumping in. I think 34:06 you're not sharing your screen right 34:08 now, so 34:09 >> Correct. Thank you everyone for all the 34:10 feedback. 34:11 >> Great call. Yeah, thanks y'all. I'm 34:13 seeing the chat blow up, 34:14 which I was not looking at before, but 34:16 thanks for dropping in. Cool. 34:18 I will jump back just a second, 34:20 because none of you all saw anything 34:21 that just happened. So we have our 34:23 workflow set up, and we're going to 34:25 go over here to our triggers tab. And in 34:28 the triggers tab, we again have the 34:30 webhook option, if we wanted to trigger 34:31 this like an API, for example. But 34:33 we're going to use our schedule, 34:35 which allows us to specify basically the 34:37 interval when this runs. So in this 34:40 case, we currently have it set up to run 34:41 every hour at 0 minutes past the hour. 34:44 So, you know, in my time zone right 34:45 now it's 12:35, so this would run in 34:47 25 minutes. If, for example, I 34:49 wanted to run it at, you know, 12:40, 34:51 which would run in 5 minutes, right? 34:52 This will run every hour at 40 minutes 34:54 past the hour. So you can get pretty 34:56 specific here and set up 34:58 anything the way you want to.
But if you 35:00 want even more control and you're 35:02 familiar with cron syntax, you can 35:05 specify that here as well. So 35:07 there are two different options here, 35:08 interval and cron. And we can 35:12 have this, you know, basically run as much 35:14 as we want. So now that we have this, 35:17 if we hit save changes, this workflow is 35:20 basically set up to run on that 35:22 schedule. So at 40 minutes past the hour 35:24 it will run, and it will, you know, grab 35:27 any relevant posts, it will filter them 35:28 into Slack, and it will let us keep an 35:30 eye on what's going on on social 35:32 media. We used Bluesky in this 35:34 example because the API key is very easy 35:37 to get, it's a very simple API to interact 35:38 with, and all of that, but you could 35:41 absolutely expand this to any sort of 35:43 social network or any other type of 35:44 monitoring that you wanted to. This 35:46 is just kind of an idea to put in your 35:48 head about a potential use case for this 35:50 sort of thing. And so, at any point, 35:52 I can come down here to the run history 35:56 and I can see, okay, when did this run? 35:58 How long did it run 36:00 for? What happened? And because we ran 36:02 all of these blocks individually, we 36:04 don't have anything in the run history. 36:05 But let's go ahead and click run on 36:07 this. And you'll see that our run 36:08 history pops up right away. And it gives 36:10 us a pretty significant level of detail 36:11 on what's actually going on. So in this 36:14 case, you know, our start trigger, we 36:15 can see that we didn't have any data as 36:17 part of our start trigger. We queried 36:18 for the latest timestamp. You can see 36:20 it got the latest timestamp here, 5:33. 36:22 That was the last time it ran, when we 36:23 updated it.
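For reference, the cron option mentioned above uses standard five-field expressions (minute, hour, day of month, month, day of week). The specific expressions below are illustrative examples, not the exact ones from the demo:

```javascript
// Standard five-field cron expressions:
//   minute  hour  day-of-month  month  day-of-week
const everyTenMinutes = "*/10 * * * *"; // the 10-minute polling cadence discussed earlier
const hourlyAtForty = "40 * * * *";     // every hour at 40 minutes past the hour

// Tiny sanity check: a five-field cron expression splits into exactly five fields.
function cronFieldCount(expr) {
  return expr.trim().split(/\s+/).length;
}
```

The interval option covers the common cases; cron is for schedules the interval picker can't express, like "weekdays at 9:05".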
It queried the Bluesky API 36:26 and it didn't find any posts that 36:28 matched our search term, which makes 36:29 sense, because, you know, we just 36:31 queried it previously, and that's 36:33 how our timestamp is supposed to work. 36:34 We only want posts since the last time 36:36 we queried it. It ran our if block and 36:39 basically decided that, you know, it 36:41 didn't find any posts, but it still 36:43 updated our latest datetime. Right? So 36:45 even when it finds no posts in 36:47 the API, we still want to update 36:49 our datetime so that we're not again 36:51 querying a bunch of old content that 36:54 we don't care about, or that we 36:56 already looked at and 36:57 decided wasn't relevant. Right? So in 36:59 this case, because our if condition 37:02 came back as false, you can see that 37:04 distill posts, decide relevance, and all these 37:06 other blocks were skipped, because 37:08 they were not part of this particular 37:11 run of the workflow. Right? So at any 37:13 point you could, you know, completely 37:15 close down this tab, come back, and 37:17 say your workflow has been running every 37:19 hour. If you come back tomorrow, you'll 37:21 be able to see, you know, 23 or 24 37:24 runs in this history list. And any of 37:26 those that you see, you can dive into 37:28 and see, okay, what were the posts that 37:29 were pulled from the API? What was the 37:31 response of the AI action? Right? Which 37:33 I've found is pretty useful. Sometimes the 37:35 AI action, like we saw in our example, 37:37 just kind of stops parsing things or 37:39 starts parsing them differently, and 37:40 your workflow starts to break, right? 37:41 And so that's a case where you really 37:43 want that ability to dive deeper 37:45 and see what's going on there. So 37:47 the run history is super useful for 37:49 that.
Especially for these things you 37:51 kind of just put into the background and 37:52 are more set it and forget it. 37:55 Cool. So yeah, we have a working workflow. 37:58 Hopefully that was a kind of helpful 38:01 over-the-shoulder look at how I 38:02 build workflows, because this is one 38:04 that I've actually built and we are 38:06 currently using in production 38:07 at Retool. 38:09 So we looked at one of these 38:12 types of uses for workflows, which is 38:14 like a headless monitor. By 38:16 headless I mean there's no need for any 38:17 sort of user interface, really; there's no 38:19 need for end users to be able to 38:21 interact with this thing that we've 38:23 built, but, you know, it is just kind of 38:25 running in the background. Think of it 38:27 as a good replacement for things like 38:28 cron jobs, anything that needs to run 38:30 every so often but 38:32 doesn't need a huge amount of human 38:33 interaction. Another good use 38:36 case for workflows that we use quite a 38:38 bit, and you see customers use as well, is 38:40 quote-unquote serverless 38:41 functions. So if you're familiar with 38:43 AWS Lambda, basically a kind of 38:45 self-contained piece of what most 38:47 people would call backend code that can 38:49 be invoked either on a timer or by, you 38:52 know, pinging it as an API request or 38:54 calling it like that. So lots of times 38:56 you need this for things like something to 38:58 put behind a Slack bot, right? When 39:00 you have a bot in Slack and someone 39:02 triggers it, it needs to actually call 39:04 some sort of logic on some other 39:07 system, right? And you could, you know, 39:09 spin up a server and maintain that 39:11 and host a whole codebase there and do 39:13 all that stuff.
But a lot of times 39:14 the Slackbot is just 39:16 an interface for something quick, like a 39:18 database query, or oh, go check this 39:20 set of logs and tell me if there's 39:21 anything new or relevant that I need to 39:22 find. And they're just kind of these 39:24 one-off functions that workflows are a 39:26 really good paradigm for as 39:27 well. Basically, anything where you need 39:30 sort of an API in a box, workflows 39:33 work really well for that too. And 39:35 something new that we're seeing and 39:37 really excited about is the possibility 39:38 for workflows to be tools that AI agents 39:41 can use. So we recently launched 39:44 Retool agents, which we took a look at on 39:46 Friday of last week. If you haven't 39:48 caught that, please do. It's over on 39:50 our YouTube channel; it's just 39:51 Retool on YouTube. But basically, AI 39:54 agents are most useful, and really only 39:56 useful, when they have access to tools. 39:59 And workflows are already hooked into 40:01 all of your other business systems. Like 40:02 we just saw, it's hooked into your 40:03 database. It's hooked into maybe your 40:05 social media accounts, maybe your 40:07 Salesforce data, things like that. And 40:08 so workflows give you a really nice way 40:11 to set, you know, boundaries on what can 40:13 access these resources, how these 40:15 resources can be accessed, etc., and 40:17 giving those same sort of bounded 40:20 request abilities to an AI agent can 40:22 be super powerful. So in the example 40:24 that we built on Friday, we have an 40:26 agent that has three separate workflows 40:28 that it can use to take action, to 40:31 build some kind of cool video concepts. 40:32 So if you're interested in that, 40:34 definitely bounce over to YouTube and 40:36 check that out, because that was a 40:38 fun one. Cool.
But yeah, I 40:42 think that's it as far as the 40:43 workflow content that we wanted to 40:46 talk through today with y'all. So again, 40:48 just a reminder that if you 40:51 want to post some questions in the chat, 40:53 please do that. We have, looks like, 40:55 about 20 minutes to get to a bunch of 40:57 those. We already see some coming in, 40:59 and the Retool folks in the chat are 41:01 helping to grab all of those for me. So 41:03 yeah, please post those in the chat 41:05 and we will take a bunch of those 41:07 now. Cool. We have the first question 41:09 here. What is the Slackbot that's used 41:11 in the looper? Is that a function, a 41:13 Retool agent, or something else? Yeah. 41:15 So basically, let's bounce actually 41:17 back over here and take a look at what 41:18 that is. Retool offers a bunch of 41:20 different, we call them resources, 41:22 which are basically integrations to 41:23 other tools, and Slack is one of them. So 41:26 when you have a new workflow block, you 41:29 can give it a resource query and 41:31 basically say, okay, REST API is just 41:32 the default that pops up here. But 41:34 we can say, okay, we want to call, you 41:37 know, we want this to be a query into 41:38 Slack somehow. This is an integration 41:41 that we basically set up ahead of time 41:42 so that this is connected to our 41:44 specific Retool Slack instance. So it 41:47 knows everything about our channels. It 41:49 has the proper level of access that our 41:51 administrators have decided is 41:52 appropriate, all that stuff. So this 41:54 makes it really easy for this kind of 41:56 integration to be set up once and then 41:58 reused by multiple people across the 41:59 organization. So once we say this is the 42:02 Slack integration we want to use, we can 42:04 then configure this to say, 42:06 okay, we want to post a message into a 42:08 particular channel.
If you wanted to do 42:09 something like, you know, list out all 42:11 the conversations that exist, all 42:13 of the other various things you can do 42:15 with the Slack API are kind of pre-built 42:17 into this operations list. But by far 42:20 the most common is, you know, we want 42:21 our Retool workflow to be able to post 42:23 into Slack, right? And so based on which 42:25 one of these operations you pick, you 42:26 then get these different options of, 42:29 you know, pre-populated things 42:30 we think you'll probably need. Our posts 42:33 were just plain text, so we used the text 42:35 parameter here. But if you want to 42:37 use the fancy sort of Slack markup 42:39 that you can do, you can do that here 42:42 as blocks as well. So you can 42:44 choose your own flavor there. But 42:47 that is just a pre-built integration 42:49 that we already have as part of 42:51 Retool. So hopefully that's 42:52 helpful. I do just want to flag that 42:55 Daniel in our Zoom chat is going 42:57 to be posting a poll. This is super 42:59 helpful for us to know if today was 43:01 useful to y'all, or if you wanted to see 43:03 us do something differently. Again, 43:04 as Sarah mentioned, this is not the last 43:06 one of these we're going to do. So 43:07 once you see that poll pop up on your 43:10 screen, please just give us a quick 43:11 thumbs up, thumbs middle, or 43:13 thumbs down. And if you do thumbs 43:15 middle and say it was solid, but I had 43:16 questions, please again drop those in 43:19 the chat. We'll keep going through 43:20 those. 43:22 All right, let's delete this here just 43:24 to clean this up a little bit. Cool. 43:26 Cool.
So hopefully that was helpful 43:28 on Slack. But yeah, it's a 43:29 built-in Retool integration 43:30 that you can use, because we found that a 43:32 lot of folks want to, either from 43:34 Retool apps or from their workflows, 43:36 call Slack. So that's something 43:38 that is useful. All right, 43:41 question from Michael. Yeah, welcome 43:43 back, by the way. I feel like a familiar 43:45 face around here at AI build week. 43:47 So his question is: our decide 43:50 relevance block, which as a reminder 43:51 is our AI prompt here, makes 43:53 decisions for a batch of posts in one 43:55 single LLM call. And while that's simple 43:57 and cost-effective, as you're only 43:59 prompting the LLM once, the model 44:02 could give a different decision 44:03 depending on what other posts it has 44:04 read before. So, you know, the different 44:06 posts in a given batch could influence 44:08 each other. So his question is, do you 44:10 have further guidance, pros and cons, for 44:12 calling the LLM in a batch versus 44:13 calling the LLM in a loop? Yeah, I think 44:16 this really depends on what sort of 44:19 overlap you are seeing and what sort of, 44:22 for lack of a better word, 44:24 contamination between the individual 44:26 posts you're seeing. In our 44:30 case, in the testing that I've done for 44:32 this particular use case, I found 44:34 that it's able to evaluate each post 44:36 pretty independently, because we've 44:38 given it that specific guidance 44:40 of saying, you're getting a batch of 44:41 posts, please look at them one at a 44:43 time, give each of them a score. Like, 44:45 it's able to understand the 44:47 nuance of that language. But again, 44:50 if this is something you were concerned 44:52 about, you could in the prompt very 44:54 explicitly say, you know, evaluate each 44:56 post one at a time, etc.
And give it 45:00 more explicit guidance in that way. 45:02 But again, yeah, if you're finding 45:05 that you're seeing a lot of posts that 45:07 are coming through as relevant 45:09 that you don't think are, and 45:11 you think that's because they're getting 45:12 contaminated by previous decisions, 45:15 then that would be the case where, 45:17 yeah, just like we did 45:19 for Slack, you would use a loop block 45:21 here, and instead of the Slack resource, 45:23 you would use the Retool AI resource, 45:25 and you could say, okay, cool, we're 45:26 going to loop through each of these 45:27 posts, we're going to just prompt 45:29 the AI one at a time for them, and 45:33 that would give you a 45:34 clean slate, a blank slate each time. So 45:37 that's something that you would definitely just 45:38 need to experiment with and see if you 45:39 feel like that's the reason you're not 45:41 getting the output that you want. But 45:44 in my experience in this particular 45:46 workflow, we've been running this for 45:47 a bit now, and, you know, it's able to 45:50 distinguish that. I think the other 45:52 thing that's helping there is 45:54 we're not giving it a set of 100 45:55 posts, for example, right? I think that 45:57 as you utilize more of the 45:59 context window, that's where that 46:00 confusion might start coming in. We're 46:01 giving it a very focused set of, you 46:03 know, here's basically just the text of 46:05 the post, that's pretty much it, and the 46:07 URL. And, you know, we're limiting that to 46:10 a pretty contained list of just a few 46:13 posts. So that's been 46:15 helpful, and I haven't found 46:16 any problems in this particular use 46:18 case. 46:20 Cool. Gabriel asks, "How wide are 46:23 you planning to expand the resources or 46:24 tools that the workflows and agents have 46:26 access to?"
Yeah, so that's a good 46:28 question. I will say that we have a 46:32 long list of integrations that we 46:35 already natively support. 46:38 I believe, actually, 46:40 if we go over here, yeah, this is 46:42 our brand new integrations page, which 46:43 just launched this week. And so you 46:45 can go through here and see all 46:47 of these different tools that we already 46:48 do have integrations for. And just like 46:50 Slack, these can just be natively used 46:52 inside of your workflows. You don't 46:54 have to do any sort of custom, you know, 46:56 API calling or anything like that 46:58 to get access to any of these tools that we 47:00 have built integrations for. So we're 47:02 always looking to expand this list, 47:04 obviously largely based on customer 47:06 feedback. So if a new tool comes out 47:08 and a lot of our customers are asking, 47:09 okay, why can we use this, where 47:11 can we use this, how can we use this, 47:13 we, you know, consider that pretty 47:14 heavily when expanding this list. So if 47:16 there's a tool that you wish integrated 47:18 into Retool more natively, definitely 47:20 hop over to the community and 47:22 make your voice heard there. 47:24 But another part of Retool's 47:28 flexibility: 47:29 you'll notice here that some of 47:30 these aren't tools per se. They're 47:33 just protocols, right? And so if 47:35 a tool that you like isn't supported 47:37 here, you can use our REST API 47:39 connector, our GraphQL connector, or our 47:42 SOAP API connector to basically make 47:44 a dedicated API call to that 47:46 service. So even if you don't have an 47:49 integration natively available, as 47:51 long as the tool you're trying to 47:52 integrate with has some sort of API, 47:54 you can use this REST API 47:56 integration, or any of the other API 47:57 integrations, to access that.
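As a sketch of that REST-connector fallback, here is roughly what a plain HTTP call to the Bluesky search endpoint could look like. The endpoint and parameter names (app.bsky.feed.searchPosts with q, since, and limit) are my best understanding of the public Bluesky API, so verify them against the current API docs before relying on this:

```javascript
// Build the search request URL for a plain REST API call.
// Endpoint and parameter names are assumptions about the public Bluesky API.
function buildSearchUrl(term, since) {
  const params = new URLSearchParams({ q: term, since, limit: "25" });
  return `https://public.api.bsky.app/xrpc/app.bsky.feed.searchPosts?${params}`;
}

const url = buildSearchUrl("retool", "2024-07-14T06:10:00Z");
// In a workflow's REST API block you would then issue the GET request,
// e.g. fetch(url), adding authentication if the endpoint requires it.
```

The same shape, build the URL, issue the request, parse the JSON response, applies to any service with an HTTP API, which is why the generic connectors cover tools without a native integration.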
48:00 And for agents specifically, 48:03 we support using MCP servers as tools as 48:05 well. So if you have an MCP server 48:08 that you want to use, you can give your 48:09 agent access to that MCP server with 48:12 just the same sort of native resource 48:14 integration we looked at with some of 48:15 these other tools. And then your 48:17 agent will have access to all the tools 48:18 that that server supports as well. So 48:20 we're very focused on being super 48:22 platform agnostic, you know, 48:24 data-format agnostic, and trying to let 48:27 you connect as many and as varied 48:29 tools as you would like. So again, 48:31 look at the list here, retool.com/ 48:33 integrations. If there's one that's not 48:34 on this list that you want to see, 48:35 definitely let us know in the 48:37 community, but we're always thinking 48:38 about how to make other tools easier to 48:40 integrate with. 48:43 All right. Let's keep going here. 48:46 Alexi asks, "If you're just 48:48 experimenting and playing around with 48:48 multiple structured and unstructured data 48:50 sources and wanted to leverage AI to 48:53 gain insights on that interrelated data, 48:54 what would you recommend, workflows or 48:56 agents, for exploratory work?" Yeah, this 48:58 is a good question, and it gets at 48:59 the architecture differences 49:01 between workflows and agents. So 49:04 the key difference that I 49:06 see is that if you are working on a 49:08 problem where, potentially, 49:11 you want the AI to be able to 49:14 take multiple steps to do its 49:17 own investigation, and the path to a 49:20 solution isn't always the same each 49:22 time, that's where an agent really 49:24 comes in handy, where you can say, okay, 49:25 cool, here's the agent, you have access to 49:27 these tools; and then the AI in that 49:30 case is deciding on the approach 49:31 of, how do I go about this?
What tools do 49:33 I use? Do I have to ask the user for 49:34 follow-up information? Whereas if the 49:37 path from initial problem to a 49:39 successful solution is more well 49:40 defined, that's where a workflow 49:43 might come in handy, right? So it looks 49:44 like in your example, you have a bunch 49:46 of data about a client, structured data 49:48 entries, previous chat history, and you 49:50 just want to experiment with what 49:51 insights you can get out of it. And so 49:53 in that case, I would maybe start with 49:54 something even more open-ended than a, 49:57 you know, workflow or an agent. I 49:59 would just say, okay, cool, I'd do a 50:00 single AI action, maybe in an app 50:03 in our app builder, for example, and 50:05 depending on how much data you have, you 50:07 might be able to just put all of that 50:08 into the prompt and ask for suggestions: 50:10 okay, based on all of this, what 50:13 sort of follow-up questions should I ask 50:14 this client, or what 50:17 patterns am I not 50:19 seeing here? Those sorts of open-ended 50:20 questions, even as a single LLM 50:23 call, are really informative 50:25 to me, at least, about, you 50:28 know, where to go from there, and 50:29 that will hopefully influence 50:32 the next pattern you take. But in 50:33 general, if the path from beginning to 50:34 end is more structured, that's where I 50:36 would recommend a workflow. There can 50:37 obviously be LLM steps in the middle to 50:39 give you that generative sort of output. 50:40 But if you don't know, or the path 50:42 from beginning to end is going to be 50:43 different for each particular 50:45 set of data or problem you're 50:47 encountering, that's where an agent, 50:48 with that flexibility, is helpful. 50:52 Cool. Yeah, Ashley asks, why do we 50:56 need the filter relevant posts code? 50:58 Isn't that what the AI prompt is doing?
51:00 Why do we need this? And isn't this 51:01 redundant? Yeah, good question. First of 51:02 all, welcome back. I feel like another 51:04 familiar face, and I've enjoyed 51:06 your questions this whole week. But 51:08 basically, this filter relevant posts, if 51:10 I wasn't clear: the AI is not making 51:13 any decisions about, like, culling the 51:17 posts out of the array. It's assigning 51:19 every single post a score between 1 and 51:21 10. So this filter relevant posts code, 51:24 what this is actually doing, this step 51:26 right here, is really the key piece. It's 51:28 saying, okay, any post greater than 51:31 four is going to get passed through. Any 51:33 post that's less than four, you 51:36 know, is going to fall out of the 51:37 array. So the AI is not doing any of 51:39 that filtering for us. It's just doing 51:41 the scoring. This code here is 51:43 actually deciding, okay, anything less 51:44 than a four is not relevant and so 51:46 shouldn't be passed to Slack, right? So 51:49 that's what that's 51:51 doing there. Hopefully that's 51:52 helpful. Cool. Jumping back just 51:55 briefly to the topic of integrations: 51:57 we had a post in the chat noting that a 51:59 Tableau connection would be great, 52:01 and yeah, absolutely, it's on our 52:03 list and something that we're 52:05 exploring. So we know that a 52:06 lot of folks use those sorts of BI tools 52:08 in conjunction with Retool. So 52:11 appreciate it, and thanks for the 52:12 feedback there. 52:14 Cool. All right, Audrey asks, "Not 52:16 sure if I caught it, so apologies if 52:18 repetitive." No worries, that's why we're 52:19 here. "Do you have any tips on how to 52:21 ensure the output of the AI block is 52:22 something that could be easily parsed in 52:24 later code blocks, like a JSON object, or 52:26 any tips on how to handle when the 52:27 output is not in the expected format?"
52:29 Yeah, good question. There are a couple 52:30 different ways to do this. But 52:32 this is, at its core, one of the difficulties 52:35 with integrating AI into more 52:38 deterministic workflows, right? Because, 52:40 like you saw in our example (or maybe if 52:42 you joined late, you didn't see), 52:43 originally, when we ran this AI step, 52:46 it didn't give us this clean JSON 52:49 output. It had a triple backtick, "json", 52:51 and a closing triple backtick, right? And when 52:53 it's outputting in a web chat, 52:55 that's useful for formatting reasons, 52:57 but when we're calling it via the API, 52:58 we really don't want that, right? So 53:00 there's a couple different ways to get 53:01 around that. The way we got around it is 53:02 we basically just said in the prompt: 53:04 "Do not include backticks or any other 53:06 formatting other than the raw JSON." So 53:09 this is a fine way to go about it. 53:11 It's a little bit frustrating because it 53:13 is kind of just trial and error: 53:14 you have to wait for there to be a 53:15 failure, see what type of failure 53:17 happened, and then update your prompt. 53:19 So in that case, 53:22 if you want to 53:24 have some more confidence, that's maybe 53:25 not the best way to go about it, but 53:27 it does work. And it's by far the 53:28 simplest approach: just explicitly 53:30 updating your prompt every time you see 53:32 some sort of failure and trying to 53:34 account for that particular failure 53:35 case, being more specific in your 53:36 instructions, that sort of thing. The 53:39 other option is basically to use 53:42 something like this code step 53:43 here that we have, to try to parse the 53:46 output.
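A sketch of what such a parse-guard code step could look like — the function name is illustrative, and the backtick-stripping fallback is one common approach, not the only one:

```javascript
// Try to parse an LLM reply as JSON, tolerating a ```json ... ``` wrapper.
// Returns the parsed object, or null if the reply is not valid JSON.
function parseModelJson(reply) {
  // First attempt: the raw reply as-is.
  try {
    return JSON.parse(reply);
  } catch (_) {
    // Fallback: strip the markdown code fences models sometimes add.
    const stripped = reply
      .replace(/^```(?:json)?\s*/i, "")
      .replace(/\s*```$/, "")
      .trim();
    try {
      return JSON.parse(stripped);
    } catch (_) {
      return null; // caller can loop back to the LLM or send an error alert
    }
  }
}
```

A `null` return is the signal to branch: retry the LLM step, or exit with a notification, as described next.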
So, for example, if you're 53:47 expecting JSON output and you have a 53:49 Python block or a JavaScript block that 53:51 runs a JSON parse, and that 53:53 JSON parse fails, you know that the model 53:56 didn't output valid JSON. So 53:59 you could either loop back and 54:01 try to call the LLM again, or you could 54:02 exit and send some sort of error 54:04 notification. There's a bunch of 54:05 different options there, but you can use 54:07 the strictness of code to 54:10 enforce the output of an LLM. The other 54:12 thing that's sometimes useful is you can 54:14 chain LLM steps together, like we talked 54:16 about before. So you could have another 54:18 large language model step here, another 54:19 AI step, that basically evaluates: is the 54:22 output of this first step valid JSON? If 54:24 not, please clean it up until it's just 54:26 valid JSON, etc. So again, that's not a 54:29 100% foolproof answer, but at least then 54:32 you have two separate language 54:33 models looking at the problem. And 54:35 we've seen people get a lot higher 54:37 success rates with that sort of 54:39 approach as well. So there are multiple 54:42 different ways to go about it. Again, it 54:43 depends a little bit on your use case, 54:45 how much certainty you 54:47 need, and how much error handling you 54:49 want to do. But hopefully that's 54:50 helpful. Thanks for the question. 54:53 All right. We'll take a couple quick 54:55 ones here, and then we've got a 54:57 deeper-dive one to wrap up. 55:00 So yeah, we had a question on what the 55:02 concurrency limit configuration 55:04 option is. Basically, that was 55:07 over in the triggers, I believe, 55:10 here under concurrency limit, and 55:12 it does exactly what 55:13 the tooltip says: it's the 55:16 maximum number of in-progress runs of the 55:18 workflow that are allowed, right?
So in 55:20 this case, we only want one run happening 55:23 at a given time, right? This shouldn't 55:24 be an issue because we're only running 55:26 it once an hour, and this workflow doesn't 55:28 take an hour to run. But, for example, if 55:30 you have a longer-running workflow, or if 55:31 you have a workflow that's 55:32 waiting for human input or something, 55:34 you could imagine a world where another 55:36 workflow trigger might fire while the 55:38 first run is still going. And if you're 55:39 reading and writing to or from the same 55:41 database or the same shared resource, 55:44 that could cause weird race 55:46 conditions or other things that you 55:48 don't really want to happen, right? So 55:50 that's when this is useful. 55:52 But yeah, that's basically what it 55:53 does: it's just the number of active runs 55:55 that can be happening at one time. 55:56 If this is set, it won't let 55:58 another run kick off while 56:00 the first one is still running. 56:02 All right. And then we had 56:05 somebody in the chat ask: is there 56:06 support for memory with Retool AI? So, 56:09 right now, that's not something that we 56:10 support natively. It's something we 56:12 are actively working on, especially in 56:13 the context of agents. But we do, like 56:17 I mentioned before, have Retool 56:19 Database, which is very useful as 56:20 an in-place memory store. So if 56:23 you have things that you want your AI to 56:25 remember, that's a case where you 56:27 could spin up a database table, 56:30 call it "memories" 56:31 or something, write individual rows 56:32 there, and then pull all of that in as 56:36 context into the prompt for the 56:38 next time this workflow runs.
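That pattern can be sketched in a few lines. This is a minimal in-memory stand-in, not the real database calls: `db` substitutes for your actual table resource, and the function names are made up for illustration.

```javascript
// DIY memory store sketch: write each exchange to a table, then load
// the rows back as context for the next LLM call. `db.rows` stands in
// for a real "memories" table; swap in your actual insert/select queries.
const db = { rows: [] };

function saveMemory(role, content) {
  db.rows.push({ role, content, createdAt: Date.now() });
}

function buildPromptWithMemory(userMessage) {
  // Concatenate prior messages so the model sees the full history.
  const history = db.rows
    .map((row) => `${row.role}: ${row.content}`)
    .join("\n");
  return `${history}\nuser: ${userMessage}`;
}

saveMemory("user", "My order number is 1234.");
saveMemory("assistant", "Thanks, noted order 1234.");
const prompt = buildPromptWithMemory("What was my order number?");
```

With real data you would likely also cap or summarize the history to stay within the model's context window.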
For 56:39 example, if you 56:42 want to create your own chat 56:44 interface, which we showed a little bit 56:45 of with Retool GPT last week: 56:49 we are saving each of those 56:50 messages, and each of those message 56:52 threads that a person has 56:53 with our AI, into Retool Database, so 56:56 that the next large language 56:58 model call that happens has all that 57:00 history available. So you can 57:02 give your AI memory that way. But 57:04 more to come on that for sure. 57:07 All right, cool. Let's do a question 57:10 from James here. "I've used a lot of 57:11 dynamic resources in my app building. 57:13 I've run into the issue where the agents 57:14 cannot use dynamic resources. For 57:16 example, if you have many MySQL 57:17 databases and you want to determine 57:19 which one to run against based on a dynamic 57:22 resource ID, is that something that's 57:24 achievable in some way?" Cool. Good 57:27 question. By the way, welcome back as 57:28 well. I feel like I've seen you around 57:30 this week too. But yeah, I think 57:33 this is probably a solvable 57:36 problem for agents. And again, this is the kind of 57:40 thing where, if we go back and look at 57:43 some of our 57:46 architectures here, this is 57:48 maybe the architecture that you 57:50 want. Instead of an LLM 57:52 routing between different LLMs, in your 57:54 case you would have an LLM 57:56 routing between different databases. And 57:58 so you'd have to get some sort of 58:01 list of those databases, and the 58:03 selection criteria, into the initial 58:05 prompt. But I would anticipate that 58:08 if you asked your agent to make a 58:10 selection on the relevant data source 58:12 first, you could then have all of 58:15 those data sources set up as tools and 58:17 say, "Okay, please make a selection on 58:18 this first."
You could use any of the 58:20 common prompting tricks, like 58:22 "explain your reasoning" 58:23 before you make a choice, both to 58:26 be able to see what it's thinking, and 58:28 because we've found that gives a little bit 58:30 better output. But then, if all of 58:32 those databases are tools, you 58:36 should see that it selects the right 58:38 one. Another way to make sure 58:40 this continues to happen correctly is to 58:42 use the evals that we looked at on 58:44 Friday. So, if you haven't seen that 58:46 session, definitely go back and check it 58:47 out. But basically, you can eval 58:50 whether your agent is making the correct 58:52 tool call based on the input. So you 58:54 could give it an input where you would 58:55 expect it to choose data source one, for 58:57 example, or database one, and ensure 59:01 that it does make that correct tool 59:03 call. So you could set up all those 59:05 different evals and then run them to 59:07 make sure that, now that we have 59:08 that prompting-based selection logic 59:10 in place, those evals 59:14 pass and it's making those 59:16 choices correctly. 59:19 It's going to require a little 59:20 bit of experimentation, but I hope that 59:22 was helpful in pointing you in 59:23 the right direction. Cool. All 59:27 right. Mike asks, "Are there any 59:30 human-in-the-loop options for 59:31 workflows?" The answer is yes. 59:34 This is something we are rolling out 59:36 over the coming weeks. We've been 59:39 testing it with some private beta 59:40 customers. But there is an 59:43 option for bringing a 59:45 human into the loop and flagging that 59:46 this is something a human 59:48 needs to take a look at, 59:52 which is kind of a pause in the 59:53 workflow, right?
So, how that looks 59:56 right now is this concept of a user 59:58 task. Basically, what this lets you 1:00:00 do is break out a task and 1:00:04 assign it to someone, or assign it to a 1:00:06 team. You can then have an 1:00:08 associated app so that the 1:00:10 person the task gets assigned to 1:00:12 has some sort of interface to take 1:00:13 action on what they 1:00:15 need to work on. So, for example, if 1:00:17 it's an approval to send an email, you 1:00:19 might build a basic Retool app with just a 1:00:21 text box. You could then have 1:00:24 our workflow here populate that text 1:00:27 box with the draft of the email it 1:00:29 had created, and then flag to the user: 1:00:31 "Hey, here's the link where you 1:00:33 can go look at the text. If it 1:00:34 looks good, click the approve button." 1:00:36 And then we'll basically 1:00:38 go back into the workflow run, 1:00:40 and it will just continue from there. So, 1:00:42 it's something that a lot of folks have asked 1:00:44 for, and something that we are working 1:00:46 on rolling out to everybody, but 1:00:47 there are obviously a lot of edge 1:00:49 cases and things here to deal 1:00:51 with. So, yeah, look for that 1:00:54 in your Retool Workflows environment, and 1:00:56 if you're not seeing it and 1:00:58 it's something you need for a use 1:01:00 case that you're working 1:01:01 on, definitely let us know in the 1:01:03 community. Thanks, Mike, for your 1:01:05 question. 1:01:07 Cool. All right, y'all. Well, we are 1:01:09 at the hour. Thanks, everybody, for 1:01:11 coming. It was super awesome to see 1:01:13 everybody's questions and see 1:01:15 everybody here live on 1:01:18 Zoom. So, again, this concludes our AI build 1:01:20 week.
If you missed any of our sessions 1:01:21 this week, they are all available on 1:01:24 our YouTube channel, youtube.com/retool. 1:01:27 And as Sarah is posting in the chat, 1:01:29 definitely head over to the community 1:01:31 for the thread of questions, replay 1:01:33 access, and all the resources. You'll be 1:01:35 able to download the actual JSON for 1:01:37 this workflow. So if you want to run 1:01:38 it in your own account, you obviously 1:01:40 have to hook up your own Slack and your 1:01:41 own AI, but all the blocks that we 1:01:43 laid out on the canvas are there in 1:01:44 that JSON for you to get started. And 1:01:47 yeah, thanks for coming, y'all. 1:01:48 >> Thank you, everyone. And thank you so 1:01:50 much, Kenan, for another incredible 1:01:53 session, and thanks to everyone behind 1:01:54 the scenes. We loved all your 1:01:57 questions, and we loved all the 1:01:58 engagement. So we'll continue the 1:02:00 conversation in the community. We 1:02:02 shared so much information today and 1:02:04 so many resources. So feel free to ping 1:02:07 us, and we'll see you back here 1:02:10 soon. Thanks, everyone, for joining our 1:02:12 first-ever multi-day webinar build week. 1:02:14 So thank you. Bye.