you2idea@video:~$ watch Ey18PDiaAYI [8:26:38]
// transcript — 24288 segments
0:01 In this course, I'm going to take you from a complete beginner to building
0:05 powerful no-code AI agents. I don't have any coding experience, and you don't
0:08 need any either. In the past eight months, I've made over half a million
0:12 dollars in revenue by building and teaching people how to build AI agents.
0:15 In this video together, we're going to set up your two-week free n8n trial. We're
0:19 going to set up credentials together and walk through step-by-step builds. And by
0:22 the end, you'll have over 15 AI automations ready to take advantage of
0:25 this opportunity. All right, so here is a quick look at the high-level course
0:29 agenda. Keep in mind everything will be timestamped below so you can jump around
0:32 to where you need. But I definitely recommend saving this for later. As you
0:35 can see, this is extremely comprehensive, packed with a ton of
0:38 value, so you can come back to this later when you want to explore different
0:42 chapters. But what we're going to do is start off with talking about AI agents
0:45 and the opportunity that we are all living in right now. Then we'll move
0:49 into Foundations. I'll set up a free two-week trial with you guys. I will talk
0:53 about the UI. We'll get familiar with it and go over some foundational knowledge
0:56 you'll need. From there, we'll move into step-by-step workflows where we're
0:59 actually using that knowledge and connecting to different integrations and
1:03 setting up some pretty cool automations right away. Then, we'll talk about APIs
1:07 and HTTP requests. We'll set up a few common examples together, and you'll see
1:10 it's not that difficult. Then, moving into the back half of the course, we'll
1:14 talk about AI agent tools and memory. We will discuss multi-agent architectures.
1:18 I'll talk about prompting, do a live example, and some other cool tips that I
1:21 want to share with you guys that could be helpful. We will look at webhooks,
1:24 what those really mean. And then we'll look at a few example workflows where
1:28 we've triggered them with webhooks. I'll talk about MCP servers, what that
1:32 really is, and we'll do a step-by-step self-hosted setup of n8n and connect
1:36 to some MCP servers. And finally, we'll close off with lessons from my first 6
1:40 months of building AI agents. So, if that all sounds good to you guys, let's
1:48 get started. All right, so AI agents, artificial intelligence, whatever it is,
1:51 there is definitely a lot of hype. There's no denying that. And so the
1:55 purpose of this section is just to make sure we can cut through all of that and
1:58 actually understand what is an AI agent at its core. What can they do and why do
2:03 we need them? You've probably heard, you know, "digital employee" or "virtual
2:07 assistant," all this kind of stuff. But we need to understand what that actually
2:11 means and what powers them. So at this point I'm sure we are all familiar with
2:15 something like ChatGPT, which is a large language model at its core. And what
2:18 we're looking at right here is a very simple visualization of how a large language
2:22 model works. Meaning right here in green we have a large language model. Let's
2:26 say it's ChatGPT, and we, the user, give it some sort of input. So maybe that's like
2:30 hey, help me write an email to John. The LLM would then take our input and
2:35 process it. It would basically just create an email for us and then it would spit that
2:39 out as an output and that's it. This LLM at its core cannot take any action. It's
2:43 really not that practical. It just kind of helps you be more productive because
2:47 at the end of the process, we'd have to take this output and copy and paste it
2:50 into something that actually can take action like Gmail. And so the power of
2:54 these large language models really comes into play when we start to expose them
2:58 to different tools. And tools just means any of these integrations that we use
3:02 every single day within our work that let us actually do something. So whether
3:06 that's send an email or update a row in our CRM or look at a database or
3:11 Airtable, whatever it is, even Outlook, that's a tool. It just means connecting
3:16 to an actual platform that we use to do something. So now instead of having the
3:19 LLM help us write an email that we would copy and paste, or us exporting a Google
3:24 Sheet and giving it to an LLM to analyze, it can basically just interact
3:27 with any of its tools that we give it access to. So when we add an LLM to
3:33 tools, we basically can get two different things and that is either an
3:38 AI workflow or an AI agent. So right away you can already tell what the
3:40 difference is, but you can also see some similarities. So, let's break it down.
3:44 Starting with an AI workflow, we can see that we have an input similar to like we
3:48 did up top with our LLM. But now, instead of just going input, LLM,
3:54 output, we can work in those tools right into the actual AI workflow itself. So,
3:58 here is an example of what an AI workflow could practically look like.
4:01 First of all, we have a tool which is HubSpot, and that's going to be the
4:04 input for this workflow. This will basically pass over a new lead that has
4:08 been inserted into our CRM. Then we're hitting another tool which is Perplexity
4:11 which helps us do research. So we're going to do research on that lead. From
4:15 there, after we get that research, we're going to hit an LLM, which is where this
4:18 whole AI-powered workflow terminology comes in, because we're using that LLM
4:23 to then take the research, draft a personalized email, and then it can use
4:27 another tool to actually send that email. And the reason that we do this as
4:30 a workflow is because this is going to happen in the same four steps in that
4:35 order every time: new lead comes in, research, write the personalized email,
4:39 send the email. And so whenever we know a process is linear or sequential or
4:43 it's going to follow that order every time, it's much much better to do an
4:47 actual workflow rather than send that off to an AI agent where we have an
4:51 input, we have the LLM, which is the AI agent. This is the brain of the whole
4:54 operation and it has access to all of the different tools it can use and then
4:59 it has an output. So yes, it is true that this AI agent down here could do
5:02 the exact same job as this AI workflow. We could also over here with an AI agent
5:07 get a new form and have a new row in our CRM. The agent could then think about it
5:10 and decide, okay, I'm going to use perplexity to do research and then after
5:13 that I'm going to send an email with my email tool. But it's not the most
5:16 effective way to do it because it's going to be more expensive. It's going
5:20 to be slower and it's going to be more error-prone. So basically the whole idea
5:25 is AI agents can make decisions and act autonomously based on different inputs.
5:29 AI workflows follow the guardrails that we put in place. There's no way they can
5:32 deviate off the path that we chose for them. So, a big part of building
5:36 effective systems is understanding, okay, do I need to build an AI workflow
5:39 or am I going to build an AI agent? Is this process deterministic or
5:43 non-deterministic? Or in other words, is it predictable or is it unpredictable?
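To make that contrast concrete before we move on, here's a minimal sketch of the two patterns in code. Every helper here (perplexityResearch, llm, llmDecide, runTool, sendEmail) is a hypothetical stand-in, not an n8n or vendor API:

```typescript
// Hypothetical helpers -- stand-ins for the tools and LLM calls discussed above.
declare function perplexityResearch(lead: { name: string; email: string }): Promise<string>;
declare function llm(prompt: string): Promise<string>;
declare function sendEmail(to: string, body: string): Promise<void>;
declare function llmDecide(context: string, tools: string[]): Promise<{ tool: string; input: string }>;
declare function runTool(tool: string, input: string): Promise<string>;

// AI workflow: deterministic -- the same four steps, in the same order, every time.
async function aiWorkflow(lead: { name: string; email: string }): Promise<void> {
  const research = await perplexityResearch(lead);                      // step 2: research the lead
  const draft = await llm(`Draft a personalized email:\n${research}`);  // step 3: LLM writes the email
  await sendEmail(lead.email, draft);                                   // step 4: send it
}

// AI agent: non-deterministic -- the LLM (the "brain") decides which tool to use next.
async function aiAgent(input: string, tools: string[]): Promise<string> {
  let context = input;
  while (true) {
    const action = await llmDecide(context, tools);        // the brain picks the next move
    if (action.tool === "finish") return action.input;     // the agent decides it's done
    context += "\n" + (await runTool(action.tool, action.input)); // feed the result back in
  }
}
```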
5:46 If something's unpredictable, that's when we're going to use an AI agent with
5:50 a brain and with different tools to actually do the job. And that's where
5:53 the whole autonomy comes into play. And I don't want to dive too deep into the
5:56 weeds of this right now. We'll cover this later in a different section. But
6:00 real quick, four main pros of AI workflows over AI agents. Reliability
6:05 and consistency, cost efficiency, easier debugging and maintenance, and
6:08 scalability. Once again, we have a whole section dedicated to this idea and we're
6:12 going to dive into it after we've built out a few AI workflows, but I wanted you
6:15 guys to understand this because obviously the whole purpose you probably
6:18 came here was to learn how to build AI agents. But before we build AI agents,
6:22 we're going to learn how to build AI workflows. It's the whole concept of
6:26 crawl, walk, run. You wouldn't just start running right away. And trust me,
6:29 after you've built out a few AI workflows, it's going to make a lot more
6:32 sense when you hop into building some more complex agentic systems. But just
6:36 to give you that quick fix of AI agent knowledge, and we'll revisit this later
6:39 when we actually build our first agent together. What is the anatomy of an AI
6:43 agent? What are the different parts that make one up? So, here's a simple diagram
6:47 that I think illustrates AI agents as simply as possible. We have an input, we
6:51 have our LLM, and we have our output like we talked about earlier. But then
6:55 inside the AI agent you can see two main things. We have a brain and we have
6:59 instructions. So the first thing is a brain. This comes in the form of a large
7:03 language model and also memory. So first off the large language model this is an
7:07 AI chat model that we'll choose, whether that's an OpenAI model or an Anthropic
7:11 model or a Google model. This is going to be what powers the AI agent to make
7:16 decisions, to reason, to generate outputs, that sort of stuff. And then we also
7:19 have the memory. So this can come in the form of long-term memory as well as
7:22 short-term memory. But basically, we want to make sure that if we're
7:25 conversing with our agent, it's not going to forget what we're talking about
7:29 after every single sentence. It's going to retain that context window, and it
7:33 can also remember things that we talked about a while back. And then the other
7:37 piece is the instructions for the AI agent, which is also kind of referred to
7:41 as a system prompt. And this is really important because this is telling this
7:45 AI agent, you know, here's your role, here's what you do, here are the tools
7:49 you have. This is basically like your job description. So, the same way you
7:53 wouldn't expect a new hire to hop into the company and just start using all the
7:56 different tools and knowing what to do, you would have to give them some
8:01 pretty specific training: this is your end goal, here are the
8:03 tools you have, and here's when you use each one to get the job done. And the
8:07 system prompt is different than the input, which is kind of referred to as a
8:11 user prompt. And think of it like this: when you're talking to ChatGPT in your
8:15 browser, every single message that you're typing and sending off to it is a
8:19 user message, because that input changes every time. It's dynamic, but the system
8:23 prompt is typically going to stay the same over the course of this agent's
8:27 life unless its role or actual instructions are going to change. But
8:30 anyways, let's say the input is, hey, can you help me send an email to John?
8:34 What's going to happen is the agent's going to use its brain to understand the
8:37 input. It's going to check its memory to see if there's any other interactions
8:40 that would help with this current input. Then it will look at its instructions
8:44 and see, okay, how do I actually send an email to John? And then it will call on
8:48 its tool to actually send an email. So at a high level, that is the anatomy of
8:52 an AI agent. And I hope that that helps paint a clear picture in your mind.
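As a rough sketch, here's how the system prompt and the user prompt sit side by side in the message format most chat model APIs use (the wording is illustrative, not a prescribed prompt):

```json
{
  "model": "gpt-4.1-mini",
  "messages": [
    {
      "role": "system",
      "content": "You are an email assistant. You have a Gmail tool; use it when the user asks you to send an email."
    },
    {
      "role": "user",
      "content": "Hey, can you help me send an email to John?"
    }
  ]
}
```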
8:55 Cool. So now that we've talked about what an AI agent is and what a workflow
8:59 is and why we want to walk before we run, let's actually get into n8n and
9:03 start building some stuff. All right. So, before we dive into actually building AI agents, I
9:08 want to share some eye-opening research that underscores exactly why you're
9:11 making such a valuable investment in yourself today. This research report
9:14 that I'm going to be walking through real quick will be available for free in
9:17 my Skool community if you want to go ahead and take a look at it. It's got a
9:21 total of 48 sources that are all from within the past year. So, you know it's
9:24 real, you know it's relevant, and it was completely generated for me using
9:27 Perplexity, which is an awesome AI tool. So, just a year ago, AI was still
9:31 considered experimental technology for most businesses. Now, it's become the
9:35 core driver of competitive advantage across every industry and business size.
9:39 What we're witnessing isn't just another tech trend. It's a fundamental business
9:44 transformation. Let me start with something that might surprise you. 75%
9:48 of small businesses now use AI tools. That's right. This isn't just enterprise
9:52 technology anymore. In fact, the adoption rates are climbing fastest
9:55 among companies generating just over a million dollars in revenue at 86%.
9:59 What's truly remarkable is the investment threshold. The median annual
10:03 AI investment for small businesses is just $1,800. That's less than 150 bucks
10:08 per month to access technology that was science fiction just a few years
10:13 ago. Now, I know some of you might be skeptical about AI's practical value.
10:16 Let's look at concrete outcomes businesses are achieving. Marketing
10:20 teams are seeing a 22% increase in ROI for AI-driven campaigns. Customer service
10:25 AI agents have reduced response time by 60% while resolving 80% of inquiries
10:29 without human intervention. Supply chains optimized with AI have cut
10:34 transportation costs by 5 to 10% through better routing and demand forecasting.
10:37 These are actual measured results from implementations over the past year. Now,
10:40 for those of you from small organizations, consider these examples.
10:44 Henry's House of Coffee used AI-driven SEO tools to improve their product
10:48 descriptions, resulting in a 200% improvement in search rankings and 25%
10:53 revenue increase. Vanisec Insurance implemented custom chatbots that cut
10:57 client query resolution time from 48 hours to just 15 minutes. Small
11:01 businesses using Zapier automations saved 10 to 15 hours weekly on routine
11:06 data entry and CRM updates. What's revolutionary here is that none of these
11:09 companies needed to hire AI specialists or data scientists to achieve these
11:15 results. The economic case for AI skills is compelling. 54% of small and medium
11:19 businesses plan to increase AI spending this year. 83% of enterprises now
11:23 prioritize AI literacy in their hiring decisions. Organizations with AI trained
11:27 teams are seeing 5 to 8% higher profitability than their peers. But
11:31 perhaps most telling is this. Small businesses using AI report 91% higher
11:37 revenue growth than non-AI adopters. That gap is only widening. So, the opportunity
11:42 ahead: the truth is, mastering AI is no longer optional. It's becoming the price
11:45 of entry for modern business competitiveness. Those who delay risk
11:48 irrelevance while early adopters are already reaping the benefits of
11:52 efficiency, innovation, and market share gains. Now, the good news is that we're
11:56 still in the early stages. By developing these skills now, you're positioning
11:59 yourself at the forefront of this transformation and going to be in
12:02 extremely high demand over the next decade. So, let's get started building
12:10 agents. All right, so here we are on n8n's website. You can get here using
12:14 the link in the description. And what I'm going to do is go ahead and sign up
12:17 for a free trial with you guys. And this is exactly the process you're going to
12:19 take. And you're going to get two weeks of free playing around. And like I said,
12:23 by the end of those two weeks, you're already going to have automations up and
12:26 running and tons of templates imported into your workflows. And I'm not going
12:29 to spend too much time here, but basically n8n just lets you automate
12:33 anything. Any business process that you have, you can automate it visually with
12:37 no code, which is why I love it. So here you can see n8n lets you automate
12:40 business processes without limits on your logic. It's a very visual builder.
12:44 We have a ton of different integrations. We have the ability to use code if you
12:48 want to. Lots of native nodes to do data transformation. And we have tons of
12:52 different triggers, tons of different AI nodes. And we're going to dive into this
12:55 so you can understand what's all going on. But there's also hundreds of
12:58 templates to get you started. Not only on the end website itself, but also in
13:02 my free Skool community. I have almost 100 templates in there that you can plug
13:05 in right away. Anyways, let's scroll back up to the top and let's get started
13:08 here with a new account. All right. So, I put in my name, my email,
13:11 password, and I give my account a name, which will basically be up in the top
13:16 search bar. It'll be like nateherkdemo.app.n8n.cloud. So, that's what
13:19 your account name means. And you can see I'm going to go ahead and start our
13:22 14-day free trial. Just have to do some quick little onboarding. So, it asks us
13:26 what type of team are we on. I'm just going to put product and design. It asks
13:29 us the size of our company. It's going to ask us which of these things do we
13:32 feel most comfortable doing. These are all pretty technical. I just want to put
13:35 none of them, and that's fine. And how did you hear about n8n? Let's go ahead
13:39 with YouTube and submit that off. And now you have the option to invite other
13:42 members to your workspace if you want to collaborate and share some credentials.
13:44 For now, I'm just going to go ahead and skip that option. So from here, our
13:48 workspace is already ready. There's a little quick start guide you could watch
13:50 from n8n's YouTube channel, but I'm just going to go ahead and click on
13:53 start automating. All right, so here we are. This is what n8n looks like. And
13:57 let's just familiarize ourselves with this dashboard a little bit real quick. So up
14:00 in the top left, we can see we have 14 days left in our free trial and we've
14:05 used zero out of 1,000 executions. An execution just basically means when you
14:09 run a workflow from end to end that's going to be an execution. So we can see
14:12 on the left-hand side we have overview. We have like a personal set of projects.
14:15 We have things that have been shared with us. We have the ability to add a
14:19 project. We have the ability to go to our admin panel where we can upgrade our
14:23 instance of n8n. We can turn it off. That sort of stuff. So here's my admin
14:26 panel. You can see how many executions I have, how many active workflows I have,
14:29 which I'll explain what that means later. We have the ability to go ahead
14:33 and manage our n8n versions. And this is where you could kind of upgrade your
14:36 plan and change your billing information, stuff like that. But you'll
14:39 notice that I didn't even have to put any billing details to get started with
14:43 my two-week free trial. But then if I want to get back into my workspace, I'm just
14:45 going to click on open right here. And that will send us right back into this
14:49 dashboard that we were just on. Cool. So right here we can see we can either
14:53 start from scratch, a new workflow, or we can test a simple AI agent example.
14:56 So let's just click into here real quick and break down what is actually going on
15:00 here. So, in order for us to actually access this demo where we're going to
15:03 just talk to this AI agent, it says that we have to start by saying hi. So,
15:06 there's an open chat button down here. I'm going to click on open chat and I'm
15:11 just going to type in here, hi. And what happens is our AI agent fails because
15:15 this is basically the brain that it needs to use in order to think about our
15:18 message and respond to us. And what happens is we can see there's an error
15:21 message. So, because these things are red, I can click into it and I can see
15:25 what is the error. It says error in subnode OpenAI model. So that would be
15:29 this node down here which is called OpenAI model. I would click into this
15:33 node and we can basically see that the error is there is no credentials. So
15:38 when you're in n8n, what happens is, in order to access any sort of API (which
15:41 we'll talk about later), in order to access something like your Gmail or
15:47 OpenAI or your CRM, you always need to import some sort of credential, which is
15:50 just a fancy word for a password, in order to actually get into that
15:54 information. So right here we can see there's 100 free credits from OpenAI.
15:58 I'm going to click on claim credits. And now we're just using our n8n free
16:02 OpenAI API credits and we're fine on this front. But don't worry, later in
16:05 this video I'm going to cover how we can actually go to OpenAI and get an API key
16:10 and create our own password in here. But for now, we've claimed 100 free credits,
16:13 which is great. And what I'm going to do is just go ahead and resend this message
16:16 that says hi. So I can actually go to this hi text and I can just click on
16:20 this button which says repost message. And that's just going to send it off
16:23 again. And now our agent's going to actually be able to use its brain and
16:27 respond to us. So what it says here is welcome to n8n. Let's start with the
16:30 first step to give me memory. Click the plus button on the agent that says
16:34 memory and choose simple memory. Just tell me once you've done that. So sure,
16:37 why not? Let's click on the plus button under memory. And we'll click on simple
16:41 memory real quick. And we're already set up. Good to go. So now I'm just going to
16:46 come down here and say done. Now we can see that our agent was able to use its
16:49 memory and its brain in order to respond to us. So now it can prompt us to add
16:54 tools. It can do this other stuff, but we're going to break that down later in
16:57 this video. Just wanted to show you real quick demo of how this works. So, what I
17:01 would do is up in the top right, I can click on save just to make sure that
17:05 what we've done is actually going to be saved. And then to get back out to the
17:08 main screen, I'm going to click on either overview or personal. But if I
17:11 click on overview, that just takes us back to that home screen. But now, let's
17:15 talk about some other stuff that happens in a workflow. So, up in the top right,
17:19 I'm going to click on create workflow. You can see now this opens up a new
17:22 blank page. And then you have the option up here in the top left to name it. So
17:26 I'm just going to call this one demo. Now we have this new workflow that's
17:31 saved in our n8n environment called demo. So a couple things before we actually
17:34 drag in any nodes is up here. You can see where is this saved. If you have
17:37 different projects, you can save workflows in those projects. If you want
17:40 to tag them, you can tag different things like if you have one for customer
17:45 support or you have stuff for marketing, you can give your workflows different
17:48 tags just to keep everything organized. But anyways, every single workflow has
17:53 to start off with some sort of trigger. So when I click on add first step, it
17:56 opens up this panel on the right that says what triggers this workflow. So we
18:00 can have a manual trigger. We can have a certain event like a new message in
18:04 Telegram or a new row in our CRM. We can have a schedule, meaning we can set this
18:08 to run at 6 a.m. every single day. We can have a web hook call, form
18:11 submission, chat message like we saw earlier. There's tons of ways to
18:15 actually trigger a workflow. So for this example, let's just say I'm going to
18:18 click on trigger manually, which literally just gives us this button
18:21 where if we click test workflow, it goes ahead and executes. Cool. So this is a
18:26 workflow and this is a node, but this is a trigger node. What happens after a
18:29 trigger node is different types of nodes, whether that's like an action
18:34 node or a data transformation node or an AI node, some sort of node. So what I
18:39 would do is if I want to link up a node to this trigger, I would click on the
18:42 plus button right here. And this pulls up a little panel on the right that says
18:46 what happens next. Do you want to take action with AI? Do you want to take
18:49 action within a certain app? Do you want to do data transformation? There's all
18:52 these other different types of nodes. And what's cool is let's say we wanted
18:55 to take action within an app. If I clicked on this, we can see all of the
18:58 different native integrations that n8n has. And once again, in order to connect
19:02 to any of these tons of different tools that we have here, you always need to
19:06 get some sort of password. So let's say Google Drive. Now that I've clicked into
19:09 Google Drive, there's tons of different actions that we can take and they're all
19:12 very intuitive. you know would you want to copy a file would you want to share a
19:16 file do you want to create a shared drive it's all very natural language and
19:19 let's say for example I want to copy a file in order for nitn to tell Google
19:24 drive which file do we want to copy we first of all have to provide a
19:27 credential so every app you'll have to provide some sort of credential and then
19:31 you have basically like a configuration panel right here in the middle which
19:35 would be saying what is the resource you want what do you want to do what is the
19:38 file all this kind of stuff so whenever you're in a node in nen what you're
19:42 going to have is on the left you have an input panel which is basically any data
19:45 that's going to be feeding into this current node. In the middle you'll have
19:49 your configuration which is like the different settings and the different
19:52 little levers you can tweak in order to do different things. And then on the
19:56 right is going to be the output panel of what actually comes out of this node
20:00 based on the way that you configured it. So every time you're looking at a node
20:03 you're going to have three main places input configuration and output. So,
20:07 let's just do a quick example where I'm going to delete this Google Drive node
20:11 by clicking on the delete button. I'm going to add an AI node because there's
20:14 a ton of different AI actions we can take as well. And all I'm going to do is
20:17 I'm just going to talk to OpenAI's model, kind of like ChatGPT. So, I'll click on that
20:21 and I'm just going to click on message a model. So, once that pulls up, we're
20:25 going to be using our n8n free OpenAI credits that we got earlier. And as you
20:30 can see, we have to configure this node. What do we want to do? The resource is
20:34 going to be text. It could be image, audio, assistant, whatever we want. The
20:38 operation we're taking is we want to just message a model. And then of
20:42 course, because we're messaging a model, we have to choose from this list of
20:47 OpenAI models that we have access to. And actually, it looks like these n8n free
20:51 credits only actually give us access to one chat model. And this is a bit
20:54 different. Not exactly sure why. Probably just because they're free
20:57 credits. So, what we're going to do real quick is head over to OpenAI and get a
21:01 credential so I can just show you guys how this works with input configuration
21:06 and output. So, basically, you'd go to openai.com. You'd come in here and you'd
21:09 create an account if you don't already have one. If you have a ChatGPT account
21:12 and you're on, like, maybe the $20-a-month plan, that is different than
21:17 creating an OpenAI API account. So, you'd come in here and create an OpenAI API
21:20 account. As you see up here, we have the option for ChatGPT login or API platform
21:25 login, which is what we're looking for here. So, now that you've created an
21:29 account with OpenAI's API, what you're going to do is come up to your dashboard
21:34 and you're going to go to your API keys. And then all you'd have to do is click
21:38 on create new key. Name this one whatever you want. And then you have a
21:42 new secret key. But keep in mind, in order for this key to work, you have to
21:45 have put in some billing information in your OpenAI account. So, throw in a few
21:49 bucks. They'll go a lot longer than you may think. And then you're going to take
21:52 that key that we just copied, come back into n8n, and under the credential
21:56 section, we're going to click on create new credential. All I have to do now is
22:00 paste in that API key right there. And then you have the option to name this
22:02 credential if you have a ton of different ones. So I can just say, you
22:07 know, like demo on May 21st. And now I have my credential saved and named
22:11 because now we can tell the difference between our demo credential and our n8n
22:15 free OpenAI credits credential. And now hopefully we have the ability to
22:18 actually choose a model from the list. So, as you can see, we can access ChatGPT
22:25 4o latest, 3.5 Turbo, 4, 4.1 Mini, all this kind of stuff. I'm going to
22:28 choose 4.1 mini, but as you can see, you can come back and change this whenever
22:31 you want. And I'm going to keep this really simple. In the prompt, I'm just
22:35 going to type in: tell me a joke. So now, when this node executes, it's
22:39 basically just going to be sending this message to OpenAI's model, which is
22:44 GPT-4.1 Mini, and it's just going to say, "Tell me a joke." And then what we're
22:48 going to get on the output panel is the actual joke.
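Under the hood, this node is doing the equivalent of an API call like the following sketch (OpenAI's chat completions endpoint; this is not n8n's literal internal code):

```typescript
// Rough equivalent of the "Message a Model" node: POST the prompt to
// OpenAI with the credential (the API key) in the Authorization header.
const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // the credential
  },
  body: JSON.stringify({
    model: "gpt-4.1-mini",
    messages: [{ role: "user", content: "Tell me a joke." }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content); // the actual joke
```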
22:52 So what I can do is come up right here and click on test step. This is going to run this node, and then we
22:56 get an output over here. And as you can see both with the input and the output
23:00 we have three options of how we want to view our data. We can click on schema,
23:05 we can click on table or we can click on JSON. And this is all the exact same
23:09 data. It's just like a different way to actually look at it. I typically like to
23:13 look at schema. I think it just looks the most simple and natural language.
23:17 But what you can see here is the message that we got back from this OpenAI
23:21 model was: sure, here's a joke for you. Why don't scientists trust atoms?
23:25 Because they make up everything. And what's cool about schemas is that this
23:29 is all drag and drop. So now once we have this output, we could basically
23:32 just use it however we want. So if I click out of here and I open up another
23:36 node after this, and for now I'm just going to grab a set node just to show
23:39 you guys how we can drag and drop. What I would do is let's say we wanted to add
23:43 a new field and I'm just going to call this OpenAI's response. So we're
23:49 creating a field called OpenAI's response. And as you can see, it says
23:52 drag an input field from the left to use it here. So, as we know, every node
23:57 has input, configuration, and output. On the input side, we can basically choose
24:01 which of these things we want to use. I just want to reference this content,
24:04 which is the actual thing that OpenAI said to us. So I would drag this from
24:08 here right into the value. And now we can see that we have what's called a
24:12 variable. So anything that's going to be wrapped in these two curly braces and
24:16 shown in green is a variable. And it's coming through as $json.message.content,
24:20 which is basically just something that represents whatever is
24:24 coming from the previous node in the field called content. So we can see
24:29 right here, in $json.message.content, we have message; within message we have
24:33 basically a subfolder called content, and that's where we access this actual
24:37 result, this real text. And you can see, if I click into this variable and make
24:41 it full screen, we have an expression, which is our $json variable, and then we
24:45 have our result, which is the actual text that we want back.
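Concretely, the two pieces look like this (the joke text is the one from the demo, and the expression uses n8n's standard syntax):

```
// The item coming out of the OpenAI node (simplified):
{ "message": { "role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything." } }

// The variable n8n writes into the Set field when you drag "content" in:
{{ $json.message.content }}
```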
24:49 So now, if I go ahead and test this step, we can see that the only output is OpenAI's
24:54 response, which is the text we want. Okay, so this would basically be a
24:58 workflow because we have a trigger and then we have our nodes that are going to
25:02 execute when we hit test workflow. So if I hit test workflow, it's going to run
25:05 the whole thing. And as you can see, super visual. We saw that OpenAI was
25:09 thinking and then we come over here and we get our final output which was the
25:13 actual joke. And now let me show you one more example of how we can map our
25:16 different variables without using a manual trigger. So let's say we don't
25:19 want a manual trigger. I'm just going to delete that. But now we have no way to
25:22 run this workflow because there's no sort of trigger. So I'm just going to
25:25 come back in here and grab a chat trigger just so we can talk to this
25:29 workflow in n8n. I'm going to hook it up right here. I would just basically
25:33 drag this plus into the node that I want. So I just drag it into OpenAI. And
25:37 now these two things are connected. So if I went into the chat and I said
25:41 hello, it's going to run the whole workflow, but it's not really going to
25:44 make sense because I said hello and now it's telling me a joke about why don't
25:48 scientists trust atoms. So what I would want to do is I'd want to come into this
25:52 OpenAI node right here. And I'm just going to change the actual prompt. So
25:56 rather than asking it to tell me a joke, what I would do is I'd just delete this.
26:00 And what I want to do is I want OpenAI to go ahead and process whatever I type
26:05 in this chat, the same way it would work if we were in ChatGPT in our browser, where
26:10 whatever we type, OpenAI responds to. So all I would have to do for that is I
26:14 would grab the chat input variable right here. I would drag that into the prompt
26:20 section. And now if I open this up, it's looking at the expression called
26:23 $json.chatInput, because this field right here is called chatInput. And then the
26:27 result is going to be whatever we type, anytime. Even if it's different 100
26:31 times in a row, it's always going to come back as a result that's different,
26:34 but it's always going to be referenced by the same exact expression.
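So the user prompt field now just holds this one expression (chatInput is the chat trigger's standard field name):

```
{{ $json.chatInput }}
```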
26:38 So, just to actually show you guys this, let's save this workflow. And I'm going to
26:42 say, "My name is Nate. I like to eat ice cream. Make up a funny story about me."
26:53 Okay, so we'll send this off and the response that we should get will be one
26:57 that is actually about me and it's going to have some sort of element of a story
27:00 with ice cream. So let's take a look. So it said, "Sure, Nate, here's a funny
27:03 story for you." And actually, because we're setting it, it's coming through a
27:06 little weird. So let's actually click into here to look at it. Okay, so here
27:09 is the story. Let me just make this a little bigger. I can go ahead and drag
27:12 the configuration panel around by doing this. I can also make it larger or
27:16 smaller if I do this. So let's just make it small. We'll move it all the way to
27:20 the left and let's read the story. So, it said, "Sure, Nate. Here's a funny
27:24 story just for you. Once upon a time, there was a guy named Nate who loved ice
27:26 cream more than anything else in the world. One day, Nate decided to invent
27:31 the ultimate ice cream. A flavor so amazing that it would make the entire
27:34 town go crazy." So, let's skip ahead to the bottom. Basically, what happens is
27:38 from that day on, Nate's stand became the funniest spot in town. A place where
27:41 you never knew if you'd get a sweet, savory, or plain silly ice cream. And
27:45 Nate, he became the legendary ice cream wizard. That sounds awesome. So that's
27:49 exactly how you guys can see what happened was in this OpenAI node. We
27:54 have a dynamic input which was us talking to this thing in a chat trigger.
27:59 We drag in that variable that represents what we type into the user prompt. And
28:03 this is going to get sent to OpenAI's model, GPT-4.1 Mini, because we
28:08 configured this node to do so. And the reason we were able to actually
28:11 successfully do that is because we put in our API key or our password for
28:17 OpenAI. And then on the right we get this output which we can look at either
28:22 in schema view, table view or JSON view. But they all represent the same data. As
28:26 you can see, this is the exact story we just read. Something I wanted to talk
28:29 about real quick that is going to be super helpful for the rest of this
28:32 course is just understanding what JSON is. JSON stands for JavaScript
28:37 Object Notation, and it's just a way to identify things. And the reason why it's
28:40 so important to talk about is because over here, right, we all kind of know
28:43 what schema is. It's just kind of like the way something's broken down. And as
28:47 you can see, we have different drill downs over here. And we have different
28:50 things to reference. Then we all understand what a table is. It's kind of
28:53 like a table view of different objects with different things within them. Kind
28:56 of like the subfolders. And once again, you can also drag and drop from table
29:00 view as well. And then we have JSON, which also you can drag and drop. Don't
29:04 worry, you can drag and drop pretty much this whole platform, which is why it's
29:08 awesome. But this may look a little more code-like or intimidating, and I want to talk
29:13 about why it is not. So, first of all, JSON is so, so important because
29:17 everything that we do is pretty much going to be built on top of JSON. Even
29:21 the workflows that you're going to download later, where you'll see, like,
29:24 hey, you can download this template for free: when you download that, it's going
29:28 to be a JSON file, which means the whole workflow in n8n is basically represented
29:32 as JSON.
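If you open one of those downloaded files, you'll see exactly that; the skeleton is something like this (a heavily abbreviated sketch, real exports carry more metadata):

```json
{
  "name": "demo",
  "nodes": [
    {
      "name": "When clicking Test workflow",
      "type": "n8n-nodes-base.manualTrigger",
      "parameters": {}
    }
  ],
  "connections": {}
}
```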
29:39 And hopefully that doesn't confuse you guys, but JSON is literally just key-value
29:44 pairs. What I mean by that is, like, over here the key is index, and index equals
29:48 zero. And then we have the role of the OpenAI assistant: role is the key, and the
29:52 value of the role is assistant. So it's very, very natural language if you really
29:55 break it down. What is the content that we're looking at? The content that we're
29:59 looking at is this actual content over here. And like I said, the great thing
30:03 about that is that pretty much every single large language model, like ChatGPT
30:08 or Claude 3.5, is trained on JSON, and they all understand it, because it's universal.
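To make that concrete, the response item we've been looking at boils down to key-value pairs like these (content abbreviated):

```json
{
  "index": 0,
  "message": {
    "role": "assistant",
    "content": "Sure, Nate! Here's a funny story just for you..."
  }
}
```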
30:11 So, right here on the left, we're looking at JSON. If I were to just copy this
30:17 entire JSON, go into ChatGPT and say, "Hey, help me understand this JSON," and
30:21 then just paste that in there, it's going to be able to tell us exactly
30:24 which keys are in here and what those values are. So, it says this JSON
30:28 represents the response from an AI model like ChatGPT in a structured format. Let
30:32 me break it down for you. So, basically, it's going to explain what each part of
30:36 this JSON means. We can see the index is zero. That means it's the first
30:39 response. We can see the role equals assistant. We can see that the content
30:44 is the funny story about Nate. We can see all this stuff and it basically is
30:48 able to not only break it down for us, but let's say we need to make JSON. We
30:52 could say, "Hey, I have this natural language. Can you make that into JSON
30:55 for me?" Hey, can you help me make a JSON body where my name is Nate? I'm 23
31:02 years old. I went to the University of Iowa. I like to play pickleball. We'll
31:08 send that off, and basically it will be able to turn that into JSON for us. So
31:13 here you go: we can see name: Nate, age: 23, education: University of Iowa,
31:18 interests: pickleball.
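The body it hands back looks something like this (a sketch; the exact field names depend on how ChatGPT structures it):

```json
{
  "name": "Nate",
  "age": 23,
  "education": "University of Iowa",
  "interests": ["pickleball"]
}
```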
31:22 And so don't let it overwhelm you. If you ever need help either making JSON
31:26 or understanding JSON, throw it into ChatGPT and it will do a phenomenal job
31:29 for you. And actually, just to show you guys that I'm not lying, let's just copy this JSON that ChatGPT gave us. Go back into our workflow
31:33 and I'm just going to add a set field just to show you guys. And instead of
31:36 manual mapping, I'm just going to set some data using JSON. So I'm going to
31:41 delete this, paste in exactly what ChatGPT gave me. Hit test step. And what do we
31:44 see over here? We see the name of someone named Nate. We see their age. We
31:47 see their education. And we see their interests, in either schema, table, or JSON
31:53 view. So hopefully that gives you guys some reassurance. And just once again,
31:57 JSON's super important. And it's not even code. That is just a really quick
32:03 foundational understanding of a trigger, different nodes, action nodes, AI nodes.
32:08 You have a ton to play with. And that's kind of the most overwhelming
32:12 part about n8n: you know what you need to do in your brain, but you don't know
32:16 maybe which is the best n8n node to actually get that job done. So that's
32:19 kind of the tough part is it's a lot of just getting the reps in, understanding
32:24 what node is best for what. But I assure you, by the time your two-week trial is up,
32:27 you'll have mastered pretty much all that. All right, but something else I
32:30 want to show you guys is now what we're looking at is called the editor. So if
32:34 you look at the top middle right here, we have an editor. And this is where we
32:37 can, you know, zoom out, we can move around, we can basically edit our
32:41 workflow right here. And it moves from left to right, as you guys saw, the same
32:46 way we read from left to right. And now, because we've done a few runs and
32:49 we've tested out these different nodes, what we'll click into is executions. And
32:53 this will basically show us the different times we've run this workflow.
32:57 And what's cool about this is it will show us the data that has moved through.
33:01 So let's say you set up a workflow that every time you get an email, it's going
33:04 to send some sort of automated response. You could come into this workflow, you
33:07 could click on executions, and you could go look at what time they happened, what
33:11 actually came through, what email was sent, all that kind of stuff. So if I go
33:15 all the way down to this third execution, we can remember that what I
33:19 did earlier was I asked this node to tell us a joke. We also had a manual
33:23 trigger rather than a chat trigger. And we can see this version of the workflow.
33:28 I could now click into this node and I could see this is when we had it
33:32 configured to tell us a joke. And we could see the actual joke it told us
33:35 which was about scientists not trusting atoms. And obviously we can still
33:39 manipulate this stuff, look at schema, look at table and do the same thing on
33:42 that left-hand side as well. So I wanted to talk about how you can import
33:46 templates into your own n8n environment, because it's super cool and, like I said,
33:49 they're all kind of built on top of JSON. So, I'm going to go to n8n's
33:53 website and we're going to go to product and we're going to scroll down here to
33:56 templates. And you can see there's over 2100 workflow automation templates. So,
34:00 let's scroll down. Let's say we want to do this one with cloning viral TikToks
34:04 with AI avatars. And we can use this one for free. So, I'll click on use for
34:07 free. And what's cool is we can either copy the template to clipboard or since
34:10 we're in the cloud workspace, we could just import it right away. And so, this
34:14 is logged into my other, kind of my main, cloud instance, but I'll still show you
34:16 guys how this works. I would click on this button. It would pull up this
34:19 screen where I just get to set up a few things. So, there's going to be
34:22 different things we'd have to connect to. So, you would basically just select
34:25 your different credentials if you already had them set up. If not, you
34:27 could create them right here. And then you would just basically be able to hit
34:32 continue. And as this loads up, you see we have the exact template right there
34:36 to play with. Or let's say you're scrolling on YouTube and you see just a
34:39 phenomenal Nate Herk YouTube video that you want to play around with. All you
34:42 have to do is go to my free Skool community and you will come into YouTube
34:46 resources or search for the title of the video. And let's say you wanted to play
34:49 with this shorts automation that I built. What you'll see right here is a
34:52 JSON file that you'll have to download. Once you download that, you'll go back
34:56 into n8n, create a new workflow, and then when you import that from file, if
34:59 you click on this button right here, you can see the entire workflow comes in.
35:02 And then all you're going to have to do is follow the setup guide in order to
35:05 connect your own credentials to these different nodes. All right. And then the
35:08 final thing I wanted to talk about is inactive versus active workflows. So you
35:11 may have noticed that none of our executions actually counted up from
35:16 zero. And the reason is because this is counting active workflow executions. And
35:20 if we come up here to the top right, we can see that we have the ability to make
35:24 a workflow active, but it has to have a trigger node that requires activation.
35:27 So real quick, let's say that we come in here and we want a workflow to start
35:32 when we have a schedule trigger. So I would go to schedule and I would
35:35 basically say, okay, I want this to go off every single day at midnight as we
35:38 have here. And what would happen is while this workflow is inactive, it's
35:42 only actually going to run if we hit test workflow and then it runs. But if
35:47 we were to flick this on as active now, it says your schedule trigger will now
35:51 trigger executions on the schedule you have defined. These executions will not
35:55 show up immediately in the editor, but you can see them in the execution list.
35:59 So this is basically saying two things. It's saying now that we have the
36:01 schedule trigger set up to run at midnight, it's actually going to run at
36:05 midnight because it's active. If we left this inactive, it would not actually
36:09 run. And all it meant by the second part is if we were sitting in this workflow
36:13 at midnight, we wouldn't see it execute and go spinning green and red in
36:18 live real time, but it would still show up as an execution. But if it's an
36:22 active workflow, you just don't get to see them live visually running and
36:26 spinning anymore. So that's the difference between an active workflow
36:29 and an inactive workflow. Let's say
36:33 you have a HubSpot trigger where you want this basically to fire off the
36:37 workflow whenever a new contact is created. So you'd connect to HubSpot and
36:42 you would make this workflow active so that it actually runs if a new contact's
36:46 created. If you left this inactive, even though it says it's going to trigger on
36:50 new contact, it would not actually do so unless this workflow was active. So
36:53 that's a super important thing to remember. All right. And then one last
36:57 thing I want to talk about which we were not going to dive into because we'll see
37:01 examples later is there is one more way that we can see data rather than schema
37:05 table or JSON and it's something called binary. So binary basically just means
37:11 an image or maybe a big PDF or a Word doc or a PowerPoint file. It's basically
37:15 something that's not explicitly text-based. So let me show you exactly
37:19 what that might look like. What I'm going to do is I'm going to add another
37:22 trigger under this workflow and I'm going to click on tab. And even though
37:25 it doesn't say like what triggers this workflow, we can still access different
37:28 triggers. So I'm just going to type in form. And this is going to give us a
37:32 form submission that basically is an n8n native form. And you can see
37:35 there's an option at the bottom for triggers. So I'm going to click on this
37:38 trigger. Now basically what this pulls up is another configuration panel, but
37:42 obviously we don't have an input because it's a trigger, but we are going to get
37:46 an output. So anyways, let me just set up a quick example form. I'm just going
37:50 to say the title of this form is demo. The description is binary data. And now
37:55 what happens if I click on test step, it's going to pull up this form. And as
37:58 you can see, we haven't set up like any fields for people to actually submit
38:02 stuff. So the only option is to submit. But when I hit submit, you can see that
38:06 the node has been executed. And now there's actually data in here. Submitted
38:09 at with a timestamp. And then we have different information right here. So let
38:13 me just show you guys. We can add a form element. And when I'm adding a form
38:17 element, we can basically have this be, you know, date, it can be a drop down,
38:20 it can be an email, it can be a file, it can be text. So, real quick, I'm just
38:23 going to show you an example where, let's say we have a form where someone
38:27 has to submit their name. We have the option to add a placeholder or make it
38:30 required. And this isn't really the bulk of what I'm trying to show you guys. I
38:34 just want to show you binary data. But anyways, let's say we're adding another
38:37 field that's going to be a file. I'm just going to say file. And this will
38:41 also be required. And now if I go ahead and hit test step, it's going to pull up
38:45 a new form for us with a name parameter and a file parameter. So what I did is I
38:49 put my name and I put in just a YouTube short that I had published. And you can
38:53 see it's an MP4 file. So if I hit submit, we're going to get this data
38:56 pulled into n8n, as you can see in the background. Just go ahead and watch. The
39:00 form is going to actually capture this data. There you go. Form submitted. And
39:05 now what we see right here is binary data. So this is interesting, right? We
39:09 still have our schema. We still have our table. We still have our JSON, but what
39:13 this is showing us is basically, okay, the name that the person submitted was
39:17 Nate. The file: here is some information about it, as far as the name
39:21 of it, the MIME type, and the size. But we don't actually access the file
39:25 through table or JSON or schema view. The only way we can access a video file
39:29 is through binary.
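For reference, an n8n item that carries a file splits into a json part and a binary part, roughly like this (a sketch assuming n8n's usual binary metadata fields):

```json
{
  "json": {
    "Name": "Nate",
    "submittedAt": "2025-05-21T12:00:00"
  },
  "binary": {
    "File": {
      "fileName": "youtube-short.mp4",
      "mimeType": "video/mp4",
      "fileSize": "12 MB"
    }
  }
}
```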
39:33 And as you can see, if I clicked on view, it's my actual video file right here. And so that's all I really wanted to show you guys was
39:36 when you're working with PDFs or images or videos, a lot of times they're going
39:39 to come through as binary, which is a little confusing at first, but it's not
39:42 too bad. And we will cover an example later in this tutorial where we look at
39:47 a binary file and we process it. But as you can see now, if we were doing a next
39:52 node, we would have schema, table, JSON, and binary. So we're still able to work
39:55 with the binary. We're still able to reference it. But I just wanted to throw
39:58 out there, when you see binary, don't get scared. It just basically means it's
40:02 a different file type. It's not just text-based. Okay, so that's going to do
40:05 it for just kind of setting up the foundational knowledge and getting
40:09 familiar with the dashboard and the UI a little bit. And as you move into these
40:12 next tutorials, which are going to be some step by steps, I'm going to walk
40:15 through every single thing with you guys setting up different accounts with
40:19 Google and something called Pinecone. And we'll talk about all this stuff step
40:22 by step. But hopefully now it's going to be a lot better moving into those
40:25 sections because you've seen, you know, some of the input stuff and how you
40:29 configure nodes and just like all this terminology that you may not have been
40:33 familiar with like JSON, JavaScript variables, workflows, executions, that
40:38 sort of stuff. So, like I said, let's move into those actual step-by-step
40:41 builds. And I can assure you guys, you're going to feel a lot more
40:43 comfortable after you have built a workflow end to end. All right, we're
40:48 going to talk about data types in n8n and what those look like. It's really
40:50 important to get familiar with this before we actually start automating
40:53 things and building agents and stuff like that. So, what I'm going to do is
40:57 just pull in a set node. As you guys know, this just lets us modify, add, or
41:01 remove fields. And it's very, very simple. We basically would just click on
41:05 this to add fields. We can add the name of the field. We choose the data type,
41:08 and then we set the value, whether that's a fixed value, which we'll be
41:13 looking at here, or if we're dragging in some sort of variable from the left-hand
41:15 side. But clearly, right now, we have no data incoming. We just have a
41:20 manual trigger. So, what I'm going to do is zoom in on the actual browser so we
41:24 can examine this data on the output a bit bigger and I don't have to just keep
41:27 cutting back and forth with the editing. So, as you can see, there's five main
41:31 data types that we have access to in n8n. We have a string, which is
41:35 basically just a fancy name for a word. As you can see, it's represented by
41:40 a little a, a letter a. Then we have a number, which is represented by a pound
41:43 sign, or a hashtag, whatever you want to call it. It's pretty
41:47 self-explanatory. Then we have a boolean, which is basically just going to be true
41:50 or false. That's basically the only thing it can be, and it's represented by a little
41:54 checkbox. We have an array, which is just a fancy word for list. And we'll see
41:58 exactly what this looks like. And then we have an object which is probably the
42:01 most confusing one, which basically means it's just this big block which can have
42:05 strings in it, numbers in it. It can have booleans in it. It can have
42:09 arrays in it. And it can also have nested objects within objects. So we'll
42:12 take a look at that. Let's just start off real quick with the string. So let's
42:17 say a string would be a name and that would be my name. So if I hit test step
42:21 on the right hand side in the JSON, it comes through as key value pair like we
42:26 talked about. Name equals Nate. Super simple. You can tell it's a string
42:30 because right here we have two quotes around the word Nate. So that represents
42:34 a string. Or you could go to the schema and you can see that with name equals
42:38 Nate, there's the little letter A and that basically says, okay, this is a
42:41 string. As you see, it matches up right here. Cool. So that's a string. Let's
42:46 switch over to a number. Now we'll just say we're looking at age and we'll throw
42:51 in the number 50. Hit test step. And now we see age equals 50 with the pound sign
42:55 right here as the symbol in the schema view. Or if we go to JSON view, we have
43:01 the key value pair age equals 50. But now there are no double quotes around
43:05 the actual number. It's green. So that's how we know it's not a string. This is a
43:10 number. And that's where you may run into some issues: if you had, like,
43:13 age coming through as a string, you wouldn't be able to do any
43:17 summarizations or filters, you know, like if age is greater than 50, send it
43:21 off this way; if it's less than 50, send it that way. In order to do that type of
43:24 filtering and routing, you would need to make sure that age is actually a number
43:30 variable type, or data type. Cool. So there's age. Let's go to a boolean. So
43:35 we're going to basically just say adult. And that can only be true or false. You
43:39 see, I don't have the option to type anything here. It's only going to be
43:42 false or it's only going to be true. And as you can see, it'll come through.
43:45 It'll look like a string, but there's no quotes around it. It's green. And that's
43:49 how we know it's a boolean. Or we could go to schema, and we can see that
43:53 there's a checkbox rather than the letter A symbol. Now, we're going to move on to
43:58 an array. And this one's interesting, right? So, let's just say we want to
44:01 have a list of names. So, if I have a list of names and I was typing in my
44:05 name and I tried to hit test step, this is where you would run into an error
44:09 because it's basically saying, okay, the field called names, which we set right
44:13 here, it's expecting to get an array, but all we got was Nate, which is
44:17 basically a string. So, to fix this error, change the type for the field
44:21 names, or you can ignore type conversions. So if we were
44:25 to come down to the options and turn on ignore type conversions, when we
44:29 tested the step, it basically just converted the field
44:32 called names to a string, because it could tell that what we typed was a string
44:35 rather than an array. So let's turn that back off and let's actually see how we
44:39 could get this to work if we wanted to make an array. So, like we know, an array
44:44 is just a fancy word for a list. And in order for us to actually tell
44:48 n8n, okay, this is a list, we have to wrap it in square brackets like
44:53 this. But we also have to wrap each item in the list in quotes. So I have to go
44:58 like this and go like that. And now this would pass through as a list of
45:02 different strings. And those are names. And so if I wanted to add another one
45:06 after the first item, I would put a comma. I put two quotes. And then inside
45:10 that I could put another name. Hit test step. And now you can see we're getting
45:14 this array that's made up of different strings and they're all going to be
45:17 different names. So I could expand that. I could close it out. We could drag
45:21 in different names. And in JSON view, what that looks like is we have our key and
45:25 then we have the square brackets, which is exactly what we typed right
45:29 here. So that's how it's being represented
45:32 within these square brackets right here.
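So just to put everything we've built so far in one place, here's roughly what this JSON looks like with all four fields (the second name in the list is just a placeholder):

  {
    "name": "Nate",
    "age": 50,
    "adult": true,
    "names": ["Nate", "John"]
  }

Quotes mean string, the bare 50 is a number, the bare true is a boolean, and the square brackets are the array.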
45:36 Okay, cool. So the final one we have to talk about is an object. And this one's a little more complex. So if
45:40 I was to hit test step here, it's going to tell us names expects an object, but
45:44 we got an array. So once again, you could come in here, ignore type
45:47 conversions, and then it would just basically come through as a string, but
45:50 it's not coming through as an array. So that's not how we want to do it. And I
45:55 don't want to mess with the actual like schema of typing in an object. So what
45:58 I'm going to do is go to ChatGPT. I literally just said, give me an example
46:02 JSON object to put into n8n. It gives me this example JSON object. I'm going
46:06 to copy that. Come into the set node, and instead of manual mapping, I'm just
46:09 going to customize it with JSON. Paste the one that chat just gave us. And when
46:15 I hit test step, what we now see first of all in the schema view is we have one
46:18 item. You know, this is an object, and all this different stuff makes it up. So we
46:24 have a string, which is the name, Nate. We have a string, which is the email,
46:27 nate@example.com. We have a string, which is the company, True Horizon. Then we have an
46:33 array of interests within this object. So I could close this out. I could open
46:36 it up. And we have three interests: AI automation, n8n, and YouTube content. And
46:40 this is, you know, ChatGPT's long-term memory about me making this. And then we
46:45 also have an object within our object, which is called project. And the interesting difference
46:51 here with an object or an array is that when you have an array of interests,
46:54 every single item in that array is going to be called interests 0, interests
46:58 1, interests 2. And by the way, there are three interests, but computers start
47:01 counting from zero. So that's why it says 0, 1, 2. But with an object, it
47:06 doesn't all have to be the same thing. So you can see in this project object
47:11 we have one string called title, we have one string called
47:15 status, and we have one string called deadline, and this all makes up
47:18 its own object. As you can see, if we went to table view, this is literally
47:22 just one item that's really easy to read. And you can tell that this is an
47:26 array because it goes 0, 1, 2. And you can tell that this is an object because it
47:29 has different fields in it. This is one item. It's one object. It's got
47:33 strings up top. It has no numbers actually. So the date right here, this
47:37 is coming through as a string variable type. We can tell because it's not
47:40 green. We can tell because it has double quotes around it. And we can also tell
47:43 because in schema it comes through with the letter A. But this is just how you
47:47 can see there are these different things that make up this object. And you can
47:52 even close them down in JSON view. We can see interests is an array that has
47:55 three items. We could open that up. We can see project is an object because
47:59 it's wrapped in curly braces, not square brackets, as you
48:05 can see. So, there's a difference.
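And putting the whole thing together, the object ChatGPT gave me was shaped roughly like this (your generated values will differ, and I've left the project fields as placeholders):

  {
    "name": "Nate",
    "email": "nate@example.com",
    "company": "True Horizon",
    "interests": ["AI automation", "n8n", "YouTube content"],
    "project": {
      "title": "...",
      "status": "...",
      "deadline": "..."
    }
  }

Strings in quotes, an array in square brackets, and a nested object in curly braces, all living together in one payload (and, like we noticed, no numbers in this one).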
48:09 And I know this wasn't super detailed, but it's something really important to know heading into when you actually
48:13 start to build stuff out because you're probably going to get some of those
48:15 errors where you're like, you know, blank expects an object but got this or
48:19 expects an array and got this. So, just wanted to make sure I came in here and
48:23 threw that module at you guys and hopefully it'll save you some headaches
48:26 down the road. Real quick, guys, if you want to be able to download all the
48:29 resources from this video, they'll be available for free in my free school
48:32 community, which will be the link in the pinned comment. There'll be a zip file
48:36 in there that has all 23 of these workflows, as you can see, and also two
48:40 PDFs at the bottom, which are covered in the video. So, like I said, join the
48:43 Free School community. Not only does it have all of my YouTube resources, but
48:46 it's also a really quick growing community of people who are obsessed
48:49 with AI automation and using n8n every day. All you'll have to do is search for
48:53 the title of this video using the search bar or you can click on YouTube
48:56 resources and find the post associated with this video. And then you'll have
48:59 the zip file right here to download which once again is going to have all 23
49:04 of these n8n JSON workflows and two PDFs. And there may even be some bonus files
49:07 in here. You'll just have to join the free school community to find out. Okay,
49:11 so we talked about AI agents. We talked about AI workflows. We've gotten into
49:14 n8n and set up our account. We understand workflows, nodes, triggers,
49:19 JSON, data types, stuff like that. Now, it's time to use all that stuff
49:21 that we've talked about and start applying it. So, we're going to head
49:24 into this next portion of this course, which is going to be about step-by-step
49:26 builds, where I'm going to walk you through every single step live, and
49:31 we'll have some pretty cool workflows set up by the end. So, let's get into
49:34 it. Today, we're going to be looking at three simple AI workflows that you can
49:37 build right now to get started learning n8n. We're going to walk through
49:40 everything step by step, including all of the credentials and the setups. So,
49:43 let's take a look at the three workflows we're going to be building today. All
49:46 right, the first one is going to be a RAG pipeline and chatbot. And if you
49:50 don't know what RAG means, don't worry. We're going to explain it all. But at a
49:52 high level, what we're doing is we're going to be using Pinecone as a vector
49:55 database. If you don't know what a vector database is, we'll break it down.
49:58 We're going to be using Google Drive. We're going to be using Google Docs. And
50:01 then something called OpenRouter, which lets us connect to a bunch of different
50:05 AI models, like OpenAI's models or Anthropic's models. The second workflow
50:08 we're going to look at is a customer support workflow that's kind of going to
50:11 be building off of the first one we just built. Because in the first workflow,
50:14 we're going to be putting data into a Pinecone vector database. And in this
50:17 one, we're going to use that data in there in order to respond to customer
50:21 support-related emails. So, we'll already have had Pinecone set up, but
50:24 we're going to set up our credentials for Gmail. And then we're also going to
50:28 be using an n8n AI agent as well as OpenRouter once again. And then finally,
50:31 we're going to be doing LinkedIn content creation. And in this one, we'll be
50:35 using an n8n AI agent and OpenRouter once again, but we'll have two new
50:38 credentials to set up. The first one being Tavily, which is going to let us
50:41 search the web. And then the second one will be Google Sheets where we're going
50:44 to store our content ideas, pull them in, and then have the content written
50:49 back to that Google sheet. So by the end of this video, you're going to have
50:51 three workflows set up and you're going to have a really good foundation to
50:55 continue to learn more about n8n. You'll already have gotten a lot of
50:57 credentials set up and understand what goes into connecting to different
51:00 services. One of the trickiest being Google. So we'll walk through that step
51:03 by step and then you'll have it configured and you'll be good. And then
51:05 from there, you'll be able to continuously build on top of these three
51:08 workflows that we're going to walk through together because there's really
51:11 no such thing as a finished product in the space. Different AI models keep
51:14 getting released and keep getting better. There's always ways to improve
51:17 your templates. And the cool thing about building workflows in n8n is that you
51:20 can make them super customized for exactly what you're looking for. So, if
51:24 this sounds good to you, let's hop into that first workflow. Okay, so for this
51:27 first workflow, we're building a RAG pipeline and chatbot. And so if that
51:31 sounds like a bunch of gibberish to you, let's quickly understand what RAG is and
51:36 what a vector database is. So RAG stands for retrieval-augmented generation. And
51:40 in the simplest terms, let's say you ask me a question and I don't actually know
51:43 the answer. I would just kind of Google it and then I would get the answer from
51:47 my phone and then I would tell you the answer. So in this case, when we're
51:50 building a RAG chatbot, we're going to be asking the chatbot questions and it's
51:53 not going to know the answer. So it's going to look inside our vector
51:56 database, find the answer, and then it's going to respond to us. And so when
52:00 we're combining the elements of RAG with a vector database, here's how it works.
52:03 So the first thing we want to talk about is actually what is a vector database.
52:07 So essentially this is what a vector database would look like. We're all
52:11 familiar with, like, an x- and y-axis graph where you can plot points on a
52:14 two-dimensional plane. But a vector database is a multi-dimensional graph of
52:19 points. So in this case, you can see this multi-dimensional space with all
52:23 these different points or vectors. And each vector is placed based on the
52:27 actual meaning of the word or words in the vector. So over here you can see we
52:31 have wolf, dog and cat. And they're placed similarly because the meaning of
52:35 these words are all, like, animals. Whereas over here we have apple and
52:38 banana, where the meanings of the words are food, more specifically fruits. And that's
52:42 why they're placed over here together. So when we're searching through the
52:46 database, we basically vectorize a question the same way we would vectorize
52:50 any of these other points. And in this case, we were asking for a kitten. And
52:53 then that query gets placed over here near the other animals and then we're
52:56 able to say okay well we have all these results now. So what that looks like and
53:00 what we'll see when we get into n8n is we have a document that we want to
53:03 vectorize. We have to split the document up into chunks because we can't put, like,
53:07 a 50-page PDF as one chunk. So it gets split up and then we're going to run it
53:10 through something called an embeddings model which basically just turns text
53:15 into numbers. Just as simple as that. And as you can see in this case let's
53:18 say we had a document about a company. We have company data, finance data, and
53:22 marketing data. And they all get placed differently because they mean different
53:26 things. And the context of those chunks is different. And then this
53:30 visual down here is just kind of how an LLM or in this case, this agent takes
53:34 our question, turns it into its own question. We vectorize that using the
53:38 same embeddings model that we used up here to vectorize the original data. And
53:42 then because it gets placed here, it just grabs back any vectors that are
53:46 nearest, maybe like the nearest four or five, and then it brings it back in
53:49 order to respond to us. So I don't want to dive too much into this or
53:53 overcomplicate it, but hopefully this all makes sense.
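Just to make "turning text into numbers" a little more concrete, here's the shape of it (the numbers themselves are made up). An embeddings model like text-embedding-3-small, which we'll pick in a minute, turns any piece of text into a list of 1,536 numbers:

  "kitten"            -> [0.018, -0.274, 0.091, ..., 0.143]
  "what is a kitten?" -> [0.021, -0.269, 0.088, ..., 0.150]   (lands close to "kitten")
  "quarterly revenue" -> [-0.310, 0.052, -0.197, ..., 0.006]  (lands far away)

Similar meanings end up as nearby points, which is exactly why the kitten question gets placed next to cat, dog, and wolf.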
53:57 Cool. So now that we understand that, let's actually start building this workflow. So what we're
53:59 going to do here is we are going to click on add first step because every
54:03 workflow needs a trigger that basically starts the workflow. So, I'm going to
54:08 type in Google Drive because what we're going to do is we are going to pull in a
54:12 document from our Google Drive in order to vectorize it. So, I'm going to choose
54:15 a trigger which is on changes involving a specific folder. And what we have to
54:19 do now is connect our account. As you can see, I'm already connected, but what
54:22 we're going to do is click on create new credential in order to connect our
54:25 Google Drive account. And what we have to do is go get a client ID and a
54:29 secret. So, what we want to do is click on open docs, which is going to bring us
54:33 to n8n's documentation on how to set up this credential. We have a prerequisite
54:37 which is creating a Google Cloud account. So I'm going to click on Google
54:40 Cloud account and we're going to set up a new project. Okay. So I just signed
54:43 into a new account and I'm going to set up a whole project and walk through the
54:46 credentials with you guys. You'll click up here. You'll probably have something
54:49 up here that says like new project and then you'll click into new project. All
54:54 we have to do now is name it, and you'll be able to start for free, so
54:56 don't worry about that yet. So I'm just going to name this one demo and I'm
55:00 going to create this new project. And now up here in the top right you're
55:02 going to see that it's kind of spinning up this project. and then we'll move
55:06 forward. Okay, so it's already done and now I can select this project. So now
55:10 you can see up here I'm in my new project called demo. I'm going to click
55:15 on these three lines in the top left and what we're going to do first is go to
55:18 APIs and services and click on enabled APIs and services. And what we want to
55:22 do is add the ones we need. And so right now all I'm going to do is add Google
55:27 Drive. And you can see it's going to come up with Google Drive API. And then
55:31 all we have to do is really simply click enable. And there we I just enabled it.
55:35 So you can see here the status is enabled. And now we have to set up
55:37 something called our OAuth consent screen, which basically is just going to
55:43 let Google know that Google Drive and n8n are allowed to talk to each other
55:46 and have permissions. So right here, I'm going to click on OAuth consent screen.
55:49 We don't have one yet, so I'm going to click on get started. I'm going to give
55:53 it a name. So we're just going to call this one demo. Once again, I'm going to
55:57 add a support email. I'm going to click on next. Because I'm not using a Google
56:01 Workspace account, I'm just using a, you know, nate88@gmail.com. I'm going to have to
56:05 choose external. I'm going to click on next. For contact information, I'm
56:08 putting the same email as I used to create this whole project. Click on next
56:12 and then agree to terms. And then we're going to create that OAuth consent
56:17 screen. Okay, so we're not done yet. The next thing we want to do is we want to
56:20 click on audience. And we're going to add ourselves as a test user. So we
56:23 could also make the app published by publishing it right here, but I'm just
56:26 going to keep it in test. And when we keep it in test mode, we have to add a
56:30 test user. So I'm going to put in that same email from before. And this is
56:32 going to be the email of the Google Drive we want to access. So I put in my
56:36 email. You can see I saved it down here. And then finally, all we need to do is
56:40 come back into here. Go to clients. And then we need to create a new client.
56:45 We're going to click on web app. We're going to name it whatever we want. Of
56:47 course, I'm just going to call this one demo once again. And now we need to
56:52 basically add a redirect URI. So if you click back into n8n, we have one right
56:57 here. So, we're going to copy this, go back into cloud, and we're going to add
57:00 a URI and paste it right in there, and then hit create, and then once that's created,
57:06 it's going to give us an ID and a secret. So, all we have to do is copy
57:10 the ID, go back into n8n, and paste that right here. And then we need to go grab our
57:16 secret from Google Cloud, and then paste that right in there. And now we have a
57:19 little button that says sign in with Google. So, I'm going to open that up.
57:22 It's going to pull up a window to have you sign in. Make sure you sign in with
57:25 the same account that you just had yourself as a test user. That one. And
57:30 then you'll have to continue. And then here it's basically asking, what
57:33 permissions does it have to your Google Drive? So I'm just going
57:36 to select all. I'm going to hit continue. And then we should be good.
57:39 Connection successful and we are now connected. And you may just want to
57:43 rename this credential so you know which email it is. So now I've
57:47 saved my credential and we should be able to access the Google Drive now. So,
57:49 what I'm going to do is I'm going to click on this list and it's going to
57:52 show me the folders that I have in Google Drive. So, that's awesome. Now,
57:56 for the sake of this video, I'm in my Google Drive and I'm going to create a
57:59 new folder. So, new folder. We're going to call this one FAQ. Create this one
58:05 because we're going to be uploading an FAQ document into it. So, here's my FAQ
58:09 folder right here. And then what I have is, down here I made a policy and
58:14 FAQ document which looks like this. We have some store policies and then we
58:17 also have some FAQs at the bottom. So, all I'm going to do is I'm going to drag
58:21 in my policy and FAQ document into that new FAQ folder. And then if we come into
58:27 n8n, we click on the new folder that we just made. So, it's not here yet. I'm
58:30 just going to click on these dots and click on refresh list. Now, we should
58:35 see the FAQ folder. There it is. Click on it. We're going to click on what are
58:38 we watching this folder for. I'm going to be watching for a file created. And
58:43 then, I'm just going to hit fetch test event. And now we can see that we did in
58:47 fact get something back. So, let's make sure this is the right one. Yep. So,
58:50 there's a lot of nasty information coming through. I'm going to switch over
58:52 here on the right hand side. This is where we can see the output of every
58:55 node. I'm going to click on table and I'm just going to scroll over and there
59:00 should be a field called file name. Here it is. Name. And we have policy and FAQ
59:04 document. So, we know we have the right document in our Google Drive. Okay. So,
59:08 perfect. Every time we drop in a new file into that Google folder, it's going
59:11 to start this workflow. And now we just have to configure what happens after the
59:15 workflow starts. So, all we want to do really is we want to pull this data into
59:20 n8n so that we can put it into our Pinecone database. So, off of this trigger,
59:24 I'm going to add a new node and I'm going to grab another Google Drive node
59:28 because what happened is basically we have the file ID and the file name, but
59:32 we don't have the contents of the file. So, we're going to do a download file
59:35 node from Google Drive. I'm going to rename this one and just call it
59:38 download file just to keep ourselves organized. We already have our
59:41 credential connected and now it's basically saying what file do you want
59:45 to download. We have the ability to choose from a list. But if we choose
59:48 from the list, it's going to be this file every time we run the workflow. And
59:52 we want to make this dynamic. So we're going to change from list to by ID. And
59:56 all we have to do now is we're going to look on the left-hand side for that
59:59 file that we just pulled in. And we're going to be looking for the ID of the
60:02 file. So I can see that I found it right down here in the spaces array, because we
60:06 have the name right here and then we have the ID right above it. So, I'm
60:10 going to drag ID and put it right there in this field. It's coming through as a
60:14 variable, {{ $json.id }}. And that's just basically referencing, you know,
60:17 whenever a file comes through on the Google Drive trigger, I'm going to use
60:21 the variable {{ $json.id }}, which will always pull in the file's ID. So, then I'm going
60:25 to hit test step and we're going to see that we're going to get the binary data
60:28 of this file over here that we could download. And this is our policy and FAQ
60:33 document. Okay. So, there's step two. We have the file downloaded in n8n. And
60:37 now it's just as simple as putting it into Pinecone. So before we do that,
60:41 let's head over to pinecone.io. Okay, so now we are in pinecone.io, which is
60:45 a vector database provider. You can get started for free. And what we're going
60:48 to do is sign up. Okay, so I just got logged in. And once you get signed up,
60:52 you should see a page similar to this. It's a get started page. And what
60:55 we want to do is you want to come down here and click on, you know, begin setup
60:59 because we need to create an index. So I'm going to click on begin setup. We
61:03 have to name our index. So you can call this whatever you want. We have to
61:08 choose a configuration for a text model, meaning an
61:11 embeddings model, which is sort of what I talked about right in here. This is
61:15 going to turn our text chunks into vectors. So what I'm going to do is I'm
61:19 going to choose text-embedding-3-small from OpenAI. It's the most cost-
61:23 effective OpenAI embedding model. So I'm going to choose that. Then I'm going to
61:26 keep scrolling down. I'm going to keep mine as serverless. I'm going to keep
61:29 AWS as the cloud provider. I'm going to keep this region. And then all I'm going
61:33 to do is hit create index. Once you create your index, it'll show up right
61:36 here. But we're not done yet. You're going to click into that index. And so I
61:39 already obviously have stuff in my vector database. You won't have this.
61:41 What I'm going to do real quick is just delete this information out of it. Okay.
61:45 So this is what yours should look like. There's nothing in here yet. We have no
61:48 namespaces and we need to get this configured. So on the left-hand side, go
61:53 over here to API keys and you're going to create a new API key. Name it
61:58 whatever you want, of course. Hit create key. And then you're going to copy that
62:02 value. Okay, back in n8n, we have our API key copied. We're going to add a new
62:07 node after the download file and we're going to type in Pinecone and we're
62:10 going to grab a Pinecone vector store. Then we're going to select add documents
62:14 to a vector store and we need to set up our credential. So up here, you won't
62:18 have these and you're going to click on create new credential. And all we need
62:21 to do here is just an API key. We don't have to get a client ID or a secret. So
62:24 you're just going to paste in that API key. Once that's pasted in there and
62:27 you've given it a name so you know what this means. You'll hit save and it
62:30 should go green and we're connected to Pinecone. And you can make sure that
62:34 you're connected by clicking on the index and you should have the name of
62:37 the index right there that we just created. So I'm going to go ahead and
62:40 choose my index. I'm going to click on add option and we're going to be
62:43 basically adding this to a Pinecone namespace. Back in
62:48 Pinecone, if I go into my database, my index, and I click in here, you can see
62:51 that we have something called namespaces. And this basically lets us
62:55 put data into different folders within this one index. So if you don't specify
62:59 a namespace, it'll just come through as default, and that's going to be fine. But
63:02 we want to get into the habit of having our data organized. So I'm going to go
63:05 back into n8n and I'm just going to name this namespace FAQ because that's
63:10 the type of data we're putting in. And now I'm going to click out of this node.
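Conceptually, namespaces are just sub-folders inside the one index, something like this (the finance one is hypothetical, just to show the idea):

  index: sample
    namespace: FAQ        <- where our policy and FAQ vectors will go
    namespace: default    <- what gets used if you don't specify one
    namespace: finance    <- (hypothetical) other data kept separate

That way one index can hold totally different sets of data without them mixing together.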
63:13 So you can see the next thing that we need to do is connect an embeddings
63:17 model and a document loader. So let's start with the embeddings model. I'm
63:20 going to click on the plus and I'm going to click on Embeddings OpenAI. And
63:23 actually, one thing I left out of the Excalidraw is that we also will need
63:27 to go get an OpenAI key. So, as you can see, when we need to connect a
63:30 credential, you'll click on create new credential and we just need to get an
63:33 API key. So, you're going to type in OpenAI API. You'll click on this first
63:37 link here. If you don't have an account yet, you'll sign in. And then once you
63:40 sign up, you want to go to your dashboard. And then on the left-hand
63:44 side, very similar thing to Pinecone, you'll click on API keys. And then we're
63:47 just going to create a new key. So you can see I have a lot. We're going to
63:49 make a new one. And I'm calling everything demo, but this is going to be
63:53 demo number three. Create new secret key. And then we have our key. So we're
63:56 going to copy this and we're going to go back into n8n. Paste that right here. We've
63:59 pasted in our key, we've given it a name, and now we'll hit save and we
64:03 should go green. Just keep in mind that you may need to top up your account with
64:06 a few credits in order for you to actually be able to run this model.
64:10 So just keep that in mind. Then what's really important to remember is
64:13 when we set up our Pinecone index, we used the embedding model
64:17 text-embedding-3-small from OpenAI. So that's why we have to make sure this matches right
64:20 here, or this automation is going to break. Okay, so we're good with the
64:24 embeddings and now we need to add a document loader. So I'm going to click
64:27 on this plus right here. I'm going to click on default data loader and we have
64:31 to just basically tell the loader the type of data we're putting in. And so
64:35 you have two options, JSON or binary. In this case, it's really easy because we
64:39 downloaded a Google Doc, which is on the left-hand side. You can tell it's
64:42 binary because up top right here on the input, we can switch between JSON and
64:47 binary. And if we were uploading JSON, all we'd be uploading is this gibberish
64:51 nonsense information that we don't need. We want to upload the binary, which is
64:55 the actual policy and FAQ document. So, I'm just going to switch this to binary.
64:58 I'm going to click out of here. And then the last thing we need to do is add a
65:01 text splitter. So, this is what I was talking about back in this Excalidraw: we
65:05 have to split the document into different chunks. And so that's what
65:08 we're doing here with this text splitter. I'm going to choose a
65:12 recursive character text splitter. There's three options and I won't dive
65:15 into the difference right now, but recursive character text splitter will
65:18 help us keep context of the whole document as a whole, even though we're
65:22 splitting it up. So for now, chunk size is a thousand. That's just basically how
65:25 many characters am I going to put in each chunk? And then, is there going to
65:29 be any overlap between our chunks of characters? So right now I'm just going
65:33 to leave it at the defaults, 1,000 and 0.
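To picture what those two settings do, here's a rough sketch (not exactly how the characters land, since the recursive splitter prefers to break on paragraph and sentence boundaries rather than at exactly every 1,000th character):

  document (about 3,400 characters)
    chunk 1: characters     1 - 1,000
    chunk 2: characters 1,001 - 2,000
    chunk 3: characters 2,001 - 3,000
    chunk 4: characters 3,001 - 3,400

And if the overlap were, say, 100 instead of 0, each chunk would re-include the last 100 characters of the previous one, which helps preserve context across chunk boundaries. A document around this size splitting into four chunks is also exactly what we're about to see happen with ours.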
65:37 So that's it. You just built your first automation for a RAG pipeline. And now we're just going to click on the play
65:40 button above the pine cone vector store node in order to see it get vectorized.
65:43 So we're going to basically see that we have four items that have left this
65:47 node. So this is basically telling us that our Google doc that we downloaded
65:51 right here. So this document got turned into four different vectors. So if I
65:55 click into the text splitter, we can see we have four different responses and
65:59 this is the contents that went into each chunk. So we can just verify this by
66:03 heading real quick into Pinecone, we can see we have a new namespace that we
66:07 created called FAQ. The number of records is four. And if we head over to the
66:10 browser, we can see that we do indeed have these four vectors. And then the
66:13 text field right here, as you can see, are the characters that were put into
66:17 each chunk. Okay, so that was the first part of this workflow, but we're going
66:21 to real quick just make sure that this actually works. So we're going to add a
66:24 RAG chatbot. Okay. So, what I'm going to do now is hit tab, or I could also
66:27 have just clicked on the plus button right here, and I'm going to type in AI
66:31 agent, and that is what we're going to grab and pull into this workflow. So, we
66:35 have an AI agent, and let's actually just put him right over here. And
66:41 now what we need to do is we need to set up how are we actually going to talk to
66:44 this agent. And we're just going to use the default n8n chat window. So, once
66:47 again, I'm going to hit tab. I'm going to type in chat. And we have a chat
66:51 trigger. And all I'm going to do is over here, I'm going to grab the plus and I'm
66:54 going to drag it into the front of the AI agent. So basically now whenever we
66:58 hit open chat and we talk right here, the agent will read that chat message.
67:02 And we know this because if I click into the agent, we can see the user message
67:07 is looking for one in the connected chat trigger node, which we have right here
67:10 connected. Okay, so the first step with an AI agent is we need to give it a
67:14 brain. So we need to give it some sort of AI model to use. So we're going to
67:18 click on the plus right below chat model. And what we could do now is we
67:22 could set up an OpenAI chat model because we already have our API key from
67:25 OpenAI. But what I want to do is click on open router because this is going to
67:30 allow us to choose from all different chat models, not just OpenAIs. So we
67:33 could do Claude, we could do Google, we could do Perplexity. We have all these
67:36 different models in here which is going to be really cool. And in order to get
67:39 an Open Router account, all you have to do is go sign up and get an API key. So
67:43 you'll click on create new credential and you can see we need an API key. So
67:47 you'll head over to openrouter.ai. You'll sign up for an account. And then all you
67:50 have to do is in the top right, you're going to click on keys. And then once
67:54 again, kind of the same as all the other ones. You're going to create a new
67:57 key. You're going to give it a name. You're going to click create. You have a
68:01 secret key. You're going to click copy. And then we go back into n8n and
68:04 paste it in here, give it a name, and then hit save. And we should go green.
68:07 We've connected to Open Router. And now we have access to any of these different
68:12 chat models. So, in this case, let's use Claude 3.5
68:19 Sonnet. And this is just to show you guys you can connect to different ones.
68:22 But anyways, now we could click on open chat. And actually, let me make sure you
68:26 guys can see him. If we say hello, it's going to use its brain, Claude 3.5 Sonnet.
68:30 And now it responded to us. Hi there. How can I help you? So, just to validate
68:34 that our information is indeed in the Pinecone vector store, we're going to
68:38 click on a tool under the agent. We're going to type in Pinecone and grab a
68:43 Pinecone vector store, and we're going to grab the account that we just
68:46 selected. So, this was the demo I just made. We're going to give it a name. So,
68:50 in this case, I'm just going to say knowledgebase. We're going to give it a description:
68:59 call this tool to access the policy and FAQ database. So, we're basically just
69:03 describing to the agent what this tool does and when to use it. And then we
69:08 have to select the index and the namespace for it to look inside of. So the
69:11 index is easy. We only have one. It's called sample. But now this is important
69:14 because if you don't give it the right namespace, it won't find the right
69:19 information. So we called ours FAQ. If you remember, in our Pinecone we
69:23 have a namespace called FAQ right here. So that's why we're doing FAQ. And
69:26 now it's going to be looking in the right spot. So before we can chat with
69:30 it, we have to add an embeddings model to our Pinecone vector store, which
69:33 same thing as before. We're going to grab OpenAI and we're going to use
69:37 text-embedding-3-small and the same credential you just made. And now we're going to be
69:41 good to go to chat with our RAG agent. So looking back in the document, we can
69:44 see we have some different stuff. So I'm going to ask this chatbot what the
69:48 warranty policy is. So I'm going to open up the chat window and say what is our
69:54 warranty policy? Send that off. And we should see that it's going to use its
69:57 brain as well as the vector store in order to create an answer for us because
70:00 it didn't know by itself. So there we go. It just finished up, and it said, based on the information
70:06 from our knowledge base, here's the warranty policy. We have one-year
70:10 standard coverage. We have, you know, this email for claims processes. You
70:14 must provide proof of purchase and for warranty exclusions that aren't covered,
70:18 damage due to misuse, water damage, blah blah blah. Back in the policy
70:22 documentation, we can see that that is exactly what we have in our knowledge
70:26 base for warranty policy. So, just because I don't want this video to go
70:29 too long, I'm not going to do more tests, but this is where you can get in
70:31 there and make sure it's working. One thing to keep in mind is within the
70:34 agent, we didn't give it a system prompt. And what a system prompt is is
70:38 just basically a message that tells the agent how to do its job. So what you
70:42 could do is if you're having issues here, you could say, you know, like this
70:45 is the name of our tool, which is called knowledgebase. You could tell the agent
70:49 in the system prompt, hey, your job is to help users answer questions about
70:54 our policy database. You have a tool called knowledgebase. You
70:58 need to use that in order to help them answer their questions. And that will
71:01 help you refine the behavior of how this agent acts. All right, so the next one
71:05 that we're doing is a customer support workflow. And as always, you have to
71:08 figure out what is the trigger for my workflow. In this case, it's going to be
71:12 triggered by a new email received. So I'm going to click on add first step.
71:16 I'm going to type in Gmail. Grab that node. And we have a trigger, which is on
71:19 message received right here. And we're going to click on that. So what we have
71:23 to do now is obviously authorize ourselves. So we're going to click on
71:26 create new credential right here. And all we have to do here is use OAuth2. So
71:30 all we have to do is click on sign in. But before we can do that, we have to
71:33 come over to our Google Cloud once again. And now we have to make sure we
71:36 enable the Gmail API. So we'll click on Gmail API. And it'll be really simple.
71:40 We'll just have to click on enable. And now we should be able to do that OAuth
71:43 connection and actually sign in. You'll click on the account that you want to
71:46 access the Gmail. You'll give it access to everything. Click continue. And then
71:50 we're going to be connected as you can see. And then you'll want to name this
71:54 credential as always. Okay. So now we're using our new credential. And what I'm
71:57 going to do is if I hit fetch test event. So now we are seeing an email
72:01 that I just got in this inbox, which in this case was "n8n Cloud was granted
72:05 access to your Google account," blah blah blah. So that's what we just got.
72:09 Okay. So I just sent myself a different email and I'm going to fetch that email
72:13 now from this inbox. And we can see that the snippet says what is the privacy
72:17 policy? I'm concerned about my data and passwords. And what we want to do is we
72:21 want to turn off simplify because what this button is doing is it's going to
72:24 take the content of the email and basically, you know, cut it off. So in
72:28 this case, it didn't matter, but if you're getting long emails, it's going
72:30 to cut off some of the email. So if we turn off simplify and fetch the test event, once
72:34 again, we're now going to get a lot more information about this email, but we're
72:37 still going to be able to access the actual content, which is right here. We
72:41 have the text, what is privacy policy? I'm concerned about my data and
72:44 passwords. Thank you. And then you can see we have other data too like what the
72:48 subject was, who the email is coming from, what their name is, all this kind
72:52 of stuff. But the idea here is that we are going to be creating a workflow
72:56 where if someone sends an email to this inbox right here, we are going to
72:59 automatically look up the customer support policy and respond back to them
73:02 so we don't have to. Okay. So the first thing I'm actually going to do is pin
73:05 this data just so we can keep it here for testing. Which basically means
73:08 whenever we rerun this, it's not going to go look in our inbox. It's just going
73:12 to keep this email that we pulled in, which helps us for testing, right? Okay,
73:16 cool. So, the next step here is we need to have AI basically filter to see is
73:21 this email customer support related? If yes, then we're going to have a response
73:24 written. If no, we're going to do nothing because maybe the use case would
73:28 be okay, we're going to give it an access to an inbox where we're only
73:32 getting customer support emails. But sometimes maybe that's not the case. And
73:35 let's just say we wanted to create this as sort of like an inbox manager where
73:38 we can route off to different logic based on the type of email. So that's
73:41 what we're going to do here. So I'm going to click on the plus after the
73:44 Gmail trigger and I'm going to search for a text classifier node. And what
73:49 this does is it's going to use AI to read the incoming email and then
73:53 determine what type of email it is. So because we're using AI, the first thing
73:56 we have to do is connect a chat model. We already have our open router
73:59 credential set up. So I'm going to choose that. I'm going to choose the
74:01 credential, and for this one, let's just keep it with GPT-4o mini. And now
74:07 this AI node actually has a brain. I'm going to click into the text classifier.
74:09 And the first thing we see is that there's a text to classify. So all we
74:13 want to do here is we want to grab the actual content of the email. So I'm
74:17 going to scroll down. I can see here's the text, which is the email content.
74:20 We're going to drag that into this field. And now every time a new email
74:25 comes through, the text classifier is going to be able to read it because we
74:28 put in a variable which basically represents the content of the email. So
74:32 now that it has that, it still doesn't know what to classify it as or what its
74:35 options are. So we're going to click on add category. The first category is
74:39 going to be customer support. And then basically we need to give it a
74:42 description of what a customer support email could look like. So I wanted to
74:46 keep this one simple. It's pretty vague, but you could make this more detailed,
74:49 of course. And I just sent an email that's related to helping out a
74:52 customer. They may be asking questions about our policies or questions about
74:56 our products or services. And what we can do is we can give it specific
74:59 examples of like here are some past customer support emails and here's what
75:02 they've looked like. And that will make this thing more accurate. But in this
75:05 case, that's all we're going to do. And then I'm going to add one more category
75:08 that's just going to be other. And then for now, I'm just going to say any email
75:14 that is not customer support related.
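So written out, the two categories look roughly like this (I'm paraphrasing the descriptions I typed in):

  Category: customer support
  Description: An email related to helping out a customer. They may be asking
  questions about our policies or about our products or services.

  Category: other
  Description: Any email that is not customer support related.

The classifier just reads the incoming text and picks whichever description fits best, so the better your descriptions and examples, the more accurate the routing.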
75:18 Okay, cool. So now when we click out of here, we can see we have two different branches coming off of this node, which
75:21 means when the text classifier decides, it's either going to send it off this
75:24 branch or it's going to send it down this branch. So let's quickly hit play.
75:28 It's going to be reading the email using its brain. And now you can see it has
75:32 output down the customer support branch. We can also verify by clicking into
75:35 here. And we can see customer support branch has one item and other branch has
75:39 no items. And just to keep ourselves organized right now, I'm going to click
75:42 on the other branch and I'm just going to add an operation that says do nothing
75:46 just so we can see, you know, what would happen if it went this way for now. But
75:49 now is where we want to configure the logic of having an agent be able to read
75:55 the email, hit the vector database to get relevant information and then help
75:58 us write an email. So I'm going to click on the plus after the customer support
76:02 branch. I'm going to grab an AI agent. So this is going to be very similar to
76:05 the way we set up our AI agent in the previous workflow. So, it's kind of
76:08 building on top of each other. And this time, if you remember in the previous
76:12 one, we were talking to it with a connected chat trigger node. And as you
76:15 can see here, we don't have a connected chat trigger node. So, the first thing
76:19 we want to do is change that: we want to choose define below. And this is where you
76:22 would think, okay, what do we actually want the agent to read? We want it to
76:25 read the email. So, I'm going to do the exact same thing as before. I'm going to
76:29 go into the Gmail trigger node, scroll all the way down until we can find the
76:32 actual email content, which is right here, and just drag that right in.
76:35 That's all we're going to do. And then we definitely want to add a system
76:39 message for this agent. We are going to open up the system message and I'm just
76:42 going to click on expression so I can expand this up full screen. And we're
76:46 going to write a system prompt. Again, for the sake of the video, keeping this
76:49 prompt really concise, but if you want to learn more about prompting, then
76:52 definitely check out my communities linked down below as well as this video
76:56 up here and all the other tutorials on my channel. But anyways, what we said
76:59 here is we gave it an overview and instructions. The overview says you are
77:03 a customer support agent for TechHaven. Your job is to respond to incoming
77:06 emails with relevant information using your knowledgebase tool. And so when we
77:10 do hook up our Pine Cone vector database, we're just going to make sure
77:13 to call it knowledgebase because that's what the agent thinks it has access to.
77:17 And then for the instructions, I said your output should be friendly and use
77:20 emojis, and always sign off as Mr. Helpful from TechHaven Solutions. And
77:24 then one more thing I forgot to do, actually, is we want to tell it what to
77:27 actually output. If we didn't tell it, it would probably output, like, a
77:31 subject and a body. But what's going to happen is we're going to reply to the
77:34 incoming email. We're not going to create a new one. So we don't need a
77:38 subject. So I'm just going to say output only the body content of the email.
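So pieced together, the system prompt reads roughly like this (my wording here is a reconstruction of what's on screen, not a character-for-character copy):

  ## Overview
  You are a customer support agent for TechHaven. Your job is to respond to
  incoming emails with relevant information using your knowledgebase tool.

  ## Instructions
  - Your output should be friendly and use emojis.
  - Always sign off as Mr. Helpful from TechHaven Solutions.
  - Output only the body content of the email.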
77:44 So then we'll give it a try and see what that prompt looks like. We may have to
77:47 come back and refine it, but for now we're good. And as you know, we have
77:51 to connect a chat model and then we have to connect our pine cone. So first of
77:54 all, chat model, we're going to use open router. And just to show you guys, we
77:58 can use a different type of model here. Let's use something else. Okay. So,
78:01 we're going to go with Google Gemini 2.0 Flash. And then we need to add the Pine
78:04 Cone database. So, I'm going to click on the plus under tool. I'm going to search
78:09 for Pine Cone Vector Store. Grab that. And we have the operation is going to be
78:13 retrieving documents as a tool for an AI agent. We're going to call this
78:20 knowledgebase, capital B. And we're going to once again just say call this tool to
78:27 access policy and FAQ information. We need to set up the index as well as the
78:31 namespace. So sample and then we're going to call the namespace, you know,
78:35 FAQ because that's what it's called in our Pinecone right here, as you can see.
78:38 And then we just need to add our embeddings model and we should be good
78:42 to go, which is the OpenAI text-embedding-3-small embeddings model. So we're going to
78:46 hit the play above the AI agent and it's going to be reading the email. As you
78:49 can see once again the prompt user message. It's reading the email. What is
78:52 the privacy policy? I'm concerned about my data and my passwords. Thank you. So
78:56 we're going to hit the play above the agent. We're going to watch it use its
78:59 brain. We're going to watch it call the vector store. And we got an error. Okay.
79:04 So, I'm getting this error, right? And it says provider returned error. And
79:09 it's weird because basically why it's erroring is because of our chat
79:12 model. And it's weird because it goes green, right? So, anyways, what I
79:16 would do here is if you're experiencing that error, it means there's something
79:19 wrong with your key. So, I would go reset it. But for now, I'm just going to
79:22 show you the quick fix. I can connect to an OpenAI chat model real quick. And I
79:27 can run this here and we should be good to go. So now it's going to actually
79:31 write the email and output. Super weird error, but I'm honestly glad I caught
79:34 that on camera to show you guys in case you face that issue because it could be
79:37 frustrating. So we should be able to look at the actual output, which is,
79:41 "Hey there, thank you for your concern about privacy policy. At Tech Haven, we
79:45 take your data protection seriously." So then it gives us a quick summary with
79:48 data collection, data protection, cookies. If we clicked into here and
79:51 went to the privacy policy, we could see that it is in fact correct. And then it
79:55 also was friendly and used emojis like we told it to right here in the system
79:59 prompt. And finally, it signed off as Mr. Helpful from Tech Haven Solutions,
80:02 also like we told it to. So, we're almost done here. The last thing that we
80:05 want to do is we want to have it actually reply to this person that
80:09 triggered the whole workflow. So, we're going to click on the plus. We're going
80:13 to type in Gmail. Grab a Gmail node and we're going to do reply to a message.
80:17 Once we open up this node, we already know that we have it connected because
80:20 we did that earlier. We need to configure the message ID, the message
80:24 type, and the message. And so all I'm going to do is first of all, email type.
80:28 I'm going to do text. For the message ID, I'm going to go all the way down to
80:31 the Gmail trigger. And we have an ID right here. This is the ID we want to
80:35 put into the message ID so that it responds in line on Gmail rather than
80:39 creating a new thread. And then for the message, we're going to just drag in the
80:43 output from the agent that we just had write the message. So, I'm going to grab
80:46 this output and put it right there.
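For reference, the two fields we just filled in end up holding expressions along these lines (treat this as a sketch, since the node reference has to match whatever your trigger node is actually named):

  Message ID: {{ $('Gmail Trigger').item.json.id }}
  Message:    {{ $json.output }}

The first one reaches back to the trigger so Gmail knows which thread to reply in, and the second one is the output field that the AI agent node produces.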
80:49 And now you can see this is how it's going to respond in the email. And the last thing I want to do is I want to click on add option, append
80:55 n8n attribution, and then just toggle that off. So then at the bottom of the
80:59 email, it doesn't say this was sent by n8n. So finally, we'll hit this test
81:04 step. We will see we get a success message that the email was sent. And
81:07 I'll head over to the email to show you guys. Okay, so here it is. This is the
81:11 one that we sent off to that inbox. And then this is the one that we just got
81:13 back. As you can see, it's in the same thread and it has basically the privacy
81:19 policy outlined for us. Cool. So, that's workflow number two. Couple ways we
81:22 could make this even better. One thing we could do is we could add a node right
81:25 here. And this would be another Gmail one. And we could basically add a label
81:31 to this email. So, if I grab add label to message, we would do the exact same
81:34 thing. We'd grab the message ID the same way we grabbed it earlier. So, now it
81:38 has the message ID of the email to actually label. And then we would just
81:41 basically be able to select the label we want to give it. So in this case, we
81:44 could give it the customer support label. We hit test step, we'll get
81:48 another success message. And then in our inbox, if we refresh, we will see that
81:51 that just got labeled as customer support. So you could add on more
81:55 functionality like that. And you could also down here create more sections. So
81:59 we could have finance, you know, a logic built out for finance emails. We could
82:02 have logic built out for all these other types of emails and plug them into
82:07 different knowledge bases as well. Okay. So the third one we're going to do is a
82:11 LinkedIn content creator workflow. So, what we're going to do here is click on
82:14 add first step, of course. And ideally, you know, in production, what this
82:17 workflow would look like is a schedule trigger, you know. So, what you could do
82:20 is basically say every day I want this thing to run at 7:00 a.m. That way, I'm
82:23 always going to have a LinkedIn post ready for me at, you know, 7:30. I'll
82:27 post it every single day. And if you wanted it to actually be automatic,
82:30 you'd have to flick this workflow from inactive to active. And, you know, now
82:34 it says your schedule trigger will now trigger executions on the schedule
82:37 you have defined. So now it would be working, but for the sake of this video,
82:40 we're going to turn that off and we are just going to be using a manual trigger
82:44 just so we can show how this works. But it's the same concept, right? It
82:48 would just start the workflow.
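And if you're curious, that "every day at 7:00 a.m." rule can also be written as a custom cron expression in the schedule trigger. This one is equivalent to what we set with the dropdown fields:

  0 7 * * *    (minute 0, hour 7, every day)

Either way works; the dropdowns are just the friendlier way of saying the same thing.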
82:51 So what we're going to do from here is we're going to connect a Google Sheet. I'm going to grab a Google Sheets node. I'm
82:55 going to click on get row(s) in sheet, and we have to create our credential once
82:58 again. So we're going to create new credential. We're going to be able to do
83:02 OAuth to sign in, but we're going to have to go back to Google Cloud and we're
83:05 going to have to make sure that we have the Google Sheets API
83:08 enabled. So, we'll come in here, we'll click enable, and now once this is good
83:12 to go, we'll be able to sign in using OAuth2. So, very similar to what we just
83:16 had to do for Gmail in that previous workflow. But now, we can sign in. So,
83:20 once again, choosing my email, allowing it to have access, and then we're
83:23 connected successfully, and then giving this a good name. And now, what we can
83:26 do is choose the document and the sheet that it's going to be pulling from. So,
83:29 I'm going to show you. I have one called LinkedIn posts, and I only have one
83:33 sheet, but let's show you the sheet real quick. So, LinkedIn posts, what we have
83:38 is a topic, a status, and a content. And we're just basically going to be pulling
83:42 in one row where the status equals to-do, and then we are going to
83:46 create the content, upload it back in right here, and then we're going to
83:50 change the status to created. So, then this same row doesn't get pulled in
83:52 every day. So, how this is going to work is that we're going to create a filter.
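Just to picture it, the sheet is as simple as this (simplified, with the content column starting out empty):

  topic                 | status | content
  ----------------------|--------|--------
  AI image generation   | to-do  |
  (more topics...)      | to-do  |

The workflow grabs the first to-do row, writes the content, and then flips the status to created.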
83:56 So the first filter is going to be looking within the status column and it
84:01 has to equal to-do. And if we click on test step, we should see that we're
84:04 going to get like all of these items where there's a bunch of topics. But we
84:08 don't want that. We only want to get the first row. So at the bottom here, add
84:12 option. I'm going to say return only first matching row. Check that on. We'll
84:15 test this again. And now we're only going to be getting that top row to
84:19 create content on. Cool. So we have our first step here, which is just getting
84:23 the content from the Google sheet. Now, what we're going to do is we need to do
84:27 some web search on this topic in order to create that content. So, I'm going to
84:30 add a new node. This one's going to be called an HTTP request. So, we're going
84:34 to be making a request to a specific API. And in this case, we're going to be
84:38 using Tavily's API. So, go on over to tavily.com and create a free account.
84:42 You're going to get 1,000 searches for free per month. Okay, here we are in my
84:46 account. I'm on the free researcher plan, which gives me a thousand free
84:49 credits. And right here, I'm going to add an API key. We're going to name it,
84:54 create a key, and we're going to copy this value. And so, you'll start to see
84:56 that when you connect to different services, you always need to
85:00 have some sort of token or API key. But anyways, we're going to grab this in
85:03 a sec. What we need to do now is go to the documentation that we see right
85:06 here. We're going to click on API reference. And right here,
85:10 this is going to be the API that we need to use in order to search the web. So,
85:14 I'm not going to really dive into like everything about HTTP requests right
85:17 now. I'm just going to show you the simple way that we can get this set up.
85:21 So first thing that we're going to do is we obviously see that we're using an
85:25 endpoint called Tavily search, and we can see it's a POST request, which is
85:28 different than, like, a GET request, and we have all these different things we need
85:31 to configure, and it can be confusing. So all we want to do is, on the top right, we
85:35 see this curl command. We're going to click on the copy button. We're going to
85:40 go back into our n8n, hit import curl, paste in the curl command, hit
85:46 import, and now the whole node magically just fills itself in.
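For context, the curl command you copy from Tavily's API reference looks roughly like this (the exact body fields may differ a bit as their docs evolve):

  curl -X POST https://api.tavily.com/search \
    -H "Authorization: Bearer <token>" \
    -H "Content-Type: application/json" \
    -d '{
      "query": "who is Leo Messi",
      "topic": "general",
      "search_depth": "basic",
      "max_results": 1
    }'

n8n reads the method, the URL, the headers, and the body out of that one command and drops each piece into the right field of the HTTP request node.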
85:50 that's really awesome. And now we can sort of break down what's going on. So
85:53 for every HTTP request, you have to have some sort of method. Typically, when
85:58 you're sending over data to a service, which in this case, we're going to be
86:01 sending over data to Tavily (it's going to search the web and then bring data
86:05 back to us), that's a POST request, because we're sending over body data. If
86:09 we were just trying to access something
86:14 like bestbuy.com and we just wanted to scrape the information, that
86:17 could just be a simple GET request, because we're not sending anything over.
86:20 Anyways, then we're going to have some sort of base URL and endpoint, which is
86:24 right here. The base URL we're hitting is api.tavily.com, and then the endpoint
86:30 we're hitting is /search. So back in the documentation you can see right
86:34 here we have /search, but if we were doing an extract we would do
86:37 /extract. So that's how you can kind of see the difference with the endpoints.
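As a rough sketch of that difference (the auth header is left out here, since we set that up next, and bestbuy.com is just the example site from above):

```
# GET: just asking a server for a page; nothing is sent in a request body
curl https://www.bestbuy.com

# POST: sending JSON body data over to an API endpoint
curl -X POST https://api.tavily.com/search \
  -H "Content-Type: application/json" \
  -d '{"query": "who is Leo Messi"}'
```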
86:40 And then we have a few more things to configure. The first one of course is
86:44 our authorization. So in this case, we're doing it through a header
86:46 parameter. As you can see right here, the curl command set it up. Basically
86:51 all we have to do is replace this token with our API key from Tavily. So I'm
86:56 going to go back here and copy that key into n8n. I'm going to get rid of token, and
87:00 just make sure that you have a space after the word Bearer. And then you can
87:03 paste in your token. And now we are connected to Tavily. But we need to
87:07 configure our request before we send it off. So right here are the parameters
87:11 within our body request. And I'm not going to dive too deep into it. You can
87:13 go to the documentation if you want to understand it all. The main thing
87:17 really is the query, which is what we're searching for. But we have other things
87:20 like the topic. It can be general or news. We have search depth. We have max
87:24 results. We have a time range. We have all this kind of stuff. Right now I'm
87:28 just going to leave everything here as default. We're only going to be getting
87:31 one result. And we're going to be doing a general topic. We're going to be doing
87:34 basic search. But right now, if we hit test step, we should see that this is
87:37 going to work. But it's going to be searching for who is Leo Messi. And
87:40 here's sort of like the answer we get back as well as a URL. So this is an
87:45 actual website we could go to about Lionel Messi and then some content from
87:51 that website. Right? So we are going to change this to an expression so that we
87:54 can put a variable in here rather than just a static hard-coded who is Leo
87:58 Messi. We'll delete that query. And all we're going to do is just pull in our
88:02 topic. So, I'm just going to simply pull in the topic of AI image generation.
88:06 Obviously, it's a variable right here, but this is the result. And then we're
88:09 going to test step. And this should basically pull back an article about AI
88:14 image generation. And, you know, here is a DeepAI link. We'll go to it.
88:19 And we can see this is an AI image generator. So maybe this isn't exactly
88:23 what we're looking for. What we could do is basically just
88:28 hardcode in "search the web for" before the variable. And now it's going to be saying "search
88:31 the web for AI image generation." We could also come in here and say, actually,
88:34 let's get three results, not just one. And now we could test
88:37 that step, and we're going to get a slightly different search
88:42 result: "AI image generation uses text descriptions to create unique visuals."
88:45 And then now you can see we got three different URLs rather than just one.
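Putting all of that together, the request this node now sends looks roughly like this as a curl command. Your real API key goes after Bearer, and in the workflow the query string is built from the sheet's topic at runtime:

```
curl -X POST https://api.tavily.com/search \
  -H "Authorization: Bearer YOUR_TAVILY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "search the web for AI image generation",
    "topic": "general",
    "search_depth": "basic",
    "max_results": 3
  }'
```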
88:49 Anyways, so that's our web search. And now that we have a web search based on
88:53 our defined topic, we just need to write that content. So I'm going to click on
88:58 the plus. I'm going to grab an AI agent. And once again, we're not giving it a
89:01 connected chat trigger node to look at; there isn't one in this workflow. We're going
89:05 to feed in the research that was just done by Tavily. So, I'm going to click on
89:10 expression to open this up. I'm going to say article one with a colon and I'm
89:15 just going to drag in the content from article one. I'm going to say article 2
89:20 with a colon and just drag in the content from article 2. And then I'm
89:25 going to say article 3 colon and just drag in the content from the third
89:29 article. So now it's looking at all three article contents.
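For reference, the finished prompt field looks something like this. This assumes Tavily returns its hits in a results array with a content field on each, which is what the drag-and-drop generates in this setup; your exact field paths may differ:

```
Article 1: {{ $json.results[0].content }}
Article 2: {{ $json.results[1].content }}
Article 3: {{ $json.results[2].content }}
```

And now we just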
89:32 need to give it a system prompt on how to write a LinkedIn post. So open this
89:36 up. Click on add option. Click on system message. And now let's give it a prompt
89:41 about turning these three articles into a LinkedIn post. Okay. So I'm heading
89:45 over to my custom GPT for prompt architect. If you want to access this,
89:48 you can get it for free by joining my free school community. You'll join
89:51 that. It's linked in the description and then you can just search for prompt
89:54 architect and you should find the link. Anyways, real quick, it's just asking
89:58 for some clarification questions, so I'm just shooting off a quick
90:01 reply and now it should basically be generating our system prompt for us. So,
90:05 I'll check in when this is done. Okay, so here is the system prompt. I am going
90:09 to just paste it in here, and, you know, disclaimer: this is
90:12 not perfect at all. Like, I don't even want this tool section at all, because we
90:16 don't have a tool hooked up to this agent. We're obviously just going to
90:19 give it a chat model real quick. So, in this case, what I'm going to do is I'm
90:22 going to use Claude 3.5 Sonnet just because I really like the way that it
90:25 writes content. So, I'm using Claude through OpenRouter. And now, let's give
90:28 it a run and we'll just see what the output looks like. I'll just click
90:31 into here while it's running and we should see that it's going to read those
90:34 articles and then we'll get some sort of LinkedIn post back. Okay, so here it is.
90:39 The creative revolution is here and it's AI powered. Gone are the days of hiring
90:42 expensive designers or struggling with complex software. Today's entrepreneurs
90:46 can transform ideas into stunning visuals instantly using AI image
90:50 generators. So, as you can see, we have a few emojis. We have some relevant
90:53 hashtags. And then at the end, it also kind of
90:56 explains why it made this post. We could easily get rid of that. If all we want
90:59 is the content, we would just have to throw that in the system prompt. But now
91:03 that we have the post that we want, all we have to do is send it back into our
91:07 Google sheet and update that it was actually made. So, we're going to grab
91:11 another sheets node. We're going to do update row and sheet. And this one's a
91:14 little different. It's not just grabbing stuff from a row. We're trying
91:18 to update stuff. So, we have to say what document we want, what sheet we want.
91:21 But now, it's asking us what column do we want to match on. So, basically, I'm
91:25 going to choose topic. And all we have to do is go all the way back down to the
91:28 sheet. We're going to choose the topic and drag it in right here. Which is
91:32 basically saying, okay, when this node gets called, whenever the topic equals
91:37 AI image generation, which is a variable, obviously; whatever
91:40 topic triggered the workflow is what's going to pop up here. We're going to
91:44 update that status. So, back in the sheets, we can see that the status is
91:47 currently to-do, and we need to change it to created in order for it to go
91:51 green. So, I'm just going to type in created, and obviously, you have to
91:53 spell this correctly the same way you have it in your Google Sheets. And then
91:56 for the content, all I'm going to do is we're just going to drag in the output
92:00 of the AI agent. And as you can see, it's going to be spitting out the
92:03 result.
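So the update node's mapping ends up looking roughly like this. This is just a sketch: the node name inside the topic expression depends on what your first Sheets node is actually called, and the agent's text comes through in a field called output:

```
Column to match on: topic

topic:    {{ $('Get row(s) in sheet').item.json.topic }}
status:   created
content:  {{ $json.output }}
```

And now if I hit test step and we go back into the sheet, we'll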
92:06 basically watch this change. Now it's created. And now we have the content of
92:10 our LinkedIn post as well with some justification for why it created the
92:14 post like this. And so like I said, you could basically have this be some sort
92:17 of, you know, LinkedIn content making machine where every day it's going to
92:21 run at 7:00 a.m. It's going to give you a post. And then what you could do also
92:24 is you can automate this part of it where you're basically having it create
92:27 a few new rows every day if you give it a certain sort of like general topic to
92:32 create topics on and then every day you can just have more and more pumping out.
92:35 So that is going to do it for our third and final workflow. Okay, so that's
92:39 going to do it for this video. I hope that it was helpful. You know, obviously
92:42 we connected to a ton of different credentials and a ton of different
92:46 services. We even made an HTTP request to an API called Tavily. Now, if you found
92:49 this helpful and you liked this sort of live step-by-step style and you're also
92:53 looking to accelerate your journey with n8n and AI automations, I would
92:56 definitely recommend checking out my paid community. The link for that is in the description.
2:43 An LLM on its own is really not that practical, though. It just kind of helps you be more productive because
2:47 at the end of the process, we'd have to take this output and copy and paste it
2:50 into something that actually can take action like Gmail. And so the power of
2:54 these large language models really comes into play when we start to expose them
2:58 to different tools. And tools just means any of these integrations that we use
3:02 every single day within our work that let us actually do something. So whether
3:06 that's send an email or update a row in our CRM or look at a database or
3:11 Airtable, whatever it is, even Outlook, that's a tool. It just means connecting
3:16 to an actual platform that we use to do something. So now instead of having the
3:19 LLM help us write an email that we would copy and paste or us exporting a Google
3:24 sheet and giving it to an LLM to analyze, it can basically just interact
3:27 with any of its tools that we give it access to. So when we add an LLM to
3:33 tools, we basically can get two different things and that is either an
3:38 AI workflow or an AI agent. So right away you can already tell what the
3:40 difference is, but you can also see some similarities. So, let's break it down.
3:44 Starting with an AI workflow, we can see that we have an input similar to like we
3:48 did up top with our LLM. But now, instead of just going input, LLM,
3:54 output, we can work in those tools right into the actual AI workflow itself. So,
3:58 here is an example of what an AI workflow could practically look like.
4:01 First of all, we have a tool which is HubSpot, and that's going to be the
4:04 input for this workflow. This will basically pass over a new lead that has
4:08 been inserted into our CRM. Then we're hitting another tool which is Perplexity
4:11 which helps us do research. So we're going to do research on that lead. From
4:15 there, after we get that research, we're going to hit an LLM, which is where this
4:18 whole AI powered workflow terminology comes in, because we're using that LLM
4:23 to then take the research, draft a personalized email, and then it can use
4:27 another tool to actually send that email. And the reason that we do this as
4:30 a workflow is because this is going to happen in the same four steps in that
4:35 order every time. New lead comes in, research, write the personalized email,
4:39 send the email. And so whenever we know a process is linear or sequential or
4:43 it's going to follow that order every time, it's much much better to do an
4:47 actual workflow rather than send that off to an AI agent where we have an
4:51 input, we have the LLM, which is the AI agent. This is the brain of the whole
4:54 operation and it has access to all of the different tools it can use and then
4:59 it has an output. So yes, it is true that this AI agent down here could do
5:02 the exact same job as this AI workflow. We could also over here with an AI agent
5:07 get a new form and have a new row in our CRM. The agent could then think about it
5:10 and decide, okay, I'm going to use perplexity to do research and then after
5:13 that I'm going to send an email with my email tool. But it's not the most
5:16 effective way to do it because it's going to be more expensive. It's going
5:20 to be slower and it's going to be more error-prone. So basically the whole idea
5:25 is AI agents can make decisions and act autonomously based on different inputs.
5:29 AI workflows follow the guardrails that we put in place. There's no way they can
5:32 deviate off the path that we chose for them. So, a big part of building
5:36 effective systems is understanding, okay, do I need to build an AI workflow
5:39 or am I going to build an AI agent? Is this process deterministic or
5:43 non-deterministic? Or in other words, is it predictable or is it unpredictable?
5:46 If something's unpredictable, that's when we're going to use an AI agent with
5:50 a brain and with different tools to actually do the job. And that's where
5:53 the whole autonomy comes into play. And I don't want to dive too deep into the
5:56 weeds of this right now. We'll cover this later in a different section. But
6:00 real quick, four main pros of AI workflows over AI agents. Reliability
6:05 and consistency, cost efficiency, easier debugging and maintenance, and
6:08 scalability. Once again, we have a whole section dedicated to this idea and we're
6:12 going to dive into it after we've built out a few AI workflows, but I wanted you
6:15 guys to understand this because obviously the whole purpose you probably
6:18 came here was to learn how to build AI agents. But before we build AI agents,
6:22 we're going to learn how to build AI workflows. It's the whole concept of
6:26 crawl, walk, run. You wouldn't just start running right away. And trust me,
6:29 after you've built out a few AI workflows, it's going to make a lot more
6:32 sense when you hop into building some more complex agentic systems. But just
6:36 to give you that quick fix of AI agent knowledge, and we'll revisit this later
6:39 when we actually build our first agent together. What is the anatomy of an AI
6:43 agent? What are the different parts that make one up? So, here's a simple diagram
6:47 that I think illustrates AI agents as simple as possible. We have an input, we
6:51 have our LLM, and we have our output like we talked about earlier. But then
6:55 inside the AI agent you can see two main things. We have a brain and we have
6:59 instructions. So the first thing is a brain. This comes in the form of a large
7:03 language model and also memory. So first off the large language model this is an
7:07 AI chat model that we'll choose, whether that's an OpenAI model or an Anthropic
7:11 model or a Google model. This is going to be what powers the AI agent to make
7:16 decisions, to reason, to generate outputs, that sort of stuff. And then we also
7:19 have the memory. So this can come in the form of long-term memory as well as
7:22 short-term memory. But basically, we want to make sure that if we're
7:25 conversing with our agent, it's not going to forget what we're talking about
7:29 after every single sentence. It's going to retain that context window, and it
7:33 can also remember things that we talked about a while back. And then the other
7:37 piece is the instructions for the AI agent, which is also kind of referred to
7:41 as a system prompt. And this is really important because this is telling this
7:45 AI agent, you know, here's your role, here's what you do, here are the tools
7:49 you have. This is basically like your job description. So, the same way you
7:53 wouldn't expect a new hire to hop into the company and just start using all the
7:56 different tools and knowing what to do, you have to give the agent some
8:01 pretty specific training: this is your end goal. Here are the
8:03 tools you have and here's when you use each one to get the job done. And the
8:07 system prompt is different than the input, which is kind of referred to as a
8:11 user prompt. And think of it like this: when you're talking to ChatGPT in your
8:15 browser, every single message that you're typing and sending off to it is a
8:19 user message, because that input changes every time. It's dynamic, but the system
8:23 prompt is typically going to stay the same over the course of this agent's
8:27 life unless its role or actual instructions are going to change. But
8:30 anyways, let's say the input is, hey, can you help me send an email to John?
8:34 What's going to happen is the agent's going to use its brain to understand the
8:37 input. It's going to check its memory to see if there's any other interactions
8:40 that would help with this current input. Then it will look at its instructions
8:44 and see, okay, how do I actually send an email to John? And then it will call on
8:48 its tool to actually send an email. So at a high level, that is the anatomy of
8:52 an AI agent. And I hope that that helps paint a clear picture in your mind.
8:55 Cool. So now that we've talked about what an AI agent is and what a workflow
8:59 is and why we want to walk before we run, let's actually get into n8n and
9:03 start building some stuff. All right. So, before we dive into actually building AI agents, I
9:08 want to share some eye-opening research that underscores exactly why you're
9:11 making such a valuable investment in yourself today. This research report
9:14 that I'm going to be walking through real quick will be available for free in
9:17 my school community if you want to go ahead and take a look at it. It's got a
9:21 total of 48 sources that are all from within the past year. So, you know it's
9:24 real, you know it's relevant, and it was completely generated for me using
9:27 Perplexity, which is an awesome AI tool. So, just a year ago, AI was still
9:31 considered experimental technology for most businesses. Now, it's become the
9:35 core driver of competitive advantage across every industry and business size.
9:39 What we're witnessing isn't just another tech trend. It's a fundamental business
9:44 transformation. Let me start with something that might surprise you. 75%
9:48 of small businesses now use AI tools. That's right. This isn't just enterprise
9:52 technology anymore. In fact, the adoption rates are climbing fastest
9:55 among companies generating just over a million dollars in revenue at 86%.
9:59 What's truly remarkable is the investment threshold. The median annual
10:03 AI investment for small businesses is just $1,800. That's less than $150
10:08 per month to access technology that was science fiction just a few years
10:13 ago. Now, I know some of you might be skeptical about AI's practical value.
10:16 Let's look at concrete outcomes businesses are achieving. Marketing
10:20 teams are seeing a 22% increase in ROI for AI-driven campaigns. Customer service
10:25 AI agents have reduced response time by 60% while resolving 80% of inquiries
10:29 without human intervention. Supply chains optimized with AI have cut
10:34 transportation costs by 5 to 10% through better routing and demand forecasting.
10:37 These are actual measured results from implementations over the past year. Now,
10:40 for those of you from small organizations, consider these examples.
10:44 Henry's House of Coffee used AI-driven SEO tools to improve their product
10:48 descriptions, resulting in a 200% improvement in search rankings and 25%
10:53 revenue increase. Vanisec Insurance implemented custom chatbots that cut
10:57 client query resolution time from 48 hours to just 15 minutes. Small
11:01 businesses using Zapier automations saved 10 to 15 hours weekly on routine
11:06 data entry and CRM updates. What's revolutionary here is that none of these
11:09 companies needed to hire AI specialists or data scientists to achieve these
11:15 results. The economic case for AI skills is compelling. 54% of small and medium
11:19 businesses plan to increase AI spending this year. 83% of enterprises now
11:23 prioritize AI literacy in their hiring decisions. Organizations with AI trained
11:27 teams are seeing 5 to 8% higher profitability than their peers. But
11:31 perhaps most telling is this. Small businesses using AI report 91% higher
11:37 revenue growth than non-AI adopters. That gap is only widening. So, the opportunity
11:42 ahead: the truth is, mastering AI is no longer optional. It's becoming the price
11:45 of entry for modern business competitiveness. Those who delay risk
11:48 irrelevance while early adopters are already reaping the benefits of
11:52 efficiency, innovation, and market share gains. Now, the good news is that we're
11:56 still in the early stages. By developing these skills now, you're positioning
11:59 yourself at the forefront of this transformation and going to be in
12:02 extremely high demand over the next decade. So, let's get started
12:10 building. All right, so here we are on n8n's website. You can get here using
12:14 the link in the description. And what I'm going to do is go ahead and sign up
12:17 for a free trial with you guys. And this is exactly the process you're going to
12:19 take. And you're going to get two weeks of free playing around. And like I said,
12:23 by the end of those two weeks, you're already going to have automations up and
12:26 running and tons of templates imported into your workflows. And I'm not going
12:29 to spend too much time here, but basically n8n just lets you automate
12:33 anything. Any business process that you have, you can automate it visually with
12:37 no code, which is why I love it. So here you can see n8n lets you automate
12:40 business processes without limits on your logic. It's a very visual builder.
12:44 We have a ton of different integrations. We have the ability to use code if you
12:48 want to. Lots of native nodes to do data transformation. And we have tons of
12:52 different triggers, tons of different AI nodes. And we're going to dive into this
12:55 so you can understand what's all going on. But there's also hundreds of
12:58 templates to get you started. Not only on the n8n website itself, but also in
13:02 my free school community. I have almost 100 templates in there that you can plug
13:05 in right away. Anyways, let's scroll back up to the top and let's get started
13:08 here with a new account. All right. So, I put in my name, my email,
13:11 password, and I give my account a name, which will basically be up in the top
13:16 search bar. It'll be like nateherkdemo.app.n8n.cloud. So, that's what
13:19 your account name means. And you can see I'm going to go ahead and start our
13:22 14-day free trial. Just have to do some quick little onboarding. So, it asks us
13:26 what type of team are we on. I'm just going to put product and design. It asks
13:29 us the size of our company. It's going to ask us which of these things do we
13:32 feel most comfortable doing. These are all pretty technical. I just want to put
13:35 none of them, and that's fine. And how did you hear about n8n? Let's go ahead
13:39 with YouTube and submit that off. And now you have the option to invite other
13:42 members to your workspace if you want to collaborate and share some credentials.
13:44 For now, I'm just going to go ahead and skip that option. So from here, our
13:48 workspace is already ready. There's a little quick start guide you could watch
13:50 from n8n's YouTube channel, but I'm just going to go ahead and click on
13:53 start automating. All right, so here we are. This is what n8n looks like. And
13:57 let's just familiarize with this dashboard a little bit real quick. So up
14:00 in the top left, we can see we have 14 days left in our free trial and we've
14:05 used zero out of 1,000 executions. An execution just basically means when you
14:09 run a workflow from end to end that's going to be an execution. So we can see
14:12 on the left-hand side we have overview. We have like a personal set of projects.
14:15 We have things that have been shared with us. We have the ability to add a
14:19 project. We have the ability to go to our admin panel where we can upgrade our
14:23 instance of n8n. We can turn it off. That sort of stuff. So here's my admin
14:26 panel. You can see how many executions I have, how many active workflows I have,
14:29 which I'll explain what that means later. We have the ability to go ahead
14:33 and manage our n8n versions. And this is where you could kind of upgrade your
14:36 plan and change your billing information, stuff like that. But you'll
14:39 notice that I didn't even have to put any billing details to get started with
14:43 my two-week free trial. But then if I want to get back into my workspace, I'm just
14:45 going to click on open right here. And that will send us right back into this
14:49 dashboard that we were just on. Cool. So right here we can see we can either
14:53 start from scratch, a new workflow, or we can test a simple AI agent example.
14:56 So let's just click into here real quick and break down what is actually going on
15:00 here. So, in order for us to actually access this demo where we're going to
15:03 just talk to this AI agent, it says that we have to start by saying hi. So,
15:06 there's an open chat button down here. I'm going to click on open chat and I'm
15:11 just going to type in here, hi. And what happens is our AI agent fails because
15:15 this is basically the brain that it needs to use in order to think about our
15:18 message and respond to us. And what happens is we can see there's an error
15:21 message. So, because these things are red, I can click into it and I can see
15:25 what is the error. It says error in subnode OpenAI model. So that would be
15:29 this node down here which is called OpenAI model. I would click into this
15:33 node and we can basically see that the error is there is no credentials. So
15:38 when you're in n8n, what happens is, in order to access any sort of API, which
15:41 we'll talk about later but in order to access something like your Gmail or
15:47 OpenAI or your CRM you always need to import some sort of credential which is
15:50 just a fancy word for a password in order to actually like get into that
15:54 information. So right here we can see there's 100 free credits from OpenAI.
15:58 I'm going to click on claim credits. And now we're just using our n8n free
16:02 OpenAI API credits and we're fine on this front. But don't worry, later in
16:05 this video I'm going to cover how we can actually go to OpenAI and get an API key
16:10 and create our own password in here. But for now, we've claimed 100 free credits,
16:13 which is great. And what I'm going to do is just go ahead and resend this message
16:16 that says hi. So I can actually go to this hi text and I can just click on
16:20 this button which says repost message. And that's just going to send it off
16:23 again. And now our agent's going to actually be able to use its brain and
16:27 respond to us. So what it says here is welcome to n8n. Let's start with the
16:30 first step to give me memory. Click the plus button on the agent that says
16:34 memory and choose simple memory. Just tell me once you've done that. So sure,
16:37 why not? Let's click on the plus button under memory. And we'll click on simple
16:41 memory real quick. And we're already set up. Good to go. So now I'm just going to
16:46 come down here and say done. Now we can see that our agent was able to use its
16:49 memory and its brain in order to respond to us. So now it can prompt us to add
16:54 tools. It can do this other stuff, but we're going to break that down later in
16:57 this video. Just wanted to show you real quick demo of how this works. So, what I
17:01 would do is up in the top right, I can click on save just to make sure that
17:05 what we've done is actually going to be saved. And then to get back out to the
17:08 main screen, I'm going to click on either overview or personal. But if I
17:11 click on overview, that just takes us back to that home screen. But now, let's
17:15 talk about some other stuff that happens in a workflow. So, up in the top right,
17:19 I'm going to click on create workflow. You can see now this opens up a new
17:22 blank page. And then you have the option up here in the top left to name it. So
17:26 I'm just going to call this one demo. Now we have this new workflow that's
17:31 saved in our n8n environment called demo. So a couple things before we actually
17:34 drag in any nodes is up here. You can see where is this saved. If you have
17:37 different projects, you can save workflows in those projects. If you want
17:40 to tag them, you can tag different things like if you have one for customer
17:45 support or you have stuff for marketing, you can give your workflows different
17:48 tags just to keep everything organized. But anyways, every single workflow has
17:53 to start off with some sort of trigger. So when I click on add first step, it
17:56 opens up this panel on the right that says what triggers this workflow. So we
18:00 can have a manual trigger. We can have a certain event like a new message in
18:04 Telegram or a new row in our CRM. We can have a schedule, meaning we can set this
18:08 to run at 6 a.m. every single day. We can have a web hook call, form
18:11 submission, chat message like we saw earlier. There's tons of ways to
18:15 actually trigger a workflow. So for this example, let's just say I'm going to
18:18 click on trigger manually, which literally just gives us this button
18:21 where if we click test workflow, it goes ahead and executes. Cool. So this is a
18:26 workflow and this is a node, but this is a trigger node. What happens after a
18:29 trigger node is different types of nodes, whether that's like an action
18:34 node or a data transformation node or an AI node, some sort of node. So what I
18:39 would do is if I want to link up a node to this trigger, I would click on the
18:42 plus button right here. And this pulls up a little panel on the right that says
18:46 what happens next. Do you want to take action with AI? Do you want to take
18:49 action within a certain app? Do you want to do data transformation? There's all
18:52 these other different types of nodes. And what's cool is let's say we wanted
18:55 to take action within an app. If I clicked on this, we can see all of the
18:58 different native integrations that Nin has. And once again, in order to connect
19:02 to any of these tons of different tools that we have here, you always need to
19:06 get some sort of password. So let's say Google Drive. Now that I've clicked into
19:09 Google Drive, there's tons of different actions that we can take and they're all
19:12 very intuitive: would you want to copy a file? Would you want to share a
19:16 file? Do you want to create a shared drive? It's all very natural language. And
19:19 let's say, for example, I want to copy a file. In order for n8n to tell Google
19:24 Drive which file we want to copy, we first of all have to provide a
19:27 credential. So every app, you'll have to provide some sort of credential, and then
19:31 you have basically a configuration panel right here in the middle, which
19:35 is asking what resource you want, what you want to do, what the
19:38 file is, all this kind of stuff. So whenever you're in a node in n8n, what you're
19:42 going to have is on the left you have an input panel which is basically any data
19:45 that's going to be feeding into this current node. In the middle you'll have
19:49 your configuration which is like the different settings and the different
19:52 little levers you can tweak in order to do different things. And then on the
19:56 right is going to be the output panel of what actually comes out of this node
20:00 based on the way that you configured it. So every time you're looking at a node
20:03 you're going to have three main places input configuration and output. So,
20:07 let's just do a quick example where I'm going to delete this Google Drive node
20:11 by clicking on the delete button. I'm going to add an AI node because there's
20:14 a ton of different AI actions we can take as well. And all I'm going to do is
20:17 I'm just going to talk to OpenAI's model, kind of like ChatGPT. So, I'll click on that
20:21 and I'm just going to click on message a model. So, once that pulls up, we're
20:25 going to be using our n8n free OpenAI credits that we got earlier. And as you
20:30 can see, we have to configure this node. What do we want to do? The resource is
20:34 going to be text. It could be image, audio, assistant, whatever we want. The
20:38 operation we're taking is we want to just message a model. And then of
20:42 course, because we're messaging a model, we have to choose from this list of
20:47 OpenAI models that we have access to. And actually, it looks like these n8n free
20:51 credits only give us access to one chat model. And this is a bit
20:54 different. Not exactly sure why. Probably just because they're free
20:57 credits. So, what we're going to do real quick is head over to OpenAI and get a
21:01 credential so I can just show you guys how this works with input configuration
21:06 and output. So, basically, you'd go to openai.com. You'd come in here and you'd
21:09 create an account if you don't already have one. If you have a ChatGPT account
21:12 and you're on, like, maybe the $20-a-month plan, that is different than
21:17 creating an OpenAI API account. So, you'd come in here and create an OpenAI
21:20 account. As you see up here, we have the option for ChatGPT login or API platform
21:25 login, which is what we're looking for here. So, now that you've created an
21:29 account with OpenAI's API, what you're going to do is come up to your dashboard
21:34 and you're going to go to your API keys. And then all you'd have to do is click
21:38 on create new key. Name this one whatever you want. And then you have a
21:42 new secret key. But keep in mind, in order for this key to work, you have to
21:45 have put in some billing information in your OpenAI account. So, throw in a few
21:49 bucks. They'll go a lot longer than you may think. And then you're going to take
21:52 that key that we just copied, come back into Nitn, and under the credential
21:56 section, we're going to click on create new credential. All I have to do now is
22:00 paste in that API key right there. And then you have the option to name this
22:02 credential if you have a ton of different ones. So I can just say, you
22:07 know, like demo on May 21st. And now I have my credential saved and named
22:11 because now we can tell the difference between our demo credential and our n8n
22:15 free OpenAI credits credential. And now hopefully we have the ability to
22:18 actually choose a model from the list. So, as you can see, we can access
22:25 ChatGPT-4o latest, 3.5 Turbo, 4, 4.1 mini, all this kind of stuff. I'm going to
22:28 choose 4.1 mini, but as you can see, you can come back and change this whenever
22:31 you want. And I'm going to keep this really simple. In the prompt, I'm just
22:35 going to type in, tell me a joke. So now, when this node executes, it's
22:39 basically just going to be sending this message to OpenAI's model, which is
22:44 GPT-4.1 mini, and it's just going to say, "Tell me a joke." And then what we're
22:48 going to get on the output panel is the actual joke. So what I can do is come up
22:52 right here and click on test step. This is going to run this node and then we
22:56 get an output over here. And as you can see both with the input and the output
23:00 we have three options of how we want to view our data. We can click on schema,
23:05 we can click on table or we can click on JSON. And this is all the exact same
23:09 data. It's just like a different way to actually look at it. I typically like to
23:13 look at schema. I think it just looks the most simple and natural language.
23:17 But what you can see here is the message that we got back from this OpenAI
23:21 model was: "Sure, here's a joke for you. Why don't scientists trust atoms?
23:25 Because they make up everything." And what's cool about schemas is that this
23:29 is all drag and drop. So now once we have this output, we could basically
23:32 just use it however we want. So if I click out of here and I open up another
23:36 node after this, and for now I'm just going to grab a set node just to show
23:39 you guys how we can drag and drop. What I would do is let's say we wanted to add
23:43 a new field, and I'm just going to call this OpenAI's response. So we're
23:49 creating a field called OpenAI's response. And as you can see, it says
23:52 drag an input field from the left to use it here. So as we know every node we
23:57 have input configuration output on the input we can basically choose which one
24:01 of these things do we want to use. I just want to reference this content
24:04 which is the actual thing that OpenAI said to us. So I would drag this from
24:08 here right into the value. And now we can see that we have what's called a
24:12 variable. So anything that's going to be wrapped in these two curly braces and
24:16 it's going to be green is a variable. And it's coming through as
24:20 $json.message.content, which is basically just something that represents whatever is
24:24 coming from the previous node, in the field called content. So we can see
24:29 right here, in $json.message.content, we have message. Within message we have
24:33 basically a subfolder called content, and that's where we access this actual
24:37 result, this real text. And you can see, if I click into this variable and make
24:41 it full screen, we have an expression, which is our $json variable, and then we
24:45 have our result, which is the actual text that we want back.
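In other words, the incoming item looks roughly like this, and the expression just drills down into it (the joke text is from the run above):

```
Incoming item (what schema, table, and JSON views all render):
{
  "message": {
    "role": "assistant",
    "content": "Sure, here's a joke for you: Why don't scientists trust atoms? Because they make up everything."
  }
}

Expression that pulls the text out:
{{ $json.message.content }}
```

So now if I go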
24:49 ahead and test this step, we can see that the only output we get is OpenAI's
24:54 response, which is the text we want. Okay, so this would basically be a
24:58 workflow because we have a trigger and then we have our nodes that are going to
25:02 execute when we hit test workflow. So if I hit test workflow, it's going to run
25:05 the whole thing. And as you can see, super visual. We saw that OpenAI was
25:09 thinking and then we come over here and we get our final output which was the
25:13 actual joke. And now let me show you one more example of how we can map our
25:16 different variables without using a manual trigger. So let's say we don't
25:19 want a manual trigger. I'm just going to delete that. But now we have no way to
25:22 run this workflow because there's no sort of trigger. So I'm just going to
25:25 come back in here and grab a chat trigger just so we can talk to this
25:29 workflow in n8n. I'm going to hook it up right here. I would just basically
25:33 drag this plus into the node that I want. So I just drag it into OpenAI. And
25:37 now these two things are connected. So if I went into the chat and I said
25:41 hello, it's going to run the whole workflow, but it's not really going to
25:44 make sense because I said hello and now it's telling me a joke about why don't
25:48 scientists trust atoms. So what I would want to do is I'd want to come into this
25:52 OpenAI node right here. And I'm just going to change the actual prompt. So
25:56 rather than asking it to tell me a joke, what I would do is I'd just delete this.
26:00 And what I want to do is I want OpenAI to go ahead and process whatever I type
26:05 in this chat, the same way it would work if we were in ChatGPT in our browser and
26:10 whatever we type OpenAI responds to. So all I would have to do to do that is I
26:14 would grab the chat input variable right here. I would drag that into the prompt
26:20 section. And now if I open this up, it's looking at the expression
26:23 $json.chatInput, because this field right here is called chatInput. And then the
26:27 result is going to be whatever we type, anytime. Even if it's different 100
26:31 times in a row, it's always going to come back as a result that's different,
26:34 but it's always going to be referenced as the same exact expression.
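So the whole prompt field now just contains this one expression, with chatInput being the field name the chat trigger outputs:

```
{{ $json.chatInput }}
```

So, just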
26:38 to actually show you guys this, let's save this workflow. And I'm going to
26:42 say, "My name is Nate. I like to eat ice cream. Make up a funny story about me."
26:53 Okay, so we'll send this off and the response that we should get will be one
26:57 that is actually about me and it's going to have some sort of element of a story
27:00 with ice cream. So let's take a look. So it said, "Sure, Nate, here's a funny
27:03 story for you." And actually, because we're setting it, it's coming through a
27:06 little weird. So let's actually click into here to look at it. Okay, so here
27:09 is the story. Let me just make this a little bigger. I can go ahead and drag
27:12 the configuration panel around by doing this. I can also make it larger or
27:16 smaller if I do this. So let's just make it small. We'll move it all the way to
27:20 the left and let's read the story. So, it said, "Sure, Nate. Here's a funny
27:24 story just for you. Once upon a time, there was a guy named Nate who loved ice
27:26 cream more than anything else in the world. One day, Nate decided to invent
27:31 the ultimate ice cream. A flavor so amazing that it would make the entire
27:34 town go crazy." So, let's skip ahead to the bottom. Basically, what happens is
27:38 from that day on, Nate's stand became the funniest spot in town. A place where
27:41 you never knew if you'd get a sweet, savory, or plain silly ice cream. And
27:45 Nate, he became the legendary ice cream wizard. That sounds awesome. So that's
27:49 exactly how this works. You can see what happened: in this OpenAI node, we
27:54 have a dynamic input, which was us talking to this thing via a chat trigger.
27:59 We drag in that variable that represents what we type into the user prompt. And
28:03 this is going to get sent to OpenAI's GPT-4.1 mini model because we
28:08 configured this node to do so. And the reason we were able to actually
28:11 successfully do that is because we put in our API key or our password for
28:17 OpenAI. And then on the right we get this output which we can look at either
28:22 in schema view, table view or JSON view. But they all represent the same data. As
28:26 you can see, this is the exact story we just read. Something I wanted to talk
28:29 about real quick that is going to be super helpful for the rest of this
28:32 course is just understanding what is JSON. And JSON stands for JavaScript
28:37 Object Notation. And it's just a way to identify things. And the reason why it's
28:40 so important to talk about is because over here, right, we all kind of know
28:43 what schema is. It's just kind of like the way something's broken down. And as
28:47 you can see, we have different drill downs over here. And we have different
28:50 things to reference. Then we all understand what a table is. It's kind of
28:53 like a table view of different objects with different things within them. Kind
28:56 of like the subfolders. And once again, you can also drag and drop from table
29:00 view as well. And then we have JSON, which also you can drag and drop. Don't
29:04 worry, you can drag and drop pretty much this whole platform, which is why it's
29:08 awesome. But this may look a little more code-like or intimidating, so I want to talk
29:13 about why it is not. So, first of all, JSON is so so important because
29:17 everything that we do is pretty much going to be built on top of JSON. Even
29:21 the workflows that you're going to download later when you'll see like,
29:24 hey, you can download this template for free. When you download that, it's going
29:28 to be a JSON file, which means the whole workflow in NN is basically represented
29:32 as JSON. And so, hopefully that doesn't confuse you guys, but what it is is it's
29:39 literally just key value pairs. So what I mean by that is like over here the key
29:44 is index and index equals zero and then we have like the role of the openi
29:48 assistant and that's the key and the value of the role is assistant. So it's
29:52 very very natural language if you really break it down. What is the content that
29:55 we're looking at? The content that we're looking at is this actual content over
29:59 here. But like I said the great thing about that is that pretty much every
30:03 single large language model, like ChatGPT or Claude 3.5, they're all trained on
30:08 JSON and they all understand it well, because it's universal. So, right
30:11 here on the left, we're looking at JSON. If I was to just copy this entire JSON,
30:17 go into ChatGPT and say, "Hey, help me understand this JSON," and then I just
30:21 basically pasted that in there, it's going to be able to tell us exactly like
30:24 which keys are in here and what those values are. So, it says this JSON
30:28 represents the response from an AI model like ChatGPT in a structured format. Let
30:32 me break it down for you. So, basically, it's going to explain what each part of
30:36 this JSON means. We can see the index is zero. That means it's the first
30:39 response. We can see the role equals assistant. We can see that the content
30:44 is the funny story about Nate. We can see all this stuff and it basically is
30:48 able to not only break it down for us, but let's say we need to make JSON. We
30:52 could say, "Hey, I have this natural language. Can you make that into JSON
30:55 for me?" Hey, can you help me make a JSON body where my name is Nate? I'm 23
31:02 years old. I went to the University of Iowa. I like to play pickleball. We'll
31:08 send that off, and basically it will be able to turn that into JSON for us. So
31:13 here you go. We can see name: Nate, age: 23, education: University of Iowa,
31:18 interests: pickleball.
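The JSON that comes back would look something like this:

```
{
  "name": "Nate",
  "age": 23,
  "education": "University of Iowa",
  "interests": ["pickleball"]
}
```

And so don't let it overwhelm you. If you ever need help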
31:22 either making JSON or understanding JSON, throw it into ChatGPT and it will do
31:26 a phenomenal job for you. And actually, just to show you guys that I'm not
31:29 lying, let's just copy this JSON that ChatGPT gave us. Go back into our workflow
31:33 and I'm just going to add a set field just to show you guys. And instead of
31:36 manual mapping, I'm just going to set some data using JSON. So I'm going to
31:41 delete this, paste in exactly what chat gave me. Hit test step. And what do we
31:44 see over here? We see the name of someone named Nate. We see their age. We
31:47 see their education. And we see their interests, in either schema, table, or JSON
31:53 view. So hopefully that gives you guys some reassurance. And just once again,
31:57 JSON's super important. And it's not even code. That is just a really quick
32:03 foundational understanding of a trigger, different nodes, action nodes, AI nodes.
32:08 You have a ton to play with. And that's kind of the most overwhelming
32:12 part about n8n: you know what you need to do in your brain, but you don't know
32:16 maybe which is the best n8n node to actually get that job done. So that's
32:19 kind of the tough part: it's a lot of just getting the reps in, understanding
32:24 what node is best for what. But I assure you, by the time your two-week trial is up,
32:27 you'll have mastered pretty much all that. All right, but something else I
32:30 want to show you guys is now what we're looking at is called the editor. So if
32:34 you look at the top middle right here, we have an editor. And this is where we
32:37 can, you know, zoom out, we can move around, we can basically edit our
32:41 workflow right here. And it moves from left to right, as you guys saw, the same
32:46 way we read from left to right. And now, because we've done a few runs and
32:49 we've tested out these different nodes, what we'll click into is executions. And
32:53 this will basically show us the different times we've ran this workflow.
32:57 And what's cool about this is it will show us the data that has moved through.
33:01 So let's say you set up a workflow that every time you get an email, it's going
33:04 to send some sort of automated response. You could come into this workflow, you
33:07 could click on executions, and you could go look at what time they happened, what
33:11 actually came through, what email was sent, all that kind of stuff. So if I go
33:15 all the way down to this third execution, we can remember that what I
33:19 did earlier was I asked this node to tell us a joke. We also had a manual
33:23 trigger rather than a chat trigger. And we can see this version of the workflow.
33:28 I could now click into this node and I could see this is when we had it
33:32 configured to tell us a joke. And we could see the actual joke it told us
33:35 which was about scientists not trusting atoms. And obviously we can still
33:39 manipulate this stuff, look at schema, look at table and do the same thing on
33:42 that left-hand side as well. So I wanted to talk about how you can import
33:46 templates into your own n8n environment because it's super cool and like I said
33:49 they're all kind of built on top of JSON. So, I'm going to go to n8n's
33:53 website and we're going to go to product and we're going to scroll down here to
33:56 templates. And you can see there's over 2,100 workflow automation templates. So,
34:00 let's scroll down. Let's say we want to do this one with cloning viral TikToks
34:04 with AI avatars. And we can use this one for free. So, I'll click on use for
34:07 free. And what's cool is we can either copy the template to clipboard or since
34:10 we're in the cloud workspace, we could just import it right away. And so, this
34:14 is logged into my other kind of my main cloud instance, but I'll still show you
34:16 guys how this works. I would click on this button. it would pull up this
34:19 screen where I just get to set up a few things. So, there's going to be
34:22 different things we'd have to connect to. So, you would basically just select
34:25 your different credentials if you already had them set up. If not, you
34:27 could create them right here. And then you would just basically be able to hit
34:32 continue. And as this loads up, you see we have the exact template right there
34:36 to play with. Or let's say you're scrolling on YouTube and you see just a
34:39 phenomenal Nate Herk YouTube video that you want to play around with. All you
34:42 have to do is go to my free school community and you will come into YouTube
34:46 resources or search for the title of the video. And let's say you wanted to play
34:49 with this shorts automation that I built. What you'll see right here is a
34:52 JSON file that you'll have to download. Once you download that, you'll go back
34:56 into n8n, create a new workflow, and then when you import that from file if
34:59 you click on this button right here, you can see the entire workflow comes in.
35:02 And then all you're going to have to do is follow the setup guide in order to
35:05 connect your own credentials to these different nodes. All right. And then the
35:08 final thing I wanted to talk about is inactive versus active workflows. So you
35:11 may have noticed that none of our executions actually counted up from
35:16 zero. And the reason is because this is counting active workflow executions. And
35:20 if we come up here to the top right, we can see that we have the ability to make
35:24 a workflow active, but it has to have a trigger node that requires activation.
35:27 So real quick, let's say that we come in here and we want a workflow to start
35:32 when we have a schedule trigger. So I would go to schedule and I would
35:35 basically say, okay, I want this to go off every single day at midnight as we
35:38 have here. And what would happen is while this workflow is inactive, it's
35:42 only actually going to run if we hit test workflow and then it runs. But if
35:47 we were to flick this on as active now, it says your schedule trigger will now
35:51 trigger executions on the schedule you have defined. These executions will not
35:55 show up immediately in the editor, but you can see them in the execution list.
35:59 So this is basically saying two things. It's saying now that we have the
36:01 schedule trigger set up to run at midnight, it's actually going to run at
36:05 midnight because it's active. If we left this inactive, it would not actually
36:09 run. And all it meant by the second part is if we were sitting in this workflow
36:13 at midnight, we wouldn't see it execute and go spinning and green and red in
36:18 live real time, but it would still show up as an execution. But if it's an
36:22 active workflow, you just don't get to see them live visually running and
36:26 spinning anymore. So that's the difference between an active workflow
36:29 and an inactive workflow. Let's say, for example,
36:33 you have a HubSpot trigger where you want this basically to fire off the
36:37 workflow whenever a new contact is created. So you'd connect to HubSpot and
36:42 you would make this workflow active so that it actually runs if a new contact's
36:46 created. If you left this inactive, even though it says it's going to trigger on
36:50 new contact, it would not actually do so unless this workflow was active. So
36:53 that's a super important thing to remember. All right. And then one last
36:57 thing I want to talk about which we were not going to dive into because we'll see
37:01 examples later is there is one more way that we can see data rather than schema
37:05 table or JSON and it's something called binary. So binary basically just means
37:11 an image or maybe a big PDF or a word doc or a PowerPoint file. It's basically
37:15 something that's not explicitly text-based. So let me show you exactly
37:19 what that might look like. What I'm going to do is I'm going to add another
37:22 trigger under this workflow and I'm going to click on tab. And even though
37:25 it doesn't say like what triggers this workflow, we can still access different
37:28 triggers. So I'm just going to type in form. And this is going to give us a
37:32 form submission that basically is an n8n native form. And you can see
37:35 there's an option at the bottom for triggers. So I'm going to click on this
37:38 trigger. Now basically what this pulls up is another configuration panel, but
37:42 obviously we don't have an input because it's a trigger, but we are going to get
37:46 an output. So anyways, let me just set up a quick example form. I'm just going
37:50 to say the title of this form is demo. The description is binary data. And now
37:55 what happens if I click on test step, it's going to pull up this form. And as
37:58 you can see, we haven't set up like any fields for people to actually submit
38:02 stuff. So the only option is to submit. But when I hit submit, you can see that
38:06 the node has been executed. And now there's actually data in here. Submitted
38:09 at with a timestamp. And then we have different information right here. So let
38:13 me just show you guys. We can add a form element. And when I'm adding a form
38:17 element, we can basically have this be, you know, date, it can be a drop down,
38:20 it can be an email, it can be a file, it can be text. So, real quick, I'm just
38:23 going to show you an example where, let's say we have a form where someone
38:27 has to submit their name. We have the option to add a placeholder or make it
38:30 required. And this isn't really the bulk of what I'm trying to show you guys. I
38:34 just want to show you binary data. But anyways, let's say we're adding another
38:37 field that's going to be a file. I'm just going to say file. And this will
38:41 also be required. And now if I go ahead and hit test step, it's going to pull up
38:45 a new form for us with a name parameter and a file parameter. So what I did is I
38:49 put my name and I put in just a YouTube short that I had published. And you can
38:53 see it's an MP4 file. So if I hit submit, we're going to get this data
38:56 pulled into n8n, as you can see in the background. Just go ahead and watch. The
39:00 form is going to actually capture this data. There you go. Form submitted. And
39:05 now what we see right here is binary data. So this is interesting, right? We
39:09 still have our schema. We still have our table. We still have our JSON, but what
39:13 this is showing us is basically: the name the person submitted was
39:17 Nate. For the file, we get some information about it, like its name,
39:21 its MIME type, and its size, but we don't actually access the file
39:25 through the table, JSON, or schema view. The only way we can access the video
39:29 file is through binary. And as you can see, if I click on view, it's my actual
39:33 video file right here.
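For reference, an n8n item that carries a file looks roughly like this under the hood: the usual json section, plus a separate binary section holding the file's metadata. Field names and values here are illustrative; the actual bytes live in a data field that n8n manages for you:

    {
      "json": { "submittedAt": "2025-01-01T00:00:00.000Z", "Name": "Nate" },
      "binary": {
        "File": {
          "fileName": "youtube-short.mp4",
          "mimeType": "video/mp4",
          "fileSize": "12.3 MB"
        }
      }
    }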
39:36 And that's all I really wanted to show you: when you're working with PDFs, images, or videos, a lot of times they're going
39:39 to come through as binary, which is a little confusing at first, but it's not
39:42 too bad. And we will cover an example later in this tutorial where we look at
39:47 a binary file and we process it. But as you can see now, if we were doing a next
39:52 node, we would have schema, table, JSON, and binary. So we're still able to work
39:55 with the binary. We're still able to reference it. But I just wanted to throw
39:58 out there, when you see binary, don't get scared. It just basically means it's
40:02 a different file type; it's not just text-based. Okay, so that's going to do
40:05 it for just kind of setting up the foundational knowledge and getting
40:09 familiar with the dashboard and the UI a little bit. And as you move into these
40:12 next tutorials, which are going to be step-by-step builds, I'm going to walk
40:15 through every single thing with you guys setting up different accounts with
40:19 Google and something called Pinecone. And we'll talk about all this stuff step
40:22 by step. But hopefully now it's going to be a lot better moving into those
40:25 sections because you've seen, you know, some of the input stuff and how you
40:29 configure nodes and just like all this terminology that you may not have been
40:33 familiar with like JSON, JavaScript variables, workflows, executions, that
40:38 sort of stuff. So, like I said, let's move into those actual step-by-step
40:41 builds. And I can assure you guys, you're going to feel a lot more
40:43 comfortable after you have built a workflow end to end. All right, we're
40:48 going to talk about data types in n8n and what those look like. It's really
40:50 important to get familiar with this before we actually start automating
40:53 things and building agents and stuff like that. So, what I'm going to do is
40:57 just pull in a set node. As you guys know, this just lets us modify, add, or
41:01 remove fields. And it's very, very simple. We basically would just click on
41:05 this to add fields. We can add the name of the field. We choose the data type,
41:08 and then we set the value, whether that's a fixed value, which we'll be
41:13 looking at here, or if we're dragging in some sort of variable from the left-hand
41:15 side. But clearly, right now, we have no data incoming. We just have a
41:20 manual trigger. So, what I'm going to do is zoom in on the actual browser so we
41:24 can examine this data on the output a bit bigger and I don't have to just keep
41:27 cutting back and forth with the editing. So, as you can see, there are five main
41:31 data types that we have access to in n8n. We have a string, which is
41:35 basically just a fancy name for text. As you can see, it's represented by
41:40 a little letter A. Then we have a number, which is represented by a pound
41:43 sign, or a hashtag, whatever you want to call it. It's pretty
41:47 self-explanatory. Then we have a boolean, which can only be true
41:50 or false; that's the only thing it can be, and it's represented by a little
41:54 checkbox. We have an array, which is just a fancy word for a list. And we'll see
41:58 exactly what that looks like. And then we have an object, which is probably the
42:01 most confusing one: it's just a big block which can have
42:05 strings in it, numbers in it, booleans in it,
42:09 arrays in it, and even nested objects within objects. So we'll
42:12 take a look at that. Let's just start off real quick with the string. So let's
42:17 say a string would be a name and that would be my name. So if I hit test step
42:21 on the right-hand side in the JSON, it comes through as a key-value pair, like we
42:26 talked about: name equals Nate. Super simple. You can tell it's a string
42:30 because right here we have two quotes around the word Nate. So that represents
42:34 a string. Or you could go to the schema and you can see that with name equals
42:38 Nate, there's the little letter A and that basically says, okay, this is a
42:41 string. As you see, it matches up right here. Cool. So that's a string. Let's
42:46 switch over to a number. Now we'll just say we're looking at age and we'll throw
42:51 in the number 50. Hit test step. And now we see age equals 50 with the pound sign
42:55 right here as the symbol in the schema view. Or if we go to JSON view, we have
43:01 the key value pair age equals 50. But now there are no double quotes around
43:05 the actual number. It's green. So that's how we know it's not a string. This is a
43:10 number. And that's where you may run into some issues: if you had
43:13 age coming through as a string, you wouldn't be able to do any
43:17 summarization or filtering, like if age is greater than 50, send it
43:21 off this way; if it's less than 50, send it that way. In order to do that type of
43:24 filtering and routing, you would need to make sure that age is actually a number
43:30 variable type or data type. Cool. So there's age. Let's go to a boolean. So
43:35 we're going to basically just say adult. And that can only be true or false. You
43:39 see, I don't have the option to type anything here. It's only going to be
43:42 false or it's only going to be true. And as you can see, when it comes through,
43:45 it'll look like a string, but there are no quotes around it and it's green. That's
43:49 how we know it's a boolean. Or we could go to schema, and we can see that
43:53 there's a checkbox rather than the letter A symbol.
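If you set all three of those fields at once (in the video each is shown one at a time), the JSON output of the set node would look like this. Notice the string gets quotes while the number and the boolean don't:

    {
      "name": "Nate",
      "age": 50,
      "adult": true
    }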
43:58 Now, we're going to move on to an array, and this one's interesting, right? So, let's just say we want to
44:01 have a list of names. If I have a list of names and I was typing in my
44:05 name and I tried to hit test step, this is where you would run into an error
44:09 because it's basically saying, okay, the field called names, which we set right
44:13 here, it's expecting to get an array, but all we got was Nate, which is
44:17 basically a string. So, to fix this error, change the type for the field
44:21 names, or you can ignore type conversions. So if we were
44:25 to come down to the options and turn on ignore
44:29 type conversions and test the step, it would just convert the field
44:32 called names to a string, because it can tell that this was a string
44:35 rather than an array. So let's turn that back off and let's actually see how we
44:39 could get this to work if we wanted to make an array. Like we know, an array
44:44 is just a fancy word for a list. And in order for us to actually tell
44:48 n8n, okay, this is a list, we have to wrap it in square brackets like
44:53 this. But we also have to wrap each item in the list in quotes. So I have to go
44:58 like this and go like that. And now this would pass through as a list of
45:02 different strings. And those are names. And so if I wanted to add another one
45:06 after the first item, I would put a comma. I put two quotes. And then inside
45:10 that I could put another name. Hit test step. And now you can see we're getting
45:14 this array that's made up of different strings and they're all going to be
45:17 different names. So I could expand that or close it out, and we could drag
45:21 in different names. And in JSON, what that looks like is we have our key and
45:25 then the square brackets, which is exactly what we typed right
45:29 here. So that's how the list is being
45:32 represented, within these square brackets.
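So the value typed into the set node, and the JSON that comes out, look roughly like this (the second name is a made-up example; on screen it's just whatever was typed in):

    {
      "names": ["Nate", "John"]
    }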
45:36 Okay, cool. So the final data type we have to talk about is an object, and this one's a little more complex. If
45:40 I were to hit test step here, it's going to tell us names expects an object, but
45:44 we got an array. So once again, you could come in here, turn on ignore type
45:47 conversions, and then it would just come through as a string,
45:50 not as an array. So that's not how we want to do it. And I
45:55 don't want to mess with manually typing out an object's structure. So what
45:58 I'm going to do is go to ChatGPT. I literally just said, give me an example
46:02 JSON object to put into n8n, and it gives me this example JSON object. I'm going
46:06 to copy that, come into the set node, and instead of manual mapping, I'm just
46:09 going to customize it with JSON and paste in the one ChatGPT just gave us.
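Reconstructed from what's shown on screen, the pasted object looks roughly like this; the project values are hypothetical, since only the keys are read out in the video:

    {
      "name": "Nate",
      "email": "nate@example.com",
      "company": "True Horizon",
      "interests": ["AI automation", "n8n", "YouTube content"],
      "project": {
        "title": "Demo Project",
        "status": "In progress",
        "deadline": "2025-06-30"
      }
    }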
46:15 When I hit test step, what we now see in the schema view is we have one
46:18 item: this is an object, and all this different stuff makes it up. We
46:24 have a string, which is the name, Nate. We have a string, which is the email,
46:27 nate@example.com. We have a string, which is the company, True Horizon. Then we have an
46:33 array of interests within this object; I could close this out or open
46:36 it up, and we have three interests: AI automation, n8n, and YouTube content. And
46:40 this is, you know, ChatGPT's long-term memory about me making this. Then we
46:45 also have an object within our object, which is called project. The interesting difference
46:51 between an object and an array is that when you have an array of interests,
46:54 every single item in that array is going to be called interests 0, interests
46:58 1, interests 2. And by the way, there are three interests, but computers start
47:01 counting from zero; that's why it says 0, 1, 2. But with an object, it
47:06 doesn't all have to be the same thing. So you can see in this project
47:11 object we have one string called title, one string called
47:15 status, and one string called deadline, and this all makes up
47:18 its own object. As you can see, if we went to table view, this is literally
47:22 just one item that's really easy to read. And you can tell that this is an
47:26 array because it goes 0, 1, 2. And you can tell that this is an object because it
47:29 has different fields in it. This is one item, one object. It's got
47:33 strings up top; it has no numbers, actually. So the date right here is
47:37 coming through as a string data type. We can tell because it's not
47:40 green, we can tell because it has double quotes around it, and we can also tell
47:43 because in schema it comes through with the letter A. But this is just how you
47:47 can see the different things that make up this object. And you can
47:52 even collapse them in JSON view. We can see interests is an array that has
47:55 three items; we could open that up. We can see project is an object because
47:59 it's wrapped in curly braces, not square brackets, as you
48:05 can see. So, there's a difference. And I know this wasn't super detailed, but it's
48:09 just something really really important to know heading into when you actually
48:13 start to build stuff out because you're probably going to get some of those
48:15 errors where you're like, you know, blank expects an object but got this or
48:19 expects an array and got this. So, just wanted to make sure I came in here and
48:23 threw that module at you guys and hopefully it'll save you some headaches
48:26 down the road. Real quick, guys, if you want to be able to download all the
48:29 resources from this video, they'll be available for free in my free school
48:32 community, which will be the link in the pinned comment. There'll be a zip file
48:36 in there that has all 23 of these workflows, as you can see, and also two
48:40 PDFs at the bottom, which are covered in the video. So, like I said, join the
48:43 Free School community. Not only does it have all of my YouTube resources, but
48:46 it's also a really quickly growing community of people who are obsessed
48:49 with AI automation and using n8n every day. All you'll have to do is search for
48:53 the title of this video using the search bar or you can click on YouTube
48:56 resources and find the post associated with this video. And then you'll have
48:59 the zip file right here to download which once again is going to have all 23
49:04 of these n8n workflow JSON files and two PDFs. And there may even be some bonus files
49:07 in here. You'll just have to join the free school community to find out. Okay,
49:11 so we talked about AI agents. We talked about AI workflows. We've gotten into
49:14 n8n and set up our account. We understand workflows, nodes, triggers,
49:19 JSON, stuff like that, and data types. Now, it's time to use all that stuff
49:21 that we've talked about and start applying it. So, we're going to head
49:24 into this next portion of this course, which is going to be about step-by-step
49:26 builds, where I'm going to walk you through every single step live, and
49:31 we'll have some pretty cool workflows set up by the end. So, let's get into
49:34 it. Today, we're going to be looking at three simple AI workflows that you can
49:37 build right now to get started learning n8n. We're going to walk through
49:40 everything step by step, including all of the credentials and the setups. So,
49:43 let's take a look at the three workflows we're going to be building today. All
49:46 right, the first one is going to be a RAG pipeline and chatbot. And if you
49:50 don't know what RAG means, don't worry. We're going to explain it all. But at a
49:52 high level, what we're doing is we're going to be using Pinecone as a vector
49:55 database. If you don't know what a vector database is, we'll break it down.
49:58 We're going to be using Google Drive. We're going to be using Google Docs. And
50:01 then something called OpenRouter, which lets us connect to a bunch of different
50:05 AI models, like OpenAI's models or Anthropic's models. The second workflow
50:08 we're going to look at is a customer support workflow that's kind of going to
50:11 be building off of the first one we just built. Because in the first workflow,
50:14 we're going to be putting data into a Pinecone vector database. And in this
50:17 one, we're going to use that data in order to respond to customer
50:21 support related emails. So, we'll already have Pinecone set up, but
50:24 we're going to set up our credentials for Gmail. And then we're also going to
50:28 be using an n8n AI agent as well as OpenRouter once again. And then finally,
50:31 we're going to be doing LinkedIn content creation. And in this one, we'll be
50:35 using an n8n AI agent and OpenRouter once again, but we'll have two new
50:38 credentials to set up, the first one being Tavily, which is going to let us
50:41 search the web. And then the second one will be Google Sheets where we're going
50:44 to store our content ideas, pull them in, and then have the content written
50:49 back to that Google sheet. So by the end of this video, you're going to have
50:51 three workflows set up and you're going to have a really good foundation to
50:55 continue to learn more about n8n. You'll already have gotten a lot of
50:57 credentials set up and understand what goes into connecting to different
51:00 services. One of the trickiest being Google. So we'll walk through that step
51:03 by step and then you'll have it configured and you'll be good. And then
51:05 from there, you'll be able to continuously build on top of these three
51:08 workflows that we're going to walk through together because there's really
51:11 no such thing as a finished product in this space. Different AI models keep
51:14 getting released and keep getting better. There's always ways to improve
51:17 your templates. And the cool thing about building workflows in n8n is that you
51:20 can make them super customized for exactly what you're looking for. So, if
51:24 this sounds good to you, let's hop into that first workflow. Okay, so for this
51:27 first workflow, we're building a RAG pipeline and chatbot. And so if that
51:31 sounds like a bunch of gibberish to you, let's quickly understand what RAG is and
51:36 what a vector database is. So RAG stands for retrieval-augmented generation. And
51:40 in the simplest terms, let's say you ask me a question and I don't actually know
51:43 the answer. I would just kind of Google it and then I would get the answer from
51:47 my phone and then I would tell you the answer. So in this case, when we're
51:50 building a RAG chatbot, we're going to be asking the chatbot questions and it's
51:53 not going to know the answer. So it's going to look inside our vector
51:56 database, find the answer, and then it's going to respond to us. And so when
52:00 we're combining the elements of RAG with a vector database, here's how it works.
52:03 So the first thing we want to talk about is actually what is a vector database.
52:07 So essentially this is what a vector database would look like. We're all
52:11 familiar with an x- and y-axis graph where you can plot points on a
52:14 two-dimensional plane. But a vector database is a multi-dimensional graph of
52:19 points. So in this case, you can see this multi-dimensional space with all
52:23 these different points or vectors. And each vector is placed based on the
52:27 actual meaning of the word or words in the vector. So over here you can see we
52:31 have wolf, dog, and cat, and they're placed close together because the meanings of
52:35 these words are all animals. Whereas over here we have apple and
52:38 banana, whose meanings are foods, or more specifically fruits. And that's
52:42 why they're placed over here together. So when we're searching through the
52:46 database, we basically vectorize a question the same way we would vectorize
52:50 any of these other points. And in this case, we were asking for a kitten. And
52:53 then that query gets placed over here near the other animals and then we're
52:56 able to say okay well we have all these results now. So what that looks like and
53:00 what we'll see when we get into NAND is we have a document that we want to
53:03 vectorize. We have to split the document up into chunks, because we can't put a
53:07 50-page PDF in as one chunk. So it gets split up, and then we run it
53:10 through something called an embeddings model, which basically just turns text
53:15 into numbers. As simple as that.
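Conceptually, each chunk ends up stored as its text plus a long list of numbers. A sketch of one stored vector follows; the field names and numbers are illustrative, real embeddings have hundreds or thousands of dimensions, and the text usually rides along as metadata:

    {
      "values": [0.021, -0.013, 0.094, 0.007],
      "metadata": {
        "text": "Warranty: all products include one-year standard coverage..."
      }
    }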
53:18 For example, say we had a document about a company: we have company data, finance data, and
53:22 marketing data. And they all get placed differently because they mean different
53:26 things, and the context of those chunks is different. And then this
53:30 visual down here is just kind of how an LLM or in this case, this agent takes
53:34 our question, turns it into its own question. We vectorize that using the
53:38 same embeddings model that we used up here to vectorize the original data. And
53:42 then because it gets placed here, it just grabs back any vectors that are
53:46 nearest, maybe like the nearest four or five, and then it brings it back in
53:49 order to respond to us. So don't want to dive too much into this. Don't want to
53:53 over complicate it, but hopefully this all makes sense. Cool. So now that we
53:57 understand that, let's actually start building this workflow. So what we're
53:59 going to do here is we are going to click on add first step because every
54:03 workflow needs a trigger that basically starts the workflow. So, I'm going to
54:08 type in Google Drive because what we're going to do is we are going to pull in a
54:12 document from our Google Drive in order to vectorize it. So, I'm going to choose
54:15 a trigger which is on changes involving a specific folder. And what we have to
54:19 do now is connect our account. As you can see, I'm already connected, but what
54:22 we're going to do is click on create new credential in order to connect our
54:25 Google Drive account. And what we have to do is go get a client ID and a
54:29 secret. So, what we want to do is click on open docs, which is going to bring us
54:33 to n8n's docs on how to set up this credential. We have a prerequisite,
54:37 which is creating a Google Cloud account. So I'm going to click on Google
54:40 Cloud account and we're going to set up a new project. Okay. So I just signed
54:43 into a new account and I'm going to set up a whole project and walk through the
54:46 credentials with you guys. You'll click up here. You'll probably have something
54:49 up here that says something like new project, and then you'll click into new project. All
54:54 we have to do now is name it. And you'll be able to start for free, so
54:56 don't worry about that yet. So I'm just going to name this one demo and I'm
55:00 going to create this new project. And now up here in the top right you're
55:02 going to see that it's kind of spinning up this project, and then we'll move
55:06 forward. Okay, so it's already done and now I can select this project. So now
55:10 you can see up here I'm in my new project called demo. I'm going to click
55:15 on these three lines in the top left and what we're going to do first is go to
55:18 APIs and services and click on enabled APIs and services. And what we want to
55:22 do is add the ones we need. And so right now all I'm going to do is add Google
55:27 Drive. And you can see it's going to come up with Google Drive API. And then
55:31 all we have to do is simply click enable. And there we go, I just enabled it.
55:35 So you can see here the status is enabled. And now we have to set up
55:37 something called our OAuth consent screen, which is basically just going to
55:43 establish that Google Drive and n8n are allowed to talk to each other
55:46 and have permissions. So right here, I'm going to click on OAuth consent screen.
55:49 We don't have one yet, so I'm going to click on get started. I'm going to give
55:53 it a name. So we're just going to call this one demo. Once again, I'm going to
55:57 add a support email. I'm going to click on next. Because I'm not using a Google
56:01 Workspace account, I'm just using a, you know, nate88@gmail.com. I'm going to have to
56:05 choose external. I'm going to click on next. For contact information, I'm
56:08 putting the same email as I used to create this whole project. Click on next
56:12 and then agree to the terms. And then we're going to create that OAuth consent
56:17 screen. Okay, so we're not done yet. The next thing we want to do is we want to
56:20 click on audience. And we're going to add ourselves as a test user. So we
56:23 could also make the app published by publishing it right here, but I'm just
56:26 going to keep it in test. And when we keep it in test mode, we have to add a
56:30 test user. So I'm going to put in that same email from before. And this is
56:32 going to be the email of the Google Drive we want to access. So I put in my
56:36 email. You can see I saved it down here. And then finally, all we need to do is
56:40 come back into here. Go to clients. And then we need to create a new client.
56:45 We're going to click on web app. We're going to name it whatever we want. Of
56:47 course, I'm just going to call this one demo once again. And now we need to
56:52 basically add a redirect URI. So if you click back into n8n, we have one right
56:57 here. We're going to copy this, go back into Google Cloud, add
57:00 a URI, and paste it right in there, then hit create. And once that's created,
57:06 it's going to give us an ID and a secret. So, all we have to do is copy
57:10 the ID, go back into n8n, and paste it right here. And then we need to go grab our
57:16 secret from Google Cloud, and then paste that right in there. And now we have a
57:19 little button that says sign in with Google. So, I'm going to open that up.
57:22 It's going to pull up a window to have you sign in. Make sure you sign in with
57:25 the same account that you just had yourself as a test user. That one. And
57:30 then you'll have to continue. And then here it's basically asking what
57:33 permissions n8n should have to your Google Drive. So I'm just going
57:36 to select all and hit continue. And then we should be good.
57:39 Connection successful, and we are now connected. And you may just want to
57:43 rename this credential so you know which email it is. So now I've
57:47 saved my credential and we should be able to access the Google Drive now. So,
57:49 what I'm going to do is I'm going to click on this list and it's going to
57:52 show me the folders that I have in Google Drive. So, that's awesome. Now,
57:56 for the sake of this video, I'm in my Google Drive and I'm going to create a
57:59 new folder. So, new folder. We're going to call this one FAQ, and create it,
58:05 because we're going to be uploading an FAQ document into it. So, here's my FAQ
58:09 folder right here. And then what I have down here is a policy and
58:14 FAQ document which looks like this. We have some store policies and then we
58:17 also have some FAQs at the bottom. So, all I'm going to do is I'm going to drag
58:21 in my policy and FAQ document into that new FAQ folder. And then we come into
58:27 n8n and click on the new folder that we just made. So, it's not here yet. I'm
58:30 just going to click on these dots and click on refresh list. Now, we should
58:35 see the FAQ folder. There it is. Click on it. We're going to click on what are
58:38 we watching this folder for. I'm going to be watching for a file created. And
58:43 then, I'm just going to hit fetch test event. And now we can see that we did in
58:47 fact get something back. So, let's make sure this is the right one. Yep. So,
58:50 there's a lot of nasty information coming through. I'm going to switch over
58:52 here on the right hand side. This is where we can see the output of every
58:55 node. I'm going to click on table and I'm just going to scroll over and there
59:00 should be a field called file name. Here it is. Name. And we have policy and FAQ
59:04 document. So, we know we have the right document in our Google Drive. Okay. So,
59:08 perfect. Every time we drop in a new file into that Google folder, it's going
59:11 to start this workflow. And now we just have to configure what happens after the
59:15 workflow starts. So, all we really want to do is pull this data into
59:20 n8n so that we can put it into our Pinecone database. So, off of this trigger,
59:24 I'm going to add a new node and I'm going to grab another Google Drive node
59:28 because what happened is basically we have the file ID and the file name, but
59:32 we don't have the contents of the file. So, we're going to do a download file
59:35 node from Google Drive. I'm going to rename this one and just call it
59:38 download file just to keep ourselves organized. We already have our
59:41 credential connected and now it's basically saying what file do you want
59:45 to download. We have the ability to choose from a list. But if we choose
59:48 from the list, it's going to be this file every time we run the workflow. And
59:52 we want to make this dynamic. So we're going to change from list to by ID. And
59:56 all we have to do now is look on the left-hand side for that
59:59 file that we just pulled in. And we're going to be looking for the ID of the
60:02 file. So I can see that I found it right down here in the spaces array because we
60:06 have the name right here, and then we have the ID right above it. So, I'm
60:10 going to drag ID and put it right there in the field. It's coming through as a
60:14 variable referencing the ID. That's basically saying:
60:17 whenever a file comes through on the Google Drive trigger, use
60:21 this variable, which will always pull in that file's ID.
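In n8n's expression syntax, that dragged-in variable is simply:

    {{ $json.id }}

meaning "the id field of the JSON coming into this node," which is what keeps it dynamic no matter which file triggered the run.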
60:25 Then I hit test step, and we see that we get the binary data
60:28 of this file over here that we could download. And this is our policy and FAQ
60:33 document. Okay. So, there's step two: we have the file downloaded in n8n. And
60:37 now it's just as simple as putting it into Pinecone. So before we do that,
60:41 let's head over to pinecone.io. Okay, so now we are in pinecone.io, which is
60:45 a vector database provider. You can get started for free. And what we're going
60:48 to do is sign up. Okay, so I just got logged in. And once you get signed up,
60:52 you should see a page similar to this. It's a get started page. And what
60:55 we want to do is you want to come down here and click on, you know, begin setup
60:59 because we need to create an index. So I'm going to click on begin setup. We
61:03 have to name our index. So you can call this whatever you want. We have to
61:08 choose a configuration, meaning an
61:11 embeddings model, which is sort of what I talked about right in here. This is
61:15 going to turn our text chunks into a vector. So what I'm going to do is I'm
61:19 going to choose text-embedding-3-small from OpenAI. It's the most
61:23 cost-effective OpenAI embedding model. So I'm going to choose that. Then I'm going to
61:26 keep scrolling down. I'm going to keep mine as serverless. I'm going to keep
61:29 AWS as the cloud provider. I'm going to keep this region. And then all I'm going
61:33 to do is hit create index. Once you create your index, it'll show up right
61:36 here. But we're not done yet. You're going to click into that index. And so I
61:39 already obviously have stuff in my vector database. You won't have this.
61:41 What I'm going to do real quick is just delete this information out of it. Okay.
61:45 So this is what yours should look like. There's nothing in here yet. We have no
61:48 name spaces and we need to get this configured. So on the left hand side, go
61:53 over here to API keys and you're going to create a new API key. Name it
61:58 whatever you want, of course. Hit create key. And then you're going to copy that
62:02 value. Okay, back in n8n, we have our API key copied. We're going to add a new
62:07 node after the download file node, type in Pinecone, and
62:10 grab a Pinecone vector store. Then we're going to select add documents
62:14 to a vector store and we need to set up our credential. So up here, you won't
62:18 have these and you're going to click on create new credential. And all we need
62:21 to do here is just an API key. We don't have to get a client ID or a secret. So
62:24 you're just going to paste in that API key. Once that's pasted in there and
62:27 you've given it a name so you know what it means, you'll hit save, and it
62:30 should go green. We're connected to Pinecone, and you can make sure
62:34 you're connected by clicking on the index; you should see the name of
62:37 the index right there that we just created. So I'm going to go ahead and
62:40 choose my index. I'm going to click on add option and we're going to be
62:43 basically adding this to a Pinecone namespace. Back in
62:48 Pinecone, if I go into my database, my index, and click in here, you can see
62:51 that we have something called namespaces. This basically lets us
62:55 put data into different folders within this one index. So if you don't specify
62:59 a namespace, it'll just come through as default, and that's going to be fine. But
63:02 we want to get into the habit of keeping our data organized. So I'm going to go
63:05 back into n8n, and I'm just going to name this namespace FAQ because that's
63:10 the type of data we're putting in. And now I'm going to click out of this node.
63:13 So you can see the next thing that we need to do is connect an embeddings
63:17 model and a document loader. So let's start with the embeddings model. I'm
63:20 going to click on the plus, and I'm going to click on Embeddings OpenAI. And
63:23 actually, one thing I left out of the Excalidraw diagram is that we also need
63:27 to go get an OpenAI key. So, as you can see, when we need to connect a
63:30 credential, you'll click on create new credential and we just need to get an
63:33 API key. So, you're going to type in OpenAI API. You'll click on this first
63:37 link here. If you don't have an account yet, you'll sign in. And then once you
63:40 sign up, you want to go to your dashboard. And then on the left-hand
63:44 side, very similar to Pinecone, you'll click on API keys. And then we're
63:47 just going to create a new key. So you can see I have a lot. We're going to
63:49 make a new one. And I'm calling everything demo, but this is going to be
63:53 demo number three. Create new secret key. And then we have our key. So we're
63:56 going to copy this, and we're going to go back into n8n and paste it right here.
63:59 We've pasted in our key and given it a name. And now we'll hit save, and we
64:03 should go green. Just keep in mind that you may need to top up your account with
64:06 a few credits in order for you to actually be able to run this model. Um,
64:10 so just keep that in mind. So then what's really important to remember is
64:13 when we set up our Pinecone index, we used the embedding model
64:17 text-embedding-3-small from OpenAI. So we have to make sure this matches right
64:20 here, or this automation is going to break. Okay, so we're good with the
64:24 embeddings and now we need to add a document loader. So I'm going to click
64:27 on this plus right here. I'm going to click on default data loader and we have
64:31 to basically tell the loader the type of data we're putting in. And
64:35 you have two options, JSON or binary. In this case, it's really easy, because we
64:39 downloaded a Google Doc, which is on the left-hand side. You can tell it's
64:42 binary because up top right here on the input, we can switch between JSON and
64:47 binary. And if we were uploading JSON, all we'd be uploading is this gibberish
64:51 information that we don't need. We want to upload the binary, which is
64:55 the actual policy and FAQ document. So, I'm just going to switch this to binary.
64:58 I'm going to click out of here. And then the last thing we need to do is add a
65:01 text splitter. This is what I was talking about back in the Excalidraw diagram: we
65:05 have to split the document into different chunks. And so that's what
65:08 we're doing here with this text splitter. I'm going to choose a
65:12 recursive character text splitter. There's three options and I won't dive
65:15 into the difference right now, but recursive character text splitter will
65:18 help us keep context of the whole document as a whole, even though we're
65:22 splitting it up. For now, chunk size is 1,000. That's just basically how
65:25 many characters I'm going to put in each chunk. And then, is there going to
65:29 be any overlap between our chunks of characters? Right now I'm just going
65:33 to leave the defaults, 1,000 and 0.
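As a sketch, those two splitter settings boil down to the following (the key names are illustrative shorthand for what the UI calls Chunk Size and Chunk Overlap):

    {
      "chunkSize": 1000,
      "chunkOverlap": 0
    }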
65:37 So that's it: you just built your first automation for a RAG pipeline. And now we're just going to click on the play
65:40 button above the Pinecone vector store node in order to see it get vectorized.
65:43 So we're going to basically see that we have four items that have left this
65:47 node. So this is basically telling us that our Google doc that we downloaded
65:51 right here. So this document got turned into four different vectors. So if I
65:55 click into the text splitter, we can see we have four different responses and
65:59 this is the contents that went into each chunk. So we can just verify this by
66:03 heading real quick into Pinecone: we can see we have a new namespace that we
66:07 created called FAQ. Number of records is four. And if we head over to the
66:10 browser, we can see that we do indeed have these four vectors. And then the
66:13 text field right here, as you can see, are the characters that were put into
66:17 each chunk. Okay, so that was the first part of this workflow, but we're going
66:21 to real quick just make sure that this actually works. So we're going to add a
66:24 RAG chatbot. Okay. So, what I'm going to do now is hit Tab, or I could also
66:27 have just clicked on the plus button right here, and I'm going to type in AI
66:31 agent, and that is what we're going to grab and pull into this workflow. So, we
66:35 have an AI agent, and let's actually just put it right over here. And
66:41 now what we need to do is we need to set up how are we actually going to talk to
66:44 this agent. And we're just going to use the default n8n chat window. So, once
66:47 again, I'm going to hit tab. I'm going to type in chat. And we have a chat
66:51 trigger. And all I'm going to do is over here, I'm going to grab the plus and I'm
66:54 going to drag it into the front of the AI agent. So basically now whenever we
66:58 hit open chat and we talk right here, the agent will read that chat message.
67:02 And we know this because if I click into the agent, we can see the user message
67:07 is looking for one in the connected chat trigger node, which we have right here
67:10 connected. Okay, so the first step with an AI agent is we need to give it a
67:14 brain. So we need to give it some sort of AI model to use. So we're going to
67:18 click on the plus right below chat model. And what we could do now is we
67:22 could set up an OpenAI chat model because we already have our API key from
67:25 OpenAI. But what I want to do is click on open router because this is going to
67:30 allow us to choose from all different chat models, not just OpenAI's. So we
67:33 could do Claude, we could do Google, we could do Perplexity. We have all these
67:36 different models in here which is going to be really cool. And in order to get
67:39 an OpenRouter account, all you have to do is go sign up and get an API key. So
67:43 you'll click on create new credential, and you can see we need an API key. So
67:47 you'll head over to openrouter.ai. You'll sign up for an account. And then all you
67:50 have to do is in the top right, you're going to click on keys. And then once
67:54 again, much the same as all the other ones, you're going to create a new
67:57 key. You're going to give it a name. You're going to click create. You have a
68:01 secret key. You're going to click copy. And then we go back into n8n and
68:04 paste it in here, give it a name, and then hit save. And we should go green.
68:07 We've connected to OpenRouter. And now we have access to any of these different
68:12 chat models. So, in this case, let's use Claude 3.5
68:19 Sonnet. And this is just to show you guys you can connect to different ones.
68:22 But anyways, now we could click on open chat. And actually, let me make sure you
68:26 guys can see it. If we say hello, it's going to use its brain, Claude 3.5 Sonnet.
68:30 And now it responded to us. Hi there. How can I help you? So, just to validate
68:34 that our information is indeed in the Pinecone vector store, we're going to
68:38 click on a tool under the agent. We're going to type in Pinecone and grab a
68:43 Pinecone vector store, and we're going to grab the account that we just
68:46 set up. So, this was the demo I just made. We're going to give it a name. So,
68:50 in this case, I'm just going to say knowledge base. We're going to give it a description:
68:59 Call this tool to access the policy and FAQ database. So, we're basically just
69:03 describing to the agent what this tool does and when to use it. And then we
69:08 have to select the index and the name space for it to look inside of. So the
69:11 index is easy. We only have one. It's called sample. But now this is important
69:14 because if you don't give it the right name space, it won't find the right
69:19 information. So we called ours FAQ. If you remember, in our Pinecone we
69:23 have a namespace called FAQ right here. So that's why we're doing FAQ. And
69:26 now it's going to be looking in the right spot. So before we can chat with
69:30 it, we have to add an embeddings model to our Pinecone vector store, the
69:33 same thing as before. We're going to grab OpenAI and use
69:37 text-embedding-3-small and the same credential you just made. And now we're going to be
69:41 good to go to chat with our RAG agent. So looking back in the document, we can
69:44 see we have some different stuff. So I'm going to ask this chatbot what the
69:48 warranty policy is. So I'm going to open up the chat window and say what is our
69:54 warranty policy? Send that off. And we should see that it's going to use its
69:57 brain as well as the vector store in order to create an answer for us because
70:00 it didn't know by itself. So there we go, it just finished up, and it said: based on the information
70:06 from our knowledge base, here's the warranty policy. We have one-year
70:10 standard coverage. We have, you know, this email for claims processes. You
70:14 must provide proof of purchase and for warranty exclusions that aren't covered,
70:18 damage due to misuse, water damage, blah blah blah. Back in the policy
70:22 documentation, we can see that that is exactly what we have in our knowledge
70:26 base for warranty policy. So, just because I don't want this video to go
70:29 too long, I'm not going to do more tests, but this is where you can get in
70:31 there and make sure it's working. One thing to keep in mind is within the
70:34 agent, we didn't give it a system prompt. And what a system prompt is is
70:38 just basically a message that tells the agent how to do its job. So what you
70:42 could do is, if you're having issues here, you could reference
70:45 the name of our tool, which is called knowledge base. You could tell the agent
70:49 in the system prompt: hey, your job is to help users answer questions about
70:54 our policy database; you have a tool called knowledge base, and you
70:58 need to use that in order to help them answer their questions. And that will
71:01 help you refine the behavior of how this agent acts. All right, so the next one
71:05 that we're doing is a customer support workflow. And as always, you have to
71:08 figure out what is the trigger for my workflow. In this case, it's going to be
71:12 triggered by a new email received. So I'm going to click on add first step.
71:16 I'm going to type in Gmail. Grab that node. And we have a trigger, which is on
71:19 message received right here. And we're going to click on that. So what we have
71:23 to do now is obviously authorize ourselves. So we're going to click on
71:26 create new credential right here. And all we have to do here is use OAuth2. So
71:30 all we have to do is click on sign in. But before we can do that, we have to
71:33 come over to our Google Cloud once again. And now we have to make sure we
71:36 enable the Gmail API. So we'll click on Gmail API. And it'll be really simple.
71:40 We'll just have to click on enable. And now we should be able to do that OAuth
71:43 connection and actually sign in. You'll click on the account that you want to
71:46 access the Gmail. You'll give it access to everything. Click continue. And then
71:50 we're going to be connected as you can see. And then you'll want to name this
71:54 credential as always. Okay. So now we're using our new credential. And what I'm
71:57 going to do is hit fetch test event. So now we are seeing an email
72:01 that I just got in this inbox, which in this case was "n8n Cloud was granted
72:05 access to your Google account," blah blah blah. So that's what we just got.
72:09 Okay. So I just sent myself a different email and I'm going to fetch that email
72:13 now from this inbox. And we can see that the snippet says what is the privacy
72:17 policy? I'm concerned about my data and passwords. And what we want to do is we
72:21 want to turn off simplify because what this button is doing is it's going to
72:24 take the content of the email and basically, you know, cut it off. So in
72:28 this case, it didn't matter, but if you're getting long emails, it's going
72:30 to cut off some of the email. So if we turn off simplify and fetch test event once
72:34 again, we're now going to get a lot more information about this email, but we're
72:37 still going to be able to access the actual content, which is right here. We
72:41 have the text: what is the privacy policy? I'm concerned about my data and
72:44 passwords, thank you. And then you can see we have other data too, like what the
72:48 subject was, who the email is coming from, what their name is, all this kind
72:52 of stuff. But the idea here is that we are going to be creating a workflow
72:56 where if someone sends an email to this inbox right here, we are going to
72:59 automatically look up the customer support policy and respond back to them
73:02 so we don't have to. Okay. So the first thing I'm actually going to do is pin
73:05 this data just so we can keep it here for testing. Which basically means
73:08 whenever we rerun this, it's not going to go look in our inbox. It's just going
73:12 to keep this email that we pulled in, which helps us for testing, right? Okay,
73:16 cool. So, the next step here is we need to have AI basically filter to see is
73:21 this email customer support related? If yes, then we're going to have a response
73:24 written. If no, we're going to do nothing because maybe the use case would
73:28 be okay, we're going to give it an access to an inbox where we're only
73:32 getting customer support emails. But sometimes maybe that's not the case. And
73:35 let's just say we wanted to create this as sort of like an inbox manager where
73:38 we can route off to different logic based on the type of email. So that's
73:41 what we're going to do here. So I'm going to click on the plus after the
73:44 Gmail trigger and I'm going to search for a text classifier node. And what
73:49 this does is it's going to use AI to read the incoming email and then
73:53 determine what type of email it is. So because we're using AI, the first thing
73:56 we have to do is connect a chat model. We already have our open router
73:59 credential set up. So I'm going to choose that. I'm going to choose the
74:01 credential, and for this one, let's just keep it with GPT-4o mini. And now
74:07 this AI node actually has AI, and I'm going to click into the text classifier.
74:09 And the first thing we see is that there's a text to classify. So all we
74:13 want to do here is we want to grab the actual content of the email. So I'm
74:17 going to scroll down. I can see here's the text, which is the email content.
74:20 We're going to drag that into this field. And now every time a new email
74:25 comes through, the text classifier is going to be able to read it because we
74:28 put in a variable which basically represents the content of the email. So
74:32 now that it has that, it still doesn't know what to classify it as or what its
74:35 options are. So we're going to click on add category. The first category is
74:39 going to be customer support. And then basically we need to give it a
74:42 description of what a customer support email could look like. So I wanted to
74:46 keep this one simple. It's pretty vague, but you could make this more detailed,
74:49 of course. I just said: any email that's related to helping out a
74:52 customer. They may be asking questions about our policies or questions about
74:56 our products or services. And what we can do is we can give it specific
74:59 examples of like here are some past customer support emails and here's what
75:02 they've looked like. And that will make this thing more accurate. But in this
75:05 case, that's all we're going to do. And then I'm going to add one more category
75:08 that's just going to be other, and for now, its description is just: any email
75:14 that is not customer support related.
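Roughly, the classifier is now configured with these two categories (descriptions paraphrased from the video):

    [
      {
        "category": "Customer Support",
        "description": "Any email related to helping out a customer. They may be asking questions about our policies, products, or services."
      },
      {
        "category": "Other",
        "description": "Any email that is not customer support related."
      }
    ]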
75:18 Okay, cool. So now when we click out of here, we can see we have two different branches coming off of this node, which
75:21 means when the text classifier decides, it's either going to send the email down this
75:24 branch or down that branch. So let's quickly hit play.
75:28 It's going to read the email using its brain. And now you can see it has
75:32 output down the customer support branch. We can also verify by clicking into
75:35 here. And we can see customer support branch has one item and other branch has
75:39 no items. And just to keep ourselves organized right now, I'm going to click
75:42 on the other branch and I'm just going to add an operation that says do nothing
75:46 just so we can see, you know, what would happen if it went this way for now. But
75:49 now is where we want to configure the logic of having an agent be able to read
75:55 the email, hit the vector database to get relevant information and then help
75:58 us write an email. So I'm going to click on the plus after the customer support
76:02 branch. I'm going to grab an AI agent. So this is going to be very similar to
76:05 the way we set up our AI agent in the previous workflow. So, it's kind of
76:08 building on top of each other. And this time, if you remember in the previous
76:12 one, we were talking to it with a connected chat trigger node. And as you
76:15 can see here, we don't have a connected chat trigger node. So, the first thing
76:19 we want to do is change that to "define below." And this is where you
76:22 would think, okay, what do we actually want the agent to read? We want it to
76:25 read the email. So, I'm going to do the exact same thing as before. I'm going to
76:29 go into the Gmail trigger node, scroll all the way down until we can find the
76:32 actual email content, which is right here, and just drag that right in.
76:35 That's all we're going to do. And then we definitely want to add a system
76:39 message for this agent. We are going to open up the system message and I'm just
76:42 going to click on expression so I can expand this up full screen. And we're
76:46 going to write a system prompt. Again, for the sake of the video, keeping this
76:49 prompt really concise, but if you want to learn more about prompting, then
76:52 definitely check out my communities linked down below as well as this video
76:56 up here and all the other tutorials on my channel. But anyways, what we said
76:59 here is we gave it an overview and instructions. The overview says you are
77:03 a customer support agent for TechHaven. Your job is to respond to incoming
77:06 emails with relevant information using your knowledge base tool. And so when we
77:10 do hook up our Pinecone vector database, we're just going to make sure
77:13 to call it knowledge base, because that's what the agent thinks it has access to.
77:17 And then for the instructions, I said your output should be friendly and use
77:20 emojis and always sign off as Mr. Helpful from TechHaven Solutions. And
77:24 then one more thing I almost forgot: we want to tell it what to
77:27 actually output. If we didn't tell it, it would probably output a
77:31 subject and a body. But we're going to reply to the
77:34 incoming email, not create a new one, so we don't need a
77:38 subject. So I'm just going to say: output only the body content of the email.
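Put together, the system prompt reads roughly like this (reconstructed from what's said in the video; exact wording on screen may differ):

    Overview:
    You are a customer support agent for TechHaven. Your job is to respond to
    incoming emails with relevant information using your knowledge base tool.

    Instructions:
    - Your output should be friendly and use emojis.
    - Always sign off as "Mr. Helpful" from TechHaven Solutions.
    - Output only the body content of the email.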
77:44 Then we'll give it a try and see how that prompt does. We may have to
77:47 come back and refine it, but for now we're good. And as you know, we have
77:51 to connect a chat model, and then we have to connect our Pinecone. So first of
77:54 all, the chat model: we're going to use OpenRouter. And just to show you guys we
77:58 can use a different type of model here, let's use something else. Okay, so
78:01 we're going to go with Google Gemini 2.0 Flash. And then we need to add the
78:04 Pinecone database. So, I'm going to click on the plus under tool and search
78:09 for Pinecone Vector Store. Grab that. And the operation is going to be
78:13 retrieving documents as a tool for an AI agent. We're going to call this
78:20 knowledgeBase, with a capital B. And we're going to once again just say: call this tool to
78:27 access policy and FAQ information. We need to set up the index as well as the
78:31 namespace. So the index is sample, and then we're going to call the namespace
78:35 FAQ, because that's what it's called in our Pinecone right here, as you can see.
78:38 And then we just need to add our embeddings model, OpenAI's
78:42 text-embedding-3-small, and we should be good to go. So we're going to
78:46 hit the play above the AI agent and it's going to be reading the email. As you
78:49 can see once again the prompt user message. It's reading the email. What is
78:52 the privacy policy? I'm concerned about my data and my passwords. Thank you. So
78:56 we're going to hit the play above the agent. We're going to watch it use its
78:59 brain. We're going to watch it call the vector store. And we got an error. Okay.
79:04 So, I'm getting this error, right? It says provider returned error. And
79:09 it's weird, because the reason it's erroring is our chat
79:12 model, yet the chat model node goes green, right? So, anyway, what I
79:16 would do here, if you're experiencing that error, is go reset your key, because it
79:19 means there's something wrong with it. But for now, I'm just going to
79:22 show you the quick fix: I can connect an OpenAI chat model real quick, and I
79:27 can run this here and we should be good to go. So now it's going to actually
79:31 write the email and output. Super weird error, but I'm honestly glad I caught
79:34 that on camera to show you guys in case you face that issue because it could be
79:37 frustrating. So we should be able to look at the actual output, which is,
79:41 "Hey there, thank you for your concern about privacy policy. At Tech Haven, we
79:45 take your data protection seriously." So then it gives us a quick summary with
79:48 data collection, data protection, cookies. If we clicked into here and
79:51 went to the privacy policy, we could see that it is in fact correct. And then it
79:55 also was friendly and used emojis like we told it to right here in the system
79:59 prompt. And finally, it signed off as Mr. Helpful from Tech Haven Solutions,
80:02 also like we told it to. So, we're almost done here. The last thing that we
80:05 want to do is we want to have it actually reply to this person that
80:09 triggered the whole workflow. So, we're going to click on the plus. We're going
80:13 to type in Gmail. Grab a Gmail node and we're going to do reply to a message.
80:17 Once we open up this node, we already know that we have it connected because
80:20 we did that earlier. We need to configure the message ID, the message
80:24 type, and the message. And so all I'm going to do is first of all, email type.
80:28 I'm going to do text. For the message ID, I'm going to go all the way down to
80:31 the Gmail trigger. And we have an ID right here. This is the ID we want to
80:35 put into the message ID so that it responds in line on Gmail rather than
80:39 creating a new thread.
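As a point of reference, here is roughly what an in-thread reply looks like against the Gmail API directly: a sketch assuming an authorized `service` built with google-api-python-client. The key idea matches the node: reference the original message and reuse its thread so Gmail keeps everything in one conversation.

```python
# Sketch: reply inside an existing Gmail thread (assumes an authorized
# `service` from google-api-python-client; all ids here are illustrative).
import base64
from email.message import EmailMessage

def reply_in_thread(service, thread_id: str, original_msg_id: str,
                    to_addr: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["Subject"] = subject if subject.startswith("Re:") else f"Re: {subject}"
    # These headers, plus threadId below, are what keep the reply in-line
    # instead of starting a new thread.
    msg["In-Reply-To"] = original_msg_id
    msg["References"] = original_msg_id
    msg.set_content(body)
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    service.users().messages().send(
        userId="me", body={"raw": raw, "threadId": thread_id}
    ).execute()
```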
80:43 And then for the message, we're going to just drag in the output from the agent
80:46 that we just had write the message. So, I'm going to grab this output and put it
80:49 right there. And now you can see this is how it's going to respond in email. The
80:55 last thing I want to do is click on Add Option, "Append n8n attribution," and
80:59 uncheck that, so the bottom of the email doesn't say it was sent by n8n. So finally, we'll hit this test
81:04 step. We will see we get a success message that the email was sent. And
81:07 I'll head over to the email to show you guys. Okay, so here it is. This is the
81:11 one that we sent off to that inbox. And then this is the one that we just got
81:13 back. As you can see, it's in the same thread and it has basically the privacy
81:19 policy outlined for us. Cool. So, that's workflow number two. Couple ways we
81:22 could make this even better. One thing we could do is we could add a node right
81:25 here. And this would be another Gmail one. And we could basically add a label
81:31 to this email. So, if I grab "add label to message," we would do the exact same
81:34 thing. We'd grab the message ID the same way we grabbed it earlier. So, now it
81:38 has the ID of the message to actually label. And then we would just
81:41 basically be able to select the label we want to give it. So in this case, we
81:44 could give it the customer support label. We hit test step, we'll get
81:48 another success message. And then in our inbox, if we refresh, we will see that
81:51 that just got labeled as customer support. So you could add on more
81:55 functionality like that. And you could also create more sections down here. So
81:59 we could have logic built out for finance emails. We could
82:02 have logic built out for all these other types of emails and plug them into
82:07 different knowledge bases as well. Okay. So the third one we're going to do is a
82:11 LinkedIn content creator workflow. So, what we're going to do here is click on
82:14 add first step, of course. And ideally, you know, in production, what this
82:17 workflow would look like is a schedule trigger, you know. So, what you could do
82:20 is basically say every day I want this thing to run at 7:00 a.m. That way, I'm
82:23 always going to have a LinkedIn post ready for me at, you know, 7:30. I'll
82:27 post it every single day. And if you wanted it to actually be automatic,
82:30 you'd have to flip this workflow from inactive to active. And now
82:34 it says your schedule trigger will now trigger executions on the schedule
82:37 you have defined. So now it would be working, but for the sake of this video,
82:40 we're going to turn that off and we are just going to be using a manual trigger
82:44 just so we can show how this works. But it's the same concept, right? It
82:48 would just start the workflow. So what we're going to do from here is we're
82:51 going to connect a Google sheet. So I'm going to grab a Google sheet node. I'm
82:55 going to click on get rows and sheet and we have to create our credential once
82:58 again. So we're going to create a new credential. We're going to be able to use
83:02 OAuth to sign in, but we're going to have to go back to Google Cloud and
83:05 make sure that we have the Google Sheets API
83:08 enabled. So, we'll come in here, we'll click enable, and now once this is good
83:12 to go, we'll be able to sign in using OAuth2. So, very similar to what we just
83:16 had to do for Gmail in that previous workflow. But now, we can sign in. So,
83:20 once again, choosing my email, allowing it to have access, and then we're
83:23 connected successfully, and then giving this a good name. And now, what we can
83:26 do is choose the document and the sheet that it's going to be pulling from. So,
83:29 I'm going to show you. I have one called LinkedIn posts, and I only have one
83:33 sheet, but let's show you the sheet real quick. So, LinkedIn posts, what we have
83:38 is a topic, a status, and a content. And we're just basically going to be pulling
83:42 in one row where the status equals to-do, and then we are going to
83:46 create the content, upload it back in right here, and then we're going to
83:50 change the status to created, so that this same row doesn't get pulled in
83:52 every day. So, how this is going to work is that we're going to create a filter.
83:56 So the first filter is going to be looking within the status column and it
84:01 has to equal to-do. And if we click on test step, we should see that we're
84:04 going to get like all of these items where there's a bunch of topics. But we
84:08 don't want that. We only want to get the first row. So at the bottom here, add
84:12 option, I'm going to say "return only first matching row." Check that on. We'll
84:15 test this again. And now we're only going to be getting that top row to
84:19 create content on. Cool.
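If you want to see the same "first row where status is to-do" logic outside of n8n, here is a small sketch using the gspread library. The sheet and column names come from this build; the header capitalization and credential setup are assumptions.

```python
# Sketch of "get rows, filter Status == 'to-do', return only the first match."
import gspread

gc = gspread.service_account()  # assumes service-account credentials on disk
ws = gc.open("LinkedIn posts").sheet1

rows = ws.get_all_records()  # one dict per row, keyed by the header row
todo = next((r for r in rows if r["Status"] == "to-do"), None)
if todo:
    print(todo["Topic"])  # the topic we'll research and write a post on
```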
84:23 So we have our first step here, which is just getting the content from the
84:27 Google Sheet. Now, what we need to do is some web search on this topic in order
84:30 to create that content. So, I'm going to add a new node: an HTTP Request. We're
84:34 going to be making a request to a specific API, and in this case, we're going to
84:38 be using Tavily's API. So, go on over to tavily.com and create a free account.
84:42 You're going to get 1,000 searches for free per month. Okay, here we are in my
84:46 account. I'm on the free researcher plan, which gives me a thousand free
84:49 credits. And right here, I'm going to add an API key. We're going to name it,
84:54 create a key, and we're going to copy this value. And you'll start to see that
84:56 when you connect to different services, you always need to
85:00 have some sort of token or API key. But anyways, we're going to grab this in
85:03 a sec. What we need to do now is go to the documentation that we see right
85:06 here. We're going to click on API Reference, and right here
85:10 is the API that we need to use in order to search the web. So,
85:14 I'm not going to really dive into everything about HTTP requests right
85:17 now; I'm just going to show you the simple way that we can get this set up.
85:21 So the first thing we're going to do: we can see that we're using an
85:25 endpoint called Tavily Search, and we can see it's a POST request, which is
85:28 different than a GET request, and we have all these different things we need
85:31 to configure, which can be confusing. So all we want to do is, on the top right,
85:35 we see this curl command. We're going to click on the copy button. We're going to
85:40 go back into our n8n, hit Import cURL, paste in the curl command, hit
85:46 Import, and now the whole node magically just basically filled itself in. So
85:50 that's really awesome. And now we can sort of break down what's going on. So
85:53 for every HTTP request, you have to have some sort of method. Typically, when
85:58 you're sending over data to a service, that's a POST request, because we're
86:01 sending over body data. In this case, we're sending over data to Tavily; it's
86:05 going to search the web and then bring data back to us. If
86:09 we were just trying to access
86:14 something like bestbuy.com and we just wanted to scrape the information, that
86:17 could just be a simple GET request because we're not sending anything over.
86:20 Then we're going to have some sort of base URL and endpoint, which is
86:24 right here. The base URL we're hitting is api.tavily.com, and the endpoint
86:30 we're hitting is /search. So back in the documentation you can see right
86:34 here we have /search, but if we were doing an extract, we would do
86:37 /extract. So that's how you can kind of see the difference with the endpoints.
86:40 And then we have a few more things to configure. The first one of course is
86:44 our authorization. So in this case, we're doing it through a header
86:46 parameter. As you can see right here, the curl command set it up. Basically
86:51 all we have to do is replace this token with our API key from Tavily. So I'm
86:56 going to go back here and copy that key into n8n. I'm going to get rid of "token,"
87:00 just make sure that you have a space after the word "Bearer," and then you can
87:03 paste in your token. And now we are connected to Tavily. But we need to
87:07 configure our request before we send it off. So right here are the parameters
87:11 within our request body. I'm not going to dive too deep into them; you can
87:13 go to the documentation if you want, but the main thing
87:17 really is the query, which is what we're searching for. We have other things
87:20 like the topic (it can be general or news), search depth, max
87:24 results, a time range, all this kind of stuff. Right now I'm
87:28 just going to leave everything here as default. We're only going to be getting
87:31 one result, with a general topic and a
87:34 basic search. But right now, if we hit test step, we should see that this is
87:37 going to work. It's going to be searching for "who is Leo Messi," and
87:40 here's the answer we get back as well as a URL. So this is an
87:45 actual website we could go to about Lionel Messi, and then some content from
87:51 that website.
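For reference, the request the node is making is equivalent to something like this in Python; the endpoint and body fields mirror Tavily's /search API as shown in its docs, and the environment-variable key is an assumption.

```python
# Sketch of the Tavily search call behind the HTTP Request node.
import os
import requests

resp = requests.post(
    "https://api.tavily.com/search",
    headers={"Authorization": f"Bearer {os.environ['TAVILY_API_KEY']}"},
    json={
        "query": "who is Leo Messi",  # swapped for our topic via an expression
        "topic": "general",           # or "news"
        "search_depth": "basic",      # or "advanced"
        "max_results": 1,
    },
    timeout=30,
)
resp.raise_for_status()
for result in resp.json()["results"]:
    print(result["url"], result["content"][:100])
```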
87:54 So we're going to change this to an expression so that we can put a variable in
87:58 here rather than a static, hard-coded "who is Leo Messi." We'll delete that
88:02 query, and all we're going to do is just pull in our topic of AI image generation.
88:06 Obviously, it's a variable right here, but this is the result. And then we're
88:09 going to test step. And this should basically pull back an article about AI
88:14 image generation. So here is a DeepAI link. We'll go to it,
88:19 and we can see this is an AI image generator. So maybe this isn't exactly
88:23 what we're looking for. What we could do is
88:28 hardcode in "search the web for," and now the query is going to say "search
88:31 the web for AI image generation." We could also come in here and say, actually,
88:34 let's get three results, not just one. And then now we could test
88:37 that step, and we're going to get a slightly different search
88:42 result: AI image generation uses text descriptions to create unique visuals.
88:45 And then now you can see we got three different URLs rather than just one.
88:49 Anyways, so that's our web search. And now that we have a web search based on
88:53 our defined topic, we just need to write that content. So I'm going to click on
88:58 the plus. I'm going to grab an AI Agent. And once again, we're not giving it the
89:01 connected chat trigger node to look at; that's nowhere to be found. We're going
89:05 to feed in the research that was just done by Tavily. So, I'm going to click on
89:10 expression to open this up. I'm going to say "Article 1:" with a colon, and
89:15 just drag in the content from article one. I'm going to say "Article 2:"
89:20 and just drag in the content from article two. And then I'm
89:25 going to say "Article 3:" and just drag in the content from the third
89:29 article. So now it's looking at all three article contents. And now we just
89:32 need to give it a system prompt on how to write a LinkedIn post. So open this
89:36 up, click on Add Option, click on System Message, and now let's give it a prompt
89:41 about turning these three articles into a LinkedIn post.
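Conceptually, this agent step is one chat-completion call: the system message carries the writing instructions, and the user message carries the three article bodies. A minimal sketch (using the OpenAI client directly; in the build this runs through OpenRouter, and the model name here is just a placeholder):

```python
# Sketch of the agent step: system prompt = instructions, user message = research.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment

def write_linkedin_post(system_prompt: str, articles: list[str]) -> str:
    user_message = "\n\n".join(
        f"Article {i}: {content}" for i, content in enumerate(articles, start=1)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the build uses Claude via OpenRouter
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```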
89:45 Okay, so I'm heading over to my custom GPT, Prompt Architect. If you want access,
89:48 you can get it for free by joining my free Skool community.
89:51 It's linked in the description, and then you can just search for Prompt
89:54 Architect and you should find the link. Anyways, real quick, it's just asking
89:58 some clarification questions. So I'm just shooting off a quick
90:01 reply, and now it should basically be generating our system prompt for us. So,
90:05 I'll check in when this is done. Okay, so here is the system prompt. I am going
90:09 to just paste it in here and I'm just going to, you know, disclaimer, this is
90:12 not perfect at all. Like, I don't even want this tool section at all because we
90:16 don't have a tool hooked up to this agent. Um, we're obviously just going to
90:19 give it a chat model real quick. So, in this case, what I'm going to do is I'm
90:22 going to use Claude 3.5 Sonnet just because I really like the way that it
90:25 writes content. So, I'm using Claude through Open Router. And now, let's give
90:28 it a run and we'll just see what the output looks like. Um, I'll just click
90:31 into here while it's running and we should see that it's going to read those
90:34 articles and then we'll get some sort of LinkedIn post back. Okay, so here it is.
90:39 The creative revolution is here and it's AI powered. Gone are the days of hiring
90:42 expensive designers or struggling with complex software. Today's entrepreneurs
90:46 can transform ideas into stunning visuals instantly using AI image
90:50 generators. So, as you can see, we have a few emojis. We have some relevant
90:53 hashtags. And then at the end, it also kind of
90:56 explains why it made this post. We could easily get rid of that: if all we want
90:59 is the content, we would just have to say so in the system prompt. But now
91:03 that we have the post that we want, all we have to do is send it back into our
91:07 Google sheet and update that it was actually made. So, we're going to grab
91:11 another Sheets node. We're going to do "update row in sheet." And this one's a
91:14 little different: it's not just grabbing stuff from a row, we're trying
91:18 to update stuff. So, we have to say what document we want and what sheet we want.
91:21 But now, it's asking us what column we want to match on. So, basically, I'm
91:25 going to choose topic. And all we have to do is go all the way back down to the
91:28 sheet. We're going to choose the topic and drag it in right here, which is
91:32 basically saying, okay, when this node gets called, whenever the topic equals
91:37 AI image generation (which is a variable, obviously; whatever
91:40 topic triggered the workflow is what's going to pop up here), we're going to
91:44 update that status. So, back in the sheets, we can see that the status is
91:47 currently to-do, and we need to change it to created in order for it to go
91:51 green. So, I'm just going to type in created, and obviously, you have to
91:53 spell this the same way you have it in your Google Sheet. And then
91:56 for the content, we're just going to drag in the output
92:00 of the AI agent, and as you can see, it's going to be spitting out the
92:03 result. And now if I hit test step and we go back into the sheet, we'll
92:06 basically watch this change. Now it's created.
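The same "match on topic, then update status and content" step, sketched with gspread; locating the header columns by name is an assumption about the sheet layout.

```python
# Sketch of "update row in sheet" matching on the Topic column.
import gspread

gc = gspread.service_account()
ws = gc.open("LinkedIn posts").sheet1

def mark_created(topic: str, post_content: str) -> None:
    cell = ws.find(topic)  # first cell whose value matches the topic
    headers = ws.row_values(1)
    ws.update_cell(cell.row, headers.index("Status") + 1, "created")
    ws.update_cell(cell.row, headers.index("Content") + 1, post_content)
```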
92:10 And now we have the content of our LinkedIn post as well, with some
92:14 justification for why it created the post like this. And like I said, you could
92:17 basically have this be a LinkedIn content-making machine where every day it's
92:21 going to run at 7:00 a.m. and give you a post. And then what you could also do
92:24 is automate this part of it, where you're basically having it create
92:27 a few new rows every day if you give it a general topic to
92:32 create topics on, and then every day you can just have more and more pumping out.
92:35 So that is going to do it for our third and final workflow. Okay, so that's
92:39 going to do it for this video. I hope that it was helpful. Obviously
92:42 we connected to a ton of different credentials and a ton of different
92:46 services. We even made an HTTP request to an API called Tavily. Now, if you found
92:49 this helpful and you liked this live step-by-step style and you're also
92:53 looking to accelerate your journey with n8n and AI automations, I would
92:56 definitely recommend to check out my paid community. The link for that is
92:58 down in the description. Okay, so hopefully those three workflows taught
93:02 you a ton about connecting to different services and setting up credentials.
93:05 Now, I'm actually going to throw in one more bonus step-by-step build, which is
93:09 actually one that I shared in my paid community a while back, and I wanted to
93:12 bring it to you guys now. So, definitely finish out this course, and if you're
93:14 still looking for some more and you like the way I teach, then feel free to check
93:17 out the paid community. The link for that's down in the description. We've
93:19 got a course in there that's even more comprehensive than what you're watching
93:22 right now on YouTube. We've also got a great community of people that are using
93:25 n8n to build AI automations every single day. So, I'd love to see you guys
93:28 in that community. But, let's move ahead and build out this bonus workflow. Hey
93:34 guys. So, today I wanted to do a step by step of an invoice workflow. And this is
93:39 because there's different ways to approach stuff like this, right? There's
93:42 the conversation of OCR. There's a conversation of maybe extracting text
93:46 from PDFs. Um, there's the conversation of if you're always getting invoices in
93:50 the exact same format, you probably don't need AI because you could use like
93:54 a code node to extract the different parameters and then push that through.
93:58 So, that's the kind of stuff we're going to talk about today. And I haven't shown
94:01 this one on YouTube. It's not like a YouTube build, and it's not an agent;
94:04 it's an AI-powered workflow. And I also wanted to talk about the
94:07 foundational elements of connecting pieces and thinking about the workflow. So,
94:11 what we're going to do first actually is we're going to hop into Excalidraw real
94:14 quick, and I'm going to create a new one. And we're just going to real quickly
94:20 wireframe out what we're doing. So, first thing we're going to draw out here
94:24 is the trigger. So, we'll make this one yellow. We'll call this the
94:30 trigger, and it's going to be a
94:41 Google Drive node. It's going to be triggering the
94:45 workflow when a new invoice gets dropped into
94:49 the folder that we're watching. So that's the trigger. From there, and like
94:53 I said, this is going to be a pretty simple workflow. From there, what we're
94:56 going to get is a PDF. Actually, let me just put Google Drive over here. So the
95:02 first thing to understand from here is: what do the invoices
95:07 look like? These are the questions that we're going to have, and the first one is
95:12 "what do the invoices look like?"
95:19 because that determines what happens next. So if
95:23 they are PDFs that happen every single time and they're always in the same
95:23 format, then we can just extract the
95:27 text from this and use a code node to pull out the information we
95:39 need per each parameter. Now, if it is a scanned invoice where we're not
95:44 really able to extract text from it or turn it into a text
95:48 doc, we'll probably have to do some OCR element. But if it's a PDF that's
95:54 generated by a computer, we can extract the text, though the invoices are not
95:57 going to come through the same every time, which is what we have in this case. I
96:00 have two example invoices. So, overall we're looking for
96:04 business name, client name, invoice number, invoice date, due date, payment
96:07 method, bank details, maybe stuff like that, right? But both of these are
96:10 formatted very differently. They all have the same information, but they're
96:15 formatted differently. So that's why we want to use an AI
96:19 information extractor node. So that's one of the main questions. The
96:22 other ones we'd think about would be: where do they go once we get
96:30 them? What's the frequency of them coming in? And then,
96:34 building off of "where do they go?", it's
96:42 also "what actions will we take?" So, does that mean we're just going to
96:46 throw it in a CRM or maybe a database,
96:49 or are we also going to send them an automated follow-up based on
96:53 the email that we extract from it and say, "Hey, we received your invoice.
96:56 Thanks."? So, what does that look like? Those are the questions we
96:59 were initially going to ask, and then that helps us pretty much plan out the
97:05 next steps. So: because we found out we want the same x number of
97:21 fields extracted, but the formats may not be
97:31 consistent, we will use an AI information extractor. That is just a long
97:43 sentence, so let me make it a little smaller.
97:46 Okay, so we have that. Then the extracted fields get updated to our Google
98:08 Sheet, which I'll just call the invoice database, and then
98:14 an internal email will be sent,
98:22 an email to the internal billing team. Okay, so this is what we've got, right?
98:30 We have our questions. We've kind of answered the questions. So, now we know
98:32 what the rest of the flow is going to look like. We already know this is not
98:35 going to be an agent. It's going to be a workflow. So, what we're going to do is
98:38 we're going to add another node right here, which is going to be, you know,
98:43 PDF comes in. And what we want to do is extract the text from that
98:51 PDF. Let's make this text smaller. So we're going to extract the text, and
98:55 we'll do this by using an extract text node. Okay, cool. Now
99:01 once we have the text extracted, what do we need to do? Let me
99:09 just move over these initial questions. So we have the text
99:13 extracted. What comes next? We need to
99:18 decide on the fields to extract. And how do we get these? We get them
99:29 from our invoice database. So let's quickly set up the invoice database. I'm
99:33 going to do this by opening up a Google Sheet, which we are just going to call
99:41 invoice DB. So now we need to figure out what we actually want to put
99:48 into our invoice DB. First of all, we're pretending
99:53 that our business is called Green Grass, so we don't need the
99:55 business information. We really just need the client information. So invoice
100:00 number will be the first thing we want. So, we're just setting up our database
100:04 here. So, invoice number. From there, we want client name, client
100:21 email, client address, and then client phone. Okay, so we have those
100:29 five things. And let's see what else we want: the total amount,
100:43 and then invoice date and due date. Okay.
100:52 So, we have these, what are these? Eight. Eight fields. And I'm just going
100:57 to change these colors so it looks visually better for us. So, here are the
101:01 fields we have and this is what we want to extract from every single invoice
101:04 that we are going to receive. Cool. So, we know we have these
101:09 eight things. So, we have our eight
101:17 fields to extract, and then they're going to be pushed to the invoice DB. And
101:23 once we have these fields, we can basically create our email. So this
101:29 is going to be an AI node that's going to info extract. So it's going to
101:34 extract the eight fields that we have over here. So we're going to send the data
101:39 into there and it's going to extract those fields. Once we extract those
101:47 fields, we probably don't need to set the data, because coming out of
101:51 this will basically be those eight fields. So every time, what's
101:57 going to happen is, actually, sorry, let me add another node here so we can
102:01 connect these. So what's going to come out of here is one item, which will be
102:05 the one PDF, and then what's coming out of here will be eight items every time.
102:09 So that's what we've got. We could also want to think about maybe if two
102:12 invoices get dropped in at the same time. How do we want to handle that loop
102:16 or just push through? But we won't worry about that yet. So we've got one item
102:19 coming in here. the node that's extracting the info will push out the
102:22 eight items and the eight items only. And then what we can do from there is
102:31 update the invoice DB. And from there, either out of
102:35 here we do two things in parallel, or it could be sequential, if that makes
102:39 sense. What else do we need to do? We know that we also need
102:43 to email the billing team. And so, what I was saying is we could either have it like this
102:51 where at the same time it branches off and it does those two things. And it
102:54 really doesn't matter the order because they're both going to happen either way.
102:57 So, for now, to keep the flow simple, we'll just do this or we're going to
103:02 email the billing team. And what's going to happen is,
103:12 because this is internal, we already know the billing email. So,
103:21 billing@example.com. This is what we're going to feed in, because we already know
103:23 the billing email. We don't have to extract this from anywhere. So we
103:30 have all the info we need. What else do we need to feed in
103:34 here? Some of these fields we'll have to filter in. So, some
103:42 of the extracted fields, because we want to say, hey, we got this invoice on this
103:50 date, to this client, and it's due on this date. So, we'll have some of the
103:53 extracted fields, we'll have the billing email, and then potentially
103:59 an email template. That's something
104:06 we can think about, or we can just prompt it. Okay. So what we want to
104:17 do here is actually this: the email has to be generated
104:22 somewhere. So before we feed into the email-the-team node, let me actually
104:26 change this. We're going to have green nodes be AI, and blue nodes
104:30 are going to be not AI. So we're going to get another AI node right here, which
104:35 is going to be craft email. So we'll connect these pieces once again.
104:43 And so I hope you guys can see this is me trying to figure
104:46 out the workflow before we get into n8n, because then we can just plug in these
104:49 pieces, right? And I didn't even think about this; obviously we would have
104:54 gotten into n8n and realized, okay, well, we need an email to actually
105:00 configure these next fields, but that's just how it works, right? So
105:05 anyways, this stuff is actually hooked up to the wrong place. We need this to
105:07 be hooked up over here to the craft email tool. So, the email template will also
105:14 be hooked up here. And the billing email will be hooked up... no, this will
105:18 still go here, because that's actually the
105:21 send email node, which is an action, and we'll be feeding in this as
105:31 well as the actual email. So the email that's written by AI will be fed in. And I
105:40 think that ends the process, right? So we'll just add a quick
105:47 yellow note over here. My colors always change, but I'm just trying to keep things
105:53 consistent. In here, we're just saying, okay, the process is going to
105:56 end now. Okay, so this is our workflow, right? New invoice PDF comes through. We
106:01 want to extract the text. We're using an extract text node, which is just going to
106:05 be a static extract-from-PDF, convert-PDF-to-text type of thing.
106:09 We'll get one item sent to an AI node to extract the eight fields we need. The
106:12 eight items will be fed into the next node which is going to update our Google
106:16 sheet. Um and I'll just also signify here this is going to be a Google sheet
106:19 because it's important to understand the integrations and like who's involved in
106:24 each process. So this is going to be AI, this is going to be AI, that's
106:29 going to be an extract node, this is going to be a Gmail node, and then we
106:33 have the process end. Cool. So this is our wireframe. Now we can get into n8n
106:38 and start building out. We can see that this is a very, very sequential flow. We
106:42 don't need an agent; we just need two AI nodes. So let us get into n8n and start
106:51 building this thing. So we know what's starting this
106:55 process, which is a trigger. So, I'm going to grab a Google Drive
106:59 trigger. We're going to do "on changes to a specific file"... no, sorry,
107:04 "changes involving a specific folder." We're going to choose
107:08 our folder, which is going to be the projects folder, and we're going to be
107:12 watching for a file created. So, we've got our ABC Tech Solutions.
107:18 I'm going to download this as a PDF real quick. So, download as a PDF. I'm going
107:23 to go to my projects folder in the drive, and I'm going to drag this guy in
107:26 here. There it is. Okay, so there's our PDF. We'll come in here and we'll hit
107:32 fetch test event. So, we should be getting our PDF. Okay, nice. We will just make sure it's the
107:39 right one. So, we should see an ABC Tech Solutions invoice. Cool. So, I'm
107:42 going to pin this data just so we have it here. So, just for reference, pinning
107:46 data, all it does is just keeps it here. So, if we were to refresh this this
107:50 page, we'll still have our pinned data, which is that PDF to play with. But if
107:53 we would have not pinned it, then we would have had to fetch test event once
107:56 again. So not a huge deal with something like this, but if you're doing
108:00 webhooks or API calls, you don't want to have to do it every time. So you can pin
108:03 that data, or the output of an AI node if you don't want to have to rerun the AI.
108:11 But anyway, so we have our PDF. And
108:16 let me just call this the invoice flow wireframe. We know next is we need to extract
108:23 text. So perfect. We'll get right into n8n. We'll click on next and we will do
108:28 an extract from file. So let's see. We want to extract from PDF. But
108:34 wait, what do we have here? We don't have any binary. So we
108:40 were on the right track here, but we forgot something: we
108:44 get the PDF file ID from the trigger, but we don't actually have the file. So what
108:49 we need to do here first is download the file, because we need the binary to then
109:04 feed into the extract step. Sorry if that's really small, but basically in order to
109:11 extract the text, we need to download the file first to get the binary, and
109:16 then we can actually do that. So, a little thing we missed in the
109:19 wireframe, but not a huge deal, right? So, we're going to extend this one off.
109:23 We're going to do a Google Drive node once again, and we're going to look at
109:27 download file. So, now we can say, okay, we're downloading a file. Um, we can
109:32 choose from a list, but this has to be dynamic because it's going to be based
109:35 on that new trigger every time. So, I'm going to do "by ID." And now on the left-
109:39 hand side, we can look for the file ID. So, I'm going to switch to schema real
109:44 quick so we can find the ID of the file. We're just going to have to go
109:47 through. So, we have a permissions ID right here. I don't think that's the
109:51 right one. We have a spaces ID. I don't think that's the right one either. We're
109:56 looking for an actual file ID. So, let's see: parents, icon link, thumbnail link...
110:06 sometimes you just have to find it, and I feel like I probably
110:13 skipped right over it. Maybe it is this one. Yeah, I think
110:21 it is this one, because we see the name is right here
110:23 and the ID is right here. So, we'll try this. We're referencing that
110:27 dynamically. We also see in here we could do a Google file conversion, which
110:31 basically says, if it's Docs, convert it to HTML; if it's Drawings,
110:35 convert it to that format; and so on. There's not a PDF one, so
110:39 we'll leave this off and we'll hit test step. So now we will see we got the
110:43 invoice. We can click view, and this is exactly what we're looking at here:
110:47 the invoice. So this is the correct one. Now since we have it in our binary data
110:50 over here (we have binary), we can extract it from the file. So
110:56 on the left are the inputs; on the right is going to be our output. We're
111:00 extracting from PDF, looking in the input binary field called "data," which
111:05 is right here. So I'll hit test step, and now we have text. So here's the actual
111:10 text of the invoice, the information we need, and this is
111:12 what we're going to pass over to the extraction.
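Outside of n8n, the download-then-extract pair boils down to something like this; `pdf_bytes` stands in for the binary "data" field coming out of the download node, and pypdf is just one library that can do the extraction.

```python
# Sketch: turn the downloaded PDF binary into plain text.
import io
from pypdf import PdfReader

def extract_invoice_text(pdf_bytes: bytes) -> str:
    reader = PdfReader(io.BytesIO(pdf_bytes))
    # Works for computer-generated PDFs; a scanned invoice would need OCR instead.
    return "\n".join(page.extract_text() or "" for page in reader.pages)
```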
111:18 So let's go back to the wireframe. We have our text extracted. Now, what we want
111:22 to do is extract the specific eight fields that we need. So, hopping back into the workflow, we
111:26 know that this is going to be an AI node. So, it's going to be an
111:28 information extractor. First of all, we know that one item is
111:34 going in here, and that's right here for us in the table, which is the actual
111:37 text of the invoice. So, we can open this up and we can see this is the text
111:40 of the invoice. We want to extract from attribute descriptions; that's what it's
111:46 looking for. So, we can add our eight attributes. We know there's going to
111:48 be eight of them, right? So, we can create eight. But, let's just first of
111:52 all go into our database to see what we want. So, the first one's invoice
111:55 number. So, I'm going to copy this over here. Invoice number. And we just have
111:59 to describe what that is. So, I'm just going to say the number of the
112:03 invoice. And this is required. We're going to make them all required. So,
112:06 number of the invoice. Then we have the client name. Paste that in
112:11 here. These should all be pretty self-explanatory: the name of the
112:17 client, and we're going to make it required. Client email: this is going
112:22 to be a little bit repetitive, but the email of the client. And let me just
112:31 quickly copy this for the next two. Client address: there's client
112:36 address, and we're going to make it required. And then what's the last one
112:48 here? Client phone. Paste that in there, which is obviously going to be the phone
112:53 number of the client. And here we can say, is this going to be a string or is
112:56 it going to be a number? I'm going to leave it right now as a string just
112:59 because over here on the left you can see the phone. We have parenthesis in
113:03 there. And maybe we want the format to come over with the parenthesis and the
113:07 little hyphen. So let's leave it as a string for now. We can always test and
113:10 we'll come back. But client phone, we're going to leave that. We have total
113:15 amount. Same reason here. I'm going to leave this one as a string because I
113:18 want to keep the dollar sign when we send it over to sheets and we'll see how
113:22 it comes over. But: the total amount of the invoice, required. What's coming next is invoice
113:31 date and due date. So, invoice date and due date, we can say these are
113:36 going to be dates. So, we're changing the data type here. They're both
113:41 required. The description for one is "the date the invoice was sent," and then we're going to
113:47 say "the date the invoice is due." So, we're going to make sure this works. If
113:49 we need to, we can get in here and make these descriptions more descriptive. But
113:53 for now, we're good. We'll see if we have any options. The default system prompt
113:56 says: "You are an expert extraction algorithm. Only extract relevant information
113:59 from the text. If you do not know the value of an attribute asked to extract,
114:03 you may omit the attribute's value." So, we'll just leave that as is, and we'll hit test step.
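Under the hood, an information extractor like this is essentially one structured-output LLM call. Here is a rough equivalent in Python: the eight keys mirror the attributes we just defined, and the JSON response format does the "one field per attribute" splitting.

```python
# Rough equivalent of the Information Extractor node (illustrative, not n8n's
# internals): one LLM call that returns the eight invoice fields as JSON.
import json
from openai import OpenAI

client = OpenAI()

FIELDS = [
    "invoice_number", "client_name", "client_email", "client_address",
    "client_phone", "total_amount", "invoice_date", "due_date",
]

def extract_fields(invoice_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the build uses Gemini 2.0 Flash
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "You are an expert extraction algorithm. From the invoice text, "
                "return a JSON object with exactly these keys: " + ", ".join(FIELDS)
            )},
            {"role": "user", "content": invoice_text},
        ],
    )
    return json.loads(response.choices[0].message.content)
```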
114:08 It's going to be looking at this text. And of course, we're using AI, so we
114:12 have to connect a chat model. So, this will also alter the performance. Right
114:15 now, we're going to go with Google Gemini 2.0 Flash. See if that's powerful
114:20 enough. I think it should be. And then, we're going to hit play once again. So,
114:22 now it's going to be extracting information using AI. And what's great
114:26 about this is that we already get everything out here in its own item. So,
114:30 it's really easy to map this now into our Google sheet. So, let's make sure
114:35 this is all correct. Um, invoice number. That looks good. I'm going to open up
114:38 the actual one. Yep. Client name ABC. Yep. Client email finance at ABC
114:44 Tech. Yep. Address and phone. We have address and phone. Perfect. We have total amount
114:53 is $14,175. Yep, $14,175. We have March 8th and March 22nd. If we go back up here, March 8th,
115:00 March 22nd. Perfect. So, that one extracted well. Okay, so we have one item coming out,
115:08 but technically there are eight properties in there. So, anyways, let's
115:12 go back to our wireframe. So, after we extracted the eight items, what
115:16 do we do next? We're going to put them into our Google Sheet database. So,
115:21 what we know is we're going to grab a Google Sheets node. We're going to do an
115:26 append row, because we're adding a row. We already have a credential
115:28 selected. So, hopefully we can choose our invoice database. It's just going to
115:32 be the first sheet, sheet one. And now what happens is we have to map the
115:36 columns. So, you can see these are draggable. We can grab each one. If I go
115:40 to schema, it's a little more apparent. So, we have these eight items. And it's
115:42 going to be really easy now that we use an information extractor because we can
115:46 just map, you know, invoice number to invoice number, client name, client
115:51 name, email, email. And it's referencing these variables because every time after
115:56 we do our information extractor, they're going to be coming out as JSON.output
115:59 and then invoice number. And then for client name, JSON.output client name. So
116:03 we have these dynamic variables that will happen every single time. And
116:06 obviously I'll show this when we do another example, but we can keep mapping
116:10 everything in. And we also did it in that order. So it's really really easy
116:14 to do. We're just dragging and dropping and we are finished. Cool. So if I hit
116:21 test step here, this is going to give us a message that basically says here are
116:25 the fields. So there are the fields; they're mapped correctly. Come
116:28 into the sheets. We now have automatically gotten this updated in our
116:34 invoice database. And that's that.
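The append step itself is one call once the eight fields exist; a gspread sketch (sheet name from this build, column order assumed to match the header row):

```python
# Sketch of the append-row step: one new row, same order as the sheet columns.
import gspread

gc = gspread.service_account()
ws = gc.open("invoice DB").sheet1

def append_invoice(fields: dict) -> None:
    ws.append_row([
        fields["invoice_number"], fields["client_name"], fields["client_email"],
        fields["client_address"], fields["client_phone"], fields["total_amount"],
        fields["invoice_date"], fields["due_date"],
    ])
```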
116:39 So let me just rename some of these nodes: this is going to be "update database,"
116:44 this is "information extractor," this is "extract from file," and this is "download binary." So now we know what's going on
116:50 in each step. And we'll go back to the wireframe real quick. What happens after
116:53 we update the database? Now we need to craft the email. And this is going to be
116:58 using AI. And what's going to go into this is some of the extracted fields and
117:01 maybe an email template. What we're going to do more realistically is just a
117:08 system prompt. So back in n8n, let's add an OpenAI "message a model" node. So
117:15 what we're going to do is we're going to choose our model to talk to. In this
117:19 case we'll go with GPT-4o mini. It should be powerful enough. And now we're going to
117:23 set up our system prompt and our user prompt. At this point, if you don't
117:27 understand the difference: the system prompt is the instructions, so we're telling this node
117:33 how to behave. First, I'm going to change this node name to create email
117:38 because that's obviously what's going on, keeping you organized. And now,
117:41 how do we explain to this node what its role is? "You are an email
117:48 expert." Let me actually just open this up. "You will receive
117:58 invoice information. Your job is to notify the internal billing
118:07 team that an invoice was received/sent." Okay. So,
118:15 honestly, I'm going to leave it at that for now. It's really simple. If we
118:18 wanted to, we can get in here and change the prompting as far as like here is the
118:22 format. Here is the way you should be doing it. One thing I like to do is I
118:25 like to say, you know, this is like your overview. And then if we need to get
118:28 more granular, we can give it different sections like output or rules or
118:32 anything like that. I'm also going to say you are an email expert
118:38 for Green Grass Corp named Greeny. Okay, so we have
118:49 Greeny from Green Grass Corp. That's our email expert that's going to email
118:53 the billing team every time this workflow happens. So that's the
118:58 overview. Now for the user prompt, think of this as when you're talking to
119:01 ChatGPT (and obviously I had ChatGPT create these invoices). When we say hello,
119:08 that's a user message, because this is an interaction and it's
119:12 going to change every time. But behind the scenes in ChatGPT, OpenAI has a
119:16 system prompt that's basically like: you're a helpful assistant, you
119:19 help users answer questions. So this window right here that we type in
119:24 is our user message, and behind the scenes, telling the node how to act, is
119:30 our system prompt. Cool. So in here I like to have dynamic information go into
119:33 the user message while I like to have static information in the actual system
119:37 prompt, with the usual exception of
119:42 giving it the current time and day, because that's an expression. So
119:46 anyways, let's change this to an expression. Let's make this full screen.
119:50 We are going to be giving it the invoice information that it needs to write the
119:55 email, because that's what it's expecting. In the system prompt, we
119:59 said you will receive invoice information. So, the first thing is going to
120:05 be invoice number. We are going to grab invoice number and just drag it in.
120:10 We're going to grab client name and just drag it in. So, it's going
120:14 to dynamically get these different things every time, right?
120:19 So, let's say maybe we don't need client email. Okay, maybe we
120:24 do. We want client email, so we'll give it that. But the billing team right now doesn't need
120:30 the address or phone. Let's just say that. But we do want them
120:36 to know the total amount of that invoice. And we definitely want them to
120:40 know the invoice date and the invoice due date. So we can now drag in these two
120:48 things. So this was us just being able to customize what the AI node sees. Just
120:53 keep in mind if we don't drag anything in here, even if it's all on the input,
120:59 the AI node doesn't see any of it. So let's hit test step and we'll see the
121:02 type of email we get. We're going to have to make some changes. I already
121:04 know because you know we have to separate everything. But what it did is
121:08 it created a subject which is new invoice received and then the invoice
121:12 number. Dear billing team, I hope this message finds you well. We've received
121:15 an invoice that requires your attention and then it lists out some information
121:18 and then it also signs off Greeny, Green Grass Corp. So, first thing we want to do: if
121:25 we go back to our wireframe, we didn't document this well enough
121:35 actually, but in order to send an email,
121:40 we need a To, we need a subject, and we need the email body.
121:48 Those are the three things we need. The To is coming from here, so
121:52 we know that. And the subject and email are going to come from the craft
121:58 email node. So, we have the To, and then actually I'm going to move this up
122:01 here. So, now we can just see where we're getting all of our pieces from.
122:04 So, the To is coming from internal knowledge. This can be hardcoded, but
122:07 the subject and email are going to be dynamic from the AI node. Cool. So what
122:13 we want to do now is add a section called "Output" and tell it how to output
122:26 information: "output the following parameters separately," and we're just
122:33 going to say subject and email. So now it should be outputting two parameters
122:37 separately, but it's not going to, because even though it says here's the
122:40 subject and then gives us a subject, and then says here's the email and
122:43 gives us an email, they're still in one field. Meaning if we hook up another
122:48 node, which would be a Gmail send email, as we have here... okay, so now this is the next
122:58 node. Here are the fields we need. But as you can see, coming out of the create
123:03 email AI node, we have this whole parameter called content, which has the
123:07 subject and the email. And we need to get these split up so that we can drag
123:10 one into the subject and one into the message. Right? So first of all, I'm just
123:14 making these expressions so we can drag stuff in later.
123:20 And our fix is to come in here and check this
123:25 switch that says "output content as JSON," and then rerun. And now
123:29 we'll get subject and email in two different
123:34 fields right here, which is awesome, because then we can open up our
123:38 send email node. We can grab our subject (it's going to be dynamic), and
123:41 we can grab our email (also dynamic). Perfect.
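In other words, once the node returns JSON instead of one text blob, the split is just a parse. A tiny sketch with made-up content:

```python
# What "output content as JSON" buys you: subject and email become two fields.
import json

content = '{"subject": "New Invoice Received - INV-001", "email": "Dear billing team, ..."}'
parsed = json.loads(content)
subject = parsed["subject"]     # maps to the Gmail node's Subject field
email_body = parsed["email"]    # maps to the Gmail node's Message field
```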
123:45 We're going to change this to text. And we're going to add an option down here:
123:49 "append n8n attribution," and turn that off, because we just don't want to see the
123:55 message at the bottom that says this was sent by n8n. And if we go back to our
124:00 wireframe, wherever that is over here, we know the address we're going
124:03 to be sending to every time, because we're
124:06 sending internally. So we can put that right in here, not as a variable. Every
124:10 time, this is going to send to billing@example.com. So this really
124:13 can be fixed; it doesn't have to be an expression. Cool. So we will now hit test step and we can
124:21 see that we got this email sent. So let me open up a new
124:26 tab. Let me go into our Gmail. I will go to the sent items, and
124:34 we will see we just got this billing email. So obviously it was a fake email,
124:37 but this is what it looks like. We've received a new invoice from ABC Tech.
124:40 Please find the details below. We got invoice number, client name, client
124:45 email, total amount, invoice date, due date. Please process this
124:48 invoice accordingly. So that's perfect. We could also, if
124:54 we wanted to, we could prompt it a little bit differently to say, you know,
124:59 like this has been updated within the database and, um, you can check it out
125:03 here. So, let's do that real quick. What we're going to do is we're going to say
125:06 because we've already updated the database, I'm going to come into our
125:09 Google Sheet. I'm going to copy this link, and we're basically going to bake this
125:19 into the prompt. So, I'm going to give it a section called
125:26 "Email": inform the billing team of the invoice, let them know we have also updated this
125:38 in the invoice database and they can view it here, and we'll just give them
125:43 this link to that Google Sheet. So every time, they'll just be able to send that
125:45 over. So I'm going to hit test step. We should see a new email over here, which
125:50 is going to include that link, I hope. So there's the link. We'll run this email
125:55 tool again to send a new email. Hop back into Google Gmail. We got a new one. And
126:01 now we can see we have this link. So you can view it here. We've already updated
126:05 this in the invoice database. We click on the link. And now we have our
126:09 database as well. So cool. Now let's say at this point we're happy with our
126:13 prompting. We're happy with the email. This is done. If we go back to the
126:17 wireframe, the email is the last node. So maybe just to make it look
126:21 consistent, we will just add something over here that just says "nothing."
126:25 And now we know that the process is done, because there's nothing to do. So this is
126:28 basically what we wireframed out. So we know that we're
126:32 happy with this process. We understand what's going on. But now let's unpin
126:37 this data real quick and let's drop in another invoice. This XYZ one is
126:41 formatted differently, but the
126:45 AI should still be able to extract all the information that we need. So I'm
126:48 going to come in here and download this one as a PDF. We have it right there. I'm going
126:54 to drop it into our Google Drive. So, we have XYZ Enterprises now. Come back into
127:00 the workflow and we'll hit fetch test event. Let's just make sure this is
127:03 the right one. So, XYZ Enterprises. Nice. And I'm just going to hit test workflow, and
127:10 we'll watch it download, extract, get the information, update the database,
127:13 create the email, send the email, and then nothing else should happen after
127:19 that. So, boom, we're done. Let's click into our email. Here we have our new
127:23 invoice received. So it updated differently; the subject is dynamic
127:27 because it was from XYZ, a different invoice number. As you remember, the ABC
127:31 one started with a different prefix, and this one starts with INV. So that's why the subject is different.
127:38 Dear billing team, we have received a new invoice from XYZ Enterprises. Please
127:42 find the details below. There's the number, the name, all this kind of
127:46 information. The total amount was $13,856. Let's go make sure that's right.
127:51 Total amount $13,856. March 8th, March 22nd once again. Is that
128:01 correct? March 8th, March 22nd. Nice. And finance@XYZ. Perfect. Okay. The
128:05 invoice has been updated in the database. You can view it here. So let's
128:08 click on that link. Nice. We got our information populated into the
128:12 spreadsheet. As you can see, it all looks correct to me as well. Our strings
128:15 are coming through nice and our dates are coming through nice. So I'm going to
128:18 leave it as is. Now, keep in mind because these are technically coming
128:23 through as strings, um, that's fine for phone, but Google Sheets automatically
128:27 made these numbers, I believe. So, if we wanted to, we could sum these up because
128:31 they're numbers. Perfect. Okay, cool. So, that's how that works,
128:36 right? That's the email. We wireframed it out. We tested it with two
128:40 different types of invoices. They weren't consistently formatted, which
128:43 means we probably couldn't have used a code node, but the AI is able to read
128:47 this and extract it. As you can see right here, we got the same eight items
128:51 extracted that we were looking for. So, that's perfect. Cool. So,
129:00 I will attach the actual flow, and I will attach
129:06 a picture of this wireframe, I suppose, in this post, and by now you
129:12 guys have already seen that, I'm sure. But yeah, I hope this was helpful: the
129:16 whole process of the way that I approached it. I know this was a 35-minute
129:20 build, so it's not the same as building something more complex, but as
129:24 far as a general workflow goes, this is a pretty solid one to get
129:27 started with. It shows elements of using AI within a simple workflow that's going to
129:35 be sequential, and it shows the way we have to reference our
129:38 variables and how we have to drag things in, and obviously the component of
129:42 wireframing in the beginning to understand at least 80 to
129:48 85% of the full flow before you get in there. So cool. Hope you guys enjoyed this
129:52 one, and I will see you guys in the community. Thanks. All right. All right,
129:55 I hope you guys enjoyed those step-by-step builds. Hopefully, right
129:57 now, you're feeling like you're in a really good spot with n8n and
130:01 everything is starting to piece together. This next video we're going to move into
130:05 is about APIs, because in order to really get more advanced with our workflows and
130:08 our AI agents, we have to understand the most important thing, which is APIs.
130:12 They let our n8n workflows connect to anything that you actually want to use.
130:15 So, it's really important to understand how to set them up. And when you understand
130:18 it, the possibilities are endless. And it's really not even that difficult. So,
130:21 let's break it down. If you're building AI agents, but you don't really
130:24 understand what an API is or how to use them, don't worry. You're not alone. I
130:27 was in that exact same spot not too long ago. I'm not a programmer. I don't know
130:31 how to code, but I've been teaching tens of thousands of people how to build real
130:34 AI systems. And what changed the game for me was when I understood how to use
130:38 APIs. So, in this video, I'm going to break it down as simple as possible, no
130:41 technical jargon, and by the end, you'll be confidently setting up API calls
130:45 within your own Agentic workflows. Let's make this easy. So the purpose of this
130:48 video is to understand how to set up your own requests so you can access any
130:52 API, because that's where the power truly comes in. Before we get into n8n and
130:56 set up a couple of live examples where I show you guys my thought process when
131:00 I'm setting up these API calls, first I thought it would just be important to
131:04 understand what an API really is. And APIs are so, so powerful because, let's
131:08 say we're building agents within n8n. Basically, we can only do things within
131:12 n8n's environment unless we use an API to access some sort of server. So
131:16 whether that's Gmail or HubSpot or Airtable, whatever we want to access
131:20 that's outside of n8n's own environment, we have to use an API call
131:24 to do so. And so that's why at the end of this video, when you completely
131:27 understand how to set up any API call you need, it's going to be a complete
131:31 game-changer for your workflows, and it's also going to unlock pretty much
131:35 unlimited possibilities. All right, now that we understand why APIs are
131:38 important, let's talk about what they actually do. So API stands for
131:42 application programming interface. And at the highest level in the most simple
131:45 terms, it's just a way for two systems to talk to each other. So n8n and
131:49 whatever other system we want to use in our automations. So keeping it limited
131:53 to us, it's our n8n AI agent and whatever we want it to interact with.
131:56 Okay, so I said we're going to make this as simple as possible. So let's do it.
132:00 What we have here is just a scenario where we go to a restaurant. So this is
132:03 us right here. And what we do is we sit down and we look at the menu and we look
132:06 at what food that the restaurant has to offer. And then when we're ready to
132:10 order, we don't talk directly to the kitchen or the chefs in the kitchen. We
132:13 talk to the waiter. So we'd basically look at the menu. We'd understand what
132:16 we want. Then we would talk to the waiter and say, "Hey, I want the chicken
132:19 parm." The waiter would then take our order and deliver it to the kitchen. And
132:23 after the kitchen sees the request that we made and they understand, okay, this
132:26 person wants chicken parm. I'm going to grab the chicken parm, not the salmon.
132:30 And then we're basically going to feed this back down the line through the
132:33 waiter all the way back to the person who ordered it in the first place. And
132:37 so that's how you can see we use an HTTP request to talk to the API endpoint and
132:42 receive the data that we want. And so now a little bit more of a technical
132:45 example of how this works in n8n. Okay, so here is our AI agent. And when it
132:48 wants to interact with a service, it first has to look at that service's API
132:53 documentation to see what is offered. Once we understand that, we'll read that
132:56 and we'll be ready to make our request and we will make that request using an
133:00 HTTP request. From there, that HTTP request will take our information and
133:04 send it to the API endpoint. The endpoint will look at what we ordered
133:07 and it will say, "Okay, this person wants this data. So, I'm going to go
133:10 grab that. I'm going to send it back, send it back to the HTTP request." And
133:14 then the HTTP request is actually what delivers us back the data that we asked
133:17 for. And we know that it was available because we had to look at the API
133:21 documentation first. So, I hope that helps. I think that looking at it
133:24 visually makes a lot more sense, especially when you hear, you know, HTTP
133:28 API endpoint, all this kind of stuff. But really, it's just going to be this
133:31 simple. So now let me show an example of what this actually looks like in n8n,
133:34 and when you would use one and when you wouldn't need to use one. So here we
133:37 have two examples where we're accessing a service called OpenWeatherMap, which
133:41 basically just lets us grab the weather data from anywhere in the world. And so
133:44 on the left, what we're doing is we're using OpenWeather's native integration
133:48 within n8n. And so what I mean by native integration is just that when we
133:51 go into n8n and we click on the plus button to add an app and we want to see,
133:54 you know, the different integrations: it has Airtable, it
133:58 has Affinity, it has Airtop, it has all these AWS things. It has a ton of native
134:02 integrations and all that a native integration is is an HTTP request but
134:07 it's just like wrapped up nicely in a UI for us to basically fill in different
134:11 parameters. And so once you realize that it really clears everything up because
134:14 the only time you actually need to use an HTTP request is if the service you
134:19 want to use is not listed in this list of all the native integrations. Anyways,
134:23 let me show you what I mean by that. So like I said, on the left we have
134:27 OpenWeatherMap's native integration. So basically what we're doing here is we're
134:29 sending over, okay, I'm using OpenWeatherMap. I'm going to put in the
134:32 latitude and the longitude of the city that I'm looking for. And as you can see
134:36 over here, what we get back is Chicago as well as a bunch of information about
134:39 the current weather in Chicago. And so if you were to fill this out, it's super
134:42 super intuitive, right? All you do is put in the lat and long, you choose your
134:46 format as far as imperial or metric, and then you get back data. And that's the
134:49 exact same thing we're doing over here, where we use an HTTP request to talk to
134:54 OpenWeather's API endpoint. And so this just looks a little more scary and
134:57 intimidating because we have to set this up ourselves. But if we zoom in, we can
135:00 see it's pretty simple. We're making a GET request to OpenWeatherMap's URL
135:05 endpoint. And then we're putting over the lat and the long, which is basically
135:09 the exact same as the one on the left. And then, as you can see, we get the
135:13 same information back about Chicago, and then some weather information about
135:16 Chicago. And so the purpose of that was just to show you guys that with these native
135:20 integrations, all we're doing is accessing some sort of API endpoint; it
135:24 just looks simpler and easier, and there's a nice user interface for us,
135:27 rather than setting everything up manually.
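For reference, the raw request behind that HTTP node would look roughly like this as a one-liner (a sketch assuming OpenWeatherMap's current-weather endpoint; the coordinates are Chicago's and the key is a placeholder):

```
curl "https://api.openweathermap.org/data/2.5/weather?lat=41.88&lon=-87.63&units=imperial&appid=YOUR_API_KEY"
```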
135:30 Okay, so hopefully that's starting to make a little more sense. Let's move down here to the way that I
135:34 think about setting up these HTTP requests, which is we're basically just
135:37 setting up filters and making selections. All we're doing is we're
135:42 saying, okay, I want to access server X. When I access server X, I need to tell
135:45 it basically, what do I want from you? So, it's the same way when you're going
135:48 to order some pizza. You have to first think about which pizza shop do I want
135:52 to call? And then once you call them, it's like, okay, I need to actually
135:54 order something. It has to be small, medium, large. It has to be pepperoni or
135:58 cheese. You have to tell it what you want and then they will send you the
136:01 data back that you asked for. So when we're setting these up, we basically
136:04 have like five main things to look out for. The first one you have to do every
136:08 time, which is a method. And the two most common are going to be a GET or a
136:11 POST. Typically, a GET is when you're just going to access an endpoint and you
136:14 don't have to send over any information. You're just going to get something back.
136:17 But a POST is when you're going to send over certain parameters and certain data
136:21 and say, okay, using this information, send me back what I'm asking for. The great
136:24 news is, and I'll show you later when we get into n8n to actually do a live
136:29 example, the documentation will always tell you whether it's a GET or a POST. Then
136:32 the next thing is the endpoint. You have to tell it which website, or, you
136:35 know which endpoint you want to actually access which URL. From there we have
136:38 three different parameters to set up. And also just realize that this one
136:43 should say body parameters. But this used to be the most confusing part to
136:46 me, but it's really not too bad at all. So, let's break it down. So, keep in
136:49 mind when we're looking at that menu, that API documentation, it's always
136:52 going to basically tell us, okay, here are your query parameters, here are your
136:55 header parameters, and here are your body parameters. So, as long as you
136:57 understand how to read the documentation, you'll be just fine. But
137:01 typically, the difference here is that when you're setting up query parameters,
137:04 this is basically just setting a few filters. So if you search pizza in
137:08 Google, you'll end up at google.com/search, which would be Google's endpoint. And then we would have a
137:14 question mark, and then q equals, and then a bunch of different filters. So as you
137:16 can see right here, the first filter is just q equals pizza. And the q, you
137:21 know, stands for query. And you don't even have to understand
137:23 that. That's just me showing you kind of like a real example of how that works.
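So a real search URL with query parameters chained on looks something like this (q is the search term; extra filters just get appended with ampersands):

```
https://www.google.com/search?q=pizza
https://www.google.com/search?q=pizza&hl=en
```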
137:26 From there, we have to set up a header parameter, which is pretty much always
137:30 going to exist. And I basically just think of header parameters as, you know,
137:34 authorizing myself. So, usually when you're doing some sort of API where you
137:37 have to pay, you have to get a unique API key and then you'll send that key.
137:40 And if you don't put your key in, then you're not going to be able to get the
137:43 data back. So, like if you're ordering a pizza and you don't give them your
137:46 credit card information, they're not going to send you a pizza. And usually
137:49 an API key is something you want to keep secret because let's say, you know, you
137:53 put 10 bucks into some sort of API that's going to create images for you.
137:56 If that key gets leaked, then anyone could use that key and could go create
138:00 images for themselves for free, but they'd be running down your credits. And
138:03 these can come in different forms, but a really common
138:06 one is, you know, you'll have your key-value pairs, where you'll put
138:10 authorization as the name, and then in the value you'll put bearer, space, your
138:15 API key. Or in the name you could just put API_key, and then in the value you'd
138:19 put your API key. But once again, the API documentation will tell you how to
138:23 configure all this.
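In practice, the header usually ends up in one of these two shapes (illustrative only; the exact name and format are whatever the service's docs specify):

```
Authorization: Bearer YOUR_API_KEY
api_key: YOUR_API_KEY
```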
138:26 And then finally, the body parameters, if you need to send something over to get something back. Let's say we're, you know, making an API
138:31 call to our CRM and we want to get back information about John. We could send
138:35 over something like name equals John. The server would then grab all records
138:39 that have name equal to John, and then it would send them back to you. So those
138:42 are basically the five main things to look out for when you're reading
138:45 through API documentation and setting up your HTTP request. But the beautiful
138:51 thing about living in 2025 is that we now have the most beautiful thing in the
138:55 world, which is a curl command. And what a curl command is, is it lets you
138:59 hit copy, and then you can basically just import that curl into n8n and it will
139:03 pretty much set up the request for you. Then at that point it really is just
139:07 putting in your own API key and tweaking a few things if you want to. So
139:11 let's take a look at this curl statement for a service called Tavily. As you can
139:14 see, the endpoint is api.tavily.com. All this basically does is
139:18 it lets you search the internet. So you can see here this curl statement tells
139:21 us pretty much everything we need to know to use this. So it's telling us
139:24 that it's going to be a post. It's showing us the API endpoint that we're
139:27 going to be accessing. It shows us how to set up our header. So that's going to
139:30 be authorization and then it's going to be bearer space API token. It's
139:34 basically just telling us that we're going to get this back in JSON format.
139:37 And then you can see all of these different key value pairs right here in
139:40 the data section. And these are basically going to be body parameters
139:43 where we can say, you know, query is "who is Leo Messi?" So that's what we'd be
139:47 searching the internet for. We have topic equals general. We have search
139:50 depth equals basic. So hopefully you can see all of these are just different
139:54 filters where we can choose, okay, do we want one max result
139:58 or do we want four, do we have a time range or do we not.
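Put together, that curl statement looks roughly like this (a sketch reconstructed from the fields described above; Tavily's docs are the source of truth for the exact names):

```
curl -X POST https://api.tavily.com/search \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "query": "who is Leo Messi?",
        "topic": "general",
        "search_depth": "basic",
        "max_results": 1
      }'
```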
140:02 So this is really, at the end of the day, basically like ordering DoorDash,
140:05 because what we would have up here is, you know, what actual restaurant do
140:09 we want to order food from? We would put in our credit card information. We would
140:12 say do we want this to be delivered to us? Do we want to pick it up? We would
140:15 basically say you know do you want a cheeseburger? No pickles. No onions.
140:18 Like what are the different things that you want to flip? Do you want a side? Do
140:21 you want fries? Or do you want salad? Like, what do you want? And so once you
140:25 get into this mindset where all I have to do is understand this documentation
140:28 and just tweak these little things to get back what I want, it makes setting
140:33 up API calls so much easier. And if another thing that kind of intimidates
140:37 you is the aspect of JSON, it shouldn't, because all it is is
140:41 key-value pairs, like we kind of talked about. You know, this is JSON right here,
140:44 and you're going to send your body parameters over as JSON, and you're also
140:48 going to get back JSON. So the more and more you use it, you're going to
140:51 recognize how easy it is to set up.
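Just so you can picture it, a JSON body is nothing more than names paired with values, like this (field names here are purely illustrative):

```
{
  "name": "John",
  "city": "Chicago"
}
```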
140:54 So anyways, I hope that made sense and broke it down pretty simply. Now that we've seen how it all works,
140:58 it's going to be really valuable to get into n8n. I'm going to open up an API
141:02 documentation and we're just going to set up a few requests together and we'll
141:06 see how it works. Okay, so here's the example. You know, I did OpenWeather's
141:09 native integration and then also OpenWeather as an HTTP request. And you can
141:13 see it was basically the exact same thing. Um, so let's say that what we
141:17 want to do is we want to use Perplexity, which, if you guys don't know what
141:20 Perplexity is, it is basically, you know, kind of similar to ChatGPT, but it
141:23 has really good like internet search and research. So let's say we wanted to use
141:28 this and hook it up to an AI agent, so it can do web search for us. But as you
141:33 can see, if I type in Perplexity, there's no native integration for
141:37 Perplexity. So that basically signals to us, okay, we can only access Perplexity
141:41 using an HTTP request. And real quick side note, if you're ever thinking to
141:45 yourself, hm, I wonder if I can have my agent interact with blank. The answer is
141:49 yes, if there's API documentation. And all you have to do typically to find out
141:53 if there's API documentation is just come in here and be like, you know,
141:56 Gmail API documentation. And then we can see the Gmail API is a RESTful API, which
142:01 means it has an API and we can use it within our automations. Anyways, getting
142:04 back to this example of setting up a Perplexity HTTP request. We have our
142:08 HTTP request right here and it's left completely blank. So, as you can see, we
142:12 have our method, we have our endpoint, we have query, header, and body
142:16 parameters, but nothing has been set up yet. So, what we need to do is we would
142:19 head over to Perplexity, as you can see right here. And at the bottom, there's
142:22 this little thing called API. So, I'm going to click on that. And this opens
142:26 up this little page. And so, what I have here is Perplexity's API. If I click on
142:30 developer docs, and then right here, I have API reference, which is integrate
142:34 the API into your workflows, which is exactly what we want to do. This page is
142:38 where people might get confused and it looks a little bit intimidating, but
142:40 hopefully this breakdown will show you how you can understand any API doc,
142:44 especially if there's a curl command. So, all I'm going to do first of all is
142:48 I'm just going to right away hit copy and make sure you know you're in curl.
142:51 If you're in Python, it's not the same. So, click on curl. I copied this curl
142:54 command, and I'm going to come back into n8n, hit import curl, and all I have to do
142:59 is paste. Click import, and basically what you're going to see is this HTTP
143:02 request is going to get basically populated for us. So now we have the
143:06 method has been changed to post. We have the correct URL which is the API
143:10 endpoint which basically is telling this node, okay, we're going to use
143:13 Perplexity's API. You can see that the curl had no query parameters. So that's
143:17 left off. It did turn on headers which is basically just having us put our API
143:21 key in there. And then of course we have the JSON body that we need to send over.
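For reference, the imported request corresponds to something roughly like this (a sketch matching the Perplexity docs shown on screen; the key is a placeholder):

```
curl -X POST https://api.perplexity.ai/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "sonar",
        "messages": [
          { "role": "system", "content": "Be precise and concise." },
          { "role": "user", "content": "How many stars are there in our galaxy?" }
        ]
      }'
```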
143:24 Okay. So at this point what I would do is now that we have this set up, we know
143:28 we just need to put in a few things of our own. So the first thing to tackle is
143:32 how do we actually authorize ourselves into Perplexity? All right.
143:35 So, I'm back in Perplexity. I'm going to go to my account and click on settings.
143:37 And then all I'm going to do is basically I need to find where I can get
143:41 my API key. So, on the left-hand side, if I go all the way down, I can see API
143:45 keys. So, I'll click on that. And all this is going to do is this shows my API
143:48 key right here. So, I'll have to click on this, hit copy, come back into n8n,
143:52 and then all I'm going to do is I'm going to just delete where this says
143:55 token, but I'm going to make sure to leave a space after bearer and hit
144:00 paste. So now this basically authorizes ourselves to use Perplexity's endpoint.
144:04 And now if we look down at the body request, we can see we have this thing
144:08 set up for us already. So if I hit test step, this is real quick going to make a
144:12 request over. It just hit Perplexity's endpoint. And as you can see, it came
144:15 back with data. And what this did is it basically searched Perplexity for how
144:20 many stars are there in our galaxy. And that's where right here we can see the
144:23 Milky Way galaxy, which is our galaxy, is estimated to contain between 100
144:27 billion and 400 billion stars, blah blah blah. So we know basically okay if we
144:30 want to change how this endpoint works what data we're going to get back this
144:34 right here is where we would change our request and if we go back into the
144:37 documentation we can see what else we have to set up. So the first thing to
144:40 notice is that there's a few things that are required and then some things that
144:43 are not. So right here we have you know authorization that's always required.
144:47 The model is always required like which perplexity model are we going to use?
144:51 The messages are always required. So this is basically a mix of a system
144:55 message and a user message. So here the example is be precise and concise and
144:58 then the user message is how many stars are there in the galaxy. So if I came
145:03 back here and said, you know, "be funny in your answer," I'm basically
145:08 telling this model how to act. And then, instead of "how many stars are there in
145:11 the galaxy," I'm just going to say, "how long do cows live?" And I'll make another
145:16 request off to Perplexity. So you can see what comes back is this longer
145:20 content. So it's not being precise and concise and it says so you're wondering
145:25 how long cows live. Well, let's move into the details. So, as you can see,
145:28 it's being funny. Okay, back into the API documentation. We have a few other
145:31 things that we could configure, but notice how these aren't required the
145:35 same way that these ones are. So, we have max tokens. We could basically put
145:39 in an integer and say how many tokens do you want to use at the maximum. We could
145:42 change the temperature, which is, you know, like how random the response would
145:45 be. And this one says, you know, it has to be between zero and two. And as we
145:48 keep scrolling down, you can see that there's a ton of other little levers
145:51 that we can just tweak a little bit to change the type of response that we get
145:58 back from Perplexity. And so once you start to read more and more API documentation,
145:58 you can understand how you're really in control of what you get back from the
146:02 server. And also you can see like, you know, sometimes you have to send over
146:05 booleans, which is basically just true or false. Sometimes you can only send
146:09 over numbers. Sometimes you can only send over strings. And sometimes it'll
146:12 tell you, you know, what this value will default to, and also what are
146:15 the only accepted values that you actually could fill out. So for example,
146:18 if we go back to this temperature setting, we can see it has to be a
146:22 number and if you don't fill that out, it's going to be 0.2. But we can also
146:26 see that if you do fill this out, it has to be between zero and two. Otherwise,
146:30 it's not going to work. Okay, cool. So that's basically how it works. We just
146:34 set up an HTTP request and we change the system prompt and we change the user
146:36 prompt and that's how we can customize this thing to work for us. And that's
146:40 really cool as a node because we can set up, you know, a workflow to pass over
146:44 some sort of variable into this request. So it searches the web for something
146:47 different every time. But now let's say we want to give our agent access to this
146:50 tool and the agent will decide what am I going to search the web for based on
146:54 what the human asks me. So it's pretty much the exact same process. We'll click
146:57 on add a tool, and we're going to add an HTTP request tool, only because Perplexity
147:01 doesn't have a native integration. And then once again you can see we have an
147:04 import curl button. So if I click on this and I just import that same curl
147:07 that we did last time once again it fills out this whole thing for us. So we
147:11 have post, we have the perplexity endpoint, we have our authorization
147:14 bearer, but notice we have to put in our token once again. And so a cool little
147:18 hack is let's say you know you're going to use perplexity a lot. Rather than
147:22 having to go grab your API key every single time, what we can do is we can
147:25 just send it over right here in the authentication tab. So let me show you
147:29 what I mean by that. If I click into authentication, I can click on generic
147:33 credential type. And then from here, I can basically choose, okay, is this a
147:37 basic auth, a bearer auth, all this kind of stuff. A lot of times it's just going to
147:39 be header auth. So that's why we know right here we can click on header auth.
147:42 And as you can see, we know that because we're sending this over as a header
147:45 parameter, and we just did this earlier and it worked. So as you can see, I have
147:50 header auths already set up. I probably already have a Perplexity one set up right
147:53 here. But I'm just going to go ahead and create a new one with you guys to show
147:56 you how this works. So I just create a new header auth, and all we have to do is
148:01 the exact same thing that we had down in the request that we just sent over which
148:04 means in the name we're just going to type in authorization with a capital A
148:08 and once again we can see in the API docs this is how you do it. So
148:11 authorization and then we can see that the value has to be capital B bearer
148:16 space API token. So I'm just going to come in here, type bearer, space, API token,
148:21 and then all I have to do is, you know, first of all, name this so we can
148:26 save it. And then if I hit save, now every single time we want to use Perplexity's
148:30 endpoint we already have our credentials saved. So that's great and then we can
148:33 turn off the headers down here because we don't need to send it over twice. So
148:37 now all we have to do is change this body request a little bit just to make
148:41 it more dynamic. So in order to make it dynamic the first thing we have to do is
148:44 change this to an expression. Now, we can see that we can basically add a
148:48 variable in here. And what we can do is we can add a variable that basically
148:52 just tells the AI model, the AI agent, here is where you're going to send over
148:57 your internet search query. And so we already know that all that that is is
149:01 the user content right here. So if I delete this, and basically if I do two
149:06 curly braces, and then within the curly braces I do a dollar sign and I type in
149:10 "from," I can grab a $fromAI function. And this $fromAI function just indicates to
149:15 the AI agent, I need to choose something to send over here. And you guys will see
149:19 an example and it will make more sense. I also did a full video breaking this
149:21 down. So if you want to see that, I'll tag it right up here. Anyways, as you
149:25 can see, all we have to really do is enter in a key. So I'm just going to do,
149:28 you know, two quotes, and within the quotes, I'm going to put in the search term.
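So the user message in the JSON body ends up looking something like this (the key name search_term is just an illustrative label; it's whatever you type between the quotes):

```
{ "role": "user", "content": "{{ $fromAI('search_term') }}" }
```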
149:32 And so now the agent will be reading this and say, okay, whenever the user
149:35 interacts with me and I know I need to search the internet, I'm just going to
149:39 fill this whole thing in with the search term. So, now that that's set up, I'm
149:43 just going to rename this request. Actually, I'm just going to call it web
149:46 search to make that super intuitive for the AI agent. And now what we're going
149:49 to do is we are going to talk to the agent and see if it can actually search
149:53 the web. Okay, so I'm asking the AI agent to search the web for the best
149:57 movies. It's going to think about it. It's going to use this tool right here
150:00 and then we'll basically get to go in there and we can see what it filled in
150:03 in that search term placeholder that we gave it. So, first of all, the answer it
150:09 gave us was IMDb, top 250, um, Rotten Tomatoes, all this kind of stuff, right?
150:12 So, that's movie information that we just got from Perplexity. And what I can
150:16 do is click into the tool and we can see in the top left it filled out search
150:20 term with best movies. And we can even see that in action. If we come down to
150:23 the body request that it sent over and we expand this, we can see on the right
150:27 hand side in the result panel, this is the JSON body that it sent over to
150:32 Perplexity and it filled it in with best movies. And then of course what we got
150:36 back was our content from Perplexity, which is, you know, here are some of the
150:40 best movies across major platforms. All right, then real quick before we wrap up
150:43 here, I just wanted to talk about some common responses that you can get from
150:47 your HTTP requests. So the rule of thumb to follow is if you get data back and
150:52 you get a 200, you're good. Sometimes you'll get a response back, but you
150:55 won't explicitly see a 200 message. But if you're getting the data back, then
150:59 you're good to go. And a quick example of this is down here we have that HTTP
151:02 request which we went over earlier in this video where we went to open weather
151:06 maps API and you can see down here we got code 200 and there's data coming
151:11 back, and 200 is good, that's a success code. Now, if you get a response in the
151:15 400s, that means that you probably set up the request wrong. So 400, bad request:
151:19 that could mean that your JSON's invalid. It could just mean that you
151:22 have like an extra quotation mark or you have you know an extra comma something
151:26 as silly as that. So let me show a quick example of that. We're going to test
151:29 this workflow, and what I'm doing is I'm trying to send over a query to Tavily. And
151:32 you can see what we get is an error that says JSON parameter needs to be valid
151:36 JSON. And this would be a 400 error. And the issue here is if we go into the JSON
151:40 body that we're trying to send over, you can see in the result panel, we're
151:44 trying to send over a query that has basically two sets of quotation marks.
151:47 If you can see that. But here's the great news about JSON. It is so
151:51 universally used and it's been around for so long that we could basically just
151:55 copy the result over to ChatGPT, paste it in here, and say, I'm getting an error
151:59 message that says JSON parameter needs to be valid JSON. What's wrong with my
152:02 JSON? And as you can see, it says the issue with your JSON is the use of
152:06 double quotes around the string value in this line. So now we'd be able to go fix
152:09 that. And if we go back into the workflow, we take away these double
152:13 quotes right here. Test the step again. Now you can see it's spinning and it's
152:16 going to work. And we should get back some information about pineapples on
152:19 pizza.
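To make that concrete, this is roughly the kind of thing that trips a 400, a doubled set of quotes (illustrative values):

```
{ "query": ""pineapple on pizza"" }   <-- invalid: extra quotes around the value
{ "query": "pineapple on pizza" }     <-- valid
```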
152:23 Another common error you could run into is a 401, meaning unauthorized. This typically just means your API key is wrong. You could also get a 403,
152:26 which is forbidden. That just means that maybe your account doesn't have access
152:30 to this data that you're requesting or something like that. And then another
152:33 one you could get is a 404, which sometimes you'll get that if you type in
152:36 a URL that doesn't exist. It just means this doesn't exist. We can't find it.
152:39 And a lot of times, the actual API documentation that you
152:43 want to set up a request to, like this example with Tavily, will show you
152:47 what typical responses could look like. So here's one where, you know, we're using
152:50 Tavily to search for who is Leo Messi. This was an example we looked at
152:54 earlier. And with a 200 response, we are getting back like a query, an answer
152:58 results, stuff like that. We could also see we could get a 400 which would be
153:02 for bad request, you know, invalid topic. We could have a 401 which means
153:06 invalid API key. We could get all these other ones like 429 or 432, but in general,
153:12 the 400s are bad. And then even worse is a 500. And this just basically means
153:15 something's wrong with the server. Maybe it doesn't exist anymore or there's a
153:19 bug on the server side. But the good news about a 500 is it's not your fault.
153:22 You didn't set up the request wrong. It just means something's wrong with the
153:26 server. And it's really important to know that, because if you think you did
153:28 something wrong, but it's really not your fault at all, you may be banging
153:32 your head against the wall for hours.
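As a quick cheat sheet, the common codes break down like this (standard HTTP semantics; individual APIs can layer their own codes on top):

```
200  OK                 -> success, you got your data
400  Bad Request        -> the request is malformed (often invalid JSON)
401  Unauthorized       -> missing or wrong API key
403  Forbidden          -> authorized, but not allowed to see this data
404  Not Found          -> the endpoint or URL doesn't exist
429  Too Many Requests  -> you're being rate limited
500  Server Error       -> the problem is on their end, not yours
```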
153:35 So anyways, what I wanted to highlight here is there's never just a one-size-fits-all, where I know how to set up
153:39 this one API call so I can just set up every single other API call the exact
153:42 same way. The key is to really understand: how do you read the API
153:45 documentation? How do you set up your body parameters and your different
153:48 header parameters? And then if you start to run into issues, the key is
153:52 understanding and actually reading the error message that you're getting back
153:55 and adjusting from there. All right, so that's going to do it for this video.
153:58 Hopefully this has left you feeling a lot more comfortable with diving into
154:02 API documentation, walking through it just step by step using those curl
154:06 commands and really just understanding all I'm doing is I'm setting up filters
154:09 and levers here. I don't have to get super confused. It's really not that
154:12 technical. I'm pretty much in complete control over what my API is going to
154:16 send me back. The same way I'm in complete control when I'm, you know,
154:19 ordering something on Door Dash or ordering something on Amazon, whatever
154:23 it is. Hopefully by now the concept of APIs and HTTP requests makes a lot more
154:27 sense. But really, just to drive it home, what we're going to do is hop into
154:30 some actual setups in n8n of connecting to some different popular APIs and walk
154:35 through a few more step by steps just to really make sure that we understand the
154:38 differences that can come with different API documentation and how you read it
154:41 and how you set up stuff like your credentials and the body requests. So,
154:45 let's move into this next part, which I think is going to be super valuable to
154:49 see different API calls in action. Okay, so in n8n, when we're working with a
154:52 large language model, whether that's an AI agent or just like an AI node, what
154:57 happens is we can only access the information that is in the large
155:00 language model's training data. And a lot of times that's not going to be super
155:04 up-to-date and real time. So what we want to do is access different APIs that
155:08 let us search the web or do real-time search. And what we saw earlier in that
155:13 third step-by-step workflow was we used a tool called Tavily, and we accessed it
155:17 through an HTTP request node which as you guys know looks like this. And we
155:21 were able to use this to communicate with Tavily's API server. So if we ever
155:25 want to access real-time information or do research on certain search terms, we
155:30 have to use some sort of API to do that. So, like I said, we talked about Tavi,
155:34 but in this video, I'm going to help you guys set up Perplexity, which if you
155:38 don't know what it is, it's kind of like ChatGPT, but it's really, really good
155:42 for web search and in-depth research. And it has that same sort of like, you
155:46 know, chat interface as ChatGPT, but what you also have is access to the API. So,
155:50 if I click on API, we can see this little screen, but what we want to go to
155:54 is the developer docs. And in the developer docs, what we're looking for
155:57 is the API reference. We can also click on the quick start guide right here
156:00 which just shows you how you can set up your API key and get all that kind of
156:03 stuff. So that's exactly what we're going to do is set up an API call to
156:08 Perplexity. So I'm going to click on API reference and what we see here is the
156:12 endpoint to access Perplexity's API. And so what I'm going to do is just grab
156:15 this curl command from the right-hand side, go back into our n8n, and I'm just
156:19 going to import the curl right into here. And then all we have to do from
156:22 there is basically configure what we want to research and put in our own API
156:27 key. So there we go. We have our node pretty much configured. And now the
156:30 first thing we see we need to set up is our authorization API key. And what we
156:34 could do is set this up in here as a generic credential type and save it. But
156:37 right now we're just going to keep things as simple as possible where we
156:40 imported the curl. And now I'm just going to show you where to plug in
156:43 little things. So we have to go back over to Perplexity and we need to go get
156:46 an API key. So I'm going to come over here to the left and I'm going to click
156:49 on my settings. And hopefully in here we're able to find where our API key
156:53 lives. Now we can see in the bottom left over here we have API keys. And what I'm
156:56 going to do is come in here and just create a new secret key. And we just got
156:59 a new one generated. So I'm just going to click on this button, click on copy,
157:03 and all we have to do is replace the word right here that says token. So I'm
157:07 just going to delete that. I'm going to make sure to leave a space after the
157:10 word bearer. And I'm going to paste in my Perplexity API key. So now we should
157:14 be connected. And now what we need to do is we need to set up the actual body
157:17 request. So if I go back into the documentation, we can see this is
157:21 basically what we're sending over. So that first thing is a model, which is
157:24 the name of the model that will complete your prompt. And if we wanted to look at
157:27 different models, we could click into here and look at other supported models
157:31 from Perplexity. So it took us to the screen. We click on models, and we can
157:35 see we have Sonar Pro or Sonar. We have Sonar Deep Research. We have some
157:38 reasoning models as well. But just to keep things simple, I'm going to stick
157:41 with the default model right now, which is just sonar. Then we have an object
157:45 that we're sending over, which is messages. And within the messages
157:49 object, we have a few things. So first of all we're sending over content which
157:53 is the contents of the message in this turn of the conversation. It can be a string
157:57 or an array of parts. And then we have a role which is going to be the role of
158:00 the speaker in the conversation. And we have available options system user or
158:05 assistant. So what you can see in our request is that we're sending over a
158:08 system message as well as a user message. And the system message is
158:11 basically the instructions for how this AI model on perplexity should act. And
158:15 then the user message is our dynamic search query that is going to change
158:19 every time. And if we go back into the documentation, we can see that there are
158:22 a few other things we could add, but we don't have to. We could tell Perplexity
158:26 what is the max tokens we want to use, what is the temperature we want to use.
158:30 We could have it only search for things in the past week or day. So, this
158:34 documentation is basically going to be all the filters and settings that you
158:38 have access to in order to customize the type of results that you want to get
158:41 back. But, like I said, keeping this one really simple. We just want to search
158:45 the web. All I'm going to do is keep it as is. And if I disconnect this real
158:48 quick and we come in and test step, it's basically going to be searching
158:51 perplexity for how many stars are there in our galaxy. And then the AI model of
158:56 sonar is the one that's going to grab all of these five sources and it's going
159:00 to answer us. And right here it says the Milky Way galaxy, which is our home
159:04 galaxy, is estimated to contain between 100 billion and 400 billion stars. This
159:08 range is due to the difficulty blah blah blah blah blah. So that's basically how
159:11 it was able to answer us because it used an AI model called sonar. So now if we
159:15 wanted to make this search a little bit more dynamic, we could basically plug
159:19 this in and you can see in here what I'm doing is I'm just setting a search term.
159:23 So let's test this step. What happens is the output of this node is a research
159:27 term. Then we could reference that variable of research term right in here
159:31 in our actual body request to Perplexity. So I would delete this fixed
159:35 message, which is "how many stars are there in our galaxy." And all I would do
159:38 is drag in research term from the left, put it in between the two quotes,
159:43 and now it's coming over dynamically as "Anthropic latest developments."
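Once the variable is dragged in, the user message in the body reads something like this (assuming the incoming field is called research_term; the exact name depends on your previous node's output):

```
{ "role": "user", "content": "{{ $json.research_term }}" }
```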
159:47 And all I'd have to do now is hit test step. And we will get an answer from Perplexity
159:51 about Anthropic's recent developments. There we go. It just came back. We can
159:54 see there's five different sources right here. It went to Anthropic, it went to
159:57 YouTube, it went to TechCrunch. And what we get is that today, May 22nd, so real
160:03 time information, Claude Opus 4 was released. And that literally came out
160:07 like 2 or 3 hours ago. So that's how we know this is searching the web in real
160:11 time. And then all we'd have to do is have, you know, maybe an AI model is
160:14 changing our search term or maybe we're pulling from a Google sheet with a bunch
160:17 of different topics we need to research. But whatever it is, as long as we are
160:21 passing over that variable, this actual search result from Perplexity is going
160:25 to change every single time. And that's the whole point of variables, right?
160:28 They vary. They're dynamic. So, I know that one was quick, but Perplexity
160:32 is a super versatile tool and probably an API that you're going to be
160:35 calling a ton of times. So, I just wanted to get that one in there. Now, Firecrawl is going to allow us
160:43 to turn any website into LLM-ready data in a matter of seconds. And as you can
160:46 see right here, it's also open source. So, once you get over to Firecrawl, click
160:49 on this button and you'll be able to get 500 free credits to play around with. As
160:52 you can see, there's four different things we can do with Firecrawl. We can
160:57 scrape, we can crawl, we can map, or we can do this new extract, which basically
161:01 means we can give Firecrawl a URL and also a prompt, like, can you please
161:04 extract the company name and the services they offer and an icebreaker
161:08 out of this URL. So there's some really cool use cases that we can do with
161:11 Firecrawl. So in this video we're going to be mainly looking at extract, but I'm
161:14 also going to show you the difference between scrape and extract. And we're
161:17 going to get into n8n and connect it up so you can see how this works. But the
161:20 playground is going to be a really good place to understand the difference
161:23 between these different endpoints. All right, so for the sake of this video,
161:26 this is the website we're going to be looking at. It's called quotes to
161:29 scrape. And as you can see, it's got like 10 on this first page and it also
161:32 has different pages of different categories of quotes. And as you can
161:35 see, if we click into them, there are different quotes. So what I'm going to
161:37 do is go back to the main screen and I'm going to copy the URL of this website
161:41 and we're going to go into n8n. We're going to open up a new node, which is
161:45 going to be an HTTP request. And this is just to show you what a standard GET
161:49 request to a static website looks like. So we're going to paste in the URL, hit
161:53 test step, and on the right-hand side, we're going to get all the HTML back
161:57 from the Quotes to Scrape website.
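That plain GET is the simplest request there is, effectively this one-liner:

```
curl https://quotes.toscrape.com/
```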
162:00 Like I said, what we're looking at here is a nasty chunk of HTML. It's pretty hard for us to read, but basically what's
162:04 going on here is this is the code that goes to the website in order to have it
162:07 be styled and different fonts and different colors. So right here, what
162:10 we're looking at is the entire first page of this website. So if we were to
162:14 search for Harry, if I copy this, we go back into n8n and we Ctrl+F this.
162:19 You can see there is the exact quote that has the word Harry. So everything
162:22 from the website's in here, it's just wrapped up in kind of an ugly chunk of
162:26 HTML. Now, hopping back over to the Firecrawl playground, using the scrape
162:30 endpoint, we can replace that same URL. We'll run this and it's going to output
162:33 markdown formatting. So now we can see we actually have everything we're
162:36 looking for with the different quotes, and it's a lot more readable for a human. So
162:41 that's what a web scrape is, right? We get the information back, whether that's
162:44 HTML or markdown, but then we would typically feed that into some sort of
162:47 LLM in order to extract the information we're looking for. In this case, we'd be
162:52 looking for different quotes. But what we can do with extract is we can give it
162:56 the URL and then also say, hey, get all of the quotes on here. And using this
162:59 method, we can say, not just these first 10 on this page. I want you to crawl
163:03 through the whole site and basically get all of these quotes, all of these
163:06 quotes, all of these quotes, all of these quotes. So it's going to be really
163:09 cool. So I'm going to show how this works in Firecrawl, and then we're going
163:11 to plug it into n8n. All right. So what we're doing here is we're saying
163:14 extract all of the quotes and authors from this website. I gave it the website
163:18 and now what it's doing is it's going to generate the different parameters that
163:22 the LLM will be looking to extract out of the content of the website. Okay. So
163:26 here's the run we're about to execute. We have the URL and then we have our
163:29 schema for what the LLM is going to be looking for. And it's looking for text
163:33 which would be the quote and it's a string. And then it's also going to be
163:35 looking for the author of that quote which is also a string. And then the
163:39 prompt we're feeding here to the LLM is extract all quotes and their
163:43 corresponding authors from the website. So we're going to hit run and we're
163:46 going to see that it's not only going to go to that first URL, it's basically
163:49 going to take that main domain, which is quotes.toscrape.com, and it's going to
163:53 be crawling through the other sections of this website in order to come back
163:56 and scrape all the quotes on there. Also, quick plug, go ahead and use code
164:00 herk10 to get 10% off the first 12 months on your Firecrawl plan. Okay, so
164:04 it just finished up. As you can see, we have 79 quotes. So down here we have a
164:09 JSON response where it's going to be an object called quotes. And in there we
164:12 have a bunch of different items which has you know text author text author
164:17 text author and we have pretty much everything from that website now. Okay
164:20 cool. But what we want to do is look at how we can do this in n8n, so that if we have,
164:25 you know, a list of 20, 30, 40 URLs that we want to extract information from, we can
164:28 just loop through and send off that automation, rather than having to come in
164:33 here and type that out in Firecrawl. Okay. So what we're going to do is go
164:35 back into n8n. And I apologize because there may be some jumping around
164:38 here, but we're basically just gonna clear out this HTTP request and grab a
164:42 new one. Now, what we're going to do is we want to go into Firecrawl's
164:45 documentation. So, all we have to do is import the curl command for the extract
164:48 endpoint rather than trying to figure out how to fill out these different
164:51 parameters. So, back in Firecrawl, once you set up your account, up in the top
164:54 right, you'll see a button called docs. You want to click into there. And now,
164:57 we can see a quick start guide. We have different endpoints. And what we're
165:00 going to do is on the left, scroll down to features and click on extract. And
165:04 this is what we're looking for. So, we've got some information here. The
165:07 first thing to look at is when we're using the extract, you can extract
165:10 structured data from one or multiple URLs, including wildcards. So, what we
165:14 did was we didn't just scrape one single page. We basically scraped through all
165:18 of the pages that had the main base domain of quotes.toscrape.com or
165:23 something like that. And if you put an asterisk after it, it's going to basically
165:26 mean this is a wild card and it's going to go scrape all pages that are after it
165:30 rather than just scraping this one predefined page. As you can see right
165:34 here, it'll automatically crawl and parse all the URLs it can discover, then
165:38 extract the requested data. And we can see that's how it worked because if we
165:41 come back into the request we just made, we can see right here that it added a
165:45 slash with an asterisk after quotes.toscrape.com. Okay. Anyway, so what we're
165:48 looking for here is this curl command. This is basically going to fill out the
165:51 method, which is going to be a POST request. It's going to fill out the
165:54 endpoint. It'll fill out the content type, and it'll show us how to set up
165:58 our authorization. And then we'll have a body request that we'll need to make
166:02 some minor changes to.
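Adapted to our quotes example, the extract request comes out looking roughly like this (a sketch based on Firecrawl's extract endpoint as described here; double-check their docs for the current field names):

```
curl -X POST https://api.firecrawl.dev/v1/extract \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "urls": ["https://quotes.toscrape.com/*"],
        "prompt": "Extract all quotes and their corresponding authors from the website.",
        "schema": {
          "type": "object",
          "properties": {
            "quotes": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "text":   { "type": "string" },
                  "author": { "type": "string" }
                }
              }
            }
          }
        }
      }'
```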
166:05 So in the top right, I'm going to click copy, and I'm going to come back into n8n. Hit import curl. Paste that in there. Hit
166:09 import. And as you can see everything pretty much just got populated. So like
166:12 I said, the method is going to be a POST. We have the endpoint already set up. And
166:15 what I want to do is show you guys how to set up this authorization so that we
166:18 can keep it saved forever rather than having to put it in here in the
166:22 configuration panel every time. So first of all, head back over to your
166:26 Firecrawl. Go to API keys on the left-hand side. And you're just going to want
166:29 to copy that API key. So once you have that copied, head back into n8n. And now
166:33 let's look at how we actually set this up. So typically what you do is we have
166:37 this as a header parameter. Not all authorizations are headers, but this one
166:41 is a header. And the key or the name is authorization and the value is bearer
166:46 space your API key. So what you'd typically do is just paste in your API
166:50 key right there and you'd be good to go. But what we want to do is we want to
166:53 save our firecrawl credential the same way you'd save, you know, a Google
166:58 Sheets credential or a Slack credential. So, we're going to come into
167:01 authentication and click on generic credential type, and
167:04 choose header auth, because we know down here it's a header auth. And then you can see
167:07 I have some other credentials already saved. We're going to create a new one.
167:11 I'm just going to name this Firecrawl to keep ourselves organized. For the name,
167:14 we're going to put authorization. And for the value, we're going to type
167:18 bearer with a capital B space and then paste in our API key. And we'll hit
167:21 save. And this is going to be the exact same thing that we just did down below,
167:25 except for now we have it saved. So, we can actually flick this field off. We
167:28 don't need to send headers because we're sending them right here. And now we just
167:32 need to figure out how to configure this body request. Okay, so I'm going to
167:35 change this to an expression and open it up just so we can take a look at it. The
167:38 first thing we notice is that by default there are three URLs in here that we
167:41 would be extracting from. We don't want to do that here. So I'm going to grab
167:44 everything within the array, but I'm going to keep the two quotation marks.
167:47 Now all we need to do is put the URL that we're looking to extract
167:49 information from in between these quotation marks. So here I just put in
167:53 quotes.toscrape.com. But what we want to do, if you remember, is we want to
167:57 put an asterisk after that so that it will go and crawl all of the pages, not
168:01 just that first page and which would only have like nine or 10 quotes. And
168:04 now the rest is going to be really easy to configure because we already did this
168:07 in the playground. So we know exactly what goes where. So I'm going to click
168:10 back into our playground example. First thing is this is the quote that
168:13 firecross sent off. So I'm going to copy that. Go back and edit in and I'm just
168:17 going to replace the prompts right here. We don't want the company mission blah
168:20 blah blah. We want to paste this in here and we're looking to extract all quotes
168:24 and their corresponding authors from the website. And then next is basically
168:27 telling the LLM, what are you pulling back? So, we just told it it's pulling
168:31 back quotes and authors. So, we need to actually make the schema down here in
168:36 the body request match the prompt. So, all we have to do is go back into our
168:39 playground. Right here is the schema that we sent over in our example. And
168:42 I'm just going to click on JSON view and I'm going to copy this entire thing
168:46 which is wrapped up in curly braces. We'll come back into n8n, and we'll start
168:51 after schema, colon, space. Replace all this with what we just had in Firecrawl.
168:55 And actually, I've noticed that the way this copied over, it's not
168:58 going to work. So let me show you guys that real quick. If we hit test step,
169:01 it's going to say JSON parameter needs to be valid JSON. So what I'm going to
169:05 do is I'm going to copy all of this. Now I came into ChatGPT and I'm just saying,
169:08 fix this JSON. What it's going to do is it's going to just basically push these
169:12 over. When you copy it over from Firecrawl, it kind of aligns them on the
169:15 left, but you don't want that. So, as you can see, it just basically pushed
169:18 everything over. We'll copy this into our n8n right there. And all it did
169:21 was bump everything over once. And now we should be good to go. So, real quick
169:24 before we test this out, I'm just going to call this extract. And then we'll hit
169:28 test step. And we should see that it's going to be pulling. And it's going to
169:33 give us a message that says true, and it gives us an ID. And so now what we
169:37 need to do next is poll this ID to see if our request has been fulfilled
169:41 yet. So I'm back in the documentation. And now we are going to look at down
169:45 here asynchronous extraction and status checking. So this is how we check the
169:49 status of a request. As you saw, we just made one. So here I'm going to click on
169:52 copy this curl command. We're going to come back into n8n, and we're going
169:56 to add another HTTP request and we're going to import that in there. And you
170:00 can see this one is going to be a GET command. It's going to have a different
170:02 endpoint. And what we need to do if you look back at the documentation is at the
170:07 end of the extract slash we have to put the extract ID that we're looking to
170:13 check the status of. So back in n8n, the ID is going to be coming from the left-hand
170:16 side, the previous node, every time. So I'm just going to change the URL field
170:21 to an expression, put a slash, and then I'm going to grab the ID and pull it
170:25 right in there, and we're good to go. Except we need to set up our credential.
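Per the docs, that status check is just a GET with the job ID appended to the extract endpoint (sketch, with a placeholder ID and key):

```
curl https://api.firecrawl.dev/v1/extract/YOUR_EXTRACT_ID \
  -H "Authorization: Bearer YOUR_API_KEY"
```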
170:28 And this is why it's great: we already set this up as a generic header auth.
170:32 And now we can just easily pull in our Firecrawl auth and hit test step. So
170:37 what happens now is our request hasn't been done yet. So as you can see it
170:41 comes back as processing and the data is an empty array. So what we're going to
170:44 set up real quick is something called polling where we're basically checking
170:48 in on a specific ID, which is this one right here. And we're going to check, and
170:51 if it's empty, if the data field is empty, then that means we're going to
170:55 wait a certain amount of time and come back and try again.
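For intuition, the two response shapes we see from that status endpoint look roughly like this (sketched from the runs in this video; the exact fields are Firecrawl's to define):

```
{ "success": true, "status": "processing", "data": [] }

{ "success": true, "status": "completed", "data": { "quotes": [ { "text": "...", "author": "..." } ] } }
```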
170:59 So after the request, I'm going to add an If node. This is just basically going to help us
171:02 create our filter. So, we're dragging in $json.data, which as you can see is an
171:06 empty array, and we're just going to say is empty. But one thing you have to keep
171:10 in mind is this doesn't match. As you can see, we're dragging in an array, and
171:14 we were trying to do a filter of a string. So, we have to go to array and
171:18 then say is empty. And we'll hit test step. And this is going to say true. The
171:23 data field is empty. And so, if true, what we want to do is we're going to add
171:27 a wait. And this will wait for, in this case, let's just
171:30 say five seconds. So if we hit test step, it's going to wait for five
171:33 seconds. And I wish I had switched the logic so that this would be
171:37 on the bottom, but whatever. And then we would just drag this right back into
171:41 here. And we would try it again. So now after 5 seconds had passed or however
171:45 much time, we would try this again. And now we can see that we have our item
171:48 back and the data field is no longer empty because we have our quotes object
171:53 which has 83 quotes. So we even got more than the time we did it in the
171:56 playground. And I'm thinking this is just because, you know, the extract is
171:59 kind of still in beta. So it may not be super consistent, but that's still way
172:03 better than if we were to just do a simple GET request. And then as you
172:07 can see now, if we ran this next step, this would come out. Ah, but this is interesting. So
172:13 before it knows what it's pulling back, the $json.data field is an array. And so
172:17 we're able to set up is the array empty? But now it's an object. So we can't put
172:21 it through the same filter because we're looking at a filter for an array. So
172:25 what I'm thinking here is we could set up this continue using error output. So
172:29 because this this node would error, we could hit test step and we could see now
172:33 it's going to go down the false branch. And so this basically just means it's
172:36 going to let us continue moving through the process. And we could do then
172:38 whatever we want to do down here. Obviously this isn't perfect, because I
172:41 just set this up to show you guys and ran into that. But that's typically
172:45 the way we would think: how can we make this a little more dynamic, since
172:49 it has to deal with empty arrays or potentially full objects?
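By the way, that wait-check-retry loop we just wired up is the polling pattern in its simplest form. Here's a minimal sketch of the same idea in Python, assuming Firecrawl's v1 extract endpoints and an API key stored in an environment variable (the site URL and body fields here are illustrative, not copied from the workflow):

import os, time, requests

API_KEY = os.environ["FIRECRAWL_API_KEY"]        # assumption: key kept in an env var
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
BASE = "https://api.firecrawl.dev/v1/extract"

# Kick off the extract job; the response gives us an ID to check on later.
job = requests.post(BASE, headers=HEADERS, json={
    "urls": ["https://example.com/*"],           # trailing asterisk = crawl the whole site
    "prompt": "Extract every quote and its author.",
}).json()
job_id = job["id"]

# Poll: fetch the status, and if the data isn't back yet, wait 5 seconds and retry.
while True:
    status = requests.get(f"{BASE}/{job_id}", headers=HEADERS).json()
    if status.get("data"):                       # same check as the If node: is data empty?
        break
    time.sleep(5)                                # same as the 5-second Wait node

print(status["data"])

The n8n version is exactly this loop, just drawn on the canvas with the If and Wait nodes.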
172:52 Anyways, what I wanted to show you guys now is: back in our request, if we were
172:56 to get rid of this asterisk, what would happen? So, we're just going to run this whole
172:59 process again. I'll hit test workflow. And now it's going to be sending that
173:04 request only to, you know, one URL rather than the other one. Aha. And I'm
173:08 glad we are doing live testing because I made the mistake of putting this in as
173:12 JSON.id, which doesn't exist if we're pulling from the Wait node. So all we
173:16 have to do in here is get rid of JSON.id and pull in, basically, a node
173:21 reference variable. So we're going to do two curly braces, we're going to
173:25 be pulling from the extract node, and then we just want to say item.json.id.
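Put together, and assuming the extract node is literally named "Extract", that expression comes out to something like:

{{ $('Extract').item.json.id }}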
173:29 And with that, we should be good to go now. So I'm just going to refresh this
173:33 and we'll completely do it again. So test workflow, we're doing the exact
173:37 same thing. It's not ready yet. So, we're going to wait 5 seconds and then
173:40 we're going to go check again. We hopefully should see, okay, it's not
173:42 ready still. So, we're going to wait five more seconds. Come check again. And
173:46 then whenever it is ready now, as you can see, it goes down this branch. And
173:50 we can see that we actually get our items back. And what you see here is
173:54 that this time we only got 10 quotes. Um, you know, it says nine, but
173:57 computers count from zero. But we only got 10 quotes because um we didn't put
174:04 an asterisk after the URL. So, Firecrawl didn't know it needed to go scrape
174:07 everything out of this whole base URL. I'm only going to be scraping this one
174:11 specific page, which is this one right here, which does in fact only have 10
174:15 quotes. And by the way, super simple template here, but if you want to try it
174:18 out and just plug in your API key and different URLs, you can grab that in the
174:22 free Skool community. You'll hop in there, you will click on YouTube
174:24 resources and click on the post associated with this video, and you'll
174:28 have the JSON right there to download. Once you download that, all you have to
174:31 do is import it from file right up here, and you'll have the workflow. So,
174:34 there's a lot of cool use cases for Firecrawl. It'd be cool to be able to
174:38 pull from a sheet, for example, of 30 or 40 or 50 URLs that we want to run
174:42 through and then update based on the results. You could do some really cool
174:45 stuff here, like researching a ton of companies and then having it also create
174:49 some initial outreach for you. So, I hope you guys enjoyed that one.
174:51 Firecrawl is a super cool tool. There's lots of functionality there and there's
174:55 lots of uses of AI in Firecrawl, which is awesome. We're going to move into a
174:57 different tool that you can use to scrape pretty much anything, which is
175:00 called Apify, which has a ton of different actors, and you can scrape,
175:04 like I said, almost anything. So, let's go into the setup video. So, Apify is
175:08 like a marketplace for actors, which essentially lets us scrape anything on
175:10 the internet. As you can see right here, we're able to explore 4,500 plus
175:14 pre-built actors for web scraping and automation. And it's really not that
175:17 complicated. An actor is basically just a predefined script that was already
175:20 built for us that we can just send off a certain request to. So, you can think of
175:23 it like a virtual assistant where you're saying, "Hey, I want to
175:26 use the TikTok virtual assistant and I want you to scrape, you know, videos
175:30 that have the hashtag AI content." Or you could use the LinkedIn job scraper
175:33 and you could say, "I want to find jobs that are titled business analyst." So,
175:36 there's just so many ways you could use Apify. You could get leads from Google
175:39 Maps. You could get Instagram comments. You could get Facebook posts. There's
175:43 just almost unlimited things you can do here. You can even tap into Apollo's
175:46 database of leads and just get a ton. So today I'm just going to show you guys in
175:50 n8n the easiest way to set up this Apify actor where you're going to start the
175:53 actor and then you're going to just grab those results. So what you're going to
175:56 want to do is head over to Apify using the link in the description and then use
176:01 code 30 Nate Herk to get 30% off. Okay, like I said, what we're going to be
176:03 covering today is a two-step process where you make one request to Apify to
176:07 start up an actor and then you're going to wait for it to finish up and then
176:10 you're just going to pull those results back in. So let me show you what that
176:12 looks like. What I'm going to do is hit test workflow and this is going to start
176:15 the Google Maps actor. And what we're doing here is we're asking for dentists
176:18 in New York. And then if I go to my Apify console and I go over here to
176:22 actors and click on the Google Maps extractor one, if I click on runs, we
176:25 can see that there's one currently finishing up right now. And now that
176:28 it's finished, I can go back into our workflow. I can hook it up to the get
176:32 results node. Hit test step. And this is going to pull in those 50 dentists that
176:36 we just scraped in New York. And you can see this contains information like their
176:39 address, their website, their phone number, all this kind of stuff. So you
176:43 can just basically scrape these lists of leads. So anyways, that's how this
176:46 works, but let's walk through a live setup. So once you're in your Apify
176:49 console, you click on the Apify store, and this is where you can see all the
176:52 different actors. And let's do an example of like a social media one. So
176:55 I'm going to click on this TikTok scraper since it's just the first one
176:58 right here. And this may seem a little bit confusing, but it's not going to be
177:01 too bad at all. We get to basically do all this with natural language. So let
177:04 me show you guys how this works. So basically, we have this configuration
177:07 panel right here. When you open up any sort of actor, they won't always all be
177:11 the same, but in this one, what we have is videos with this hashtag. So, we can
177:14 put something in. I put in AI content to play around with earlier. And then you
177:17 can see it asks, how many videos do you want back? So, in this case, I put 10.
177:21 Let's just put 25 for the sake of this demo. And then you have the option to
177:24 add more settings. So, down here, we could do, you know, we could add certain
177:26 profiles that we want to scrape. We could add a different search
177:29 functionality. We could even have it download the videos for us. So, once
177:32 you're good with this configuration... and just don't overcomplicate it. Think of
177:35 it the same way you would, like, put in filters on an e-commerce website, or the
177:39 same way you would, you know, fill in your order when you're DoorDashing some
177:42 food. So, now that we have this filled out the way we want it, all I'm going to
177:45 do is come up to the top right and hit API and click API endpoints. The first
177:49 thing we're going to do is we're going to use this endpoint called run actor.
177:52 This is the one that's basically just going to send a request to Apify and
177:55 start this process, but it's not going to give us the live results back. That's
177:59 why the second step later is to pull the results back. What you could do is you
178:02 could run the actor synchronously, meaning it's going to send it off and
178:05 it's just going to spin in n8n until we're done and until it has the
178:09 results. But I found this way to be more consistent. So anyways, all you have to
178:12 do is click on copy and it's already going to have copied over your Apify API
178:17 key. So it's really, really simple. All we're going to do here is open up a new
178:20 HTTP request. I'm going to just paste in that URL that we just copied right here.
178:24 And that's basically all we have to do except for we want to change this method
178:28 to post because as you can see right here, it says post. And so this is
178:31 basically just us putting in the actor's phone number. And so we're giving it a
178:35 call. But now what we have to do is actually tell it what we want. So right
178:38 here, we've already filled this out. I'm going to click on JSON and all I have to
178:42 do is just copy this JSON right here. Go back into n8n. Flick this on to send a
178:47 body and we want to send over just JSON. And then all I have to do is paste that
178:49 in there. So, as you can see, what we're sending over to this TikTok scraper is
178:54 I want AI content and I want 25 results. And then all this other stuff is false.
178:57 So, I'm just going to hit test step. And so, this basically returns us with an ID
179:00 and says, okay, the actor started. If we go back into here and we click on runs,
179:04 we can see that this crawler is now running. And it's going to basically
179:07 tell us how much it cost, how long it took, and all this kind of stuff. And
179:11 now it's already done. So, what we need to do now is we need to click on API up
179:14 in the top right. Click on API endpoints again and scroll all the way down to the
179:19 bottom where we can see get last run data set items. So, all I need to do is
179:23 hit this copy button right here. Go back into n8n and then open up another HTTP
179:27 request. And then I'm just going to paste that URL right in there once
179:30 again. And I don't even have to change the method because if we go in here, we
179:34 can see that this is a get. So, all I have to do is hit test step. And this is
179:38 going to pull in those 25 results from our TikTok scrape based on the search
179:42 term AI content. So, you can see right here it says 25 items. And just to show
179:46 you guys that it really is 25 items, I'm just going to grab a set field. We're
179:49 going to just drag in the actual text from here and hit test step. And it
179:53 should... oh, we have to connect a trigger. So, I'm just going to move this trigger
179:57 over here real quick. And um what you can do is because we already have our
180:00 data here, I can just pin it so we don't actually have to run it again. But then
180:03 I'll hit test step. And now we can see we're going to get our 25 items right
180:08 here, which are all of the text content. So I think just the captions or the
180:11 titles of these TikToks. And we have all 25 TikToks, as you can see.
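If it helps to see the two-step pattern outside of n8n, here's a minimal Python sketch against Apify's v2 REST API. The actor ID is a placeholder, the input fields vary per actor, and I'm mirroring the TikTok example above:

import os, time, requests

TOKEN = os.environ["APIFY_TOKEN"]               # assumption: token kept in an env var
ACTOR = "someuser~tiktok-scraper"               # placeholder actor ID

# Step 1: start the actor run; this returns right away with run metadata.
requests.post(
    f"https://api.apify.com/v2/acts/{ACTOR}/runs",
    params={"token": TOKEN},
    json={"hashtags": ["aicontent"], "resultsPerPage": 25},  # input shape is actor-specific
)

time.sleep(30)  # crude fixed wait; see the polling sketch later for a sturdier version

# Step 2: pull the dataset items from the most recent run.
items = requests.get(
    f"https://api.apify.com/v2/acts/{ACTOR}/runs/last/dataset/items",
    params={"token": TOKEN},
).json()
print(len(items), "items")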
180:15 So I just showed you guys the two-step method, and why I've been using it:
180:18 because here's an example where I did the synchronous run. So all I did was I
180:22 came to the Google Maps actor and I went to API endpoints and then I wanted to do
180:26 run actor synchronously, which basically means that it would run in n8n and it
180:30 would spin until the results were done and then it should feed back the output.
180:34 So I copied that, I put it into here, and as you can see I just ran it with
180:37 Google Maps looking for plumbers and we got nothing back. So that's why we're
180:40 taking this two-step approach where as you can see here we're going to do that
180:43 exact same request. We're doing a request for plumbers and we're going to
180:47 fire this off. And so nothing came back in n8n. But if we go to our actor and
180:50 we go to runs, we can see right here that this was the one that we just made
180:54 for plumbers. And if we click into it, we can see all the plumbers. So that's
180:57 why we're taking the two-step approach. I'm going to make the exact same request
181:00 here for New York plumbers. And what I'm going to do is just run this workflow.
181:03 And now I wanted to talk about what we have to do because what happens is we
181:07 started the actor. And as you can see, it's running right now. And then it went
181:10 to grab the results, but the results aren't done yet. So that's why it comes
181:13 back and says this is an item, but it's empty. So, what we want to do is we want
181:17 to go to our runs and we want to see how long this is taking on average for 50
181:21 leads. As you can see, the most amount of time it's ever taken was 19 seconds.
181:24 So, I'm just going to go in here and in between the start actor and grab
181:28 results, I'm going to add a wait, and I'm just going to tell this thing to
181:31 wait for 22 seconds just to be safe. And now, what I'm going to do is just run
181:33 this thing again. It's going to start the actor. It's going to wait for 22
181:37 seconds. So, if we go back into Apify, you can see that the actor is once again
181:41 running. After about 22 seconds, it's going to pass over and then we should
181:45 get all 50 results back in our HTTP request. There we go. Just finished up.
181:48 And now you can see that we have 50 items which are all of the plumbers that
181:53 we got in New York. So from here, now you have these 50 leads. And
181:56 remember, if you want to come back into Apify and change up your input, you can
182:00 change how many places you want to extract. So if you changed this to 200
182:03 and then you clicked on JSON and you copied in that body, you would now be
182:07 searching for 200 results. But anyways, that's the hard part: getting the
182:11 leads into n8n. But now we have all this data about them and we can just, you
182:14 know, do some research, send them an email, whatever it is. We can just
182:18 basically have this thing running 24/7. And if you wanted to make this workflow
182:21 more advanced, to handle a more dynamic amount of results, what
182:25 you'd want to use is a technique called polling. So basically, you'd wait, you
182:28 check in, and then if the results were all done, you continue down the process.
182:32 But if they weren't all done, you would basically wait again and come back. And
182:36 you would just loop through this until you're confident that all of the results
182:39 are done.
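A sketch of that loop, under the same placeholder assumptions as the earlier Apify sketch, but checking the run's status instead of guessing a wait time:

import os, time, requests

TOKEN = os.environ["APIFY_TOKEN"]
ACTOR = "someuser~google-maps-extractor"        # placeholder actor ID

run = requests.post(
    f"https://api.apify.com/v2/acts/{ACTOR}/runs",
    params={"token": TOKEN},
    json={"searchStringsArray": ["plumbers in New York"]},  # input shape is actor-specific
).json()["data"]

# Poll the run status until Apify says it finished.
while True:
    status = requests.get(
        f"https://api.apify.com/v2/actor-runs/{run['id']}",
        params={"token": TOKEN},
    ).json()["data"]["status"]
    if status == "SUCCEEDED":
        break
    time.sleep(5)

items = requests.get(
    f"https://api.apify.com/v2/actor-runs/{run['id']}/dataset/items",
    params={"token": TOKEN},
).json()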
182:42 So that's going to be it for this one. I'll have this template available in my free Skool community if you want to play around with it. Just
182:44 remember you'll have to come in here and you'll have to switch out your own API
182:47 key. And don't forget, when you get to Apify, you can use code 30 Nate Herk to
182:51 get 30% off. Okay, so those were some APIs that we can use to actually scrape
182:54 information. Now, what if we want to use APIs to generate some sort of content?
182:58 We're going to look at an image generation API from OpenAI and we're
183:02 going to look at a video generation API called Runway. So these next two
183:05 workflows will explain how you set up those API calls and also how you can
183:09 bake them into a workflow to be a little bit more practical. So let's take a
183:13 look. So this workflow right here, all I had to do was enter in ROI on AI
183:17 automation and it was able to spit out this LinkedIn post for me. And if you
183:21 look at this graphic, it's insane. It looks super professional. It even has a
183:24 little LinkedIn logo in the corner, but it directly calls out the actual
183:28 statistics that are in the post based on the research. And for this next one, all
183:31 I typed in was mental health within the workplace and it spit out this post.
183:34 According to Deloitte Insights, organizations that support mental health
183:38 can see up to 25% increase in productivity. And as you can see down
183:42 here, it's just a beautiful graphic. So, a few weeks ago when ChatGPT came out
183:45 with their image generation model, you probably saw a lot of stuff on LinkedIn
183:48 like this where people were turning themselves into action figures or some
183:51 stuff like this where people were turning themselves into Pixar animation
183:55 style photos or whatever it is. And obviously, I had to try this out myself.
183:58 And of course, this was very cool and everyone was getting really excited. But
184:01 then I started to think about how could this image generation model actually be
184:06 used to save time for a marketing team because this new image model is actually
184:09 good at spelling and it can make words that don't look like gibberish. It opens
184:13 up a world of possibilities. So here's a really quick example of me giving it a
184:16 one-sentence prompt and it spits out a poster that looks pretty solid. Of
184:20 course, we were limited to having to do this in ChatGPT and coming in here and
184:24 typing, but now the API is released, so we can start to save hours and hours of
184:27 time. And so, the automation I'm going to show you guys today is going to
184:30 help you turn an idea into a fully researched LinkedIn post with a graphic
184:34 as well. And of course, we're going to walk through setting up the HTTP request
184:39 to OpenAI's image generation model. But what you can do is also download this
184:42 entire template for free and you can use it to post on LinkedIn or you can also
184:46 just kind of build on top of it to see how you can use image generation to save
184:50 you hours and hours within some sort of marketing process. So this workflow
184:54 right here, all I had to do was enter in ROI on AI automation and it was able to
184:58 spit out this LinkedIn post for me. And if you look at this graphic, it's
185:01 insane. It looks super professional. It even has a little LinkedIn logo in the
185:05 corner, but it directly calls out the actual statistics that are in the post
185:09 based on the research. So 74% of organizations say their most advanced AI
185:13 initiatives are meeting or exceeding ROI expectations right here. And on the
185:17 other side, we can see that only 26% of companies have achieved significant
185:21 AI-driven gains so far, which is right here. And I was just extremely impressed
185:24 by this one. And for this next one, all I typed in was mental health within the
185:28 workplace. And it spit out this post. According to Deloitte Insights,
185:31 organizations that support mental health can see up to 25% increase in
185:35 productivity. And as you can see down here, it's just a beautiful graphic.
185:38 Something that would probably take me 20 minutes in Canva. And if you can now
185:42 push out these posts in a minute rather than 20 minutes, you can start to push
185:45 out more and more throughout the day and save hours every week. And because the
185:49 post is backed by research, and the graphic is backed by the researched
185:53 post, you're not polluting the internet with what a lot of people in my
185:56 comments call AI slop. Anyways, let's do a quick live run of this workflow and
185:59 then I'll walk through step by step how to set up this API call. And as always,
186:03 if you want to download this workflow for free, all you have to do is join my
186:06 free Skool community. The link is down in the description and then you can search
186:09 for the title of the video. You can go into YouTube resources. You need to find
186:13 the post associated with this video and then when you're in there, you'll be
186:16 able to download this JSON file and that is the template. So you download the
186:20 JSON file. You'll go back into n8n. You'll open up a new workflow and in the
186:25 top right you'll go to import from file. Import that JSON file and then there'll
186:28 be a little sticky note with a setup guide just sort of telling you what you
186:31 need to plug in to get this thing to work for you. Okay, quick disclaimer
186:33 though. I'm not actually going to post this to LinkedIn. You certainly could,
186:37 but um I'm just going to basically send the post as well as the attachment to my
186:41 email because I don't want to post on LinkedIn right now. Anyways, as you can
186:45 see here, this workflow is starting with a form submission. So, if I hit test
186:48 workflow, it's going to pop up with a form where we have to enter in our email
186:53 for the workflow to send us the results. Topic of the post and then also I threw
186:57 in here a target audience. So, you could have these posts be kind of flavored
187:00 towards a specific audience if you want to. Okay, so this form is waiting for
187:04 us. I put in my email. I put the topic of morning versus night people and the
187:07 target audience is working adults. So, we'll hit submit, close out of here, and
187:10 we'll see the LinkedIn post agent is going to start up. It's using Tavily here
187:14 for research and it's going to create that post and then pass the post on to
187:19 the image prompt agent. And that image prompt agent is going to read the post
187:22 and basically create a prompt to feed into OpenAI's image generator. And as you can
187:28 see, it's doing that right now. We're going to get that back as a base64
187:32 string. And then we're just converting that to binary so we can actually post
187:36 that on LinkedIn or send that in email as an attachment and we'll break down
187:39 all these steps. But let's just wait and see what these results look like here.
187:43 Okay, so all that just finished up. Let me pop over to email. So in email, we
187:46 got our new LinkedIn post. Are you a morning lark or a night owl? The science
187:49 of productivity. I'm not going to read through this right now exactly, but
187:53 let's take a look at the image we got. When are you most productive? In the
187:57 morning, plus 10% productivity or night owls thrive in flexibility. I mean, this
188:00 is insane. This is a really good graphic. Okay, so now that we've seen
188:04 again how good this is, let's just break down what's going on. We're going to
188:07 start off with the LinkedIn post agent. All we're doing is we're feeding in two
188:11 things from the form submission, which was what is the topic of the post, as
188:14 well as who's the target audience. So right here, you can see morning versus
188:18 night people and working adults. And then we move into the actual system
188:21 prompt, which I'm not going to read through this entire thing. If you
188:23 download the template, the prompt will be in there for you to look at. But
188:26 basically I told it you are an AI agent specialized in creating professional
188:30 educational and engaging LinkedIn posts based on a topic provided by the user.
188:34 We told it that it has a tool called Tavily that it will use to search the web
188:38 and gather accurate information and that the post should be written to appeal to
188:42 the provided target audience. And then basically just some more information
188:45 about how to structure the post, what it should output and then an example which
188:49 is basically you receive a topic. You search the web, you draft the post and
188:53 you format it with source citations, clean structure, optional hashtags and a
188:57 call to action at the end. And as you can see what it outputs is a super clean
189:01 LinkedIn post right here. So then what we're going to do is basically we're
189:05 feeding this output directly into that next agent. And by the way, they're both
189:10 using GPT-4.1 through OpenRouter. All right, but before we look at the
189:12 image prompt agent, let's just take a look at these two things down here. So
189:15 the first one is the chat model that plugs into both image prompt agent and
189:19 the LinkedIn post agent. So all you have to do is go to OpenRouter, get an API
189:22 key, and then you can choose from all these different models. And in here, I'm
189:24 using GPT-4.1. And then we have the actual tool that the LinkedIn agent uses for its
189:31 research, which is Tavily. And what we're doing here is we're sending off a post
189:35 request using an HTTP request tool to the Tavily endpoint. So this is where
189:39 people typically start to feel overwhelmed when trying to set up these
189:42 requests because it can be confusing when you're trying to look through that
189:45 API documentation, which is exactly why in my paid community I created an APIs
189:49 and HTTP requests deep dive because truthfully you need to understand how to
189:54 set up these requests because being able to connect to different APIs is where
189:58 the magic really happens. So Tavily just lets your LLM connect to the web and
190:02 it's really good for web search and it also gives you a thousand free searches
190:05 per month. So that's the plan that I'm on. Anyways, once you're in here and you
190:08 have an account and you get an API key, all I did was go to the Tavily search
190:12 endpoint and you can see we have a curl statement right here where we have this
190:17 endpoint. We have POST as the method, we have how we authorize ourselves,
190:20 and this is all going to be pretty similar to the way that we set up the
190:23 actual request to OpenAI's image generation API. So, I'm not going to
190:26 dive into this too much. When you download this template, all you have to
190:30 do is plug in your Tavily API key. But later in this video when we walk through
190:35 setting up the request to OpenAI, this should make more sense. Anyways, the
190:38 main thing to take away from this tool is that we're using a placeholder for
190:41 the request because in the request we send over to Tavily, we basically say,
190:44 okay, here's the search query that we're going to search the internet for. And
190:47 then we have all these other little settings we can tweak like the topic,
190:51 how many results, how many chunks per source, all this kind of stuff. All we
190:55 really want to touch right now is the query. And as you can see, I put this in
190:59 curly braces, meaning it's a placeholder. I'm calling the placeholder
191:02 search term. And down here, I'm defining that placeholder as what the user is
191:06 searching for. So, as you can see, this data in the placeholder is going to be
191:09 filled in by the model. So, based on our form submission, when we asked it to,
191:13 you know, create a LinkedIn post about morning versus night people, it fills
191:17 out the search term with latest research on productivity, morning people versus
191:21 night people, and that's basically how it searches the internet.
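For reference, the request that tool fires off looks roughly like this; a sketch only, since the exact Tavily body fields and auth style may differ from the docs you're reading:

import os, requests

resp = requests.post(
    "https://api.tavily.com/search",
    headers={"Authorization": f"Bearer {os.environ['TAVILY_API_KEY']}"},
    json={
        # In the n8n tool, this value is the {search_term} placeholder
        # that the model fills in at run time.
        "query": "latest research on productivity, morning people versus night people",
        "max_results": 3,
    },
).json()
print(resp["results"])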
191:24 And then we get our results back, and now it creates a LinkedIn post that we're ready to pass
191:29 off to the next agent. So the output of this one gets fed into this next one,
191:32 which all it has to do is read the output. As you can see right here, we
191:36 gave it the LinkedIn post, which is the full one that we just got spit out. And
191:39 then our system message is basically telling it to turn that into an image
191:43 prompt. This one is a little bit longer. Not too bad, though. I'm not going to
191:46 read the whole thing, but essentially we're telling it that it's going to be
191:50 an AI agent that transforms a LinkedIn post into a visual image prompt for a
191:56 text-to-image AI generation model. So, we told it to read the post, identify the
192:00 message, identify the takeaways, and then create a compelling graphic prompt
192:04 that can be used with a text-to-image generator. We gave it some output
192:06 instructions like, you know, if there's numbers, try to work those into the
192:10 prompt. Um, you can use, you know, text, charts, icons, shapes, overlays,
192:14 anything like that. And then the very bottom here, we just gave it sort of
192:17 like an example prompt format. And you can see what it spits out is an image
192:21 prompt. So it says a dynamic split screen infographic style graphic. Left
192:25 side has a sunrise, it's bright yellow, and it has morning larks plus 10%
192:29 productivity. And the right side is a night sky, cool blue gradients,
192:33 a crescent moon, all this kind of stuff. And that is exactly what we saw back in
192:38 here when we look at our image. And so this is just so cool to me because first
192:41 of all, I think it's really cool that it can read a post and kind of use its
192:44 brain to say, "Okay, this would be a good, you know, graphic to be looking at
192:47 while I'm reading this post." But then on top of that, it can actually just go
192:51 create that for us. So, I think this stuff is super cool. You know, I
192:53 remember back in September, I was working on a project where someone
192:57 wanted me to help them with LinkedIn automated posting and they wanted visual
193:00 elements as well and I was like, uh, I don't know, like that might have to be a
193:04 couple-months-away thing when we have some better models, and now we're here.
193:07 So, it's just super exciting to see. But anyways, now we're going to feed that
193:12 output, the image prompt into the HTTP request to OpenAI. So, real quick, let's
193:16 go take a look at OpenAI's documentation. So, of course, we have
193:20 the GPT Image API, which lets you create, edit, and transform images.
193:24 You've got different styles, of course. You can do like memes with text.
193:29 You can do creative things. You can turn other images into different images. You
193:31 can do all this kind of stuff. And this is where it gets really cool, these
193:35 posters and the visuals with words because that's the kind of stuff where
193:39 typically AI image gen like wasn't there yet. And one thing real quick in your
193:42 OpenAI account, which is different than your ChatGPT account, this is where you
193:46 add the billing for your OpenAI API calls. You have to have your
193:50 organization verified in order to actually be able to access this model
193:54 through the API right now. It took me 2 minutes. You basically just have to
193:57 submit an ID and it has to verify that you're human and then you'll be verified
194:00 and then you can use it. Otherwise, you're going to get an error message
194:02 that looks like this that I got earlier today. But anyways, the verification
194:06 process does not take too long. Anyways, then you're going to head over to the
194:08 API documentation that I will have linked in the description where we can
194:12 see how we can actually create an image in n8n. So, we're going to dive deeper
194:16 into this documentation in the later part of this video where I'm walking
194:19 through a step-by-step setup of this. But we're using the endpoint which
194:23 is going to create an image. So, we have this URL right here. We're going to be
194:27 creating a post request and then we just obviously have our things that we have
194:30 to configure like the prompt in the body. We have to obviously send over
194:35 some sort of API key. We can, you know, choose the size. We can
194:37 choose the model. All this kind of stuff. So back in n8n, you can see that
194:41 I'm sending a post request to that endpoint. For the headers, I set up my
194:44 API key right here, but I'm going to show you guys a better way to do that in
194:47 the later part of this video. And then for the body, we're saying, okay, I want
194:51 to use the GPT image model. Here's the actual prompt to use for the image which
194:54 we dragged in from the image prompt agent. And then finally the size we just
194:59 left it as that 1024x1024 square image. And so this is interesting
195:03 because what we get back is a massive base64 string. Like this thing
195:08 is huge. I can't even scroll right now. My screen's kind of frozen. Anyways, um
195:12 yeah, there it goes. It just kind of lagged. But we got back this massive
195:15 file. We can see how many tokens this was. And then what we're going to do is
195:20 we're going to convert that to binary data. So that's how we can actually get
195:23 the file as an image. As you can see now after we turn that nasty string into a
195:28 file, we have the binary image right over here. So all I did was I basically
195:32 just dragged in this field right here with that nasty string. And then when
195:36 you hit test step, you'll get that binary data. And then from there, you
195:39 have the binary data, you have the LinkedIn post. All you have to do is,
195:43 you know, activate LinkedIn, drag it right in there. Or you can just do what
195:47 I did, which is I'm sending it to myself in email. And of course, before you guys
195:50 yell at me, let's just talk about how much this run cost me. So, this was
195:55 4,273 tokens. And if we look at this API and we go down to the pricing section,
195:59 we can see that for image output tokens, which was generated images, it's going
196:03 to be 40 bucks for a million tokens, which comes out to about 17 cents, as
196:06 you can see right here (4,273 / 1,000,000 × $40 ≈ $0.17, so the math checks out). But really, for the
196:09 quality and kind of for the industry standard I've seen for price, that's on
196:13 the cheaper end. And as you can see down here, it translates roughly to 2 cents,
196:17 7 cents, 19 cents per generated image for low, medium, and high quality.
196:20 But anyways, now that that's out of the way, let's just set up an HTTP
196:25 request to that API and generate an image. So, I'm going to add a first
196:28 step. I'm just going to grab an HTTP request. So, I'm just going to head over
196:31 to the actual API documentation from OpenAI on how to create an image and how
196:35 to hit this endpoint. And all we're going to do is we're going to copy this
196:38 curl command over here on the right. If you're not seeing a curl command, if
196:41 you're seeing Python, just change that to curl. Copy that. And then we're going
196:45 to go back into n8n. Hit import curl. Paste that in there. And then once we
196:49 hit import, we're almost done. So that curl statement basically just
196:52 auto-populated almost everything we need to do. Now we just have a few minor tweaks.
196:55 But as you can see, it changed the method to post. It gave us the correct
196:59 URL endpoint already. It has us sending a header, which is our authorization,
197:02 and then it has our body parameters filled out where all we'd really have to
197:06 change here is the prompt. And if we wanted to, we can customize this kind of
197:09 stuff. And that's why it's going to be really helpful to be able to understand
197:13 and read API documentation so you know how to customize these different
197:16 requests. Basically, all of these little things here like prompt, background,
197:20 model, n, output format, they're just little levers that you can pull and
197:23 tweak in order to change your output. But we're not going to dive too deep
197:26 into that right now. Let's just see how we can create an image. Anyways, before
197:30 we grab our API key and plug that in, when you're in your OpenAI account, make
197:33 sure that your organization is verified. Otherwise, you're going to get this
197:35 error message and it's not going to let you access the model. Doesn't take long.
197:39 Just submit an ID. And then also make sure that you have billing information
197:43 set up so you can actually pay for um an image. But then you're going to go down
197:47 here to API keys. You're going to create new secret key. This one's going to be
197:53 called image test just for now. And then you're going to copy that API key. Now
197:56 back in n8n, it has this already set up for us where all we need to do is
198:00 delete all this. We're going to keep the space after bearer. And we can paste in
198:03 our API key like that. and we're good to go. But if you want a better method to
198:08 be able to save this key in n8n so you don't have to go find it every time,
198:12 what you can do is come to authentication, go to general... actually no, it's generic, and then you're
198:17 going to choose header auth, and we know it's header because right here we're
198:20 sending headers as a header parameter and this is where we're authorizing
198:22 ourselves. So we're just going to do the same up here with the header auth. And
198:26 then we're going to create a new one. I'm just going to call this one openai
198:30 image just so we can keep ourselves organized. And then you're going to do the same
198:34 thing as what we saw down in that header parameter field. Meaning the
198:39 authorization is the name, and then the value is bearer, space, API key.
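So, as a concrete sketch (with a made-up key, of course), the two credential fields end up looking like this:

Name:  Authorization
Value: Bearer sk-...your-key-here...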
198:44 So that's all I'm going to do. I'm going to hit save. We are now authorized to
198:49 access this endpoint. And I'm just going to turn off sending headers because
198:53 we're technically sending headers right up here with our authentication. So we
198:57 should be good now. Right now we'll be getting an image of a cute baby sea
199:01 otter. Um, and I'm just going to say making pancakes. And we'll hit test
199:05 step. And this should be running right now. Um, okay. So, bad request. Please
199:09 check your parameters. Invalid type for n. It expected an integer, but it got a
199:13 string instead. So, if you go back to the API documentation, we can see n
199:18 right here. It should be integer or null, and it's also optional. So, I'm
199:21 just going to delete that. We don't really need that. And I'm going to hit test step.
199:26 And while that's running real quick, we'll just look back at n. And this
199:29 basically says the number of images to generate must be between 1 and 10. So
199:32 that's like one of those little levers you could tweak like I was talking about
199:36 if you want to customize your request. But right now by default it's only going
199:40 to give us one. Looks like this HTTP request is working. So I'll check in
199:45 with you guys in 20 seconds when this is done. Okay. So now that that finished
199:49 up, didn't take too long. We have a few things and all we really need is this
199:52 base64. But we can see again this one cost around 17 cents. And now we just have
199:57 to turn this into binary so we can actually view an image. So I'm going to
200:01 add a plus after the HTTP request. I'm just going to type in binary. And we can
200:06 see convert to file, which is going to convert JSON data to binary data. And
200:11 all we want to do here is move a base64 string to file, because this is a base64
200:15 string in the JSON, and it basically represents the image. So I'm going to drag that into
200:19 there. And then when I hit test step, we should be getting a binary image output
200:24 in a field called data. As you can see right here, and this should be our image
200:28 of a cute sea otter making pancakes. As you can see, um, it's not super
200:32 realistic, and that's because the prompt didn't have any, like, photorealistic
200:36 or hyperrealistic elements in there, but you can easily make it do so.
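To recap the whole call-and-convert step in one place, here's a minimal Python sketch of the same request, assuming an OPENAI_API_KEY environment variable and a verified organization like we set up above:

import base64, os, requests

resp = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-image-1",
        "prompt": "a cute baby sea otter making pancakes",
        "size": "1024x1024",
        # "n" is optional; if you do send it, it must be an integer (1-10), not a string.
    },
).json()

# The image comes back as a base64 string; decoding it to bytes is exactly
# what n8n's Convert to File node is doing for us.
png_bytes = base64.b64decode(resp["data"][0]["b64_json"])
with open("otter.png", "wb") as f:
    f.write(png_bytes)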
200:39 Of course, I was playing around with this earlier, and just to show you guys, you
200:41 can make some pretty cool realistic images. Here was, um, a post I made about
200:47 if ancient Rome had access to iPhones. And obviously, this is not like
200:51 a real Twitter account. Um, but this one is dinosaurs evolved into modern-day
200:55 influencers. This was just me testing, like, an automation using this
200:59 API and auto posting, but not as practical as like these LinkedIn
201:02 graphics. But if you guys want to see a video sort of like this, let me know. Or
201:05 if you also want to see a more evolved version of the LinkedIn posting flow and
201:09 how we can make it even more robust and even more automated, then definitely let
201:15 me know as well. Okay. So, all I have to do in this form submission is enter in a
201:19 picture of a product, enter in the product name, the product description,
201:22 and my email address. And we'll send this off, and we'll see the workflow
201:26 over here start to fire off. So, we're going to upload the photo. We're going
201:29 to get an image prompt. We're going to download that photo. Now, we're creating
201:32 a professional graphic. So, after our image has been generated, we're
201:35 uploading it to an API to get a public URL so we can feed that URL of the image
201:40 into Runway to generate a professional video. Now, we're going to wait 30
201:42 seconds and then we'll check in to see if the video is done. If it's not done
201:45 yet, we're going to come down here and poll: wait five more seconds, and then
201:48 go check in. And we're going to do this indefinitely until our video is actually
201:52 done. So, anyways, it just finished up. It ended up hitting this check eight
201:55 times, which indicates I should probably increase the wait time over here. But
201:58 anyways, let's go look at our finished products. So, we just got this new
202:01 email. Here are the requested marketing materials for your toothpaste. So,
202:04 first, let's look at the video cuz I think that's more exciting. So, let me
202:06 open up this link. Wow, we got a 10-second video. It's spinning. It's 3D.
202:10 The lighting is changing. This looks awesome. And then, of course, it also
202:13 sends us that image, in case we want to use that as well. And one of the steps
202:16 in the workflow is that it's going to upload your original image to your
202:19 Google Drive. So here you can see this was the original and then this was the
202:22 finished product. So now you guys have seen a demo. We're going to build this
202:25 entire workflow step by step. So stick with me because by the end of this
202:28 video, you'll have this exact system up and running. Okay. So when we're setting
202:32 up a system where we're creating an image from text and then we're creating
202:35 a video from that image, the two most important things are going to be that
202:38 image prompt and that video prompt. So what we're going to do is head over to
202:41 my Skool community. The link for that will be down in the description. It's a
202:43 free Skool community. And then what you're going to do is either search for
202:46 the title of this video or click on YouTube resources and find the post
202:50 associated with this video. And when you click into there, there'll be a doc that
202:54 will look like this or a PDF and it will have the two prompts that you'll need in
202:57 order to run the system. So head over there, get that doc, and then we can hop
203:00 into the step by step. And that way we can start to build this workflow and you
203:03 guys will have the prompts to plug right in. Cool. So once you have those, let's
203:07 get started on the workflow. So as you guys know, a workflow always has to
203:11 start with some sort of trigger. So in this case, we're going to be triggering
203:14 this workflow with a form submission. So I'm just going to grab the native n8n
203:18 form trigger, on new form event. So we're going to configure what this form is going to
203:20 look like and what it's going to prompt a user to input. And then whenever
203:24 someone actually submits a response, that's when the workflow is going to
203:27 fire off. Okay. So I'm going to leave the authentication as none. The form
203:31 title, I'm just putting go to market. For the form description, I'm going to
203:36 say give us a product photo, title, and description, and we'll get back to you
203:40 with professional marketing materials. And if you guys are interested in what I
203:43 just used to dictate that text, there'll be a link for Wispr Flow down in the
203:46 description. And now we need to add our form elements. So the first one is going
203:50 to be not text; we're going to have them actually submit a file. So click on
203:54 file. This is going to be required. I only want them to be allowed to upload
203:57 one file. So I'm going to switch off multiple files. And then for the field
204:01 name, we're just going to say product photo. Okay. So now we're going to add
204:04 another one, which is going to be the product title. So I'm just going to
204:07 write product title. This is going to be text. For placeholder, let's just put
204:10 toothpaste since that was the example. This will be a required field. So, the
204:13 placeholder is just going to be the gray text that fills in the text box so
204:17 people know what to put in. Okay, we're adding another one
204:21 called product description. We'll make this one required. We'll just leave the
204:24 placeholder blank cuz you don't need it. And then finally, what we need to get
204:27 from them is an email, but instead of doing text, we can actually make it
204:31 require a valid email address. So, I'm just going to call it email and we'll
204:34 just say like name@email.com so they know what a valid email looks like. We'll make that
204:38 required because we have to send them an email at the end with their materials.
204:42 And now we should be good to go. So if I hit test step, we'll see that it's going
204:45 to open up a form submission and it has everything that we just configured. And
204:48 now let me put in some sample data real quick. Okay, so I put a picture of a
204:52 cologne bottle. The title's cologne. I said the cologne smells very clean and fresh
204:55 and it's a very sophisticated scent because we're going to have that
204:59 description be used to sort of help create that text image prompt. And then
205:02 I just put my email. So I'm going to submit this form. We should see that
205:05 we're going to get data back right here in our n8n, which is the binary photo.
205:08 This is the product photo that I just submitted. And then we have our actual
205:13 table of information like the title, the description, and the email. And so when
205:17 I'm building stuff step by step, what I like to do is I get the data in here,
205:20 and then I pretty much will just build node by node, testing the data all the
205:23 way through, making sure that nothing's going to break when variables are being
205:27 passed from left to right in this workflow. Okay, so the next thing that
205:30 we need to do is we have this binary data in here and binary data is tough to
205:34 reference later. So what I'm going to do is I'm just going to upload it straight
205:37 to our Google Drive so we can pull that in later when we need it to actually
205:41 edit that image. Okay, so that's our form trigger. That's what starts the
205:44 workflow. And now what we're going to do next is we want to upload that original
205:48 image to Google Drive so we can pull it in later and then use it to edit the
205:51 image. So what I'm going to do is I'm going to click on the plus. I'm going to
205:54 type in Google Drive. And we're going to grab a Google Drive operation. That is
205:59 going to be upload file. So, I'll click on upload file. And at this point, you
206:02 need to connect your Google Drive. So, I'm not going to walk through that step
206:05 by step, but I have a video right up here where I do walk through it step by
206:08 step. But basically, you're just going to go to the docs. You have to open up a
206:12 sort of Google Cloud profile or a console, and then you just have to
206:15 connect yourself and enable the right credentials and APIs. Um, but like I
206:19 said, that video will walk through it. Anyways, now what we're doing is we have
206:23 to upload the binary field right here to our Google Drive. So, it's not called
206:27 data. We can see over here it's called product photo. So, I'm just going to
206:29 copy and paste that right there. So, it's going to be looking for that
206:32 product photo. And then we have to give it a name. So, that's why we had the
206:36 person submit a title. So, all I'm going to do is for the name, I'm going to make
206:40 this an expression instead of fixed because this name is going to change
206:43 based on the actual product coming through. I'm going to drag in the
206:47 product title from the left right here. So now the the photo in Google Drive is
206:51 going to be called cologne, and then I'm just going to, in parentheses, say
206:55 original. So because this is an expression, it basically means whenever
206:58 someone submits a form, whatever the title is, it's going to be the title and
207:01 then it's going to say original. And that's how we sort of control that to be dynamic.
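As a sketch, assuming the form field is literally named "Product Title", the name expression comes out to something like:

{{ $json['Product Title'] }} (original)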
207:05 Anyways, then I'm just choosing what folder it goes in. So in my drive,
207:08 I'm going to choose it to go to a folder that I just made called um product
207:12 creatives. So once we have that configured, I'm going to hit test step.
207:15 We're going to wait for this to spin; it means that it's trying to upload it
207:18 right now. And then once we get that success message, we'll quickly go to our
207:21 Google Drive and make sure that the image is actually there. So there we go.
207:25 It just came back. And now I'm going to click into Google Drive, click out of
207:27 the toothpaste, and we can see we have cologne. And that is the image that we
207:31 just submitted in n8n. All right. Now that we've done that, what we want to do
207:35 is we want to feed the data into an AI node so that it can create a text image
207:38 prompt. So I'm going to click on the plus. I'm going to grab an AI agent. And
207:43 before we do anything in here, I'm first of all going to give it its brain. So,
207:45 I'm going to click on the plus under chat model. I'm personally going to grab
207:49 an open router chat model, which basically lets you connect to a ton of
207:53 different things. Um, let me see: openrouter.ai. It basically lets you connect
207:57 your agents to all the different models. So, if I click on models up here, we can
208:00 see that it just lets you connect to Gemini, Anthropic, OpenAI, Deepseek. It
208:04 has all these models and all in one place. So, go to open router, get an API
208:08 key, and then once you come back into here, all you have to do is connect your
208:11 API key. And what I'm going to use here is going to be 4.1. And then I'm just
208:15 going to name this so we know which one I'm using here. And then we now have our
208:20 agent accessing GPT-4.1. Okay. So now you're going to go to that PDF that I have in the Skool
208:26 community and you're just going to copy this product photography prompt. Grab
208:31 that. Go back to the AI agent and then you're going to click on add option. Add
208:35 a system message. And then we're basically just going to I'm going to
208:38 click on expression and expand this full screen so you guys can see it better.
208:40 But I'm just going to paste that prompt in here. And this is going to tell the
208:44 AI agent how to take what we're giving it and turn it into a text-to-image
208:51 optimized prompt for professional-style, you know, studio photography. So, we're
208:55 not done yet because we have to actually give it the dynamic information from our
208:59 form submission every time. So, that's a user message. That's basically what it's
209:02 going to look at. So, the user message is what the agent's going to look at
209:05 every time. And the system message is basically like here are your
209:09 instructions. So for the user message, we're not going to be using a connected
209:11 chat trigger node. We're going to define below. And when we want to make sure
209:15 that this changes every time, we have to make sure it's an expression. And then
209:19 I'm just going to drill down over here to the form submission. And I'm going to
209:21 say, okay, here's what we're going to give this agent. It's going to get the
209:26 product, which the person submitted to us in the form, and we can drag in the
209:31 product, which was cologne, as you can see on the right. And then they also
209:36 gave us a description. So, all I have to do now is drag in the product
209:38 description. And so, now every time the agent will be looking at whatever
209:42 product and description that the user submitted in order to create its prompt.
209:46 So, I'm going to hit test step. We'll see right now it's using its chat model
209:50 GPT4.1. And it's already created that prompt for us. So, let's just give it a
209:53 quick read. Hyperrealistic photo of sophisticated cologne bottle,
209:56 transparent glass, sleek minimalistic design, silver metal cap, all this. But
210:00 what we have to do is we have to make sure that the image isn't being created
210:04 just on this. It has to look at this, but it also has to look at the actual
210:08 original image. So that's why our next step is going to be to redownload this
210:11 file and then we're going to push it over to the image generation model. So
210:15 at this point, you may be wondering like why are we going to upload the file if
210:18 we're just going to download it again? And the reason why I had to do that is
210:21 because when we get the file in the form of binary, we want to send the binary
210:27 data into the HTTP request right here that actually generates the image. And
210:30 we can't reference the binary way over here if it's only coming through over
210:34 here. So, we upload it so that we can then download it and then send it right
210:37 back in. And so, if that doesn't make sense yet, it probably will once we get
210:41 over to that stage. But that's why. Anyways, next step is we're going to
210:44 download that file. So, I'm going to click on this plus. We're going to be
210:47 downloading it from Google Drive and we're going to be using the operation
210:51 download file. So, we already should be connected because we've set up our
210:54 Google credentials already. The operation is going to be download the
210:57 resource is a file, and instead of choosing from a list, we're going to choose by
211:00 ID. And all we're going to do is download that file that we previously
211:03 uploaded every time. So I'm going to come over here to the Google Drive upload
211:08 photo node and drag in the ID, and now we can see that's all we have to do.
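Again as a sketch, assuming the upload node is named "Upload Photo", that ID expression looks something like:

{{ $('Upload Photo').item.json.id }}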
211:11 If we hit test step, we'll get back that file that we originally uploaded. And we can
211:15 just make sure it's the cologne bottle. Okay, but now it's time to basically use
211:19 that downloaded file and the image prompt and send that over to an API
211:23 that's going to create an image for us. So we're going to be using OpenAI's
211:27 image generator. So here is the documentation. we have the ability to
211:30 create an image or we can create an image edit which is what we want to do
211:33 because we wanted to look at the photo and our request. So typically what you
211:38 can do in this documentation is you can copy the curl command but this curl
211:41 command is actually broken so we're not going to do that. If you copied this one
211:44 up here to actually just create an image that one would work fine but there's
211:47 like a bug with this one right now. So anyways, I'm going to go into n8n, I'm
211:51 going to hit the plus. I'm going to grab an HTTP request and now we're going to
211:57 configure this request. So, I'm going to walk through how I'm reading the API
212:00 documentation right here to set this up. I'm not going to go super super
212:03 in-depth, but if you get confused along the way, then definitely check out my
212:06 paid course. The link for that down in the description. I've got a full course
212:10 on deep diving into APIs and HTTP requests. Anyways, the first thing we
212:13 see is we're going to be making a post request to this endpoint. So, the first
212:16 thing I'm going to do is copy this endpoint. We're going to paste that in.
212:19 And then we're also going to make sure the method is set to post. So, the next
212:23 thing that we have to do is authorize ourselves somehow. So over here I can
212:27 see that we have a header and the name is going to be authorization and then
212:31 the value is going to be bearer, space, our OpenAI key. So that's why I set up a
212:35 header authentication already. So in authentication I went to generic and
212:39 then I went to header and then you can see I have a bunch of different headers
212:42 already set up. But what I did here is I chose my OpenAI one where basically all
212:46 I did was I typed in here authorization and then in the value I typed in bearer
212:50 space and then I pasted my API key in there. And now I have my OpenAI
212:54 credential saved forever. Okay. So the first thing we have to do in our body
212:59 request over to OpenAI is we have to send over the image to edit. So that's
213:02 going to be in a field called image. And then we're sending over the actual
213:05 photo. So what I'm going to do is I'm going to click on send body. I'm going
213:10 to use form data. And now we can set up the different names and values to send
213:13 over. So the first thing is we're going to send over this image right here on
213:16 the left-hand side. And this is in a field called data. And it's binary. So,
213:19 instead of form data, I'm going to choose to send over an n8n
213:23 binary file. The name is going to be image because that's what it said in the
213:26 documentation. And the input data field name is data. So, I'm just going to copy
213:30 that, paste it in there. And this basically means, okay, we're sending
213:34 over this picture. The next thing we need to send over is a prompt. So, the
213:37 name of this field is going to be prompt. I'm just going to copy that, add
213:42 a new parameter, and call it prompt. And then for the value, we want to send over
213:45 the prompt that we had our AI agent write. So, I'm going to click into
213:47 schema and I'm just going to drag over the output from the AI agent right
213:51 there. And now that's an expression. So, the next thing we want to send over is
213:54 what model do we want to use? Because if we don't put this in, it's going to
213:58 default to DALL·E 2, but we want to use gpt-image-1. So, I'm going to copy
214:04 gpt-image-1. We're going to come back into here, and I'm going to paste
214:08 that in as the value, but then the name is model because, as you can see in
214:11 here, right there, it says model. So hopefully you guys can see that when
214:15 we're sending over an API call, we just have all of these different options
214:18 where we can sort of tweak different settings to change the way that we get
214:22 the output back. And then you have some other options, of course, like quality
214:25 or size. But right now, we're just going to leave all that as default and just go
214:28 with these three things to keep it simple.
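For reference, here's roughly what this HTTP Request node ends up sending, sketched in Python with the requests library — the file name and prompt text are placeholders, and the key would come from your own account:

    import requests

    OPENAI_API_KEY = "sk-..."  # the key stored in the n8n header credential

    with open("cologne.png", "rb") as f:  # the photo downloaded from Drive
        resp = requests.post(
            "https://api.openai.com/v1/images/edits",
            headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
            files={"image": f},                               # binary field named "image"
            data={"prompt": "Professional product shot...",   # the agent-written prompt
                  "model": "gpt-image-1"},                    # otherwise it defaults to DALL·E 2
        )
    resp.raise_for_status()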
214:32 And I'm going to hit test step and we'll see if this is working. Okay, never mind. I got an error and I was
214:35 like, okay, I think I did everything right. The reason I got the error is
214:38 because I don't have any more credits. So, if you get this error, go add some
214:42 credits. Okay, so added more credits. I'm going to try this again and I'll
214:45 check back in. But before I do that, I wanted to say: clearly, I've been
214:50 spamming this thing with creating images because it's so cool. It's so fun. But
214:53 everyone else in the world has also been doing that. So, if you're ever getting
214:56 some sort of 500-type error, which means
215:00 something's going on on the server side of things, or you're seeing some
215:04 sort of rate limit stuff, keep in mind that there's a limit on how many
215:07 images you can send per minute. I don't think that's been clearly defined for
215:13 gpt-image-1. But also, if the OpenAI server is receiving way too many
215:16 requests, that is also another reason why your request may be failing. So,
215:20 just keep that in mind. Okay, so now it worked. We just got that back. But what
215:23 you'll notice is we don't see an image here or like an image URL. So, what we
215:27 have to do is take this base64 string and turn it into
215:32 binary data. So, what I'm going to do is after this node, I'm going to add one
215:37 that says convert to file. So we're going to convert JSON data to binary
215:41 data, and we're going to do base64. So all I have to do now is show this data on
215:45 the left-hand side, grab the base64 string, and then when we hit test step,
215:48 we should get a binary file over here, which, if we click into it, should
215:52 be our professional-looking photo.
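As a minimal Python sketch of what that Convert to File node does, assuming the response object from the edits call above:

    import base64

    # gpt-image-1 returns data[0].b64_json rather than an image URL
    b64 = resp.json()["data"][0]["b64_json"]
    with open("edited_cologne.png", "wb") as out:
        out.write(base64.b64decode(b64))   # base64 string -> binary file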
215:55 Wow, that looks great. It even got the wording and the same fonts right. So that's awesome. And by the way, if we
216:00 click into the results of the create image where we did the image edit, we
216:04 can see the tokens. And with this model, it is basically $10 for a million input
216:09 tokens and $40 for a million output tokens. So right here, you can see the
216:12 difference between our input and output tokens. And this one was pretty cheap. I
216:15 think it was like 5 cents. Anyways, now that we have that image right here as
216:19 binary data, we need to turn that into a video using an API called Runway. And so
216:23 if we go into Runway and we go first of all, let's look at the price. For a
216:27 5-second video, 25 cents. For a 10-second video, 50 cents. So that's the one we're
216:30 going to be doing today. But if we go to the API reference to read how we can
216:34 turn an image into a video, what we need to look at is how we actually send over
216:38 that image. And what we have to do here is send over an HTTPS URL of the image.
216:43 So we somehow have to get this binary data in NADN to a public image that
216:48 runway can access. So the way I'm going to be doing that is with this API that's
216:53 free called image BB. And um it's a free image hosting service. And what we can
216:57 do is basically just use its API to send over the binary data and we'll get back
217:02 a public URL. So come here, make a free account. You'll grab your API key from
217:05 up top. And then we basically have here's how we set this up. So what I'm
217:08 going to do is I'm going to copy the endpoint right there. We're going to go
217:12 back into n8n and I'm going to add an HTTP request. And let me just configure
217:17 this up. We'll put it over here just to keep everything sort of square. But now
217:20 what I'm going to do in here is paste that endpoint in as our URL. You can
217:24 also see that it says this call can be done using POST or GET. But since GET
217:28 requests are limited by URL length, you should probably do POST.
217:30 So I'm just going to go back in here and change this to a post. And then there
217:33 are basically two things that are required. The first one is our API key.
217:37 And then the second one is the actual image. Anyways, this documentation is
217:41 not super intuitive. I can sort of tell that this is a query parameter because
217:45 it's being attached at the end of the endpoint with a question mark and all
217:47 this kind of stuff. And that's just because I've looked at tons of API
217:51 documentation. So, what I'm going to do is go into n8n. We're going to add a
217:55 generic credential type. It's going to be a query auth. Where was query?
217:59 There we go. And then you can see I've already added my ImgBB one. But all
218:02 you're going to do is add the name as key, and then you would just
218:05 paste in your API key. And that's it. And now we've authenticated ourselves to
218:09 the service. And then what's next is we need to send over the image in a field
218:12 called image. So I'm going to go back in here. I'm going to send over a body
218:15 because this allows us to actually send over n8n binary fields. And I'm not going
218:20 to do n8n binary; I'm going to do form data, because then we can name the field
218:23 we're sending over. Like I said, I'm not going to deep dive into how that all
218:26 works, but the name is going to be image and the input data field name is
218:30 going to be data, because that's how it's seen over here. And this should be it.
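Sketched in Python, the whole ImgBB call looks roughly like this — the key is a placeholder, and the response field names follow ImgBB's docs:

    import requests

    IMGBB_KEY = "..."  # free key from the ImgBB dashboard

    with open("edited_cologne.png", "rb") as f:
        r = requests.post(
            "https://api.imgbb.com/1/upload",
            params={"key": IMGBB_KEY},   # the query auth credential in n8n
            files={"image": f},          # form-data field named "image"
        )
    public_url = r.json()["data"]["url"]  # the public URL we'll hand to Runway
    print(public_url)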
218:33 So, real quick, I'm just going to rename this node to Get URL. And then we're going to
218:37 hit test step, which is going to send over that binary data to ImgBB. And
218:42 it hopefully should be sending us back a URL. And it sent back three of them. I'm
218:45 going to be using the middle one that's just called URL because it's like the
218:48 best size and everything. You can look at the other ones if you want on your
218:52 end, but this one is going to load up and we should see it's the image that we
218:55 got generated for us. It takes a while to load up on that first time, but as
218:59 you can see now, it's a publicly accessible URL and then we can feed it
219:03 into runway. So that's exactly our next step. We're going to add another request
219:07 right here. It's going to be an HTTP and this one we're going to configure to hit
219:11 Runway. So here's a good example of where we can actually use a curl command. So I'm
219:14 going to click on copy over here when I'm in Runway's generate a video from
219:19 image reference. Come back into n8n, hit import curl, paste that in there, and hit
219:22 import. And this is going to basically configure everything we need. We just
219:26 have to tweak a few things. Typically, most API documentation nowadays will
219:29 have a curl command. The edit image one that we set up earlier was just a little
219:33 broken. ImgBB is just a free service, so sometimes they don't have one. But let's
219:37 configure this node. So, the first thing I see is we have a header auth right
219:40 here. And I don't want to send it like this. I want to set it up as a generic
219:44 type so I can save it. Otherwise, you'd have to go get your API key every time
219:47 you wanted to use Runway. So, as you can see, I've already set up my Runway API
219:51 key. So, I have it plugged in, but what you would do is you'd go get your API
219:55 key from Runway. And then you'd see, okay, how do we actually send over
219:58 authentication? It comes through with the name Authorization, and then the
220:03 value is Bearer, space, API key. So, similar to the last one. And that's
220:06 all you would do in here when you're setting up your Runway credential:
220:11 Authorization, Bearer, space, my API key. And then, because we have ourselves
220:14 authenticated up here, we can toggle off those headers. And all we have to do now
220:17 is configure the actual body. Okay, so first things first, what image are we
220:21 sending over to get turned into a video? That's in the field named promptImage. We're going
220:25 to get rid of that value, and I'm just going to drag in the URL that we
220:29 got from earlier, which was that picture I showed you guys. So now
220:33 runway sees that image. Next, we have the seed, which if you want to look at
220:36 the documentation, you can play with it, but I'm just going to get rid of that.
220:38 Then we have the model, which we're going to be using Gen-4 Turbo. We then
220:42 have the prompt text. So, this is where we're going to get rid of this. And
220:46 you're going to go back to that PDF you downloaded from my free Skool community, and
220:49 you're going to paste this prompt in there. So, this prompt basically gives
220:53 us that like 3D spinning effect where it just kind of does a slow pan and a slow
220:56 rotate. And that's what I was looking for. If you're wanting some other type
220:59 of video, then you can tweak that prompt, of course. For the duration, if
221:04 you look in the documentation, it'll say the duration only basically allows five
221:08 or 10. So, I'm just going to change this one to 10. And then the last one was
221:11 ratio. And I'm just going to make it square. So here are the accepted ratio
221:16 values. I'm going to copy 960 by 960, and we're just going to paste that in
221:19 right there. And actually before we hit test step, I've realized that we're
221:22 missing something here. So back in the documentation, we can see that there's
221:25 one thing up here which is required, which is a header: X-Runway-Version.
221:30 And then we need to set the value to this. So I'm going to copy the header name.
221:35 And we have to enable headers. I deleted them earlier, but we're going to
221:37 enable that. So we have the version. And then I'm just going to go copy the value
221:41 that it needs to be set to, and we'll paste that in there as the value.
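Putting the whole request together, here's a hedged Python sketch of what this node sends — the endpoint, version value, and field names follow Runway's docs at the time of recording, so double-check the current reference:

    import requests

    RUNWAY_KEY = "..."  # your Runway API key

    headers = {
        "Authorization": f"Bearer {RUNWAY_KEY}",
        "X-Runway-Version": "2024-11-06",        # the required version header
    }
    body = {
        "promptImage": public_url,               # the ImgBB URL from the last step
        "promptText": "Slow 360-degree pan and rotate...",  # the PDF prompt (placeholder)
        "model": "gen4_turbo",
        "duration": 10,                          # only 5 or 10 are accepted
        "ratio": "960:960",                      # one of the accepted ratio values
    }
    task = requests.post("https://api.dev.runwayml.com/v1/image_to_video",
                         headers=headers, json=body).json()
    print(task["id"])   # all we get back is a task ID — we poll for the result next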
221:44 Otherwise, this would not have worked. Okay, so that should be configured. But
221:48 before we test it out, I want to show you guys how I set up the polling flow
221:52 like this that you saw in the demo. So what we're going to do here is we need
221:57 to go see like, okay, once we send over our request right here to get a video
222:01 from our image, it's going to return an ID and that doesn't mean anything to us.
222:06 So what we have to do is get our task. Basically, we send over
222:10 the ID that it gives us, and then it'll come back and say the status equals
222:14 pending or running, or it'll say completed. So what I'm going to do is
222:18 copy this curl command for getting task details. We're going to hook it up to
222:23 this node as an HTTP request. We're going to import that curl. Now that's pretty much set up. We
222:28 have our authorization, which I'm going to delete because, as you know, we
222:32 just configured that earlier as a header auth. So, I'm just going to come in here
222:37 and grab my Runway API key. There it is. I couldn't find it for some reason.
222:41 We have the version set up. And now all we have to do is drag in the actual ID
222:45 from the previous one. So, real quick, I'm just going to make this an
222:48 expression, delete the placeholder ID, and now we're pretty much set up. So, first of all,
222:51 I'm going to test this one, which is going to send off that request to runway
222:54 and say, "Hey, here's our image. Here's the prompt. Make a video out of it." And
222:59 as you can see, we got back an ID. Now I'm going to use this next node and I'm
223:03 going to drag in that ID from earlier. And now it's saying, okay, we're going
223:06 to check in on the status of this specific task. And if I hit test step,
223:09 what we're going to see is that it's not yet finished. So it's going to come back
223:13 and say, okay, status of this run or status of this task is running. So
223:17 that's why what I'm going to do is add an if. And this if is going to be saying,
223:24 okay, does this status field right here, does that equal RUNNING in all caps?
223:28 Because that's what it equals right now. If yes, what we're going to do is we are
223:32 going to basically wait for a certain amount of time. So here's the true
223:36 branch. I'm going to wait and let's just say it's 5 seconds. So I'll just call
223:41 this five seconds. I'm going to wait for 5 seconds and then I'm going to come
223:44 back here and try again. So as you saw in the demo, it basically tried again
223:48 like seven or eight times. And this just ensures that it's never going to move on
223:53 until we actually have a finished video. So what you could also do is basically
223:56 say does status equal completed or whatever it means when it completes.
223:59 That's another way to do it. You just have to be careful to make sure that
224:01 whatever you're setting here as the check is always 100% going to work. And
224:07 then what you do is you would continue the rest of the logic down this path
224:10 once that check has been complete. And then of course you probably don't want
224:13 to have this check like 10 times every single time. So what you would do is
224:17 you'd add a wait step here. And once you know about how long it takes, you'd
224:21 add this here. So last time I had it at 30 seconds and it waited like eight
224:23 times. So let's just say I'm going to wait 60 seconds here. So then when this
224:27 flow actually runs, it'll wait for a minute, then check. If it's still not done,
224:30 it'll continuously loop through here and wait 5 seconds every time until we're done.
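In plain code, the whole if/wait loop reduces to something like this sketch — same headers and task variable as the generate call above, with status values per Runway's docs:

    import time
    import requests

    while True:
        status = requests.get(
            f"https://api.dev.runwayml.com/v1/tasks/{task['id']}",
            headers=headers,                    # same Bearer + X-Runway-Version headers
        ).json()
        if status["status"] not in ("PENDING", "RUNNING"):
            break                               # SUCCEEDED (or FAILED) — stop waiting
        time.sleep(5)                           # the 5-second wait node
    print(status["output"])                     # on success, holds the video URL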
224:34 Okay, there we go. So now status is succeeded. And what I'm going to do
224:37 is just view this video real quick. Hopefully this one came out nicely.
224:41 Let's take a look. Wow, this is awesome. Super clean. It's rotating really slowly. It's a full
224:49 10-second video. You can tell it's like a 3D image. This is awesome. Okay, cool.
224:54 So now if we test this if branch, we'll see that it's going to go down the other
224:57 one which is the false branch because it's actually completed. And now we can
225:01 with confidence shoot off the email with our materials. So I'm going to grab a
225:04 Gmail node. I'm going to click send a message. And we are going to have this
225:08 configured hopefully because you've already set up your Google stuff. And
225:11 now who do we send this to? We're going to go grab that email from the original
225:14 form submission which is all the way down here. We're going to make the
225:19 subject, which I'm just going to say marketing materials, and then a colon.
225:24 And we'll just drag in the actual title of the product, which in here was
225:28 cologne. I'm changing the email type to text just because I want to. We're
225:32 going to make the body an expression. And we're just going to say like,
225:39 hey, here is your photo. And obviously this can be customized however you want.
225:43 But for the photo, what we have to do is grab that public URL that we generated
225:47 earlier. So right here there is the photo URL. Here is your video. And for
225:52 the video, we're going to drag in the URL we just got from the output of that
225:58 Runway get task check. So there is the video URL. And then I'm just going
226:02 to say cheers. Last thing I want to do is, down here, find append n8n attribution
226:07 and turn that off. This just ensures that the email doesn't say this email
226:12 was sent by n8n. And now if we hit test step right here, this is pretty much the
226:15 end of the process. And we can go ahead and check. Uh-oh. Okay, so not
226:19 authorized. Let me fix that real quick. Okay, so I just switched my credential
226:21 because I was using one that had expired. So now this should go through
226:25 and we'll go take a look at the email. Okay, so I did something wrong. I can
226:28 already tell what happened: this is supposed to be an expression and
226:31 dynamically come through as the title of the product, but we accidentally
226:35 left off a curly brace. So, if I come back into here and add one more
226:38 curly brace right here to the description, or sorry, the subject,
226:42 we should be good.
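For reference, a complete n8n expression wraps the variable in double curly braces — so with a hypothetical title field, the subject would read: Marketing materials: {{ $json.title }}. With one brace missing, n8n treats the whole thing as literal text instead of evaluating it.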
226:46 I'll hit test step again. And now we'll go take a look at that email. Okay, there we go. Now, we have the cologne and we have our photo
226:49 and our video. So, let's click into the video real quick. I'm just so amazed. This
226:55 is just so much fun. Look at the lighting and the reflections. It's
227:00 all just perfect. And then we'll click into the photo just in case we want to see the
227:05 actual image. And there it is. This also looks awesome. All right, so that's
227:09 going to do it for today's video. I hope you guys enjoyed this style of walking
227:12 step by step through some of the API calls and sort of my thought process as
40:48 going to talk about data types in n8n and what those look like. It's really
40:50 important to get familiar with this before we actually start automating
40:53 things and building agents and stuff like that. So, what I'm going to do is
40:57 just pull in a set node. As you guys know, this just lets us modify, add, or
41:01 remove fields. And it's very, very simple. We basically would just click on
41:05 this to add fields. We can add the name of the field. We choose the data type,
41:08 and then we set the value, whether that's a fixed value, which we'll be
41:13 looking at here, or if we're dragging in some sort of variable from the left-
41:15 hand side. But clearly, right now, we have no data incoming. We just have a
41:20 manual trigger. So, what I'm going to do is zoom in on the actual browser so we
41:24 can examine this data on the output a bit bigger and I don't have to just keep
41:27 cutting back and forth with the editing. So, as you can see, there's five main
41:31 data types that we have access to in n8n. We have a string, which is
41:35 basically just a fancy name for a word. As you can see, it's represented by
41:40 a little letter a. Then we have a number, which is represented by a pound
41:43 sign, or a hashtag, whatever you want to call it. It's pretty
41:47 self-explanatory. Then we have a boolean, which is basically just going to be true
41:50 or false. That's the only thing it can be. It's represented by a little
41:54 checkbox. We have an array which is just a fancy word for list. And we'll see
41:58 exactly what this looks like. And then we have an object which is probably the
42:01 most confusing one, which basically means it's just this big block which can have
42:05 strings in it, numbers in it. It can have booleans in it. It can have
42:09 arrays in it. And it can also have nested objects within objects. So we'll
42:12 take a look at that. Let's just start off real quick with the string. So let's
42:17 say a string would be a name and that would be my name. So if I hit test step
42:21 on the right-hand side in the JSON, it comes through as a key-value pair like we
42:26 talked about. Name equals Nate. Super simple. You can tell it's a string
42:30 because right here we have two quotes around the word Nate. So that represents
42:34 a string. Or you could go to the schema and you can see that with name equals
42:38 Nate, there's the little letter A and that basically says, okay, this is a
42:41 string. As you see, it matches up right here. Cool. So that's a string. Let's
42:46 switch over to a number. Now we'll just say we're looking at age and we'll throw
42:51 in the number 50. Hit test step. And now we see age equals 50 with the pound sign
42:55 right here as the symbol in the schema view. Or if we go to JSON view, we have
43:01 the key value pair age equals 50. But now there are no double quotes around
43:05 the actual number. It's green. So that's how we know it's not a string. This is a
43:10 number. And that's where you may run into some issues where, if you had
43:13 age coming through as a string, you wouldn't be able to do any
43:17 summarizations or filters, you know, like if age is greater than 50, send it
43:21 off this way. If it's less than 50, send it that way. In order to do that type of
43:24 filtering and routing, you would need to make sure that age is actually a number
43:30 variable type or data type. Cool. So there's age. Let's go to a boolean. So
43:35 we're going to basically just say adult. And that can only be true or false. You
43:39 see, I don't have the option to type anything here. It's only going to be
43:42 false or it's only going to be true. And as you can see, it'll come through.
43:45 It'll look like a string, but there's no quotes around it. It's green. And that's
43:49 how we know it's a boolean. Or we could go to schema, and we can see that
43:53 there's a checkbox rather than the letter A symbol. Now, we're going to move on to
43:58 an array. And this one's interesting, right? So, let's just say we want to
44:01 have a list of names. So, if I have a list of names and I was typing in my
44:05 name and I tried to hit test step, this is where you would run into an error
44:09 because it's basically saying, okay, the field called names, which we set right
44:13 here, it's expecting to get an array, but all we got was Nate, which is
44:17 basically a string. So, to fix this error, change the type for the field
44:21 names, or you can ignore type conversions. So say we were
44:25 to come down to the options and ignore type conversions. When we hit ignore
44:29 type conversions and tested the step, it basically just converted the field
44:32 called names to a string, because it could tell that this was a string
44:35 rather than an array. So let's turn that back off and let's actually see how we
44:39 could get this to work if we wanted to make an array. So like we know an array
44:44 just is a fancy word for a list. And in order for us to actually send through an
44:48 end and say, okay, this is a list, we have to wrap it in square brackets like
44:53 this. But we also have to wrap each item in the list in quotes. So I have to go
45:02 like this and go like that. And now this would pass through as a list of
45:02 different strings. And those are names. And so if I wanted to add another one
45:06 after the first item, I would put a comma. I put two quotes. And then inside
45:10 that I could put another name. Hit test step. And now you can see we're getting
45:14 this array that's made up of different strings and they're all going to be
45:17 different names. So I could expand that. I could close it out. We could drag
45:21 in different names. And in JSON, what that looks like is we have our key and
45:25 then we have two square brackets, which is basically exactly what we typed
45:29 right here. So that's how it's being
45:32 represented within these square brackets right here. Okay, cool. So the final one
45:36 we have to talk about is an object. And this one's a little more complex. So if
45:40 I was to hit test step here, it's going to tell us names expects an object, but
45:44 we got an array. So once again, you could come in here, ignore type
45:47 conversions, and then it would just basically come through as a string, but
45:50 it's not coming through as an array. So that's not how we want to do it. And I
45:55 don't want to mess with the actual like schema of typing in an object. So what
45:58 I'm going to do is go to ChatGPT. I literally just said, give me an example
46:02 JSON object to put into n8n. It gives me this example JSON object. I'm going
46:06 to copy that, come into the set node, and instead of manual mapping, I'm just
46:09 going to customize it with JSON and paste in the one ChatGPT just gave us.
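The object described below, reconstructed as JSON — the exact values are illustrative:

    {
      "name": "Nate Herk",
      "email": "nate@example.com",
      "company": "TrueHorizon",
      "interests": ["AI automation", "n8n", "YouTube content"],
      "project": {
        "title": "Demo build",
        "status": "In progress",
        "deadline": "2025-06-30"
      }
    }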
46:15 And when I hit test step, what we now see first of all in the schema view is we have one
46:18 item with you know this is an object and all this different stuff makes it up. So we
46:24 have a string, which is name: Nate Herk. We have a string, which is email:
46:27 nate@example.com. We have a string, which is company: TrueHorizon. Then we have an
46:33 array of interests within this object. So I could close this out. I could open
46:36 it up. And we have three interests: AI automation, n8n, and YouTube content. And
46:40 this is, you know, ChatGPT's long-term memory about me making this. And then we
46:45 also have an object within our object which is called project. And the interesting difference
46:51 here with an object or an array is that when you have an array of interests,
46:54 every single item in that array is going to be called interests[0],
46:58 interests[1], interests[2]. And by the way, this is three interests, but computers start
47:01 counting from zero. So that's why it says 0, 1, 2. But with an object, it
47:06 doesn't all have to be the same thing. So you can see in this
47:11 project object we have one string called title, we have one string called
47:15 status, and we have one string called deadline, and this all makes up
47:18 its own object. As you can see if we went to table view this is literally
47:22 just one item that's really easy to read. And you can tell that this is an
47:26 array because it goes 0, 1, 2. And you can tell that this is an object because it
47:29 has different fields in it. This is one item. It's one object. It's got
47:33 strings up top. It has no numbers actually. So the date right here, this
47:37 is coming through as a string variable type. We can tell because it's not
47:40 green. We can tell because it has double quotes around it. And we can also tell
47:43 because in schema it comes through with the letter A. But this is just how you
47:47 can see there's these different things that make up this object. And you can
47:52 even close them down in JSON view. We can see interest is an array that has
47:55 three items. We could open that up. We can see project is an object because
47:59 it's wrapped in curly braces, not square brackets, as you
48:05 can see. So, there's a difference. And I know this wasn't super detailed and it's
48:09 just something really really important to know heading into when you actually
48:13 start to build stuff out because you're probably going to get some of those
48:15 errors where you're like, you know, blank expects an object but got this or
48:19 expects an array and got this. So, just wanted to make sure I came in here and
48:23 threw that module at you guys and hopefully it'll save you some headaches
48:26 down the road. Real quick, guys, if you want to be able to download all the
48:29 resources from this video, they'll be available for free in my free Skool
48:32 community, which will be the link in the pinned comment. There'll be a zip file
48:36 in there that has all 23 of these workflows, as you can see, and also two
48:40 PDFs at the bottom, which are covered in the video. So, like I said, join the
48:43 free Skool community. Not only does it have all of my YouTube resources, but
48:46 it's also a really quickly growing community of people who are obsessed
48:49 with AI automation and using n8n every day. All you'll have to do is search for
48:53 the title of this video using the search bar or you can click on YouTube
48:56 resources and find the post associated with this video. And then you'll have
48:59 the zip file right here to download which once again is going to have all 23
49:04 of these n8n workflow JSONs and two PDFs. And there may even be some bonus files
49:07 in here. You'll just have to join the free Skool community to find out. Okay,
49:11 so we talked about AI agents. We talked about AI workflows. We've gotten into
49:14 n8n and set up our account. We understand workflows, nodes, triggers,
49:19 JSON, stuff like that, and data types. Now, it's time to use all that stuff
49:21 that we've talked about and start applying it. So, we're going to head
49:24 into this next portion of this course, which is going to be about step-by-step
49:26 builds, where I'm going to walk you through every single step live, and
49:31 we'll have some pretty cool workflows set up by the end. So, let's get into
49:34 it. Today, we're going to be looking at three simple AI workflows that you can
49:37 build right now to get started learning n8n. We're going to walk through
49:40 everything step by step, including all of the credentials and the setups. So,
49:43 let's take a look at the three workflows we're going to be building today. All
49:46 right, the first one is going to be a RAG pipeline and chatbot. And if you
49:50 don't know what RAG means, don't worry. We're going to explain it all. But at a
49:52 high level, what we're doing is we're going to be using Pinecone as a vector
49:55 database. If you don't know what a vector database is, we'll break it down.
49:58 We're going to be using Google Drive. We're going to be using Google Docs. And
50:01 then something called OpenRouter, which lets us connect to a bunch of different
50:05 AI models, like OpenAI's models or Anthropic's models. The second workflow
50:08 we're going to look at is a customer support workflow that's kind of going to
50:11 be building off of the first one we just built. Because in the first workflow,
50:14 we're going to be putting data into a Pinecone vector database. And in this
50:17 one, we're going to use that data in there in order to respond to customer
50:21 support related emails. So, we'll already have had Pinecone set up, but
50:24 we're going to set up our credentials for Gmail. And then we're also going to
50:28 be using an n8n AI agent as well as OpenRouter once again. And then finally,
50:31 we're going to be doing LinkedIn content creation. And in this one, we'll be
50:35 using an n8n AI agent and OpenRouter once again, but we'll have two new
50:38 credentials to set up. The first one being Tavily, which is going to let us
50:41 search the web. And then the second one will be Google Sheets where we're going
50:44 to store our content ideas, pull them in, and then have the content written
50:49 back to that Google sheet. So by the end of this video, you're going to have
50:51 three workflows set up and you're going to have a really good foundation to
50:55 continue to learn more about n8n. You'll already have gotten a lot of
50:57 credentials set up and understand what goes into connecting to different
51:00 services. One of the trickiest being Google. So we'll walk through that step
51:03 by step and then you'll have it configured and you'll be good. And then
51:05 from there, you'll be able to continuously build on top of these three
51:08 workflows that we're going to walk through together because there's really
51:11 no such thing as a finished product in the space. Different AI models keep
51:14 getting released and keep getting better. There's always ways to improve
51:17 your templates. And the cool thing about building workflows in n8n is that you
51:20 can make them super customized for exactly what you're looking for. So, if
51:24 this sounds good to you, let's hop into that first workflow. Okay, so for this
51:27 first workflow, we're building a RAG pipeline and chatbot. And so if that
51:31 sounds like a bunch of gibberish to you, let's quickly understand what RAG is and
51:36 what a vector database is. So RAG stands for retrieval-augmented generation. And
51:40 in the simplest terms, let's say you ask me a question and I don't actually know
51:43 the answer. I would just kind of Google it and then I would get the answer from
51:47 my phone and then I would tell you the answer. So in this case, when we're
51:50 building a RAG chatbot, we're going to be asking the chatbot questions and it's
51:53 not going to know the answer. So it's going to look inside our vector
51:56 database, find the answer, and then it's going to respond to us. And so when
52:00 we're combining the elements of RAG with a vector database, here's how it works.
52:03 So the first thing we want to talk about is actually what is a vector database.
52:07 So essentially this is what a vector database would look like. We're all
52:11 familiar with an x- and y-axis graph where you can plot points on a
52:14 two dimensional plane. But a vector database is a multi-dimensional graph of
52:19 points. So in this case, you can see this multi-dimensional space with all
52:23 these different points or vectors. And each vector is placed based on the
52:27 actual meaning of the word or words in the vector. So over here you can see we
52:31 have wolf, dog and cat. And they're placed similarly because the meaning of
52:35 these words are all animals. Whereas over here we have apple and
52:38 banana, where the meaning of the words is food, more specifically fruits. And that's
52:42 why they're placed over here together. So when we're searching through the
52:46 database, we basically vectorize a question the same way we would vectorize
52:50 any of these other points. And in this case, we were asking for a kitten. And
52:53 then that query gets placed over here near the other animals and then we're
52:56 able to say okay well we have all these results now. So what that looks like and
53:00 what we'll see when we get into n8n is we have a document that we want to
53:03 vectorize. We have to split the document up into chunks because we can't put
53:07 a 50-page PDF in as one chunk. So it gets split up and then we're going to run it
53:10 through something called an embeddings model which basically just turns text
53:15 into numbers. Just as simple as that. And as you can see in this case let's
53:18 say we had a document about a company. We have company data, finance data, and
53:22 marketing data. And they all get placed differently because they mean different
53:26 things. And the context of those chunks is different. And then this
53:30 visual down here is just kind of how an LLM or in this case, this agent takes
53:34 our question, turns it into its own question. We vectorize that using the
53:38 same embeddings model that we used up here to vectorize the original data. And
53:42 then because it gets placed here, it just grabs back any vectors that are
53:46 nearest, maybe like the nearest four or five, and then it brings it back in
53:49 order to respond to us. I don't want to dive too much into this or overcomplicate it, but hopefully this all makes sense.
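If you want to see the idea in code, here's a minimal Python sketch of embed-and-compare using OpenAI's text-embedding-3-small (the same model we'll pick later) — the words are the ones from the visual:

    # pip install openai numpy
    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    chunks = ["wolf", "dog", "cat", "apple", "banana"]
    vectors = embed(chunks)
    query = embed(["kitten"])[0]

    # cosine similarity — the "closeness" that places kitten near the animals
    scores = vectors @ query / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(query))
    print(sorted(zip(scores, chunks), reverse=True)[:3])   # nearest neighbors first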
53:53 Cool. So now that we
53:57 understand that, let's actually start building this workflow. So what we're
53:59 going to do here is we are going to click on add first step because every
54:03 workflow needs a trigger that basically starts the workflow. So, I'm going to
54:08 type in Google Drive because what we're going to do is we are going to pull in a
54:12 document from our Google Drive in order to vectorize it. So, I'm going to choose
54:15 a trigger which is on changes involving a specific folder. And what we have to
54:19 do now is connect our account. As you can see, I'm already connected, but what
54:22 we're going to do is click on create new credential in order to connect our
54:25 Google Drive account. And what we have to do is go get a client ID and a
54:29 secret. So, what we want to do is click on open docs, which is going to bring us
54:33 to Naden's documents on how to set up this credential. We have a prerequisite
54:37 which is creating a Google Cloud account. So I'm going to click on Google
54:40 Cloud account and we're going to set up a new project. Okay. So I just signed
54:43 into a new account and I'm going to set up a whole project and walk through the
54:46 credentials with you guys. You'll click up here. You'll probably have something
54:49 up here that says like new project and then you'll click into new project. All
54:54 we have to do now is name it, and you'll be able to start for free, so
54:56 don't worry about that yet. So I'm just going to name this one demo and I'm
55:00 going to create this new project. And now up here in the top right you're
55:02 going to see that it's kind of spinning up this project. and then we'll move
55:06 forward. Okay, so it's already done and now I can select this project. So now
55:10 you can see up here I'm in my new project called demo. I'm going to click
55:15 on these three lines in the top left and what we're going to do first is go to
55:18 APIs and services and click on enabled APIs and services. And what we want to
55:22 do is add the ones we need. And so right now all I'm going to do is add Google
55:27 Drive. And you can see it's going to come up with Google Drive API. And then
55:31 all we have to do is really simply click enable. And there we I just enabled it.
55:35 So you can see here the status is enabled. And now we have to set up
55:37 something called our OAuth consent screen, which basically is just going to
55:43 let n8n know that Google Drive and n8n are allowed to talk to each other
55:46 and have permissions. So right here, I'm going to click on OAuth consent screen.
55:49 We don't have one yet, so I'm going to click on get started. I'm going to give
55:53 it a name. So we're just going to call this one demo. Once again, I'm going to
55:57 add a support email. I'm going to click on next. Because I'm not using a Google
56:01 Workspace account, I'm just using a, you know, nate88@gmail.com. I'm going to have to
56:05 choose external. I'm going to click on next. For contact information, I'm
56:08 putting the same email as I used to create this whole project. Click on next
56:12 and then agree to terms. And then we're going to create that OAuth consent
56:17 screen. Okay, so we're not done yet. The next thing we want to do is we want to
56:20 click on audience. And we're going to add ourselves as a test user. So we
56:23 could also make the app published by publishing it right here, but I'm just
56:26 going to keep it in test. And when we keep it in test mode, we have to add a
56:30 test user. So I'm going to put in that same email from before. And this is
56:32 going to be the email of the Google Drive we want to access. So I put in my
56:36 email. You can see I saved it down here. And then finally, all we need to do is
56:40 come back into here. Go to clients. And then we need to create a new client.
56:45 We're going to click on web app. We're going to name it whatever we want. Of
56:47 course, I'm just going to call this one demo once again. And now we need to
56:52 basically add a redirect URI. So if you click back into n8n, we have one right
56:57 here. So, we're going to copy this, go back into cloud, and we're going to add
57:00 a URI and paste it right in there, and then hit create, and then once that's created,
57:06 it's going to give us an ID and a secret. So, all we have to do is copy
57:10 the ID, go back into n8n and paste that right here. And then we need to go grab our
57:16 secret from Google Cloud, and then paste that right in there. And now we have a
57:19 little button that says sign in with Google. So, I'm going to open that up.
57:22 It's going to pull up a window to have you sign in. Make sure you sign in with
57:25 the same account that you just had yourself as a test user. That one. And
57:30 then you'll have to continue. And then here it's basically saying what
57:33 permissions does n8n have to your Google Drive? So I'm just going
57:36 to select all. I'm going to hit continue. And then we should be good.
57:39 Connection successful and we are now connected. And you may just want to
57:43 rename this credential so you know which email it is. So now I've
57:47 saved my credential and we should be able to access the Google Drive now. So,
57:49 what I'm going to do is I'm going to click on this list and it's going to
57:52 show me the folders that I have in Google Drive. So, that's awesome. Now,
57:56 for the sake of this video, I'm in my Google Drive and I'm going to create a
57:59 new folder. So, new folder. We're going to call this one FAQ. Create this one
58:05 because we're going to be uploading an FAQ document into it. So, here's my FAQ
58:09 folder right here. And then what I have is, down here, I made a policy and
58:14 FAQ document which looks like this. We have some store policies and then we
58:17 also have some FAQs at the bottom. So, all I'm going to do is I'm going to drag
58:21 in my policy and FAQ document into that new FAQ folder. And then if we come into
58:27 n8n, we click on the new folder that we just made. So, it's not here yet. I'm
58:30 just going to click on these dots and click on refresh list. Now, we should
58:35 see the FAQ folder. There it is. Click on it. We're going to click on what are
58:38 we watching this folder for. I'm going to be watching for a file created. And
58:43 then, I'm just going to hit fetch test event. And now we can see that we did in
58:47 fact get something back. So, let's make sure this is the right one. Yep. So,
58:50 there's a lot of nasty information coming through. I'm going to switch over
58:52 here on the right hand side. This is where we can see the output of every
58:55 node. I'm going to click on table and I'm just going to scroll over and there
59:00 should be a field called file name. Here it is. Name. And we have policy and FAQ
59:04 document. So, we know we have the right document in our Google Drive. Okay. So,
59:08 perfect. Every time we drop in a new file into that Google folder, it's going
59:11 to start this workflow. And now we just have to configure what happens after the
59:15 workflow starts. So, all we want to do really is we want to pull this data into
59:20 n8n so that we can put it into our Pinecone database. So, off of this trigger,
59:24 I'm going to add a new node and I'm going to grab another Google Drive node
59:28 because what happened is basically we have the file ID and the file name, but
59:32 we don't have the contents of the file. So, we're going to do a download file
59:35 node from Google Drive. I'm going to rename this one and just call it
59:38 download file just to keep ourselves organized. We already have our
59:41 credential connected and now it's basically saying what file do you want
59:45 to download. We have the ability to choose from a list. But if we choose
59:48 from the list, it's going to be this file every time we run the workflow. And
59:52 we want to make this dynamic. So we're going to change from list to by ID. And
59:56 all we have to do now is we're going to look on the lefth hand side for that
59:59 file that we just pulled in. And we're going to be looking for the ID of the
60:02 file. So I can see that I found it right down here in the spaces array because we
60:06 have the name right here and then we have the ID right above it. So, I'm
60:10 going to drag ID, put it right there in this field. It's coming through as a
60:14 variable called json.id. And that's just basically referencing, you know,
60:17 whenever a file comes through on the Google Drive trigger, I'm going to use
60:21 the variable json.id, which will always pull in the file's ID. So, then I'm going
60:25 to hit test step and we're going to see that we're going to get the binary data
60:28 of this file over here that we could download. And this is our policy and FAQ
60:33 document. Okay. So, there's step two. We have the file downloaded in n8n. And
60:37 now it's just as simple as putting it into Pinecone. So before we do that,
60:41 let's head over to pinecone.io. Okay, so now we are in pinecone.io, which is
60:45 a vector database provider. You can get started for free. And what we're going
60:48 to do is sign up. Okay, so I just got logged in. And once you get signed up,
60:52 you should see a page similar to this. It's a get started page. And what
60:55 we want to do is you want to come down here and click on, you know, begin setup
60:59 because we need to create an index. So I'm going to click on begin setup. We
61:03 have to name our index. So you can call this whatever you want. We have to
61:08 choose a configuration for a text model. We have to choose a configuration for an
61:11 embeddings model, which is sort of what I talked about right in here. This is
61:15 going to turn our text chunks into a vector. So what I'm going to do is I'm
61:19 going to choose text-embedding-3-small from OpenAI. It's the most cost
61:23 effective OpenAI embedding model. So I'm going to choose that. Then I'm going to
61:26 keep scrolling down. I'm going to keep mine as serverless. I'm going to keep
61:29 AWS as the cloud provider. I'm going to keep this region. And then all I'm going
61:33 to do is hit create index. Once you create your index, it'll show up right
61:36 here. But we're not done yet. You're going to click into that index. And so I
61:39 already obviously have stuff in my vector database. You won't have this.
61:41 What I'm going to do real quick is just delete this information out of it. Okay.
61:45 So this is what yours should look like. There's nothing in here yet. We have no
61:48 name spaces and we need to get this configured. So on the left hand side, go
61:53 over here to API keys and you're going to create a new API key. Name it
61:58 whatever you want, of course. Hit create key. And then you're going to copy that
62:02 value. Okay, back in n8n, we have our API key copied. We're going to add a new
62:07 node after the download file and we're going to type in Pinecone, and we're
62:10 going to grab a Pinecone vector store. Then we're going to select add documents
62:14 to a vector store and we need to set up our credential. So up here, you won't
62:18 have these and you're going to click on create new credential. And all we need
62:21 to do here is just an API key. We don't have to get a client ID or a secret. So
62:24 you're just going to paste in that API key. Once that's pasted in there and
62:27 you've given it a name so you know what this means. You'll hit save and it
62:30 should go green and we're connected to Pinecone, and you can make sure that
62:34 you're connected by clicking on the index and you should have the name of
62:37 the index right there that we just created. So I'm going to go ahead and
62:40 choose my index. I'm going to click on add option and we're going to be
62:43 basically adding this to a Pinecone namespace, which, back in
62:48 Pinecone, if I go into my database, my index, and I click in here, you can see
62:51 that we have something called namespaces. And this basically lets us
62:55 put data into different folders within this one index. So if you don't specify
62:59 a namespace, it'll just come through as default, and that's going to be fine. But
63:02 we want to get into the habit of having our data organized. So I'm going to go
63:05 back into n8n and I'm just going to name this namespace FAQ because that's
63:10 the type of data we're putting in. And now I'm going to click out of this node.
63:13 So you can see the next thing that we need to do is connect an embeddings
63:17 model and a document loader. So let's start with the embeddings model. I'm
63:20 going to click on the plus and I'm going to click on Embeddings OpenAI. And
63:23 actually, one thing I left out of the Excalidraw is that we also will need
63:27 to go get an OpenAI key. So, as you can see, when we need to connect a
63:30 credential, you'll click on create new credential and we just need to get an
63:33 API key. So, you're going to type in OpenAI API. You'll click on this first
63:37 link here. If you don't have an account yet, you'll sign in. And then once you
63:40 sign up, you want to go to your dashboard. And then on the left-hand
63:44 side, very similar to Pinecone, you'll click on API keys. And then we're
63:47 just going to create a new key. So you can see I have a lot. We're going to
63:49 make a new one. And I'm calling everything demo, but this is going to be
63:53 demo number three. Create new secret key. And then we have our key. So we're
63:56 going to copy this and we're going to go back into n8n and paste that right here.
63:59 We've pasted in our key and given it a name. And now we'll hit save and we
64:03 should go green. Just keep in mind that you may need to top up your account with
64:06 a few credits in order for you to actually be able to run this model,
64:10 so just keep that in mind. So then what's really important to remember is
64:13 when we set up our Pinecone index, we used the embedding model
64:17 text-embedding-3-small from OpenAI. So that's why we have to make sure this matches right
64:20 here or this automation is going to break. Okay, so we're good with the
64:24 embeddings and now we need to add a document loader. So I'm going to click
64:27 on this plus right here. I'm going to click on default data loader and we have
64:31 to just basically tell Pinecone the type of data we're putting in. And so
64:35 you have two options, JSON or binary. In this case, it's really easy because we
64:39 downloaded a Google Doc, which is on the left-hand side. You can tell it's
64:42 binary because up top right here on the input, we can switch between JSON and
64:47 binary. And if we were uploading JSON, all we'd be uploading is this gibberish
64:51 nonsense information that we don't need. We want to upload the binary, which is
64:55 the actual policy and FAQ document. So, I'm just going to switch this to binary.
64:58 I'm going to click out of here. And then the last thing we need to do is add a
65:01 text splitter. So, this is what I was talking about back in the Excalidraw: we
65:05 have to split the document into different chunks. And so that's what
65:08 we're doing here with this text splitter. I'm going to choose a
65:12 recursive character text splitter. There's three options and I won't dive
65:15 into the difference right now, but recursive character text splitter will
65:18 help us keep context of the whole document as a whole, even though we're
65:22 splitting it up. So for now, chunk size is a thousand. That's just basically how
65:25 many characters I'm going to put in each chunk. And then, is there going to
65:29 be any overlap between our chunks of characters? Right now I'm just going
65:33 to leave the defaults of 1,000 and 0.
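As a rough sketch of what the splitter does with those two numbers (the real recursive character splitter also prefers paragraph and sentence boundaries):

    def split_text(text, chunk_size=1000, overlap=0):
        # naive character splitter: step through the text chunk_size - overlap at a time
        chunks, start = [], 0
        step = max(chunk_size - overlap, 1)
        while start < len(text):
            chunks.append(text[start:start + chunk_size])
            start += step
        return chunks

    print(len(split_text("x" * 4000)))   # a ~4,000-character doc -> 4 chunks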
65:37 So that's it. You just built your first automation for a RAG pipeline. And now we're just going to click on the play
65:40 button above the Pinecone vector store node in order to see it get vectorized.
65:43 So we're going to basically see that we have four items that have left this
65:47 node. So this is basically telling us that our Google doc that we downloaded
65:51 right here. So this document got turned into four different vectors. So if I
65:55 click into the text splitter, we can see we have four different responses and
65:59 this is the contents that went into each chunk. So we can just verify this by
66:03 heading real quick into Pinecone, we can see we have a new namespace that we
66:07 created called FAQ. Number of records is four. And if we head over to the
66:10 browser, we can see that we do indeed have these four vectors. And then the
66:13 text field right here, as you can see, are the characters that were put into
66:17 each chunk. Okay, so that was the first part of this workflow, but we're going
66:21 to real quick just make sure that this actually works. So we're going to add a
66:24 RAG chatbot. Okay. So, what I'm going to do now is hit tab, or I could also
66:27 have just clicked on the plus button right here, and I'm going to type in AI
66:31 agent, and that is what we're going to grab and pull into this workflow. So, we
66:35 have an AI agent, and let's actually just put him right over here. And
66:41 now what we need to do is we need to set up how are we actually going to talk to
66:44 this agent. And we're just going to use the default n8n chat window. So, once
66:47 again, I'm going to hit tab. I'm going to type in chat. And we have a chat
66:51 trigger. And all I'm going to do is over here, I'm going to grab the plus and I'm
66:54 going to drag it into the front of the AI agent. So basically now whenever we
66:58 hit open chat and we talk right here, the agent will read that chat message.
67:02 And we know this because if I click into the agent, we can see the user message
67:07 is looking for one in the connected chat trigger node, which we have right here
67:10 connected. Okay, so the first step with an AI agent is we need to give it a
67:14 brain. So we need to give it some sort of AI model to use. So we're going to
67:18 click on the plus right below chat model. And what we could do now is we
67:22 could set up an OpenAI chat model because we already have our API key from
67:25 OpenAI. But what I want to do is click on OpenRouter, because this is going to
67:30 allow us to choose from all different chat models, not just OpenAI's. So we
67:33 could do Claude, we could do Google, we could do Perplexity. We have all these
67:36 different models in here which is going to be really cool. And in order to get
67:39 an OpenRouter account, all you have to do is go sign up and get an API key. So
67:43 you'll click on create new credential and you can see we need an API key. So
67:47 you'll head over to openrouter.ai. You'll sign up for an account. And then all you
67:50 have to do is in the top right, you're going to click on keys. And then once
67:54 again, kind of the same as all the other ones. You're going to create a new
67:57 key. You're going to give it a name. You're going to click create. You have a
68:01 secret key. You're going to click copy. And then we go back into n8n and
68:04 paste it in here, give it a name. And then hit save. And we should go green.
68:07 We've connected to OpenRouter. And now we have access to any of these different
68:12 chat models. So, in this case, let's use Claude 3.5
68:19 Sonnet. And this is just to show you guys you can connect to different ones.
68:22 But anyways, now we could click on open chat. And actually, let me make sure you
68:26 guys can see him. If we say hello, it's going to use its brain, Claude 3.5 Sonnet.
68:30 And now it responded to us. Hi there. How can I help you? So, just to validate
68:34 that our information is indeed in the Pinecone vector store, we're going to
68:38 click on a tool under the agent. We're going to type in Pinecone and grab a
68:43 Pinecone vector store, and we're going to grab the account that we just
68:46 selected. So, this was the demo I just made. We're going to give it a name. So,
68:50 in this case, I'm just going to say knowledge base. We're going to give a description.
68:59 Call this tool to access the policy and FAQ database. So, we're basically just
69:03 describing to the agent what this tool does and when to use it. And then we
69:08 have to select the index and the namespace for it to look inside of. So the
69:11 index is easy. We only have one; it's called sample. But now, this is important,
69:14 because if you don't give it the right namespace, it won't find the right
69:19 information. So we called ours FAQ. If you remember, in our Pinecone we
69:23 have a namespace, and we have FAQ right here. So that's why we're doing FAQ. And
69:26 now it's going to be looking in the right spot. So, before we can chat with
69:30 it, we have to add an embeddings model to our Pinecone vector store — the
69:33 same thing as before: we're going to grab OpenAI, and we're going to use
69:37 text-embedding-3-small and the same credential you just made. And now we're going to be
69:41 good to go to chat with our RAG agent.
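Roughly what the agent's Pinecone tool does per question: embed the query with the same model used at ingest, then search the right namespace. A sketch with the OpenAI and Pinecone Python clients — keys are placeholders; index "sample" and namespace "FAQ" match the demo.

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI(api_key="<YOUR_OPENAI_KEY>")
index = Pinecone(api_key="<YOUR_PINECONE_KEY>").Index("sample")

# Embed the user's question with text-embedding-3-small.
question = "What is our warranty policy?"
vector = openai_client.embeddings.create(
    model="text-embedding-3-small", input=question
).data[0].embedding

# Query the FAQ namespace; a wrong namespace silently finds nothing useful.
result = index.query(vector=vector, top_k=3, namespace="FAQ", include_metadata=True)
for match in result.matches:
    print(match.score, match.metadata)
```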
69:44 So, looking back in the document, we can see we have some different stuff. I'm going to ask this chatbot what the
69:48 warranty policy is. So I'm going to open up the chat window and say, "What is our
69:54 warranty policy?" Send that off. And we should see that it's going to use its
69:57 brain as well as the vector store in order to create an answer for us, because
70:00 it didn't know by itself. So, there we go — it just finished up, and it said: based on the information
70:06 from our knowledge base, here's the warranty policy. We have one-year
70:10 standard coverage. We have, you know, this email for claims processing. You
70:14 must provide proof of purchase. And for warranty exclusions — things that aren't covered —
70:18 damage due to misuse, water damage, blah blah blah. Back in the policy
70:22 documentation, we can see that that is exactly what we have in our knowledge
70:26 base for warranty policy. So, just because I don't want this video to go
70:29 too long, I'm not going to do more tests, but this is where you can get in
70:31 there and make sure it's working. One thing to keep in mind is that within the
70:34 agent, we didn't give it a system prompt. And a system prompt is
70:38 basically a message that tells the agent how to do its job. So what you
70:42 could do, if you're having issues here, is tell the agent the
70:45 name of our tool, which is called knowledgebase. You could tell the agent,
70:49 in the system prompt: hey, your job is to help users answer questions about
70:54 our policy database. You have a tool called knowledgebase; you
70:58 need to use that in order to help them answer their questions. And that will
71:01 help you refine the behavior of how this agent acts.
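To make that concrete, here's one possible wording for such a system prompt. This exact text is only an illustration, not something the agent node ships with:

```
You are a helpful support assistant.
Your job is to help users answer questions about our policy database.
You have a tool called knowledgebase that searches our policy and FAQ
information. Always call the knowledgebase tool before answering, and
base your answer only on what it returns. If the tool returns nothing
relevant, say you don't know instead of guessing.
```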
71:05 All right, so the next one we're doing is a customer support workflow. And as always, you have to
71:08 figure out: what is the trigger for my workflow? In this case, it's going to be
71:12 triggered by a new email received. So I'm going to click on add first step.
71:16 I'm going to type in Gmail, grab that node, and we have a trigger, which is on
71:19 message received, right here. And we're going to click on that. So what we have
71:23 to do now is obviously authorize ourselves. So we're going to click on
71:26 create new credential right here. And all we have to do here is use OAuth2. So
71:30 all we have to do is click on sign in. But before we can do that, we have to
71:33 come over to our Google Cloud once again. And now we have to make sure we
71:36 enable the Gmail API. So we'll click on Gmail API. And it'll be really simple.
71:40 We'll just have to click on enable. And now we should be able to do that OAuth
71:43 connection and actually sign in. You'll click on the account that you want to
71:46 access the Gmail. You'll give it access to everything. Click continue. And then
71:50 we're going to be connected as you can see. And then you'll want to name this
71:54 credential as always. Okay. So now we're using our new credential. And what I'm
71:57 going to do is if I hit fetch test event. So now we are seeing an email
72:01 that I just got in this inbox, which in this case was "n8n.cloud was granted
72:05 access to your Google account," blah blah blah. So that's what we just got.
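Underneath the n8n credential, that trigger is just the Gmail API. A rough sketch with google-api-python-client, assuming you've already completed an OAuth flow and have an authorized token saved as token.json (that filename is an assumption):

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Load a previously authorized user token (produced by your OAuth sign-in).
creds = Credentials.from_authorized_user_file("token.json")
service = build("gmail", "v1", credentials=creds)

# List the newest message in the inbox, then fetch it in full so the body
# isn't truncated — the analogue of turning "simplify" off in the node.
newest = service.users().messages().list(userId="me", maxResults=1).execute()
msg_id = newest["messages"][0]["id"]
message = service.users().messages().get(userId="me", id=msg_id, format="full").execute()
print(message["snippet"])
```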
72:09 Okay. So I just sent myself a different email and I'm going to fetch that email
72:13 now from this inbox. And we can see that the snippet says what is the privacy
72:17 policy? I'm concerned about my data and passwords. And what we want to do is we
72:21 want to turn off simplify, because what this button is doing is it's going to
72:24 take the content of the email and basically cut it off. So in
72:28 this case, it didn't matter, but if you're getting long emails, it's going
72:30 to cut off some of the email. So if we turn off simplify and hit fetch test event once
72:34 again, we're now going to get a lot more information about this email, but we're
72:37 still going to be able to access the actual content, which is right here. We
72:41 have the text, what is privacy policy? I'm concerned about my data and
72:44 passwords. Thank you. And then you can see we have other data too like what the
72:48 subject was, who the email is coming from, what their name is, all this kind
72:52 of stuff. But the idea here is that we are going to be creating a workflow
72:56 where if someone sends an email to this inbox right here, we are going to
72:59 automatically look up the customer support policy and respond back to them
73:02 so we don't have to. Okay. So the first thing I'm actually going to do is pin
73:05 this data just so we can keep it here for testing. Which basically means
73:08 whenever we rerun this, it's not going to go look in our inbox. It's just going
73:12 to keep this email that we pulled in, which helps us for testing, right? Okay,
73:16 cool. So, the next step here is we need to have AI basically filter to see is
73:21 this email customer support related? If yes, then we're going to have a response
73:24 written. If no, we're going to do nothing. Maybe the use case would
73:28 be that we're giving it access to an inbox where we're only
73:32 getting customer support emails, but sometimes that's not the case. And
73:35 let's just say we wanted to create this as sort of an inbox manager where
73:38 we can route off to different logic based on the type of email. So that's
73:41 what we're going to do here. So I'm going to click on the plus after the
73:44 Gmail trigger and I'm going to search for a text classifier node. And what
73:49 this does is it's going to use AI to read the incoming email and then
73:53 determine what type of email it is. So because we're using AI, the first thing
73:56 we have to do is connect a chat model. We already have our open router
73:59 credential set up. So I'm going to choose that. I'm going to choose the
74:01 credential, and for this one, let's just keep it with GPT-4o mini. And now
74:07 this node actually has AI. I'm going to click into the text classifier.
74:09 And the first thing we see is that there's a text to classify. So all we
74:13 want to do here is we want to grab the actual content of the email. So I'm
74:17 going to scroll down. I can see here's the text, which is the email content.
74:20 We're going to drag that into this field. And now every time a new email
74:25 comes through, the text classifier is going to be able to read it because we
74:28 put in a variable which basically represents the content of the email. So
74:32 now that it has that, it still doesn't know what to classify it as or what its
74:35 options are. So we're going to click on add category. The first category is
74:39 going to be customer support. And then basically we need to give it a
74:42 description of what a customer support email could look like. So I wanted to
74:46 keep this one simple. It's pretty vague, but you could make this more detailed,
74:49 of course. I just said: "An email that's related to helping out a
74:52 customer. They may be asking questions about our policies or questions about
74:56 our products or services." And what we can do is we can give it specific
74:59 examples of like here are some past customer support emails and here's what
75:02 they've looked like. And that will make this thing more accurate. But in this
75:05 case, that's all we're going to do. And then I'm going to add one more category
75:08 that's just going to be "Other." And for now, I'm just going to say: any email
75:14 that is not customer support related.
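The text classifier node is essentially one LLM call: hand the model the email text plus the category descriptions and ask for a single label. A minimal sketch via OpenRouter — the key is a placeholder, and the category descriptions mirror the ones we just typed:

```python
import requests

CATEGORIES = {
    "customer_support": "An email related to helping out a customer: questions "
                        "about our policies, products, or services.",
    "other": "Any email that is not customer-support related.",
}

def classify(email_text: str) -> str:
    """Return the single category name the model picks for this email."""
    rules = "\n".join(f"- {name}: {desc}" for name, desc in CATEGORIES.items())
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": "Bearer <YOUR_OPENROUTER_KEY>"},
        json={
            "model": "openai/gpt-4o-mini",
            "messages": [
                {"role": "system",
                 "content": f"Reply with exactly one category name from this list:\n{rules}"},
                {"role": "user", "content": email_text},
            ],
        },
        timeout=30,
    )
    return resp.json()["choices"][0]["message"]["content"].strip()

branch = classify("What is the privacy policy? I'm concerned about my data.")
# -> "customer_support", which routes the item down the support branch
```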
75:18 Okay, cool. So now when we click out of here, we can see we have two different branches coming off of this node, which
75:21 means when the text classifier decides, it's either going to send it off this
75:24 branch or it's going to send it down this branch. So let's quickly hit play.
75:28 It's going to be reading the email using its brain. And now you can see it has
75:32 output in the customer support branch. We can also verify by clicking into
75:35 here. And we can see the customer support branch has one item and the other branch has
75:39 no items. And just to keep ourselves organized right now, I'm going to click
75:42 on the other branch and I'm just going to add an operation that says do nothing
75:46 just so we can see, you know, what would happen if it went this way for now. But
75:49 now is where we want to configure the logic of having an agent be able to read
75:55 the email, hit the vector database to get relevant information and then help
75:58 us write an email. So I'm going to click on the plus after the customer support
76:02 branch. I'm going to grab an AI agent. So this is going to be very similar to
76:05 the way we set up our AI agent in the previous workflow. So, it's kind of
76:08 building on top of each other. And this time, if you remember in the previous
76:12 one, we were talking to it with a connected chat trigger node. And as you
76:15 can see here, we don't have a connected chat trigger node. So, the first thing
76:19 we want to do is change that setting to "Define below." And this is where you
76:22 would think, okay, what do we actually want the agent to read? We want it to
76:25 read the email. So, I'm going to do the exact same thing as before. I'm going to
76:29 go into the Gmail trigger node, scroll all the way down until we can find the
76:32 actual email content, which is right here, and just drag that right in.
76:35 That's all we're going to do. And then we definitely want to add a system
76:39 message for this agent. We are going to open up the system message and I'm just
76:42 going to click on expression so I can expand this up full screen. And we're
76:46 going to write a system prompt. Again, for the sake of the video, keeping this
76:49 prompt really concise, but if you want to learn more about prompting, then
76:52 definitely check out my communities linked down below as well as this video
76:56 up here and all the other tutorials on my channel. But anyways, what we said
76:59 here is we gave it an overview and instructions. The overview says you are
77:03 a customer support agent for TechHaven. Your job is to respond to incoming
77:06 emails with relevant information using your knowledgebase tool. And so when we
77:10 do hook up our Pine Cone vector database, we're just going to make sure
77:13 to call it knowledgebase because that's what the agent thinks it has access to.
77:17 And then for the instructions, I said your output should be friendly and use
77:20 emojis and always sign off as Mr. Helpful from TechHaven Solutions. And
77:24 then one more thing I forgot to do, actually, is we want to tell it what to
77:27 actually output. If we didn't tell it, it would probably output a
77:31 subject and a body. But what's going to happen is we're going to reply to the
77:34 incoming email. We're not going to create a new one, so we don't need a
77:38 subject. So I'm just going to say: output only the body content of the email.
77:44 So, then we'll give it a try and see what that prompt looks like. We may have to
77:47 come back and refine it, but for now we're good. And as you know, we have
77:51 to connect a chat model and then we have to connect our Pinecone. So first of
77:54 all, chat model: we're going to use OpenRouter. And just to show you guys we
77:58 can use a different type of model here, let's use something else. Okay, so
78:01 we're going to go with Google Gemini 2.0 Flash. And then we need to add the
78:04 Pinecone database. So, I'm going to click on the plus under tool. I'm going to search
78:09 for Pinecone Vector Store and grab that. The operation is going to be
78:13 retrieving documents as a tool for an AI agent. We're going to call this
78:20 knowledgeBase — capital B. And we're going to once again just say: call this tool to
78:27 access policy and FAQ information. We need to set up the index as well as the
78:31 namespace. So: sample, and then we're going to call the namespace
78:35 FAQ, because that's what it's called in our Pinecone, right here, as you can see.
78:38 And then we just need to add our embeddings model and we should be good
78:42 to go, which is the OpenAI text-embedding-3-small. So we're going to
78:46 hit the play above the AI agent and it's going to be reading the email. As you
78:49 can see once again in the prompt user message, it's reading the email: "What is
78:52 the privacy policy? I'm concerned about my data and my passwords. Thank you." So
78:56 we're going to hit the play above the agent. We're going to watch it use its
78:59 brain. We're going to watch it call the vector store. And we got an error. Okay.
79:04 So, I'm getting this error, right? It says "provider returned error." And
79:09 it's weird, because basically the reason it's erroring is our chat
79:12 model — and yet the chat model node goes green, right? So, anyways, what I
79:16 would do here, if you're experiencing that error, is — it means there's something
79:19 wrong with your key. So, I would go reset it. But for now, I'm just going to
79:22 show you the quick fix. I can connect an OpenAI chat model real quick. And I
79:27 can run this here and we should be good to go. So now it's going to actually
79:31 write the email and output. Super weird error, but I'm honestly glad I caught
79:34 that on camera to show you guys in case you face that issue because it could be
79:37 frustrating. So we should be able to look at the actual output, which is,
79:41 "Hey there, thank you for your concern about privacy policy. At Tech Haven, we
79:45 take your data protection seriously." So then it gives us a quick summary with
79:48 data collection, data protection, cookies. If we clicked into here and
79:51 went to the privacy policy, we could see that it is in fact correct. And then it
79:55 also was friendly and used emojis like we told it to right here in the system
79:59 prompt. And finally, it signed off as Mr. Helpful from Tech Haven Solutions,
80:02 also like we told it to. So, we're almost done here. The last thing that we
80:05 want to do is we want to have it actually reply to this person that
80:09 triggered the whole workflow. So, we're going to click on the plus. We're going
80:13 to type in Gmail. Grab a Gmail node and we're going to do reply to a message.
80:17 Once we open up this node, we already know that we have it connected because
80:20 we did that earlier. We need to configure the message ID, the message
80:24 type, and the message. And so all I'm going to do is first of all, email type.
80:28 I'm going to do text. For the message ID, I'm going to go all the way down to
80:31 the Gmail trigger. And we have an ID right here. This is the ID we want to
80:35 put into the message ID so that it responds in line on Gmail rather than
80:39 creating a new thread. And then for the message, we're going to just drag in the
80:43 output from the agent that we just had write the message. So, I'm going to grab
80:46 this output, put it right there. And now you can see this is how it's going to
80:49 respond in email. And the last thing I want to do is click on add option, "append
80:55 n8n attribution," and toggle that off, so that at the bottom of the
80:59 email it doesn't say this was sent by n8n. So finally, we'll hit this test
81:04 step, and we will see we get a success message that the email was sent.
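What "reply to a message" amounts to behind the node: send a new message that carries the original thread id, so Gmail keeps it in the same conversation. A hedged sketch using the `service` object from the earlier Gmail snippet; the subject line is a placeholder (in practice you'd mirror the original subject):

```python
import base64
from email.message import EmailMessage

def reply_in_thread(service, thread_id: str, to_addr: str, body_text: str):
    """Send a plain-text reply that stays in the original Gmail thread."""
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["Subject"] = "Re: your question"  # placeholder; mirror the original subject
    msg.set_content(body_text)
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    # Including threadId in the send body keeps the reply inline on Gmail.
    return service.users().messages().send(
        userId="me", body={"raw": raw, "threadId": thread_id}
    ).execute()
```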
81:07 And I'll head over to the email to show you guys. Okay, so here it is. This is the
81:11 one that we sent off to that inbox, and then this is the one that we just got
81:13 back. As you can see, it's in the same thread, and it has basically the privacy
81:19 policy outlined for us. Cool. So, that's workflow number two. A couple of ways we
81:22 could make this even better. One thing we could do is we could add a node right
81:25 here. And this would be another Gmail one. And we could basically add a label
81:31 to this email. So, if I grab add label to message, we would do the exact same
81:34 thing. We'd grab the message ID the same way we grabbed it earlier. So, now it
81:38 has the message ID of the email to actually label. And then we would just
81:41 basically be able to select the label we want to give it. So in this case, we
81:44 could give it the customer support label. We hit test step, we'll get
81:48 another success message. And then in our inbox, if we refresh, we will see that
81:51 that just got labeled as customer support. So you could add on more
81:55 functionality like that. And you could also down here create more sections. So
81:59 we could have finance, you know, a logic built out for finance emails. We could
82:02 have logic built out for all these other types of emails and um plug them into
82:07 different knowledge bases as well. Okay. So the third one we're going to do is a
82:11 LinkedIn content creator workflow. So, what we're going to do here is click on
82:14 add first step, of course. And ideally, in production, what this
82:17 workflow would start with is a schedule trigger. So, what you could do
82:20 is basically say every day I want this thing to run at 7:00 a.m. That way, I'm
82:23 always going to have a LinkedIn post ready for me at, you know, 7:30. I'll
82:27 post it every single day. And if you wanted it to actually be automatic,
82:30 you'd have to flick this workflow from inactive to active. And, you know, now
82:34 it says, um, your schedule trigger will now trigger executions on the schedule
82:37 you have defined. So now it would be working, but for the sake of this video,
82:40 we're going to turn that off and we are just going to be using a manual trigger
82:44 just so we can show how this works. Um, but it's the same concept, right? It
82:48 would just start the workflow. So what we're going to do from here is we're
82:51 going to connect a Google sheet. So I'm going to grab a Google sheet node. I'm
82:55 going to click on get rows and sheet and we have to create our credential once
82:58 again. So we're going to create a new credential. We're going to be able to do
83:02 an OAuth2 sign-in, but we're going to have to go back to Google Cloud and we're
83:05 going to have to make sure that we have the Google Sheets API
83:08 enabled. So, we'll come in here, we'll click enable, and now once this is good
83:12 to go, we'll be able to sign in using OAuth2. So, very similar to what we just
83:16 had to do for Gmail in that previous workflow. But now, we can sign in. So,
83:20 once again, choosing my email, allowing it to have access, and then we're
83:23 connected successfully, and then giving this a good name. And now, what we can
83:26 do is choose the document and the sheet that it's going to be pulling from. So,
83:29 I'm going to show you. I have one called LinkedIn posts, and I only have one
83:33 sheet, but let's show you the sheet real quick. So, LinkedIn posts, what we have
83:38 is a topic, a status, and a content. And we're just basically going to be pulling
83:42 in one row where the status equals to-do, and then we are going to um
83:46 create the content, upload it back in right here, and then we're going to
83:50 change the status to created. So, then this same row doesn't get pulled in
83:52 every day. So, how this is going to work is that we're going to create a filter.
83:56 So the first filter is going to be looking within the status column and it
84:01 has to equal to-do. And if we click on test step, we should see that we're
84:04 going to get like all of these items where there's a bunch of topics. But we
84:08 don't want that. We only want to get the first row. So at the bottom here, add
84:12 option: I'm going to say "return only first matching row" and check that on. We'll
84:15 test this again, and now we're only going to be getting that top row to
84:19 create content on. Cool. So we have our first step here, which is just getting
84:23 the content from the Google Sheet.
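The "get rows" step, sketched with gspread and a service-account key file (n8n uses OAuth instead). The sheet name matches the demo; the lowercase column headers are an assumption about how the sheet is laid out:

```python
import gspread

gc = gspread.service_account(filename="service_account.json")
ws = gc.open("LinkedIn posts").sheet1

# Filter on status == "to-do" and keep only the first match,
# like "return only first matching row" in the node.
row = next(r for r in ws.get_all_records() if r["status"] == "to-do")
print(row["topic"])  # the topic we'll research and write about
```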
84:27 Now, what we're going to do is we need to do some web search on this topic in order to create that content. So, I'm going to
84:30 add a new node. This one's going to be called an HTTP request. So, we're going
84:34 to be making a request to a specific API. And in this case, we're going to be
84:38 using Tavily's API. So, go on over to tavily.com and create a free account.
84:42 You're going to get 1,000 searches for free per month. Okay, here we are in my
84:46 account. I'm on the free researcher plan, which gives me a thousand free
84:49 credits. And right here, I'm going to add an API key. We're going to name it,
84:54 create a key, and we're going to copy this value. And so, you'll start to see
84:56 that when you connect to different services, you always need to
85:00 have some sort of token or API key. But anyways, we're going to grab this in
85:03 a sec. What we need to do now is go to the documentation that we see right
85:06 here. We're going to click on API reference. And now we have right here.
85:10 This is going to be the API that we need to use in order to search the web. So,
85:14 I'm not going to really dive into like everything about HTTP requests right
85:17 now. I'm just going to show you the simple way that we can get this set up.
85:21 So the first thing that we're going to do is — we obviously see that we're using an
85:25 endpoint called Tavily search, and we can see it's a POST request, which is
85:28 different from a GET request, and we have all these different things we need
85:31 to configure, and it can be confusing. So all we want to do is, on the top right we
85:35 see this curl command. We're going to click on the copy button. We're going to
85:40 go back into our n8n, hit import curl, paste in the curl command, hit
85:46 import, and now the whole node has magically filled itself in. So
85:50 that's really awesome. And now we can sort of break down what's going on. So
85:53 for every HTTP request, you have to have some sort of method. Typically, when
85:58 you're sending data over to a service — which in this case we are: we're
86:01 sending data over to Tavily, it's going to search the web and then bring data
86:05 back to us — that's a POST request, because we're sending over body data. If
86:09 we were just trying to access
86:14 something like bestbuy.com and we just wanted to scrape the information, that
86:17 could just be a simple GET request, because we're not sending anything over.
86:20 Anyways, then we're going to have some sort of base URL and endpoint, which is
86:24 right here. The base URL we're hitting is api.tavily.com and then the endpoint
86:30 we're hitting is /search. So back in the documentation you can see right
86:34 here we have /search, but if we were doing an extract we would do
86:37 /extract. So that's how you can see the difference between the endpoints.
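The distinction in miniature: a GET just fetches a URL and sends nothing, while a POST carries a JSON body along with the request. URLs and the key below are only illustrative:

```python
import requests

# GET: fetch a page, send nothing over.
page = requests.get("https://www.bestbuy.com")

# POST: base URL api.tavily.com plus the /search endpoint, with body data.
search = requests.post(
    "https://api.tavily.com/search",
    headers={"Authorization": "Bearer <YOUR_TAVILY_KEY>"},
    json={"query": "who is Leo Messi"},  # the data we're sending over
)
```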
86:40 And then we have a few more things to configure. The first one, of course, is
86:44 our authorization. So in this case, we're doing it through a header
86:46 parameter. As you can see right here, the curl command set it up. Basically
86:51 all we have to do is replace this token with our API key from Tavily. So I'm
86:56 going to go back here, copy that key into n8n, I'm going to get rid of "token" and
87:00 just make sure that you have a space after the word "Bearer." And then you can
87:03 paste in your token. And now we are connected to Tavily. But we need to
87:07 configure our request before we send it off. So right here are the parameters
87:11 within our body request. And I'm not going to dive too deep into it — you can
87:13 go to the documentation if you want to understand it all — but the main thing,
87:17 really, is the query, which is what we're searching for. We have other things,
87:20 though, like the topic, which can be general or news. We have search depth. We have max
87:24 results. We have a time range. We have all this kind of stuff. Right now I'm
87:28 just going to leave everything here as default. We're only going to be getting
87:31 one result, we're going to be doing a general topic, and we're going to be doing a
87:34 basic search. But right now, if we hit test step, we should see that this is
87:37 going to work. It's going to be searching for "who is Leo Messi," and
87:40 here's sort of the answer we get back, as well as a URL. So this is an
87:45 actual website we could go to about Lionel Messi, and then some content from
87:51 that website, right?
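Putting the whole HTTP-request node together as one call: POST to the /search endpoint with the Bearer key in the header and the search parameters in the JSON body. The key is a placeholder, and the parameters mirror the node's settings from this build:

```python
import requests

resp = requests.post(
    "https://api.tavily.com/search",
    headers={"Authorization": "Bearer <YOUR_TAVILY_KEY>"},  # note the space after "Bearer"
    json={
        "query": "search the web for AI image generation",
        "topic": "general",
        "search_depth": "basic",
        "max_results": 3,
    },
    timeout=30,
)
# Each result carries a URL, a title, and some page content to feed the agent.
for hit in resp.json()["results"]:
    print(hit["url"], hit["title"])
```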
87:54 So, we are going to change this to an expression so that we can put a variable in here rather than a static, hard-coded "who is Leo
87:58 Messi." We'll delete that query, and all we're going to do is pull in our
88:02 topic. So, I'm just going to simply pull in the topic of AI image generation.
88:06 Obviously, it's a variable right here, but this is the result. And then we're
88:09 going to test step. And this should basically pull back an article about AI
88:14 image generation. And so, here is a DeepAI link. We'll go to it,
88:19 and we can see this is an AI image generator. So maybe this isn't exactly
88:23 what we're looking for. What we could do is basically hardcode in
88:28 "search the web for," and now the query is going to say "search
88:31 the web for AI image generation." We could come in here and say, yeah, actually,
88:34 you know let's get three results not just one. And then now we could test
88:37 that step and we're going to be getting a little bit different of a search
88:42 result. Um AI image generation uses text descriptions to create unique visuals.
88:45 And then now you can see we got three different URLs rather than just one.
88:49 Anyways, so that's our web search. And now that we have a web search based on
88:53 our defined topic, we just need to write that content. So I'm going to click on
88:58 the plus. I'm going to grab an AI agent. And once again, we're not giving it the
89:01 connected chat trigger node to look at. That's nowhere to be found. We're going
89:05 to feed in the research that was just done by Tavily. So, I'm going to click on
89:10 expression to open this up. I'm going to say article one with a colon and I'm
89:15 just going to drag in the content from article one. I'm going to say article 2
89:20 with a colon and just drag in the content from article 2. And then I'm
89:25 going to say article 3 colon and just drag in the content from the third
89:29 article. So now it's looking at all three article contents. And now we just
89:32 need to give it a system prompt on how to write a LinkedIn post. So open this
89:36 up, click on add option, click on system message, and now let's give it a prompt
89:41 about turning these three articles into a LinkedIn post.
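What the agent node does with the expression we just built, sketched as one function: assemble the user message from the three Tavily results, then ask Claude (via OpenRouter) for the post. The key and the system prompt content are placeholders:

```python
import requests

def write_post(articles: list[str], system_prompt: str) -> str:
    """Turn up to three article texts into one LinkedIn post."""
    # Mirrors the "Article 1: ... Article 2: ... Article 3: ..." expression.
    user_msg = "\n\n".join(
        f"Article {i}: {text}" for i, text in enumerate(articles, start=1)
    )
    resp = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": "Bearer <YOUR_OPENROUTER_KEY>"},
        json={
            "model": "anthropic/claude-3.5-sonnet",
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_msg},
            ],
        },
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]
```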
89:45 Okay, so I'm heading over to my custom GPT called Prompt Architect. If you want to access this,
89:48 you can get it for free by joining my free Skool community. You'll join
89:51 that — it's linked in the description — and then you can just search for Prompt
89:54 Architect and you should find the link. Anyways, real quick, it's just asking
89:58 for some clarification questions. So, anyways, I'm just shooting off a quick
90:01 reply and now it should basically be generating our system prompt for us. So,
90:05 I'll check in when this is done. Okay, so here is the system prompt. I am going
90:09 to just paste it in here and I'm just going to, you know, disclaimer, this is
90:12 not perfect at all. Like, I don't even want this tool section at all because we
90:16 don't have a tool hooked up to this agent. Um, we're obviously just going to
90:19 give it a chat model real quick. So, in this case, what I'm going to do is I'm
90:22 going to use Claude 3.5 Sonnet just because I really like the way that it
90:25 writes content. So, I'm using Claude through Open Router. And now, let's give
90:28 it a run and we'll just see what the output looks like. Um, I'll just click
90:31 into here while it's running and we should see that it's going to read those
90:34 articles and then we'll get some sort of LinkedIn post back. Okay, so here it is.
90:39 The creative revolution is here and it's AI powered. Gone are the days of hiring
90:42 expensive designers or struggling with complex software. Today's entrepreneurs
90:46 can transform ideas into stunning visuals instantly using AI image
90:50 generators. So, as you can see, we have a few emojis. We have some relevant
90:53 hashtags. And then at the end, it also said this post, you know, it kind of
90:56 explains why it made this post. We could easily get rid of that. If all we want
90:59 is the content, we would just have to throw that in the system prompt. But now
91:03 that we have the post that we want, all we have to do is send it back into our
91:07 Google sheet and update that it was actually made. So, we're going to grab
91:11 another sheets node. We're going to do update row and sheet. And this one's a
91:14 little different. It's not just grabbing stuff from a row; we're trying
91:18 to update stuff. So, we have to say what document we want, what sheet we want.
91:21 But now, it's asking us what column do we want to match on. So, basically, I'm
91:25 going to choose topic. And all we have to do is go all the way back down to the
91:28 sheet. We're going to choose the topic and drag it in right here. Which is
91:32 basically saying: okay, when this node gets called, match the row whose topic equals
91:37 AI image generation — which is a variable, obviously, so whatever
91:40 topic triggered the workflow is what's going to pop up here. We're going to
91:44 update that status. So, back in the sheets, we can see that the status is
91:47 currently to-do, and we need to change it to created in order for it to go
91:51 green. So, I'm just going to type in created — and obviously, you have to
91:53 spell this correctly, the same way you have it in your Google Sheets.
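The "update row" step in miniature: find the row whose topic matches, then write the new status and content back. Again a gspread sketch; the column positions assume the sheet is laid out topic | status | content:

```python
import gspread

gc = gspread.service_account(filename="service_account.json")
ws = gc.open("LinkedIn posts").sheet1

def mark_created(topic: str, post_content: str) -> None:
    cell = ws.find(topic)                      # locate the matching topic cell
    ws.update_cell(cell.row, 2, "created")     # status column flips to "created"
    ws.update_cell(cell.row, 3, post_content)  # content column gets the post
```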
91:56 And then for the content, all I'm going to do is drag in the output
92:00 of the AI agent. And as you can see, it's going to be spitting out the
92:03 result. And now if I hit test step and we go back into the sheet, we'll
92:06 basically watch this change. Now it's created, and now we have the content of
92:10 our LinkedIn post as well, with some justification for why it created the
92:14 post like this. And so, like I said, you could basically have this be some sort
92:17 of LinkedIn content-making machine where every day it's going to
92:21 run at 7:00 a.m. It's going to give you a post. And then what you could do also
92:24 is you can automate this part of it where you're basically having it create
92:27 a few new rows every day if you give it a certain sort of like general topic to
92:32 create topics on and then every day you can just have more and more pumping out.
92:35 So that is going to do it for our third and final workflow. Okay, so that's
92:39 going to do it for this video. I hope that it was helpful. You know, obviously
92:42 we connected to a ton of different credentials and a ton of different
92:46 services. We even made an HTTP request to an API called Tavily. Now, if you found
92:49 this helpful and you liked this sort of live step-by-step style and you're also
92:53 looking to accelerate your journey with n8n and AI automations, I would
92:56 definitely recommend checking out my paid community. The link for that is
92:58 down in the description. Okay, so hopefully those three workflows taught
93:02 you a ton about connecting to different services and setting up credentials.
93:05 Now, I'm actually going to throw in one more bonus step-by-step build, which is
93:09 actually one that I shared in my paid community a while back, and I wanted to
93:12 bring it to you guys now. So, definitely finish out this course, and if you're
93:14 still looking for some more and you like the way I teach, then feel free to check
93:17 out the paid community. The link for that's down in the description. We've
93:19 got a course in there that's even more comprehensive than what you're watching
93:22 right now on YouTube. We've also got a great community of people that are using
93:25 n8n to build AI automations every single day. So, I'd love to see you guys
93:28 in that community. But, let's move ahead and build out this bonus workflow. Hey
93:34 guys. So, today I wanted to do a step by step of an invoice workflow. And this is
93:39 because there's different ways to approach stuff like this, right? There's
93:42 the conversation of OCR. There's a conversation of maybe extracting text
93:46 from PDFs. Um, there's the conversation of if you're always getting invoices in
93:50 the exact same format, you probably don't need AI because you could use like
93:54 a code node to extract the different parameters and then push that through.
93:58 So, that's the kind of stuff we're going to talk about today. And I haven't shown
94:01 this one on YouTube — it's not like a typical YouTube build — but it's not an agent;
94:04 it's an AI-powered workflow. And I also wanted to talk about just the
94:07 foundational elements of connecting pieces, thinking about the workflow. So,
94:11 what we're going to do first, actually, is hop into Excalidraw real
94:14 quick. I'm going to create a new board, and we're just going to real quickly
94:20 wireframe out what we're doing. So, the first thing we're going to draw out here
94:24 is the trigger. We'll make this one yellow. We'll call this the
94:30 trigger. And what this is going to be is — invoice... sorry, we're going to do a new
94:41 um Google Drive. So the Google Drive node, it's going to be triggering the
94:45 workflow and it's going to be when a new invoice gets dropped into
94:49 um the folder that we're watching. So that's the trigger. From there, and like
94:53 I said, this is going to be a pretty simple workflow. From there, what we're
94:56 going to do is — basically, it's going to be a PDF. So — actually, let me just put
95:02 Google Drive over here. The first
95:07 thing to understand from here is: what do the invoices
95:12 look like? These are the questions that we're going to have. So the first one is "what
95:19 do the invoices look like?" — because that determines what happens next. So if
95:23 they are PDFs that happen every single time and they're always in the same
95:27 format, then next we'd want to do okay well we can just kind of extract the
95:33 text from this and then we can um use a code node to extract the information we
95:39 need per each parameter. Now, if it is a scanned invoice, where we're
95:44 maybe not as able to extract text from it or turn it into a text
95:48 doc, we'll probably have to do some OCR element. But if it's a PDF that's
95:54 generated by a computer — so we can extract the text, but the invoices are not going
95:57 to come through the same every time — that's what we have in this case. I
96:00 have two example invoices. So, overall we know we're looking for things like
96:04 business name, client name, invoice number, invoice date, due date, payment
96:07 method, bank details — stuff like that, right? But both of these are
96:10 formatted very differently. They both have the same information, but they're
96:15 formatted differently. So that's why we want to use an AI
96:19 information extractor node. So that's one of the main questions. The
96:22 other ones we'd think about would be: where do they go? So once we get
96:30 them, where do they go? Then there's the frequency of them coming in, and
96:34 also really any other action. So, building off of "where do they go," it's
96:42 also "what actions will we take?" Does that mean we're just going to
96:46 throw it in a CRM, or maybe a database,
96:49 or are we also going to send them an automated follow-up based on
96:53 the email that we extract from it and say, "Hey, we received your invoice.
96:56 Thanks"? So, what does that look like? Those are the questions we
96:59 were initially going to ask. And then that helps us pretty much plan out the
97:05 next steps. So, because we figured out we want to extract the same x amount of
97:21 fields — because we found out we want fields, but the formats may not be
97:31 consistent — we will use an AI information extractor. That is just a long
97:43 sentence, so let me shorten it up — or, sorry, make it a little smaller.
97:46 Okay, so we have that. Then that gets updated to our Google Sheet, which will be the
98:08 invoice database — I'll just call it invoice database — and then a follow-up email can
98:14 be sent. Or no, not a follow-up email; we'll just say an email, an internal email, will be sent.
98:22 So, an email will be sent to the internal billing team. Okay, so this is what we've got, right?
98:30 We have our questions. We've kind of answered the questions. So, now we know
98:32 what the rest of the flow is going to look like. We already know this is not
98:35 going to be an agent. It's going to be a workflow. So, what we're going to do is
98:38 we're going to add another node right here, which is going to be, you know,
98:43 PDF comes in. And what we want to do is we want to extract the text from that
98:51 PDF. Let's make this text smaller. So we're going to extract the text, and
98:55 we'll do this by using an extract text node. Okay, cool. Now,
99:01 once we have the text extracted, what do we need to do? Let me
99:09 just move over these initial questions. So, we have the text
99:13 extracted. What comes next? What comes next is we need to
99:18 decide on the fields to extract. And how do we get those? We get them
99:29 from our invoice database. So let's quickly set up the invoice database. I'm
99:33 going to do this by opening up a Google sheet which we are just going to call
99:41 the oops invoice DB. So now we need to figure out what we actually want to put
99:48 into our invoice DB. So first thing we'll do is um you know we're pretending
99:53 that our business is called Green Grass. So we don't need that. We don't need the
99:55 business information. We really just need the client information. So invoice
100:00 number will be the first thing we want. So, we're just setting up our database
100:04 here. So, invoice number. From there, we want to get client name, client address,
100:08 client email, client phone. Oops — client name, client
100:21 email, client address, and then we want client phone. Okay, so we have those
100:29 five things. And let's see what else we want — probably the total amount. So, we'll add that,
100:43 and then invoice date and due date. Okay: invoice date and due date.
100:52 So, we have these — what are these, eight? Eight fields. And I'm just going
100:57 to change these colors so it looks visually better for us. So, here are the
101:01 fields we have, and this is what we want to extract from every single invoice
101:04 that we are going to receive. Cool. So, we know we have these
101:09 eight things. So, we have our eight
101:17 fields to extract, and then they're going to be pushed to the invoice DB, and then,
101:23 once we have these fields, we can basically create our email. So this
101:29 is going to be an AI node that's going to do the info extraction. So it's going to
101:34 extract the eight fields that we have over here. So we're going to send the data
101:39 into there and it's going to extract those fields. Once we extract those
101:47 fields, we probably don't need a set-data node, because coming out of
101:51 this will basically be those eight fields. So, every time, what's
101:57 going to happen is — actually, sorry, let me add another node here so we can
102:01 connect these. So what's going to come out of here is one item, which will be
102:05 the one PDF and then what's coming out of here will be eight items every time.
102:09 So that's what we've got. We could also want to think about maybe if two
102:12 invoices get dropped in at the same time. How do we want to handle that loop
102:16 or just push through? But we won't worry about that yet. So we've got one item
102:19 coming in here. the node that's extracting the info will push out the
102:22 eight items and the eight items only. And then what we can do from there is
102:31 update the invoice DB. And from there — this could be either "out of
102:35 here we do two things," or it could be sequential, if that makes
102:39 sense. Well, what else do we know we need to do? We know that we also need
102:43 to email billing team. And so, what I was saying there is we could either have it like this
102:51 where at the same time it branches off and it does those two things. And it
102:54 really doesn't matter the order because they're both going to happen either way.
102:57 So, for now, to keep the flow simple, we'll just do this or we're going to
103:02 email the billing team. And what's going to happen is:
103:12 because this is internal, we already know the billing email. So,
103:21 billing@example.com — this is what we're going to feed in, because we already know
103:23 the billing email. We don't have to extract this from anywhere. So we
103:30 have all the info we need. What else do we need to feed in
103:34 here? Some of these fields we'll have to filter in — some
103:42 of the extracted fields — because we want to say: hey, we got this invoice on this
103:50 date, to this client, and it's due on this date. So, we'll have some of the
103:53 extracted fields, we'll have the billing email, and then potentially
103:59 an email template — that's something
104:06 we can think about, or we can just prompt it. So, yeah. Okay. So what we want to
104:17 do here is actually this: the email has to be generated
104:22 somewhere. So, before we feed into the email-the-billing-team node — and let me actually
104:26 change this: we're going to have green nodes be AI, and then blue nodes
104:30 are going to be not AI — we're going to get another AI node right here, which
104:35 is going to be "craft email." So we'll connect these pieces once again.
104:43 And so I hope you guys can see this is me trying to figure
104:46 out the workflow before we get into n8n, because then we can just plug in these
104:49 pieces, right? And I didn't even think about this — I mean, obviously we would have
104:54 gotten into n8n and realized, okay, well, we need an email to actually
105:00 configure these next fields, but that's just how it works, right? So,
105:05 anyways, this stuff is actually hooked up to the wrong place. We need this to
105:07 be hooked up over here to the craft-email node. So, the email template will also be
105:14 hooked up here. And then the billing email will be hooked up — no, this is the —
105:18 no, this will still go here, because that's actually the
105:21 email node: the send-email node, which is an action, and we'll be feeding this in as
105:31 well as the actual email. So the email that's written by AI will be fed in. And I
105:40 think that ends the process, right? So we'll just add a quick — oops — a quick
105:47 yellow note over here. My colors always change, but I'm just trying to keep things
105:53 consistent. In here, we're just saying: okay, the process is going to
105:56 end now. Okay, so this is our workflow, right? New invoice PDF comes through. We
106:01 want to extract the text. We're using an extract text node, which is just going to
106:05 be a static extract-from-PDF or convert-PDF-to-text type of thing.
106:09 We'll get one item sent to an AI node to extract the eight fields we need. The
106:12 eight items will be fed into the next node which is going to update our Google
106:16 sheet. Um and I'll just also signify here this is going to be a Google sheet
106:19 because it's important to understand the integrations and like who's involved in
106:24 each process. So this is going to be AI, this is going to be AI, that's
106:29 going to be an extract node, this is going to be a Gmail node, and then we
106:33 have the process end. Cool. So this is our wireframe. Now we can get into n8n
106:38 and start building it out. We can see that this is a very, very sequential flow. We
106:42 don't need an agent; we just need two AI nodes.
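The wireframe as code: one strictly sequential function with exactly two AI steps. Every helper called here is a stub standing in for a single n8n node — the real versions are sketched later in this build:

```python
def run_invoice_pipeline(file_id: str) -> None:
    pdf_bytes = download_from_drive(file_id)       # Google Drive node: get the binary
    text = extract_pdf_text(pdf_bytes)             # extract-from-file node
    fields = extract_eight_fields(text)            # AI node #1: information extractor
    append_to_invoice_db(fields)                   # Google Sheets node: append row
    email_body = craft_email(fields)               # AI node #2: write the internal email
    send_email("billing@example.com", email_body)  # Gmail node: notify the billing team
```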
106:51 So, let's get into n8n and start building this thing. We know what's starting this
106:55 process, which is a trigger. So, I'm going to grab a Google Drive
106:59 trigger. We're going to do "on changes to a specific file" — or, no: specific
107:04 folder, sorry. Changes involving a specific folder. We're going to choose
107:08 our folder, which is going to be the projects folder, and we're going to be
107:12 watching for a file created. So, we've got our ABC Tech Solutions invoice.
107:18 I'm going to download this as a PDF real quick. So, download as a PDF. I'm going
107:23 to go to my projects folder in the drive, and I'm going to drag this guy in
107:26 here. Um, there it is. Okay, so there's our PDF. We'll come in here and we'll hit
107:32 fetch test event. So, we should be getting our PDF. Okay, nice. We will just make sure it's the
107:39 right one. So, we should see an ABC Tech Solutions invoice. Cool. So, I'm
107:42 going to pin this data just so we have it here. So, just for reference, pinning
107:46 data, all it does is just keeps it here. So, if we were to refresh this this
107:50 page, we'll still have our pinned data, which is that PDF to play with. But if
107:53 we would have not pinned it, then we would have had to fetch test event once
107:56 again. So not a huge deal with something like this, but if you're maybe doing web
108:00 hooks or API calls, you don't want to have to do it every time. So you can pin
108:03 that data. Um or like an output of an AI node if you don't want to have to rerun the AI.
108:11 But anyway, so we have our PDF. We know what's next based on our wireframe — and
108:16 let me just call this the invoice flow wireframe. So we know next is we need to extract
108:23 text. So, perfect. We'll get right into n8n. We'll click on next and we will do
108:28 an extract from file. So let's see — we want to extract from PDF. But
108:34 wait, what do we have here? We don't have any binary. So we
108:40 were on the right track here, but we forgot something: we
108:44 get the PDF file ID, but we don't actually have the file itself. So what we need to do
108:49 here first is basically download the file, because we need the binary to then feed into the extractor.
109:04 So we need the binary. Sorry if that's really small, but basically, in order to
109:11 extract the text, we need to download the file first to get the binary, and
109:16 then we can actually do that. So, a little thing we missed in the
109:19 wireframe, but not a huge deal, right? So, we're going to extend this one off.
109:23 We're going to do a Google Drive node once again, and we're going to look at
109:27 download file. So, now we can say, okay, we're downloading a file. Um, we can
109:32 choose from a list, but this has to be dynamic because it's going to be based
109:35 on that new trigger every time. So, I'm going to do it by ID. And now on the left-hand
109:39 side, we can look for the file ID. So, I'm going to switch to schema real
109:44 quick. Um, so we can find the the ID of the file. We're just going to have to go
109:47 through. So, we have a permissions ID right here. I don't think that's the
109:51 right one. We have a spaces ID. I don't think that's the right one either. We're
109:56 looking for an actual file ID. So, let's see: parents, icon link, thumbnail link —
110:06 sometimes you just have to find it. I feel like I probably
110:13 skipped right over it. Maybe it is this one. Yeah, I think —
110:21 okay, sorry, I think it is this one, because we see the name is right here
110:23 and the ID is right here. So, we'll try this. We're referencing that
110:27 dynamically. We also see in here we could do a Google file conversion, which
110:31 basically says: if it's Docs, convert it to HTML; if it's Drawings,
110:35 convert it to this or that. There's not a PDF one, so
110:39 we'll leave this off and we'll hit test step. So now we will see we got the
110:43 invoice we can click view and this is exactly what we're looking at here with
110:47 the invoice. So this is the correct one. Now since we have it in our binary data
110:50 over here, we have binary. Now we can extract it from the file. So,
110:56 on the left are the inputs; on the right is going to be our output. We're
111:00 extracting from PDF. We're looking in the input binary field called data, which
111:05 is right here. So I'll hit test step, and now we have text. Here's the actual
111:10 text, right — the invoice, the information we need — and this is
111:12 what we're going to pass over to the extractor.
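What the extract-from-file node does with that binary: pull the text layer out of a computer-generated PDF. A sketch with pypdf — and this only works because these invoices have real text; a scanned image would need the OCR route we talked about:

```python
from pypdf import PdfReader

reader = PdfReader("ABC_Tech_Solutions_Invoice.pdf")  # filename is a placeholder
# Concatenate the text layer of every page; pages with no text yield "".
text = "\n".join(page.extract_text() or "" for page in reader.pages)
print(text)
```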
111:18 So, let's go back to the wireframe. We have our text extracted. Now, what we want to do is extract
111:22 the specific eight fields that we need. So, hopping back into the workflow, we
111:26 know that this is going to be an AI node. So, it's going to be an
111:28 information extractor. First of all, we know that one item is
111:34 going in here, and that's right here for us in the table, which is the actual
111:37 text of the invoice. So, we can open this up and we can see this is the text
111:40 of the invoice. We want to do it from attribute descriptions — that's what it's
111:46 looking for. So, we can add our eight attributes. We know there's going to
111:48 be eight of them, right? So, we can create eight. But, let's just first of
111:52 all go into our database to see what we want. So, the first one's invoice
111:55 number. So, I'm going to copy this over here. Invoice number. And we just have
111:59 to describe what that is. So, I'm just going to say the number of the
112:03 invoice. And this is required. We're going to make them all required. So,
112:06 number of the invoice. Then we have the client name — paste that in
112:11 here. These should all be pretty self-explanatory. So: the name of the
112:17 client, and we're going to make it required. Client email — this is going
112:22 to be a little bit repetitive, but: the email of the client. And let me just
112:31 quickly copy this for the next two. Client address — so there's client
112:36 address, and we're going to make it required. And then what's the last one
112:48 here? Client phone. Paste that in there, which is obviously going to be the phone
112:53 number of the client. And here we can say, is this going to be a string or is
112:56 it going to be a number? I'm going to leave it right now as a string just
112:59 because over here on the left you can see the phone. We have parenthesis in
113:03 there. And maybe we want the format to come over with the parenthesis and the
113:07 little hyphen. So let's leave it as a string for now. We can always test and
113:10 we'll come back. But client phone, we're going to leave that. We have total
113:15 amount. Same reason here. I'm going to leave this one as a string because I
113:18 want to keep the dollar sign when we send it over to sheets and we'll see how
113:22 it comes over. But: the total amount of the invoice, required. What's coming next is invoice
113:31 date and due date. So, invoice date and due date — we can say these are
113:36 going to be dates. So, we're changing the data type here. They're both
113:41 required. The description: the date the invoice was sent. And then we're going to
113:47 say the date the invoice is due. So, we're going to make sure this works. If
113:49 we need to, we can get in here and make these descriptions more descriptive. But
113:53 for now, we're good. We'll see if we have any options: "You're an expert
113:56 extraction algorithm. Only extract relevant information from the text. If
113:59 you do not know the value of the attribute to extract, you may omit the
114:03 attribute's value." So, we'll just leave that as is. And we'll hit test step.
114:08 It's going to be looking at this text. And of course, we're using AI, so we
114:12 have to connect a chat model. So, this will also alter the performance. Right
114:15 now, we're going to go with Google Gemini 2.0 Flash — see if that's powerful
114:20 enough. I think it should be. And then, we're going to hit play once again. So,
114:22 now it's going to be extracting information using AI. And what's great
114:26 about this is that we already get everything out here in its own item. So,
114:30 it's really easy to map this now into our Google sheet. So, let's make sure
114:35 this is all correct. Invoice number — that looks good; I'm going to open up
114:38 the actual one. Yep. Client name, ABC — yep. Client email, finance at ABC
114:44 Tech — yep. Address and phone — we have address and phone. Perfect. We have total amount
114:53 of $14,175 — $14,175, yes. We have March 8th and March 22nd. If we go back up here: March 8th,
115:00 March 22nd. Perfect. So, that one extracted well. And, okay, so we have one item coming out,
115:08 but technically there are eight properties in there.
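The information extractor, reduced to one call: describe the eight required attributes and force a JSON object back. Shown here with OpenAI's JSON mode as one way to guarantee parseable output (the node used Gemini; any capable model works). The key is a placeholder:

```python
import json
from openai import OpenAI

client = OpenAI(api_key="<YOUR_OPENAI_KEY>")

FIELDS = [
    "invoice_number", "client_name", "client_email", "client_address",
    "client_phone", "total_amount", "invoice_date", "due_date",
]

def extract_fields(invoice_text: str) -> dict:
    """Pull the eight required invoice fields out of raw PDF text."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # forces valid JSON back
        messages=[
            {"role": "system", "content":
                "You are an expert extraction algorithm. Return a JSON object "
                f"with exactly these keys: {', '.join(FIELDS)}. Keep phone and "
                "amount as strings so formatting like ( ) and $ survives."},
            {"role": "user", "content": invoice_text},
        ],
    )
    return json.loads(resp.choices[0].message.content)
```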
115:12 So, anyways, let's go back to our wireframe. After we extract the eight items, what
115:16 do we do next? We're going to put them into our Google Sheet database. So,
115:21 what we know is we're going to grab a Google Sheets node. We're going to do an
115:26 append row, because we're adding a row. We already have a credential
115:28 selected. So, hopefully we can choose our invoice database. It's just going to
115:32 be the first sheet, sheet one. And now what happens is we have to map the
115:36 columns. So, you can see these are draggable. We can grab each one. If I go
115:40 to schema, it's a little more apparent. So, we have these eight items. And it's
115:42 going to be really easy now that we use an information extractor because we can
115:46 just map, you know, invoice number to invoice number, client name, client
115:51 name, email, email. And it's referencing these variables because every time after
115:56 we do our information extractor, they're going to be coming out as JSON.output
115:59 and then invoice number. And then for client name, JSON.output client name. So
116:03 we have these dynamic variables that will happen every single time. And
116:06 obviously I'll show this when we do another example, but we can keep mapping
116:10 everything in. And we also did it in that order. So it's really really easy
116:14 to do. We're just dragging and dropping, and we are finished. Cool. So if I hit
116:21 test step here, this is going to give us a message that shows the
116:25 fields, basically. So there are the fields; they're mapped correctly. Come
116:28 into the sheets: we now have automatically gotten this updated in our
116:34 invoice database. And that's that. So let me just rename some of these nodes.
116:39 This is going to be "update database," this is "information extractor," "extract
116:44 from file," and I'm just going to say this one is "download binary." So now we
116:50 know what's going on in each step.
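The append-row step is the shortest of all: the eight extracted values go in, in the same order as the sheet's columns. Again a gspread sketch; `fields` and `FIELDS` are the dict and key list from the extraction sketch above, and the sheet name matches the demo:

```python
import gspread

ws = gspread.service_account(filename="service_account.json").open("invoice DB").sheet1
# Column order matches the header row we set up in the sheet.
ws.append_row([fields[k] for k in FIELDS])
```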
116:53 We'll go back to the wireframe real quick: what happens after we update the database? Now we need to craft the email. And this is going to be
116:58 using AI. What's going to go into this is some of the extracted fields and
117:01 maybe an email template. What we're going to do, more realistically, is just a
117:08 system prompt. So, back into n8n, let's add an OpenAI "message a model" node. So
117:15 what we're going to do is we're going to choose our model to talk to. In this
117:19 case we'll go GPT-4o mini; it should be powerful enough. And now we're going to
117:23 set up our system prompt and our user prompt. So at this point, if you don't
117:27 understand the difference: the system prompt is the instructions. So we're telling this node
117:33 how to behave. So first I'm going to change this node name to create email
117:38 because that's like obviously what's going on keeping you organized. And now
117:41 how do we explain to this node what its role is? So you are an email
117:48 expert. You will receive let me actually just open this up. You will receive
117:58 um invoice information. Your job is to notify the internal billing
118:07 team that um an invoice was received. Receive/s sent. Okay. So,
118:15 honestly, I'm going to leave it at that for now. It's really simple. If we
118:18 wanted to, we can get in here and change the prompting as far as like here is the
118:22 format. Here is the way you should be doing it. One thing I like to do is I
118:25 like to say, you know, this is like your overview. And then if we need to get
118:28 more granular, we can give it different sections like output or rules or
118:32 anything like that. I'm also going to say: you are an email expert
118:38 for Green Grass Corp named Greeny. Okay, so we have
118:49 Greeny from Green Grass Corp. That's our email expert that's going to email
118:53 the billing team every time this workflow happens. So that's the
118:58 overview. Now in the user prompt, think of this as like when you're talking to
119:01 ChatGPT. So, obviously, I had ChatGPT create these invoices. In ChatGPT, when we say hello,
119:08 that's a user message, because it's an interaction and it's
119:12 going to change every time. But behind the scenes in ChatGPT, OpenAI has a
119:16 system prompt in there that's basically like: you're a helpful assistant, you
119:19 help users answer questions. So this window right here that we type in
119:24 is our user message, and behind the scenes, telling the node how to act, is
119:30 our system prompt. Cool. So in here, I like to have dynamic information go into
119:33 the user message, while I keep static information in the actual system
119:37 prompt, with the usual exception of
119:42 giving it the current time and day, because that's an expression. So
119:46 anyways, let's change this to an expression. Let's make this full screen.
119:50 We are going to be giving it the invoice information that it needs to write the
119:55 email, because that's what it's expecting. In the system prompt, we
119:59 said you will receive invoice information. So, the first thing is going to
120:05 be invoice number. We are going to grab invoice number and just drag it in.
120:10 We're going to grab client name and just drag it in. So, it's going
120:14 to dynamically get these different things every time, right?
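At this point, the two prompts look roughly like this (a sketch of what we've typed so far, not an exact copy; the {{ }} pieces are the dragged-in n8n expressions):

```text
System prompt (static):
  You are an email expert for Green Grass Corp named Greeny.
  You will receive invoice information. Your job is to notify
  the internal billing team that an invoice was received.

User prompt (dynamic, rebuilt on every run):
  Invoice number: {{ $json.output['invoice number'] }}
  Client name:    {{ $json.output['client name'] }}
```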
120:19 So, let's say maybe we don't even need client email. Okay, maybe we
120:24 do. We want client email, so we'll give it that. But the billing team right now doesn't need
120:30 the address or phone. Let's just say that. But we do want them
120:36 to know the total amount of that invoice. And we definitely want them to
120:40 know the invoice date and the invoice due date. So we can now drag in those two
120:48 things. So this was us just being able to customize what the AI node sees. Just
120:53 keep in mind: if we don't drag anything in here, even if it's all on the input,
120:59 the AI node doesn't see any of it. So let's hit test step and we'll see the
121:02 type of email we get. We're going to have to make some changes. I already
121:04 know because you know we have to separate everything. But what it did is
121:08 it created a subject which is new invoice received and then the invoice
121:12 number. Dear billing team, I hope this message finds you well. We've received
121:15 an invoice that requires your attention and then it lists out some information
121:18 and then it also signs off: Greeny, Green Grass Corp. So, the first thing we want to do, if
121:25 we go back to our wireframe (and we didn't document this well enough,
121:35 actually): in order to send an email,
121:40 we need a To, we need a subject, and we need the email body.
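So the send-email node boils down to three fields, sourced like this (a sketch of the plan):

```text
To      -> hardcoded internal address (same every run)
Subject -> dynamic, from the "create email" AI node
Body    -> dynamic, from the "create email" AI node
```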
121:48 So, those are the three things we need. The To is coming from here. So,
121:52 we know that. And the subject and email are going to come from the craft
121:58 email node. So, we have the To, and then actually I'm going to move this up
122:01 here. So, now we can just see where we're getting all of our pieces from.
122:04 So, the To is coming from internal knowledge. This can be hardcoded, but
122:07 the subject and email are going to be dynamic from the AI node. Cool. So what
122:13 we want to do now is add an "output" section and tell it how to output information. So:
122:26 output the following parameters separately, and we're just
122:33 going to say subject and email. So now it should be outputting two parameters
122:37 separately, but it's not going to because even though it says here's the
122:40 subject and then it gives us a subject and then it says here's the email and
122:43 gives us an email, they're still in one field. Meaning, if we hook up another
122:48 node, which would be a Gmail send email, as we see here. Okay, so now this is the next
122:58 node. Here are the fields we need. But as you can see, coming out of the create
123:03 email AI node, we have this whole parameter called content, which has the
123:07 subject and the email. And we need to get these split up so that we can drag
123:10 one into the subject and one into the body. Right? So first of all, I'm just
123:14 making these expressions just so we can drag stuff in later. And so that's
123:20 what we need to do. And our fix there is we come in here and we just check this
123:25 toggle that says "output content is JSON," and then we'll rerun. And now
123:29 we'll get subject and email in two
123:34 different fields right here.
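With that toggle on, the node's output gets parsed into two separate fields, roughly like this (a sketch with placeholder values):

```text
{
  "subject": "New Invoice Received - <invoice number>",
  "email": "Dear billing team, ..."
}
```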
123:38 Which is awesome, because then we can open up our send email node and grab our subject. It's going to be dynamic. And
123:41 we can grab our email. It's going to be dynamic. Perfect. We're going to change
123:45 this to text. And we're going to add an option down here, which is
123:49 "append n8n attribution," and turn that off, because we just don't want to see the
123:55 message at the bottom that says this was sent by n8n. And if we go back to our
124:00 wireframe wherever that is over here, we know that this is the email that's going
124:03 to be coming through or we're going to be sending to every time because we're
124:06 sending internally. So we can put that right in here, not as a variable. Every
124:10 time, this is going to be sending to billing@example.com. So this really
124:13 can be fixed. It doesn't have to be an expression. Cool. So we will now hit test step and we can
124:21 see that we got this email sent. So let me open up a new
124:26 tab and go into our Gmail. I will go to the sent items, and
124:34 we will see we just got this billing email. So obviously it was a fake email,
124:37 but this is what it looks like. We've received a new invoice from ABC Tech.
124:40 Please find the details below. We got invoice number, client name, client
124:45 email, total amount, invoice date, due date. Please process this
124:48 invoice accordingly. So that's perfect. We could also, if
124:54 we wanted to, we could prompt it a little bit differently to say, you know,
124:59 like this has been updated within the database and, um, you can check it out
125:03 here. So, let's do that real quick. What we're going to do is,
125:06 because we've already updated the database, I'm going to come into our
125:09 Google Sheet, copy this link, and we're basically going to bake this into the prompt.
125:19 So, I'm going to give it a section called
125:26 email: inform the billing team of the invoice, let them know we have also updated this
125:38 in the invoice database, and they can view it here, and we'll just give them
125:43 this link to that Google Sheet. So every time, they'll just be able to send that
125:45 over. So I'm going to hit test step. We should see a new email over here, which
125:50 is going to include that link, I hope. So there's the link. We'll run this email
125:55 tool again to send a new email. Hop back into Google Gmail. We got a new one. And
126:01 now we can see we have this link. So you can view it here. We've already updated
126:05 this in the invoice database. We click on the link. And now we have our
126:09 database as well. So cool. Now let's say at this point we're happy with our
126:13 prompting. We're happy with the email. This is done. If we go back to the
126:17 wireframe, the email is the last node. So maybe, just to make it look
126:21 consistent, we will just add something over here that just says "nothing."
126:25 And now we know the process is done, because there's nothing left to do. So this is
126:28 basically what we wireframed out. So we know that we're
126:32 happy with this process. We understand what's going on. But now let's unpin
126:37 this data real quick and drop in another invoice, to make sure it still works even
126:41 though it's formatted differently. So this XYZ one is formatted differently, but the
126:45 AI should still be able to extract all the information that we need. So I'm
126:48 going to come in here and download this one as a PDF. We have it right there. I'm going
126:54 to drop it into our Google Drive. So, we have XYZ Enterprises now. Come back into
127:00 the workflow, and we'll hit "fetch test event." Let's just make sure this is
127:03 the right one. So, XYZ Enterprises. Nice. And I'm just going to hit test workflow and
127:10 we'll watch it download, extract, get the information, update the database,
127:13 create the email, send the email, and then nothing else should happen after
127:19 that. So, boom, we're done. Let's click into our email. Here we have our new
127:23 invoice received. So it updated differently; the subject's dynamic,
127:27 because it was from XYZ, with a different invoice number. As you remember, the ABC
127:31 one started with "TH AI," and this one starts with "INV." So that's why the subject is different.
127:38 Dear billing team, we have received a new invoice from XYZ Enterprises. Please
127:42 find the details below. There's the number, the name, all this kind of
127:46 information. The total amount was 13,856. Let's go make sure that's right.
127:51 Total amount: 13,856. And March 8th, March 22nd once again. Is that
128:01 correct? March 8th, March 22nd. Nice. And finance at XYZ: XYZ. Perfect. Okay. The
128:05 invoice has been updated in the database. You can view it here. So let's
128:08 click on that link. Nice. We got our information populated into the
128:12 spreadsheet. As you can see, it all looks correct to me as well. Our strings
128:15 are coming through nice and our dates are coming through nice. So I'm going to
128:18 leave it as is. Now, keep in mind because these are technically coming
128:23 through as strings, um, that's fine for phone, but Google Sheets automatically
128:27 made these numbers, I believe. So, if we wanted to, we could sum these up because
128:31 they're numbers. Perfect. Okay, cool. So, that's how that works,
128:36 right? Um, that's the email. We wireframed it out. We tested it with two
128:40 different types of invoices. They weren't consistently formatted, which
128:43 means we probably couldn't have used a code node, but the AI is able to read
128:47 this and extract it. As you can see right here, we got the same eight items
128:51 extracted that we were looking for. So, that's perfect. Cool. So,
129:00 I will attach the actual flow, and I will attach
129:06 a picture of this wireframe, I suppose, in this post. And by now, you
129:12 guys have already seen that, I'm sure. But yeah, I hope this was helpful: the
129:16 whole process of the way that I approached it. I know this was a 35-minute
129:20 build, so it's not the same as building something more complex, but as
129:24 far as a general workflow goes, this is a pretty solid one to get
129:27 started with. It shows elements of using AI within a simple workflow that's
129:35 sequential, and it shows the way we have to reference our
129:38 variables and how we have to drag things in, and obviously the component of
129:42 wireframing out in the beginning to understand at least 80 to
129:48 85% of the full flow before you get in there. So cool. Hope you guys enjoyed this
129:52 one, and I will see you guys in the community. Thanks. All right,
129:55 I hope you guys enjoyed those step-by-step builds. Hopefully, right
129:57 now, you're feeling like you're in a really good spot with n8n and
130:01 everything starting to piece together. This next video we're going to move into
130:05 is about APIs, because in order to really get more advanced with our workflows and
130:08 our AI agents, we have to understand the most important thing, which is APIs.
130:12 They let our n8n workflows connect to anything that you actually want to use.
130:15 So, it's really important to understand how to set these up. And when you understand
130:18 it, the possibilities are endless. And it's really not even that difficult. So,
130:21 let's break it down. If you're building AI agents, but you don't really
130:24 understand what an API is or how to use them, don't worry. You're not alone. I
130:27 was in that exact same spot not too long ago. I'm not a programmer. I don't know
130:31 how to code, but I've been teaching tens of thousands of people how to build real
130:34 AI systems. And what changed the game for me was when I understood how to use
130:38 APIs. So, in this video, I'm going to break it down as simple as possible, no
130:41 technical jargon, and by the end, you'll be confidently setting up API calls
130:45 within your own Agentic workflows. Let's make this easy. So the purpose of this
130:48 video is to understand how to set up your own requests so you can access any
130:52 API, because that's where the power truly comes in. And before we get into n8n and
130:56 set up a couple live examples and I show you guys my thought process when
131:00 I'm setting up these API calls, first I thought it would just be important to
131:04 understand what an API really is. And APIs are so, so powerful, because let's
131:08 say we're building agents within n8n. Basically, we can only do things within
131:12 n8n's environment unless we use an API to access some sort of server. So
131:16 whether that's like a Gmail or a HubSpot or Airtable, whatever we want to access
131:20 that's outside of n8n's own environment, we have to use an API call
131:24 to do so. And so that's why, at the end of this video, when you completely
131:27 understand how to set up any API call you need, it's going to be a complete
131:31 game-changer for your workflows, and it's also going to unlock pretty much
131:35 unlimited possibilities. All right, now that we understand why APIs are
131:38 important, let's talk about what they actually do. So API stands for
131:42 application programming interface. And at the highest level in the most simple
131:45 terms, it's just a way for two systems to talk to each other. So: n8n and
131:49 whatever other system we want to use in our automations. So, keeping it limited
131:53 to us, it's our n8n AI agent and whatever we want it to interact with.
131:56 Okay, so I said we're going to make this as simple as possible. So let's do it.
132:00 What we have here is just a scenario where we go to a restaurant. So this is
132:03 us right here. And what we do is we sit down and we look at the menu and we look
132:06 at what food that the restaurant has to offer. And then when we're ready to
132:10 order, we don't talk directly to the kitchen or the chefs in the kitchen. We
132:13 talk to the waiter. So we'd basically look at the menu. We'd understand what
132:16 we want. Then we would talk to the waiter and say, "Hey, I want the chicken
132:19 parm." The waiter would then take our order and deliver it to the kitchen. And
132:23 after the kitchen sees the request that we made and they understand, okay, this
132:26 person wants chicken parm. I'm going to grab the chicken parm, not the salmon.
132:30 And then we're basically going to feed this back down the line through the
132:33 waiter all the way back to the person who ordered it in the first place. And
132:37 so that's how you can see we use an HTTP request to talk to the API endpoint and
132:42 receive the data that we want. And so now, a little bit more of a technical
132:45 example of how this works in n8n. Okay, so here is our AI agent. And when it
132:48 wants to interact with a service, it first has to look at that service's API
132:53 documentation to see what is offered. Once we understand that, we'll read that
132:56 and we'll be ready to make our request and we will make that request using an
133:00 HTTP request. From there, that HTTP request will take our information and
133:04 send it to the API endpoint. the endpoint will look at what we ordered
133:07 and it will say, "Okay, this person wants this data. So, I'm going to go
133:10 grab that. I'm going to send it back, send it back to the HTTP request." And
133:14 then the HTTP request is actually what delivers us back the data that we asked
133:17 for, and we know it was available because we had to look at the API
133:21 documentation first. So, I hope that helps. I think that looking at it
133:24 visually makes a lot more sense, especially when you hear, you know, HTTP
133:28 API endpoint, all this kind of stuff. But really, it's just going to be this
133:31 simple. So now let me show an example of what this actually looks like in n8n,
133:34 and when you would use one and when you wouldn't need to use one. So here we
133:37 have two examples where we're accessing a service called OpenWeatherMap, which
133:41 basically just lets us grab the weather data from anywhere in the world. And so
133:44 on the left, what we're doing is we're using OpenWeather's native integration
133:48 within n8n. And what I mean by native integration is just that when we
133:51 go into n8n and we click on the plus button to add an app and we want to see,
133:54 you know, the different integrations: it has Airtable, it
133:58 has Affinity, it has Airtop, it has all these AWS things. It has a ton of native
134:02 integrations. And all that a native integration is, is an HTTP request, but
134:07 it's just like wrapped up nicely in a UI for us to basically fill in different
134:11 parameters. And so once you realize that it really clears everything up because
134:14 the only time you actually need to use an HTTP request is if the service you
134:19 want to use is not listed in this list of all the native integrations. Anyways,
134:23 let me show you what I mean by that. So like I said, on the left we have
134:27 OpenWeatherMap's native integration. So basically, what we're doing here is we're
134:29 sending over: okay, I'm using OpenWeatherMap, I'm going to put in the
134:32 latitude and the longitude of the city that I'm looking for. And as you can see
134:36 over here, what we get back is Chicago, as well as a bunch of information about
134:39 the current weather in Chicago. And so if you were to fill this out, it's super,
134:42 super intuitive, right? All you do is put in the lat and long, you choose your
134:46 format as far as imperial or metric, and then you get back data. And that's the
134:49 exact same thing we're doing over here, where we use an HTTP request to talk to
134:54 OpenWeather's API endpoint. And so this just looks a little more scary and
134:57 intimidating, because we have to set this up ourselves. But if we zoom in, we can
135:00 see it's pretty simple. We're making a GET request to OpenWeatherMap's URL
135:05 endpoint, and then we're passing over the lat and the long, which is basically
135:09 the exact same as the one on the left. And then, as you can see, we get the
135:13 same information back about Chicago, and then some weather information.
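For reference, that manual request is roughly this one-liner (a sketch using OpenWeatherMap's current-weather endpoint; substitute your own key and coordinates):

```bash
# GET current weather by lat/lon (appid = your own OpenWeatherMap API key)
curl "https://api.openweathermap.org/data/2.5/weather?lat=41.88&lon=-87.63&units=imperial&appid=YOUR_API_KEY"
```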
135:16 And so the purpose of that was just to show you guys that, with these native
135:20 integrations, all we're doing is accessing some sort of API endpoint. It
135:24 just looks simpler and easier, and there's a nice user interface for us,
135:27 rather than setting everything up manually. Okay, so hopefully that's
135:30 starting to make a little more sense. Let's move down here to the way that I
135:34 think about setting up these HTTP requests, which is we're basically just
135:37 setting up filters and making selections. All we're doing is we're
135:42 saying, okay, I want to access server X. When I access server X, I need to tell
135:45 it basically, what do I want from you? So, it's the same way when you're going
135:48 to order some pizza. You have to first think about which pizza shop do I want
135:52 to call? And then once you call them, it's like, okay, I need to actually
135:54 order something. It has to be small, medium, large. It has to be pepperoni or
135:58 cheese. You have to tell it what you want and then they will send you the
136:01 data back that you asked for. So when we're setting these up, we basically
136:04 have like five main things to look out for. The first one you have to do every
136:08 time, which is a method. And the two most common are going to be a get or a
136:11 post. Typically, a get is when you're just going to access an endpoint and you
136:14 don't have to send over any information. You're just going to get something back.
136:17 But a post is when you're going to send over certain parameters and certain data
136:21 and say: okay, using this information, send me back what I'm asking for. The great
136:24 news is (and I'll show you later when we get into n8n to actually do a live
136:29 example) it'll always tell you, you know, is this a get or a post. Then
136:32 the next thing is the endpoint. You have to tell it which website, or, you
136:35 know which endpoint you want to actually access which URL. From there we have
136:38 three different parameters to set up. And also just realize that this one
136:43 should say body parameters. But this used to be the most confusing part to
136:46 me, but it's really not too bad at all. So, let's break it down. So, keep in
136:49 mind when we're looking at that menu, that API documentation, it's always
136:52 going to basically tell us, okay, here are your query parameters, here are your
136:55 header parameters, and here are your body parameters. So, as long as you
136:57 understand how to read the documentation, you'll be just fine. But
137:01 typically, the difference here is that when you're setting up query parameters,
137:04 this is basically just saying a few filters. So if you search "pizza" in
137:08 Google, the URL becomes google.com/search, which would be Google's endpoint, and then we have a
137:14 question mark, and then q equals, and then a bunch of different filters. So as you
137:16 can see right here, the first filter is just q=pizza. And the q, you
137:21 know, stands for query. And you don't even have to understand
137:23 that. That's just me showing you kind of a real example of how that works.
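Broken out, that URL looks like this (a sketch; real Google searches append more parameters):

```text
https://www.google.com/search?q=pizza
|-------- endpoint ----------|---+---|
                                 query parameter (q=pizza)
```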
137:26 From there, we have to set up a header parameter, which is pretty much always
137:30 going to exist. And I basically just think of header parameters as, you know,
137:34 authorizing myself. So, usually when you're doing some sort of API where you
137:37 have to pay, you have to get a unique API key and then you'll send that key.
137:40 And if you don't put your key in, then you're not going to be able to get the
137:43 data back. So, like if you're ordering a pizza and you don't give them your
137:46 credit card information, they're not going to send you a pizza. And usually
137:49 an API key is something you want to keep secret because let's say, you know, you
137:53 put 10 bucks into some sort of API that's going to create images for you.
137:56 If that key gets leaked, then anyone could use that key and could go create
138:00 images for themselves for free, but they'd be running down your credits. And
138:03 these can come in different forms, but I just wanted to show you a really common
138:06 one is, you know, you'll have your key value pairs where you'll put
138:10 authorization as the name, and then in the value you'll put "Bearer," space, your
138:15 API key; or in the name you could just put api_key, and then in the value you'd
138:19 put your API key. But once again, the API documentation will tell you how to configure all this.
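So a typical header section ends up as one of these two key-value pairs (a sketch; the exact header name is whatever the docs specify):

```text
Authorization: Bearer YOUR_API_KEY
# or, for APIs that use a custom header name:
api_key: YOUR_API_KEY
```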
138:23 And then finally, the body parameters, if you need to send
138:26 something over to get something back. Let's say we're, you know, making an API
138:31 call to our CRM and we want to get back information about John. We could send
138:35 over something like name equals John. The server would then grab all records
138:39 that have name equal John and then it would send them back to you. So those
138:42 are basically like the five main things to look out for when you're reading
138:45 through API documentation and setting up your HTTP request. But the beautiful
138:51 thing about living in 2025 is that we now have the most beautiful thing in the
138:55 world, which is a curl command. And so what a curl command is, is it lets you
138:59 hit copy, and then you can basically just import that curl into n8n, and it will
139:03 pretty much set up the request for you. Then at that point, it really is just
139:07 like putting in your own API key and tweaking a few things if you want to. So
139:11 let's take a look at this curl statement for a service called Tavily. As you can
139:14 see, the endpoint is api.tavily.com. All this basically does is
139:18 it lets you search the internet. So you can see here this curl statement tells
139:21 us pretty much everything we need to know to use this. So it's telling us
139:24 that it's going to be a post. It's showing us the API endpoint that we're
139:27 going to be accessing. It shows us how to set up our header. So that's going to
139:30 be authorization and then it's going to be bearer space API token. It's
139:34 basically just telling us that we're going to get this back in JSON format.
139:37 And then you can see all of these different key value pairs right here in
139:40 the data section. And these are basically going to be body parameters
139:43 where we can say, you know, query is who is Leo Messi? So that's what we' be
139:47 searching the internet for. We have topic equals general. We have search
139:50 depth equals basic. So hopefully you can see all of these are just different
139:54 filters, where we can choose: okay, do we want one max result,
139:58 or do we want four? Do we have a time range, or do we not?
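Pulling that together, the Tavily-style curl from the docs looks roughly like this (a sketch; field names per their documentation, and the key is your own):

```bash
# POST a search to Tavily (sketch based on the docs' curl example)
curl -X POST "https://api.tavily.com/search" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "who is Leo Messi?",
    "topic": "general",
    "search_depth": "basic",
    "max_results": 1
  }'
```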
140:02 So this is really, at the end of the day, basically like ordering DoorDash,
140:05 because what we would have up here is you know like what actual restaurant do
140:09 we want to order food from? We would put in our credit card information. We would
140:12 say do we want this to be delivered to us? Do we want to pick it up? We would
140:15 basically say you know do you want a cheeseburger? No pickles. No onions.
140:18 Like what are the different things that you want to flip? Do you want a side? Do
140:21 you want fries? Or do you want salad? Like, what do you want? And so once you
140:25 get into this mindset where all I have to do is understand this documentation
140:28 and just tweak these little things to get back what I want, it makes setting
140:33 up API calls so much easier. And if another thing that kind of intimidates
140:37 you is the aspect of JSON, it shouldn't, because all it is is
140:41 key-value pairs, like we kind of talked about. You know, this is JSON right here,
140:44 and you're going to send your body parameter over as JSON and you're also
140:48 going to get back JSON. So the more and more you use it, you're going to
140:51 recognize like how easy it is to set up. So anyways, I hope that that made sense
140:54 and broke it down pretty simply. Now that we've seen like how it all works,
140:58 it's going to be really valuable to get into n8n. I'm going to open up some API
141:02 documentation, and we're just going to set up a few requests together, and we'll
141:06 see how it works. Okay, so here's the example. You know, I did OpenWeather's
141:09 native integration and then also OpenWeather as an HTTP request. And you can
141:13 see it was like basically the exact same thing. Um, so let's say that what we
141:17 want to do is we want to use Perplexity, which, if you guys don't know what
141:20 Perplexity is, it is basically, you know, kind of similar to ChatGPT, but it
141:23 has really good, like, internet search and research. So let's say we wanted to use
141:28 this and hook it up to an AI agent, so it can do web search for us. But as you
141:33 can see, if I type in Perplexity, there's no native integration for
141:37 Perplexity. So that basically signals to us, okay, we can only access Perplexity
141:41 using an HTTP request. And real quick side note, if you're ever thinking to
141:45 yourself, hm, I wonder if I can have my agent interact with blank. The answer is
141:49 yes, if there's API documentation. And all you have to do typically to find out
141:53 if there's API documentation is just come in here and be like, you know,
141:56 Gmail API documentation. And then we can see Gmail API is a restful API, which
142:01 means it has an API and we can use it within our automations. Anyways, getting
142:04 back to this example of setting up a perplexity HTTP request. We have our
142:08 HTTP request right here and it's left completely blank. So, as you can see, we
142:12 have our method, we have our endpoint, we have query, header, and body
142:16 parameters, but nothing has been set up yet. So, what we need to do is we would
142:19 head over to Perplexity, as you can see right here. And at the bottom, there's
142:22 this little thing called API. So, I'm going to click on that. And this opens
142:26 up this little page. And so, what I have here is Perplexity's API. If I click on
142:30 developer docs, and then right here, I have API reference, which is integrate
142:34 the API into your workflows, which is exactly what we want to do. This page is
142:38 where people might get confused and it looks a little bit intimidating, but
142:40 hopefully this breakdown will show you how you can understand any API doc,
142:44 especially if there's a curl command. So, all I'm going to do first of all is
142:48 I'm just going to right away hit copy and make sure you know you're in curl.
142:51 If you're in Python, it's not the same. So, click on curl. I copied this curl
142:54 command, and I'm going to come back into n8n, hit import curl, and all I have to do
142:59 is paste. Click import, and basically what you're going to see is this HTTP
143:02 request is going to get populated for us. So now we have: the
143:06 method has been changed to post. We have the correct URL which is the API
143:10 endpoint which basically is telling this node, okay, we're going to use
143:13 Perplexity's API. You can see that the curl had no query parameters. So that's
143:17 left off. It did turn on headers which is basically just having us put our API
143:21 key in there. And then of course we have the JSON body that we need to send over.
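For reference, the imported curl is roughly this shape (a sketch based on Perplexity's API reference; swap in your own key):

```bash
# Sketch of Perplexity's chat completions request
curl -X POST "https://api.perplexity.ai/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sonar",
    "messages": [
      { "role": "system", "content": "Be precise and concise." },
      { "role": "user", "content": "How many stars are there in our galaxy?" }
    ]
  }'
```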
143:24 Okay. So at this point what I would do is now that we have this set up, we know
143:28 we just need to put in a few things of our own. So the first thing to tackle is
143:32 how do we actually authorize ourselves into Perplexity? All right,
143:35 so I'm back in Perplexity. I'm going to go to my account and click on settings.
143:37 And then all I'm going to do is basically I need to find where I can get
143:41 my API key. So, on the lefth hand side, if I go all the way down, I can see API
143:45 keys. So, I'll click on that. And all this is going to do is this shows my API
143:48 key right here. So, I'll have to click on this, hit copy, come back into n8n,
143:52 and then all I'm going to do is just delete where this says
143:55 token, but I'm going to make sure to leave a space after "Bearer" and hit
144:00 paste. So now this basically authorizes us to use Perplexity's endpoint.
144:04 And now if we look down at the body request, we can see we have this thing
144:08 set up for us already. So if I hit test step, this is real quick going to make a
144:12 request over. It just hit Perplexity's endpoint. And as you can see, it came
144:15 back with data. And what this did is it basically searched Perplexity for how
144:20 many stars are there in our galaxy. And that's where right here we can see the
144:23 Milky Way galaxy, which is our galaxy, is estimated to contain between 100
144:27 billion and 400 billion stars, blah blah blah. So we know basically okay if we
144:30 want to change how this endpoint works what data we're going to get back this
144:34 right here is where we would change our request and if we go back into the
144:37 documentation we can see what else we have to set up. So the first thing to
144:40 notice is that there's a few things that are required and then some things that
144:43 are not. So right here we have you know authorization that's always required.
144:47 The model is always required like which perplexity model are we going to use?
144:51 The messages are always required. So this is basically a mix of a system
144:55 message and a user message. So here the example is be precise and concise and
144:58 then the user message is how many stars are there in the galaxy. So if I came
145:03 back here and I said, you know, "be funny in your answer" (so I'm basically
145:08 telling this model how to act), and then, instead of "how many stars are there in
145:11 the galaxy," I'm just going to say, "how long do cows live?" And I'll make another
145:16 request off to Perplexity. So you can see what comes back is this longer
145:20 content. So it's not being precise and concise and it says so you're wondering
145:25 how long cows live. Well, let's move into the details. So, as you can see,
145:28 it's being funny. Okay, back into the API documentation. We have a few other
145:31 things that we could configure, but notice how these aren't required the
145:35 same way that these ones are. So, we have max tokens. We could basically put
145:39 in an integer and say how many tokens do you want to use at the maximum. We could
145:42 change the temperature, which is, you know, like how random the response would
145:45 be. And this one says, you know, it has to be between zero and two. And as we
145:48 keep scrolling down, you can see that there's a ton of other little levers
145:51 that we can just tweak a little bit to change the type of response that we get
145:55 back from Perplexity. And so once you start to read more and more API documentation,
145:58 you can understand how you're really in control of what you get back from the
146:02 server. And also you can see like, you know, sometimes you have to send over
146:05 booleans, which is basically just true or false. Sometimes you can only send
146:09 over numbers. Sometimes you can only send over strings. And sometimes it'll
146:12 tell you, you know, what this value will default to, and also what are
146:15 the accepted values that you actually could fill out. So for example,
146:18 if we go back to this temperature setting, we can see it has to be a
146:22 number, and if you don't fill that out, it's going to be 0.2. But we can also
146:26 see that if you do fill this out, it has to be between zero and two. Otherwise,
146:30 it's not going to work. Okay, cool. So that's basically how it works. We just
146:34 set up an HTTP request and we change the system prompt and we change the user
146:36 prompt and that's how we can customize this thing to work for us. And that's
146:40 really cool as a node because we can set up, you know, a workflow to pass over
146:44 some sort of variable into this request. So it searches the web for something
146:47 different every time. But now let's say we want to give our agent access to this
146:50 tool and the agent will decide what am I going to search the web for based on
146:54 what the human asks me. So it's pretty much the exact same process. We'll click
146:57 on "add a tool," and we're going to add an HTTP request tool, only because Perplexity
147:01 doesn't have a native integration. And then once again, you can see we have an
147:04 import curl button. So if I click on this and I just import that same curl
147:07 that we did last time once again it fills out this whole thing for us. So we
147:11 have post, we have the perplexity endpoint, we have our authorization
147:14 bearer, but notice we have to put in our token once again. And so a cool little
147:18 hack is let's say you know you're going to use perplexity a lot. Rather than
147:22 having to go grab your API key every single time, what we can do is we can
147:25 just send it over right here in the authentication tab. So let me show you
147:29 what I mean by that. If I click into authentication, I can click on generic
147:33 credential type. And then from here, I can basically choose: okay, is this a
147:37 Basic Auth, a Bearer Auth, all this kind of stuff. A lot of times, it's just going to
147:39 be Header Auth. So that's why we know right here we can click on Header Auth.
147:42 And as you can see, we know that because we're sending this over as a header
147:45 parameter, and we just did this earlier and it worked. So as you can see, I have
147:50 Header Auths already set up. I probably already have a Perplexity one set up right
147:53 here. But I'm just going to go ahead and create a new one with you guys to show
147:56 you how this works. So I just create a new Header Auth, and all we have to do is
148:01 the exact same thing that we had down in the request that we just sent over, which
148:04 means in the name we're just going to type in "Authorization" with a capital A,
148:08 and once again, we can see in the API docs this is how you do it. So:
148:11 Authorization. And then we can see that the value has to be capital-B "Bearer,"
148:16 space, API token. So I'm just going to come in here: Bearer, space, API token.
148:21 And then all I have to do is, first of all, name this so we can
148:26 save it. And then if I hit save, now every single time we want to use Perplexity's
148:30 endpoint, we already have our credentials saved. So that's great. And then we can
148:33 turn off the headers down here, because we don't need to send it over twice.
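In other words, the saved credential is just this one pair (a sketch; the credential's display name is whatever you choose):

```text
# Header Auth credential
Name:  Authorization
Value: Bearer YOUR_PERPLEXITY_API_KEY
```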
148:37 So now all we have to do is change this body request a little bit, just to make
148:41 it more dynamic. So in order to make it dynamic the first thing we have to do is
148:44 change this to an expression. Now, we can see that we can basically add a
148:48 variable in here. And what we can do is we can add a variable that basically
148:52 just tells the AI model, the AI agent, here is where you're going to send over
148:57 your internet search query. And so we already know that all that is is
149:01 the user content right here. So if I delete this, and basically, if I do two
149:06 curly braces, and then within the curly braces I do a dollar sign and I type in
149:10 "from," I can grab a $fromAI function. And this $fromAI function just indicates to
149:15 the AI agent: I need to choose something to send over here. And you guys will see
149:19 an example and it will make more sense. I also did a full video breaking this
149:21 down, so if you want to see that, I'll tag it right up here. Anyways, as you
149:25 can see, all we have to really do is enter in a key. So I'm just going to do,
149:28 you know, two quotes, and within the quotes, I'm going to put in "search term."
149:32 And so now the agent will be reading this and say: okay, whenever the user
149:35 interacts with me and I know I need to search the internet, I'm just going to
149:39 fill this whole thing in with the search term.
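So the user message in the JSON body ends up looking roughly like this (a sketch; "search_term" is just the key we typed, and the surrounding body comes from the imported curl):

```text
{
  "model": "sonar",
  "messages": [
    { "role": "system", "content": "Be precise and concise." },
    { "role": "user", "content": "{{ $fromAI('search_term') }}" }
  ]
}
```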
149:43 So, now that that's set up, I'm going to rename this request. Actually, I'm just going to call it web
149:46 search to make that super intuitive for the AI agent. And now what we're going
149:49 to do is we are going to talk to the agent and see if it can actually search
149:53 the web. Okay, so I'm asking the AI agent to search the web for the best
149:57 movies. It's going to think about it. It's going to use this tool right here
150:00 and then we'll basically get to go in there and we can see what it filled in
150:03 in that search term placeholder that we gave it. So, first of all, the answer it
150:09 gave us was IMDb, top 250, um, Rotten Tomatoes, all this kind of stuff, right?
150:12 So, that's movie information that we just got from Perplexity. And what I can
150:16 do is click into the tool and we can see in the top left it filled out search
150:20 term with best movies. And we can even see that in action. If we come down to
150:23 the body request that it sent over and we expand this, we can see on the right
150:27 hand side in the result panel, this is the JSON body that it sent over to
150:32 Perplexity and it filled it in with best movies. And then of course what we got
150:36 back was our content from Perplexity, which is, you know, here are some of the
150:40 best movies across major platforms. All right, then real quick before we wrap up
150:43 here, I just wanted to talk about some common responses that you can get from
150:47 your HTTP requests. So the rule of thumb to follow is if you get data back and
150:52 you get a 200, you're good. Sometimes you'll get a response back, but you
150:55 won't explicitly see a 200 message. But if you're getting the data back, then
150:59 you're good to go. And a quick example of this is down here: we have that HTTP
151:02 request, which we went over earlier in this video, where we went to OpenWeatherMap's
151:06 API, and you can see down here we got code 200, and there's data coming
151:11 back. And 200 is good; that's a success code. Now, if you get a request in the
151:15 400s that means that you probably set up the request wrong. So 400 bad request
151:19 that could mean that your JSON's invalid. It could just mean that you
151:22 have like an extra quotation mark or you have you know an extra comma something
151:26 as silly as that. So let me show a quick example of that. We're going to test
151:29 this workflow, and what I'm doing is I'm trying to send over a query to Tavily. And
151:32 you can see what we get is an error that says "JSON parameter needs to be valid
151:36 JSON. And this would be a 400 error. And the issue here is if we go into the JSON
151:40 body that we're trying to send over, you can see in the result panel, we're
151:44 trying to send over a query that has basically two sets of quotation marks.
151:47 If you can see that. But here's the great news about JSON: it is so
151:51 universally used, and it's been around for so long, that we can basically just
151:55 copy the result over to ChatGPT, paste it in here, and say: I'm getting an error
151:59 message that says JSON parameter needs to be valid JSON. What's wrong with my
152:02 JSON? And as you can see, it says the issue with your JSON is the use of
152:06 double quotes around the string value in this line. So now we'd be able to go fix
152:09 that. And if we go back into the workflow, we take away these double
152:13 quotes right here. Test the step again. Now you can see it's spinning, and it's
152:16 going to work, and we should get back some information about pineapples on
152:19 pizza.
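Here's the shape of the mistake we just fixed (a sketch; the actual query text is whatever you're sending over):

```text
{ "query": ""is pineapple good on pizza?"" }   <- invalid: doubled quotes around the value
{ "query": "is pineapple good on pizza?" }     <- valid JSON
```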
152:23 Another common error you could run into is a 401, meaning Unauthorized. This typically just means that your API key is wrong. You could also get a 403,
152:26 which is forbidden. That just means that maybe your account doesn't have access
152:30 to this data that you're requesting or something like that. And then another
152:33 one you could get is a 404, which sometimes you'll get that if you type in
152:36 a URL that doesn't exist. It just means this doesn't exist. We can't find it.
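As a quick reference, here's the rule of thumb we just walked through:

```text
200  OK            -> you got data back; you're good
400  Bad Request   -> the request is malformed (often invalid JSON)
401  Unauthorized  -> your API key is missing or wrong
403  Forbidden     -> your account can't access that data
404  Not Found     -> the URL/endpoint doesn't exist
5xx  Server error  -> the problem is on their side, not yours
```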
152:39 And a lot of times, when you're looking at the actual API documentation that you
152:43 want to set up a request to, like this example with Tavily, it'll show you
152:47 what typical responses could look like. So here's one where, you know, we're using
152:50 Tavily to search for "who is Leo Messi." This was an example we looked at
152:54 earlier. And with a 200 response, we are getting back a query, an answer,
152:58 results, stuff like that. We could also see we could get a 400 which would be
153:02 for bad request, you know, invalid topic. We could have a 401 which means
153:06 invalid API key. We could get all these other ones like 429, 432, but in general
153:12 400 is bad. And then even worse is a 500. And this just basically means
153:15 something's wrong with the server. Maybe it doesn't exist anymore or there's a
153:19 bug on the server side. But the good news about a 500 is it's not your fault.
153:22 You didn't set up the request wrong. It just means something's wrong with the
153:26 server. And it's really important to know that because if you think you did
153:28 something wrong, but it's really not your fault at all, you may be banging
153:32 your head against the wall for hours. So anyways, what I wanted to highlight here
153:35 is: there's never just a one-size-fits-all, where "I know how to set up
153:39 this one API call, so I can just set up every single other API call the exact
153:42 same way." The key is to really understand: how do you read the API
153:45 documentation? How do you set up your body parameters and your different
153:48 header parameters? And then if you start to run into issues, the key is
153:52 understanding and actually reading the error message that you're getting back
153:55 and adjusting from there. All right, so that's going to do it for this video.
153:58 Hopefully this has left you feeling a lot more comfortable with diving into
154:02 API documentation, walking through it just step by step using those curl
154:06 commands and really just understanding all I'm doing is I'm setting up filters
154:09 and levers here. I don't have to get super confused. It's really not that
154:12 technical. I'm pretty much in complete control over what my API is going to
154:16 send me back. The same way I'm in complete control when I'm, you know,
154:19 ordering something on Door Dash or ordering something on Amazon, whatever
154:23 it is. Hopefully by now the concept of APIs and HTTP requests makes a lot more
154:27 sense. But really, just to drive it home, what we're going to do is hop into
154:30 some actual setups in n8n, connecting to some different popular APIs, and walk
154:35 through a few more step by steps just to really make sure that we understand the
154:38 differences that can come with different API documentation and how you read it
154:41 and how you set up stuff like your credentials and the body requests. So,
154:45 let's move into this next part, which I think is going to be super valuable to
154:49 see different API calls in action. Okay, so in n8n, when we're working with a
154:52 large language model, whether that's an AI agent or just an AI node, what
154:57 happens is we can only access the information that is in the large
155:00 language model's training data. And a lot of times that's not going to be super
155:04 up-to-date and real time. So what we want to do is access different APIs that
155:08 let us search the web or do real-time search. And what we saw earlier in that
155:13 third step-by-step workflow was: we used a tool called Tavily, and we accessed it
155:17 through an HTTP request node, which, as you guys know, looks like this. And we
155:21 were able to use this to communicate with Tavily's API server. So if we ever
155:25 want to access real-time information or do research on certain search terms, we
155:30 have to use some sort of API to do that. So, like I said, we talked about Tavily,
155:34 but in this video, I'm going to help you guys set up Perplexity, which, if you
155:38 don't know what it is, is kind of like ChatGPT, but it's really, really good
155:42 for web search and in-depth research. And it has that same sort of, you
155:46 know, chat interface as ChatGPT, but what you also have is access to the API. So,
155:50 if I click on API, we can see this little screen, but what we want to go to
155:54 is the developer docs. And in the developer docs, what we're looking for
155:57 is the API reference. We can also click on the quick start guide right here
156:00 which just shows you how you can set up your API key and get all that kind of
156:03 stuff. So that's exactly what we're going to do is set up an API call to
156:08 Perplexity. So I'm going to click on API reference, and what we see here is the
156:12 endpoint to access Perplexity's API. And so what I'm going to do is just grab
156:15 this curl command from the right-hand side, go back into our n8n, and I'm just
156:19 going to import the curl right into here. And then all we have to do from
156:22 there is basically configure what we want to research and put in our own API
156:27 key. So there we go. We have our node pretty much configured. And now the
156:30 first thing we see we need to set up is our authorization API key. And what we
156:34 could do is set this up in here as a generic credential type and save it. But
156:37 right now we're just going to keep things as simple as possible where we
156:40 imported the curl. And now I'm just going to show you where to plug in
156:43 little things. So we have to go back over to Perplexity and we need to go get
156:46 an API key. So I'm going to come over here to the left and I'm going to click
156:49 on my settings. And hopefully in here we're able to find where our API key
156:53 lives. Now we can see in the bottom left over here we have API keys. And what I'm
156:56 going to do is come in here and just create a new secret key. And we just got
156:59 a new one generated. So I'm just going to click on this button, click on copy,
157:03 and all we have to do is replace the word right here that says token. So I'm
157:07 just going to delete that. I'm going to make sure to leave a space after the
157:10 word bearer. And I'm going to paste in my Perplexity API key. So now we should
157:14 be connected. And now what we need to do is we need to set up the actual body
157:17 request. So if I go back into the documentation, we can see this is
157:21 basically what we're sending over. So that first thing is a model, which is
157:24 the name of the model that will complete your prompt. And if we wanted to look at
157:27 different models, we could click into here and look at other supported models
157:31 from Perplexity. So it took us to the screen. We click on models, and we can
157:35 see we have Sonar Pro or Sonar. We have Sonar Deep Research. We have some
157:38 reasoning models as well. But just to keep things simple, I'm going to stick
157:41 with the default model right now, which is just sonar. Then we have an object
157:45 that we're sending over, which is messages. And within the messages
157:49 object, we have a few things. So first of all, we're sending over content, which
157:53 is the contents of the message in this turn of the conversation. It can be a string
157:57 or an array of parts. And then we have a role, which is going to be the role of
158:00 the speaker in the conversation. And the available options are system, user, or
158:05 assistant. So what you can see in our request is that we're sending over a
158:08 system message as well as a user message. And the system message is
158:11 basically the instructions for how this AI model on perplexity should act. And
158:15 then the user message is our dynamic search query that is going to change
158:19 every time. And if we go back into the documentation, we can see that there are
158:22 a few other things we could add, but we don't have to. We could tell Perplexity
158:26 what is the max tokens we want to use, what is the temperature we want to use.
158:30 We could have it only search for things in the past week or day. So, this
158:34 documentation is basically going to be all the filters and settings that you
158:38 have access to in order to customize the type of results that you want to get
158:41 back. But, like I said, keeping this one really simple. We just want to search
158:45 the web. All I'm going to do is keep it as is. And if I disconnect this real
158:48 quick and we come in and test the step, it's basically going to be searching
158:51 Perplexity for how many stars are there in our galaxy. And then the AI model of
158:56 sonar is the one that's going to grab all of these five sources and it's going
159:00 to answer us. And right here it says the Milky Way galaxy, which is our home
159:04 galaxy, is estimated to contain between 100 billion and 400 billion stars. This
159:08 range is due to the difficulty blah blah blah blah blah. So that's basically how
159:11 it was able to answer us because it used an AI model called sonar. So now if we
159:15 wanted to make this search a little bit more dynamic, we could basically plug
159:19 this in and you can see in here what I'm doing is I'm just setting a search term.
159:23 So let's test this step. What happens is the output of this node is a research
159:27 term. Then we could reference that variable of research term right in here
159:31 in our actual body request to Perplexity. So I would delete this fixed
159:35 message, which is "how many stars are there in our galaxy." And all I would do
159:38 is drag in research term from the left, put it in between the two quotes,
159:43 and now it's coming over dynamically as "anthropic latest developments." And all
159:47 I'd have to do now is hit test step.
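Under the hood, the messages array now references that upstream field instead of a fixed string (a sketch; the exact expression n8n writes will name the node the variable came from):

```text
"messages": [
  { "role": "system", "content": "Be precise and concise." },
  { "role": "user", "content": "{{ $json['research term'] }}" }
]
```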
159:51 And we will get an answer from Perplexity about Anthropic's recent developments. There we go. It just came back. We can
159:54 see there's five different sources right here. It went to Anthropic, it went to
159:57 YouTube, it went to TechCrunch. And what we get is that today, May 22nd, so real
160:03 time information, Claude Opus 4 was released. And that literally came out
160:07 like 2 or 3 hours ago. So that's how we know this is searching the web in real
160:11 time. And then all we'd have to do is have, you know, maybe an AI model is
160:14 changing our search term or maybe we're pulling from a Google sheet with a bunch
160:17 of different topics we need to research. But whatever it is, as long as we are
160:21 passing over that variable, this actual search result from Perplexity is going
160:25 to change every single time. And that's the whole point of variables, right?
160:28 They vary. They're dynamic. So, I know that one was quick, but Perplexity
160:32 is a super, super versatile tool and probably an API that you're going to be
160:35 calling a ton of times, so I just wanted to get that one in there. Next up, Firecrawl, which is going to allow us
160:43 to turn any website into LLM-ready data in a matter of seconds. And as you can
160:46 see right here, it's also open source. So, once you get over to Firecrawl, click
160:49 on this button and you'll be able to get 500 free credits to play around with. As
160:52 you can see, there are four different things we can do with Firecrawl. We can
160:57 scrape, we can crawl, we can map, or we can do this new extract, which basically
161:01 means we can give Firecrawl a URL and also a prompt, like: can you please
161:04 extract the company name, the services they offer, and an icebreaker
161:08 out of this URL. So there are some really cool use cases that we can do with
161:11 Firecrawl. So in this video, we're going to be mainly looking at extract, but I'm
161:14 also going to show you the difference between scrape and extract. And we're
161:17 going to get into n8n and connect it up, so you can see how this works. But the
161:20 playground is going to be a really good place to understand the difference
161:23 between these different endpoints. All right, so for the sake of this video,
161:26 this is the website we're going to be looking at. It's called quotes to
161:29 scrape. And as you can see, it's got like 10 on this first page and it also
161:32 has different pages of different categories of quotes. And as you can
161:35 see, if we click into them, there are different quotes. So what I'm going to
161:37 do is go back to the main screen, and I'm going to copy the URL of this website,
161:41 and we're going to go into n8n. We're going to open up a new node, which is
161:45 going to be an HTTP request. And this is just to show you what a standard get
161:49 request to a static website looks like. So we're going to paste in the URL, hit
161:53 test step, and on the right hand side, we're going to get all the HTML back
161:57 from the quotes to scrape website. Like I said, what we're looking at here is a
162:00 nasty chunk of HTML. It's pretty hard for us to read, but basically what's
162:04 going on here is this is the code that goes to the website in order to have it
162:07 be styled and different fonts and different colors. So right here, what
162:10 we're looking at is the entire first page of this website. So if we were to
162:14 search for Harry, if I copy this, we go back into n8n and we Ctrl+F this.
162:19 You can see there is the exact quote that has the word Harry. So everything
162:22 from the website's in here, it's just wrapped up in kind of an ugly chunk of
162:26 HTML. Now hopping back over to the Firecrawl playground, using the scrape
162:30 endpoint, we can replace that same URL. We'll run this and it's going to output
162:33 markdown formatting. So now we can see we actually have everything we're
162:36 looking for with the different quotes, and it's a lot more readable for a human. So
162:41 that's what a web scrape is, right? We get the information back, whether that's
162:44 HTML or markdown, but then we would typically feed that into some sort of
162:47 LLM in order to extract the information we're looking for. In this case, we'd be looking for different quotes.
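For reference, that scrape call on its own looks roughly like this — a sketch against Firecrawl's v1 scrape endpoint as documented at the time of recording, with a placeholder key:

```
# sketch of Firecrawl's v1 scrape endpoint; the key is a placeholder
curl -X POST https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer fc-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "http://quotes.toscrape.com", "formats": ["markdown"]}'
```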
162:52 But what we can do with extract is we can give it
162:56 the URL and then also say, hey, get all of the quotes on here. And using this
162:59 method, we can say, not just these first 10 on this page. I want you to crawl
163:03 through the whole site and basically get all of these quotes, all of these
163:06 quotes, all of these quotes, all of these quotes. So it's going to be really
163:09 cool. So I'm going to show how this works in Firecrawl, and then we're going
163:11 to plug it into n8n. All right. So what we're doing here is we're saying
163:14 extract all of the quotes and authors from this website. I gave it the website
163:18 and now what it's doing is it's going to generate the different parameters that
163:22 the LLM will be looking to extract out of the content of the website. Okay. So
163:26 here's the run we're about to execute. We have the URL and then we have our
163:29 schema for what the LLM is going to be looking for. And it's looking for text
163:33 which would be the quote and it's a string. And then it's also going to be
163:35 looking for the author of that quote which is also a string. And then the
163:39 prompt we're feeding here to the LLM is extract all quotes and their
163:43 corresponding authors from the website. So we're going to hit run and we're
163:46 going to see that it's not only going to go to that first URL, it's basically
163:49 going to take that main domain, which is quotes.toscrape.com, and it's going to
163:53 be crawling through the other sections of this website in order to come back
163:56 and scrape all the quotes on there. Also, quick plug, go ahead and use code
164:00 herk10 to get 10% off the first 12 months on your Firecrawl plan. Okay, so
164:04 it just finished up. As you can see, we have 79 quotes. So down here we have a
164:09 JSON response where it's going to be an object called quotes. And in there we
164:12 have a bunch of different items, which have, you know, text, author; text, author;
164:17 text, author — and we have pretty much everything from that website now. Okay,
164:20 cool. But what we want to do is look at how we can do this in n8n, so that if we have,
164:25 you know, a list of 20, 30, 40 URLs that we want to extract information from, we can
164:28 just loop through and send off that automation rather than having to come in
164:33 here and type that out in Firecrawl. Okay. So what we're going to do is go
164:35 back into n8n. And I apologize because there may be some jumping around
164:38 here, but we're basically just gonna clear out this HTTP request and grab a
164:42 new one. Now, what we're going to do is we want to go into Firecrawl's
164:45 documentation. So, all we have to do is import the curl command for the extract
164:48 endpoint rather than trying to figure out how to fill out these different
164:51 parameters. So, back in Firecrawl, once you set up your account, up in the top
164:54 right, you'll see a button called docs. You want to click into there. And now,
164:57 we can see a quick start guide. We have different endpoints. And what we're
165:00 going to do is on the left, scroll down to features and click on extract. And
165:04 this is what we're looking for. So, we've got some information here. The
165:07 first thing to look at is when we're using the extract, you can extract
165:10 structured data from one or multiple URLs, including wild cards. So, what we
165:14 did was we didn't just scrape one single page. We basically scraped through all
165:18 of the pages that had the main base domain of quotes.toscrape.com.
165:23 And if you put an asterisk after it, it's going to basically
165:26 mean this is a wildcard, and it's going to go scrape all pages under that domain
165:30 rather than just scraping this one predefined page. As you can see right
165:34 here, it'll automatically crawl and parse all the URLs it can discover, then
165:38 extract the requested data. And we can see that's how it worked because if we
165:41 come back into the request we just made, we can see right here that it added a
165:45 slash with an asterisk after quotes.toscrape.com. Okay. Anyway, so what we're
165:48 looking for here is this curl command. This is basically going to fill out the
165:51 method, which is going to be a post request. It's going to fill out the
165:54 endpoint. It'll fill out the content type, and it'll show us how to set up
165:58 our authorization. And then we'll have a body request that we'll need to make
166:02 some minor changes to. So in the top right I'm going to click copy and I'm
166:05 going to come back into n8n. Hit import curl. Paste that in there. Hit
166:09 import. And as you can see everything pretty much just got populated. So like
166:12 I said the method is going to be a post. We have the endpoint already set up. And
166:15 what I want to do is show you guys how to set up this authorization so that we
166:18 can keep it saved forever rather than having to put it in here in the
166:22 configuration panel every time. So first of all, head back over to your
166:26 Firecrawl. Go to API keys on the left-hand side. And you're just going to want
166:29 to copy that API key. So once you have that copied, head back into n8n. And now
166:33 let's look at how we actually set this up. So typically what you do is we have
166:37 this as a header parameter. Not all authorizations are headers, but this one
166:41 is a header. And the key, or the name, is Authorization, and the value is 'Bearer',
166:46 a space, then your API key. So what you'd typically do is just paste in your API
166:50 key right there and you'd be good to go. But what we want to do is we want to
166:53 save our Firecrawl credential the same way you'd save, you know, a Google
166:58 Sheets credential or a Slack credential. So, we're going to come into
167:01 authentication, click on generic. We're going to click on generic type and
167:04 choose Header Auth, because we know down here it's header auth. And then you can see
167:07 I have some other credentials already saved. We're going to create a new one.
167:11 I'm just going to name this Firecrawl to keep ourselves organized. For the name,
167:14 we're going to put Authorization. And for the value, we're going to type
167:18 Bearer with a capital B, a space, and then paste in our API key. And we'll hit
167:21 save. And this is going to be the exact same thing that we just did down below,
167:25 except for now we have it saved. So, we can actually flick this field off. We
167:28 don't need to send headers because we're sending them right here. And now we just
167:32 need to figure out how to configure this body request. Okay, so I'm going to
167:35 change this to an expression and open it up just so we can take a look at it. The
167:38 first thing we notice is that by default there are three URLs in here that we
167:41 would be extracting from. We don't want to do that here. So I'm going to grab
167:44 everything within the array, but I'm going to keep the two quotation marks.
167:47 Now all we need to do is put the URL that we're looking to extract
167:49 information from in between these quotation marks. So here I just put in
167:53 the quotes.toscrape.com URL. But what we want to do, if you remember, is we want to
167:57 put an asterisk after that so that it will go and crawl all of the pages, not
168:01 just that first page, which would only have like nine or 10 quotes. And
168:04 now the rest is going to be really easy to configure because we already did this
168:07 in the playground. So we know exactly what goes where. So I'm going to click
168:10 back into our playground example. The first thing is the prompt that
168:13 Firecrawl sent off. So I'm going to copy that, go back into n8n, and I'm just
168:17 going to replace the prompt right here. We don't want the company mission blah
168:20 blah blah. We want to paste this in here and we're looking to extract all quotes
168:24 and their corresponding authors from the website. And then next is basically
168:27 telling the LLM, what are you pulling back? So, we just told it it's pulling
168:31 back quotes and authors. So, we need to actually make the schema down here in
168:36 the body request match the prompt. So, all we have to do is go back into our
168:39 playground. Right here is the schema that we sent over in our example. And
168:42 I'm just going to click on JSON view and I'm going to copy this entire thing
168:46 which is wrapped up in curly braces. We'll come back into n8n and we'll start
168:51 after 'schema:', replacing all of this with what we just had in
168:55 Firecrawl. And actually, I think I've noticed that the way this copied over, it's not
168:58 going to work. So let me show you guys that real quick. If we hit test step,
169:01 it's going to say JSON parameter needs to be valid JSON. So what I'm going to
169:05 do is I'm going to copy all of this. Now I came into ChatGPT and I'm just saying
169:08 fix this JSON. What it's going to do is it's going to just basically push these
169:12 over. When you copy it over from Firecrawl, it kind of aligns them on the
169:15 left, but you don't want that. So, as you can see, it just basically pushed
169:18 everything over. We'll copy this into our n8n right there. And all it did
169:21 was bump everything over once, and now we should be good to go. So, real quick
169:24 before we test this out, I'm just going to call this node Extract. And then we'll hit
169:28 test step, and we should see it running. And it's going to
169:33 give us a message that says true — success — and it gives us an ID. So now what we
169:37 need to do next is use this ID to check whether our request has been fulfilled yet.
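Assembled, the request we just configured comes out roughly like this curl call — a sketch based on Firecrawl's v1 extract docs at the time of recording; the exact schema the playground generated for you may differ:

```
# sketch of the extract call; schema shape may differ from yours
curl -X POST https://api.firecrawl.dev/v1/extract \
  -H "Authorization: Bearer fc-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": ["http://quotes.toscrape.com/*"],
    "prompt": "Extract all quotes and their corresponding authors from the website.",
    "schema": {
      "type": "object",
      "properties": {
        "quotes": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "text":   {"type": "string"},
              "author": {"type": "string"}
            }
          }
        }
      }
    }
  }'
```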
169:41 So I'm back in the documentation, and now we're going to look down
169:45 here at asynchronous extraction and status checking. So this is how we check the
169:49 status of a request. As you saw, we just made one. So here I'm going to click on
169:52 copy this curl command. We're going to come back into n8n and we're going
169:56 to add another HTTP request and we're going to import that in there. And you
170:00 can see this one is going to be a get command. It's going to have a different
170:02 endpoint. And what we need to do if you look back at the documentation is at the
170:07 end of the extract slash we have to put the extract ID that we're looking to
170:13 check the status of. So back in n8n, the ID is going to be coming from the left-hand
170:16 side, the previous node, every time. So I'm just going to change the URL field
170:21 to an expression, put a slash, and then I'm going to grab the ID and pull it
170:25 right in there, and we're good to go. Except we need to set up our credential.
170:28 And this is why it's great that we already set this up as a generic header credential.
170:32 Now we can just easily pull in our Firecrawl auth and hit test step. So
170:37 what happens now is our request hasn't finished yet. As you can see, it
170:41 comes back as processing, and the data field is an empty array.
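That status check is just a GET against the same extract endpoint with the job ID appended — a sketch, with placeholder ID and key:

```
# the ID value is hypothetical — use the one returned by the extract call
curl https://api.firecrawl.dev/v1/extract/YOUR-EXTRACT-ID \
  -H "Authorization: Bearer fc-YOUR_API_KEY"

# while the job is still running, the response looks roughly like:
# {"success": true, "status": "processing", "data": []}
```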
170:44 So what we're going to set up real quick is something called polling, where we're basically checking
170:48 in on a specific ID, which is this one right here. And we're going to check:
170:51 if the data field is empty, then that means we're going to
170:55 wait a certain amount of time and come back and try again. So after the
170:59 request, I'm going to add an If node. This is just basically going to help us
171:02 create our filter. So, we're dragging in json.data, which as you can see is an
171:06 empty array, and we're just going to say 'is empty'. But one thing you have to keep
171:10 in mind is this doesn't match: as you can see, we're dragging in an array, and
171:14 we were trying to apply a string filter. So, we have to switch the type to array and
171:18 then say 'is empty'. And we'll hit test step. And this is going to say true: the
171:23 data field is empty. And so, if true, what we want to do is we're going to add
171:27 a Wait node. And this will wait for, you know, let's say in this case
171:30 five seconds. So if we hit test step, it's going to wait for five
171:33 seconds. And I wish I had switched the logic so that this branch would be
171:37 on the bottom, but whatever. And then we would just drag this right back into
171:41 here, and we would try it again. So now, after 5 seconds had passed, or however
171:45 much time, we would try this again. And now we can see that we have our item
171:48 back and the data field is no longer empty, because we have our quotes object,
171:53 which has 83 quotes. So we even got more than the time we did it in the
171:56 playground. And I'm thinking this is just because, you know, extract is
171:59 kind of still in beta, so it may not be super consistent. But that's still way
172:03 better than if we were to just do a simple GET request.
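Outside of n8n, that wait-check-repeat pattern is just a loop. A minimal sketch, assuming jq is installed and that the status field reads 'processing' until the job finishes:

```
# minimal polling sketch; EXTRACT_ID and key are placeholders
EXTRACT_ID="YOUR-EXTRACT-ID"
while true; do
  STATUS=$(curl -s "https://api.firecrawl.dev/v1/extract/$EXTRACT_ID" \
    -H "Authorization: Bearer fc-YOUR_API_KEY" | jq -r '.status')
  [ "$STATUS" != "processing" ] && break   # done (or failed) — stop waiting
  sleep 5                                  # same idea as the Wait node
done
```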
172:07 And then as you can see now, if we ran this next step, this would come out. Ah, but this is interesting. So
172:13 before it knows what it's pulling back, the JSON.data field is an array. And so
172:17 we're able to set up is the array empty? But now it's an object. So we can't put
172:21 it through the same filter because we're looking at a filter for an array. So
172:25 what I'm thinking here is we could set up this 'continue using error output'. So,
172:29 because this node would error, we could hit test step, and we could see now
172:33 it's going to go down the false branch. And so this basically just means it's
172:36 going to let us continue moving through the process. And we could do then
172:38 whatever we want to do down here. Obviously this isn't perfect because I
172:41 just set this up to show you guys and ran into that. But that's typically
172:45 the way we would think: how can we make this a little more dynamic, because
172:49 it has to deal with empty arrays or potentially full objects. Anyways, what
172:52 I wanted to show you guys now is back in our request if we were to get rid of
172:56 this asterisk, what would happen? So, we're just going to run this whole
172:59 process again. I'll hit test workflow. And now it's going to be sending that
173:04 request only to, you know, the one URL rather than the whole site. Aha. And I'm
173:08 glad we are doing live testing, because I made the mistake of putting this in as
173:12 $json.id, which doesn't exist if we're pulling from the Wait node. So all we
173:16 have to do in here is get rid of $json.id and pull in, basically, a
173:21 node reference variable. So we're going to do two curly braces, we're going to
173:25 be pulling from the Extract node, and now we just want to say
173:29 item.json.id, and we should be good to go now.
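Side by side, the broken reference and the fixed one look roughly like this in n8n's expression syntax (assuming the first request node is named 'Extract'):

```
{{ $json.id }}                    ← reads "id" from the immediately previous node
                                    (here that's the Wait node, which has no id)
{{ $('Extract').item.json.id }}   ← pins the reference to the node named "Extract"
```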
173:33 So I'm just going to refresh this, and we'll completely do it again. So: test workflow. We're doing the exact
173:37 same thing. It's not ready yet. So, we're going to wait 5 seconds and then
173:40 we're going to go check again. We hopefully should see, okay, it's not
173:42 ready still. So, we're going to wait five more seconds. Come check again. And
173:46 then whenever it is ready now, as you can see, it goes down this branch. And
173:50 we can see that we actually get our items back. And what you see here is
173:54 that this time we only got 10 quotes. You know, it says nine, but
173:57 computers count from zero. But we only got 10 quotes because we didn't put
174:04 an asterisk after the URL. So, Firecrawl didn't know it needed to go scrape
174:07 everything out of this whole base URL — it only scraped this one
174:11 specific page, which is this one right here, which does in fact only have 10
174:15 quotes. And by the way, super simple template here, but if you want to try it
174:18 out and just plug in your API key and different URLs, you can grab that in the
174:22 free Skool community. You'll hop in there, you will click on YouTube
174:24 resources and click on the post associated with this video, and you'll
174:28 have the JSON right there to download. Once you download that, all you have to
174:31 do is import it from file right up here, and you'll have the workflow. So,
174:34 there's a lot of cool use cases for firecrawl. It'd be cool to be able to
174:38 pull from a sheet, for example, of 30 or 40 or 50 URLs that we want to run
174:42 through and then update based on the results. You could do some really cool
174:45 stuff here, like researching a ton of companies and then having it also create
174:49 some initial outreach for you. So, I hope you guys enjoyed that one.
174:51 Firecrawl is a super cool tool. There's lots of functionality there and there's
174:55 lots of uses of AI in Firecrawl, which is awesome. We're going to move into a
174:57 different tool that you can use to scrape pretty much anything, which is
175:00 called Apify, which has a ton of different actors, and you can scrape,
175:04 like I said, almost anything. So, let's go into the setup video. So, Apify is
175:08 like a marketplace for actors, which essentially lets us scrape anything on
175:10 the internet. As you can see right here, we're able to explore 4,500 plus
175:14 pre-built actors for web scraping and automation. And it's really not that
175:17 complicated. An actor is basically just a predefined script that was already
175:20 built for us that we can just send off a certain request to. So, you can think of
175:23 it like a virtual assistant where you're saying, "Hey, I want to
175:26 use the TikTok virtual assistant, and I want you to scrape, you know, videos
175:30 that have the hashtag AI content." Or you could use the LinkedIn job scraper
175:33 and you could say, "I want to find jobs that are titled business analyst." So,
175:36 there's just so many ways you could use Apify. You could get leads from Google
175:39 Maps. You could get Instagram comments. You could get Facebook posts. There's
175:43 just almost unlimited things you can do here. You can even tap into Apollo's
175:46 database of leads and just get a ton. So today I'm just going to show you guys in
175:50 n8n the easiest way to set up an Apify actor, where you're going to start the
175:53 actor and then you're going to just grab those results. So what you're going to
175:56 want to do is head over to Apify using the link in the description, and then use
176:01 code 30 Nate Herk to get 30% off. Okay, like I said, what we're going to be
176:03 covering today is a two-step process where you make one request to Apify to
176:07 start up an actor and then you're going to wait for it to finish up and then
176:10 you're just going to pull those results back in. So let me show you what that
176:12 looks like. What I'm going to do is hit test workflow and this is going to start
176:15 the Google Maps actor. And what we're doing here is we're asking for dentists
176:18 in New York. And then if I go to my Apify console and I go over here to
176:22 actors and click on the Google Maps extractor one, if I click on runs, we
176:25 can see that there's one currently finishing up right now. And now that
176:28 it's finished, I can go back into our workflow. I can hook it up to the get
176:32 results node. Hit test step. And this is going to pull in those 50 dentists that
176:36 we just scraped in New York. And you can see this contains information like their
176:39 address, their website, their phone number, all this kind of stuff. So you
176:43 can just basically scrape these lists of leads. So anyways, that's how this
176:46 works, but let's walk through a live setup. So once you're in your Apify
176:49 console, you click on the Apify store, and this is where you can see all the
176:52 different actors. And let's do an example of like a social media one. So
176:55 I'm going to click on this TikTok scraper since it's just the first one
176:58 right here. And this may seem a little bit confusing, but it's not going to be
177:01 too bad at all. We get to basically do all this with natural language. So let
177:04 me show you guys how this works. So basically, we have this configuration
177:07 panel right here. When you open up any sort of actor, they won't always all be
177:11 the same, but in this one, what we have is videos with this hashtag. So, we can
177:14 put something in. I put in AI content to play around with earlier. And then you
177:17 can see it asks, how many videos do you want back? So, in this case, I put 10.
177:21 Let's just put 25 for the sake of this demo. And then you have the option to
177:24 add more settings. So, down here, we could do, you know, we could add certain
177:26 profiles that we want to scrape. We could add a different search
177:29 functionality. We could even have it download the videos for us. So, once
177:32 you're good with this configuration — and just don't overcomplicate it. Think of
177:35 it the same way you'd put in filters on an e-commerce website, or the
177:39 same way you'd, you know, fill in your order when you're DoorDashing some
177:42 food. So, now that we have this filled out the way we want it, all I'm going to
177:45 do is come up to the top right and hit API and click API endpoints. The first
177:49 thing we're going to do is we're going to use this endpoint called run actor.
177:52 This is the one that's basically just going to send a request to Apify and
177:55 start this process, but it's not going to give us the live results back. That's
177:59 why the second step later is to pull the results back. What you could do is you
178:02 could run the actor synchronously, meaning it's going to send it off and
178:05 it's just going to spin in n8n until it's done and until it has the
178:09 results. But I found this way to be more consistent. So anyways, all you have to
178:12 do is click on copy, and it's already going to have copied over your Apify API
178:17 key. So it's really, really simple. All we're going to do here is open up a new
178:20 HTTP request. I'm going to just paste in that URL that we just copied right here.
178:24 And that's basically all we have to do except for we want to change this method
178:28 to post because as you can see right here, it says post. And so this is
178:31 basically just us putting in the actor's phone number. And so we're giving it a
178:35 call. But now what we have to do is actually tell it what we want. So right
178:38 here, we've already filled this out. I'm going to click on JSON and all I have to
178:42 do is just copy this JSON right here. Go back into n8n, flick this on to send a
178:47 body, and we want to send over just JSON. And then all I have to do is paste that
178:49 in there. So, as you can see, what we're sending over to this TikTok scraper is:
178:54 I want AI content and I want 25 results. And then all this other stuff is false.
178:57 So, I'm just going to hit test step. And this basically returns us an ID
179:00 and says, okay, the actor started.
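Under the hood, that 'run actor' call is roughly the following — a sketch; the actor ID, token, and input field names (which vary from actor to actor) are all placeholders here:

```
# ACTOR_ID, token, and the input fields are placeholders — input names vary per actor
curl -X POST "https://api.apify.com/v2/acts/ACTOR_ID/runs?token=YOUR_APIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"hashtags": ["aicontent"], "resultsPerPage": 25}'
```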
179:04 If we go back into the console and we click on runs, we can see that this crawler is now running, and it's going to
179:07 basically tell us how much it cost, how long it took, and all this kind of stuff. And
179:11 now it's already done. So, what we need to do now is we need to click on API up
179:14 in the top right. Click on API endpoints again and scroll all the way down to the
179:19 bottom where we can see get last run data set items. So, all I need to do is
179:23 hit this copy button right here, go back into n8n, and then open up another HTTP
179:27 request. And then I'm just going to paste that URL right in there once
179:30 again. And I don't even have to change the method because if we go in here, we
179:34 can see that this is a GET. So, all I have to do is hit test step. And this is
179:38 going to pull in those 25 results from our TikTok scrape based on the search
179:42 term 'AI content'.
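And the second call — a sketch of the 'get last run dataset items' endpoint Apify copies for you, with the same placeholders:

```
# same ACTOR_ID and token placeholders as the run call above
curl "https://api.apify.com/v2/acts/ACTOR_ID/runs/last/dataset/items?token=YOUR_APIFY_TOKEN"
```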
179:46 You can see right here it says 25 items. And just to show you guys that it really is 25 items, I'm just going to grab a Set field. We're
179:49 going to just drag in the actual text from here and hit test step. And it
179:53 should— oh, we have to connect a trigger. So, I'm just going to move this trigger
179:57 over here real quick. And what you can do, because we already have our
180:00 data here, is just pin it so we don't actually have to run it again. But then
180:03 I'll hit test step. And now we can see we're going to get our 25 items right
180:08 here, which are all of the text content. So I think just the captions or the
180:11 titles of these TikToks. And we have all 25 TikToks, as you can see. So I
180:15 just showed you guys the two-step method. And why I've been using it
180:18 because here's an example where I did the synchronous run. So all I did was I
180:22 came to the Google Maps actor and I went to API endpoints, and then I wanted to do
180:26 'run actor synchronously', which basically means that it would run in n8n and it
180:30 would spin until the results were done and then it should feed back the output.
180:34 So I copied that I put it into here and as you can see I just ran it with the
180:37 Google Maps actor looking for plumbers, and we got nothing back. So that's why we're
180:40 taking this two-step approach where as you can see here we're going to do that
180:43 exact same request. We're doing a request for plumbers and we're going to
180:47 fire this off. And so nothing came back in n8n. But if we go to our actor and
180:50 we go to runs, we can see right here that this was the one that we just made
180:54 for plumbers. And if we click into it, we can see all the plumbers. So that's
180:57 why we're taking the two-step approach. I'm going to make the exact same request
181:00 here for New York plumbers. And what I'm going to do is just run this workflow.
181:03 And now I wanted to talk about what we have to do because what happens is we
181:07 started the actor. And as you can see, it's running right now. And then it went
181:10 to grab the results, but the results aren't done yet. So that's why it comes
181:13 back and says this is an item, but it's empty. So, what we want to do is we want
181:17 to go to our runs and we want to see how long this is taking on average for 50
181:21 leads. As you can see, the most amount of time it's ever taken was 19 seconds.
181:24 So, I'm just going to go in here and in between the start actor and grab
181:28 results, I'm going to add a wait, and I'm just going to tell this thing to
181:31 wait for 22 seconds just to be safe. And now, what I'm going to do is just run
181:33 this thing again. It's going to start the actor. It's going to wait for 22
181:37 seconds. So, if we go back into Apify, you can see that the actor is once again
181:41 running. After about 22 seconds, it's going to pass over and then we should
181:45 get all 50 results back in our HTTP request. There we go. Just finished up.
181:48 And now you can see that we have 50 items which are all of the plumbers that
181:53 we got in New York. So from here, now you have these 50 leads. And
181:56 remember, if you want to come back into Apify and change up your input, you can
182:00 change how many places you want to extract. So if you changed this to 200
182:03 and then you clicked on JSON and you copied in that body, you would now be
182:07 searching for 200 results. But anyways, that's the hard part — getting the
182:11 leads into n8n. But now we have all this data about them, and we can just, you
182:14 know, do some research, send them off an email — whatever it is, we can just
182:18 basically have this thing running 24/7. And if you wanted to make this workflow
182:21 more advanced, to handle a more dynamic number of results, what
182:25 you'd want to use is a technique called polling. So basically, you'd wait, you
182:28 check in, and then if the results were all done, you continue down the process.
182:32 But if they weren't all done, you would basically wait again and come back. And
182:36 you would just loop through this until you're confident that all of the results
182:39 are done. So that's going to be it for this one. I'll have this template
182:42 available in my free Skool community if you want to play around with it. Just
182:44 remember you'll have to come in here and you'll have to switch out your own API
182:47 key. And don't forget, when you get to Apify, you can use code 30 Nate Herk to
182:51 get 30% off. Okay, so those were some APIs that we can use to actually scrape
182:54 information. Now, what if we want to use APIs to generate some sort of content?
182:58 We're going to look at an image generation API from OpenAI and we're
183:02 going to look at a video generation API called Runway. So these next two
183:05 workflows will explain how you set up those API calls and also how you can
183:09 bake them into a workflow to be a little bit more practical. So let's take a
183:13 look. So this workflow right here, all I had to do was enter in ROI on AI
183:17 automation and it was able to spit out this LinkedIn post for me. And if you
183:21 look at this graphic, it's insane. It looks super professional. It even has a
183:24 little LinkedIn logo in the corner, but it directly calls out the actual
183:28 statistics that are in the post based on the research. And for this next one, all
183:31 I typed in was mental health within the workplace and it spit out this post.
183:34 According to Deloitte Insights, organizations that support mental health
183:38 can see up to 25% increase in productivity. And as you can see down
183:42 here, it's just a beautiful graphic. So, a few weeks ago, when ChatGPT came out
183:45 with their image generation model, you probably saw a lot of stuff on LinkedIn
183:48 like this where people were turning themselves into action figures or some
183:51 stuff like this where people were turning themselves into Pixar animation
183:55 style photos or whatever it is. And obviously, I had to try this out myself.
183:58 And of course, this was very cool and everyone was getting really excited. But
184:01 then I started to think about how could this image generation model actually be
184:06 used to save time for a marketing team because this new image model is actually
184:09 good at spelling and it can make words that don't look like gibberish. It opens
184:13 up a world of possibilities. So here's a really quick example of me giving it a
184:16 one-sentence prompt, and it spits out a poster that looks pretty solid. Of
184:20 course, we were limited to having to do this in ChatGPT and coming in here and
184:24 typing, but now the API is released, so we can start to save hours and hours of
184:27 time. And so, the automation I'm going to show with you guys today is going to
184:30 help you turn an idea into a fully researched LinkedIn post with a graphic
184:34 as well. And of course, we're going to walk through setting up the HTTP request
184:39 to OpenAI's image generation model. But what you can do is also download this
184:42 entire template for free and you can use it to post on LinkedIn or you can also
184:46 just kind of build on top of it to see how you can use image generation to save
184:50 you hours and hours within some sort of marketing process. So this workflow
184:54 right here, all I had to do was enter in ROI on AI automation and it was able to
184:58 spit out this LinkedIn post for me. And if you look at this graphic, it's
185:01 insane. It looks super professional. It even has a little LinkedIn logo in the
185:05 corner, but it directly calls out the actual statistics that are in the post
185:09 based on the research. So 74% of organizations say their most advanced AI
185:13 initiatives are meeting or exceeding ROI expectations right here. And on the
185:17 other side, we can see that only 26% of companies have achieved significant
185:21 AI-driven gains so far, which is right here. And I was just extremely impressed
185:24 by this one. And for this next one, all I typed in was mental health within the
185:28 workplace. And it spit out this post. According to Deloitte Insights,
185:31 organizations that support mental health can see up to 25% increase in
185:35 productivity. And as you can see down here, it's just a beautiful graphic —
185:38 something that would probably take me 20 minutes in Canva. And if you can now
185:42 push out these posts in a minute rather than 20 minutes, you can start to push
185:45 out more and more throughout the day and save hours every week. And because the
185:49 post is backed by research and the graphic is backed by the research in the
185:53 post, you're not just polluting the internet with what a lot of people in my
185:56 comments call AI slop. Anyways, let's do a quick live run of this workflow and
185:59 then I'll walk through step by step how to set up this API call. And as always,
186:03 if you want to download this workflow for free, all you have to do is join my
186:06 free Skool community. The link is down in the description, and then you can search
186:09 for the title of the video. You can go into YouTube resources. You need to find
186:13 the post associated with this video and then when you're in there, you'll be
186:16 able to download this JSON file and that is the template. So you download the
186:20 JSON file, you'll go back into n8n, you'll open up a new workflow, and in the
186:25 top right you'll go to import from file. Import that JSON file and then there'll
186:28 be a little sticky note with a setup guide just sort of telling you what you
186:31 need to plug in to get this thing to work for you. Okay, quick disclaimer
186:33 though: I'm not actually going to post this to LinkedIn. You certainly could,
186:37 but I'm just going to basically send the post as well as the attachment to my
186:41 email because I don't want to post on LinkedIn right now. Anyways, as you can
186:45 see here, this workflow is starting with a form submission. So, if I hit test
186:48 workflow, it's going to pop up with a form where we have to enter in our email
186:53 for the workflow to send us the results. Topic of the post and then also I threw
186:57 in here a target audience. So, you could have these posts be kind of flavored
187:00 towards a specific audience if you want to. Okay, so this form is waiting for
187:04 us. I put in my email. I put the topic of morning versus night people and the
187:07 target audience is working adults. So, we'll hit submit, close out of here, and
187:10 we'll see the LinkedIn post agent is going to start up. It's using Tavily here
187:14 for research and it's going to create that post and then pass the post on to
187:19 the image prompt agent. And that image prompt agent is going to read the post
187:22 and basically create a prompt to feed into OpenAI's image generator. And as you can
187:28 see, it's doing that right now. We're going to get that back as a base64
187:32 string. And then we're just converting that to binary so we can actually post
187:36 that on LinkedIn or send that in email as an attachment and we'll break down
187:39 all these steps. But let's just wait and see what these results look like here.
187:43 Okay, so all that just finished up. Let me pop over to email. So in email, we
187:46 got our new LinkedIn post. Are you a morning lark or a night owl? The science
187:49 of productivity. I'm not going to read through this right now exactly, but
187:53 let's take a look at the image we got. When are you most productive? In the
187:57 morning, plus 10% productivity or night owls thrive in flexibility. I mean, this
188:00 is insane. This is a really good graphic. Okay, so now that we've seen
188:04 again how good this is, let's just break down what's going on. We're going to
188:07 start off with the LinkedIn post agent. All we're doing is we're feeding in two
188:11 things from the form submission, which was what is the topic of the post, as
188:14 well as who's the target audience. So right here, you can see morning versus
188:18 night people and working adults. And then we move into the actual system
188:21 prompt, which I'm not going to read through this entire thing. If you
188:23 download the template, the prompt will be in there for you to look at. But
188:26 basically I told it you are an AI agent specialized in creating professional
188:30 educational and engaging LinkedIn posts based on a topic provided by the user.
188:34 We told it that it has a tool called Tavily that it will use to search the web
188:38 and gather accurate information and that the post should be written to appeal to
188:42 the provided target audience. And then basically just some more information
188:45 about how to structure the post, what it should output and then an example which
188:49 is basically you receive a topic. You search the web, you draft the post and
188:53 you format it with source citations, clean structure, optional hashtags and a
188:57 call to action at the end. And as you can see what it outputs is a super clean
189:01 LinkedIn post right here. So then what we're going to do is basically we're
189:05 feeding this output directly into that next agent. And by the way, they're both
189:10 using GPT-4.1 through OpenRouter. All right, but before we look at the
189:12 image prompt agent, let's just take a look at these two things down here. So
189:15 the first one is the chat model that plugs into both image prompt agent and
189:19 the LinkedIn post agent. So all you have to do is go to open router, get an API
189:22 key, and then you can choose from all these different models. And in here, I'm
189:24 using GPT4.1. And then we have the actual tool that the LinkedIn agent uses for its
189:31 research which is Tavi. And what we're doing here is we're sending off a post
189:35 request using an HTTP request tool to the Tavi endpoint. So this is where
189:39 people typically start to feel overwhelmed when trying to set up these
189:42 requests because it can be confusing when you're trying to look through that
189:45 API documentation — which is exactly why in my paid community I created an APIs
189:49 and HTTP requests deep dive because truthfully you need to understand how to
189:54 set up these requests because being able to connect to different APIs is where
189:58 the magic really happens. So Tavily just lets your LLM connect to the web, and
190:02 it's really good for web search and it also gives you a thousand free searches
190:05 per month. So that's the plan that I'm on. Anyways, once you're in here and you
190:08 have an account and you get an API key, all I did was go to the Tavily search
190:12 endpoint and you can see we have a curl statement right here where we have this
190:17 endpoint, we have POST as the method, this is how we authorize ourselves,
190:20 and this is all going to be pretty similar to the way that we set up the
190:23 actual request to OpenAI's image generation API. So, I'm not going to
190:26 dive into this too much. When you download this template, all you have to
190:30 do is plug in your Tavily API key. But later in this video, when we walk through
190:35 setting up the request to OpenAI, this should make more sense. Anyways, the
190:38 main thing to take away from this tool is that we're using a placeholder for
190:41 the request, because in the request we send over to Tavily, we basically say,
190:44 okay, here's the search query that we're going to search the internet for. And
190:47 then we have all these other little settings we can tweak like the topic,
190:51 how many results, how many chunks per source, all this kind of stuff. All we
190:55 really want to touch right now is the query. And as you can see, I put this in
190:59 curly braces, meaning it's a placeholder. I'm calling the placeholder
191:02 search term. And down here, I'm defining that placeholder as what the user is
191:06 searching for. So, as you can see, this data in the placeholder is going to be
191:09 filled in by the model. So, based on our form submission, when we asked it to,
191:13 you know, create a LinkedIn post about morning versus night people, it fills
191:17 out the search term with latest research on productivity, morning people versus
191:21 night people — and that's basically how it searches the internet.
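For reference, the tool's request is roughly this — a sketch of Tavily's search endpoint as documented at the time; {searchTerm} is the n8n placeholder the model fills in, and depending on the API version the key may go in the body as api_key instead of a Bearer header:

```
# {searchTerm} is the n8n placeholder the model fills in at run time
curl -X POST https://api.tavily.com/search \
  -H "Authorization: Bearer tvly-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "{searchTerm}",
    "topic": "general",
    "max_results": 3
  }'
```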
191:24 And then we get our results back, and now it creates a LinkedIn post that we're ready to pass
191:29 off to the next agent. So the output of this one gets fed into this next one,
191:32 which all it has to do is read the output. As you can see right here, we
191:36 gave it the LinkedIn post, which is the full one that we just got spit out. And
191:39 then our system message is basically telling it to turn that into an image
191:43 prompt. This one is a little bit longer. Not too bad, though. I'm not going to
191:46 read the whole thing, but essentially we're telling it that it's going to be
191:50 an AI agent that transforms a LinkedIn post into a visual image prompt for a
191:56 text-to-image AI generation model. So, we told it to read the post, identify the
192:00 message, identify the takeaways, and then create a compelling graphic prompt
192:04 that can be used with a text-to-image generator. We gave it some output
192:06 instructions like, you know, if there's numbers, try to work those into the
192:10 prompt. Um, you can use, you know, text, charts, icons, shapes, overlays,
192:14 anything like that. And then the very bottom here, we just gave it sort of
192:17 like an example prompt format. And you can see what it spits out is an image
192:21 prompt. So it says a dynamic split screen infographic style graphic. Left
192:25 side has a sunrise, it's bright yellow, and it has morning larks plus 10%
192:29 productivity. And the right side is a night sky, cool blue gradients,
192:33 a crescent moon, all this kind of stuff. And that is exactly what we saw back in
192:38 here when we look at our image. And so this is just so cool to me because first
192:41 of all, I think it's really cool that it can read a post and kind of use its
192:44 brain to say, "Okay, this would be a good, you know, graphic to be looking at
192:47 while I'm reading this post." But then on top of that, it can actually just go
192:51 create that for us. So, I think this stuff is super cool. You know, I
192:53 remember back in September, I was working on a project where someone
192:57 wanted me to help them with LinkedIn automated posting and they wanted visual
193:00 elements as well, and I was like, uh, I don't know, that might have to be a
193:04 couple-months-away thing when we have some better models — and now we're here.
193:07 So, it's just super exciting to see. But anyways, now we're going to feed that
193:12 output, the image prompt into the HTTP request to OpenAI. So, real quick, let's
193:16 go take a look at OpenAI's documentation. So, of course, we have
193:20 the GPT Image API, which lets you create, edit, and transform images.
193:24 You've got different styles, of course. You can do, like, memes with text.
193:29 You can do creative things. You can turn other images into different images. You
193:31 can do all this kind of stuff. And this is where it gets really cool, these
193:35 posters and the visuals with words because that's the kind of stuff where
193:39 typically AI image gen just wasn't there yet. And one thing real quick: in your
193:42 OpenAI account, which is different from your ChatGPT account — this is where you
193:46 add the billing for your OpenAI API calls. You have to have your
193:50 organization verified in order to actually be able to access this model
193:54 through the API. It took me about 2 minutes; you basically just have to
193:57 submit an ID and it has to verify that you're human and then you'll be verified
194:00 and then you can use it. Otherwise, you're going to get an error message
194:02 that looks like this that I got earlier today. But anyways, the verification
194:06 process does not take too long. Anyways, then you're going to head over to the
194:08 API documentation that I will have linked in the description where we can
194:12 see how we can actually create an image in n8n. So, we're going to dive deeper
194:16 into this documentation in the later part of this video where I'm walking
194:19 through a step-by-step setup of this. But we're using the endpoint which
194:23 is going to create an image. So, we have this URL right here. We're going to be
194:27 creating a post request and then we just obviously have our things that we have
194:30 to configure like the prompt in the body. We have to obviously send over
194:35 some sort of API key. We have to, you know, we can choose the size. We can
194:37 choose the model, all this kind of stuff. So back in n8n, you can see that
194:41 I'm sending a post request to that endpoint. For the headers, I set up my
194:44 API key right here, but I'm going to show you guys a better way to do that in
194:47 the later part of this video. And then for the body, we're saying, okay, I want
194:51 to use the GPT image model. Here's the actual prompt to use for the image, which
194:54 we dragged in from the image prompt agent. And then finally the size we just
194:59 left it as that 1024x1024 square image. And so this is interesting,
195:03 because what we get back is a massive base64 string. Like, this thing
195:08 is huge. I can't even scroll right now. My screen's kind of frozen. Anyways,
195:12 yeah, there it goes. It just kind of lagged. But we got back this massive
195:15 file. We can see how many tokens this was. And then what we're going to do is
195:20 we're going to convert that to binary data. So that's how we can actually get
195:23 the file as an image. As you can see now after we turn that nasty string into a
195:28 file, we have the binary image right over here. So all I did was I basically
195:32 just dragged in this field right here with that nasty string. And then when
195:36 you hit test step, you'll get that binary data.
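Outside of n8n, that base64-to-file step is a one-liner — a sketch assuming the raw API response was saved to response.json and jq is installed:

```
# pull out the base64 string and decode it into an image file
jq -r '.data[0].b64_json' response.json | base64 --decode > image.png
```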
195:39 And then from there, you have the binary data and you have the LinkedIn post. All you have to do is,
195:43 you know, activate LinkedIn, drag it right in there. Or you can just do what
195:47 I did, which is I'm sending it to myself in email. And of course, before you guys
195:50 yell at me, let's just talk about how much this run cost me. So, this was
195:55 4,273 tokens. And if we look at this API and we go down to the pricing section,
195:59 we can see that for image output tokens, which was generated images, it's going
196:03 to be 40 bucks per million tokens, which comes out to about 17 cents
196:06 (4,273 ÷ 1,000,000 × $40 ≈ $0.17), as you can see right here — hopefully I did the math right. But really, for the
196:09 quality and kind of for the industry standard I've seen for price, that's on
196:13 the cheaper end. And as you can see down here, it translates roughly to 2 cents,
196:17 7 cents, 19 cents per generated image for low, medium, and high quality.
196:20 But anyways, now that that's out of the way, let's just set up an HTTP
196:25 request to that API and generate an image. So, I'm going to add a first
196:28 step. I'm just going to grab an HTTP request. So, I'm just going to head over
196:31 to the actual API documentation from OpenAI on how to create an image and how
196:35 to hit this endpoint. And all we're going to do is we're going to copy this
196:38 curl command over here on the right. If you're not seeing a curl command — if
196:41 you're seeing Python — just change that to curl. Copy that. And then we're going
196:45 to go back into n8n, hit import curl, and paste that in there. And then once we
196:49 hit import, we're almost done. So that curl statement basically just auto-
196:52 populated almost everything we need to do. Now we just have a few minor tweaks.
196:55 But as you can see, it changed the method to post. It gave us the correct
196:59 URL endpoint already. It has us sending a header, which is our authorization,
197:02 and then it has our body parameters filled out where all we'd really have to
197:06 change here is the prompt. And if we wanted to, we can customize this kind of
197:09 stuff. And that's why it's going to be really helpful to be able to understand
197:13 and read API documentation so you know how to customize these different
197:16 requests. Basically, all of these little things here like prompt, background,
197:20 model, n, output format — they're just little levers that you can pull and
197:23 tweak in order to change your output. But we're not going to dive too deep
197:26 into that right now. Let's just see how we can create an image.
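The imported curl command comes out roughly like this — close to what OpenAI's docs showed at the time; prompt and size are the main knobs we'll touch, and the key is a placeholder:

```
curl https://api.openai.com/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_OPENAI_API_KEY" \
  -d '{
    "model": "gpt-image-1",
    "prompt": "a cute baby sea otter making pancakes",
    "size": "1024x1024"
  }'
```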
197:30 Anyways, before we grab our API key and plug that in: when you're in your OpenAI account, make
197:33 sure that your organization is verified. Otherwise, you're going to get this
197:35 error message and it's not going to let you access the model. Doesn't take long.
197:39 Just submit an ID. And then also make sure that you have billing information
197:43 set up so you can actually pay for an image. But then you're going to go down
197:47 here to API keys. You're going to create new secret key. This one's going to be
197:53 called image test just for now. And then you're going to copy that API key. Now
197:56 back in n8n, it has this already set up for us, where all we need to do is
198:00 delete all this. We're going to keep the space after Bearer, and we can paste in
198:03 our API key like that, and we're good to go. But if you want a better method, to
198:08 be able to save this key in n8n so you don't have to go find it every time,
198:12 what you can do is come to authentication, go to generic, and then you're
198:17 going to choose header auth — and we know it's a header because right here we're
198:20 sending it as a header parameter, and this is where we're authorizing
198:22 ourselves. So we're just going to do the same up here with the header auth. And
198:26 then we're going to create a new one. I'm just going to call this one openai
198:30 image just so we can keep ourselves organized. And then you're going to do the same
198:34 thing as what we saw down in that header parameter field — meaning the
198:39 name is Authorization and the value is 'Bearer', a space, then the API key. So
198:44 that's all I'm going to do. I'm going to hit save. We are now authorized to
198:49 access this endpoint. And I'm just going to turn off sending headers because
198:53 we're technically sending headers right up here with our authentication. So we
198:57 should be good now. Right now we'll be getting an image of a cute baby sea
199:01 otter, and I'm just going to add 'making pancakes'. And we'll hit test
199:05 step. And this should be running right now. Okay — bad request: please
199:09 check your parameters. Invalid type for n: it expected an integer but got a
199:13 string instead. So, if you go back to the API documentation, we can see n
199:18 right here. It should be integer or null, and it's also optional. So, I'm
199:21 just going to delete that. We don't really need that. And I'm going to hit test step.
199:26 And while that's running, real quick we'll just look back at n. And this
199:29 basically says the number of images to generate must be between 1 and 10. So
199:32 that's like one of those little levers you could tweak like I was talking about
199:36 if you want to customize your request. But right now by default it's only going
199:40 to give us one. Looks like this HTTP request is working. So I'll check in
199:45 with you guys in 20 seconds when this is done. Okay. So now that that finished
199:49 up — it didn't take too long — we have a few things, and all we really need is this
199:52 base64. And we can see again this one cost around 17 cents. And now we just have
199:57 to turn this into binary so we can actually view an image. So I'm going to
200:01 add a plus after the HTTP request. I'm just going to type in binary. And we can
200:06 see convert to file, which is going to convert JSON data to binary data. And
200:11 all we want to do here is 'move base64 string to file', because this is a base64
200:15 string in the JSON, and it basically represents the image. So I'm going to drag that into
200:19 there. And then when I hit test step, we should be getting a binary image output
200:24 in a field called data, as you can see right here.
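For context, the raw response we're converting looks roughly like this — a truncated sketch with illustrative values; the token count is from the run shown earlier:

```
{
  "created": 1700000000,
  "data": [
    { "b64_json": "iVBORw0KGgoAAAANSU..." }
  ],
  "usage": { "total_tokens": 4273 }
}
```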
200:28 And this should be our image of a cute sea otter making pancakes. As you can see, it's not super
200:32 realistic, and that's because the prompt didn't have any photorealistic or
200:36 hyperrealistic elements in there, but you can easily make it do so. And of
200:39 course, I was playing around with this earlier, and just to show you guys, you
200:41 can make some pretty cool realistic images. Here was a post I made about
200:47 if ancient Rome had access to iPhones — and obviously, this is not
200:51 a real Twitter account. And this one is "dinosaurs evolved into modern-day
200:55 influencers." This was just me testing an automation using this
200:59 API and auto posting, but not as practical as like these LinkedIn
201:02 graphics. But if you guys want to see a video sort of like this, let me know. Or
201:05 if you also want to see a more evolved version of the LinkedIn posting flow and
201:09 how we can make it even more robust and even more automated, then definitely let
201:15 me know. Okay. So, all I have to do in this form submission is enter in a
201:19 picture of a product, enter in the product name, the product description,
201:22 and my email address. And we'll send this off, and we'll see the workflow
201:26 over here start to fire off. So, we're going to upload the photo. We're going
201:29 to get an image prompt. We're going to download that photo. Now, we're creating
201:32 a professional graphic. So, after our image has been generated, we're
201:35 uploading it to an API to get a public URL, so we can feed that URL of the image
201:40 into Runway to generate a professional video. Now, we're going to wait 30
201:42 seconds and then we'll check in to see if the video is done. If it's not done
201:45 yet, we're going to come down here and poll: wait five more seconds, and then
201:48 go check in. And we're going to do this infinitely until our video is actually
201:52 done. So, anyways, it just finished up. It ended up hitting this check eight
201:55 times, which indicates I should probably increase the wait time over here. But
201:58 anyways, let's go look at our finished products. So, we just got this new
202:01 email. Here are the requested marketing materials for your toothpaste. So,
202:04 first, let's look at the video cuz I think that's more exciting. So, let me
202:06 open up this link. Wow, we got a 10-second video. It's spinning. It's 3D.
202:10 The lighting is changing. This looks awesome. And then, of course, it also
202:13 sends us that image, in case we want to use that as well. And one of the steps
202:16 in the workflow is that it's going to upload your original image to your
202:19 Google Drive. So here you can see this was the original and then this was the
202:22 finished product. So now you guys have seen a demo. We're going to build this
202:25 entire workflow step by step. So stick with me because by the end of this
202:28 video, you'll have this exact system up and running. Okay. So when we're setting
202:32 up a system where we're creating an image from text and then we're creating
202:35 a video from that image, the two most important things are going to be that
202:38 image prompt and that video prompt. So what we're going to do is head over to
202:41 my Skool community. The link for that will be down in the description. It's a
202:43 free Skool community. And then what you're going to do is either search for
202:46 the title of this video or click on YouTube resources and find the post
202:50 associated with this video. And when you click into there, there'll be a doc that
202:54 will look like this or a PDF and it will have the two prompts that you'll need in
202:57 order to run the system. So head over there, get that doc, and then we can hop
203:00 into the step by step. And that way we can start to build this workflow and you
203:03 guys will have the prompts to plug right in. Cool. So once you have those, let's
203:07 get started on the workflow. So as you guys know, a workflow always has to
203:11 start with some sort of trigger. So in this case, we're going to be triggering
203:14 this workflow with a form submission. So I'm just going to grab the native n8n
203:18 Form trigger, On New Form Event. So we're going to configure what this form is going to
203:20 look like and what it's going to prompt a user to input. And then whenever
203:24 someone actually submits a response, that's when the workflow is going to
203:27 fire off. Okay. So I'm going to leave the authentication as none. The form
203:31 title, I'm just putting go to market. For the form description, I'm going to
203:36 say give us a product photo, title, and description, and we'll get back to you
203:40 with professional marketing materials. And if you guys are interested in what I
203:43 just used to dictate that text, there'll be a link for Wispr Flow down in the
203:46 description. And now we need to add our form elements. So the first one is going
203:50 to be not a text. We're going to have them actually submit a file. So click on
203:54 file. This is going to be required. I only want them to be allowed to upload
203:57 one file. So I'm going to switch off multiple files. And then for the field
204:01 name, we're just going to say product photo. Okay. So now we're going to add
204:04 another one, which is going to be the product title. So I'm just going to
204:07 write product title. This is going to be text. For placeholder, let's just put
204:10 toothpaste since that was the example. This will be a required field. So, the
204:13 placeholder is just going to be the gray text that fills in the text box so
204:17 people know what to put in. Okay, we're adding another one
204:21 called product description. We'll make this one required. We'll just leave the
204:24 placeholder blank cuz you don't need it. And then finally, what we need to get
204:27 from them is an email, but instead of doing text, we can actually make it
204:31 require a valid email address. So, I'm just going to call it email and we'll
204:34 just say something like name@email.com so they know what a valid email looks like. We'll make that
204:38 required because we have to send them an email at the end with their materials.
204:42 And now we should be good to go. So if I hit test step, we'll see that it's going
204:45 to open up a form submission and it has everything that we just configured. And
204:48 now let me put in some sample data real quick. Okay, so I put a picture of a
204:52 cologne bottle. The title's cologne. I said the cologne smells very clean and fresh
204:55 and it's a very sophisticated scent because we're going to have that
204:59 description be used to sort of help create that text image prompt. And then
205:02 I just put my email. So I'm going to submit this form. We should see that
205:05 we're going to get data back right here in n8n, which is the binary photo.
205:08 This is the product photo that I just submitted. And then we have our actual
205:13 table of information, like the title, the description, and the email.
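As a rough sketch, the item the form trigger hands to the next node is shaped something like this (key names depend on exactly what you called your form fields, so treat these as illustrative):

```python
# Illustrative shape of one n8n item coming out of the form trigger:
# regular fields land under "json", the uploaded file under "binary".
form_item = {
    "json": {
        "Product Title": "cologne",
        "Product Description": "Smells very clean and fresh...",
        "Email": "name@email.com",
    },
    "binary": {
        "Product Photo": {
            "mimeType": "image/jpeg",   # metadata about the upload
            "fileName": "cologne.jpg",
            "data": "<file contents>",  # the binary payload itself
        }
    },
}
```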
205:17 And so when I'm building stuff step by step, what I like to do is I get the data in here,
205:20 and then I pretty much will just build node by node, testing the data all the
205:23 way through, making sure that nothing's going to break when variables are being
205:27 passed from left to right in this workflow. Okay, so the next thing that
205:30 we need to do is we have this binary data in here and binary data is tough to
205:34 reference later. So what I'm going to do is I'm just going to upload it straight
205:37 to our Google Drive so we can pull that in later when we need it to actually
205:41 edit that image. Okay, so that's our form trigger. That's what starts the
205:44 workflow. And now what we're going to do next is we want to upload that original
205:48 image to Google Drive so we can pull it in later and then use it to edit the
205:51 image. So what I'm going to do is I'm going to click on the plus. I'm going to
205:54 type in Google Drive. And we're going to grab a Google Drive operation. That is
205:59 going to be upload file. So, I'll click on upload file. And at this point, you
206:02 need to connect your Google Drive. So, I'm not going to walk through that step
206:05 by step, but I have a video right up here where I do walk through it step by
206:08 step. But basically, you're just going to go to the docs, open up a
206:12 Google Cloud console, and then you just have to
206:15 connect yourself and enable the right credentials and APIs. But like I
206:19 said, that video will walk through it. Anyways, now what we're doing is we have
206:23 to upload the binary field right here to our Google Drive. So, it's not called
206:27 data. We can see over here it's called product photo. So, I'm just going to
206:29 copy and paste that right there. So, it's going to be looking for that
206:32 product photo. And then we have to give it a name. So, that's why we had the
206:36 person submit a title. So, all I'm going to do is for the name, I'm going to make
206:40 this an expression instead of fixed because this name is going to change
206:43 based on the actual product coming through. I'm going to drag in the
206:47 product title from the left right here. So now the photo in Google Drive is
206:51 going to be called cologne, and then I'm just going to, in parentheses, say
206:55 original. So because this is an expression, it basically means whenever
206:58 someone submits a form, whatever the title is, it's going to be title and
207:01 then it's going to say original. And that's how we sort of control that to be
207:05 dynamic. Anyways, then I'm just choosing what folder to go in. So in my drive,
207:08 I'm going to choose it to go to a folder that I just made called product
207:12 creatives. So once we have that configured, I'm going to hit test step.
207:15 We're going to wait for this to spin; it means that it's trying to upload it
207:18 right now. And then once we get that success message, we'll quickly go to our
207:21 Google Drive and make sure that the image is actually there. So there we go.
207:25 It just came back. And now I'm going to click into Google Drive, click out of
207:27 the toothpaste, and we can see we have cologne. And that is the image that we
207:31 just submitted in n8n. All right. Now that we've done that, what we want to do
207:35 is we want to feed the data into an AI node so that it can create a text image
207:38 prompt. So I'm going to click on the plus. I'm going to grab an AI agent. And
207:43 before we do anything in here, I'm first of all going to give it its brain. So,
207:45 I'm going to click on the plus under chat model. I'm personally going to grab
207:49 an OpenRouter chat model, which basically lets you connect to a ton of
207:53 different things. Let me see: openrouter.ai. It basically lets you connect
207:57 your agents to all the different models. So, if I click on models up here, we can
208:00 see that it just lets you connect to Gemini, Anthropic, OpenAI, Deepseek. It
208:04 has all these models all in one place. So, go to OpenRouter, get an API
208:08 key, and then once you come back into here, all you have to do is connect your
208:11 API key. And what I'm going to use here is going to be GPT-4.1. And then I'm just
208:15 going to name this so we know which one I'm using here. And then we now have our
208:20 agent accessing GPT-4.1. Okay. So now you're going to go to that PDF that I have in the Skool
208:26 community and you're just going to copy this product photography prompt. Grab
208:31 that. Go back to the AI agent and then you're going to click on add option. Add
208:35 a system message. And then we're basically just going to I'm going to
208:38 click on expression and expand this full screen so you guys can see it better.
208:40 But I'm just going to paste that prompt in here. And this is going to tell the
208:44 AI agent how to take what we're giving it and turn it into a text-image
208:51 prompt optimized for professional-style, you know, studio photography. So, we're
208:55 not done yet because we have to actually give it the dynamic information from our
208:59 form submission every time. So, that's a user message. That's basically what it's
209:02 going to look at. So, the user message is what the agent's going to look at
209:05 every time. And the system message is basically like here are your
209:09 instructions. So for the user message, we're not going to be using a connected
209:11 chat trigger node. We're going to define it below. And when we want to make sure
209:15 that this changes every time, we have to make sure it's an expression. And then
209:19 I'm just going to drill down over here to the form submission. And I'm going to
209:21 say, okay, here's what we're going to give this agent. It's going to get the
209:26 product, which the person submitted to us in the form, and we can drag in the
209:31 product, which was cologne, as you can see on the right. And then they also
209:36 gave us a description. So, all I have to do now is drag in the product
209:38 description. And so, now every time the agent will be looking at whatever
209:42 product and description that the user submitted in order to create its prompt.
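Under the hood, this agent step boils down to one chat-completion call: a fixed system message plus a dynamic user message. A rough Python sketch against OpenRouter's OpenAI-compatible endpoint (the model ID and prompt contents here are placeholders):

```python
import requests

OPENROUTER_API_KEY = "sk-or-..."  # your OpenRouter key

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {OPENROUTER_API_KEY}"},
    json={
        "model": "openai/gpt-4.1",  # any model OpenRouter exposes
        "messages": [
            # System message: the fixed instructions, i.e. the product
            # photography prompt from the doc.
            {"role": "system", "content": "<product photography prompt>"},
            # User message: rebuilt from the form submission on every run.
            {"role": "user", "content": "Product: cologne\n"
                                        "Description: very clean, fresh, sophisticated scent"},
        ],
    },
)
image_prompt = resp.json()["choices"][0]["message"]["content"]
```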
209:46 So, I'm going to hit test step. We'll see right now it's using its chat model
209:50 GPT-4.1. And it's already created that prompt for us. So, let's just give it a
209:53 quick read. Hyperrealistic photo of sophisticated cologne bottle,
209:56 transparent glass, sleek minimalistic design, silver metal cap, all this. But
210:00 what we have to do is we have to make sure that the image isn't being created
210:04 just on this. It has to look at this, but it also has to look at the actual
210:08 original image. So that's why our next step is going to be to redownload this
210:11 file and then we're going to push it over to the image generation model. So
210:15 at this point, you may be wondering like why are we going to upload the file if
210:18 we're just going to download it again? And the reason why I had to do that is
210:21 because when we get the file in the form of binary, we want to send the binary
210:27 data into the HTTP request right here that actually generates the image. And
210:30 we can't reference the binary way over here if it's only coming through over
210:34 here. So, we upload it so that we can then download it and then send it right
210:37 back in. And so, if that doesn't make sense yet, it probably will once we get
210:41 over to that stage. But that's why. Anyways, next step is we're going to
210:44 download that file. So, I'm going to click on this plus. We're going to be
210:47 downloading it from Google Drive and we're going to be using the operation
210:51 download file. So, we already should be connected because we've set up our
210:54 Google credentials already. The operation is going to be download, the
210:57 resource is a file, and instead of choosing from a list, we're going to choose by
211:00 ID. And all we're going to do is download that file that we previously
211:03 uploaded every time. So I'm going to come over here, the Google Drive, upload
211:08 photo node, drag in the ID, and now we can see that's all we have to do. If we
211:11 hit test step, we'll get back that file that we originally uploaded. And we can
211:15 just make sure it's the cologne bottle. Okay, but now it's time to basically use
211:19 that downloaded file and the image prompt and send that over to an API
211:23 that's going to create an image for us. So we're going to be using OpenAI's
211:27 image generator. So here is the documentation. We have the ability to
211:30 create an image or we can create an image edit, which is what we want to do
211:33 because we want it to look at the photo in our request. So typically what you
211:38 can do in this documentation is you can copy the curl command but this curl
211:41 command is actually broken so we're not going to do that. If you copied this one
211:44 up here to actually just create an image that one would work fine but there's
211:47 like a bug with this one right now. So anyways, I'm going to go into n8n. I'm
211:51 going to hit the plus and grab an HTTP request, and now we're going to
211:57 configure this request. So, I'm going to walk through how I'm reading the API
212:00 documentation right here to set this up. I'm not going to go super super
212:03 in-depth, but if you get confused along the way, then definitely check out my
212:06 paid course. The link for that is down in the description. I've got a full course
212:10 on deep diving into APIs and HTTP requests. Anyways, the first thing we
212:13 see is we're going to be making a post request to this endpoint. So, the first
212:16 thing I'm going to do is copy this endpoint. We're going to paste that in.
212:19 And then we're also going to make sure the method is set to post. So, the next
212:23 thing that we have to do is authorize ourselves somehow. So over here I can
212:27 see that we have a header and the name is going to be authorization and then
212:31 the value is going to be bearer, space, our OpenAI key. So that's why I set up a
212:35 header authentication already. So in authentication I went to generic and
212:39 then I went to header and then you can see I have a bunch of different headers
212:42 already set up. But what I did here is I chose my OpenAI one where basically all
212:46 I did was I typed in here authorization and then in the value I typed in bearer
212:50 space and then I pasted my API key in there. And now I have my OpenAI
212:54 credential saved forever. Okay. So the first thing we have to do in our body
212:59 request over to OpenAI is we have to send over the image to edit. So that's
213:02 going to be in a field called image. And then we're sending over the actual
213:05 photo. So what I'm going to do is I'm going to click on send body. I'm going
213:10 to use form data. And now we can set up the different names and values to send
213:13 over. So the first thing is we're going to send over this image right here on
213:16 the left-hand side. And this is in a field called data. And it's binary. So,
213:19 instead of form data, I'm going to choose to send over an n8n
213:23 binary file. The name is going to be image because that's what it said in the
213:26 documentation. And the input data field name is data. So, I'm just going to copy
213:30 that, paste it in there. And this basically means, okay, we're sending
213:34 over this picture. The next thing we need to send over is a prompt. So, the
213:37 name of this field is going to be prompt. I'm just going to copy that, add
213:42 a new parameter, and call it prompt. And then for the value, we want to send over
213:45 the prompt that we had our AI agent write. So, I'm going to click into
213:47 schema and I'm just going to drag over the output from the AI agent right
213:51 there. And now that's an expression. So, the next thing we want to send over is
213:54 what model do we want to use? Because if we don't put this in, it's going to
213:58 default to DALL-E 2, but we want to use gpt-image-1. So, I'm going to copy
214:04 gpt-image-1. We're going to come back into here, and I'm going to paste
214:08 that in as the value, but then the name is model because, as you can see in
214:11 here, right there, it says model. So hopefully you guys can see that when
214:15 we're sending over an API call, we just have all of these different options
214:18 where we can sort of tweak different settings to change the way that we get
214:22 the output back. And then you have some other options, of course, like quality
214:25 or size. But right now, we're just going to leave all that as default and just go
214:28 with these three things to keep it simple.
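Sketched outside of n8n, the request we just configured looks roughly like this in Python: multipart form data with the binary image plus two plain fields (the file name and prompt text are placeholders):

```python
import requests

OPENAI_API_KEY = "sk-..."

resp = requests.post(
    "https://api.openai.com/v1/images/edits",                # endpoint from the docs
    headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},   # bearer auth header
    # "image" carries the binary file, like the n8n binary field named data;
    # "prompt" and "model" go along as ordinary form fields.
    files={"image": open("cologne_original.jpg", "rb")},
    data={
        "prompt": "<the image prompt the AI agent wrote>",
        "model": "gpt-image-1",  # otherwise it would default to DALL-E 2
    },
)
b64_string = resp.json()["data"][0]["b64_json"]              # base64, not a URL
```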
214:32 And I'm going to hit test step and we'll see if this is working. Okay, never mind. I got an error and I was
214:35 like, okay, I think I did everything right. The reason I got the error is
214:38 because I don't have any more credits. So, if you get this error, go add some
214:42 credits. Okay, so added more credits. I'm going to try this again and I'll
214:45 check back in. But before I do that, I wanted to say, clearly I've been
214:50 spamming this thing with creating images cuz it's so cool. It's so fun. But
214:53 everyone else in the world has also been doing that. So, if you're ever getting
214:56 some sort of error, like a 500-type error, which means
215:00 something's going on on the server side of things, or you're seeing some
215:04 sort of rate limit stuff, keep in mind that there's a limit on how many
215:07 images you can send per minute. I don't think that's been clearly defined on
215:13 gpt-image-1. But also, if the OpenAI server is receiving way too many
215:16 requests, that is also another reason why your request may be failing. So,
215:20 just keep that in mind. Okay, so now it worked. We just got that back. But what
215:23 you'll notice is we don't see an image here or like an image URL. So, what we
215:27 have to do is we have this base 64 string and we have to turn that into
215:32 binary data. So, what I'm going to do is after this node, I'm going to add one
215:37 that says Convert to File. So we're going to convert JSON data to binary
215:41 data and we're going to do base64. So all I have to do now is show this data on
215:45 the left-hand side, grab the base64 string, and then when we hit test step,
215:48 we should get a binary file over here, which if we click into it, this should
215:52 be our professional-looking photo. Wow, that looks great. It even got the
215:55 wording and the same fonts right. So that's awesome. And by the way, if we
216:00 click into the results of the create image where we did the image edit, we
216:04 can see the tokens. And with this model, it is basically $10 for a million input
216:09 tokens and $40 for a million output tokens. So right here, you can see the
216:12 difference between our input and output tokens. And this one was pretty cheap. I
216:15 think it was like 5 cents.
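As a rough sanity check on that number (the exact token counts aren't shown, so these are assumed): at $10 per million input tokens and $40 per million output tokens, a call using about 1,000 input tokens and 1,000 output tokens would cost 1,000/1,000,000 × $10 + 1,000/1,000,000 × $40 = $0.01 + $0.04 = $0.05, which lines up with that roughly 5 cents.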
216:19 Anyways, now that we have that image right here as binary data, we need to turn it into a video using an API called Runway. And so
216:23 if we go into Runway and we go first of all, let's look at the price. For a
216:27 5second video, 25 cents. For a 10-second video, 50 cents. So that's the one we're
216:30 going to be doing today. But if we go to the API reference to read how we can
216:34 turn an image into a video, what we need to look at is how we actually send over
216:38 that image. And what we have to do here is send over an HTTPS URL of the image.
216:43 So we somehow have to get this binary data in NADN to a public image that
216:48 Runway can access. So the way I'm going to be doing that is with this API that's
216:53 free, called ImgBB. It's a free image hosting service. And what we can
216:57 do is basically just use its API to send over the binary data and we'll get back
217:02 a public URL. So come here, make a free account. You'll grab your API key from
217:05 up top. And then we basically have here's how we set this up. So what I'm
217:08 going to do is I'm going to copy the endpoint right there. We're going to go
217:12 back into n8n and I'm going to add an HTTP request. And let me just configure
217:17 this up. We'll put it over here just to keep everything sort of square. But now
217:20 what I'm going to do in here is paste that endpoint in as our URL. You can
217:24 also see that it says this call can be done using POST or GET. But since GET
217:28 requests are limited by max URL length, you should probably do POST.
217:30 So I'm just going to go back in here and change this to a POST. And then there
217:33 are basically two things that are required. The first one is our API key.
217:37 And then the second one is the actual image. Anyways, this documentation is
217:41 not super intuitive. I can sort of tell that this is a query parameter because
217:45 it's being attached at the end of the endpoint with a question mark and all
217:47 this kind of stuff. And that's just because I've looked at tons of API
217:51 documentation. So, what I'm going to do is go into n8n. We're going to add a
217:55 generic credential type. It's going to be a query auth. Where was query?
217:59 There we go. And then you can see I've already added my ImgBB one. But all
218:02 you're going to do is you would add the name as a key. And then you would just
218:05 paste in your API key. And that's it. And now we've authenticated ourselves to
218:09 the service. And then what's next is we need to send over the image in a field
218:12 called image. So I'm going to go back in here. I'm going to send over a body
218:15 because this allows us to actually send over n8n binary fields. And I'm not going
218:20 to do n8n binary. I'm going to do form data because then we can name the field
218:23 we're sending over. Like I said, not going to deep dive into how that all
218:26 works, but the name is going to be image and then the input data field name is
218:30 going to be data because that's how it's seen over here. And this should be it.
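In Python terms, that upload is roughly the following (the key rides along as a query parameter, matching the query auth we just set up; the file name is a placeholder):

```python
import requests

IMGBB_API_KEY = "..."  # from the top of your free ImgBB account page

resp = requests.post(
    "https://api.imgbb.com/1/upload",
    params={"key": IMGBB_API_KEY},  # query-parameter auth, as configured above
    # The binary image goes over in a form field named "image".
    files={"image": open("professional_photo.png", "rb")},
)
# ImgBB sends back several URLs; we use the plain "url" one below.
public_url = resp.json()["data"]["url"]
```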
218:33 So, real quick, I'm just going to change this to get URL. And then we're going to
218:37 hit test step, which is going to send over that binary data to image BB. And
218:42 it hopefully should be sending us back a URL. And it sent back three of them. I'm
218:45 going to be using the middle one that's just called URL because it's like the
218:48 best size and everything. You can look at the other ones if you want on your
218:52 end, but this one is going to load up and we should see it's the image that we
218:55 got generated for us. It takes a while to load up on that first time, but as
218:59 you can see now, it's a publicly accessible URL and then we can feed it
219:03 into runway. So that's exactly our next step. We're going to add another request
219:07 right here. It's going to be an HTTP and this one we're going to configure to hit
219:11 runway. So here's a good example of we can actually use a curl command. So I'm
219:14 going to click on copy over here when I'm in the runway. Generate a video from
219:19 image. Come back into Naden, hit import curl, and paste that in there and hit
219:22 import. And this is going to basically configure everything we need. We just
219:26 have to tweak a few things. Typically, most API documentation nowadays will
219:29 have a curl command. The edit image one that we set up earlier was just a little
219:33 broken, and ImgBB is just a free service, so sometimes they don't always. But let's
219:37 configure this node. So, the first thing I see is we have a header auth right
219:40 here. And I don't want to send it like this. I want to set it up as a generic
219:44 type so I can save it. Otherwise, you'd have to go get your API key every time
219:47 you wanted to use Runway. So, as you can see, I've already set up my Runway API
219:51 key. So, I have it plugged in, but what you would do is you'd go get your API
219:55 key from Runway. And then you'd see, okay, how do we actually send over
219:58 authentication? It comes through with the name authorization. And then the
220:03 header value is bearer, space, API key. So, similar to the last one. And then that's
220:06 all you would do in here when you're setting up your Runway credential:
220:11 authorization, bearer, space, my API key. And then because we have ourselves
220:14 authenticated up here, we can flick off that headers section. And all we have to do now
220:17 is configure the actual body. Okay, so first things first, what image are we
220:21 sending over to get turned into a video? In the field named promptImage, we're going
220:25 to get rid of that value, and I'm just going to drag in the URL
220:29 that we got from earlier, which was that picture I showed you guys. So now
220:33 runway sees that image. Next, we have the seed, which if you want to look at
220:36 the documentation, you can play with it, but I'm just going to get rid of that.
220:38 Then we have the model, which we're going to be using Gen-4 Turbo. We then
220:42 have the prompt text. So, this is where we're going to get rid of this. And
220:46 you're going to go back to that PDF you downloaded from my free Skool, and
220:49 you're going to paste this prompt in there. So, this prompt basically gives
220:53 us that like 3D spinning effect where it just kind of does a slow pan and a slow
220:56 rotate. And that's what I was looking for. If you're wanting some other type
220:59 of video, then you can tweak that prompt, of course. For the duration, if
221:04 you look in the documentation, it'll say the duration only basically allows five
221:08 or 10. So, I'm just going to change this one to 10. And then the last one was
221:11 ratio, and I'm just going to make this square. So here are the accepted ratio
221:16 values. I'm going to copy 960 by 960, and we're just going to paste that in
221:19 right there. And actually before we hit test step, I've realized that we're
221:22 missing something here. So back in the documentation, we can see that there's
221:25 one thing up here which is required, which is a header: X-Runway-Version.
221:30 And then we need to set the value to this. So I'm going to copy the header.
221:35 And we have to enable headers. I deleted it earlier, but we're going to
221:37 enable that. So we have the version. And then I'm just going to go copy the value
221:41 that it needs to be set to and we'll paste that in there as the value.
221:44 Otherwise, this would not have worked. Okay, so that should be configured.
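Put together, the node we just built corresponds to a request along these lines (a sketch: the endpoint and version-header value come from the curl command you imported, so treat the ones here as stand-ins):

```python
import requests

RUNWAY_API_KEY = "..."

resp = requests.post(
    "https://api.dev.runwayml.com/v1/image_to_video",  # endpoint from the imported curl
    headers={
        "Authorization": f"Bearer {RUNWAY_API_KEY}",
        "X-Runway-Version": "2024-11-06",  # required header; copy the exact value from the docs
    },
    json={
        "promptImage": "<public ImgBB URL from the previous step>",
        "promptText": "<the video prompt from the doc>",  # the slow pan / slow rotate effect
        "model": "gen4_turbo",
        "duration": 10,       # only 5 or 10 are accepted
        "ratio": "960:960",   # one of the accepted ratio values
    },
)
task_id = resp.json()["id"]   # just an ID; we poll its status next
```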
221:48 But before we test it out, I want to show you guys how I set up the polling flow
221:52 like this that you saw in the demo. So what we're going to do here is we need
221:57 to go see, like, okay, once we send over our request right here to get a video
222:01 from our image, it's going to return an ID, and that doesn't mean anything to us.
222:06 So what we have to do is get our task. That is, basically, we send over
222:10 the ID that it gives us, and then it'll come back and say the status equals
222:14 pending or running, or it'll say completed. So what I'm going to do is
222:18 copy this curl command for getting task details. We're going to hook it up to
222:23 this node as an HTTP request. We're going to import that curl. Now that's pretty much set up. We
222:28 have our authorization, which I'm going to delete, because as you know we
222:32 just configured that earlier as a header auth. So, I'm just going to come in here
222:37 and grab my Runway API key. There it is. I couldn't find it for some reason.
222:41 we have the version set up. And now all we have to do is drag in the actual ID
222:45 from the previous one. So, real quick, I'm just going to make this an
222:48 expression. Delete ID. And now we're pretty much set up. So, first of all,
222:51 I'm going to test this one, which is going to send off that request to runway
222:54 and say, "Hey, here's our image. Here's the prompt. Make a video out of it." And
222:59 as you can see, we got back an ID. Now I'm going to use this next node and I'm
223:03 going to drag in that ID from earlier. And now it's saying, okay, we're going
223:06 to check in on the status of this specific task. And if I hit test step,
223:09 what we're going to see is that it's not yet finished. So it's going to come back
223:13 and say, okay, status of this run or status of this task is running. So
223:17 that's why what I'm going to do is add an if. And this if is going to be saying,
223:24 okay, does this status field right here, does that equal running in all caps?
223:28 Because that's what it equals right now. If yes, what we're going to do is we are
223:32 going to basically wait for a certain amount of time. So here's the true
223:36 branch. I'm going to wait and let's just say it's 5 seconds. So I'll just call
223:41 this five seconds. I'm going to wait for 5 seconds and then I'm going to come
223:44 back here and try again. So as you saw in the demo, it basically tried again
223:48 like seven or eight times. And this just ensures that it's never going to move on
223:53 until we actually have a finished video. So what you could also do is basically
223:56 say does status equal completed or whatever it means when it completes.
223:59 That's another way to do it. You just have to be careful to make sure that
224:01 whatever you're setting here as the check is always 100% going to work. And
224:07 then what you do is you would continue the rest of the logic down this path
224:10 once that check has been complete. And then of course you probably don't want
224:13 to have this check like 10 times every single time. So what you would do is
224:17 you'd add a wait step here. And once you know about how long it takes, you'd
224:21 add this here. So last time I had it at 30 seconds and it waited like eight
224:23 times. So let's just say I'm going to wait 60 seconds here. So then when this
224:27 flow actually runs, it'll wait for a minute, check. If it's still not done,
224:30 it'll continuously loop through here and wait 5 seconds every time until we're
224:34 done.
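That whole wait-and-check branch collapses to a small polling loop. A sketch in Python (the tasks endpoint and status names mirror the Runway docs; the wait times are the ones discussed above):

```python
import time
import requests

RUNWAY_API_KEY = "..."

def wait_for_video(task_id: str) -> dict:
    """Poll the Runway task until it's no longer pending/running."""
    time.sleep(60)  # initial wait: roughly how long one generation takes
    while True:
        task = requests.get(
            f"https://api.dev.runwayml.com/v1/tasks/{task_id}",
            headers={
                "Authorization": f"Bearer {RUNWAY_API_KEY}",
                "X-Runway-Version": "2024-11-06",
            },
        ).json()
        if task["status"] not in ("PENDING", "RUNNING"):
            return task   # SUCCEEDED (or FAILED): safe to move on
        time.sleep(5)     # the 5-second wait on the true branch
```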
224:37 Okay, there we go. So now status is succeeded. And what I'm going to do is just view this video real quick. Hopefully this one came out nicely.
224:41 Let's take a look. Wow, this is awesome. Super clean. It's rotating really slowly. It's a full
224:49 10-second video. You can tell it's like a 3D image. This is awesome. Okay, cool.
224:54 So now if we test this if branch, we'll see that it's going to go down the other
224:57 one which is the false branch because it's actually completed. And now we can
225:01 with confidence shoot off the email with our materials. So I'm going to grab a
225:04 Gmail node. I'm going to click send a message. And we are going to have this
225:08 configured hopefully because you've already set up your Google stuff. And
225:11 now who do we send this to? We're going to go grab that email from the original
225:14 form submission which is all the way down here. We're going to make the
225:19 subject, which I'm just going to say marketing materials, and then a colon.
225:24 And we'll just drag in the actual title of the product, which in here was
225:28 cologne. I'm changing the email type to text just because I want to. We're
225:32 going to make the body an expression. And we're just going to say like,
225:39 hey, here is your photo. And obviously this can be customized however you want.
225:43 But for the photo, what we have to do is grab that public URL that we generated
225:47 earlier. So right here there is the photo URL. Here is your video. And for
225:52 the video, we're going to drag in the URL we just got from the output of that
225:58 Runway get task check. So there is the video URL. And then I'm just going
226:02 to say cheers. Last thing I want to do is, down here, find Append n8n Attribution
226:07 and turn that off. This just ensures that the email doesn't say this email
226:12 was sent by n8n. And now if we hit test step right here, this is pretty much the
226:15 end of the process. And we can go ahead and check. Uh-oh. Okay, so not
226:19 authorized. Let me fix that real quick. Okay, so I just switched my credential
226:21 because I was using one that had expired. So now this should go through
226:25 and we'll go take a look at the email. Okay, so I did something wrong. I can
226:28 already tell what happened: this is supposed to be an expression and
226:31 dynamically come through as the title of the product, but we accidentally somehow
226:35 left off a curly brace. So, if I come back into here and add one more
226:38 curly brace right here to the description, or sorry, the subject, now
226:42 we should be good. I'll hit test step again. And now we'll go take a look at
226:46 that email. Okay, there we go. Now, we have the cologne and we have our photo
226:49 and our video. So, let's click into the video real quick. I'm just so amazed. This
226:55 is just so much fun. Look, the lighting and the reflections, it's
227:00 all just perfect. And then we'll click into the photo just in case we want to see the
227:05 actual image. And there it is. This also looks awesome. All right, so that's
227:09 going to do it for today's video. I hope you guys enjoyed this style of walking
227:12 step by step through some of the API calls and sort of my thought process as
227:16 to how I set up this workflow. Okay, at this point I think you guys probably
227:19 have a really good understanding of how these AI workflows actually function and
227:22 you're probably getting a little bit antsy and want to build an actual AI
227:26 agent. Now, so we're about to get into building your first AI agent step by
227:30 step. But before that, just wanted to drive home the concept of AI workflows
227:35 versus AI agents one more time and the benefits of using workflows. But of
227:38 course, there are scenarios where you do need to use an agent. So, let's break it
227:42 down real quick. Everyone is talking about AI agents right now, but the truth
227:46 is most people are using them completely wrong, admittedly myself included.
227:50 It's such a buzzword right now and it's really cool in n8n to visually see your
227:54 agents think about which tools they have and which ones to call. So, a lot of
227:57 people are just kind of forcing AI agents into processes where you don't
228:01 really need it. But in reality, a simple AI workflow is not only going to be
228:04 easier to build, it's going to be more cost-effective and also more reliable
228:07 in the long run. If you guys don't know me, my name's Nate. And for a while now,
228:10 I've been running an agency where we deliver AI solutions to clients. And
228:14 I've also been teaching people from any background how to build out these things
228:17 practically and apply them to their business through deep dive courses as
228:21 well as live calls. So, if that sounds interesting to you, definitely check out
228:23 the community with the link in the description. But let's get into the
228:26 video. So, we're going to get into n8n and I'm going to show you guys some
228:29 mistakes of when I've built agents when I should have been building AI
228:31 workflows. But before that, I just wanted to lay out the foundations here.
228:35 So, we all know what ChatGPT is. At its core, it's a large language model that
228:39 we talk to with an input and then it basically just gives us an output. So,
228:42 if we wanted to leverage ChatGPT to help us write a blog post, we would ask it to
228:46 write a blog post about a certain topic. It would do that and then it would give
228:48 us the output which we would then just copy and paste somewhere else. And then
228:52 came the birth of AI agents, which is when we actually were able to give tools
228:56 to our LLM so that they could not only just generate content for us, but they
228:59 could actually go post it or go do whatever we wanted to do with it. AI
229:02 agents are great and there's definitely a time and a place for them because they
229:05 have different tools and basically the agent will use its brain to understand,
229:08 okay, I have these three tools based on what the user is asking me. Do I call
229:12 this one and then do I output or do I call this one then this one or do I need
229:17 to call all three simultaneously? It has that option and it has the variability
229:19 there. So, this is going to be a non-deterministic workflow. But the
229:23 reality is most of the processes that we're trying to enhance for our clients
229:28 are pretty deterministic workflows that we can build out with something more
229:30 linear where we still have the same tools. We're still using AI, but we have
229:34 everything going step one, step two, step three, step four, step five, step
229:38 six, which is going to reduce the variability there. It's going to be very
229:42 deterministic and it's going to help us with a lot of things. So stick with me
229:45 because I'm going to show you guys an AI agent video that I made on YouTube a few
229:49 months back and I started re-evaluating it. Like why would I ever build out the
229:52 system like that? It's so inefficient. So I'll show you guys that in a sec. But
229:55 real quick, let's talk about the pros of AI workflows over AI agents. And I
229:59 narrowed it down to four main points. The first one is reliability and
230:02 consistency. One of the most important concepts of building an effective AI
230:05 agent is the system prompt because it has to understand what its tools are,
230:09 when to use each one, and what the end goal is. And it's on its own to figure
230:12 out which ones do I need to call in order to provide a good output. But with
230:15 a workflow, we're basically keeping it on track and there's no way that the
230:18 process can sort of deviate from the guardrails that we've set up because it
230:22 has to happen in order and it can't really go anywhere else. So this makes
230:25 systems more reliable because there's never going to be a transfer of data
230:28 between workflows where things may get messed up or incorrect mappings being
230:32 sent across, you know, agent to a different agent or agent to tool. We're
230:36 just basically able to go through the process linearly. So the next one is
230:40 going to be cost efficiency. When we're using an agent and it has different
230:44 tools, every time it hits a tool, it's going to go back to its brain. It's
230:46 going to rerun through its system prompt and it's going to think about what is my
230:49 next step here. And every time you're accessing that AI agent's brain, it
230:53 costs you money. So if we're able to eliminate that aspect of decision-making and
230:57 just say, okay, you finished step two, now you have to go on to step
231:00 three. There's no decision to be made. We don't have to make that extra API
231:04 call to think about what comes next, and we're saving money. Number three is
231:08 easier debugging and maintenance. When we have an AI workflow, we can see
231:12 exactly which node errors. We can see exactly what mappings are incorrect and
231:16 what happened here. Whereas with an AI agent workflow, it's a little bit
231:18 tougher because there's a lot of manipulating the system prompt and
231:21 messing with different tool configurations. And like I said, there's
231:25 data flowing between agent to tool or between agent to subworkflow. And that's
231:28 where a lot of things can happen that you don't really have full visibility
231:31 into. And then the final one is scalability, which kind of piggybacks right off
231:35 of number three. But if you wanted to add more nodes and more functionality to
231:38 a workflow, it's as simple as, you know, plugging in a few more blocks here and
231:41 there or adding on to the back. But when you want to increase the functionality
231:44 of an AI agent, you're probably going to have to give it more tools. And when you
231:47 give it more tools, you're going to have to refine and add more lines to the
231:52 system prompt, which could work great initially, but then previous
231:55 functionality, the first couple tools you added, those might stop working or
231:59 those may become less consistent. So basically, the more control that we have
232:02 over the entire workflow, the better. AI is great. There are times when we need
232:05 to make decisions and we need that little bit of flexibility. But if a
232:09 decision doesn't have to be made, why would we leave that up to the AI to
232:13 hallucinate 5 or 10% of the time when we could basically say, "Hey, this is going
232:16 to be 100% consistent." Anyways, I've made a video that talks a little bit
232:19 more about this stuff, as well as other things I've learned over the first 6
232:22 months of building agents. If you want to watch that, I'll link it up here. But
232:25 let's hop into n8n and take a look at some real examples. Okay, so the first
232:29 example I want to share with you guys is a typical sort of RAG agent. And for
232:32 some reason it always seems like the element of RAG has to be associated with
232:36 an agent, but it really doesn't. So what we have is a workflow where we're
232:39 putting a document from Google Drive into Pinecone. We have a customer
232:42 support agent and then we have a customer support AI workflow. And both
232:46 of the blue box and the green box, they do the exact same thing, but this one's
232:49 going to be more efficient and we also have more control. So let's break this
232:52 down. Also, if you want to download this template to play around with, you can
232:54 get it for free if you go to my free Skool community. The link for that's
232:57 down in the description as well. You'll come into here, click on YouTube
233:00 resources, and click on the post associated with this video. And then the
233:03 workflow will be right here for you to download. Okay, so anyways, here is the
233:06 document that we're going to be looking at. It has policy and FAQ information.
233:10 We've already put it into Pinecone. As you can see, it's created eight vectors.
233:13 And now what we're going to do is we're going to fire off an email to the
233:16 customer support agent to see how it handles it. Okay, so we just sent off,
233:20 do you offer price matching or bulk discounts? We'll come back into the
233:23 workflow, hit run, and we should see the customer support agent is hitting the
233:26 vector database, and it's also hitting its reply email tool. But what you'll
233:29 notice is that it hit its brain. So, Google Gemini 2.0 Flash in this case,
233:33 not a huge deal because it's free. But if you were using something else, it's
233:36 going to have hit that API three different times, which would be three
233:40 separate costs. So, let's check and see if it did this correctly. So, in our
233:43 email, we got the reply, "We do not offer price matching currently, but we
233:46 do run promotions and discounts regularly. Yes, bulk orders may qualify
233:50 for a discount. Please contact our sales team at sales@techhaven.com for
233:54 inquiries. So, let's go validate that that's correct. So, in the FAQ section
233:57 of this doc, we have that they don't offer price matching, but they do run
234:00 promotions and discounts regularly. And then for bulk discounts, um you have to
234:04 hit up the sales team. So, it answered correctly. Okay. So, now we're going to
234:08 run the customer support AI workflow down here. It's going to grab the email.
234:11 It's going to search Pinecone. It's going to write the email. I'll explain
234:13 what's going on here in a sec. And then it responds to the customer. So, there's
234:17 four steps here. It's going to be an email trigger. It's going to search the
234:19 knowledge base. It's going to write the email and then respond to the customer
234:23 in an email. So, why would we leave that up to the agent to decide what it needs
234:27 to do if it's always going to happen in those four steps every time? All right,
234:30 here's the email we just got in reply. As you can see, this is the one that the
234:32 agent wrote, and this one looks a lot better. Hello, thank you for reaching
234:36 out to us. In response to your inquiry, we currently do not offer price
234:39 matching. However, we do regularly run promotions and discounts, so be sure to
234:42 keep an eye out for those. That's accurate. Regarding bulk discounts, yes,
234:47 they may indeed qualify for a discount. So reach out to our sales team. If you
234:50 have any other questions, please feel free to reach out. Best regards, Mr.
234:53 Helpful, TechHaven. And obviously, I told it to sign off like that. So, now
234:57 that we've seen that, let's actually break down what's going on. So, it's the
235:00 same trigger. You know, we're getting an email, and as you can see, we can find
235:03 the text of the email right here, which was, "Do you guys offer price matching
235:07 or bulk discounts?" We're feeding that into a pine cone node. So, if you guys
235:10 didn't know, you don't even need these to be only tools. You can have them just
235:14 be nodes. where we're searching for the prompts that is, do you guys offer price
235:18 matching or bulk discounts? And maybe you might want an AI step between the
235:22 trigger and the search to maybe like formulate a query out of the email if
235:25 the email is pretty long. But in this case, that's all we did. And now we can
235:29 see we got those four vectors back, same way we would have with the agent. But
235:32 what's cool is we have a lot more control over it. So as you can see, we
235:37 have a vector and then we have a score, which basically ranks how relevant
235:40 the vector was to the query that we sent off. And so we have some pretty low ones
235:44 over here, but what we can do is say, okay, we only want to keep it if the score
235:48 is greater than 0.4. So it's only going to be keeping these two, as you can see,
235:51 and it's getting rid of these two that aren't super relevant. And this is
235:54 something that's a lot easier to control in this linear flow compared to having
235:59 the agent try to filter through vector results up here.
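That filter step is just a score cutoff over the returned matches. A minimal sketch, assuming Pinecone-style results where each match carries its text and a relevance score:

```python
# Hypothetical matches returned for the query we sent off: each one has
# the chunk text and a score ranking its relevance to the query.
matches = [
    {"text": "We do not offer price matching, but we run promotions...", "score": 0.71},
    {"text": "Bulk orders may qualify for a discount; contact sales...", "score": 0.63},
    {"text": "Shipping typically takes 3-5 business days...", "score": 0.28},
    {"text": "Our support hours are 9am-5pm...", "score": 0.19},
]

# Keep only vectors scoring above 0.4, then aggregate the survivors into
# one block of knowledge for the email-writing node downstream.
relevant = [m for m in matches if m["score"] > 0.4]
knowledge = "\n\n".join(m["text"] for m in relevant)
```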
236:02 Anyways, then we're just aggregating however many results it pulls back. If it's four, if it's three,
236:06 or if it's just one, it's still just going to aggregate them together so that
236:09 we can feed it into our OpenAI node that's going to write the email. So
236:12 basically, in the user prompt, we said, "Okay, here's the customer inquiry.
236:15 Here's the original email, and here's the relevant knowledge that we found.
236:18 All you have to do now is write an email." And so by giving this AI node
236:22 just one specific goal, it's going to be higher quality and more consistent with its
236:26 outputs, rather than the agent, which we gave multiple jobs. It had to not only write
236:30 the email, but it also had to figure out how to search through information and
236:33 figure out what the next step was. So this node, it only has to focus on one
236:37 thing. It has the knowledge handed to it on a silver platter to write the email
236:40 with. And basically, we said, you're Mr. Helpful, a customer support rep for Tech
236:44 Haven. Your job is to respond to incoming customer emails with accurate
236:47 information from the knowledge base. You must only answer using relevant
236:50 knowledge provided to you. Don't make anything up. We gave it the tone and
236:53 then we said only output the body in a clean format. It outputs that body and
236:57 then all it had to do is map in the correct message ID and the correct
237:02 message content. Simple as that. So, I hope this makes sense. Obviously, it's a
237:05 lot cooler to watch the agent do something like that up here, but this is
237:08 basically the exact same flow and I would argue that it's going to be a lot
237:12 better, more consistent, and cheaper. Okay, so now to show an example where I
237:15 released this as a YouTube video and a couple weeks later I was like, why did I
237:19 do it like that? So, what we have here is a technical analyst. And so basically
237:23 we're talking to it through Telegram and it has one tool which is basically going
237:27 to get a chart image and then it's going to analyze the chart image and then it
237:30 sends it back to us in Telegram. And this is the workflow that it's actually
237:33 calling right here where we're making an HTTP request to chart-img. We're
237:37 getting the chart, downloading it, analyzing the image, sending it back,
237:40 and then responding back to the agent. So there's basically like two transfers
237:44 of data here that we don't need because as you can see down here, we have the
237:49 exact same process as one simple AI workflow. So there's going to be much
237:52 much less room for error here. But first of all, let's demo how this works and
237:56 then we'll demo the actual AI workflow. Okay, so it should be listening to us
237:59 now. I'm going to ask it to analyze Microsoft. And as you can see, it's now
238:02 hitting that tool. We won't see this workflow actually in real time just
238:06 because it's like calling a different execution, but this is the workflow that
238:08 it's calling down here. It's basically calling this right
238:12 here. So what it's going to do is it's going to send us an image and then
238:16 a second or two later it's going to send us an actual analysis. So there is
238:20 Microsoft's stock chart and now it's creating that analysis as you can see
238:22 right up here and then it's going to send us that analysis. We just got it.
238:26 So if you want to see the full video that I made on YouTube, I'll tag it
238:29 right up here. But I'm not going to dive too much into what's actually happening. I
238:32 just want to prove that we can do the exact same thing down here with a simple
238:36 workflow. Although right here, I did evolve this workflow a little bit. So
238:39 it's not only looking at NASDAQ, but it can also choose different
238:43 exchanges and feed that into the API call. But anyways, let's make this
238:47 trigger down here active and let's just show off that we can do the exact same
238:51 thing with the workflow and it's going to be better. So, test workflow. This
238:56 should be listening to us. Now, I'm just going to ask it to, well, we'll do a
239:00 different one: analyze Bank of America. So, now it's getting it. It is
239:04 going to be downloading the chart. Actually, I want to open up Telegram so we
239:07 can see: downloading the chart, analyzing the image. It's going to send us that
239:11 image and then pretty much immediately after it should be able to send us that
239:15 analysis. So we don't have that awkward 2 to 5 second wait. Obviously we're
239:19 waiting here. But as soon as this is done, we should get both the image
239:22 and the text simultaneously. There you go. And so you can see the results are
239:27 basically the same. But this one is just going to be more consistent. There's no
239:30 transfer of data between workflows. There's no need to hit an AI model to
239:33 decide what tool I need to use. It is just going to be one seamless flow. You
239:37 can also get this workflow in the free Skool community if you want to
239:39 play around with it. Just wanted to throw that out there. Anyways, that's
239:43 going to wrap us up here. I just wanted to close off with this isn't me bashing
239:46 on AI agents. Well, I guess a little bit it was. AI agents are super powerful.
239:50 They're super cool. It's really important to learn prompt engineering
239:54 and giving them different tools, but it's just about understanding, am I
239:57 forcing an agent into something that doesn't need it? Am I exposing myself to
240:02 the risk of lower quality outputs, less consistency, more difficult time scaling
240:06 this thing? Things along those lines. And so that's why I think it's super
240:08 important to get into something like Excalidraw and wireframe out the solution that
240:12 you're looking to build. Understand what are all the steps here. What are the
240:16 different API calls or different people involved? What could happen here? Is
240:21 this deterministic or is there an aspect of decision-making and variability here?
240:24 Essentially, is every flow going to be the same or not the same? Cool. So now
240:29 that we have that whole concept out of the way, I think it's really important
240:31 to understand that so that when you're planning out what type of system you're
240:34 going to build, you're actually doing it the right way from the start. But now
240:38 that we understand that, let's finally set up our first AI agent together.
240:42 Let's move into that video. All right, so at this point you guys are familiar
240:45 with n8n. You've built a few AI workflows and now it's time to actually
240:49 build an AI agent, which gets even cooler. So before we actually hop into
240:52 there and do that, just want to do a quick refresher on this little diagram
240:55 we talked about at the beginning of this video, which is the anatomy of an AI
240:59 agent. So we have our input, we have our actual AI agent, and then we have an
241:03 output. The AI agent is connected to different tools, and that's how it
241:06 actually takes action. And in order to understand which tools do I need to use,
241:10 it will look at its brain and its instructions. The brain comes in the
241:13 form of a large language model, which in this video, we'll be using OpenRouter
241:17 to connect to as many different ones as we want. And you guys have already set
241:20 up your OpenRouter credentials. Then we also have access to memory, which I will
241:23 show you guys how we're going to set up in n8n. Then finally it uses its
241:27 instructions in order to understand what to do, and that is in the form of a
241:31 system prompt, which we will also see in n8n. So all of these elements that
241:34 we've talked about will directly translate to something in n8n, and I will
241:38 show you guys and call out exactly where these are so there's no confusion. So
241:43 we're going to hop into n8n, and you guys know that a new workflow always starts
241:46 with a trigger. So, I'm going to hit tab and I'm going to type in a chat trigger
241:50 because we want to just basically be able to talk to our AI agent right here
241:55 in the native n8n chat. So, there is our trigger and what I'm going to do is
241:59 click the plus and add an AI agent right after this trigger so we can actually
242:02 talk to it. And so, this is what it looks like. You know, we have our AI
242:04 agent right here, but I'm going to click into it so we can just talk about the
242:07 difference between a user message up here and a system message that we can
242:11 add down here. So going back to the example with ChatGPT and with our diagram:
242:17 when we're talking to ChatGPT in our browser, every single time we type and
242:21 say something to ChatGPT, that is a user message, because that message coming in
242:25 is dynamic every time. So you can see right here the source for the prompt
242:29 that the AI agent will be listening for, as if it was ChatGPT, is the connected
242:33 chat trigger node. So we're set up right here and the agent will be reading that
242:37 every time. If we were feeding in information to this agent that wasn't
242:40 coming from the chat message trigger, we'd have to change that. But right now,
242:43 we're good. And if we go back to our diagram, this is basically the input
242:47 that we're feeding into the AI agent. So, as you can see, input goes into the
242:50 agent. And that's exactly what we have right here. Input going into the agent.
242:54 And then we have the system prompt. So, I'm going to click back into the agent.
242:57 And we can see right here, we have a system message, which is just telling
243:00 this AI agent, you are a helpful assistant. So, right now, we're just
243:03 going to leave it as that. And back in our diagram that is right here, its
243:07 instructions, which is called a system prompt. So the next thing we can see
243:10 that we need is we need to give our AI agent a brain, which will be a large
243:14 language model and also memory. So I'm going to flick back into n8n. And you can
243:18 see we have two options right here. The first one is chat model. So I'm first of
243:21 all just going to click on the plus for chat model. I'm going to choose open
243:24 router. And we've already connected to open router. And now I just get to
243:27 choose from all of these different chat models to use. So I'm just going to go
243:34 ahead and choose GPT-4.1 Mini. And I'm just going to rename this node
243:38 GPT 4.1 mini just so we know which one we're using. Cool. So now we have our input,
243:45 our AI agent, and a brain. But let's give it some memory real quick, which is
243:48 as simple as just clicking the plus under memory. And I'm just going to for
243:52 now choose simple memory, which stores it in n8n. There are no credentials
243:56 required. And as you can see, the session ID is looking for the connected
244:00 chat trigger node. Because we're using the connected chat trigger node, we
244:03 don't have to change anything. We are good to go. So, this is basically the
244:07 core part of the agent, right? So, what I can do is I can actually talk to this
244:10 thing. So, I can say, "Hey," and we'll see what it says back. It's going to use
244:15 its memory. It's going to use its brain to actually answer us. And it
244:18 says, "Hello, how can I assist you?" I can say, "My name is Nate. I am 23 years
244:25 old." And now what I'm going to basically test is that it's storing all
244:29 of this as memory and it's going to know that. So now it says, "Nice to meet you,
244:31 Nate. How can I help you?" Now I'm going to ask it, you know, what's my name and how old am I?
244:40 So we'll send that off. And now it's going to be able to answer us. Your name
244:43 is Nate and you are 23 years old. How can I assist you further? So first of
244:47 all, the reason it's being so helpful is because its system message says you're a
244:51 helpful assistant. The next piece would be it's using its brain to answer us and
244:55 it's using its memory to make sure it's not forgetting stuff about our current
245:00 conversation. So those are the three parts right there: instructions, brain,
245:04 and memory. And now it's time to add the tools. So in this
245:08 example, we're going to build a super simple personal assistant AI agent that
245:12 can do three things. It's going to be able to look in our contact database in
245:17 order to grab contact information. With that contact information, it's going to
245:20 be able to send an email, and it's going to be able to create a calendar event.
245:24 So, first thing we're going to do is we're going to set up our contact
245:27 database. And what I'm going to do for that is just I have this Google sheet.
245:31 Really simple. It just says name and email. This could be maybe you have your
245:34 contacts in Google contacts. You could connect that or an Air Table base or
245:38 whatever you want. This is just the actual tool, the actual integration that
245:42 we want to make to our AI agent. So, what I'm going to do is throw in a few
245:46 rows of example names and emails in here. Okay. So, we're just going to
245:48 stick with these three. We've got Michael Scott, Ryan Reynolds, and Oprah
245:51 Winfrey. And now, what we're going to be able to do is have our AI agent look at
245:55 this contact database whenever we ask it to send an email to someone or make a
245:59 calendar event with someone. If I go back and add it in, the first thing we
246:02 have to do is add a tool to actually access this Google sheet. So, I'm going
246:05 to click on tool. I'm going to type in Google sheet. It's as simple as that.
246:08 And you can see we have a Google Sheets tool. So, I'm going to click on that.
246:11 And now we have to set up our credential. You guys have already
246:15 connected to Google Sheets in the previous workflow, so it shouldn't be
246:17 too difficult. So choose your credential. And then the first thing is
246:20 a tool description. What we're going to do is we are going to just set this
246:24 automatically. And this basically describes to the AI agent what does this
246:28 tool do. So we could set it manually and describe ourselves, but if you just set
246:32 it automatically, the AI is going to be pretty good at understanding what it
246:35 needs to do with this tool. The next thing is a resource. So what are we
246:38 actually looking for? We're looking for a sheet within a document, not an entire
246:41 document itself. Then the operation is we want to just get rows. So I'm going to leave it
246:47 all as that. And then what we need to do is actually choose our document and then
246:50 the sheet within that document that we want to look at. So for document, I'm
246:54 going to choose contacts. And for sheet, there's only one. I'm just going to
246:57 choose sheet one. And then the last thing I want to do is just give this
247:01 actual tool a pretty intuitive name. So I'm just going to call this
247:06 contacts database. There you go.
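For reference, the whole tool boils down to a handful of settings. Roughly (labels may differ slightly across n8n versions):

```
Tool name:        contacts database
Tool description: Set Automatically
Credential:       your Google Sheets credential
Resource:         Sheet Within Document
Operation:        Get Row(s)
Document:         Contacts
Sheet:            Sheet1
```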
247:09 So now it should be super clear to this AI agent when to use this tool. We may have to do some system prompting to
247:12 say like, hey, here are the different tools you have. But for now, we're just
247:15 going to test it out and see if it works. So what I'm going to do is open
247:19 up the chat and just ask it, can you please get Oprah Winfrey's contact
247:23 information. There we go. We'll send that off and we will watch it basically
247:27 think. And then there we go. Boom. It hit the Google Sheet tool that we wanted
247:31 it to. And if I open up the chat, it says Oprah Winfrey's contact
247:35 information is the email oprah@winfrey.com. If we go into the base, we can see that
247:39 is exactly what we put for her contact information. Okay, so we've confirmed
247:43 that the agent knows how to use this tool and that it can properly access
247:46 Google Sheets. The next step now is to add another tool to be able to send
247:49 emails. So, I'm going to move this thing over. I'm going to add another tool and
247:53 I'm just going to search for Gmail and click on Gmail tool. Once again, we've
247:57 already covered credentials. So, hopefully you guys are already logged in
248:00 there. And then what we need to do is just configure the rest of the tool. So
248:04 tool description: set automatically; resource: message; operation: send. And then
248:10 we have to fill out the To, the Subject, the email type, and the Message. What
248:14 we're able to do with our AI agents and tools is something super super cool. We
248:19 can let our AI agent decide how to fill out these three fields that will be
248:23 dynamic. And all I have to do is click on this button right here to the right
248:26 that says let the model define this parameter. So I'm going to click on that
248:29 button. And now we can see that it says defined automatically by the model. So
248:33 basically, if I said, hey, can you send an email to Oprah Winfrey saying this
248:39 and this, it would then interpret our message, our user input, and it would then
248:45 fill out who this is going to, what the subject is, and what the message says. So I'll
248:47 show you guys an example of that. It's super cool. So I'm just going to click
248:51 on this button for subject and also this button for message. And now we can see
248:56 the actual AI use its brain to fill out these three fields. And then also I'm just going to change
249:01 the email type to text because I like how it comes through as text. So real
249:06 quick, I just want to change this name to send email. And all we have to do now is
249:10 chat with our agent and see if it's able to send that email.
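Under the hood, that "let the model define this parameter" button fills each field with an n8n $fromAI expression. The exact description text n8n generates will vary, but the three fields end up looking something like this:

```
To:      {{ $fromAI('To', 'email address of the recipient', 'string') }}
Subject: {{ $fromAI('Subject', 'subject line of the email', 'string') }}
Message: {{ $fromAI('Message', 'plain-text body of the email', 'string') }}
```

At run time, the agent's model reads those descriptions and supplies the values itself, which is exactly what we're about to watch it do.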
249:13 All right. So I'm sending off this message that asks to send an email to Oprah
249:17 asking how she's doing and if she has plans this weekend. And what happened is
249:21 it went straight to the send email tool. And the reason it did that is because in
249:25 its memory, it remembered that it already knows Oprah Winfrey's contact
249:29 information. So if I open chat, it says the email's been sent asking how she's
249:32 doing and if she has plans this weekend. Is there anything else that you would
249:36 like to do? So real quick before we go see if the email actually did get sent,
249:39 I'm going to click into the tool. And what we can see is on this left hand
249:44 side, we can see exactly how it chose to fill out these three fields. So for the
249:48 To, it put oprah@winfrey.com, which is correct. For the subject, it put
249:52 checking in. And for the message, it put hi Oprah. I hope this weekend finds you
249:55 well. How are you doing? Do you have any plans? Best regards, Nate. And another
250:00 thing that's really cool is the only reason that it signed off right here as
250:03 best regards Nate is because once again, it used its memory and it remembers that
250:07 our name is Nate. That's how it filled out those fields. Let me go over to my
250:11 email and we'll take a look. So, in our Sent folder, we have the checking in subject.
250:15 We have the message that we just read in n8n. And then we have this little
250:18 thing at the bottom that says this email was automatically sent by n8n. We can
250:23 easily turn that off if we go into n8n, open up the tool, and add an option at
250:26 the bottom that says append n8n attribution. And then we just turn off
250:30 the append n8n attribution. And as you can see, if we click on add options,
250:33 there are other things that we can do as well. Like we can reply to the sender
250:36 only. We can add a sender name. We can add attachments. All this other stuff.
250:41 But at a high level and real quick setup, that is the send email tool. And
250:45 keep in mind, we still haven't given our agent any sort of system prompt besides
250:48 saying you're a helpful assistant. So, super cool stuff. All right, cool. And
250:52 now for the last tool, what we want to do is add a create calendar event. So, I'm going
250:59 to search calendar and grab a Google calendar node. We already should be set
251:03 up. Or if you're not, actually, all you have to do is just create new credential
251:06 and sign in real quick because you already went and created your whole
251:10 Google Cloud thing. We're going to leave the description as automatic. The resource is an event. The
251:15 operation is create. The calendar is going to be one that we choose from our
251:19 account. And now we have a few things that we want to fill out for this tool.
251:23 So basically, it's asking what time is the event going to start and what time
251:26 is the event going to end. So real quick, I'm just going to do the same
251:29 thing. I'm going to let the model decide based on the way that we interact with
251:33 it with our input. And then real quick, I just want to add one more field, which
251:36 is going to be a summary. And basically whatever gets filled in right here for
251:39 summary is what's going to show up as the name of the event in Google
251:42 calendar. But once again we're going to let the model automatically define this
251:49 field. So let's call this node create event. And actually one more thing I
251:52 forgot to do is we want to add an attendee. So we can actually let the
251:56 agent add someone to an event as well. So that is the new tool. We're going to
252:00 hit save. And remember no system prompts. Let's see if we can create a
252:04 calendar event with Michael Scott. All right. So, we're asking for
252:08 dinner with Michael at 6 p.m. What's going to happen is... okay, so
252:12 we're going to have to do some prompting, because it doesn't know Michael Scott's
252:15 contact information yet, but it went ahead and tried to create that event.
252:19 So, it said that it created the event and let's click into the tool and see
252:23 what happened. So, it tried to send the event invite to michael.scott@example.com. So, it
252:29 completely made that up because in our contacts base, Michael Scott's email is
252:33 mike@greatscott.com. So, it got that wrong. That's the first thing it got
252:38 wrong. The second thing it got wrong was the actual start and end date. So, yes,
252:42 it made the event for 6 p.m., but it made it for 6 p.m. on April 27th, 2024,
252:48 which was over a year ago. So, we can fix this by using the system prompt. So,
252:52 what I'm going to do real quick is go into the system prompt, and I'm just
252:55 going to make it an expression and open it up full screen real quick. What
253:00 I'm going to say next is you must always look in the contacts database before
253:05 doing something like creating an event or sending an email. You need the
253:10 person's email address in order to do one of those actions. Okay, so that's a really simple
253:16 thing we can add. And then also what I want to tell it is what is today's
253:19 current date and time? So that if I say create an event for tomorrow or create
253:22 an event for today, it actually gets the date right. So, I'm just going to say
253:28 here is the current date/time. And all I have to do to give it access to
253:31 the current date and time is type two curly braces. And then right here you
253:35 can see $now, which says a date-time representing the current
253:39 moment. So if I click on that on the right hand side in the result panel you
253:42 can see it's going to show the current date and time. So we're happy with that.
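So the whole system prompt at this point reads roughly like this, with the {{ $now }} expression re-evaluated on every single run:

```
You are a helpful assistant.

You must always look in the contacts database before doing something like
creating an event or sending an email. You need the person's email address
in order to do one of those actions.

Here is the current date/time: {{ $now }}
```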
253:45 Our system prompt has been a little bit upgraded and now we're going to just try
253:49 that exact same query again and we'll see what happens. So, I'm going to click
253:53 on this little repost message button. Send it again. And hopefully now, there
253:57 we go. It hits the contact database to get Michael Scott's email. And then it
254:00 creates the calendar event with Michael Scott. So, down here, it says, I've
254:04 created a calendar event for dinner with Michael Scott tonight at 6. If you need
254:07 any more assistance, feel free to ask. So, if I go to my calendar, we can see
254:11 we have a 2-hour long dinner with Michael Scott. If I click onto it, we
254:16 can see that the guest that was invited was mike@greatscott.com, which is exactly
254:21 what we see in our contact database. And so, you may have noticed it made this
254:23 event for 2 hours because we didn't specify. If I said, "Hey, create a
254:27 15-minute event," it would have only made it 15 minutes. So, what I'm going
254:31 to do real quick is a loaded prompt. Okay, so fingers crossed. We're saying,
254:35 "Please invite Ryan Reynolds to a party tonight that's only 30 minutes long at 8
254:39 p.m. and send him an email to confirm." So, what happened here? It went to go
254:43 create an event and send an email, but it didn't get Ryan Reynolds email first.
254:47 So, if we click into this, we can see that it sent an email to
254:50 ryan.reynolds@example.com. That's not right. And it went to create an event at
254:54 ryan.reynolds@example.com. And that's not right either. But the good news is if we
254:58 go to calendar, we can see that it did get the party right as far as it's 8
255:02 p.m. and only 30 minutes. So, because it didn't take the right action, it's not
255:06 that big of a deal. We know now that we have to go and refine the system prompt.
255:10 So to do that, I'm going to open up the agent. I'm going to click into the
255:14 system prompt. And we are going to fix some stuff up. Okay. So I added two
255:17 sentences that say, "Never make up someone's email address. You must look
255:21 in the contact database tool." So as you guys can see, this is pretty natural
255:24 language. We're just instructing someone how to do something as if we were
255:27 teaching an intern. Okay. So what I'm going to do real quick is clear this
255:30 memory. So I'm just going to reset the session. And now we're starting from a
255:33 clean slate. And I'm going to ask that exact same query to do that multi-step
255:37 thing with Ryan Reynolds. All right. Take two. We're inviting Ryan Reynolds
255:40 to a party at 9:00 p.m. There we go. It's hitting the contacts database. And
255:43 now it's going to hit the create event and the send email tool at the same
255:47 time. Boom. I've scheduled a 30-minute party tonight at 9:00 p.m. and invited
255:51 Ryan Reynolds. So, let's go to our calendar. We have a 9 p.m. party for 30
255:55 minutes long, and it is ryan@deadpool.com, which is exactly what we
255:59 see in our contacts database. And then, if we go to our email, we can see now
256:03 that we have a party invitation for tonight to ryan@deadpool.com. But what you'll
256:08 notice is now it didn't sign off as Nate because I cleared that memory. So this
256:13 would be a super simple fix. We would just want to go to the system prompt and
256:15 say, "Hey, when you're sending emails, make sure you sign off as Nate." So
256:19 that's going to be it for your first AI agent build. This one is very simple,
256:24 but also hopefully really opens your eyes to how easy it is to plug in these
256:27 different tools. And it's really just about your configurations and your
256:31 system prompts because system prompting is a really important skill and it's
256:34 something that you kind of have to just try out a lot. You have to get a lot of
256:38 reps and it's a very iterative process. But anyways, congratulations. You just
256:42 built your first AI agent in probably less than 20 minutes and now add on a
256:46 few more tools. Play around with a few more parameters and just see how this
256:49 kind of stuff works. In this section, what I'm going to talk about is dynamic
256:54 memory for your AI agents. So if you remember, we had just set up this agent
256:58 and we were using simple memory and this was basically helping us keep
257:02 conversation history. But what we didn't yet talk about was the session ID and
257:07 what that exactly means. So basically think of a session ID as some sort of
257:12 unique identifier that identifies each separate conversation. So, if I'm
257:18 talking to you, person A, and you ask me something, I'm gonna go look at
257:22 conversations from our conversation, person A and Nate, and then I can read
257:25 that for context and then respond to you. But if person B talks to me, I'm
257:29 going to go look at my conversation history with person B before I respond
257:34 to them. And that way, I keep two people and two conversations completely
257:37 separate. So, that's what a session ID is. So, if we were having some sort of
257:41 AI agent that was being triggered by an email, we would basically want to set
257:46 the session ID as the email address coming in because then we know that the
257:50 agent's going to be uniquely responding to whoever actually sent that email that
257:55 triggered it. So, just to demonstrate how that works, what I'm going to do is
257:58 just manipulate the session ID a little bit. So, I'm going to come into here and
258:02 I'm going to instead of using the chat trigger node for the session ID, I'm
258:06 going to just define it below. And I'm just going to do that exact example that
258:09 I just talked to you guys about with person A and person B. So I'm just going
258:13 to put a lowercase A in there as the session ID key. So once I save that, what I'm going
258:20 to do is just say hi. Now it's going to respond to me. It's going to update the
258:23 conversation history and say hi. I'm going to say my name is
258:27 Bruce. I don't know why I thought of Bruce, but my name's Bruce. And now it
258:31 says nice to meet you Bruce. How can I assist you? Now what I'm going to do is
258:36 I'm going to change the session ID to B. We'll hit save. And I'm just going to
258:39 say what's my name? And it's going to say I don't have access to your name
258:46 directly. If you'd like, you can provide your name or any other details you want
258:50 me to know. How can I assist you today? So person A is Bruce. Person B is no
258:54 name. And what I'm going to do is go back to putting the key as A. Hit save.
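Conceptually, what the simple memory node is doing here is nothing more than this sketch (a rough illustration, not n8n's actual implementation):

```typescript
// Session-keyed memory, sketched: one history per session ID.
type Turn = { role: "user" | "assistant"; content: string };

const conversations = new Map<string, Turn[]>();

function remember(sessionId: string, turn: Turn): void {
  const history = conversations.get(sessionId) ?? [];
  history.push(turn);
  conversations.set(sessionId, history);
}

function recall(sessionId: string, windowLength = 5): Turn[] {
  // Only the last N interactions come back as context; this is the
  // "context window length" parameter we'll look at in a moment.
  return (conversations.get(sessionId) ?? []).slice(-windowLength * 2);
}
```

Session "a" and session "b" never see each other's turns, which is exactly why Bruce exists in one and not the other. When a Gmail trigger drives the agent instead, you'd key this by the sender, which in n8n means dragging the From field into the session ID (yielding an expression along the lines of {{ $json.From }}).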
259:01 And now if I say, "What is my name?" with a misspelled my, it's going to say,
259:04 "Hey, Bruce." There we go. Your name is Bruce. How can I assist you further? And so
259:10 that's just a really quick demo of how you're able to sort of actually have
259:15 dynamic conversations with multiple users in one single agent flow because
259:20 you can make this field dynamic. So, what I'm going to do to show you guys a
259:23 practical use of this, let's say you're wanting to connect your agent to Slack
259:27 or to Telegram or to WhatsApp or to Gmail. You want the memory to be dynamic
259:31 and you want it to be unique for each person that's interacting with it. So,
259:35 what I have here is a Gmail trigger. I'm going to hit test workflow, which should
259:38 just pull in an email. So, when we open up this email, we can see like the
259:41 actual body of the email. We can see, you know, like history. We can see a
259:45 thread ID, all this kind of stuff. But what I want to look at is who is the
259:48 email from? Because then if I feed this into the AI agent and first of all we
259:53 would have to change the actual user message. So we are no longer talking to
259:57 our agent with the connected chat trigger node, right? We're connecting to
260:01 it with Gmail. So I'm going to click define below. The user message is
260:04 basically going to be whatever you want the agent to look at. So don't even
260:09 think about n8n right now. If you had an agent to help you with your emails, what
260:11 would you want it to read? You'd want it to read maybe a combination of the
260:15 subject and the body. So that's exactly what I'm going to do. I'm just going to
260:19 type in subject. Okay, here's the subject down here. And I'm going to drag
260:22 that right in there. And then I'm just going to say body. And then I would drag
260:27 in the actual body snippet. And it's a snippet right now because in the actual
260:31 Gmail trigger, we have this flicked on as simplified. If we turn that off, it
260:34 would give us not a snippet. It would give us a full email body. But for right
260:37 now, for simplicity, we'll leave it simplified. But now you can see that's
260:40 what the agent's going to be reading every time, not the connected chat
260:44 trigger node. And before we hit test step, what we want to do is we want to
260:47 make the sender of this email also the session key for the simple memory. So
260:54 we're going to define below and what I'm going to do is find the from field which
260:59 is right here and drag that in. So now whenever we get a new email, we're going
261:02 to be looking at conversation history from whoever sent that email to trigger
261:06 this whole workflow. So I'll hit save and basically what I'm going to do is
261:10 just run the agent. And what it's going to do is update the memory. It's going
261:13 to be looking at the correct thing and it's taking some action for us. So,
261:17 we'll take a look at what it does. But basically, it said the invitation email
261:20 for the party tonight has been sent to Ryan. If you need any further
261:23 assistance, please let me know. And the reason why it did that is because the
261:27 actual user message basically was saying we're inviting Ryan to a party. So,
261:31 hopefully that clears up some stuff about dynamic user messages and
261:36 dynamic memory. And now you're on your way to building some pretty cool agentic
261:39 workflows. And something important to touch on real quick is with the memory
261:43 within the actual node. What you'll notice is that there is a context window
261:47 length parameter. And this says how many past interactions the model receives as
261:50 context. So this is definitely more of the short-term memory because it's only
261:53 going to be looking at the past five interactions before it crafts its
261:57 response. And this is not just with a simple memory node. What we have here is
262:01 if I delete this connection and click on memory, you can see there are other
262:04 types of memory we can use for our AI agents. Let's say for example we're
262:07 doing Postgres, which later in this course you'll see how to set up.
262:10 But in Postgres you can see that there's also a context window length. So
262:14 just to show you guys an example of like what that actually looks like. What
262:16 we're going to do is just connect back to here. I'm going to drag in our chat
262:20 message trigger which means I'm going to have to change the input of the AI
262:23 agent. So we're going to get rid of this whole defined below with the subject
262:27 and body. We're going to drag in the connected chat trigger node. Go ahead
262:31 and give this another save. And now I'm just going to come into the chat and
262:37 say, "Hello, Mr. Agent. What is going on here? We have the memory is messed up."
262:41 So remember, I just changed the session ID from our chat trigger to the Gmail
262:48 trigger, that is, the email address of whoever just sent us the
262:50 email. So I'm going to have to go change that again. I'm just going to simply
262:54 choose connected chat trigger node. And now it's referencing the correct session
262:57 ID. Our variable is green. We're good to go. We'll try this again. Hello, Mr.
263:01 Agent. It's going to talk to us. So, I'll just say my name is Nate. Okay. Nice to meet you, Nate. How
263:14 can I assist you? My favorite color is blue. And I'm going to say, you know,
263:21 tell me about myself. Okay. So, it's using all that memory, right? We
263:24 basically saw a demo of this, but it basically says, other than your name and
263:27 your favorite color is blue, what else is there about you? So if I go into the
263:31 agent and I click over here into the agent logs, we can see the basically the
263:36 order of operations that the agent took in order to answer us. So the first
263:40 thing that it does is it uses its simple memory. And that's where you can see
263:43 down here, these are basically the past interactions that we've had, which was
263:49 hello Mr. Agent, my name is Nate, my favorite color is blue. And this would
263:52 basically cap out at five interactions. So that's all we're basically setting in
263:57 this context window length right here. So, just wanted to throw that out there
264:00 real quick. This is not going to be absolutely unlimited memory to remember
264:04 everything that you've ever said to your agent. We would have to set that up in a
264:07 different way. All right, so you've got your agent up and running. You have your
264:10 simple memory set up, but something that I alluded to in that video was setting
264:15 up memory outside of n8n, which could be something like Postgres. So in this
264:17 next one, we're going to walk through the full setup of creating a Supabase
264:21 account, connecting your Postgres and your Supabase so that you can have your
264:25 short-term memory with Postgres, and then you can also connect a vector
264:28 database with Supabase. So let's get started. So today I'm going to be
264:30 showing you guys how to connect PostgreSQL and Supabase to n8n. So
264:34 what I'm going to be doing today is walking through signing up for an
264:36 account, creating a project, and then connecting them both to n8n so you guys
264:40 can follow every step of the way. But real quick, Postgres is an open-source
264:43 relational database management system where you're able to use plugins like
264:46 pgvector if you want vector similarity search. In this case, we're just going
264:49 to be using Postgres as the memory for our agent. And then Supabase is a
264:52 backend as a service that's kind of built on top of Postgres. And in
264:55 today's example, we're going to be using that as the vector database. But I don't
264:58 want to waste any time. Here we are in n8n. And what we know we're going to
265:01 do here for our agent is give it memory with Postgres and access to a vector
265:05 database in Supabase. So for memory, I'm going to click on this plus and
265:07 click on Postgres chat memory. And then we'll set up this credential. And then
265:10 over here we want to click on the plus for tool. We'll grab a Supabase vector
265:13 store node, and then this is where we'll hook up our Supabase credential. So
265:16 whenever we need to connect to these third-party services, what we have to do
265:19 is come into the node, go to our credential, and then we want to create a
265:22 new one. And then we have all the stuff to configure, like our host, our username,
265:26 our password, our port, all this kind of stuff. So we have to hop into Supabase
265:30 first create account create a new project and then we'll be able to access
265:33 all this information to plug in. So here we are in Supabase. I'm going to be
265:35 creating a new account like I said just so we can walk through all of this step
265:38 by step for you guys. So, first thing you want to do is sign up for a new
265:41 account. So, I just got my confirmation email. So, I'm going to go ahead and
265:43 confirm. Once you do that, it's going to have you create a new organization. And
265:46 then within that, we create a new project. So, I'm just going to leave
265:48 everything as is for now. It's going to be personal. It's going to be free. And
265:52 I'll hit create organization. And then from here, we are creating a new
265:54 project. So, I'm going to leave everything once again as is. This is the
265:57 organization we're creating the project in. Here's the project name. And then
266:00 you need to create a password. And you're going to have to remember this
266:02 password to hook up to our Supabase node later. So, I've entered my password. I'm
266:06 going to copy this because like I said, you want to save this so you can enter
266:08 it later. And then we'll click create new project. This is going to be
266:11 launching up our project. And this may take a few minutes. So, um, just have to
266:15 be patient here. As you can see, we're in the screen. It's going to say setting
266:18 up project. So, we pretty much are just going to wait until our project's been
266:21 set up. So, while this is happening, we can see that there's already some stuff
266:23 that may look a little confusing. We've got project API keys with a service role
266:27 secret. We have configuration with a different URL and some sort of JWT
266:30 secret. So, I'm going to show you guys what all of this is, how to access it, and how to
266:35 plug it into the right places in n8n. But, as you can see, we got launched to
266:38 a different screen. The project status is still being launched. So, just going
266:41 to wait for it to be complete. So, everything just got set up. We're now
266:44 good to connect to n8n. And what you want to do is typically you'd come down
266:47 to project settings and you click on database. And this is where everything
266:50 would be to connect. But it says connection string has moved. So, as you
266:52 can see, there's a little button up here called connect. So, we're going to click
266:55 on this. And now, this is where we're grabbing the information that we need
266:58 for Postgres. So this is where it gets a little confusing because there's a lot
267:01 of stuff that we need for Postgres. We need to get a host, a username, our
267:05 password from earlier when we set up the project, and then a port. So all we're
267:08 looking for are those four things, but we need to find them in here. So what
267:11 I'm going to do is change the type to PostgreSQL. And then I'm going to go
267:15 down to the transaction pooler, and this is where we're going to find the things
267:17 that we need. The first thing that we're looking for is the host, which if you
267:20 set it up just like me, it's going to be after the -h. So it's going to be aws,
267:24 then our region, .pooler.supabase.com. So we're going to grab that, copy it, and then we're
267:29 going to paste that into the host section right there. So that's what it
267:32 should look like for host. Now we have a database and a username to set up. So if
267:36 we go back into that Supabase page, we can see we have a -d and a -U. So the database is
267:40 going to stay as postgres, but for user, we're going to grab everything
267:43 after the -U, which is going to be postgres. followed by these
267:47 different characters. So I'm going to paste that in here under the user. And
267:50 for the password, this is where you're going to paste in the password that you
267:53 use to set up your Supabase project. And then finally at the bottom, we're
267:56 looking for a port, which is by default 5432. But in this case, we're going to
267:59 grab the port from the transaction pooler right here, which is following
268:04 the lowercase -p. So we have 6543. I'm going to copy that, paste that into here
268:07 as the port. And then we'll hit save. And we'll see if we got connection
268:10 tested successfully. There we go. We got green.
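To recap, all four values come out of that transaction pooler string. With a setup like this one, they look roughly like the following (your region and project ref will differ):

```
Host:     aws-0-<your-region>.pooler.supabase.com   (the part after -h)
Database: postgres                                  (the part after -d)
User:     postgres.<your-project-ref>               (the part after -U)
Password: the database password you set at project creation
Port:     6543                                      (pooler port; a direct Postgres connection defaults to 5432)
```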
268:13 And then I'm just going to rename this credential so I can keep it organized. So there we go. We've connected to
268:16 Postgres as our chat memory. We can see that it is going to be using the
268:19 connected chat trigger node. That's how it's going to be using the key to store
268:22 this information. And it's going to be storing it in a table in Supabase called
268:25 n8n_chat_histories. So real quick, I'm going to talk to the agent. I'm just
268:28 going to disconnect the Supabase so we don't get any errors. So now when I send
268:31 off hello AI agent, it's going to respond to us with something like hey,
268:34 how can I help you today? Hello, how can I assist you? And now you can see that
268:37 there were two things stored in our Postgres chat memory. So we'll switch
268:40 over to Supabase. And now we're going to come up here on the left and go to
268:43 table editor. We can see we have a new table that we just created called
268:46 n8n_chat_histories. And then we have two messages in here. So the first one, as
268:49 you can see was a human type and the content was hello AI agent which is what
268:53 we said to the AI agent and then the second one was a type AI and this is the
268:58 AI's response to us. So it said hello how can I assist you today. So this is
269:01 where all of your chats are going to be stored based on the session ID and just
269:05 once again this session ID is coming from the connected chat trigger node. So
269:08 it's just coming from this node right here. As you can see, there's the
269:11 session ID that matches the one in our chat memory table. And that is how
269:16 it's using it to store sort of like the unique chat conversations. Cool. Now
269:20 that we have Postgres chat memory set up, let's hook up our Supabase vector
269:24 store. So, we're going to drag it in. And then now we need to go up here and
269:27 connect our credentials. So, I'm going to create new credential. And we can see
269:31 that we need two things, a host and a service role secret. And the host is not
269:34 going to be the same one as the host that we used to set up our Postgres. So
269:37 let's hop into Supabase and grab this information. So back in Supabase, we're
269:41 going to go down to the settings. We're going to click on data API, and then we
269:45 have our project URL and then we have our service role secret. So this is all
269:49 we're using. For the URL, we're going to copy this, go back to n8n, and then we'll
269:52 paste this in as our host. As you can see, it's supposed to be https://
269:56 and then your Supabase project URL. So we'll paste that in, and you can see
270:00 that's what we have, ending in .supabase.co. Also, keep in mind this is because I launched up an
270:03 organization and a project in Supabase's cloud. If you were to
270:06 self-host this, it would be a little different because you'd have to access
270:09 your local host. And then of course, we need our service role secret. So back in
270:13 Supabase, I'm going to reveal, copy, and then paste it into n8n. So let me
270:16 do that real quick. And as you can see, I got that huge token. Just paste it in.
270:19 So what I'm going to do now is save it. Hopefully it goes green. There we go. We
270:22 have connection tested successfully. And then once again, just going to rename
270:25 this. The next step from here would be to create our Supabase vector store
270:28 within the platform that we can actually push documents into. So you're going to
270:31 click on docs right here. You are going to go to the quick start for setting up
270:35 your vector store and then all you have to do right here is copy this command.
270:38 So in the top right, copy this script. Come back into Supabase. You'll come on
270:42 the left-hand side to SQL editor. You'll paste that command in here. You
270:44 don't change anything at all. You'll just hit run. And then you should
270:48 see down here success. No rows returned. And then in the table editor, we'll have
270:51 a new table over here called documents. So this is where when we're actually
270:54 vectorizing our data, it's going to go into this table. Okay. So, I'm
270:57 just going to do a real quick example of putting a Google doc into our Supabase
271:00 vector database just to show you guys that everything's connected the way it
271:03 should be and working as it should. So, I'm going to grab a Google Drive node
271:06 right here. I'm going to click download file. I'm going to select a file to
271:10 download which in this case I'm just going to grab body shop services terms
271:13 and conditions and then hit test step. And we'll see the binary data which is a
271:17 doc file over here. And now we have that information. And what we want to do with
271:21 it is add it to the Supabase vector store. So, I'm going to type in
271:25 Supabase. We'll see vector store. The operation is going to be add documents
271:28 to vector store. And then we have to choose the right credential because we
271:31 have to choose the table to put it in. So this is in this case we already made
271:34 a table. As you can see in our Supabase it's called documents. So back in here
271:38 I'm going to choose the credential I just made. I'm going to choose insert
271:41 documents, and I'm going to choose the table to insert it into. Not the
271:45 n8n_chat_histories; we want to insert this into documents, because that one is set up for
271:49 vectorization. From there I have to choose our document loader as well as
271:52 our embeddings. So I'm not really going to dive into exactly what this all means
271:55 right now. If you're kind of confused and you're wanting a deeper dive on RAG
271:58 and building agents, definitely check out my paid community. We've got
272:01 different deep dive topics about all this kind of stuff. But I'm just going
272:03 to set this up real quick so we can see the actual example. I'm just choosing
272:07 the binary data to load in here. I'm choosing the embedding and I'm choosing
272:10 our text splitter which is going to be recursive. And so now all I have to do
272:13 here is hit run. It's going to be taking that binary data of that body shop file.
272:17 It split it up. And as you can see there's three items. So if we go back
272:20 into our Supabase vector store and we hit refresh, we now see three items in our
272:25 vector database and we have the different content and all this
272:28 information here like the standard oil change, the synthetic oil change is
272:31 coming from our body shop document that I have right here that we put in there
272:35 just to validate the RAG. And we know that this is a vector database store
272:38 rather than a relational one because we can see we have our vector embedding
272:41 over here which is all the dimensions. And then we have our metadata. So we
272:44 have stuff like the source and the blob type, all this kind of stuff. And
272:47 this is where we could also go ahead and add more metadata if we wanted to.
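If it helps to picture it, each row that quick-start script stores looks roughly like this. The column names come from the standard script; the embedding dimension depends on which embeddings model you chose:

```typescript
// Rough shape of a row in the Supabase `documents` table (illustrative).
type DocumentRow = {
  id: number;                        // primary key
  content: string;                   // one text chunk from the split document
  metadata: Record<string, unknown>; // jsonb: source, blob type, anything you add
  embedding: number[];               // the vector itself, e.g. 1536 dimensions
};
```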
272:51 Anyways, now that we have vectors in our documents table, we can hook up the
272:55 actual agent to the correct table. So in here, what I'm going to call this is
273:00 body shop. For the description, I'm going to say use this to get information
273:06 about the body shop. And then from the table name, we have to choose the
273:08 correct table, of course. So we know that we just put all this into something
273:11 called documents. So I'm going to choose documents. And finally, we just have to
273:15 choose our embeddings, of course, so that it can embed the query and pull
273:18 stuff back accurately. And that's pretty much it. We have our AI agent set up.
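Behind that vector store tool, the retrieval step is conceptually just: embed the question with the same embeddings model, find the nearest chunks, and hand them to the LLM. Here's a hedged sketch using the supabase-js client and the match_documents function that the quick-start script created; embed() is a placeholder for whatever embeddings model you wired up:

```typescript
import { createClient } from "@supabase/supabase-js";

// Placeholder values; use your own project URL and service role secret.
const supabase = createClient("https://<project-ref>.supabase.co", "<service-role-secret>");

// Stand-in for the embeddings model configured in n8n.
declare function embed(text: string): Promise<number[]>;

async function retrieve(question: string) {
  const queryEmbedding = await embed(question);
  // match_documents comes from the Supabase vector-store quick-start script.
  const { data, error } = await supabase.rpc("match_documents", {
    query_embedding: queryEmbedding,
    match_count: 4, // top-k chunks to pull back
  });
  if (error) throw error;
  return data; // the chunks the agent uses to write its augmented answer
}
```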
273:22 So, let's go ahead and do a test and see what we get back. So, I'm going to go
273:25 ahead and say what brake services are offered at the body shop. It's going to
273:29 update the Postgress memory. So, now we'll be able to see that query. It hit
273:32 the Superbase vector store in order to retrieve that information and then
273:36 create an augmented generated answer for us. And now we have the body shop offers
273:40 the following brake services. 120 per axle for replacement, 150 per axle for
273:45 rotor replacement, and then full brake inspection is 30 bucks. So, if we click
273:49 back into our document, we can see that that's exactly what it just pulled. And
273:53 then, if we go into our vector database within Supabase, we can find that
273:56 information in here. But then we can also click on n8n_chat_histories, and we
274:00 can see we have two more chats. So, the first one was a human, which is what we
274:03 said. What brake services are offered at the body shop? And then the second one
274:07 was an AI content, which is the body shop offers the following brake services,
274:10 blah blah blah. And this is exactly what it just responded to us with within n8n
274:14 down here as you can see. And so keep in mind this AI agent has zero prompting.
274:17 We didn't even open up the system message. All that's in here is you are a
274:20 helpful assistant. But if you are setting this up, what you want to do is
274:23 explain its role, and you want to tell it, you know, you have access to a
274:28 vector database. It is called X. It has information about X, Y, and Z, and you
274:32 should use it when a client asks about X, Y, and Z. Anyways, that's going to be it
274:35 for this one. Supabase and Postgres are super super powerful tools to use to
274:38 connect up as a database for your agents, whether it's going to be
274:41 relational or vector databases and you've got lots of options with, you
274:44 know, self-hosting and some good options for security and scalability there. Now
274:47 that you guys have built an agent and you see the way that an agent is able to
274:50 understand what tools it has and which ones it needs to use, what's really
274:55 really cool and powerful about n8n is that we can have a tool for an AI agent
274:59 be a custom workflow that we built out in n8n, or we can build out a custom
275:04 agent in n8n and then give our main agent access to call on that lower
275:08 agent. So what I'm about to share with you guys next is an architecture you can
275:11 use when you're building multi-agent systems. It's basically called having an
275:15 orchestrator agent and sub agents or parent agents and child agents. So,
275:18 let's dive into it. I think you guys will think it's pretty cool. So, a
275:22 multi-agent system is one where we have multiple autonomous AI agents working
275:26 together in order to get the job done and they're able to talk to each other
275:28 and they're able to use the tools that they have access to. What we're going to
275:31 be talking about today is a type of multi-agent system called the
275:34 orchestrator architecture. And basically what that means is that we have one agent
275:38 up here. I call it the parent agent and then I call these child agents. But we
275:41 have an orchestrator agent that's able to call on different sub-agents. And the
275:46 best way to think about it is this agent's only goal is to understand the
275:50 intent of the user. Whether that's through Telegram or through email,
275:53 whatever it is, understanding that intent and then understanding, okay, I
275:57 have access to these four agents and here is what each one is good at. Which
276:01 one or which ones do I need to call in order to actually achieve the end goal?
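As a mental model, the orchestrator is doing nothing more than this rough TypeScript sketch. The names are made up and the routing is hardcoded to mirror the demo; in n8n, the LLM makes those routing choices itself by reading each tool's description:

```typescript
// Orchestrator pattern, sketched: a parent agent that only routes.
type SubAgent = { description: string; run: (task: string) => string };

const subAgents: Record<string, SubAgent> = {
  contact:  { description: "Looks up contact information",  run: (t) => `contact result for: ${t}` },
  content:  { description: "Researches and writes content", run: (t) => `draft for: ${t}` },
  email:    { description: "Takes any email actions",       run: (t) => `sent: ${t}` },
  calendar: { description: "Creates calendar events",       run: (t) => `event created: ${t}` },
};

function orchestrate(): string {
  // Read the user's intent, then delegate each piece to the right specialist.
  const email = subAgents.contact.run("get Dexter Morgan's email address");
  const blog = subAgents.content.run("write a quick blog post about dogs");
  subAgents.email.run(`send ${blog} to ${email}`);
  subAgents.calendar.run("dinner tonight at 6 p.m. with Michael Scott");
  return "Blog post sent to Dexter Morgan and dinner event created.";
}
```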
276:05 So, in this case, if I'm saying to the agent, can you please write me a quick
276:11 blog post about dogs and send that to Dexter Morgan, and can you also create a
276:14 dinner event for tonight at 6 p.m. with Michael Scott? And thank you. Cool. So,
276:20 this is a pretty loaded task, right? And can you imagine if this one agent had
276:25 access to all of these like 15 or however many tools and it had to do all
276:29 of that itself, it would be pretty overwhelmed and it wouldn't be able to
276:32 do it very accurately. So, what you can see here is it is able to just
276:35 understand, okay, I have these four agents. They each have a different role.
276:38 Which ones do I need to call? And you can see what it's doing is it called the
276:41 contact agent to get the contact information. Right now, it's calling the
276:44 content creator agent. And now that that's finished up, it's probably going
276:47 to call the calendar agent to make that event. And then it's going to call the
276:50 email agent in order to actually send that blog that we had the content
276:54 creator agent make. And then you can see it also called this little tool down
276:56 here called Think. If you want to see a full video where I broke down what that
276:59 does, you can watch it right up here. But we just got a response back from the
277:03 orchestrator agent. So, let's see what it said. All right, so it said, "The
277:06 blog post about dogs has been sent to Dexter Morgan. A dinner event for
277:09 tonight at 6 p.m. with Michael Scott has been created. And if you need anything
277:12 else, let me know." And just to verify that that actually went through, you can
277:14 see we have a new event for dinner at 6 p.m. with Michael Scott. And then in our
277:18 email and our Sent folder, we can see that we have a full blog post sent to Dexter
277:22 Morgan. And you can see that we also have a link right here that we can click
277:24 into, which means that the content creator agent was able to do some
277:28 research, find this URL, create the blog post, and send that back to the
277:31 orchestrator agent. And then the orchestrator agent remembered, okay, so
277:34 I need to send a blog post to Dexter Morgan. I've got his email from the
277:37 contact agent. I have the blog post from the content creator agent. Now all I
277:40 have to do is pass it over to the email agent to take care of the rest. So yes,
277:44 it's important to think about the tools because if this main agent had access to
277:47 all those tools, it would be pretty overwhelming. But also think about the
277:50 prompts. So, in this ultimate assistant prompt, it's pretty short, right? All I
277:54 had to say was, "You're the ultimate assistant. Your job is to send the
277:57 user's query to the correct tool. You should never be writing emails or ever
278:00 creating summaries or doing anything. You just need to delegate the task." And
278:04 then what we did is we said, "Okay, you have these six tools. Here's what
278:06 they're called. Here's when you use them." And it's just super super clear
278:10 and concise. There's almost no room for ambiguity. We gave it a few rules, an
278:14 example output, and basically that's it. And now it's able to interpret any query
278:17 we might have, even if it's a loaded query. As you can see, in this case, it
278:20 had to call all four agents, but it still got it right. And then when it
278:24 sends over something to like the email agent, for example, we're able to give
278:27 this specific agent a very, very specific system prompt because we only
278:31 have to tell it about you only have access to these email tools. And this is
278:35 just going back to the whole thing about specialization. It's not confusing. It
278:39 knows exactly what it needs to do. Same thing with these other agents. You know,
278:41 the calendar agent, of course, has its own prompts with its own set of calendar
278:45 tools. The contact agent has its own prompt with its own set of contact
278:48 tools. And then of course we have the content creator agent which has to know
278:52 how to not only do research using its tavly tool but it also has to format the
278:57 blog post with you know proper HTML. As you can see here there was like a title
279:00 there were headings there were you know inline links all that kind of stuff. And
279:04 so because we have all of this specialization, can you imagine if we had
279:08 all of that system prompt thrown into this one agent and gave it access to all
279:12 the tools? It just wouldn't be good. And if you're still not convinced, think about
279:15 the fact that for each of these different tasks, because we know what
279:18 each agent is doing, we're able to give it a very specific chat model because,
279:21 you know, like for something like content creation, I like to use Claude
279:24 3.7, but I wouldn't want to use something as expensive as Claude 3.7 just
279:28 to get contacts or to add contacts to my contact database. So that's why I went
279:32 with Flash here. And then for these ones, I'm using 4.1 Mini. So you're able
279:36 to have a lot more control over exactly how you want your agents to run. And so
279:39 I pretty much think I hit on a lot of that, but you know, benefits of
279:43 multi-agent system, more reusable components. So now that we have built
279:46 out, you know, an email agent, whenever I'm building another agent ever, and I
279:49 realize, okay, maybe it would be nice for this agent to have a couple email
279:53 functions. Boom, I just give it access to the email agent because we've already
279:56 built it and this email agent can be called on by as many different workflows
279:59 as we want. And when we're talking about reusable components, that doesn't have
280:04 to just mean these agents are reusable. It could also be workflows that are
280:07 reusable. So, for example, if I go to this AI marketing team video, if you
280:09 haven't watched it, I'll leave a link right up here. These tools down here,
280:14 none of these are agents. They're all just workflows. So, for example, if I
280:17 click into the video workflow, you can see that it's sending data over to this
280:20 workflow. And even though it's not an agent, it still is going to do
280:24 everything it needs to do and then send data back to that main agent. Similarly,
280:28 with this create image tool, if I was to click into it real quick, you can see
280:31 that this is not an agent, but what it's going to do is it's going to take
280:34 information from that orchestrator agent and do a very specific function. That
280:38 way, this main agent up here, all it has to do is understand, I have these
280:41 different tools, which one do I need to use. So, reusable components and also
280:45 we're going to have model flexibility, different models for different agents.
280:48 We're going to have easier debugging and maintenance because like I said with the
280:51 whole prompting thing, if you tried to give that main agent access to 25 tools
280:56 and in the prompt you have to say here's when you use all 25 tools and it wasn't
280:59 working, you wouldn't know where to start. You would feel really overwhelmed
281:02 as to like how do I even fix this prompt. So by splitting things up into
281:07 small tasks and specialized areas, it's going to make it so much easier.
281:10 Exactly like I just covered, point number four: clearer prompt logic and better
281:14 testability. And finally, it's a foundation for multi-turn agents or
281:17 agent memory. Just because we're sending data from main agent to sub agent
281:20 doesn't mean we're losing that context of like we're talking to Nate right now
281:24 or we're talking to Dave right now. We can still have that memory pass between
281:28 workflows. So things get really really powerful and it's just pretty cool.
281:32 Okay, so we've seen a demo. I think you guys understand the benefits here. Just
281:35 one thing I wanted to throw out before we get into like a live build of a
281:44 multi-agent system is just because this is cool and there's benefits doesn't
281:44 mean it's always the right thing to do. So if you're forcing a multi-agent
281:47 orchestrator framework into a process that could be a simple single agent or a
281:52 simple AI workflow, all you're going to be doing is you're going to be
281:54 increasing the latency. You're going to be increasing the cost because you're
281:58 making more API calls and you're probably going to be increasing the
282:02 amount of error just because kind of the golden rule is you want to eliminate as
282:06 much data transfer between workflows as you can because that's where you can run
282:10 into like some issues. But of course there are times when you do need
282:13 dedicated agents for certain functions. So, let's get into a new workflow and
282:17 build a really simple example of an orchestrator agent that's able to call
282:21 on a sub agent. All right. So, what we're going to be doing here is we're
282:24 going to build an orchestrator agent. So, I'm going to hit tab. I'm going to
282:27 type in AI agent and we're going to pull this guy in. And we're just going to be
282:29 talking to this guy using that little chat window down here for now. So, first
282:33 thing we need to do as always is connect a brain. I'm going to go ahead and grab
282:36 an open router. And we're just going to throw in a 4.1 mini. And I'll just
282:40 change this name real quick so we can see what we're using. And from here,
282:44 we're basically just going to connect to a subworkflow. And then we'll go build
282:48 out that actual subworkflow agent. So the way we do it is we click on this
282:52 plus under tool. And what we want to do is call n8n workflow tool, because you
282:56 can see it says uses another n8n workflow as a tool. Allows packaging any
283:00 n8n node as a tool. So it's super cool. That's how we can send data to
283:03 these like custom things that we built out. As you saw earlier when I showed
283:06 that little example of the marketing team agent, that's how we can do it. So
283:10 I'm going to click on this. And basically when you click on this,
283:13 there's a few things to configure. The first one is a description of when do
283:17 you use this tool. You'll kind of tell the agent that here and you'll also be
283:20 able to tell a little bit in a system prompt, but you have to tell it when to
283:23 use this tool. And then the next thing is actually linking the tool. So you can
283:27 see we can choose from a list of our different workflows in n8n. You can see
283:30 I have a ton of different workflows here, but all you have to do is you have
283:32 to choose the one that you want this orchestrator agent to send data to. And
283:36 one thing I want to call attention to here is this text box which says the
283:40 tool will call the workflow you defined below and it will look in the last node
283:44 for the response. The workflow needs to start with an execute workflow trigger.
283:48 So what does this mean? Let's just go build another workflow and we will see
283:51 exactly what it means. So I'm going to open up a new workflow which is going to
283:54 be our sub agent. So, I'm going to hit tab to open up the nodes. And it's
283:57 obviously prompting us to choose a trigger. And we're going to choose this
284:00 one down here that says when executed by another workflow, runs the flow when
284:04 called by the execute workflow node from a different tool. So, basically, the
284:07 only thing that can access this node and send data to this node is one of these
284:12 bad boys right here. So, these two things are basically just connected and
284:15 data is going to be sending between them. And what's interesting about this
284:18 node is you can have a couple ways that you accept data. So, by default, I
284:21 usually just put it on accept all data. And this will put things into a field
284:25 right here called query. But if you wanted to, you could also have it send
284:29 over specific fields. So, if you wanted to only get like, you know, a phone
284:32 number and you wanted to get a name and you wanted to get an email and you
284:36 wanted those all to already be in three separate fields, that's how you could do
284:39 that. And a practical example of that would be in my marketing team right here
284:43 in the create image. You can see that I'm sending over an image title, an
284:47 image prompt, and a chat ID. And that's another good example of being able to
284:51 send, you know, like memory over because I have a chat ID coming from here, which
284:55 is memory to the agent right here. But then I can also send that chat ID to the
284:59 next workflow if we need memory to be accessed down here as well. So in this
285:02 case, just to start off, we're not going to be sending over specified fields.
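To make those two input modes concrete, here's a rough sketch of the payload shapes involved. Only the single "query" field reflects n8n's accept-all convention as shown in the video; the field names in the second example are invented for illustration.

```python
# Hypothetical illustration of the two input modes of the
# "When Executed by Another Workflow" trigger.

# "Accept all data": everything the orchestrator sends arrives
# in one field, conventionally called "query":
payload_accept_all = {
    "query": "Please send an email to Nate asking him how he's doing."
}

# Defined fields: the orchestrator has to fill in each field separately
# (these particular names are just examples):
payload_defined_fields = {
    "name": "Nate",
    "email": "nate@example.com",
    "phone": "+1 555 0100",
}
```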
285:06 We're just going to do accept all data and let us connect an AI agent to this
285:10 guy. So I'm going to type in AI agent. We'll pull this in. The first thing we
285:14 need to do is we need to change this because we're not going to be talking
285:17 through the connected chat trigger node as we know because we have this trigger
285:21 right here. So what we're going to do is save this workflow. So now it should
285:24 actually register in n8n that we have this workflow. I'm going to go back in
285:27 here and we're just going to connect it. So we know that it's called sub
285:32 agent. So grab that right there. And now you can see it says the sub workflow is
285:35 set up to receive all input data. Without specific inputs, the agent will
285:39 not be able to pass data to this tool. You can define the specific inputs in
285:42 the trigger. So that's exactly what I just showed you guys with changing that
285:45 right there. So what I want to do is show how data gets here so we can
285:49 actually map it so the agent can read it. So what we need to do before we can
285:52 actually test it out is we need to make sure that this orchestrator agent
285:56 understands what this tool will do and when to use it. So let's just say that
285:59 this one's going to be an email agent. First thing I'm going to do is just
286:03 intuitively name this thing email agent. I'm then going to type in the
286:07 description: call this tool to take any email actions. So now it should
286:12 basically, you know, signal to this guy whenever I see any sort of query come in
286:16 that has to do with email. I'm just going to pass that query right off to
286:19 this tool. So as you can see, I'm not even going to add a system message to
286:22 this AI agent yet. We're just going to see if it can understand. And I'm going
286:26 to come in here and say, "Please send an email to Nate asking him how he's
286:30 doing." So, we fire that off and hopefully it's going to call this tool and then we'll
286:35 be able to go in there and see the query that we got. The reason that this
286:38 errored is because we haven't mapped anything. So, what I'm going to do is
286:41 click on the tool. I'm going to click on view subexecution. So, we can pop open
286:45 like the exact error that just happened. And we can see exactly what happened is
286:49 that this came through in a field called query. But the main agent is not looking
286:53 for a field called query. It's looking for a field called chatInput. So I'm
286:57 just going to click on debug in editor so we can actually pull this in. Now all
287:00 I have to do is come in here, change this to define below, and then just drag
287:04 in the actual query. And now we know that this sub agent is always going to
287:09 receive the orchestrator agent's message. But what you'll notice here is that the
287:12 orchestrator agent sent over a message that says, "Hi Nate, just wanted to
287:15 check in and see how you're doing. Hope all is well." So there's a mistake here
287:19 because this main agent ended up basically like creating an email and
287:23 sending it over. All we wanted to do is basically just pass the message along.
287:27 So what I would do here is come into the system prompt and I'm just going to
287:31 say overview. You are an orchestrator agent. Your only job is to delegate the task to
287:40 the correct tool. No need to write emails or create summaries. There we go. So just with a
287:45 very simple line, that's all we're going to do. And before we shoot that off, I'm
287:48 just going to go back into the sub workflow and we have to give this thing
287:52 an actual brain so that it can process messages. We're just going to go with a
287:57 GPT-4.1 mini once again. Save that, so it actually reflects on this main agent.
288:00 And now, let's try to send off this exact same query. And we'll see what it
288:04 does this time. So, it's calling the email agent tool. It shouldn't error
288:09 because we fixed it. But, as you can see now, it just called that tool
288:12 twice. So, we have to understand why did it just call the sub agent twice. First
288:16 thing I'm going to do is click into the main agent and I'm going to click on
288:19 logs. And we can see exactly what it did. So when it called the email agent
288:23 once again it sent over a subject which is checking in and an actual email body.
288:26 So we have to fix the prompting there right? But then the output which is
288:30 basically what that sub workflow sent back said here's a slightly polished
288:33 version of your message for a warm and clear tone blah blah blah. And then for
288:36 some reason it went and called that email agent again. So now it says please
288:40 send the following email and it sends it over again. And then the sub agent says
288:45 I can't send emails directly but here's the email you can send. So, they're both
288:48 in this weird loop of thinking they're creating an email for each other, but not actually
288:51 being able to send it. So, let's take a look and see how we can fix that. All
288:55 right, so back in the sub workflow, what we want to do now is actually let this
288:58 agent have the ability to send emails. Otherwise, they're just going to keep
289:01 doing that endless loop. So, I'm going to add a tool and type in Gmail. We're
289:05 going to change this to a send message operation. I'm just going to rename this
289:10 send email. And we're just going to have the To field be defined by the model. We're
289:14 going to have the subject be defined by the model. and we're going to have the
289:17 message be defined by the model. And all this means is that ideally, you know,
289:21 this query is going to say, hey, send an email to nate@example.com asking what's up.
289:26 The agent would then interpret that and it would fill out, okay, who is it going
289:29 to? What's the subject and what's the message? It would basically just create
289:33 it all itself using AI. And the last thing I'm going to do is just turn off
289:36 the n8n attribution right there. And now let's give it another shot. And keep
289:39 in mind, there's no system prompt in this actual agent. And I actually want
289:43 to show you guys a cool tip. So when you're building these multi-agent
289:47 systems and you're doing things like sending data between flows, if you don't
289:51 want to always go back to the main agent to test out like how this one's working,
289:55 what you can do is come into here and we can just edit this query and just like
289:59 set some mock data as if the main agent was sending over some stuff. So I'm
290:02 going to pretend the orchestrator agent sent over to the sub agent: please
290:12 send an email to nate@example.com asking what's up.
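In other words, the pinned mock data stands in for whatever the orchestrator would normally send. The exact JSON shape below is an assumption, but it matches the single query field we mapped earlier.

```python
# Editing the trigger's pinned "query" lets you test the sub-agent in
# isolation, without re-running the orchestrator for every tweak:
mock_input = {
    "query": "Please send an email to nate@example.com asking what's up."
}
```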
290:15 And now you can see that's the query. That's exactly what this agent's going to be
290:18 looking at right here. And if we hit play above this AI agent, we'll see that
290:22 hopefully it's going to call that send email tool and we'll see what it did. So
290:25 it just finished up. We'll click into the tool to see what it did. And as you
290:29 can see, it sent it to nate@example.com. It made the subject 'checking in' and then
290:32 the message was, "Hey Nate, just wanted to check in and see what's up. Best, your
290:36 name." So, my thought process right now is like, let's get everything working
290:39 the way we want it with this agent before we go back to that orchestrator
290:43 agent and fix the prompting there. So, one thing I don't like is that it's
290:46 signing off with 'Best, your name.' So, we have a few options here. We could fix
290:50 that in the system prompt, but the same idea applies to specialization: if
290:54 this tool is specialized in sending emails, we might as well instruct it how
290:58 to send emails in this tool. So for the message, I'm going to add a description
291:01 and I'm going to say always sign off emails as Bob. And that really should do it. So
291:08 because we have this mock data right here, I don't have to go and, you know,
291:12 send another message. I can just test it out again and see what it's going to do.
291:15 So it's going to call the send email tool. It's going to make that message.
291:18 And now we will go ahead and look and see if it's signed off in a better way.
291:21 Right here, we can see now it's signing off 'Best, Bob.' So, let's just say right
291:25 now we're happy with the way that our sub agent's working. We can go ahead and
291:29 come back into the main agent and test it out again. All right. So, I'm just
291:32 going to shoot off that same message again that says, "Send an email to Nate
291:35 asking him how he's doing." And this will be interesting. We'll see what it
291:38 sends over. It was one run and it says, "Could you please provide Nate's email
291:42 address so I can send the message?" So, what happened here was the subexecution
291:46 realized we don't have Nate's email address. And that's why it basically
291:49 responded back to this main agent and said, "I need that in order to send the
291:53 message." So if I click on subexecution, we will see exactly what it did and why
291:57 it did that and it probably didn't even call that send email tool. Yeah. So it
292:01 actually failed, and it failed because it tried to fill out the To field as 'Nate' and it
292:05 realized that's not like a valid email address. So then because this sub agent
292:09 responds with could you please provide Nate's email address so I can send the
292:13 message. That's exactly what the main agent saw right here in the response
292:16 from this agent tool. So that's how they're able to talk to each other, go
292:19 back and forth, and then you can see that the orchestrator agent prompted us
292:22 to actually provide Nate's email address. So now we're going to try,
292:25 please send an email to nate@example.com asking him how the project is coming
292:29 along. We'll shoot that off and everything should go through this time.
292:32 But instead it basically says, oh, which project are you referring to? This will
292:35 help me provide you with the most accurate and relevant update. So once
292:39 again, the sub agent is like, okay, I don't have enough information to send
292:42 off that message, so I'm going to respond back to that orchestrator agent.
292:46 And just because we actually need one to get through, let me shoot off one more
292:49 example. Okay, hopefully this one's specific enough. We have an email
292:52 address. We have a specified name of a project. And we should see that
292:55 hopefully it's going to send this email this time. Okay, there we go. The email
292:58 asking Nate how Project Pan is coming along. It's been sent. Anything else you
293:02 need? So, at this point, it would be okay. Which other agents could I add to
293:06 the system to make it a bit easier on myself? The first thing naturally to do
293:09 would be I need to add some sort of contact agent. Or maybe I realize that I
293:14 don't need a full agent for that. Maybe that needs to just be one tool. So
293:17 basically what I would do then is I'd add a tool right here. I would grab an
293:20 Airtable node, because that's where my contact information lives. And all I
293:24 want to do is go to contacts and choose contacts. And now I just need to change
293:30 this to search. So now this tool's only job is to return all of the contacts in
293:34 my contact database. I'm just going to come in here and call this contacts. And
293:38 now keep in mind once again there's still nothing in the system prompt about
293:41 here are the tools you have and here's what you do. I just want to show you
293:44 guys how intelligent these models can be before you even prompt them. And then
293:47 once you get in there and say, "Okay, now you have access to these seven
293:50 agents. Here's what each of them are good at, it gets even cooler." So, let's
293:54 try one more thing and see if it can use the combination of contact database and
293:58 email agent. Okay, so I'm going to fire this off. Send an email to Dexter Morgan
294:02 asking him if he wants to get lunch. You can see that right away it used the
294:05 contacts database, pulled back Dexter Morgan's email address, and now we can
294:08 see that it sent that email address over to the email agent, and now we have all
294:12 of these different data transfers talking to each other, and hopefully it
294:15 sent the email. All right, so here's that email. Hi, Dexter. Would you like
294:18 to get lunch sometime soon? Best Bob. The formatting is a little off. We can
294:21 fix that within the tool for the email agent. But let's see if we sent
294:24 that to the right email, which is dexter@miami.com. If we go into our
294:28 contacts database, we can see right here we have Dexter Morgan, dexter@miami.com.
294:32 And like I showed you guys earlier, what you want to do is get pretty good at
294:35 reading these agent logs. So you can see how your agents are thinking and what
294:38 data they're sending between workflows. And if we go to the logs here, we can
294:42 see first of all, it used its GPT-4.1 mini model brain to understand what to
294:46 do. It understood, okay, I need to go to the contacts table. So I got my contact
294:50 information. Then I need to call the email agent. And what I sent over to the
294:55 email agent was: send an email to dexter@miami.com asking him if he wants
294:59 to get lunch. And that was perfect. All right, so that's going to do
295:02 it for this one. Hopefully this opened your eyes to the possibilities of these
295:05 multi-agent systems in n8n, and also hopefully it taught you some stuff
295:08 because I know all of this stuff can feel really buzzwordy sometimes, with all
295:12 these agents, agents, agents, but there are use cases where it really is the best
295:15 path. It's all about understanding what the end goal is,
295:18 how you want to evolve the workflow, and then deciding what's the best approach.
51:27 In this first workflow, we're building a RAG pipeline and chatbot. And so if that
51:31 sounds like a bunch of gibberish to you, let's quickly understand what RAG is and
51:36 what a vector database is. So RAG stands for retrieval-augmented generation. And
51:40 in the simplest terms, let's say you ask me a question and I don't actually know
51:43 the answer. I would just kind of Google it and then I would get the answer from
51:47 my phone and then I would tell you the answer. So in this case, when we're
51:50 building a RAG chatbot, we're going to be asking the chatbot questions and it's
51:53 not going to know the answer. So it's going to look inside our vector
51:56 database, find the answer, and then it's going to respond to us. And so when
52:00 we're combining the elements of RAG with a vector database, here's how it works.
52:03 So the first thing we want to talk about is actually what is a vector database.
52:07 So essentially this is what a vector database would look like. We're all
52:11 familiar with an x- and y-axis graph where you can plot points on a
52:14 two dimensional plane. But a vector database is a multi-dimensional graph of
52:19 points. So in this case, you can see this multi-dimensional space with all
52:23 these different points or vectors. And each vector is placed based on the
52:27 actual meaning of the word or words in the vector. So over here you can see we
52:31 have wolf, dog and cat. And they're placed similarly because the meanings of
52:35 these words are all animals. Whereas over here we have apple and
52:38 banana, whose meanings are food, more specifically fruits. And that's
52:42 why they're placed over here together. So when we're searching through the
52:46 database, we basically vectorize a question the same way we would vectorize
52:50 any of these other points. And in this case, we were asking for a kitten. And
52:53 then that query gets placed over here near the other animals and then we're
52:56 able to say okay well we have all these results now. So what that looks like and
53:00 what we'll see when we get into n8n is we have a document that we want to
53:03 vectorize. We have to split the document up into chunks because we can't put like
53:07 a 50-page PDF as one chunk. So it gets split up and then we're going to run it
53:10 through something called an embeddings model which basically just turns text
53:15 into numbers. Just as simple as that. And as you can see in this case let's
53:18 say we had a document about a company. We have company data, finance data, and
53:22 marketing data. And they all get placed differently because they mean different
53:26 things. And the context of those chunks is different. And then this
53:30 visual down here is just kind of how an LLM or in this case, this agent takes
53:34 our question, turns it into its own question. We vectorize that using the
53:38 same embeddings model that we used up here to vectorize the original data. And
53:42 then because it gets placed here, it just grabs back any vectors that are
53:46 nearest, maybe like the nearest four or five, and then it brings it back in
53:49 order to respond to us. I don't want to dive too much into this or
53:53 overcomplicate it, but hopefully this all makes sense.
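If it helps to see that nearest-neighbor idea in code, here's a tiny self-contained sketch. The three-dimensional vectors are made up (real embedding models produce hundreds or thousands of dimensions), but the mechanics of the similarity search are the same.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # How similar two vectors are, based on the angle between them.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Pretend these embeddings all came from the same embeddings model:
database = {
    "wolf":   [0.90, 0.80, 0.10],
    "dog":    [0.85, 0.75, 0.15],
    "cat":    [0.80, 0.70, 0.20],
    "apple":  [0.10, 0.20, 0.90],
    "banana": [0.15, 0.10, 0.85],
}

query = [0.82, 0.72, 0.18]  # "kitten", embedded with the same model

# Retrieval: rank stored vectors by similarity to the query vector.
ranked = sorted(database, key=lambda word: cosine_similarity(database[word], query), reverse=True)
print(ranked)  # the animal vectors rank above the fruit ones
```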
53:59 Cool. So now that we understand that, let's actually start building this workflow. So what we're going to do here is click on add first step, because every
54:03 workflow needs a trigger that basically starts the workflow. So, I'm going to
54:08 type in Google Drive because what we're going to do is we are going to pull in a
54:12 document from our Google Drive in order to vectorize it. So, I'm going to choose
54:15 a trigger which is on changes involving a specific folder. And what we have to
54:19 do now is connect our account. As you can see, I'm already connected, but what
54:22 we're going to do is click on create new credential in order to connect our
54:25 Google Drive account. And what we have to do is go get a client ID and a
54:29 secret. So, what we want to do is click on open docs, which is going to bring us
54:33 to n8n's documentation on how to set up this credential. We have a prerequisite
54:37 which is creating a Google Cloud account. So I'm going to click on Google
54:40 Cloud account and we're going to set up a new project. Okay. So I just signed
54:43 into a new account and I'm going to set up a whole project and walk through the
54:46 credentials with you guys. You'll click up here. You'll probably have something
54:49 up here that says like new project and then you'll click into new project. All
54:54 we have to do now is name it, and you'll be able to start for free, so
54:56 don't worry about that yet. So I'm just going to name this one demo and I'm
55:00 going to create this new project. And now up here in the top right you're
55:02 going to see that it's kind of spinning up this project. And then we'll move
55:06 forward. Okay, so it's already done and now I can select this project. So now
55:10 you can see up here I'm in my new project called demo. I'm going to click
55:15 on these three lines in the top left and what we're going to do first is go to
55:18 APIs and services and click on enabled APIs and services. And what we want to
55:22 do is add the ones we need. And so right now all I'm going to do is add Google
55:27 Drive. And you can see it's going to come up with Google Drive API. And then
55:31 all we have to do is really simply click enable. And there we I just enabled it.
55:35 So you can see here the status is enabled. And now we have to set up
55:37 something called our OAuth consent screen, which basically is just going to
55:43 let n8n know that Google Drive and n8n are allowed to talk to each other
55:46 and have permissions. So right here, I'm going to click on OAuth consent screen.
55:49 We don't have one yet, so I'm going to click on get started. I'm going to give
55:53 it a name. So we're just going to call this one demo. Once again, I'm going to
55:57 add a support email. I'm going to click on next. Because I'm not using a Google
56:01 Workspace account, I'm just using a, you know, nate88@gmail.com. I'm going to have to
56:05 choose external. I'm going to click on next. For contact information, I'm
56:08 putting the same email as I used to create this whole project. Click on next
56:12 and then agree to terms. And then we're going to create that OAuth consent
56:17 screen. Okay, so we're not done yet. The next thing we want to do is we want to
56:20 click on audience. And we're going to add ourselves as a test user. So we
56:23 could also make the app published by publishing it right here, but I'm just
56:26 going to keep it in test. And when we keep it in test mode, we have to add a
56:30 test user. So I'm going to put in that same email from before. And this is
56:32 going to be the email of the Google Drive we want to access. So I put in my
56:36 email. You can see I saved it down here. And then finally, all we need to do is
56:40 come back into here. Go to clients. And then we need to create a new client.
56:45 We're going to click on web app. We're going to name it whatever we want. Of
56:47 course, I'm just going to call this one demo once again. And now we need to
56:52 basically add a redirect URI. So if you click back into n8n, we have one right
56:57 here. So, we're going to copy this, go back into cloud, and we're going to add
57:00 a URI and paste it right in there, and then hit create, and then once that's created,
57:06 it's going to give us an ID and a secret. So, all we have to do is copy
57:10 the ID, go back into n8n and paste that right here. And then we need to go grab our
57:16 secret from Google Cloud, and then paste that right in there. And now we have a
57:19 little button that says sign in with Google. So, I'm going to open that up.
57:22 It's going to pull up a window to have you sign in. Make sure you sign in with
57:25 the same account that you just had yourself as a test user. That one. And
57:30 then you'll have to continue. And then here is basically saying like what
57:33 permissions do we have? Does n8n have access to your Google Drive? So I'm just going
57:36 to select all. I'm going to hit continue. And then we should be good.
57:39 Connection successful and we are now connected. And you may just want to
57:43 rename this credential so you know which email it is. So now I've
57:47 saved my credential and we should be able to access the Google Drive now. So,
57:49 what I'm going to do is I'm going to click on this list and it's going to
57:52 show me the folders that I have in Google Drive. So, that's awesome. Now,
57:56 for the sake of this video, I'm in my Google Drive and I'm going to create a
57:59 new folder. So, new folder. We're going to call this one um FAQ. Create this one
58:05 because we're going to be uploading an FAQ document into it. So, here's my FAQ
58:09 folder um right here. And then what I have is down here I made a policy and
58:14 FAQ document which looks like this. We have some store policies and then we
58:17 also have some FAQs at the bottom. So, all I'm going to do is I'm going to drag
58:21 in my policy and FAQ document into that new FAQ folder. And then if we come into
58:27 n8n, we click on the new folder that we just made. So, it's not here yet. I'm
58:30 just going to click on these dots and click on refresh list. Now, we should
58:35 see the FAQ folder. There it is. Click on it. We're going to choose what
58:38 we're watching this folder for. I'm going to be watching for a file created. And
58:43 then, I'm just going to hit fetch test event. And now we can see that we did in
58:47 fact get something back. So, let's make sure this is the right one. Yep. So,
58:50 there's a lot of nasty information coming through. I'm going to switch over
58:52 here on the right hand side. This is where we can see the output of every
58:55 node. I'm going to click on table and I'm just going to scroll over and there
59:00 should be a field called file name. Here it is. Name. And we have policy and FAQ
59:04 document. So, we know we have the right document in our Google Drive. Okay. So,
59:08 perfect. Every time we drop in a new file into that Google folder, it's going
59:11 to start this workflow. And now we just have to configure what happens after the
59:15 workflow starts. So, all we want to do really is we want to pull this data into
59:20 n8n so that we can put it into our Pinecone database. So, off of this trigger,
59:24 I'm going to add a new node and I'm going to grab another Google Drive node
59:28 because what happened is basically we have the file ID and the file name, but
59:32 we don't have the contents of the file. So, we're going to do a download file
59:35 node from Google Drive. I'm going to rename this one and just call it
59:38 download file just to keep ourselves organized. We already have our
59:41 credential connected and now it's basically saying what file do you want
59:45 to download. We have the ability to choose from a list. But if we choose
59:48 from the list, it's going to be this file every time we run the workflow. And
59:52 we want to make this dynamic. So we're going to change from list to by ID. And
59:56 all we have to do now is we're going to look on the left-hand side for that
59:59 file that we just pulled in. And we're going to be looking for the ID of the
60:02 file. So I can see that I found it right down here in the spaces array because we
60:06 have the name right here and then we have the ID right above it. So, I'm
60:10 going to drag ID, put it right there in this field. It's coming through as a
60:14 variable called {{ $json.id }}. And that's just basically referencing, you know,
60:17 whenever a file comes through on the Google Drive trigger, I'm going to use
60:21 the variable {{ $json.id }}, which will always pull in the file's ID. So, then I'm going
60:25 to hit test step and we're going to see that we're going to get the binary data
60:28 of this file over here that we could download. And this is our policy and FAQ
60:33 document. Okay. So, there's step two. We have the file downloaded in n8n. And
60:37 now it's just as simple as putting it into Pinecone. So before we do that,
60:41 let's head over to pinecone.io. Okay, so now we are in pinecone.io, which is
60:45 a vector database provider. You can get started for free. And what we're going
60:48 to do is sign up. Okay, so I just got logged in. And once you get signed up,
60:52 you should see a page similar to this. It's a get started page. And what
60:55 we want to do is you want to come down here and click on, you know, begin setup
60:59 because we need to create an index. So I'm going to click on begin setup. We
61:03 have to name our index. So you can call this whatever you want. We have to
61:08 choose a configuration for an
61:11 embeddings model, which is sort of what I talked about right in here. This is
61:15 going to turn our text chunks into a vector. So what I'm going to do is I'm
61:19 going to choose text-embedding-3-small from OpenAI. It's the most cost
61:23 effective OpenAI embedding model. So I'm going to choose that. Then I'm going to
61:26 keep scrolling down. I'm going to keep mine as serverless. I'm going to keep
61:29 AWS as the cloud provider. I'm going to keep this region. And then all I'm going
61:33 to do is hit create index. Once you create your index, it'll show up right
61:36 here. But we're not done yet. You're going to click into that index. And so I
61:39 already obviously have stuff in my vector database. You won't have this.
61:41 What I'm going to do real quick is just delete this information out of it. Okay.
61:45 So this is what yours should look like. There's nothing in here yet. We have no
61:48 namespaces and we need to get this configured. So on the left-hand side, go
61:53 over here to API keys and you're going to create a new API key. Name it
61:58 whatever you want, of course. Hit create key. And then you're going to copy that
62:02 value. Okay, back in n8n, we have our API key copied. We're going to add a new
62:07 node after the download file and we're going to type in Pinecone and we're
62:10 going to grab a Pinecone vector store. Then we're going to select add documents
62:14 to a vector store and we need to set up our credential. So up here, you won't
62:18 have these and you're going to click on create new credential. And all we need
62:21 to do here is just an API key. We don't have to get a client ID or a secret. So
62:24 you're just going to paste in that API key. Once that's pasted in there and
62:27 you've given it a name so you know what this means. You'll hit save and it
62:30 should go green and we're connected to Pinecone and you can make sure that
62:34 you're connected by clicking on the index and you should have the name of
62:37 the index right there that we just created. So I'm going to go ahead and
62:40 choose my index. I'm going to click on add option and we're going to be
62:43 basically adding this to a Pinecone namespace. Back in Pinecone,
62:48 if I go back into my database, my index, and I click in here, you can see
62:51 that we have something called namespaces. And this basically lets us
62:55 put data into different folders within this one index. So if you don't specify
62:59 a namespace, it'll just come through as default, and that's going to be fine. But
63:02 we want to get into the habit of having our data organized. So I'm going to go
63:05 back into n8n and I'm just going to name this namespace FAQ, because that's
63:10 the type of data we're putting in. And now I'm going to click out of this node.
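For reference, here's roughly what that configuration corresponds to, sketched with Pinecone's official Python client. The index name sample and the namespace FAQ match what we set up here; the vector values and metadata are placeholders, and a real pipeline gets its vectors from the embeddings model.

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_PINECONE_API_KEY")
index = pc.Index("sample")

# Each chunk of a document becomes one vector, stored under a namespace
# so different kinds of data stay organized inside a single index:
index.upsert(
    vectors=[{
        "id": "faq-chunk-1",
        "values": [0.01] * 1536,  # must match the index's dimension
        "metadata": {"text": "Warranty: one-year standard coverage..."},
    }],
    namespace="FAQ",
)

# Retrieval later queries the same namespace for the nearest vectors:
matches = index.query(vector=[0.01] * 1536, top_k=4,
                      namespace="FAQ", include_metadata=True)
```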
63:13 So you can see the next thing that we need to do is connect an embeddings
63:17 model and a document loader. So let's start with the embeddings model. I'm
63:20 going to click on the plus and I'm going to click on Embeddings OpenAI. And
63:23 actually, one thing I left out of the Excalidraw is that we also will need
63:27 to go get an OpenAI key. So, as you can see, when we need to connect a
63:30 credential, you'll click on create new credential and we just need to get an
63:33 API key. So, you're going to type in OpenAI API. You'll click on this first
63:37 link here. If you don't have an account yet, you'll sign in. And then once you
63:40 sign up, you want to go to your dashboard. And then on the left-hand
63:44 side, very similar to Pinecone, you'll click on API keys. And then we're
63:47 just going to create a new key. So you can see I have a lot. We're going to
63:49 make a new one. And I'm calling everything demo, but this is going to be
63:53 demo number three. Create new secret key. And then we have our key. So we're
63:56 going to copy this and we're going to go back into n8n. Paste that right here. We
63:59 paste in our key. We've given it a name. And now we'll hit save and we
64:03 should go green. Just keep in mind that you may need to top up your account with
64:06 a few credits in order for you to actually be able to run this model. Um,
64:10 so just keep that in mind. So then, what's really important to remember is
64:13 when we set up our Pinecone index, we used the embedding model text-embedding-3-small
64:17 from OpenAI. So that's why we have to make sure this matches right
64:20 here, or this automation is going to break.
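As a sketch of what "matching" means in practice: both the ingestion side and the chat side would call the same model, along these lines (OpenAI's Python client; the key is a placeholder).

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

def embed(text: str) -> list[float]:
    # Both storing chunks and embedding questions must go through the SAME
    # model; otherwise query vectors land in a different space than the
    # stored ones and retrieval quietly returns poor matches.
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding  # 1,536 dimensions for this model
```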
64:24 Okay, so we're good with the embeddings, and now we need to add a document loader. So I'm going to click
64:27 on this plus right here. I'm going to click on default data loader and we have
64:31 to just basically tell Pinecone the type of data we're putting in. And so
64:35 you have two options, JSON or binary. In this case, it's really easy because we
64:39 downloaded a Google Doc, which is on the left-hand side. You can tell it's
64:42 binary because up top right here on the input, we can switch between JSON and
64:47 binary. And if we were uploading JSON, all we'd be uploading is this gibberish
64:51 nonsense information that we don't need. We want to upload the binary, which is
64:55 the actual policy and FAQ document. So, I'm just going to switch this to binary.
64:58 I'm going to click out of here. And then the last thing we need to do is add a
65:01 text splitter. So, this is what I was talking about back in the Excalidraw. We
65:05 have to split the document into different chunks. And so that's what
65:08 we're doing here with this text splitter. I'm going to choose a
65:12 recursive character text splitter. There are three options and I won't dive
65:15 into the difference right now, but recursive character text splitter will
65:18 help us keep context of the whole document as a whole, even though we're
65:22 splitting it up. So for now, the chunk size is a thousand. That's just basically how
65:25 many characters am I going to put in each chunk? And then, is there going to
65:29 be any overlap between our chunks of characters? So right now I'm just going
65:33 to leave it at the defaults of 1,000 and zero.
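To make chunk size and overlap concrete, here's a deliberately simplified character splitter. The real recursive character text splitter is smarter, preferring to break on paragraphs, then sentences, then words before falling back to raw characters.

```python
def split_text(text: str, chunk_size: int = 1000, overlap: int = 0) -> list[str]:
    # Slide a window of chunk_size characters across the text,
    # stepping forward by (chunk_size - overlap) each time.
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "x" * 4000  # stand-in for a document that, at size 1,000, yields 4 chunks
print(len(split_text(doc, chunk_size=1000, overlap=0)))  # -> 4
```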
65:37 So that's it. You just built your first automation for a RAG pipeline. And now we're just going to click on the play
65:40 button above the Pinecone vector store node in order to see it get vectorized.
65:43 So we're going to basically see that we have four items that have left this
65:47 node. So this is basically telling us that our Google doc that we downloaded
65:51 right here. So this document got turned into four different vectors. So if I
65:55 click into the text splitter, we can see we have four different responses and
65:59 this is the contents that went into each chunk. So we can just verify this by
66:03 heading real quick into Pinecone, we can see we have a new namespace that we
66:07 created called FAQ. Number of records is four. And if we head over to the
66:10 browser, we can see that we do indeed have these four vectors. And then the
66:13 text field right here, as you can see, are the characters that were put into
66:17 each chunk. Okay, so that was the first part of this workflow, but we're going
66:21 to real quick just make sure that this actually works. So we're going to add a
66:24 RAG chatbot. Okay. So, what I'm going to do now is hit the tab, or I could also
66:27 have just clicked on the plus button right here, and I'm going to type in AI
66:31 agent, and that is what we're going to grab and pull into this workflow. So, we
66:35 have an AI agent, and let's actually just put him right over here. Um, and
66:41 now what we need to do is we need to set up how are we actually going to talk to
66:44 this agent. And we're just going to use the default n8n chat window. So, once
66:47 again, I'm going to hit tab. I'm going to type in chat. And we have a chat
66:51 trigger. And all I'm going to do is over here, I'm going to grab the plus and I'm
66:54 going to drag it into the front of the AI agent. So basically now whenever we
66:58 hit open chat and we talk right here, the agent will read that chat message.
67:02 And we know this because if I click into the agent, we can see the user message
67:07 is looking for one in the connected chat trigger node, which we have right here
67:10 connected. Okay, so the first step with an AI agent is we need to give it a
67:14 brain. So we need to give it some sort of AI model to use. So we're going to
67:18 click on the plus right below chat model. And what we could do now is we
67:22 could set up an OpenAI chat model because we already have our API key from
67:25 OpenAI. But what I want to do is click on open router because this is going to
67:30 allow us to choose from all different chat models, not just OpenAI's. So we
67:33 could do Claude, we could do Google, we could do Perplexity. We have all these
67:36 different models in here which is going to be really cool. And in order to get
67:39 an Open Router account, all you have to do is go sign up and get an API key. So
67:43 you'll click on create new credential and you can see we need an API key. So
67:47 you'll head over to openrouter.ai. You'll sign up for an account. And then all you
67:50 have to do is in the top right, you're going to click on keys. And then once
67:54 again, kind of the same as all the other ones. You're going to create a new
67:57 key. You're going to give it a name. You're going to click create. You have a
68:01 secret key. You're going to click copy. And then when we go back into n8n and
68:04 paste it in here, give it a name. And then hit save. And we should go green.
68:07 We've connected to Open Router. And now we have access to any of these different
68:12 chat models. So, in this case, let's use Claude 3.5
68:19 Sonnet. And this is just to show you guys you can connect to different ones.
68:22 But anyways, now we could click on open chat. And actually, let me make sure you
68:26 guys can see him. If we say hello, it's going to use its brain, Claude 3.5 Sonnet.
68:30 And now it responded to us. Hi there. How can I help you? So, just to validate
68:34 that our information is indeed in the Pine Cone vector store, we're going to
68:38 click on a tool under the agent. We're going to type in Pinecone and grab a
68:43 Pinecone vector store, and we're going to grab the account that we just
68:46 selected. So, this was the demo I just made. We're going to give it a name. So,
68:50 in this case, I'm just going to say knowledge base. We're going to give a description.
68:59 Call this tool to access the policy and FAQ database. So, we're basically just
69:03 describing to the agent what this tool does and when to use it. And then we
69:08 have to select the index and the name space for it to look inside of. So the
69:11 index is easy. We only have one. It's called sample. But now this is important
69:14 because if you don't give it the right name space, it won't find the right
69:19 information. So we called ours FAQ. If you remember, in our Pinecone we
69:23 have a namespace and we have FAQ right here. So that's why we're doing FAQ. And
69:26 now it's going to be looking in the right spot. So before we can chat with
69:30 it, we have to add an embeddings model to our Pinecone vector store, which is the
69:33 same thing as before. We're going to grab OpenAI and we're going to use
69:37 text-embedding-3-small and the same credential you just made. And now we're going to be
69:41 good to go to chat with our RAG agent. So looking back in the document, we can
69:44 see we have some different stuff. So I'm going to ask this chatbot what the
69:48 warranty policy is. So I'm going to open up the chat window and say what is our
69:54 warranty policy? Send that off. And we should see that it's going to use its
69:57 brain as well as the vector store in order to create an answer for us because
70:00 it didn't know by itself. So there we go. It just finished up and it said, based on the information
70:06 from our knowledge base, here's the warranty policy. We have one-year
70:10 standard coverage. We have, you know, this email for claims processes. You
70:14 must provide proof of purchase and for warranty exclusions that aren't covered,
70:18 damage due to misuse, water damage, blah blah blah. Back in the policy
70:22 documentation, we can see that that is exactly what we have in our knowledge
70:26 base for warranty policy. So, just because I don't want this video to go
70:29 too long, I'm not going to do more tests, but this is where you can get in
70:31 there and make sure it's working. One thing to keep in mind is within the
70:34 agent, we didn't give it a system prompt. And a system prompt is
70:38 just basically a message that tells the agent how to do its job. So what you
70:42 could do is if you're having issues here, you could say, you know, like this
70:45 is the name of our tool, which is called knowledgebase. You could tell the agent
70:49 in the system prompt: hey, your job is to help users answer questions about
70:54 our policy database. You have a tool called knowledgebase. You
70:58 need to use that in order to help them answer their questions. And that will
71:01 help you refine the behavior of how this agent acts. All right, so the next one
71:05 that we're doing is a customer support workflow. And as always, you have to
71:08 figure out what is the trigger for my workflow. In this case, it's going to be
71:12 triggered by a new email received. So I'm going to click on add first step.
71:16 I'm going to type in Gmail. Grab that node. And we have a trigger, which is on
71:19 message received right here. And we're going to click on that. So what we have
71:23 to do now is obviously authorize ourselves. So we're going to click on
71:26 create new credential right here. And all we have to do here is use OAuth2. So
71:30 all we have to do is click on sign in. But before we can do that, we have to
71:33 come over to our Google Cloud once again. And now we have to make sure we
71:36 enable the Gmail API. So we'll click on Gmail API. And it'll be really simple.
71:40 We'll just have to click on enable. And now we should be able to do that OAuth
71:43 connection and actually sign in. You'll click on the account that you want to
71:46 access the Gmail. You'll give it access to everything. Click continue. And then
71:50 we're going to be connected as you can see. And then you'll want to name this
71:54 credential as always. Okay. So now we're using our new credential. And what I'm
71:57 going to do is if I hit fetch test event. So now we are seeing an email
72:01 that I just got in this inbox, which in this case was 'n8n Cloud was granted
72:05 access to your Google account,' blah blah blah. So that's what we just got.
72:09 Okay. So I just sent myself a different email and I'm going to fetch that email
72:13 now from this inbox. And we can see that the snippet says what is the privacy
72:17 policy? I'm concerned about my data and passwords. And what we want to do is we
72:21 want to turn off simplify because what this button is doing is it's going to
72:24 take the content of the email and basically, you know, cut it off. So in
72:28 this case, it didn't matter, but if you're getting long emails, it's going
72:30 to cut off some of the email. So if we turn off simplify and fetch test event once
72:34 again, we're now going to get a lot more information about this email, but we're
72:37 still going to be able to access the actual content, which is right here. We
72:41 have the text, what is privacy policy? I'm concerned about my data and
72:44 passwords. Thank you. And then you can see we have other data too like what the
72:48 subject was, who the email is coming from, what their name is, all this kind
72:52 of stuff. But the idea here is that we are going to be creating a workflow
72:56 where if someone sends an email to this inbox right here, we are going to
72:59 automatically look up the customer support policy and respond back to them
73:02 so we don't have to. Okay. So the first thing I'm actually going to do is pin
73:05 this data just so we can keep it here for testing. Which basically means
73:08 whenever we rerun this, it's not going to go look in our inbox. It's just going
73:12 to keep this email that we pulled in, which helps us for testing, right? Okay,
73:16 cool. So, the next step here is we need to have AI basically filter to see is
73:21 this email customer support related? If yes, then we're going to have a response
73:24 written. If no, we're going to do nothing because maybe the use case would
73:28 be okay, we're going to give it an access to an inbox where we're only
73:32 getting customer support emails. But sometimes maybe that's not the case. And
73:35 let's just say we wanted to create this as sort of like an inbox manager where
73:38 we can route off to different logic based on the type of email. So that's
73:41 what we're going to do here. So I'm going to click on the plus after the
73:44 Gmail trigger and I'm going to search for a text classifier node. And what
73:49 this does is it's going to use AI to read the incoming email and then
73:53 determine what type of email it is. So because we're using AI, the first thing
73:56 we have to do is connect a chat model. We already have our open router
73:59 credential set up. So I'm going to choose that. I'm going to choose the
74:01 credential, and then for this one, let's just keep it with GPT-4o mini. And now
74:07 this AI node actually has AI and I'm going to click into the text classifier.
74:09 And the first thing we see is that there's a text to classify. So all we
74:13 want to do here is we want to grab the actual content of the email. So I'm
74:17 going to scroll down. I can see here's the text, which is the email content.
74:20 We're going to drag that into this field. And now every time a new email
74:25 comes through, the text classifier is going to be able to read it because we
74:28 put in a variable which basically represents the content of the email. So
74:32 now that it has that, it still doesn't know what to classify it as or what its
74:35 options are. So we're going to click on add category. The first category is
74:39 going to be customer support. And then basically we need to give it a
74:42 description of what a customer support email could look like. So I wanted to
74:46 keep this one simple. It's pretty vague, but you could make this more detailed,
74:49 of course. And I just said: an email that's related to helping out a
74:52 customer. They may be asking questions about our policies or questions about
74:56 our products or services. And what we can do is we can give it specific
74:59 examples of like here are some past customer support emails and here's what
75:02 they've looked like. And that will make this thing more accurate. But in this
75:05 case, that's all we're going to do. And then I'm going to add one more category
75:08 that's just going to be other. And then for now, I'm just going to say: any email
75:14 that is not customer support related.
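Conceptually, the text classifier node is doing something like the following with its chat model; this is an illustrative sketch, not n8n's actual internal prompt.

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")

CATEGORIES = {
    "customer support": ("An email that's related to helping out a customer. "
                         "They may be asking questions about our policies, "
                         "products, or services."),
    "other": "Any email that is not customer support related.",
}

def classify(email_text: str) -> str:
    # Ask the model to pick exactly one of the category names.
    options = "\n".join(f"- {name}: {desc}" for name, desc in CATEGORIES.items())
    prompt = (f"Classify the email into exactly one category.\n{options}\n\n"
              f"Email:\n{email_text}\n\nAnswer with the category name only.")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()
```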
75:18 Okay, cool. So now when we click out of here, we can see we have two different branches coming off of this node, which
75:21 means when the text classifier decides, it's either going to send it off this
75:24 branch or it's going to send it down this branch. So let's quickly hit play.
75:28 It's going to be reading the email using its brain. And now you can see it has
75:32 output in the customer support branch. We can also verify by clicking into
75:35 here. And we can see customer support branch has one item and other branch has
75:39 no items. And just to keep ourselves organized right now, I'm going to click
75:42 on the other branch and I'm just going to add an operation that says do nothing
75:46 just so we can see, you know, what would happen if it went this way for now. But
75:49 now is where we want to configure the logic of having an agent be able to read
75:55 the email, hit the vector database to get relevant information and then help
75:58 us write an email. So I'm going to click on the plus after the customer support
76:02 branch. I'm going to grab an AI agent. So this is going to be very similar to
76:05 the way we set up our AI agent in the previous workflow. So, it's kind of
76:08 building on top of each other. And this time, if you remember in the previous
76:12 one, we were talking to it with a connected chat trigger node. And as you
76:15 can see here, we don't have a connected chat trigger node. So, the first thing
76:19 we want to do is change that. We want to define below. And this is where you
76:22 would think, okay, what do we actually want the agent to read? We want it to
76:25 read the email. So, I'm going to do the exact same thing as before. I'm going to
76:29 go into the Gmail trigger node, scroll all the way down until we can find the
76:32 actual email content, which is right here, and just drag that right in.
76:35 That's all we're going to do. And then we definitely want to add a system
76:39 message for this agent. We are going to open up the system message and I'm just
76:42 going to click on expression so I can expand this up full screen. And we're
76:46 going to write a system prompt. Again, for the sake of the video, keeping this
76:49 prompt really concise, but if you want to learn more about prompting, then
76:52 definitely check out my communities linked down below as well as this video
76:56 up here and all the other tutorials on my channel. But anyways, what we said
76:59 here is we gave it an overview and instructions. The overview says you are
77:03 a customer support agent for TechHaven. Your job is to respond to incoming
77:06 emails with relevant information using your knowledgebase tool. And so when we
77:10 do hook up our Pine Cone vector database, we're just going to make sure
77:13 to call it knowledgebase because that's what the agent thinks it has access to.
77:17 And then for the instructions, I said your output should be friendly and use
77:20 emojis and always sign off as Mr. Helpful from TechHaven Solutions. And
77:24 then one more thing I forgot to do actually is we want to tell it what to
77:27 actually output. So if we didn't tell it, it would probably output like a
77:31 subject and a body. But what's going to happen is we're going to reply to the
77:34 incoming email. We're not going to create a new one. So we don't need a
77:38 subject. So I'm just going to say output only the body content of the email. So
77:44 then we'll give it a try and see what that prompt looks like. We may have to
77:47 come back and refine it, but for now we're good. Um, and as you know, we have
77:51 to connect a chat model and then we have to connect our pine cone. So first of
77:54 all, chat model, we're going to use open router. And just to show you guys, we
77:58 can use a different type of model here. Let's use something else. Okay. So,
78:01 we're going to go with Google Gemini 2.0 Flash. And then we need to add the
78:04 Pinecone database. So, I'm going to click on the plus under tool. I'm going to search
78:09 for Pinecone Vector Store. Grab that. And the operation is going to be
78:13 retrieving documents as a tool for an AI agent. We're going to call this
78:20 knowledgeBase, with a capital B. And we're going to once again just say: call this tool to
78:27 access policy and FAQ information. We need to set up the index as well as the
78:31 namespace. So sample and then we're going to call the namespace, you know,
78:35 FAQ, because that's what it's called in our Pinecone right here, as you can see.
78:38 And then we just need to add our embeddings model and we should be good
78:42 to go, which is Embeddings OpenAI with text-embedding-3-small. So we're going to
78:46 hit the play above the AI agent and it's going to be reading the email. As you
78:49 can see once again the prompt user message. It's reading the email. What is
78:52 the privacy policy? I'm concerned about my data and my passwords. Thank you. So
78:56 we're going to hit the play above the agent. We're going to watch it use its
78:59 brain. We're going to watch it call the vector store. And we got an error. Okay.
79:04 So, I'm getting this error, right? And it says provider returned error. And
79:09 it's weird because basically why it's erroring is because of our chat
79:12 model. And it's weird because it goes green, right? So, anyways, what I
79:16 would do here is if you're experiencing that error, it means there's something
79:19 wrong with your key. So, I would go reset it. But for now, I'm just going to
79:22 show you the quick fix. I can connect an OpenAI chat model real quick. And I
79:27 can run this here and we should be good to go. So now it's going to actually
79:31 write the email and output. Super weird error, but I'm honestly glad I caught
79:34 that on camera to show you guys in case you face that issue because it could be
79:37 frustrating. So we should be able to look at the actual output, which is,
79:41 "Hey there, thank you for your concern about privacy policy. At Tech Haven, we
79:45 take your data protection seriously." So then it gives us a quick summary with
79:48 data collection, data protection, cookies. If we clicked into here and
79:51 went to the privacy policy, we could see that it is in fact correct. And then it
79:55 also was friendly and used emojis like we told it to right here in the system
79:59 prompt. And finally, it signed off as Mr. Helpful from Tech Haven Solutions,
80:02 also like we told it to. So, we're almost done here. The last thing that we
80:05 want to do is we want to have it actually reply to this person that
80:09 triggered the whole workflow. So, we're going to click on the plus. We're going
80:13 to type in Gmail. Grab a Gmail node and we're going to do reply to a message.
80:17 Once we open up this node, we already know that we have it connected because
80:20 we did that earlier. We need to configure the message ID, the message
80:24 type, and the message. And so all I'm going to do is first of all, email type.
80:28 I'm going to do text. For the message ID, I'm going to go all the way down to
80:31 the Gmail trigger. And we have an ID right here. This is the ID we want to
80:35 put into the message ID field so that it responds in line on Gmail rather than
80:39 creating a new thread.
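The reason the message ID matters is Gmail threading: a reply that references the original message shows up in the same conversation. n8n's node handles this internally; as a rough illustration with the raw Gmail API (the service object is assumed to be an authorized googleapiclient client, and a fully correct reply would also set In-Reply-To/References headers):

```python
import base64
from email.mime.text import MIMEText

def reply_in_thread(service, thread_id: str, to: str, subject: str, body: str):
    # Build a plain-text reply and send it on the original email's thread,
    # so Gmail shows it in line rather than starting a new conversation.
    msg = MIMEText(body)
    msg["To"] = to
    msg["Subject"] = "Re: " + subject
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    return service.users().messages().send(
        userId="me",
        body={"raw": raw, "threadId": thread_id},
    ).execute()
```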
80:43 And then for the message, we're going to just drag in the output from the agent that we just had write the message. So, I'm going to grab
80:46 this output, put it right there. And now you can see this is how it's going to
80:49 respond in email. And the last thing I want to do is click on add option, append
80:55 n8n attribution, and then just uncheck that. So then at the bottom of the
80:59 email, it doesn't say this was sent by n8n. So finally, we'll hit this test
81:04 step. We will see we get a success message that the email was sent. And
81:07 I'll head over to the email to show you guys. Okay, so here it is. This is the
81:11 one that we sent off to that inbox. And then this is the one that we just got
81:13 back. As you can see, it's in the same thread and it has basically the privacy
81:19 policy outlined for us. Cool. So, that's workflow number two. Couple ways we
81:22 could make this even better. One thing we could do is we could add a node right
81:25 here. And this would be another Gmail one. And we could basically add a label
81:31 to this email. So, if I grab add label to message, we would do the exact same
81:34 thing. We'd grab the message ID the same way we grabbed it earlier. So, now it
81:38 has the message ID of the email to actually label. And then we would just
81:41 basically be able to select the label we want to give it. So in this case, we
81:44 could give it the customer support label. We hit test step, we'll get
81:48 another success message. And then in our inbox, if we refresh, we will see that
81:51 that just got labeled as customer support. So you could add on more
81:55 functionality like that. And you could also down here create more sections. So
81:59 we could have finance, you know, logic built out for finance emails. We could
82:02 have logic built out for all these other types of emails and um plug them into
82:07 different knowledge bases as well. Okay. So the third one we're going to do is a
82:11 LinkedIn content creator workflow. So, what we're going to do here is click on
82:14 add first step, of course. And ideally, you know, in production, what this
82:17 workflow would look like is a schedule trigger, you know. So, what you could do
82:20 is basically say every day I want this thing to run at 7:00 a.m. That way, I'm
82:23 always going to have a LinkedIn post ready for me at, you know, 7:30. I'll
82:27 post it every single day. And if you wanted it to actually be automatic,
82:30 you'd have to flick this workflow from inactive to active. And, you know, now
82:34 it says, um, your schedule trigger will now trigger executions on the schedule
82:37 you have defined. So now it would be working, but for the sake of this video,
82:40 we're going to turn that off and we are just going to be using a manual trigger
82:44 just so we can show how this works. Um, but it's the same concept, right? It
82:48 would just start the workflow. So what we're going to do from here is we're
82:51 going to connect a Google sheet. So I'm going to grab a Google sheet node. I'm
82:55 going to click on get row(s) in sheet, and we have to create our credential once
82:58 again. So we're going to create new credential. We're going to be able to do
83:02 ooth to sign in, but we're going to have to go back to Google Cloud and we're
83:05 going to have to grab a sheet and make sure that we have the Google Sheets API
83:08 enabled. So, we'll come in here, we'll click enable, and now once this is good
83:12 to go, we'll be able to sign in using OOTH 2. So, very similar to what we just
83:16 had to do for Gmail in that previous workflow. But now, we can sign in. So,
83:20 once again, choosing my email, allowing it to have access, and then we're
83:23 connected successfully, and then giving this a good name. And now, what we can
83:26 do is choose the document and the sheet that it's going to be pulling from. So,
83:29 I'm going to show you. I have one called LinkedIn posts, and I only have one
83:33 sheet, but let's show you the sheet real quick. So, LinkedIn posts, what we have
83:38 is a topic, a status, and a content. And we're just basically going to be pulling
83:42 in one row where the status equals to-do, and then we are going to um
83:46 create the content, upload it back in right here, and then we're going to
83:50 change the status to created. So, then this same row doesn't get pulled in
83:52 every day. So, how this is going to work is that we're going to create a filter.
83:56 So the first filter is going to be looking within the status column and it
84:01 has to equal to-do. And if we click on test step, we should see that we're
84:04 going to get like all of these items where there's a bunch of topics. But we
84:08 don't want that. We only want to get the first row. So at the bottom here, click
84:12 add option. I'm going to say return only first matching row and check that on. We'll
84:15 test this again. And now we're only going to be getting that top row to
84:19 create content on. Cool. So we have our first step here, which is just getting
84:23 the content from the Google sheet. Now, what we're going to do next is some
84:27 web search on this topic in order to create that content.
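(As an aside, here is a script-level sketch of what this get-rows-plus-filter step amounts to, assuming the gspread library, a service-account credential file, and the lowercase column headers from the sheet in the video: topic, status, content.)

```python
# A sketch of the "get rows" step outside n8n, using gspread.
# The credential path, sheet name, and column headers are assumptions.
import gspread

gc = gspread.service_account(filename="credentials.json")
ws = gc.open("LinkedIn posts").sheet1

rows = ws.get_all_records()  # list of dicts keyed by the header row
# Filter on status == "to-do" and keep only the first matching row,
# mirroring the "return only first matching row" option in the node.
todo = next((row for row in rows if row["status"] == "to-do"), None)
if todo:
    print(todo["topic"])
```

So, I'm going to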
84:30 add a new node. This one's going to be called an HTTP request, because we're
84:34 going to be making a request to a specific API. And in this case, we're going to be
84:38 using Tavily's API. So, go on over to tavily.com and create a free account.
84:42 You're going to get 1,000 searches for free per month. Okay, here we are in my
84:46 account. I'm on the free researcher plan, which gives me a thousand free
84:49 credits. And right here, I'm going to add an API key. We're going to name it,
84:54 create a key, and we're going to copy this value. And you'll start to see that
84:56 whenever you connect to different services, you always need to
85:00 have some sort of token or API key. But anyways, we're going to grab this in
85:03 a sec. What we need to do now is go to the documentation that we see right
85:06 here. We're going to click on API reference. And now we have right here.
85:10 This is going to be the API that we need to use in order to search the web. So,
85:14 I'm not going to really dive into like everything about HTTP requests right
85:17 now. I'm just going to show you the simple way that we can get this set up.
85:21 So the first thing we're going to do is look at the endpoint, which is
85:25 called Tavily search. We can see it's a POST request, which is
85:28 different from a GET request, and we have all these different things we need
85:31 to configure, and it can be confusing. So all we want to do is, on the top right, we
85:35 see this curl command. We're going to click on the copy button. We're going to
85:40 go back into n8n, hit import curl, paste in the curl command, hit
85:46 import, and now the whole node magically just basically filled in itself. So
85:50 that's really awesome. And now we can sort of break down what's going on. So
85:53 for every HTTP request, you have to have some sort of method. Typically, when
85:58 you're sending over data to a service, which in this case means we're
86:01 sending over data to Tavily so it can search the web and then bring data
86:05 back to us, that's a POST request, because we're sending over body data. If
86:09 we were just trying to access
86:14 something like bestbuy.com and we just wanted to read the information there, that
86:17 could just be a simple GET request, because we're not sending anything over.
86:20 Then we're going to have some sort of base URL and endpoint, which is
86:24 right here. The base URL we're hitting is api.tavily.com, and then the endpoint
86:30 we're hitting is /search. So back in the documentation you can see right
86:34 here we have /search, but if we were doing an extract, we would do
86:37 /extract. So that's how you can see the difference with the endpoints.
86:40 And then we have a few more things to configure. The first one of course is
86:44 our authorization. So in this case, we're doing it through a header
86:46 parameter. As you can see right here, the curl command set it up. Basically
86:51 all we have to do is replace this token placeholder with our API key from Tavily. So I'm
86:56 going to go back here and copy that key into n8n. I'm going to get rid of "token" and,
87:00 just making sure that there's a space after the word Bearer,
87:03 paste in my key. And now we are connected to Tavily. But we need to
87:07 configure our request before we send it off. So right here are the parameters
87:11 within our request body. And I'm not going to dive too deep into it. You can
87:13 go to the documentation if you want to understand each one, but the main thing
87:17 really is the query, which is what we're searching for. But we have other things
87:20 like the topic. It can be general or news. We have search depth. We have max
87:24 results. We have a time range. We have all this kind of stuff. Right now I'm
87:28 just going to leave everything here as default. We're only going to be getting
87:31 one result. And we're going to be doing a general topic. We're going to be doing
87:34 basic search. But right now, if we hit test step, we should see that this is
87:37 going to work. But it's going to be searching for who is Leo Messi. And
87:40 here's sort of like the answer we get back as well as a URL. So this is an
87:45 actual website we could go to about Lionel Messi and then some content from
87:51 that website. Right? So we are going to change this to an expression so that we
87:54 can put a variable in here rather than just a static hard-coded who is Leo
87:58 Messi. We'll delete that query. And all we're going to do is just pull in our
88:02 topic. So, I'm just going to simply pull in the topic of AI image generation.
88:06 Obviously, it's a variable right here, but this is the result. And then we're
88:09 going to test step. And this should basically pull back an article about AI
88:14 image generation. And, you know, here is a DeepAI link. We'll go to it.
88:19 And we can see this is an AI image generator. So maybe this isn't exactly
88:23 what we're looking for. What we could do is hardcode in "search the web for"
88:28 before the variable. And now it's going to be saying search
88:31 the web for AI image generation. We could also come in here and say, yeah, actually,
88:34 you know let's get three results not just one. And then now we could test
88:37 that step and we're going to be getting a little bit different of a search
88:42 result. Um AI image generation uses text descriptions to create unique visuals.
88:45 And then now you can see we got three different URLs rather than just one.
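(For reference, here is the same Tavily call written out in Python instead of n8n's HTTP request node. A sketch based on the docs shown on screen; the API key is a placeholder, and the query mirrors our hardcoded prefix plus the topic.)

```python
# A sketch of the Tavily search request, following the curl command we imported.
import requests

TAVILY_API_KEY = "tvly-..."  # placeholder; copy yours from the Tavily dashboard

resp = requests.post(
    "https://api.tavily.com/search",
    headers={"Authorization": f"Bearer {TAVILY_API_KEY}"},
    json={
        "query": "search the web for AI image generation",
        "topic": "general",
        "search_depth": "basic",
        "max_results": 3,
    },
    timeout=30,
)
resp.raise_for_status()
for result in resp.json()["results"]:
    print(result["url"])  # the three URLs we saw come back in the node
```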
88:49 Anyways, so that's our web search. And now that we have a web search based on
88:53 our defined topic, we just need to write that content. So I'm going to click on
88:58 the plus. I'm going to grab an AI agent. And once again, we're not giving it the
89:01 connected chat trigger node to look at; that's nowhere to be found. We're going
89:05 to feed in the research that was just done by Tavily. So, I'm going to click on
89:10 expression to open this up. I'm going to say article one with a colon and I'm
89:15 just going to drag in the content from article one. I'm going to say article 2
89:20 with a colon and just drag in the content from article 2. And then I'm
89:25 going to say article 3 colon and just drag in the content from the third
89:29 article. So now it's looking at all three article contents. And now we just
89:32 need to give it a system prompt on how to write a LinkedIn post. So open this
89:36 up. Click on add option. Click on system message. And now let's give it a prompt
89:41 about turning these three articles into a LinkedIn post. Okay. So I'm heading
89:45 over to my custom GPT for prompt architect. If you want to access this,
89:48 you can get it for free by joining my free school community. Um you'll join
89:51 that. It's linked in the description and then you can just search for prompt
89:54 architect and you should find the link. Anyways, real quick, it's just asking
89:58 for some clarification questions. So, anyways, I'm just shooting off a quick
90:01 reply and now it should basically be generating our system prompt for us. So,
90:05 I'll check in when this is done. Okay, so here is the system prompt. I am going
90:09 to just paste it in here and I'm just going to, you know, disclaimer, this is
90:12 not perfect at all. Like, I don't even want this tool section at all because we
90:16 don't have a tool hooked up to this agent. Um, we're obviously just going to
90:19 give it a chat model real quick. So, in this case, what I'm going to do is I'm
90:22 going to use Claude 3.5 Sonnet just because I really like the way that it
90:25 writes content. So, I'm using Claude through OpenRouter. And now, let's give
90:28 it a run and we'll just see what the output looks like. Um, I'll just click
90:31 into here while it's running and we should see that it's going to read those
90:34 articles and then we'll get some sort of LinkedIn post back. Okay, so here it is.
90:39 The creative revolution is here and it's AI powered. Gone are the days of hiring
90:42 expensive designers or struggling with complex software. Today's entrepreneurs
90:46 can transform ideas into stunning visuals instantly using AI image
90:50 generators. So, as you can see, we have a few emojis. We have some relevant
90:53 hashtags. And then at the end, it also said this post, you know, it kind of
90:56 explains why it made this post. We could easily get rid of that. If all we want
90:59 is the content, we would just have to throw that in the system prompt. But now
91:03 that we have the post that we want, all we have to do is send it back into our
91:07 Google sheet and update that it was actually made. So, we're going to grab
91:11 another Sheets node. We're going to do update row in sheet. And this one's a
91:14 little different. It's not just grabbing stuff from a row; we're trying
91:18 to update stuff. So, we have to say what document we want and what sheet we want.
91:21 But now, it's asking us what column do we want to match on. So, basically, I'm
91:25 going to choose topic. And all we have to do is go all the way back down to the
91:28 sheet. We're going to choose the topic and drag it in right here. Which is
91:32 basically saying, okay, when this node gets called, whenever the topic equals
91:37 AI image generation, which is a variable, obviously, so whatever
91:40 topic triggered the workflow is what's going to pop up here, we're going to
91:44 update that status. So, back in the sheets, we can see that the status is
91:47 currently to-do, and we need to change it to created in order for it to go
91:51 green. So, I'm just going to type in created, and obviously, you have to
91:53 spell this correctly the same way you have it in your Google Sheets. And then
91:56 for the content, all I'm going to do is drag in the output
92:00 of the AI agent. And as you can see, it's going to be spitting out the
92:03 result. And now if I hit test step and we go back into the sheet, we'll
92:06 basically watch this change. Now it's created. And now we have the content of
92:10 our LinkedIn post as well, with some justification for why it wrote the
92:14 post this way.
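(Again for reference, a rough gspread version of this update step; the match value and the column positions are assumptions based on the sheet layout in the video: topic, status, content.)

```python
# A sketch of the "update row" step with gspread, matching on the topic column.
import gspread

gc = gspread.service_account(filename="credentials.json")
ws = gc.open("LinkedIn posts").sheet1

cell = ws.find("AI image generation")           # locate the row by topic value
if cell:
    ws.update_cell(cell.row, 2, "created")          # column 2: status
    ws.update_cell(cell.row, 3, "...post text...")  # column 3: content
```

And so, like I said, you could basically have this be some sort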
92:17 of, you know, LinkedIn content making machine where every day it's going to
92:21 run at 7:00 a.m. It's going to give you a post. And then what you could do also
92:24 is you can automate this part of it where you're basically having it create
92:27 a few new rows every day if you give it a certain sort of like general topic to
92:32 create topics on and then every day you can just have more and more pumping out.
92:35 So that is going to do it for our third and final workflow. Okay, so that's
92:39 going to do it for this video. I hope that it was helpful. You know, obviously
92:42 we connected to a ton of different credentials and a ton of different
92:46 services. We even made an HTTP request to an API called Tavily. Now, if you found
92:49 this helpful and you liked this sort of live step-by-step style and you're also
92:53 looking to accelerate your journey with n8n and AI automations, I would
92:56 definitely recommend checking out my paid community. The link for that is
92:58 down in the description. Okay, so hopefully those three workflows taught
93:02 you a ton about connecting to different services and setting up credentials.
93:05 Now, I'm actually going to throw in one more bonus step-by-step build, which is
93:09 actually one that I shared in my paid community a while back, and I wanted to
93:12 bring it to you guys now. So, definitely finish out this course, and if you're
93:14 still looking for some more and you like the way I teach, then feel free to check
93:17 out the paid community. The link for that's down in the description. We've
93:19 got a course in there that's even more comprehensive than what you're watching
93:22 right now on YouTube. We've also got a great community of people that are using
93:25 n8n to build AI automations every single day. So, I'd love to see you guys
93:28 in that community. But, let's move ahead and build out this bonus workflow. Hey
93:34 guys. So, today I wanted to do a step by step of an invoice workflow. And this is
93:39 because there's different ways to approach stuff like this, right? There's
93:42 the conversation of OCR. There's a conversation of maybe extracting text
93:46 from PDFs. Um, there's the conversation of if you're always getting invoices in
93:50 the exact same format, you probably don't need AI because you could use like
93:54 a code node to extract the different parameters and then push that through.
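(To make that concrete, here is the kind of thing a code node could do if the format never changed. A hypothetical sketch with regex patterns invented for one fixed layout; the moment the layout varies, patterns like these break, which is exactly why we'll reach for AI instead.)

```python
# A sketch of fixed-format extraction with regular expressions, no AI needed.
# The patterns are hypothetical and assume one specific invoice layout.
import re

def extract_invoice(text: str) -> dict:
    patterns = {
        "invoice_number": r"Invoice\s*#?\s*:?\s*(\S+)",
        "invoice_date": r"Date\s*:?\s*([\d/]+)",
        "total_amount": r"Total\s*:?\s*\$?([\d,.]+)",
    }
    out = {}
    for field, pattern in patterns.items():
        m = re.search(pattern, text)
        out[field] = m.group(1) if m else None
    return out
```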
93:58 So, that's the kind of stuff we're going to talk about today. And I haven't shown
94:01 this one on YouTube. It's not like a YouTube build, and it's not an agent.
94:04 It's an AI powered workflow. And I also wanted to talk about like just the
94:07 foundational elements of connecting pieces and thinking about the workflow. So,
94:11 what we're going to do first is hop into Excalidraw real
94:14 quick. I'm going to create a new canvas, and we're just going to real quickly
94:20 wireframe out what we're doing. So, first thing we're going to draw out here
94:24 is the trigger. So, we'll make this one yellow. We'll call this the
94:30 trigger. And what this is going to be is, sorry, we're going to do a
94:41 Google Drive trigger. So the Google Drive node, it's going to be triggering the
94:45 workflow, and it's going to be when a new invoice gets dropped into
94:49 the folder that we're watching. So that's the trigger. From there, and like
94:53 I said, this is going to be a pretty simple workflow. From there, what we're
94:56 going to do is handle what comes in, which is going to be a PDF. So the first thing to
95:02 understand is, actually let me just put Google Drive over here, so the first
95:07 thing to understand from here is, you know, what do the invoices
95:12 look like? These are the questions that we're going to have. So the first one's what
95:19 do the invoices look like? Because that determines what happens next. So if
95:23 they are PDFs that happen every single time and they're always in the same
95:27 format, then next we'd say, okay, well, we can just extract the
95:33 text from this and then we can use a code node to extract the information we
95:39 need per each parameter. Now if it is a scanned invoice, where
95:44 we're maybe not as able to extract text from it or turn it into a text
95:48 doc, we'll probably have to do some OCR element. But if it's a PDF that's
95:54 generated by a computer, so we can extract the text, but they're not going
95:57 to come through the same every time, which is what we have in this case. I
96:00 have two example invoices. So, we know overall we're looking for things like
96:04 business name, client name, invoice number, invoice date, due date, payment
96:07 method, bank details, maybe stuff like that, right? But both of these are
96:10 formatted very differently. They all have the same information, but they're
96:15 formatted differently. So that's why we can't use a code node, and why we want to use an AI
96:19 information extractor node. So that's one of the main questions. The
96:22 other ones we'd think about would be, you know, where do they go? So once we get
96:30 them, where do they go? You know, the frequency of them coming in, and then
96:34 also really any other action. So, building off of where do they go, it's
96:42 also, what actions will we take? So, does that mean we're just going to
96:46 throw it in a CRM, or maybe a database,
96:49 or are we also going to send them an automated follow-up based on, you know,
96:53 the email that we extract from it and say, "Hey, we received your invoice.
96:56 Thanks."? So, what does that look like? So, those are the questions we
96:59 were initially going to ask. And then that helps us pretty much plan out the
97:05 next steps. So, because we figured out that we want to extract the same
97:21 fields every time, but the formats may not be
97:31 consistent, we will use an AI information extractor. That is just a long
97:43 sentence, so let me shorten this up a little bit, or sorry, make it smaller a little
97:46 bit. Okay, so we have that. Then the results get written to our Google Sheet, which I'll
98:08 just call the invoice database, and then a follow-up email can
98:14 be sent. Or no, not a follow-up email; we'll just say an email, an internal email, will be sent.
98:22 So, an email will be sent to the internal billing team. Okay, so this is what we've got, right?
98:30 We have our questions. We've kind of answered the questions. So, now we know
98:32 what the rest of the flow is going to look like. We already know this is not
98:35 going to be an agent. It's going to be a workflow. So, what we're going to do is
98:38 we're going to add another node right here, which is going to be, you know,
98:43 the PDF comes in, and what we want to do is extract the text from that
98:51 PDF. Let's make this text smaller. So we're going to extract the text, and
98:55 we'll do this by using an extract text node. Okay, cool. Now,
99:01 once we have the text extracted, what do we need to do? Let me
99:09 just move over these initial questions. So we have the text
99:13 extracted. What comes next? What comes next is we need to
99:18 decide on the fields to extract. And how do we get this? We get this
99:29 from our invoice database. So let's quickly set up the invoice database. I'm
99:33 going to do this by opening up a Google sheet which we are just going to call
99:41 the invoice DB. So now we need to figure out what we actually want to put
99:48 into our invoice DB. So first thing: you know, we're pretending
99:53 that our business is called Green Grass. So we don't need that. We don't need the
99:55 business information. We really just need the client information. So invoice
100:00 number will be the first thing we want. So, we're just setting up our database
100:04 here. So, invoice number. From there, we want to get
100:08 client name, client
100:21 email, client address, and then we want client phone. Okay, so we have those
100:29 five things. And let's see what else we want: probably the amount. So, we'll
100:43 add the total amount, and then the invoice date and due date. Okay. Invoice date and due date.
100:52 So, we have these, what are these? Eight. Eight fields. And I'm just going
100:57 to change these colors so it looks visually better for us. So, here are the
101:01 fields we have and this is what we want to extract from every single invoice
101:04 that we are going to receive. Cool. So, we know we have these
101:09 eight things. So, we have our eight
101:17 fields to extract, and then they're going to be pushed to the invoice DB, and then,
101:23 once we have these fields, we can basically create our email. So this
101:29 is going to be an AI node that's going to do the info extraction. So it's going to
101:34 extract the eight fields that we have over here. So we're going to send the data
101:39 into there and it's going to extract those fields. Once we extract those
101:47 fields, we probably don't need a set-data step, because coming out of
101:51 this will basically be those eight fields. So, you know, every time, what's
101:57 going to happen is, actually, sorry, let me add another node here so we can
102:01 connect these. So what's going to come out of here is one item, which will be
102:05 the one PDF, and then what's coming out of here will be eight items every time.
102:09 So that's what we've got. We might also want to think about what happens if two
102:12 invoices get dropped in at the same time: do we want to loop over them
102:16 or just push them through? But we won't worry about that yet. So we've got one item
102:19 coming in here. The node that's extracting the info will push out the
102:22 eight items and the eight items only. And then what we can do from there is
102:31 update the invoice DB, and then from there we can also, and this could be, like, out of
102:35 here we do two things, or it could be sequential, if that makes
102:39 sense. So, well, what else do we know we need to do? We know that we also need
102:43 to email the billing team. And so, what I was saying there is we could either have it like this,
102:51 where at the same time it branches off and it does those two things, and it
102:54 really doesn't matter the order because they're both going to happen either way,
102:57 or, for now, to keep the flow simple, we'll just do it sequentially, and then we're going to
103:02 email the billing team. And what's going to happen is,
103:12 because this is internal, we already know the billing email. So,
103:21 billing@example.com. This is what we're going to feed in, because we already know
103:23 the billing email. We don't have to extract this from anywhere. So we
103:30 have all the info we need. What else do we need to feed in
103:34 here? Some of these fields we'll have to filter in. So, some
103:42 of the extracted fields, because we want to say, hey, you know, we got this invoice on this
103:50 date, to this client, and it's due on this date. So, we'll have some of the
103:53 extracted fields. We'll have the billing email, and then potentially
103:59 an email template. That's something
104:06 we can think about, or we can just prompt it. So, yeah. Okay. So what we want to
104:17 do here is actually this: what we need to do is the email has to be generated
104:22 somewhere. So before we feed into the email-the-billing-team node, let me actually
104:26 change this. We're going to have green nodes be AI and then blue nodes
104:30 are going to be not AI. So we're going to get another AI node right here, which
104:35 is going to be craft email. So we'll connect these pieces once again.
104:43 And so I hope you guys can see this is me trying to figure
104:46 out the workflow before we get into n8n, because then we can just plug in these
104:49 pieces, right? And I didn't even think about this. I mean, obviously we would have
104:54 gotten in there into n8n and realized, okay, well, we need an email to actually be written to
105:00 configure these next fields, but that's just how it works, right? So
105:05 anyways, this stuff is actually hooked up to the wrong place. We need this to
105:07 be hooked up over here to the craft email node. So, the email template will also be
105:14 hooked up here. And then the billing email will be hooked up. This is the,
105:18 no, this one will still go here, because that's actually the
105:21 email node: the send email node, which is an action, and we'll be feeding this in as
105:31 well as the actual email. So the email that's written by AI will be fed in. And I
105:40 think that ends the process, right? So we'll just add a quick
105:47 yellow note over here. And I know my colors always change, but I'm just trying to keep things
105:53 consistent. Like, in here, we're just saying, okay, the process is going to
105:56 end now. Okay, so this is our workflow, right? New invoice PDF comes through. We
106:01 want to extract the text. We're using an extract text node, which is just going to
106:05 be a static extract-from-PDF or convert-PDF-to-text type of thing.
106:09 We'll get one item sent to an AI node to extract the eight fields we need. The
106:12 eight items will be fed into the next node which is going to update our Google
106:16 sheet. Um and I'll just also signify here this is going to be a Google sheet
106:19 because it's important to understand the integrations and like who's involved in
106:24 each process. So this is going to be AI. This is going to be AI and that's
106:29 going to be an extract node. This is going to be a Gmail node and then we
106:33 have the process end. Cool. So this is our wireframe. Now we can get into n8n
106:38 and start building it out. We can see that this is a very, very sequential flow. We
106:42 don't need an agent. We just need two AI nodes. So let us get into n8n and start
106:51 building this thing. So, we know what's starting this
106:55 process, which is a trigger. So, I'm going to grab a Google Drive
106:59 trigger. We're going to do on changes to a specific file, or no, no, specific
107:04 folder, sorry: changes involving a specific folder. We're going to choose
107:08 our folder, which is going to be the projects folder, and we're going to be
107:12 watching for a file created. So, we've got our ABC Tech Solutions.
107:18 I'm going to download this as a PDF real quick. So, download as a PDF. I'm going
107:23 to go to my projects folder in the drive, and I'm going to drag this guy in
107:26 here. There it is. Okay, so there's our PDF. We'll come in here and we'll hit
107:32 fetch test event. So, we should be getting our PDF. Okay, nice. We will just make sure it's the
107:39 right one. So, we should see an ABC Tech Solutions invoice. Cool. So, I'm
107:42 going to pin this data just so we have it here. So, just for reference, pinning
107:46 data, all it does is just keeps it here. So, if we were to refresh this this
107:50 page, we'll still have our pinned data, which is that PDF to play with. But if
107:53 we would have not pinned it, then we would have had to fetch test event once
107:56 again. So it's not a huge deal with something like this, but if you're doing
108:00 webhooks or API calls, you don't want to have to do it every time. So you can pin
108:03 that data, or the output of an AI node if you don't want to have to rerun the AI.
108:11 But anyway, so we have our PDF. We know what's next based on our wireframe. And
108:16 let me just call this the invoice flow wireframe. So we know next is we need to extract
108:23 text. So perfect. We'll get right into n8n. We'll click on next and we will do
108:28 an extract from file. So let's see. We want to extract from PDF. And,
108:34 although, what do we have here? We don't have any binary. So we
108:40 were on the right track here, but we forgot something: we
108:44 get the PDF file ID from the trigger, but we don't actually have the file itself. So what we need to do
108:49 here first is download the file, because we need the binary to then feed into the extraction.
109:04 So we need the binary. Sorry if that's written really small, but basically, in order to
109:11 extract the text, we need to download the file first to get the binary, and
109:16 then we can actually do that. So, a little thing we missed in the
109:19 wireframe, but not a huge deal, right? So, we're going to extend this one off.
109:23 We're going to do a Google Drive node once again, and we're going to look at
109:27 download file. So, now we can say, okay, we're downloading a file. Um, we can
109:32 choose from a list, but this has to be dynamic because it's going to be based
109:35 on that new trigger every time. So, I'm going to do by ID. And now on the left-hand
109:39 side, we can look for the file ID. So, I'm going to switch to schema real
109:44 quick so we can find the ID of the file. We're just going to have to go
109:47 through it. So, we have a permissions ID right here. I don't think that's the
109:51 right one. We have a spaces ID. I don't think that's the right one either. We're
109:56 looking for an actual file ID. So, let's see: parents, icon link, thumbnail link...
110:06 sometimes you just have to hunt for it. So, I feel like I probably have just
110:13 skipped right over it. Maybe it is this one. Yeah, I think,
110:21 okay, sorry, I think it is this one, because we see the name is right here
110:23 and the ID is right here. So, we'll try this. We're referencing that
110:27 dynamically. We also see in here we could do a Google file conversion which
110:31 basically says um you know if it's docs convert it to HTML if it's drawings
110:35 convert it to that. If it's this convert it to that there's not a PDF one so
110:39 we'll leave this off and we'll hit test step. So now we will see we got the
110:43 invoice we can click view and this is exactly what we're looking at here with
110:47 the invoice. So this is the correct one. Now since we have it in our binary data
110:50 over here, we can extract it from the file. So, you know,
110:56 on the left are the inputs, and on the right is going to be our output. So we're
111:00 extracting from PDF. We're looking in the input binary field called data, which
111:05 is right here. So I'll hit test step, and now we have text. So here's the actual
111:10 text, right: the invoice, the information we need, and this is
111:12 what we're going to pass over to the extraction step.
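(For the curious, here is roughly what these two nodes, download file and extract from file, are doing, sketched in Python. It assumes an authorized Google Drive service object from google-api-python-client, plus the pypdf library.)

```python
# A sketch of "download the binary, then extract the text" outside n8n.
import io
from googleapiclient.http import MediaIoBaseDownload
from pypdf import PdfReader

def download_and_extract(service, file_id: str) -> str:
    # Download the file's binary content (what the Download File node does).
    request = service.files().get_media(fileId=file_id)
    buf = io.BytesIO()
    downloader = MediaIoBaseDownload(buf, request)
    done = False
    while not done:
        _, done = downloader.next_chunk()
    buf.seek(0)
    # Extract text from the PDF binary (what the Extract From File node does).
    reader = PdfReader(buf)
    return "\n".join(page.extract_text() or "" for page in reader.pages)
```

So, let's go back to the wireframe.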
111:18 We have our text extracted. Now, what we want to do is extract
111:22 the specific eight fields that we need. So, hopping back into the workflow, we
111:26 know that this is going to be an AI node. So, it's going to be an
111:28 information extractor. First of all, we know that one item is
111:34 going in here, and that's right here for us in the table, which is the actual
111:37 text of the invoice. So, we can open this up and we can see this is the text
111:40 of the invoice. We want to do it from attribute descriptions. So, that's what it's
111:46 looking for. So, we can add our eight attributes. So, we know there's going to
111:48 be eight of them, right? So, we can create eight. But, let's just first of
111:52 all go into our database to see what we want. So, the first one's invoice
111:55 number. So, I'm going to copy this over here. Invoice number. And we just have
111:59 to describe what that is. So, I'm just going to say "the number of the
112:03 invoice." And this is required. We're going to make them all required. So,
112:06 number of the invoice. Then we have the client name. Paste that in
112:11 here. These should all be pretty self-explanatory. So, "the name of the
112:17 client," and we're going to make it required. Client email: this is going
112:22 to be a little bit repetitive, but "the email of the client." And let me just
112:31 quickly copy this for the next two. Client address: so there's client
112:36 address, and we're going to make it required. And then what's the last one
112:48 here? Client phone. Paste that in there, which is obviously going to be the phone
112:53 number of the client. And here we can say, is this going to be a string or is
112:56 it going to be a number? I'm going to leave it right now as a string just
112:59 because over here on the left you can see the phone. We have parenthesis in
113:03 there. And maybe we want the format to come over with the parenthesis and the
113:07 little hyphen. So let's leave it as a string for now. We can always test and
113:10 we'll come back. But client phone, we're going to leave that. We have total
113:15 amount. Same reason here. I'm going to leave this one as a string because I
113:18 want to keep the dollar sign when we send it over to sheets and we'll see how
113:22 it comes over. But "the total amount of the invoice," required. What's coming next is invoice
113:31 date and due date. So, for invoice date and due date, we can say these are
113:36 going to be dates. So, we're changing the data type here. They're both
113:41 required. And "the date the invoice was sent." And then we're going to
113:47 say the date the invoice is due. So, we're going to make sure this works. If
113:49 we need to, we can get in here and make these descriptions more descriptive. But
113:53 for now, we're good. We'll see if we have any options. "You are an expert
113:56 extraction algorithm. Only extract relevant information from the text. If
113:59 you do not know the value of the attribute to extract, you may omit the
114:03 attribute's value." So, we'll just leave that as is, and we'll hit test step.
114:08 It's going to be looking at this text. And of course, we're using AI, so we
114:12 have to connect a chat model. So, this will also affect the performance. Right
114:15 now, we're going to go with Google Gemini 2.0 Flash. See if that's powerful
114:20 enough. I think it should be. And then, we're going to hit play once again. So,
114:22 now it's going to be extracting information using AI. And what's great
114:26 about this is that we already get everything out here in its own item. So,
114:30 it's really easy to map this now into our Google sheet. So, let's make sure
114:35 this is all correct. Invoice number: that looks good. I'm going to open up
114:38 the actual one. Yep. Client name, ABC: yep. Client email, finance at ABC
114:44 Tech: yep. Address and phone: we have address and phone. Perfect. We have the total amount,
114:53 which is 14,175. Yep, 14,175. We have March 8th and March 22nd. If we go back up here, March 8th,
115:00 March 22nd. Perfect. So, that one extracted it well. And, okay, so we have one item coming out,
115:08 but technically there's eight properties in there.
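(In the video this node runs on Gemini through n8n's chat-model connector, but the idea translates to a few lines of code. A sketch using the OpenAI SDK's JSON mode purely for illustration; the model name, prompt wording, and snake_case field names are placeholders.)

```python
# A rough equivalent of the Information Extractor node.
import json
from openai import OpenAI

FIELDS = ["invoice_number", "client_name", "client_email", "client_address",
          "client_phone", "total_amount", "invoice_date", "due_date"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_fields(invoice_text: str) -> dict:
    prompt = (
        "You are an expert extraction algorithm. From the invoice text below, "
        f"return a JSON object with exactly these keys: {', '.join(FIELDS)}.\n\n"
        + invoice_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)
```

So, anyways, let's go back to our wireframe. After we've extracted the eight fields, what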
115:16 do we do next? We're going to put them into our Google Sheet database. So,
115:21 what we know is we're going to grab a Google Sheets. We're going to do an
115:26 append row because we're adding a row. Um, we already have a credential
115:28 selected. So, hopefully we can choose our invoice database. It's just going to
115:32 be the first sheet, sheet one. And now what happens is we have to map the
115:36 columns. So, you can see these are draggable. We can grab each one. If I go
115:40 to schema, it's a little more apparent. So, we have these eight items. And it's
115:42 going to be really easy now that we use an information extractor because we can
115:46 just map, you know, invoice number to invoice number, client name, client
115:51 name, email, email. And it's referencing these variables because every time after
115:56 we do our information extractor, they're going to be coming out as JSON.output
115:59 and then invoice number. And then for client name, JSON.output client name. So
116:03 we have these dynamic variables that will happen every single time. And
116:06 obviously I'll show this when we do another example, but we can keep mapping
116:10 everything in. And we also did it in that order. So it's really really easy
116:14 to do. We're just dragging and dropping and we are finished. Cool. So if I hit
116:21 test step here, this is going to give us a message that shows the
116:25 fields, basically. So there are the fields. They're mapped correctly. Come
116:28 into the sheet. We now have automatically gotten this row added to our
116:34 invoice database. And that's that.
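(Script-wise, this append step is a single call. A sketch with gspread, reusing the extract_fields function from the extraction sketch above; the sheet name and field order match our invoice DB.)

```python
# A sketch of the append-row step with gspread.
import gspread

gc = gspread.service_account(filename="credentials.json")
ws = gc.open("invoice DB").sheet1

fields = extract_fields(invoice_text)  # from the extraction sketch above
ws.append_row([
    fields["invoice_number"], fields["client_name"], fields["client_email"],
    fields["client_address"], fields["client_phone"], fields["total_amount"],
    fields["invoice_date"], fields["due_date"],
])
```

So, let me just rename some of these nodes.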
116:39 This one is going to be update database; this is the information extractor; this is extract
116:44 from file; and I'm just going to say this one is download binary. So now we know what's going on
116:50 in each step. And we'll go back to the wireframe real quick. What happens after
116:53 we update the database? Now we need to craft the email. And this is going to be
116:58 using AI. And what's going to go into this is some of the extracted fields and
117:01 maybe an email template. What we're going to do more realistically is just a
117:08 system prompt. So, back into n8n, let's add an OpenAI message-a-model node. So
117:15 what we're going to do is we're going to choose our model to talk to. In this
117:19 case we'll go with GPT-4o mini. It should be powerful enough. And now we're going to
117:23 set up our system prompt and our user prompt. So at this point if you don't
117:27 understand the difference: the system prompt is the instructions. So we're telling this node
117:33 how to behave. So first I'm going to change this node's name to create email,
117:38 because that's obviously what's going on, and it keeps you organized. And now,
117:41 how do we explain to this node what its role is? So: "You are an email
117:48 expert. You will receive," let me actually just open this up, "You will receive
117:58 invoice information. Your job is to notify the internal billing
118:07 team that an invoice was received/sent." Okay. So,
118:15 honestly, I'm going to leave it at that for now. It's really simple. If we
118:18 wanted to, we can get in here and change the prompting as far as like here is the
118:22 format. Here is the way you should be doing it. One thing I like to do is I
118:25 like to say, you know, this is like your overview. And then if we need to get
118:28 more granular, we can give it different sections like output or rules or
118:32 anything like that. I'm also going to say, "You are an email expert
118:38 for Green Grass Corp named Greenie." Okay, so we have
118:49 Greenie from Green Grass Corp. That's our email expert that's going to email
118:53 the billing team every time this workflow runs. So that's the
118:58 overview. Now, in the user prompt, think of this as when you're talking to
119:01 ChatGPT. (Obviously, I had ChatGPT create these example invoices.) In ChatGPT, when we say hello,
119:08 that's a user message, because this is an interaction and it's
119:12 going to change every time. But behind the scenes, in ChatGPT, OpenAI has a
119:16 system prompt in there that's basically like: you're a helpful assistant, you
119:19 help users answer questions. So this window right here that we type in
119:24 is our user message, and behind the scenes, telling the model how to act, is
119:30 our system prompt. Cool. So in here, I like to have dynamic information go into
119:33 the user message, while I like to have static information in the actual system
119:37 prompt, with the usual exception of
119:42 giving it the current time and date, because that's an expression.
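(Here is that same system/user split written out in code, using the OpenAI SDK for illustration; the prompt wording and the invoice dict keys are placeholders.)

```python
# System prompt = static instructions; user prompt = dynamic, per-run data.
from openai import OpenAI

client = OpenAI()

def create_email(invoice: dict) -> str:
    messages = [
        # System prompt: static, defines the model's behavior.
        {"role": "system", "content": (
            "You are an email expert for Green Grass Corp named Greenie. "
            "You will receive invoice information. Your job is to notify "
            "the internal billing team that an invoice was received."
        )},
        # User prompt: the dynamic invoice details, different every run.
        {"role": "user", "content": (
            f"Invoice number: {invoice['invoice_number']}\n"
            f"Client name: {invoice['client_name']}\n"
            f"Client email: {invoice['client_email']}\n"
            f"Total amount: {invoice['total_amount']}\n"
            f"Invoice date: {invoice['invoice_date']}\n"
            f"Due date: {invoice['due_date']}"
        )},
    ]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content
```

So anyways, let's change this to an expression and make it full screen.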
119:50 We are going to be giving it the invoice information that it needs to write the
119:55 email because that's what it that's what it's expecting. In the system prompt, we
119:59 said you will receive invoice information. So, first thing is going to
120:05 be invoice number. We are going to grab invoice number and just drag it in.
120:10 We're going to grab client name and just drag it in. So, it's going
120:14 to dynamically get these different things every time, right?
120:19 So, let's say maybe we don't even need the client email. Okay, maybe we
120:24 do. We want client email, so we'll give it that. But the billing team right now doesn't need
120:30 the address or phone. Let's just say that. But we do want them
120:36 to know the total amount of that invoice. And we definitely want them to
120:40 know the invoice date and the invoice due date. So we can now drag in these two
120:48 things. So this was us just being able to customize what the AI node sees. Just
120:53 keep in mind if we don't drag anything in here, even if it's all on the input,
120:59 the AI node doesn't see any of it. So let's hit test step and we'll see the
121:02 type of email we get. We're going to have to make some changes. I already
121:04 know because you know we have to separate everything. But what it did is
121:08 it created a subject which is new invoice received and then the invoice
121:12 number. Dear billing team, I hope this message finds you well. We've received
121:15 an invoice that requires your attention and then it lists out some information
121:18 and then it also signs off Greeny Green Grass Corp. So, first thing we want to do is um if
121:25 we go back to our wireframe, what we have to send in, and we didn't document this well enough
121:35 actually, but what goes into here is a um you know, in order to send an email,
121:40 we need a two, we need a subject, and we need the email body.
121:48 So, that those are the three things we need. the two is coming from here. So,
121:52 we know that. And the subject and email are going to come from the um craft
121:58 email node. So, we have the the two and then actually I'm going to move this up
122:01 here. So, now we can just see where we're getting all of our pieces from.
122:04 So, the two is coming from internal knowledge. This can be hardcoded, but
122:07 the subject and email are going to be dynamic from the AI note. Cool. So what
122:13 we want to do now is add a section called output, and we're going to tell it how to output information. So:
122:26 "Output the following parameters separately," and we're just
122:33 going to say subject and email. So now it should be outputting two parameters
122:37 separately. But it's not going to work yet, because even though it says here's the
122:40 subject and then it gives us a subject, and then it says here's the email and
122:43 gives us an email, they're still in one field. Meaning, if we hook up another
122:48 node, which would be a Gmail send email, as we see here: okay, so now this is the next
122:58 node. Here's the fields we need. But as you can see coming out of the create
123:03 email AI node, we have this whole parameter called content which has the
123:07 subject and the email. And we need to get these split up so that we can drag
123:10 one into the subject field and one into the message field. Right? So first of all, I'm just
123:14 making these expressions just so we can drag stuff in later. And so that's
123:20 what we need to do. And our fix there is we come into here and we just check this
123:25 switch that says output content as JSON, and then we'll rerun. And now
123:29 we'll get subject and email in two different
123:34 fields right here, which is awesome, because then we can open up our
123:38 send email node. We can grab our subject. It's going to be dynamic. And
123:41 we can grab our email. It's going to be dynamic. Perfect. We're going to change
123:45 this to text. And we're going to add an option down here. And we're just going
123:49 to say append n8n attribution and turn that off, because we just don't want to see the
123:55 message at the bottom that says this was sent by n8n. And if we go back to our
124:00 wireframe wherever that is over here, we know that this is the email that's going
124:03 to be coming through or we're going to be sending to every time because we're
124:06 sending internally. So we can put that right in here, not as a variable. Every
124:10 time, this is going to be sending to billing@example.com. So this really
124:13 can be fixed. It doesn't have to be an expression. Cool. So we will now hit test step and we can
124:21 see that we got this email sent. So let me open up a new
124:26 tab and go into our Gmail. I will go to the sent items and
124:34 we will see we just got this billing email. So obviously it was a fake email
124:37 but this is what it looks like. We've received a new invoice from ABC Tech.
124:40 Please find the details below. We got invoice number, client name, client
124:45 email, total amount, invoice date, due date. "Please process this
124:48 invoice accordingly." So that's perfect. We could also, if
124:54 we wanted to, we could prompt it a little bit differently to say, you know,
124:59 like this has been updated within the database and, um, you can check it out
125:03 here. So, let's do that real quick. What we're going to do is we're going to say
125:06 because we've already updated the database, I'm going to come into our
125:09 Google Sheet. I'm going to copy this link, and we're basically going to bake this link into the prompt.
125:19 So, I'm going to say, we're going to give it a section called
125:26 email: "Inform the billing team of the invoice. Let them know we have also updated this
125:38 in the invoice database and they can view it here," and we'll just give them
125:43 this link to that Google Sheet. So every time, it'll just be able to send that
125:45 over. So I'm going to hit test step. We should see a new email over here, which
125:50 is going to include that link I hope. So there's the link. We'll run this email
125:55 tool again to send a new email. Hop back into Gmail. We got a new one. And
126:01 now we can see we have this link. So you can view it here. We've already updated
126:05 this in the invoice database. We click on the link. And now we have our
126:09 database as well. So cool. Now let's say at this point we're happy with our
126:13 prompting. We're happy with the email. This is done. If we go back to the
126:17 wireframe, the email is the last node. So maybe just to make it look
126:21 consistent, we will just add something over here that just says nothing.
126:25 And now we know that the process is done, because there's nothing to do. So this is
126:28 basically what we wireframed out. So we know that we're
126:32 happy with this process. We understand what's going on. But now let's unpin
126:37 this data real quick and let's drop in another invoice to make sure that even
126:41 though it's formatted differently, so this XYZ one is formatted differently, the
126:45 AI should still be able to extract all the information that we need. So I'm
126:48 going to come in here and download this one as a PDF. We have it right there. I'm going
126:54 to drop it into our Google Drive. So, we have XYZ Enterprises now. Come back into
127:00 the workflow and we'll hit fetch test event. Let's just make sure this is
127:03 the right one. So, XYZ Enterprises. Nice. And I'm just going to hit test workflow and
127:10 we'll watch it download, extract, get the information, update the database,
127:13 create the email, send the email, and then nothing else should happen after
127:19 that. So, boom, we're done. Let's click into our email. Here we have our new
127:23 invoice received. So it updated differently: the subject's dynamic,
127:27 because it was from XYZ, a different invoice number. As you remember, the ABC
127:31 one, it started with TH AI and this one starts with INV. So that's why the subject is different.
127:38 Dear billing team, we have received a new invoice from XYZ Enterprises. Please
127:42 find the details below. There's the number, the name, all this kind of
127:46 information. The total amount was 13,856. Let's go make sure that's right.
127:51 Total amount: 13,856. March 8th, March 22nd once again. Is that
128:01 correct? March 8th, March 22nd. Nice. And finance at XYZ: yep. Perfect. Okay. The
128:05 invoice has been updated in the database. You can view it here. So let's
128:08 click on that link. Nice. We got our information populated into the
128:12 spreadsheet. As you can see, it all looks correct to me as well. Our strings
128:15 are coming through nice and our dates are coming through nice. So I'm going to
128:18 leave it as is. Now, keep in mind because these are technically coming
128:23 through as strings, um, that's fine for phone, but Google Sheets automatically
128:27 made these numbers, I believe. So, if we wanted to, we could sum these up because
128:31 they're numbers. Perfect. Okay, cool. So, that's how that works,
128:36 right? That's the email. We wireframed it out. We tested it with two
128:40 different types of invoices. They weren't consistently formatted, which
128:43 means we probably couldn't have used a code node, but the AI is able to read
128:47 this and extract it. As you can see right here, we got the same eight items
128:51 extracted that we were looking for. So, that's perfect. Cool. So, I
129:00 will, yeah, I will attach the actual flow and I will attach
129:06 just a picture of this wireframe, I suppose, in this post. And by now you
129:12 guys have already seen that, I'm sure. But yeah, I hope this was helpful: the
129:16 whole process of the way that I approached it. And I know this was a 35-minute
129:20 build, so it's not the same as building something more complex, but as
129:24 far as a general workflow goes, you know, this is a pretty solid one to get
129:27 started with. It shows elements of using AI within a simple workflow that's going to
129:35 be sequential, and it shows the way we have to reference our
129:38 variables and how we have to drag things in, and obviously the component of
129:42 wireframing out in the beginning to understand the full flow, at least 80%
129:48 to 85% of the full flow, before you get in there. So cool. Hope you guys enjoy this
129:52 one, and I will see you guys in the community. Thanks. All right,
129:55 I hope you guys enjoyed those step-by-step builds. Hopefully, right
129:57 now, you're feeling like you're in a really good spot with n8n and
130:01 everything starting to piece together. This next video we're going to move into
130:05 is about APIs, because in order to really get more advanced with our workflows and
130:08 our AI agents, we have to understand the most important thing, which is APIs.
130:12 They let our n8n workflows connect to anything that you actually want to use.
130:15 So, it's really important to understand how to set them up. And when you understand
130:18 it, the possibilities are endless. And it's really not even that difficult. So,
130:21 let's break it down. If you're building AI agents, but you don't really
130:24 understand what an API is or how to use them, don't worry. You're not alone. I
130:27 was in that exact same spot not too long ago. I'm not a programmer. I don't know
130:31 how to code, but I've been teaching tens of thousands of people how to build real
130:34 AI systems. And what changed the game for me was when I understood how to use
130:38 APIs. So, in this video, I'm going to break it down as simple as possible, no
130:41 technical jargon, and by the end, you'll be confidently setting up API calls
130:45 within your own Agentic workflows. Let's make this easy. So the purpose of this
130:48 video is to understand how to set up your own requests so you can access any
130:52 API, because that's where the power truly comes in. And before we get into n8n and
130:56 we set up a couple live examples and I show you guys my thought process when
131:00 I'm setting up these API calls. First I thought it would just be important to
131:04 understand what an API really is. And APIs are so, so powerful because, let's
131:08 say we're building agents within n8n. Basically, we can only do things within
131:12 n8n's environment unless we use an API to access some sort of server. So
131:16 whether that's Gmail or HubSpot or Airtable, whatever we want to access
131:20 that's outside of n8n's own environment, we have to use an API call
131:24 to do so. And so that's why at the end of this video, when you completely
131:27 understand how to set up any API call you need, it's going to be a complete
131:31 gamecher for your workflows and it's also going to unlock pretty much
131:35 unlimited possibilities. All right, now that we understand why APIs are
131:38 important, let's talk about what they actually do. So API stands for
131:42 application programming interface. And at the highest level in the most simple
131:45 terms, it's just a way for two systems to talk to each other. So, n8n and
131:49 whatever other system we want to use in our automations. So, keeping it limited
131:53 to us, it's our n8n AI agent and whatever we want it to interact with.
131:56 Okay, so I said we're going to make this as simple as possible. So let's do it.
132:00 What we have here is just a scenario where we go to a restaurant. So this is
132:03 us right here. And what we do is we sit down and we look at the menu and we look
132:06 at what food that the restaurant has to offer. And then when we're ready to
132:10 order, we don't talk directly to the kitchen or the chefs in the kitchen. We
132:13 talk to the waiter. So we'd basically look at the menu. We'd understand what
132:16 we want. Then we would talk to the waiter and say, "Hey, I want the chicken
132:19 parm." The waiter would then take our order and deliver it to the kitchen. And
132:23 after the kitchen sees the request that we made and they understand, okay, this
132:26 person wants chicken parm. I'm going to grab the chicken parm, not the salmon.
132:30 And then we're basically going to feed this back down the line through the
132:33 waiter all the way back to the person who ordered it in the first place. And
132:37 so that's how you can see we use an HTTP request to talk to the API endpoint and
132:42 receive the data that we want. And so now a little bit more of a technical
132:45 example of how this works in NADN. Okay, so here is our AI agent. And when it
132:48 wants to interact with the service, it first has to look at that services API
132:53 documentation to see what is offered. Once we understand that, we'll read that
132:56 and we'll be ready to make our request and we will make that request using an
133:00 HTTP request. From there, that HTTP request will take our information and
133:04 send it to the API endpoint. the endpoint will look at what we ordered
133:07 and it will say, "Okay, this person wants this data. So, I'm going to go
133:10 grab that. I'm going to send it back, send it back to the HTTP request." And
133:14 then the HTTP request is actually what delivers us back the data that we asked
133:17 and we know that it was available because we had to look at the API
133:21 documentation first. So, I hope that helps. I think that looking at it
133:24 visually makes a lot more sense, especially when you hear, you know, HTTP
133:28 API endpoint, all this kind of stuff. But really, it's just going to be this
133:31 simple. So now let me show an example of what this actually looks like in n8n
133:34 and when you would use one and when you wouldn't need to use one. So here we
133:37 have two examples where we're accessing a service called open weather map which
133:41 basically just lets us grab the weather data from anywhere in the world. And so
133:44 on the left, what we're doing is we're using OpenWeather's native integration
133:48 within n8n. And so what I mean by native integration is just that when we
133:51 go into n8n and we click on the plus button to add an app and we want to see,
133:54 like, the different integrations: it has Airtable, it
133:58 has Affinity, it has Airtop, it has all these AWS things. It has a ton of native
134:02 integrations. And all that a native integration is, is an HTTP request, but
134:07 it's just wrapped up nicely in a UI for us to basically fill in different
134:11 parameters. And so once you realize that, it really clears everything up, because
134:14 the only time you actually need to use a manual HTTP request is if the service you
134:19 want to use is not listed in this list of all the native integrations. Anyways,
134:23 let me show you what I mean by that. So like I said, on the left we have
134:27 OpenWeatherMap's native integration. Basically what we're doing here is we're
134:29 sending over, okay, I'm using OpenWeatherMap, I'm going to put in the
134:32 latitude and the longitude of the city that I'm looking for. And as you can see
134:36 over here, what we get back is Chicago as well as a bunch of information about
134:39 the current weather in Chicago. And if you were to fill this out, it's super,
134:42 super intuitive, right? All you do is put in the lat and long, you choose your
134:46 format as far as imperial or metric, and then you get back data. And that's the
134:49 exact same thing we're doing over here, where we use an HTTP request to talk to
134:54 OpenWeatherMap's API endpoint. This just looks a little more scary and
134:57 intimidating because we have to set this up ourselves. But if we zoom in, we can
135:00 see it's pretty simple. We're making a GET request to OpenWeatherMap's URL
135:05 endpoint, and then we're passing over the lat and the long, which is basically
135:09 the exact same as the one on the left. And then, as you can see, we get the
135:13 same information back about Chicago, and then some weather information about
135:16 Chicago.
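Just to make that concrete, here's roughly what that GET request looks like as a curl command. The lat/long are Chicago's, the path and parameter names follow OpenWeatherMap's current-weather docs (double-check them there), and the key is a placeholder:

```bash
# Current weather for Chicago by coordinates; units=imperial matches the
# imperial/metric choice in the native integration. YOUR_API_KEY is a placeholder.
curl "https://api.openweathermap.org/data/2.5/weather?lat=41.88&lon=-87.63&units=imperial&appid=YOUR_API_KEY"
```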
135:20 And so the purpose of that was just to show you guys that with these native
135:24 integrations, all we're doing is accessing some sort of API endpoint; it just
135:27 looks simpler because there's a nice UI for us rather than setting everything up manually. Okay, so hopefully that's
135:30 starting to make a little more sense. Let's move down here to the way that I
135:34 think about setting up these HTTP requests, which is we're basically just
135:37 setting up filters and making selections. All we're doing is we're
135:42 saying, okay, I want to access server X. When I access server X, I need to tell
135:45 it basically, what do I want from you? So, it's the same way when you're going
135:48 to order some pizza. You have to first think about which pizza shop do I want
135:52 to call? And then once you call them, it's like, okay, I need to actually
135:54 order something. It has to be small, medium, large. It has to be pepperoni or
135:58 cheese. You have to tell it what you want and then they will send you the
136:01 data back that you asked for. So when we're setting these up, we basically
136:04 have like five main things to look out for. The first one you have to set every
136:08 time, which is the method. And the two most common are going to be a GET or a
136:11 POST. Typically, a GET is when you're just going to access an endpoint and you
136:14 don't have to send over any information; you're just going to get something back.
136:17 But a POST is when you're going to send over certain parameters and certain data
136:21 and say, okay, using this information, send me back what I'm asking for. The great
136:24 news is, and I'll show you later when we get into n8n to actually do a live
136:29 example, the documentation will always tell you whether it's a GET or a POST. Then
136:32 the next thing is the endpoint. You have to tell it which website, or you
136:35 know, which endpoint you want to actually access, which URL. From there we have
136:38 three different parameters to set up. And also, just realize that this one
136:43 should say body parameters. This used to be the most confusing part to
136:46 me, but it's really not too bad at all. So, let's break it down. Keep in
136:49 mind, when we're looking at that menu, that API documentation, it's always
136:52 going to basically tell us, okay, here are your query parameters, here are your
136:55 header parameters, and here are your body parameters. So, as long as you
136:57 understand how to read the documentation, you'll be just fine. But
137:01 typically, the difference here is that when you're setting up query parameters,
137:04 this is basically just setting a few filters. So if you search "pizza" in
137:08 Google, the URL will become google.com/search, which is Google's endpoint, and then we'd have a
137:14 question mark and then q equals and then a bunch of different filters. So as you
137:16 can see right here, the first filter is just q=pizza. And the q, you
137:21 know, stands for query. And you don't even have to understand
137:23 that; that's just me showing you kind of like a real example of how that works.
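As a quick sketch of that idea (the URL and filter values here are just illustrative), a GET request with query parameters looks like this:

```bash
# Everything after the "?" is a query parameter; "&" chains extra filters.
curl "https://www.google.com/search?q=pizza&hl=en"
```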
137:26 From there, we have to set up a header parameter, which is pretty much always
137:30 going to exist. And I basically just think of header parameters as, you know,
137:34 authorizing myself. So, usually when you're doing some sort of API where you
137:37 have to pay, you have to get a unique API key and then you'll send that key.
137:40 And if you don't put your key in, then you're not going to be able to get the
137:43 data back. So, like if you're ordering a pizza and you don't give them your
137:46 credit card information, they're not going to send you a pizza. And usually
137:49 an API key is something you want to keep secret because let's say, you know, you
137:53 put 10 bucks into some sort of API that's going to create images for you.
137:56 If that key gets leaked, then anyone could use that key and could go create
138:00 images for themselves for free, but they'd be running down your credits. And
138:03 These can come in different forms, but I just wanted to show you a really common
138:06 one, which is, you know, you'll have your key-value pairs where you'll put
138:10 Authorization as the name and then in the value you'll put Bearer, space, your
138:15 API key. Or in the name you could just put api_key and then in the value you'd
138:19 put your API key. But once again, the API documentation will tell you how to
138:23 configure all this.
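Here's a minimal sketch of that header setup, with a hypothetical endpoint and a placeholder key; the exact header name always comes from the API docs:

```bash
# Header parameters ride along with the request; this is the common Bearer form.
curl "https://api.example.com/v1/images" \
  -H "Authorization: Bearer YOUR_API_KEY"
```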
138:26 And then finally, the body parameters, if you need to send something over to get
138:31 something back. Let's say we're making an API call to our CRM and we want to get
138:35 back information about John. We could send over something like name equals John.
138:39 The server would then grab all records that have name equal to John and send
138:42 them back to you. So those are basically the five main things to look out for
138:45 when you're reading through API documentation and setting up your HTTP request.
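And here's a hedged sketch of that CRM example as a POST with a JSON body; the endpoint and field name are hypothetical, just to show the shape:

```bash
# The JSON body tells the server which records we want back.
curl -X POST "https://api.example-crm.com/v1/contacts/search" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "John"}'
```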
138:51 But the beautiful thing about living in 2025 is that we now have the most
138:55 beautiful thing in the world, which is the curl command. What a curl command lets you
138:59 do is hit copy, and then you can basically just import that curl into n8n and it will
139:03 pretty much set up the request for you. At that point it really is just
139:07 like putting in your own API key and tweaking a few things if you want to. So
139:11 let's take a look at this curl statement for a service called Tavily. As you can
139:14 see, the endpoint is api.tavily.com. All this service basically does is
139:18 it lets you search the internet. And you can see this curl statement tells
139:21 us pretty much everything we need to know to use it. It's telling us
139:24 that it's going to be a POST. It's showing us the API endpoint that we're
139:27 going to be accessing. It shows us how to set up our header, which is going to
139:30 be Authorization, and then it's going to be Bearer, space, API token. It's
139:34 also telling us that we're going to get this back in JSON format.
139:37 And then you can see all of these different key-value pairs right here in
139:40 the data section. These are basically going to be body parameters
139:43 where we can say, you know, query is "who is Leo Messi?" So that's what we'd be
139:47 searching the internet for. We have topic equals general. We have search
139:50 depth equals basic. So hopefully you can see all of these are just different
139:54 filters where we can choose, okay, do we want one max result
139:58 or do we want four? Do we have a time range or do we not?
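Putting those pieces together, the Tavily curl statement on screen looks roughly like this. It's reconstructed from the walkthrough, so treat the exact field names as assumptions and verify them against Tavily's docs:

```bash
# Body parameters are the "filters": query, topic, search_depth, max_results.
curl -X POST "https://api.tavily.com/search" \
  -H "Authorization: Bearer YOUR_TAVILY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "who is Leo Messi?",
    "topic": "general",
    "search_depth": "basic",
    "max_results": 1
  }'
```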
140:02 So at the end of the day, this is really just like ordering DoorDash,
140:05 because what we would have up here is, you know, which actual restaurant do
140:09 we want to order food from? We would put in our credit card information. We would
140:12 say, do we want this delivered to us, or do we want to pick it up? We would
140:15 basically say, you know, do you want a cheeseburger? No pickles, no onions.
140:18 What are the different things that you want to flip? Do you want a side? Do
140:21 you want fries, or do you want salad? What do you want? And so once you
140:25 get into this mindset where all I have to do is understand this documentation
140:28 and just tweak these little things to get back what I want, it makes setting
140:33 up API calls so much easier. And if another thing that kind of intimidates
140:37 you is the aspect of JSON, it shouldn't, because all it is is
140:41 a key-value pair, like we kind of talked about. You know, this is JSON right here,
140:44 and you're going to send your body parameters over as JSON, and you're also
140:48 going to get back JSON. So the more and more you use it, you're going to
140:51 recognize how easy it is to set up. So anyways, I hope that made sense
140:54 and broke it down pretty simply. Now that we've seen how it all works,
140:58 it's going to be really valuable to get into n8n. I'm going to open up some API
141:02 documentation and we're just going to set up a few requests together and we'll
141:06 see how it works. Okay, so here's the example. You know, I did OpenWeatherMap's
141:09 native integration and then also OpenWeatherMap as an HTTP request, and you can
141:13 see it was basically the exact same thing. So let's say that what we
141:17 want to do is use Perplexity, which, if you guys don't know what
141:20 Perplexity is, is basically kind of similar to ChatGPT, but it
141:23 has really good internet search and research. So let's say we wanted to use
141:28 this and hook it up to an AI agent, so it can do web search for us. But as you
141:33 can see, if I type in Perplexity, there's no native integration for
141:37 Perplexity. So that basically signals to us, okay, we can only access Perplexity
141:41 using an HTTP request. And real quick side note, if you're ever thinking to
141:45 yourself, hmm, I wonder if I can have my agent interact with blank, the answer is
141:49 yes, if there's API documentation. And all you typically have to do to find out
141:53 if there's API documentation is just come in here and search, you know,
141:56 "Gmail API documentation." And then we can see the Gmail API is a RESTful API, which
142:01 means it has an API and we can use it within our automations. Anyways, getting
142:04 back to this example of setting up a Perplexity HTTP request. We have our
142:08 HTTP request right here and it's left completely blank. So, as you can see, we
142:12 have our method, we have our endpoint, we have query, header, and body
142:16 parameters, but nothing has been set up yet. So, what we need to do is we would
142:19 head over to Perplexity, as you can see right here. And at the bottom, there's
142:22 this little thing called API. So, I'm going to click on that. And this opens
142:26 up this little page. And so, what I have here is Perplexity's API. If I click on
142:30 developer docs, and then right here, I have API reference, which is integrate
142:34 the API into your workflows, which is exactly what we want to do. This page is
142:38 where people might get confused and it looks a little bit intimidating, but
142:40 hopefully this breakdown will show you how you can understand any API doc,
142:44 especially if there's a curl command. So, the first thing I'm going to do is
142:48 just hit copy right away, and make sure, you know, you're on the curl tab.
142:51 If you're on Python, it's not the same. So, click on curl. I copied this curl
142:54 command, and I'm going to come back into n8n, hit import curl, and all I have to do
142:59 is paste and click import. Basically, what you're going to see is this HTTP
143:02 request is going to get populated for us. So now the
143:06 method has been changed to POST. We have the correct URL, which is the API
143:10 endpoint, which basically tells this node, okay, we're going to use
143:13 Perplexity's API. You can see that the curl had no query parameters, so that's
143:17 left off. It did turn on headers, which is basically just having us put our API
143:21 key in there. And then of course we have the JSON body that we need to send over.
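For reference, the imported curl is roughly the one below, reconstructed from what's shown on screen; confirm the exact shape against Perplexity's API reference:

```bash
curl -X POST "https://api.perplexity.ai/chat/completions" \
  -H "Authorization: Bearer YOUR_PERPLEXITY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sonar",
    "messages": [
      {"role": "system", "content": "Be precise and concise."},
      {"role": "user", "content": "How many stars are there in our galaxy?"}
    ]
  }'
```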
143:24 Okay. So at this point, now that we have this set up, we know
143:28 we just need to put in a few things of our own. The first thing to tackle is
143:32 how do we actually authorize ourselves with Perplexity? All right.
143:35 So, I'm back in Perplexity. I'm going to go to my account and click on settings.
143:37 And then all I'm going to do is find where I can get
143:41 my API key. So, on the left-hand side, if I go all the way down, I can see API
143:45 keys. I'll click on that, and this shows my API
143:48 key right here. So, I'll click on this, hit copy, come back into n8n,
143:52 and then all I'm going to do is just delete where this says
143:55 token, but I'm going to make sure to leave a space after Bearer and hit
144:00 paste. So now this basically authorizes us to use Perplexity's endpoint.
144:04 And now if we look down at the body request, we can see we have this thing
144:08 set up for us already. So if I hit test step, it's real quick going to make a
144:12 request over to Perplexity's endpoint. And as you can see, it came
144:15 back with data. And what this did is it basically searched Perplexity for how
144:20 many stars are there in our galaxy. And that's where right here we can see the
144:23 Milky Way galaxy, which is our galaxy, is estimated to contain between 100
144:27 billion and 400 billion stars, blah blah blah. So we know, okay, if we
144:30 want to change how this endpoint works and what data we're going to get back,
144:34 this right here is where we would change our request. And if we go back into the
144:37 documentation, we can see what else we have to set up. The first thing to
144:40 notice is that there are a few things that are required and some things that
144:43 are not. So right here we have, you know, authorization, which is always required.
144:47 The model is always required, like which Perplexity model are we going to use?
144:51 The messages are always required. This is basically a mix of a system
144:55 message and a user message. So here the example is "be precise and concise," and
144:58 then the user message is "how many stars are there in the galaxy?" So if I came
145:03 back here and said, you know, "be funny in your answer," I'm basically
145:08 telling this model how to act. And then instead of "how many stars are there in
145:11 the galaxy," I'm just going to say "how long do cows live?" And I'll make another
145:16 request off to Perplexity. You can see what comes back is this longer
145:20 content. So it's not being precise and concise, and it says, "so you're wondering
145:25 how long cows live. Well, let's move into the details." So, as you can see,
145:28 it's being funny. Okay, back into the API documentation. We have a few other
145:31 things that we could configure, but notice how these aren't required the
145:35 same way that those ones are. So, we have max tokens. We could basically put
145:39 in an integer and say how many tokens we want to use at the maximum. We could
145:42 change the temperature, which is, you know, like how random the response will
145:45 be. And this one says it has to be between zero and two. And as we
145:48 keep scrolling down, you can see that there are a ton of other little levers
145:51 that we can just tweak a little bit to change the type of response that we get
145:55 back from Perplexity. And so once you start to read more and more API documentation,
145:58 you can understand how you're really in control of what you get back from the
146:02 server. And you can also see, you know, sometimes you have to send over
146:05 booleans, which is basically just true or false. Sometimes you can only send
146:09 over numbers, sometimes you can only send over strings. And sometimes it'll
146:12 tell you what this value will default to, and also what the
146:15 only accepted values are that you could actually fill out. So for example,
146:18 if we go back to this temperature setting, we can see it has to be a
146:22 number, and if you don't fill that out, it's going to default to 0.2. But we can also
146:26 see that if you do fill this out, it has to be between zero and two. Otherwise,
146:30 it's not going to work.
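So a body that uses a couple of those optional levers might look like this; the values are illustrative, and the docs define the allowed ranges and defaults:

```json
{
  "model": "sonar",
  "messages": [
    {"role": "system", "content": "Be funny in your answer."},
    {"role": "user", "content": "How long do cows live?"}
  ],
  "max_tokens": 500,
  "temperature": 0.2
}
```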
146:34 Okay, cool. So that's basically how it works. We just set up an HTTP request,
146:36 we changed the system prompt and the user prompt, and that's how we can
146:40 customize this thing to work for us. And that's really cool as a node, because
146:44 we can set up, you know, a workflow to pass over some sort of variable into this
146:47 request, so it searches the web for something different every time. But now
146:50 let's say we want to give our agent access to this tool, and the agent will
146:54 decide what to search the web for based on what the human asks me. So it's pretty much the exact same process. We'll click
146:57 on add a tool, and we're going to add an HTTP request tool, only because Perplexity
147:01 doesn't have a native integration. And then, once again, you can see we have an
147:04 import curl button. So if I click on this and just import that same curl
147:07 that we did last time, once again it fills out this whole thing for us. So we
147:11 have POST, we have the Perplexity endpoint, we have our authorization
147:14 bearer, but notice we have to put in our token once again. And so a cool little
147:18 hack is, let's say you know you're going to use Perplexity a lot. Rather than
147:22 having to go grab your API key every single time, what we can do is
147:25 just send it over right here in the authentication tab. So let me show you
147:29 what I mean by that. If I click into authentication, I can click on generic
147:33 credential type. And then from here, I can basically choose, okay, is this a
147:37 Basic Auth, a Bearer Auth, all this kind of stuff. A lot of times it's just going to
147:39 be header auth. So that's why we know right here we can click on header auth.
147:42 And as you can see, we know that because we're sending this over as a header
147:45 parameter, and we just did this earlier and it worked. So as you can see, I have
147:50 header auths already set up. I probably already have a Perplexity one set up right
147:53 here. But I'm just going to go ahead and create a new one with you guys to show
147:56 you how this works. So I just create a new header auth, and all we have to do is
148:01 the exact same thing that we had down in the request that we just sent over, which
148:04 means in the name we're just going to type in Authorization with a capital A,
148:08 and once again, we can see in the API docs this is how you do it. So,
148:11 Authorization, and then we can see that the value has to be capital-B Bearer,
148:16 space, API token. So I'm just going to come in here: Bearer, space, API token.
148:21 And then all I have to do is, first of all, name this so we can
148:26 save it. And then if I hit save, now every single time we want to use Perplexity's
148:30 endpoint, we already have our credentials saved. So that's great. And then we can
148:33 turn off the headers down here because we don't need to send them over twice. So
148:37 Now all we have to do is change this body request a little bit just to make
148:41 it more dynamic. In order to make it dynamic, the first thing we have to do is
148:44 change this to an expression. Now we can see that we can basically add a
148:48 variable in here. And what we can do is add a variable that basically
148:52 just tells the AI model, the AI agent, here is where you're going to send over
148:57 your internet search query. And we already know that all that is is
149:01 the user content right here. So if I delete this, and if I do two
149:06 curly braces, and then within the curly braces I do a dollar sign and type in
149:10 "from", I can grab the $fromAI function. And this $fromAI function just indicates to
149:15 the AI agent, I need to choose something to send over here. You guys will see
149:19 an example and it will make more sense. I also did a full video breaking this
149:21 down, so if you want to see that, I'll tag it right up here. Anyways, as you
149:25 can see, all we really have to do is enter in a key. So I'm just going to do,
149:28 you know, two quotes, and within the quotes I'm going to put in "search term".
149:32 And so now the agent will read this and say, okay, whenever the user
149:35 interacts with me and I know I need to search the internet, I'm just going to
149:39 fill this whole thing in with the search term.
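So the finished user message ends up looking something like this. The key name search_term is our choice here, and the exact quoting of n8n's $fromAI expression is worth double-checking in their docs:

```json
{
  "model": "sonar",
  "messages": [
    {"role": "system", "content": "Be precise and concise."},
    {"role": "user", "content": "{{ $fromAI('search_term') }}"}
  ]
}
```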
149:43 So, now that that's set up, I'm going to rename this request. Actually, I'm just going to call it "web
149:46 search" to make it super intuitive for the AI agent. And now what we're going
149:49 to do is talk to the agent and see if it can actually search
149:53 the web. Okay, so I'm asking the AI agent to search the web for the best
149:57 movies. It's going to think about it. It's going to use this tool right here
150:00 and then we'll basically get to go in there and we can see what it filled in
150:03 in that search term placeholder that we gave it. So, first of all, the answer it
150:09 gave us was IMDb's Top 250, Rotten Tomatoes, all this kind of stuff, right?
150:12 So, that's movie information that we just got from Perplexity. And what I can
150:16 do is click into the tool and we can see in the top left it filled out search
150:20 term with best movies. And we can even see that in action. If we come down to
150:23 the body request that it sent over and we expand this, we can see on the right
150:27 hand side in the result panel, this is the JSON body that it sent over to
150:32 Perplexity and it filled it in with best movies. And then of course what we got
150:36 back was our content from Perplexity, which is, you know, here are some of the
150:40 best movies across major platforms. All right, then real quick before we wrap up
150:43 here, I just wanted to talk about some common responses that you can get from
150:47 your HTTP requests. So the rule of thumb to follow is if you get data back and
150:52 you get a 200, you're good. Sometimes you'll get a response back, but you
150:55 won't explicitly see a 200 message. But if you're getting the data back, then
150:59 you're good to go. And a quick example of this is down here: we have that HTTP
151:02 request, which we went over earlier in this video, where we went to OpenWeatherMap's
151:06 API, and you can see down here we got code 200 and there's data coming
151:11 back, and 200 is good; that's a success code. Now, if you get a response in the
151:15 400s, that means you probably set up the request wrong. So 400, bad request:
151:19 that could mean that your JSON's invalid. It could just mean that you
151:22 have an extra quotation mark, or you have, you know, an extra comma, something
151:26 as silly as that. So let me show a quick example of that. We're going to test
151:29 this workflow, and what I'm doing is trying to send over a query to Tavily. And
151:32 you can see what we get is an error that says "JSON parameter needs to be valid
151:36 JSON." This would be a 400 error. And the issue here is, if we go into the JSON
151:40 body that we're trying to send over, you can see in the result panel we're
151:44 trying to send over a query that has basically two sets of quotation marks,
151:47 if you can see that. But here's the great news about JSON: it is so
151:51 universally used and it's been around for so long that we can basically just
151:55 copy the result over to ChatGPT, paste it in there, and say, I'm getting an error
151:59 message that says "JSON parameter needs to be valid JSON," what's wrong with my
152:02 JSON? And as you can see, it says the issue with your JSON is the use of
152:06 double quotes around the string value in this line. So now we'd be able to go fix
152:09 that. And if we go back into the workflow, we take away these double
152:13 quotes right here, test the step again, and now you can see it's spinning and it's
152:16 going to work. And we should get back some information about pineapples on
152:19 pizza.
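To make that concrete, here's roughly what the broken body looked like next to the fixed one; the query text is from this example, and the exact spacing is illustrative:

```
{"query": ""are pineapples good on pizza?""}   <- invalid: extra quotes around the value
{"query": "are pineapples good on pizza?"}     <- valid JSON
```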
152:23 Another common error you could run into is a 401, meaning unauthorized. This
152:26 typically just means that your API key is wrong. You could also get a 403,
152:30 which is forbidden. That just means that maybe your account doesn't have access
152:33 to the data that you're requesting, or something like that. And then another
152:36 one you could get is a 404, which you'll sometimes get if you type in
152:39 a URL that doesn't exist. It just means, this doesn't exist, we can't find it.
152:43 And a lot of times, the actual API documentation that you want to set up a
152:47 request to will show you what typical responses could look like. So here's an
152:50 example with Tavily, where we're using Tavily to search for "who is Leo Messi?"
152:54 This was an example we looked at earlier. And with a 200 response, we're getting
152:58 back a query, an answer, results, stuff like that. We can also see we could get
153:02 a 400, which would be a bad request, you know, an invalid topic. We could have a
153:06 401, which means invalid API key. We could get all these other ones like 429 and 432, but in general
153:12 a 400 means something on your end is bad. And then even worse is a 500, which
153:15 basically means something's wrong with the server. Maybe it doesn't exist anymore, or there's a
153:19 bug on the server side. But the good news about a 500 is it's not your fault.
153:22 You didn't set up the request wrong; it just means something's wrong with the
153:26 server. And it's really important to know that, because if you think you did
153:28 something wrong, but it's really not your fault at all, you may be banging
153:32 your head against the wall for hours.
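By the way, if you ever want to sanity-check just the status code outside of n8n, curl can print it for you. These are standard curl flags; the OpenWeatherMap URL is the same one from the example above, with a placeholder key:

```bash
# -s silences progress, -o /dev/null discards the body, -w prints the status code.
curl -s -o /dev/null -w "%{http_code}\n" \
  "https://api.openweathermap.org/data/2.5/weather?lat=41.88&lon=-87.63&appid=YOUR_API_KEY"
# 200 = success, 4xx = your request or key is wrong, 5xx = the server's problem.
```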
153:35 So anyways, what I wanted to highlight here is that there's never just a
153:39 one-size-fits-all, as in, I know how to set up this one API call, so I can set up
153:42 every single other API call the exact same way. The key is to really understand,
153:45 how do you read the API documentation? How do you set up your body parameters
153:48 and your different header parameters? And then, if you start to run into issues,
153:52 the key is understanding and actually reading the error message that you're
153:55 getting back and adjusting from there. All right, so that's going to do it for this video.
153:58 Hopefully this has left you feeling a lot more comfortable with diving into
154:02 API documentation, walking through it step by step using those curl
154:06 commands, and really just understanding that all I'm doing is setting up filters
154:09 and levers here. I don't have to get super confused. It's really not that
154:12 technical. I'm pretty much in complete control over what my API is going to
154:16 send me back, the same way I'm in complete control when I'm, you know,
154:19 ordering something on DoorDash or ordering something on Amazon, whatever
154:23 it is. Hopefully by now the concept of APIs and HTTP requests makes a lot more
154:27 sense. But really, just to drive it home, what we're going to do is hop into
154:30 some actual setups in n8n, connecting to some different popular APIs, and walk
154:35 through a few more step-by-steps just to really make sure that we understand the
154:38 differences that can come with different API documentation, how you read it,
154:41 and how you set up stuff like your credentials and the body requests. So,
154:45 let's move into this next part, which I think is going to be super valuable to
154:49 see different API calls in action. Okay, so in n8n, when we're working with a
154:52 large language model, whether that's an AI agent or just an AI node, what
154:57 happens is we can only access the information that is in the large
155:00 language model's training data. And a lot of times that's not going to be super
155:04 up-to-date and real-time. So what we want to do is access different APIs that
155:08 let us search the web or do real-time search. What we saw earlier, in that
155:13 third step-by-step workflow, was we used a tool called Tavily, and we accessed it
155:17 through an HTTP request node, which, as you guys know, looks like this. And we
155:21 were able to use this to communicate with Tavily's API server. So if we ever
155:25 want to access real-time information or do research on certain search terms, we
155:30 have to use some sort of API to do that. So, like I said, we talked about Tavily,
155:34 but in this video, I'm going to help you guys set up Perplexity, which, if you
155:38 don't know what it is, is kind of like ChatGPT, but it's really, really good
155:42 for web search and in-depth research. And it has that same sort of, you
155:46 know, chat interface as ChatGPT, but what you also have is access to the API. So,
155:50 if I click on API, we can see this little screen, but what we want to go to
155:54 is the developer docs. And in the developer docs, what we're looking for
155:57 is the API reference. We can also click on the quick start guide right here,
156:00 which just shows you how you can set up your API key and all that kind of
156:03 stuff. So that's exactly what we're going to do: set up an API call to
156:08 Perplexity. So I'm going to click on API reference, and what we see here is the
156:12 endpoint to access Perplexity's API. And so what I'm going to do is just grab
156:15 this curl command from that right-hand side, go back into our n8n, and I'm just
156:19 going to import the curl right into here. And then all we have to do from
156:22 there is basically configure what we want to research and put in our own API
156:27 key. So there we go. We have our node pretty much configured. And now the
156:30 first thing we see we need to set up is our authorization API key. And what we
156:34 could do is set this up in here as a generic credential type and save it. But
156:37 right now we're just going to keep things as simple as possible where we
156:40 imported the curl. And now I'm just going to show you where to plug in
156:43 little things. So we have to go back over to Perplexity and we need to go get
156:46 an API key. So I'm going to come over here to the left and I'm going to click
156:49 on my settings. And hopefully in here we're able to find where our API key
156:53 lives. Now we can see in the bottom left over here we have API keys. And what I'm
156:56 going to do is come in here and just create a new secret key. And we just got
156:59 a new one generated. So I'm just going to click on this button, click on copy,
157:03 and all we have to do is replace the word right here that says token. So I'm
157:07 just going to delete that. I'm going to make sure to leave a space after the
157:10 word bearer. And I'm going to paste in my Perplexity API key. So now we should
157:14 be connected. And now what we need to do is we need to set up the actual body
157:17 request. So if I go back into the documentation, we can see this is
157:21 basically what we're sending over. So that first thing is a model, which is
157:24 the name of the model that will complete your prompt. And if we wanted to look at
157:27 different models, we could click into here and look at other supported models
157:31 from Perplexity. So it takes us to this screen, we click on models, and we can
157:35 see we have Sonar Pro or Sonar. We have Sonar Deep Research. We have some
157:38 reasoning models as well. But just to keep things simple, I'm going to stick
157:41 with the default model right now, which is just sonar. Then we have an object
157:45 that we're sending over, which is messages. And within the messages
157:49 object, we have a few things. First of all, we're sending over content, which
157:53 is the contents of the message in this turn of the conversation. It can be a string
157:57 or an array of parts. And then we have a role, which is going to be the role of
158:00 the speaker in the conversation, and the available options are system, user, or
158:05 assistant. So what you can see in our request is that we're sending over a
158:08 system message as well as a user message. And the system message is
158:11 basically the instructions for how this AI model on Perplexity should act. And
158:15 then the user message is our dynamic search query that is going to change
158:19 every time. And if we go back into the documentation, we can see that there are
158:22 a few other things we could add, but we don't have to. We could tell Perplexity
158:26 what is the max tokens we want to use, what is the temperature we want to use.
158:30 We could have it only search for things in the past week or day. So, this
158:34 documentation is basically going to be all the filters and settings that you
158:38 have access to in order to customize the type of results that you want to get
158:41 back. But, like I said, we're keeping this one really simple. We just want to search
158:45 the web. All I'm going to do is keep it as is. And if I disconnect this real
158:48 quick and we come in and test the step, it's basically going to be searching
158:51 Perplexity for "how many stars are there in our galaxy?" And then the AI model,
158:56 sonar, is the one that's going to grab all of these five sources, and it's going
159:00 to answer us. And right here it says the Milky Way galaxy, which is our home
159:04 galaxy, is estimated to contain between 100 billion and 400 billion stars; this
159:08 range is due to the difficulty, blah blah blah. So that's basically how
159:11 it was able to answer us: because it used an AI model called sonar. So now, if we
159:15 wanted to make this search a little bit more dynamic, we could basically plug
159:19 this in, and you can see in here what I'm doing is just setting a search term.
159:23 So let's test this step. What happens is the output of this node is a research
159:27 term. Then we can reference that variable of research term right in here,
159:31 in our actual body request to Perplexity. So I would delete this fixed
159:35 message, which is "how many stars are there in our galaxy?" And all I would do
159:38 is drag in research term from the left, put it in between the two quotes,
159:43 and now it's coming over dynamically as "Anthropic latest developments."
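So the body now references the previous node's output instead of a fixed string, roughly like this; research_term assumes that's what the field coming from the previous node is called (the display name in the video is "research term"):

```json
{
  "model": "sonar",
  "messages": [
    {"role": "system", "content": "Be precise and concise."},
    {"role": "user", "content": "{{ $json.research_term }}"}
  ]
}
```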
159:47 And all I'd have to do now is hit test step, and we will get an answer from Perplexity
159:51 about Anthropic's recent developments. There we go. It just came back. We can
159:54 see there are five different sources right here. It went to Anthropic, it went to
159:57 YouTube, it went to TechCrunch. And what we get is that today, May 22nd, so real-
160:03 time information, Claude Opus 4 was released. And that literally came out
160:07 like 2 or 3 hours ago. So that's how we know this is searching the web in real
160:11 time. And then all we'd have to do is have, you know, maybe an AI model
160:14 changing our search term, or maybe we're pulling from a Google Sheet with a bunch
160:17 of different topics we need to research. But whatever it is, as long as we are
160:21 passing over that variable, the actual search result from Perplexity is going
160:25 to change every single time. And that's the whole point of variables, right?
160:28 They vary. They're dynamic. So, I know that one was quick, but Perplexity
160:32 is a super, super versatile tool and probably an API that you're going to be
160:35 calling a ton of times, so I just wanted to get that one in there. All right, so Firecrawl is going to allow us
160:43 to turn any website into LLM-ready data in a matter of seconds. And as you can
160:46 see right here, it's also open source. So, once you get over to Firecrawl, click
160:49 on this button and you'll be able to get 500 free credits to play around with. As
160:52 you can see, there are four different things we can do with Firecrawl. We can
160:57 scrape, we can crawl, we can map, or we can do this new extract, which basically
161:01 means we can give Firecrawl a URL and also a prompt, like, can you please
161:04 extract the company name, the services they offer, and an icebreaker
161:08 out of this URL. So there are some really cool use cases we can cover with
161:11 Firecrawl. In this video we're going to be mainly looking at extract, but I'm
161:14 also going to show you the difference between scrape and extract. And we're
161:17 going to get into n8n and connect it up so you can see how this works. But the
161:20 playground is going to be a really good place to understand the difference
161:23 between these different endpoints. All right, so for the sake of this video,
161:26 this is the website we're going to be looking at. It's called Quotes to
161:29 Scrape (quotes.toscrape.com). And as you can see, it's got like 10 quotes on this first page, and it also
161:32 has different pages for different categories of quotes. And as you can
161:35 see, if we click into them, there are different quotes. So what I'm going to
161:37 do is go back to the main screen, and I'm going to copy the URL of this website,
161:41 and we're going to go into n8n. We're going to open up a new node, which is
161:45 going to be an HTTP request. And this is just to show you what a standard GET
161:49 request to a static website looks like. So we're going to paste in the URL, hit
161:53 test step, and on the right-hand side we're going to get all the HTML back
161:57 from the Quotes to Scrape website. Like I said, what we're looking at here is a
162:00 nasty chunk of HTML. It's pretty hard for us to read, but basically what's
162:04 going on here is, this is the code that goes to the website in order to have it
162:07 be styled with different fonts and different colors. So right here, what
162:10 we're looking at is the entire first page of this website. So if we were to
162:14 search for Harry, if I copy this, we go back into n8n and we Ctrl+F this,
162:19 you can see there is the exact quote that has the word Harry. So everything
162:22 from the website's in here; it's just wrapped up in kind of an ugly chunk of
162:26 HTML. Now, hopping back over to the Firecrawl playground, using the scrape
162:30 endpoint, we can replace that same URL. We'll run this, and it's going to output
162:33 markdown formatting. So now we can see we actually have everything we're
162:36 looking for, with the different quotes, and it's a lot more readable for a human. So
162:41 that's what a web scrape is, right? We get the information back, whether that's
162:44 HTML or markdown, but then we would typically feed that into some sort of
162:47 LLM in order to extract the information we're looking for. In this case, we'd be
162:52 looking for different quotes. But what we can do with extract is give it
162:56 the URL and then also say, hey, get all of the quotes on here. And using this
162:59 method, we can say, not just these first 10 on this page: I want you to crawl
163:03 through the whole site and basically get all of these quotes, all of these
163:06 quotes, all of these quotes, all of these quotes. So it's going to be really
163:09 cool. I'm going to show how this works in Firecrawl, and then we're going
163:11 to plug it into n8n. All right. So what we're doing here is we're saying
163:14 extract all of the quotes and authors from this website. I gave it the website,
163:18 and now what it's doing is generating the different parameters that
163:22 the LLM will be looking to extract out of the content of the website. Okay. So
163:26 here's the run we're about to execute. We have the URL, and then we have our
163:29 schema for what the LLM is going to be looking for. And it's looking for text,
163:33 which would be the quote, and it's a string. And then it's also going to be
163:35 looking for the author of that quote, which is also a string. And then the
163:39 prompt we're feeding here to the LLM is "extract all quotes and their
163:43 corresponding authors from the website." So we're going to hit run, and we're
163:46 going to see that it's not only going to go to that first URL; it's basically
163:49 going to take that main domain, which is quotes.toscrape.com, and it's going to
163:53 crawl through the other sections of this website in order to come back
163:56 and scrape all the quotes on there. Also, quick plug: go ahead and use code
164:00 herk10 to get 10% off the first 12 months on your Firecrawl plan. Okay, so
164:04 it just finished up. As you can see, we have 79 quotes. So down here we have a
164:09 JSON response where there's going to be an object called quotes, and in there we
164:12 have a bunch of different items: you know, text, author, text, author,
164:17 text, author. And we have pretty much everything from that website now. Okay,
164:20 cool. But what we want to do is look at how we can do this in n8n, so that if we
164:25 have, you know, a list of 20, 30, 40 URLs that we want to extract information from, we can
164:28 just loop through and send off that automation, rather than having to come in
164:33 here and type that out in Firecrawl. Okay. So what we're going to do is go
164:35 back into n8n. And I apologize, because there may be some jumping around
164:38 here, but we're basically just gonna clear out this HTTP request and grab a
164:42 new one. Now, what we want to do is go into Firecrawl's
164:45 documentation. All we have to do is import the curl command for the extract
164:48 endpoint, rather than trying to figure out how to fill out these different
164:51 parameters. So, back in Firecrawl, once you set up your account, up in the top
164:54 right, you'll see a button called docs. You want to click into there. And now
164:57 we can see a quick start guide and the different endpoints. What we're
165:00 going to do is, on the left, scroll down to features and click on extract. And
165:04 this is what we're looking for. So, we've got some information here. The
165:07 first thing to look at is that when you're using extract, you can extract
165:10 structured data from one or multiple URLs, including wildcards. So, what we
165:14 did was, we didn't just scrape one single page. We basically scraped through all
165:18 of the pages that had the main base domain of quotes.toscrape.com.
165:23 And if you put an asterisk after it, it basically
165:26 means this is a wildcard, and it's going to go scrape all pages under it,
165:30 rather than just scraping this one predefined page. As you can see right
165:34 here, it'll automatically crawl and parse all the URLs it can discover, then
165:38 extract the requested data. And we can see that's how it worked, because if we
165:41 come back into the request we just made, we can see right here that it added a
165:45 slash with an asterisk after quotes.toscrape.com. Okay. Anyway, so what we're
165:48 looking for here is this curl command. This is basically going to fill out the
165:51 method, which is going to be a POST request. It's going to fill out the
165:54 endpoint. It'll fill out the content type, and it'll show us how to set up
165:58 our authorization. And then we'll have a body request that we'll need to make
166:02 some minor changes to. So in the top right, I'm going to click copy, and I'm
166:05 going to come back into n8n, hit import curl, paste that in there, and hit
166:09 import. And as you can see, everything pretty much just got populated. So like
166:12 I said, the method is going to be a POST. We have the endpoint already set up. And
166:15 what I want to do is show you guys how to set up this authorization so that we
166:18 can keep it saved forever, rather than having to put it in here in the
166:22 configuration panel every time. So first of all, head back over to your
166:26 Firecrawl. Go to API keys on the left-hand side, and you're just going to want
166:29 to copy that API key. Once you have that copied, head back into n8n. And now
166:33 let's look at how we actually set this up. Typically, what we have is
166:37 this as a header parameter. Not all authorizations are headers, but this one
166:41 is a header. And the key, or the name, is Authorization, and the value is Bearer,
166:46 space, your API key. So what you'd typically do is just paste in your API
166:50 key right there, and you'd be good to go. But what we want to do is
166:53 save our Firecrawl credential the same way you'd save, you know, a Google
166:58 Sheets credential or a Slack credential. So, we're going to come into
167:01 authentication, click on generic, click on generic credential type, and
167:04 choose header, because we know down here it's a header auth. And then you can see
167:07 I have some other credentials already saved. We're going to create a new one.
167:11 I'm just going to name this Firecrawl to keep ourselves organized. For the name,
167:14 we're going to put Authorization. And for the value, we're going to type
167:18 Bearer with a capital B, space, and then paste in our API key. And we'll hit
167:21 save. And this is going to be the exact same thing that we just did down below,
167:25 except now we have it saved. So, we can actually flick this field off. We
167:28 don't need to send headers, because we're sending them right here. And now we just
167:32 need to figure out how to configure this body request. Okay, so I'm going to
167:35 change this to an expression and open it up just so we can take a look at it. The
167:38 first thing we notice is that by default there are three URLs in here that we
167:41 would be extracting from. We don't want that here. So I'm going to grab
167:44 everything within the array, but I'm going to keep the two quotation marks.
167:47 Now all we need to do is put the URL that we're looking to extract
167:49 information from in between these quotation marks. So here I just put in
167:53 quotes.toscrape.com. But what we want to do, if you remember, is
167:57 put an asterisk after that, so that it will go and crawl all of the pages, not
168:01 just that first page, which would only have like nine or ten quotes. And
168:04 now the rest is going to be really easy to configure, because we already did this
168:07 in the playground, so we know exactly what goes where. So I'm going to click
168:10 back into our playground example. The first thing is, this is the prompt that
168:13 Firecrawl sent off. So I'm going to copy that, go back into n8n, and I'm just
168:17 going to replace the prompt right here. We don't want "the company mission" blah
168:20 blah blah. We want to paste this in here: we're looking to extract all quotes
168:24 and their corresponding authors from the website. And then next is basically
168:27 telling the LLM, what are you pulling back? We just told it it's pulling
168:31 back quotes and authors, so we need to actually make the schema down here in
168:36 the body request match the prompt. All we have to do is go back into our
168:39 playground. Right here is the schema that we sent over in our example. And
168:42 I'm just going to click on JSON view, and I'm going to copy this entire thing,
168:46 which is wrapped up in curly braces. We'll come back into n8n and we'll start
168:51 after "schema", colon, space. Replace all this with what we just had in
168:55 Firecrawl. And actually, I've noticed that the way this copied over, it's not
168:58 going to work. So let me show you guys that real quick. If we hit test step,
169:01 it's going to say "JSON parameter needs to be valid JSON." So what I'm going to
169:05 do is copy all of this. Now I came into ChatGPT and I'm just saying,
169:08 fix this JSON. What it's going to do is just basically push these
169:12 over. When you copy it over from Firecrawl, it kind of aligns them on the
169:15 left, but you don't want that. So, as you can see, it just basically pushed
169:18 everything over. We'll copy this into our n8n right there. And all it did
169:21 was bump everything over once, and now we should be good to go.
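For reference, the assembled extract request ends up roughly like this. It's reconstructed from the walkthrough; the /v1/extract path and field names follow Firecrawl's docs at the time, so verify them there:

```bash
# One wildcard URL, a natural-language prompt, and a JSON schema for the output.
curl -X POST "https://api.firecrawl.dev/v1/extract" \
  -H "Authorization: Bearer YOUR_FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "urls": ["https://quotes.toscrape.com/*"],
    "prompt": "Extract all quotes and their corresponding authors from the website.",
    "schema": {
      "type": "object",
      "properties": {
        "quotes": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "text": {"type": "string"},
              "author": {"type": "string"}
            }
          }
        }
      }
    }
  }'
```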
169:24 So, real quick, before we test this out, I'm just going to call this "extract". And then we'll hit
169:28 test step. And we should see that it runs, and it's going to
169:33 give us a message that says success: true, and it gives us an ID. And so now what we
169:37 need to do next is poll this ID to see if our request has been fulfilled
169:41 yet. So I'm back in the documentation, and now we are going to look down
169:45 here at asynchronous extraction and status checking. This is how we check the
169:49 status of a request, and as you saw, we just made one. So here I'm going to click
169:52 copy on this curl command. We're going to come back into n8n, and we're going
169:56 to add another HTTP request, and we're going to import that in there. And you
170:00 can see this one is going to be a GET command. It's going to have a different
170:02 endpoint. And what we need to do, if you look back at the documentation, is at the
170:07 end of extract, slash, we have to put the extract ID that we're looking to
170:13 check the status of. So back in n8n, the ID is going to be coming from the left-hand
170:16 side, the previous node, every time. So I'm just going to change the URL field
170:21 to an expression, put a slash, and then I'm going to grab the ID and pull it
170:25 right in there, and we're good to go. Except we need to set up our credential.
170:28 And this is why it's great: we already set this up as a generic header credential.
170:32 So now we can just easily pull in our Firecrawl auth and hit test step. So
170:37 what happens now is, our request hasn't been done yet. As you can see, it
170:41 comes back as processing, and the data is an empty array. So what we're going to
170:44 set up real quick is something called polling, where we're basically checking
170:48 in on a specific ID, which is this one right here. And we're going to check, and
170:51 if it's empty, if the data field is empty, then that means we're going to
170:55 wait a certain amount of time and come back and try again.
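Here's the same polling idea written out as a sketch. In n8n this is the If node plus a Wait node looping back into the status check; the status-check endpoint shape follows Firecrawl's docs, and the "processing" check is an assumption based on what we see come back:

```bash
EXTRACT_ID="your-extract-id"   # the ID returned by the extract request
while true; do
  BODY=$(curl -s "https://api.firecrawl.dev/v1/extract/$EXTRACT_ID" \
    -H "Authorization: Bearer YOUR_FIRECRAWL_API_KEY")
  # keep looping while the job still reports processing; break once it's done
  echo "$BODY" | grep -q '"status": *"processing"' || break
  sleep 5   # wait five seconds, then check again
done
echo "$BODY"
```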
170:59 So after the request, I'm going to add an If node. This is basically going to help us
171:02 create our filter. So, we're dragging in JSON.data, which, as you can see, is an
171:06 empty array, and we're just going to say "is empty". But one thing you have to keep
171:10 in mind is that this doesn't match: as you can see, we're dragging in an array, and
171:14 we were trying to apply a filter for a string. So, we have to go to array and
171:18 then say "is empty". And we'll hit test step, and this is going to say true: the
171:23 data field is empty. And so, if true, what we want to do is add
171:27 a wait node. And this will wait for, you know, in this case we'll just
171:30 say five seconds. So if we hit test step, it's going to wait for five
171:33 seconds. And I wish I had switched the logic so that this branch would be
171:37 on the bottom, but whatever. And then we would just drag this right back into
171:41 here, and we would try it again. So now, after 5 seconds had passed, or however
171:45 much time, we would try this again. And now we can see that we have our item
171:48 back, and the data field is no longer empty, because we have our quotes object,
171:53 which has 83 quotes. So we even got more than the 79 we got when we did it in the
171:56 playground. And I'm thinking this is just because, you know, the extract is
171:59 kind of still in beta, so it may not be super consistent. But that's still way
172:03 better than if we were to just do a simple GET request. And then, as you
172:07 can see now, if we ran this next step, this would come out. Ah, but this is interesting. So
172:13 Before it knows what it's pulling back, the JSON.data field is an array, and so
172:17 we're able to set up "is the array empty?" But now it's an object, so we can't put
172:21 it through the same filter, because we're looking at a filter for an array. So
172:25 what I'm thinking here is, we could set up this "continue using error output". So
172:29 because this node would error, we could hit test step, and we can see now
172:33 it's going to go down the false branch. And this basically just means it's
172:36 going to let us continue moving through the process, and we could then do
172:38 whatever we want to do down here. Obviously this isn't perfect, because I
172:41 just set this up to show you guys and ran into that. But that's typically sort
172:45 of the way we would think: how can we make this a little more dynamic, because
172:49 it has to deal with empty arrays or potentially full objects. Anyways, what
172:52 I wanted to show you guys now is, back in our request, what would happen if we
172:56 were to get rid of this asterisk? So, we're just going to run this whole
172:59 process again. I'll hit test workflow. And now it's going to be sending that
173:04 request to only, you know, one URL rather than the wildcard one. Aha. And I'm
173:08 glad we're doing live testing, because I made the mistake of putting this in as
173:12 JSON.id, which doesn't exist if we're pulling from the wait node. So all we
173:16 have to do in here is get rid of JSON.id and pull in, basically, you know, a
173:21 node reference variable. So we're going to do two curly braces, we're going to
173:25 be pulling from the extract node, and now we just want to say
173:29 item.json.id, and we should be good to go now. So I'm just going to refresh this
173:33 and we'll completely do it again. So, test workflow: we're doing the exact
173:37 same thing. It's not ready yet, so we're going to wait 5 seconds and then
173:40 we're going to go check again. We hopefully should see, okay, it's not
173:42 ready still, so we're going to wait five more seconds and come check again. And
173:46 then, whenever it is ready, as you can see, it goes down this branch, and
173:50 we can see that we actually get our items back. And what you see here is
173:54 that this time we only got 10 quotes. You know, it says nine, but
173:57 computers count from zero. We only got 10 quotes because we didn't put
174:04 an asterisk after the URL. So, Firecrawl didn't know, "I need to go scrape
174:07 everything out of this whole base URL"; it's only going to scrape this one
174:11 specific page, which is this one right here, which does in fact only have 10
174:15 quotes. And by the way, this is a super simple template, but if you want to try it
174:18 out and just plug in your API key and different URLs, you can grab it in the
174:22 free Skool community. You'll hop in there, you will click on YouTube
174:24 resources, click on the post associated with this video, and you'll
174:28 have the JSON right there to download. Once you download that, all you have to
174:31 do is import it from file right up here, and you'll have the workflow. So,
174:34 there are a lot of cool use cases for Firecrawl. It'd be cool to be able to
174:38 pull from a sheet, for example, of 30 or 40 or 50 URLs that we want to run
174:42 through and then update based on the results. You could do some really cool
174:45 stuff here, like researching a ton of companies and then having it also create
174:49 some initial outreach for you. So, I hope you guys enjoyed that one.
174:51 Firecrawl is a super cool tool. There's lots of functionality there, and there are
174:55 lots of uses of AI in Firecrawl, which is awesome. We're going to move into a
174:57 different tool that you can use to scrape pretty much anything, which is
175:00 called Apify, which has a ton of different actors, and you can scrape,
175:04 like I said, almost anything. So, let's go into the setup video. So, Apify is
175:08 like a marketplace for actors, which essentially let us scrape anything on
175:10 the internet. As you can see right here, we're able to explore 4,500 plus
175:14 pre-built actors for web scraping and automation. And it's really not that
175:17 complicated. An actor is basically just a predefined script that was already
175:20 built for us that we can just send off a certain request to. So, you can think of
175:23 it like a virtual assistant where you're saying, "Hey, I want to use the TikTok virtual assistant and I want you to scrape, you know, videos that have the hashtag of AI content." Or you could use the LinkedIn job scraper
175:33 and you could say, "I want to find jobs that are titled business analyst." So,
175:36 there's just so many ways you could use Apify. You could get leads from Google
175:39 Maps. You could get Instagram comments. You could get Facebook posts. There's
175:43 just almost unlimited things you can do here. You can even tap into Apollo's
175:46 database of leads and just get a ton. So today I'm just going to show you guys in n8n the easiest way to set up this Apify actor, where you're going to start the
175:53 actor and then you're going to just grab those results. So what you're going to
175:56 want to do is head over to Apify using the link in the description and then use code 30 Nate Herk to get 30% off. Okay, like I said, what we're going to be covering today is a two-step process where you make one request to Apify to start up an actor, then you wait for it to finish up, and then you just pull those results back in. Roughly, those two calls look like the sketch below.
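A rough sketch against Apify's v2 API (the actor ID format and token parameter follow Apify's docs as I read them; verify the exact paths there):

    # step 1: start the actor run (actor ID looks like username~actor-name)
    curl -X POST "https://api.apify.com/v2/acts/ACTOR_ID/runs?token=YOUR_APIFY_TOKEN" \
      -H "Content-Type: application/json" \
      -d @input.json   # the actor's input JSON

    # step 2: once the run finishes, pull the last run's dataset items
    curl "https://api.apify.com/v2/acts/ACTOR_ID/runs/last/dataset/items?token=YOUR_APIFY_TOKEN"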
176:12 So let me show you what that looks like. What I'm going to do is hit test workflow and this is going to start
176:15 the Google Maps actor. And what we're doing here is we're asking for dentists
176:18 in New York. And then if I go to my Apify console and I go over here to
176:22 actors and click on the Google Maps extractor one, if I click on runs, we
176:25 can see that there's one currently finishing up right now. And now that
176:28 it's finished, I can go back into our workflow. I can hook it up to the get
176:32 results node. Hit test step. And this is going to pull in those 50 dentists that
176:36 we just scraped in New York. And you can see this contains information like their
176:39 address, their website, their phone number, all this kind of stuff. So you
176:43 can just basically scrape these lists of leads. So anyways, that's how this
176:46 works, but let's walk through a live setup. So once you're in your Apify console, you click on the Apify store, and this is where you can see all the
176:52 different actors. And let's do an example of like a social media one. So
176:55 I'm going to click on this TikTok scraper since it's just the first one
176:58 right here. And this may seem a little bit confusing, but it's not going to be
177:01 too bad at all. We get to basically do all this with natural language. So let
177:04 me show you guys how this works. So basically, we have this configuration
177:07 panel right here. When you open up any sort of actor, they won't always all be
177:11 the same, but in this one, what we have is videos with this hashtag. So, we can
177:14 put something in. I put in AI content to play around with earlier. And then you
177:17 can see it asks, how many videos do you want back? So, in this case, I put 10.
177:21 Let's just put 25 for the sake of this demo. And then you have the option to
177:24 add more settings. So, down here, we could do, you know, we could add certain
177:26 profiles that we want to scrape. We could add a different search
177:29 functionality. We could even have it download the videos for us. So, once
177:32 you're good with this configuration, and just don't overcomplicate it. Think of it the same way you'd put in filters on an e-commerce website, or the same way you'd fill in your order when you're DoorDashing some
177:42 food. So, now that we have this filled out the way we want it, all I'm going to
177:45 do is come up to the top right and hit API and click API endpoints. The first
177:49 thing we're going to do is we're going to use this endpoint called run actor.
177:52 This is the one that's basically just going to send a request to Apify and
177:55 start this process, but it's not going to give us the live results back. That's
177:59 why the second step later is to pull the results back. What you could do is you
178:02 could run the actor synchronously, meaning it's going to send it off and it's just going to spin in n8n until it's done and until it has the results. But I found this way to be more consistent. So anyways, all you have to
178:12 do is click on copy and it's already going to have copied over your Apify API
178:17 key. So it's really, really simple. All we're going to do here is open up a new
178:20 HTTP request. I'm going to just paste in that URL that we just copied right here.
178:24 And that's basically all we have to do except for we want to change this method
178:28 to post because as you can see right here, it says post. And so this is
178:31 basically just us putting in the actor's phone number. And so we're giving it a
178:35 call. But now what we have to do is actually tell it what we want. So right
178:38 here, we've already filled this out. I'm going to click on JSON and all I have to
178:42 do is just copy this JSON right here. Go back into n8n, flip this on to send a body, and we want to send over just JSON. And then all I have to do is paste that in there. So, as you can see, what we're sending over to this TikTok scraper is "I want AI content and I want 25 results," and then all this other stuff is false. A hedged example of what that body looks like is below.
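A rough sketch of that input body (the exact field names vary by actor and are illustrative here; always copy yours from the actor's JSON tab):

    {
      "hashtags": ["aicontent"],
      "resultsPerPage": 25,
      "shouldDownloadVideos": false
    }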
178:57 So, I'm just going to hit test step. And so, this basically returns us with an ID
179:00 and says, okay, the actor started. If we go back into here and we click on runs,
179:04 we can see that this crawler is now running. And it's going to basically tell us how much it cost, how long it took, and all this kind of stuff. And
179:11 now it's already done. So, what we need to do now is we need to click on API up
179:14 in the top right. Click on API endpoints again and scroll all the way down to the
179:19 bottom where we can see get last run data set items. So, all I need to do is
179:23 hit this copy button right here. Go back into n8n and then open up another HTTP
179:27 request. And then I'm just going to paste that URL right in there once
179:30 again. And I don't even have to change the method because if we go in here, we
179:34 can see that this is a get. So, all I have to do is hit test step. And this is
179:38 going to pull in those 25 results from our Tik Tok scrape based on the search
179:42 term AI content. So, you can see right here it says 25 items. And just to show
179:46 you guys that it really is 25 items, I'm just going to grab a set field. We're
179:49 going to just drag in the actual text from here and hit test step. And it should... oh, we have to connect a trigger. So, I'm just going to move this trigger over here real quick. And what you can do is, because we already have our
180:00 data here, I can just pin it so we don't actually have to run it again. But then
180:03 I'll hit test step. And now we can see we're going to get our 25 items right
180:08 here, which are all of the text content. So I think just the captions or the titles of these TikToks. And we have all 25 TikToks as you can see. So I
180:15 just showed you guys the two-step method. And why I've been using it
180:18 because here's an example where I did the synchronous run. So all I did was I
180:22 came to the Google Maps actor and I went to API endpoints, and then I wanted to do run actor synchronously, which basically means that it would run in n8n and spin until the results were done, and then it should feed back the output. So I copied that, I put it into here, and as you can see I just ran it with the Google Maps actor looking for plumbers and we got nothing back. So that's why we're
180:40 taking this two-step approach where as you can see here we're going to do that
180:43 exact same request. We're doing a request for plumbers and we're going to
180:47 fire this off. And so nothing came back in n8n. But if we go to our actor and
180:50 we go to runs, we can see right here that this was the one that we just made
180:54 for plumbers. And if we click into it, we can see all the plumbers. So that's
180:57 why we're taking the two-step approach. I'm going to make the exact same request
181:00 here for New York plumbers. And what I'm going to do is just run this workflow.
181:03 And now I wanted to talk about what we have to do because what happens is we
181:07 started the actor. And as you can see, it's running right now. And then it went
181:10 to grab the results, but the results aren't done yet. So that's why it comes
181:13 back and says this is an item, but it's empty. So, what we want to do is we want
181:17 to go to our runs and we want to see how long this is taking on average for 50
181:21 leads. As you can see, the most amount of time it's ever taken was 19 seconds.
181:24 So, I'm just going to go in here and in between the start actor and grab
181:28 results, I'm going to add a wait, and I'm just going to tell this thing to
181:31 wait for 22 seconds just to be safe. And now, what I'm going to do is just run
181:33 this thing again. It's going to start the actor. It's going to wait for 22
181:37 seconds. So, if we go back into Apify, you can see that the actor is once again
181:41 running. After about 22 seconds, it's going to pass over and then we should
181:45 get all 50 results back in our HTTP request. There we go. Just finished up.
181:48 And now you can see that we have 50 items which are all of the plumbers that
181:53 we got in New York. So from here, now you have these 50 leads. And remember, if you want to come back into Apify and change up your input, you can
182:00 change how many places you want to extract. So if you changed this to 200
182:03 and then you clicked on JSON and you copied in that body, you would now be
182:07 searching for 200 results. But anyways, that's the hard part: getting the leads into n8n. Now we have all this data about them and we can just, you know, do some research, send them off an email, whatever it is; we can basically have this thing running 24/7. And if you wanted to make this workflow more advanced, to handle a more dynamic number of results, what
182:25 you'd want to use is a technique called polling. So basically, you'd wait, you check in, and then if the results were all done, you continue down the process. But if they weren't all done, you would basically wait again and come back, and you would just loop through this until you're confident that all of the results are done. A rough sketch of that pattern is below.
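As a shell sketch of the same polling idea (in n8n this is the wait/check/loop branch instead; the run-status endpoint and the SUCCEEDED status value are assumptions based on Apify's docs):

    # start the run and capture its ID (requires jq)
    RUN_ID=$(curl -s -X POST "https://api.apify.com/v2/acts/ACTOR_ID/runs?token=$APIFY_TOKEN" \
      -H "Content-Type: application/json" -d @input.json | jq -r '.data.id')

    # poll until the run reports success, then fetch the dataset
    while true; do
      STATUS=$(curl -s "https://api.apify.com/v2/actor-runs/$RUN_ID?token=$APIFY_TOKEN" | jq -r '.data.status')
      [ "$STATUS" = "SUCCEEDED" ] && break
      sleep 5
    done
    curl -s "https://api.apify.com/v2/actor-runs/$RUN_ID/dataset/items?token=$APIFY_TOKEN"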
182:42 So that's going to be it for this one. I'll have this template available in my free Skool community if you want to play around with it. Just remember you'll have to come in here and switch out your own API key. And don't forget, when you get to Apify, you can use code 30 Nate Herk to get 30% off. Okay, so those were some APIs that we can use to actually scrape
182:54 information. Now, what if we want to use APIs to generate some sort of content?
182:58 We're going to look at an image generation API from OpenAI and we're
183:02 going to look at a video generation API called Runway. So these next two
183:05 workflows will explain how you set up those API calls and also how you can
183:09 bake them into a workflow to be a little bit more practical. So let's take a
183:13 look. So, a few weeks ago when ChatGPT came out
183:45 with their image generation model, you probably saw a lot of stuff on LinkedIn
183:48 like this where people were turning themselves into action figures or some
183:51 stuff like this where people were turning themselves into Pixar animation
183:55 style photos or whatever it is. And obviously, I had to try this out myself.
183:58 And of course, this was very cool and everyone was getting really excited. But
184:01 then I started to think about how could this image generation model actually be
184:06 used to save time for a marketing team because this new image model is actually
184:09 good at spelling and it can make words that don't look like gibberish. It opens
184:13 up a world of possibilities. So here's a really quick example of me giving it a
184:16 one-sentence prompt and it spits out a poster that looks pretty solid. Of course, we were limited to having to do this in ChatGPT and coming in here and
184:24 typing, but now the API is released, so we can start to save hours and hours of
184:27 time. And so, the automation I'm going to show you guys today is going to
184:30 help you turn an idea into a fully researched LinkedIn post with a graphic
184:34 as well. And of course, we're going to walk through setting up the HTTP request
184:39 to OpenAI's image generation model. But what you can do is also download this
184:42 entire template for free and you can use it to post on LinkedIn or you can also
184:46 just kind of build on top of it to see how you can use image generation to save
184:50 you hours and hours within some sort of marketing process. So this workflow
184:54 right here, all I had to do was enter in ROI on AI automation and it was able to
184:58 spit out this LinkedIn post for me. And if you look at this graphic, it's
185:01 insane. It looks super professional. It even has a little LinkedIn logo in the
185:05 corner, but it directly calls out the actual statistics that are in the post
185:09 based on the research. So 74% of organizations say their most advanced AI
185:13 initiatives are meeting or exceeding ROI expectations right here. And on the
185:17 other side, we can see that only 26% of companies have achieved significant
185:21 AI-driven gains so far, which is right here. And I was just extremely impressed
185:24 by this one. And for this next one, all I typed in was mental health within the workplace, and it spit out this post. According to Deloitte Insights, organizations that support mental health can see up to a 25% increase in productivity. And as you can see down here, it's just a beautiful graphic,
185:38 something that would probably take me 20 minutes in Canva. And if you can now
185:42 push out these posts in a minute rather than 20 minutes, you can start to push
185:45 out more and more throughout the day and save hours every week. And because the post is backed by research, and the graphic is backed by that researched post, you're not polluting the internet with what a lot of people in my comments call AI slop. Anyways, let's do a quick live run of this workflow and
185:59 then I'll walk through step by step how to set up this API call. And as always,
186:03 if you want to download this workflow for free, all you have to do is join my
186:06 free Skool community. The link is down in the description, and then you can search
186:09 for the title of the video. You can go into YouTube resources. You need to find
186:13 the post associated with this video and then when you're in there, you'll be
186:16 able to download this JSON file and that is the template. So you download the
186:20 JSON file. You'll go back into n8n. You'll open up a new workflow and in the
186:25 top right you'll go to import from file. Import that JSON file and then there'll
186:28 be a little sticky note with a setup guide just sort of telling you what you
186:31 need to plug in to get this thing to work for you. Okay, quick disclaimer
186:33 though. I'm not actually going to post this to LinkedIn. You certainly could, but I'm just going to basically send the post as well as the attachment to my
186:41 email because I don't want to post on LinkedIn right now. Anyways, as you can
186:45 see here, this workflow is starting with a form submission. So, if I hit test
186:48 workflow, it's going to pop up with a form where we have to enter in our email
186:53 for the workflow to send us the results. Topic of the post and then also I threw
186:57 in here a target audience. So, you could have these posts be kind of flavored
187:00 towards a specific audience if you want to. Okay, so this form is waiting for
187:04 us. I put in my email. I put the topic of morning versus night people and the
187:07 target audience is working adults. So, we'll hit submit, close out of here, and
187:10 we'll see the LinkedIn post agent is going to start up. It's using Tavily here
187:14 for research and it's going to create that post and then pass the post on to
187:19 the image prompt agent. And that image prompt agent is going to read the post
187:22 and basically create a prompt to feed into OpenAI's image generator. And as you can
187:28 see, it's doing that right now. We're going to get that back as a base64 string. And then we're just converting that to binary so we can actually post
187:36 that on LinkedIn or send that in email as an attachment and we'll break down
187:39 all these steps. But let's just wait and see what these results look like here.
187:43 Okay, so all that just finished up. Let me pop over to email. So in email, we
187:46 got our new LinkedIn post. Are you a morning lark or a night owl? The science
187:49 of productivity. I'm not going to read through this right now exactly, but
187:53 let's take a look at the image we got. When are you most productive? In the
187:57 morning, plus 10% productivity or night owls thrive in flexibility. I mean, this
188:00 is insane. This is a really good graphic. Okay, so now that we've seen
188:04 again how good this is, let's just break down what's going on. We're going to
188:07 start off with the LinkedIn post agent. All we're doing is we're feeding in two
188:11 things from the form submission, which was what is the topic of the post, as
188:14 well as who's the target audience. So right here, you can see morning versus
188:18 night people and working adults. And then we move into the actual system
188:21 prompt, which I'm not going to read through this entire thing. If you
188:23 download the template, the prompt will be in there for you to look at. But
188:26 basically I told it you are an AI agent specialized in creating professional
188:30 educational and engaging LinkedIn posts based on a topic provided by the user.
188:34 We told it that it has a tool called Tavily that it will use to search the web
188:38 and gather accurate information and that the post should be written to appeal to
188:42 the provided target audience. And then basically just some more information
188:45 about how to structure the post, what it should output and then an example which
188:49 is basically you receive a topic. You search the web, you draft the post and
188:53 you format it with source citations, clean structure, optional hashtags and a
188:57 call to action at the end. And as you can see what it outputs is a super clean
189:01 LinkedIn post right here. So then what we're going to do is basically we're feeding this output directly into that next agent. And by the way, they're both using GPT-4.1 through OpenRouter. All right, but before we look at the
189:12 image prompt agent, let's just take a look at these two things down here. So
189:15 the first one is the chat model that plugs into both image prompt agent and
189:19 the LinkedIn post agent. So all you have to do is go to OpenRouter, get an API key, and then you can choose from all these different models. And in here, I'm using GPT-4.1. And then we have the actual tool that the LinkedIn agent uses for its
189:31 research, which is Tavily. And what we're doing here is we're sending off a post request using an HTTP request tool to the Tavily endpoint. So this is where
189:39 people typically start to feel overwhelmed when trying to set up these
189:42 requests because it can be confusing when you're trying to look through that
189:45 API documentation. Which is exactly why in my paid community I created an APIs
189:49 and HTTP requests deep dive because truthfully you need to understand how to
189:54 set up these requests because being able to connect to different APIs is where
189:58 the magic really happens. So Tavily just lets your LLM connect to the web and
190:02 it's really good for web search and it also gives you a thousand free searches
190:05 per month. So that's the plan that I'm on. Anyways, once you're in here and you
190:08 have an account and you get an API key, all I did was go to the Tavily search endpoint, and you can see we have a curl statement right here where we have this
190:17 endpoint. We have post as the method, we have how we authorize ourselves, and this is all going to be pretty similar to the way that we set up the actual request to OpenAI's image generation API. So, I'm not going to dive into this too much. When you download this template, all you have to do is plug in your Tavily API key. But later in this video when we walk through
190:35 setting up the request to OpenAI, this should make more sense. Anyways, the
190:38 main thing to take away from this tool is that we're using a placeholder for
190:41 the request, because in the request we send over to Tavily, we basically say,
190:44 okay, here's the search query that we're going to search the internet for. And
190:47 then we have all these other little settings we can tweak like the topic,
190:51 how many results, how many chunks per source, all this kind of stuff. All we
190:55 really want to touch right now is the query. And as you can see, I put this in
190:59 curly braces, meaning it's a placeholder. I'm calling the placeholder
191:02 search term. And down here, I'm defining that placeholder as what the user is
191:06 searching for. So, as you can see, this data in the placeholder is going to be
191:09 filled in by the model. So, based on our form submission, when we asked it to create a LinkedIn post about morning versus night people, it fills out the search term with "latest research on productivity, morning people versus night people," and that's basically how it searches the internet. A hedged sketch of that request body, with the placeholder, is below.
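A rough sketch of the Tavily search call (the endpoint and field names follow Tavily's docs as I understand them; the {searchTerm} placeholder is the part the model fills in, and auth may be a Bearer header or an api_key body field depending on your API version):

    curl -X POST "https://api.tavily.com/search" \
      -H "Authorization: Bearer YOUR_TAVILY_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "query": "{searchTerm}",
        "topic": "general",
        "max_results": 5
      }'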
191:24 And then we get our results back, and now it creates a LinkedIn post that we're ready to pass off to the next agent. So the output of this one gets fed into this next one,
191:32 which all it has to do is read the output. As you can see right here, we
191:36 gave it the LinkedIn post, which is the full one that we just got spit out. And
191:39 then our system message is basically telling it to turn that into an image
191:43 prompt. This one is a little bit longer. Not too bad, though. I'm not going to
191:46 read the whole thing, but essentially we're telling it that it's going to be
191:50 an AI agent that transforms a LinkedIn post into a visual image prompt for a text-to-image AI generation model. So, we told it to read the post, identify the message, identify the takeaways, and then create a compelling graphic prompt that can be used with a text-to-image generator. We gave it some output
192:06 instructions like, you know, if there's numbers, try to work those into the
192:10 prompt. Um, you can use, you know, text, charts, icons, shapes, overlays,
192:14 anything like that. And then the very bottom here, we just gave it sort of
192:17 like an example prompt format. And you can see what it spits out is an image
192:21 prompt. So it says a dynamic split screen infographic style graphic. Left
192:25 side has a sunrise, it's bright yellow, and it has morning larks plus 10%
192:29 productivity. And the right side is a night sky, cool blue gradients,
192:33 a crescent moon, all this kind of stuff. And that is exactly what we saw back in
192:38 here when we look at our image. And so this is just so cool to me because first
192:41 of all, I think it's really cool that it can read a post and kind of use its
192:44 brain to say, "Okay, this would be a good, you know, graphic to be looking at
192:47 while I'm reading this post." But then on top of that, it can actually just go
192:51 create that for us. So, I think this stuff is super cool. You know, I
192:53 remember back in September, I was working on a project where someone
192:57 wanted me to help them with LinkedIn automated posting and they wanted visual
193:00 elements as well, and I was like, uh, I don't know, that might have to be a couple months away, when we have some better models. And now we're here.
193:07 So, it's just super exciting to see. But anyways, now we're going to feed that
193:12 output, the image prompt into the HTTP request to OpenAI. So, real quick, let's
193:16 go take a look at OpenAI's documentation. So, of course, we have
193:20 the GPT Image API, which lets you create, edit, and transform images. You've got different styles, of course. You can do memes with text.
193:29 You can do creative things. You can turn other images into different images. You
193:31 can do all this kind of stuff. And this is where it gets really cool, these
193:35 posters and the visuals with words because that's the kind of stuff where
193:39 typically AI image gen just wasn't there yet. And one thing real quick: in your OpenAI platform account, which is different than your ChatGPT account, this is where you
193:46 add the billing for your OpenAI API calls. You have to have your
193:50 organization verified in order to actually be able to access this model through the API right now. It took me 2 minutes. You basically just have to
193:57 submit an ID and it has to verify that you're human and then you'll be verified
194:00 and then you can use it. Otherwise, you're going to get an error message
194:02 that looks like this that I got earlier today. But anyways, the verification
194:06 process does not take too long. Anyways, then you're going to head over to the
194:08 API documentation that I will have linked in the description where we can
194:12 see how we can actually create an image in n8n. So, we're going to dive deeper
194:16 into this documentation in the later part of this video where I'm walking
194:19 through a step-by-step setup of this. But we're using the endpoint which is going to create an image. So, we have this URL right here. We're going to be creating a post request, and then we just have the things we have to configure, like the prompt in the body. We have to send over some sort of API key, and we can choose the size, the model, all this kind of stuff. Put together, the request looks roughly like the sketch below.
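A minimal sketch of that call, following OpenAI's image generation docs (swap in your own key and prompt):

    curl https://api.openai.com/v1/images/generations \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
        "model": "gpt-image-1",
        "prompt": "A cute baby sea otter making pancakes",
        "size": "1024x1024"
      }'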
194:41 So back in n8n, you can see that I'm sending a post request to that endpoint. For the headers, I set up my
194:44 API key right here, but I'm going to show you guys a better way to do that in
194:47 the later part of this video. And then for the body, we're saying, okay, I want
194:51 to use the GPT image model. Here's the actual prompt to use for the image which
194:54 we dragged in from the image prompt agent. And then finally the size we just
194:59 left it as that 1024x1024 square image. And so this is interesting
195:03 because what we get back is a massive base64 string. Like this thing
195:08 is huge. I can't even scroll right now. My screen's kind of frozen. Anyways, um
195:12 yeah, there it goes. It just kind of lagged. But we got back this massive
195:15 file. We can see how many tokens this was. And then what we're going to do is
195:20 we're going to convert that to binary data. So that's how we can actually get
195:23 the file as an image. As you can see, now after we turn that nasty string into a file, we have the binary image right over here. So all I did was drag in this field right here with that nasty string, and then when you hit test step, you'll get that binary data. (Outside of n8n, the equivalent decode is a one-liner; a sketch is below.)
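For reference, decoding the same response by hand in a shell (this assumes the generation response was saved to resp.json and that jq is installed; gpt-image-1 returns the image under data[0].b64_json):

    jq -r '.data[0].b64_json' resp.json | base64 --decode > image.png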
195:39 And then from there, you have the binary data, you have the LinkedIn post. All you have to do is,
195:43 you know, activate LinkedIn, drag it right in there. Or you can just do what
195:47 I did, which is I'm sending it to myself in email. And of course, before you guys
195:50 yell at me, let's just talk about how much this run cost me. So, this was 4,273 tokens. And if we look at this API and we go down to the pricing section, we can see that for image output tokens, which is generated images, it's going to be 40 bucks for a million tokens. That comes out to about 17 cents (4,273 ÷ 1,000,000 × $40 ≈ $0.17), so hopefully I did the math right. But really, for the
196:09 quality and kind of for the industry standard I've seen for price, that's on
196:13 the cheaper end. And as you can see down here, it translates roughly to 2 cents,
196:17 7 cents, 19 cents per generated image for low, medium, blah blah blah blah
196:20 blah. But anyways, now that that's out of the way, let's just set up an HTTP
196:25 request to that API and generate an image. So, I'm going to add a first
196:28 step. I'm just going to grab an HTTP request. So, I'm just going to head over
196:31 to the actual API documentation from OpenAI on how to create an image and how
196:35 to hit this endpoint. And all we're going to do is we're going to copy this
196:38 curl command over here on the right. If you're not seeing a curl command, if you're seeing Python, just change that to curl. Copy that. And then we're going to go back into n8n, hit import curl, paste that in there, and then once we hit import, we're almost done. That curl statement basically just auto-populated almost everything we need. Now we just have a few minor tweaks.
196:55 But as you can see, it changed the method to post. It gave us the correct
196:59 URL endpoint already. It has us sending a header, which is our authorization,
197:02 and then it has our body parameters filled out where all we'd really have to
197:06 change here is the prompt. And if we wanted to, we can customize this kind of
197:09 stuff. And that's why it's going to be really helpful to be able to understand
197:13 and read API documentation so you know how to customize these different
197:16 requests. Basically, all of these little things here like prompt, background,
197:20 model, n, output format, they're just little levers that you can pull and
197:23 tweak in order to change your output. But we're not going to dive too deep
197:26 into that right now. Let's just see how we can create an image. Anyways, before
197:30 we grab our API key and plug that in, when you're in your OpenAI account, make
197:33 sure that your organization is verified. Otherwise, you're going to get this
197:35 error message and it's not going to let you access the model. Doesn't take long.
197:39 Just submit an ID. And then also make sure that you have billing information
197:43 set up so you can actually pay for um an image. But then you're going to go down
197:47 here to API keys. You're going to create new secret key. This one's going to be
197:53 called image test just for now. And then you're going to copy that API key. Now
197:56 back in n8n, it has this already set up for us, where all we need to do is delete all this. We're going to keep the space after Bearer, and we can paste in our API key like that, and we're good to go. But if you want a better method, to be able to save this key in n8n so you don't have to go find it every time, what you can do is come to authentication, go to generic, and then you're going to choose header auth. We know it's header auth because right here we're sending our authorization as a header parameter, and this is where we're authorizing ourselves. So we're just going to do the same up here with the header auth. And
198:26 then we're going to create a new one. I'm just going to call this one openai
198:30 image just so we can keep ourselves organized. And then you're going to do the same thing as what we saw down in that header parameter field. Meaning Authorization is the name, and the value is Bearer, a space, and then your API key; laid out, the credential is just this:
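A sketch of the header auth credential fields (the key shown is obviously a placeholder):

    Name:  Authorization
    Value: Bearer sk-YOUR-OPENAI-KEY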
198:44 So that's all I'm going to do. I'm going to hit save. We are now authorized to
198:49 access this endpoint. And I'm just going to turn off sending headers because
198:53 we're technically sending headers right up here with our authentication. So we
198:57 should be good now. Right now we'll be getting an image of a cute baby sea
199:01 otter. Um, and I'm just going to say making pancakes. And we'll hit test
199:05 step. And this should be running right now. Um, okay. So, bad request. Please
199:09 check your parameters. Invalid type for n. It expected an integer, but it got a
199:13 string instead. So, if you go back to the API documentation, we can see n
199:18 right here. It should be integer or null, and it's also optional. So, I'm
199:21 just going to delete that. We don't really need that. And I'm going to hit test step.
199:26 And while that's running real quick, we'll just look back at n. This basically says the number of images to generate must be between 1 and 10. So that's like one of those little levers you could tweak, like I was talking about, if you want to customize your request; but right now, by default, it's only going to give us one. In JSON terms, the error was just a type mismatch:
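A sketch of the mismatch (values illustrative; the annotations aren't valid JSON, just notes):

    "n": "1"    <- a string: rejected with "Invalid type for n"
    "n": 1      <- an integer: accepted (or omit it entirely, since it's optional)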
199:40 Looks like this HTTP request is working, so I'll check in
199:45 with you guys in 20 seconds when this is done. Okay. So now that that finished
199:49 up, didn't take too long. We have a few things, and all we really need is this base64. But we can see again this one cost around 17 cents. And now we just have
199:57 to turn this into binary so we can actually view an image. So I'm going to
200:01 add a plus after the HTTP request. I'm just going to type in binary. And we can
200:06 see convert to file, which is going to convert JSON data to binary data. And
200:11 all we want to do here is move a base64 string to file, because this is base64 JSON. And this basically represents the image. So I'm going to drag that into
200:19 there. And then when I hit test step, we should be getting a binary image output
200:24 in a field called data. As you can see right here, and this should be our image
200:28 of a cute sea otter making pancakes. As you can see, um it's not super
200:32 realistic, and that's because the prompt didn't have any like photorealistic,
200:36 hyperrealistic elements in there, but you can easily make it do so. And of
200:39 course, I was playing around with this earlier, and just to show you guys, you
200:41 can make some pretty cool realistic images. Here was a post I made about if ancient Rome had access to iPhones. And obviously, this is not a real Twitter account. And this one is dinosaurs evolved into modern-day influencers. This was just me testing an automation using this
200:59 API and auto posting, but not as practical as like these LinkedIn
201:02 graphics. But if you guys want to see a video sort of like this, let me know. Or
201:05 if you also want to see a more evolved version of the LinkedIn posting flow and
201:09 how we can make it even more robust and even more automated, then definitely let me know as well. Okay. So, all I have to do in this form submission is enter in a
201:19 picture of a product, enter in the product name, the product description,
201:22 and my email address. And we'll send this off, and we'll see the workflow
201:26 over here start to fire off. So, we're going to upload the photo. We're going
201:29 to get an image prompt. We're going to download that photo. Now, we're creating
201:32 a professional graphic. So, after our image has been generated, we're
201:35 uploading it to an API to get a public URL so we can feed that URL of the image
201:40 into Runway to generate a professional video. Now, we're going to wait 30
201:42 seconds and then we'll check in to see if the video is done. If it's not done
201:45 yet, we're going to come down here and poll: wait five more seconds, and then
201:48 go check in. And we're going to do this infinitely until our video is actually
201:52 done. So, anyways, it just finished up. It ended up hitting this check eight
201:55 times, which indicates I should probably increase the wait time over here. But
201:58 anyways, let's go look at our finished products. So, we just got this new
202:01 email. Here are the requested marketing materials for your toothpaste. So,
202:04 first, let's look at the video cuz I think that's more exciting. So, let me
202:06 open up this link. Wow, we got a 10-second video. It's spinning. It's 3D.
202:10 The lighting is changing. This looks awesome. And then, of course, it also
202:13 sends us that image, in case we want to use that as well. And one of the steps
202:16 in the workflow is that it's going to upload your original image to your
202:19 Google Drive. So here you can see this was the original and then this was the
202:22 finished product. So now you guys have seen a demo. We're going to build this
202:25 entire workflow step by step. So stick with me because by the end of this
202:28 video, you'll have this exact system up and running. Okay. So when we're setting
202:32 up a system where we're creating an image from text and then we're creating
202:35 a video from that image, the two most important things are going to be that
202:38 image prompt and that video prompt. So what we're going to do is head over to
202:41 my Skool community. The link for that will be down in the description. It's a free Skool community. And then what you're going to do is either search for
202:46 the title of this video or click on YouTube resources and find the post
202:50 associated with this video. And when you click into there, there'll be a doc that
202:54 will look like this or a PDF and it will have the two prompts that you'll need in
202:57 order to run the system. So head over there, get that doc, and then we can hop
203:00 into the step by step. And that way we can start to build this workflow and you
203:03 guys will have the prompts to plug right in. Cool. So once you have those, let's
203:07 get started on the workflow. So as you guys know, a workflow always has to
203:11 start with some sort of trigger. So in this case, we're going to be triggering
203:14 this workflow with a form submission. So I'm just going to grab the native n8n form trigger, on new form event. So we're going to configure what this form is going to
203:20 look like and what it's going to prompt a user to input. And then whenever
203:24 someone actually submits a response, that's when the workflow is going to
203:27 fire off. Okay. So I'm going to leave the authentication as none. The form
203:31 title, I'm just putting go to market. For the form description, I'm going to
203:36 say give us a product photo, title, and description, and we'll get back to you
203:40 with professional marketing materials. And if you guys are interested in what I
203:43 just used to dictate that text, there'll be a link for Whisper Flow down in the
203:46 description. And now we need to add our form elements. So the first one is going
203:50 to be not a text. We're going to have them actually submit a file. So click on
203:54 file. This is going to be required. I only want them to be allowed to upload
203:57 one file. So I'm going to switch off multiple files. And then for the field
204:01 name, we're just going to say product photo. Okay. So now we're going to add
204:04 another one, which is going to be the product title. So I'm just going to
204:07 write product title. This is going to be text. For placeholder, let's just put
204:10 toothpaste since that was the example. This will be a required field. So, the placeholder is just going to be the gray text that fills in the text box so people know what to put in. Okay, we're adding another one
204:21 called product description. We'll make this one required. We'll just leave the
204:24 placeholder blank cuz you don't need it. And then finally, what we need to get
204:27 from them is an email, but instead of doing text, we can actually make it
204:31 require a valid email address. So, I'm just going to call it email and we'll
204:34 just say something like name@email.com so they know what a valid email looks like. We'll make that
204:38 required because we have to send them an email at the end with their materials.
204:42 And now we should be good to go. So if I hit test step, we'll see that it's going
204:45 to open up a form submission and it has everything that we just configured. And
204:48 now let me put in some sample data real quick. Okay, so I put a picture of a cologne bottle. The title's cologne. I said the cologne smells very clean and fresh and it's a very sophisticated scent, because we're going to have that
204:59 description be used to sort of help create that text-to-image prompt. And then
205:02 I just put my email. So I'm going to submit this form. We should see that
205:05 we're going to get data back right here in our n8n, which is the binary photo.
205:08 This is the product photo that I just submitted. And then we have our actual
205:13 table of information like the title, the description, and the email. And so when
205:17 I'm building stuff step by step, what I like to do is I get the data in here,
205:20 and then I pretty much will just build node by node, testing the data all the
205:23 way through, making sure that nothing's going to break when variables are being
205:27 passed from left to right in this workflow. Okay, so the next thing that
205:30 we need to do is we have this binary data in here and binary data is tough to
205:34 reference later. So what I'm going to do is I'm just going to upload it straight
205:37 to our Google Drive so we can pull that in later when we need it to actually
205:41 edit that image. Okay, so that's our form trigger. That's what starts the
205:44 workflow. And now what we're going to do next is we want to upload that original
205:48 image to Google Drive so we can pull it in later and then use it to edit the
205:51 image. So what I'm going to do is I'm going to click on the plus. I'm going to
205:54 type in Google Drive. And we're going to grab a Google Drive operation. That is
205:59 going to be upload file. So, I'll click on upload file. And at this point, you
206:02 need to connect your Google Drive. So, I'm not going to walk through that step
206:05 by step, but I have a video right up here where I do walk through it step by
206:08 step. But basically, you're just going to go to Docs. You have to open up a
206:12 sort of Google Cloud profile or a console, and then you just have to
206:15 connect yourself and enable the right credentials and APIs. Um, but like I
206:19 said, that video will walk through it. Anyways, now what we're doing is we have
206:23 to upload the binary field right here to our Google Drive. So, it's not called
206:27 data. We can see over here it's called product photo. So, I'm just going to
206:29 copy and paste that right there. So, it's going to be looking for that
206:32 product photo. And then we have to give it a name. So, that's why we had the
206:36 person submit a title. So, all I'm going to do is for the name, I'm going to make
206:40 this an expression instead of fixed because this name is going to change
206:43 based on the actual product coming through. I'm going to drag in the
206:47 product title from the left right here. So now the photo in Google Drive is
206:51 going to be called cologne, and then in parentheses I'm just going to say original. So because this is an expression, it basically means whenever someone submits a form, whatever the title is, the file name is going to be that title and then it's going to say original. And that's how we sort of control that to be dynamic; the expression ends up looking something like this:
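A sketch of that file-name expression (the exact field reference depends on what your form field is actually called):

    {{ $json['Product Title'] }} (original)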
207:08 Anyways, then I'm just choosing what folder it goes in. So in my drive, I'm going to choose a folder that I just made called product
207:12 creatives. So once we have that configured, I'm going to hit test step.
207:15 We're going to wait for this to spin. it means that it's trying to upload it
207:18 right now. And then once we get that success message, we'll quickly go to our
207:21 Google Drive and make sure that the image is actually there. So there we go.
207:25 It just came back. And now I'm going to click into Google Drive, click out of
207:27 the toothpaste, and we can see we have cologne. And that is the image that we
207:31 just submitted in n8n. All right. Now that we've done that, what we want to do
207:35 is we want to feed the data into an AI node so that it can create a text-to-image
207:38 prompt. So I'm going to click on the plus. I'm going to grab an AI agent. And
207:43 before we do anything in here, I'm first of all going to give it its brain. So,
207:45 I'm going to click on the plus under chat model. I'm personally going to grab
207:49 an OpenRouter chat model, which basically lets you connect to a ton of different things. Let me see, openrouter.ai: it basically lets you connect your agents to all the different models. So, if I click on models up here, we can see that it just lets you connect to Gemini, Anthropic, OpenAI, DeepSeek. It
208:04 has all these models all in one place. So, go to OpenRouter, get an API
208:08 key, and then once you come back into here, all you have to do is connect your
208:11 API key. And what I'm going to use here is going to be GPT-4.1. And then I'm just going to name this so we know which one I'm using here. And now we have our agent accessing GPT-4.1. Okay. So now you're going to go to that PDF that I have in the Skool
208:26 community and you're just going to copy this product photography prompt. Grab
208:31 that. Go back to the AI agent and then you're going to click on add option. Add
208:35 a system message. And then we're basically just going to I'm going to
208:38 click on expression and expand this full screen so you guys can see it better.
208:40 But I'm just going to paste that prompt in here. And this is going to tell the
208:44 AI agent how to take what we're giving it and turn it into an optimized text-to-image prompt for professional, you know, studio-style photography. So, we're
208:55 not done yet because we have to actually give it the dynamic information from our
208:59 form submission every time. That's the user message, which is what the agent's going to look at every time, while the system message is basically like, here are your
209:09 instructions. So for the user message, we're not going to be using a connected
209:11 chat trigger node. We're going to define it below. And since we want to make sure that this changes every time, we have to make sure it's an expression. And then
209:19 I'm just going to drill down over here to the form submission. And I'm going to
209:21 say, okay, here's what we're going to give this agent. It's going to get the
209:26 product, which the person submitted to us in the form, and we can drag in the
209:31 product, which was cologne, as you can see on the right. And then they also
209:36 gave us a description. So, all I have to do now is drag in the product
209:38 description. And so, now every time the agent will be looking at whatever
209:42 product and description that the user submitted in order to create its prompt.
209:46 So, I'm going to hit test step. We'll see right now it's using its chat model
209:50 GPT4.1. And it's already created that prompt for us. So, let's just give it a
209:53 quick read. Hyperrealistic photo of sophisticated cologne bottle,
209:56 transparent glass, sleek minimalistic design, silver metal cap, all this. But
210:00 what we have to do is we have to make sure that the image isn't being created
210:04 just on this. It has to look at this, but it also has to look at the actual
210:08 original image. So that's why our next step is going to be to redownload this
210:11 file and then we're going to push it over to the image generation model. So
210:15 at this point, you may be wondering like why are we going to upload the file if
210:18 we're just going to download it again? And the reason why I had to do that is
210:21 because when we get the file in the form of binary, we want to send the binary
210:27 data into the HTTP request right here that actually generates the image. And
210:30 we can't reference the binary way over here if it's only coming through over
210:34 here. So, we upload it so that we can then download it and then send it right
210:37 back in. And so, if that doesn't make sense yet, it probably will once we get
210:41 over to that stage. But that's why. Anyways, the next step is we're going to
210:44 download that file. So, I'm going to click on this plus. We're going to be
210:47 downloading it from Google Drive and we're going to be using the operation
210:51 download file. So, we already should be connected because we've set up our
210:54 Google credentials already. The operation is going to be download, the resource is a file, and instead of choosing from a list, we're going to choose by ID. And all we're going to do is download that file that we previously
211:03 uploaded every time. So I'm going to come over here, the Google Drive, upload
211:08 photo node, drag in the ID, and now we can see that's all we have to do. If we
211:11 hit test step, we'll get back that file that we originally uploaded. And we can
211:15 just make sure it's the cologne bottle. Okay, but now it's time to basically use
211:19 that downloaded file and the image prompt and send that over to an API
211:23 that's going to create an image for us. So we're going to be using OpenAI's
211:27 image generator. So here is the documentation. we have the ability to
211:30 create an image, or we can create an image edit, which is what we want to do because we want it to look at the photo in our request. So typically what you
211:38 can do in this documentation is you can copy the curl command but this curl
211:41 command is actually broken so we're not going to do that. If you copied this one
211:44 up here to actually just create an image that one would work fine but there's
211:47 like a bug with this one right now. So anyways, I'm going to go into our n8n, I'm going to hit the plus, I'm going to grab an HTTP request, and now we're going to
211:57 configure this request. So, I'm going to walk through how I'm reading the API
212:00 documentation right here to set this up. I'm not going to go super super
212:03 in-depth, but if you get confused along the way, then definitely check out my
212:06 paid course. The link for that down in the description. I've got a full course
212:10 on deep diving into APIs and HTTP requests. Anyways, the first thing we
212:13 see is we're going to be making a post request to this endpoint. So, the first
212:16 thing I'm going to do is copy this endpoint. We're going to paste that in.
212:19 And then we're also going to make sure the method is set to post. So, the next
212:23 thing that we have to do is authorize ourselves somehow. So over here I can
212:27 see that we have a header, and the name is going to be Authorization, and then the value is going to be Bearer, a space, and our OpenAI key. So that's why I set up a header authentication already. So in authentication I went to generic and
212:39 then I went to header and then you can see I have a bunch of different headers
212:42 already set up. But what I did here is I chose my OpenAI one where basically all
212:46 I did was I typed in here authorization and then in the value I typed in bearer
212:50 space and then I pasted my API key in there. And now I have my OpenAI
212:54 credential saved forever. Okay. So the first thing we have to do in our body
212:59 request over to OpenAI is we have to send over the image to edit. So that's
213:02 going to be in a field called image. And then we're sending over the actual
213:05 photo. So what I'm going to do is I'm going to click on send body. I'm going
213:10 to use form data. And now we can set up the different names and values to send
213:13 over. So the first thing is we're going to send over this image right here on
213:16 the lefth hand side. And this is in a field called data. And it's binary. So,
213:19 I'm going to choose, instead of form data, to send over an n8n binary file. The name is going to be image because that's what it said in the
213:26 documentation. And the input data field name is data. So, I'm just going to copy
213:30 that, paste it in there. And this basically means, okay, we're sending
213:34 over this picture. The next thing we need to send over is a prompt. So, the
213:37 name of this field is going to be prompt. I'm just going to copy that, add
213:42 a new parameter, and call it prompt. And then for the value, we want to send over
213:45 the prompt that we had our AI agent write. So, I'm going to click into
213:47 schema and I'm just going to drag over the output from the AI agent right
213:51 there. And now that's an expression. So, the next thing we want to send over is
213:54 what model do we want to use? Because if we don't put this in, it's going to default to DALL·E 2, but we want to use gpt-image-1. So, I'm going to copy gpt-image-1, we're going to come back into here, and I'm going to paste that in as the value, but then the name is model because, as you can see in here, right there, it says model. So hopefully you guys can see that when
214:15 we're sending over an API call, we just have all of these different options
214:18 where we can sort of tweak different settings to change the way that we get
214:22 the output back. And then you have some other options, of course, like quality or size. But right now, we're just going to leave all that as default and just go with these three things to keep it simple. As a plain curl call, the same request would look roughly like this:
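A minimal sketch of the image edit call, per OpenAI's docs (in n8n the image comes in as a binary field instead of a local file, and the prompt here is illustrative):

    curl https://api.openai.com/v1/images/edits \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -F model="gpt-image-1" \
      -F image="@product-photo.png" \
      -F prompt="Hyperrealistic studio photo of a sophisticated cologne bottle"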
214:32 Then I'm going to hit test step and we'll see if this is working. Okay, never mind. I got an error, and I was
214:35 like, okay, I think I did everything right. The reason I got the error is
214:38 because I don't have any more credits. So, if you get this error, go add some
214:42 credits. Okay, so added more credits. I'm going to try this again and I'll
214:45 check back in. But before I do that, I wanted to say: clearly, I've been spamming this thing with creating images because it's so cool. It's so fun. But
214:53 everyone else in the world has also been doing that. So, if you're ever getting
214:56 some sort of like errors where it's like a 500 type of error where it means like
215:00 something's going on on the server side of things or you're seeing like some
215:04 sort of rate limit stuff, keep in mind that there's a limit on how many
215:07 images you can send per minute. I don't think that's been clearly defined for gpt-image-1. But also, if the OpenAI server is receiving way too many
215:16 requests, that is also another reason why your request may be failing. So,
215:20 just keep that in mind. Okay, so now it worked. We just got that back. But what
215:23 you'll notice is we don't see an image here or like an image URL. So, what we
215:27 have to do is we have this base64 string and we have to turn that into
215:32 binary data. So, what I'm going to do is after this node, I'm going to add one
215:37 that says convert to file. So we're going to convert JSON data to binary data, and we're going to do base64. So all I have to do now is show this data on the left-hand side, grab the base64 string, and then when we hit test step,
215:48 we should get a binary file over here, which if we click into it, this should
215:52 be our professional looking photo. Wow, that looks great. It even got the
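If you're curious what the Convert to File node is doing under the hood, it's essentially a one-line base64 decode; a minimal sketch, with the output path being an arbitrary choice:

```ts
// What "Convert to File" does here: decode the base64 string into bytes.
import { writeFile } from "node:fs/promises";

async function saveBase64Image(b64: string, path = "edited-product.png") {
  const bytes = Buffer.from(b64, "base64"); // base64 string -> binary data
  await writeFile(path, bytes);             // now it's a real image file
}
```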
215:55 Wow, that looks great. It even got the wording and the same fonts right. So that's awesome. And by the way, if we
216:00 click into the results of the create-image node where we did the image edit, we
216:04 can see the tokens. And with this model, it is basically $10 for a million input
216:09 tokens and $40 for a million output tokens. So right here, you can see the
216:12 difference between our input and output tokens. And this one was pretty cheap; I
216:15 think it was like 5 cents.
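As a quick sanity check on those rates, here's the back-of-envelope math; the token counts are made-up example numbers, not the actual usage from this run:

```ts
// $10 per million input tokens, $40 per million output tokens.
const inputTokens = 500;    // example numbers, not the run's real usage
const outputTokens = 1_100;
const costUsd = (inputTokens / 1e6) * 10 + (outputTokens / 1e6) * 40;
console.log(costUsd.toFixed(3)); // ~0.049, i.e. roughly 5 cents
```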
216:19 Anyways, now that we have that image as binary data, we need to turn it into a video using an API called Runway. And so
216:23 if we go into Runway, first of all, let's look at the price. For a
216:27 5-second video, it's 25 cents; for a 10-second video, 50 cents. So that's the one we're
216:30 going to be doing today. But if we go to the API reference to read how we can
216:34 turn an image into a video, what we need to look at is how we actually send over
216:38 that image. And what we have to do here is send over an HTTPS URL of the image.
216:43 So we somehow have to get this binary data in n8n to a public image that
216:48 runway can access. So the way I'm going to be doing that is with this API that's
216:53 free called ImgBB, a free image-hosting service. And what we can
216:57 do is basically just use its API to send over the binary data and we'll get back
217:02 a public URL. So come here, make a free account. You'll grab your API key from
217:05 up top. And then they basically show us how to set this up. So what I'm
217:08 going to do is I'm going to copy the endpoint right there. We're going to go
217:12 back into n8n and I'm going to add an HTTP request. And let me just configure
217:17 this up. We'll put it over here just to keep everything sort of square. But now
217:20 what I'm going to do in here is paste that endpoint in as our URL. You can
217:24 also see that it says this call can be done using POST or GET. But since GET
217:28 requests are limited by the maximum URL length, you should probably do POST.
217:30 So I'm just going to go back in here and change this to a POST. And then there
217:33 are basically two things that are required. The first one is our API key.
217:37 And then the second one is the actual image. Anyways, this documentation is
217:41 not super intuitive. I can sort of tell that this is a query parameter because
217:45 it's being attached at the end of the endpoint with a question mark and all
217:47 this kind of stuff. And that's just because I've looked at tons of API
217:51 documentation. So, what I'm going to do is go into n8n. We're going to add a
217:55 generic credential type. It's going to be a query auth. Where was query?
217:59 There we go. And then you can see I've already added my ImgBB credential. But all
218:02 you're going to do is you would add the name as a key. And then you would just
218:05 paste in your API key. And that's it. And now we've authenticated ourselves to
218:09 the service. And then what's next is we need to send over the image in a field
218:12 called image. So I'm going to go back in here. I'm going to send over a body
218:15 because this allows us to actually send over n8n binary fields. And I'm not going
218:20 to do n8n binary file; I'm going to do form data, because then we can name the field
218:23 we're sending over. Like I said, not going to deep dive into how that all
218:26 works, but the name is going to be image and then the input data field name is
218:30 going to be data because that's how it's seen over here. And this should be it.
218:33 So, real quick, I'm just going to change this to Get URL. And then we're going to
218:37 hit test step, which is going to send over that binary data to ImgBB, and
218:42 it hopefully should send us back a URL.
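Here's a hedged sketch of the same ImgBB upload outside of n8n; the endpoint and the image field come from ImgBB's docs as we just walked through, while the environment variable name is my own choice:

```ts
// The ImgBB upload as plain code: API key rides along as a query parameter
// ("query auth" in n8n), the image goes in a form field named "image".
async function hostImage(imageBase64: string): Promise<string> {
  const form = new FormData();
  form.append("image", imageBase64); // ImgBB accepts base64 in this field

  const res = await fetch(
    `https://api.imgbb.com/1/upload?key=${process.env.IMGBB_API_KEY}`,
    { method: "POST", body: form },
  );
  const json = await res.json();
  return json.data.url; // the public URL we'll hand to Runway
}
```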
218:45 And it sent back three of them. I'm going to be using the middle one that's just called url, because it's the
218:48 best size and everything. You can look at the other ones if you want on your
218:52 end, but this one is going to load up and we should see it's the image that we
218:55 got generated for us. It takes a while to load up on that first time, but as
218:59 you can see now, it's a publicly accessible URL and then we can feed it
219:03 into runway. So that's exactly our next step. We're going to add another request
219:07 right here. It's going to be an HTTP and this one we're going to configure to hit
219:11 runway. So here's a good example of where we can actually use a curl command. I'm
219:14 going to click on copy over here while I'm in the Runway docs, under Generate a video from
219:19 image. Come back into n8n, hit import curl, paste that in there, and hit
219:22 import. And this is going to basically configure everything we need. We just
219:26 have to tweak a few things. Typically, most API documentation nowadays will
219:29 have a curl command. The edit-image one that we set up earlier was just a little
219:33 broken, and ImgBB is just a free service, so they don't always have one. But let's
219:37 configure this node. So, the first thing I see is we have a header auth right
219:40 here. And I don't want to send it like this. I want to set it up as a generic
219:44 type so I can save it. Otherwise, you'd have to go get your API key every time
219:47 you wanted to use Runway. So, as you can see, I've already set up my Runway API
219:51 key. So, I have it plugged in, but what you would do is you'd go get your API
219:55 key from Runway. And then you'd see, okay, how do we actually send over
219:58 authentication? It comes through with the header name Authorization, and the
220:03 value is Bearer, space, API key. So, similar to the last one. And then that's
220:06 all you would do in here when you're setting up your runway credential.
220:11 Authorization, then Bearer, space, my API key. And then, because we have ourselves
220:14 authenticated up here, we can flick off that headers toggle. And all we have to do now
220:17 is configure the actual body. Okay, so first things first: what image are we
220:21 sending over to get turned into a video? That's the field named promptImage. We're
220:25 going to get rid of that placeholder value, and I'm just going to drag in the URL
220:29 that we got from earlier, which was that picture I showed you guys. So now
220:33 runway sees that image. Next, we have the seed, which if you want to look at
220:36 the documentation, you can play with it, but I'm just going to get rid of that.
220:38 Then we have the model, which we're going to be using, Gen 4 Turbo. We then
220:42 have the prompt text. So, this is where we're going to get rid of this, and
220:46 you're going to go back to that PDF you downloaded from my free school community, and
220:49 you're going to paste that prompt in here. So, this prompt basically gives
220:53 us that 3D spinning effect where it just kind of does a slow pan and a slow
220:56 rotate. And that's what I was looking for. If you're wanting some other type
220:59 of video, then you can tweak that prompt, of course. For the duration, if
221:04 you look in the documentation, it'll say the duration only basically allows five
221:08 or 10. So, I'm just going to change this one to 10. And then the last one was
221:11 ratio, and I'm just going to make it square. So here are the accepted ratio
221:16 values. I'm going to copy 960 by 960, and we're just going to paste that in
221:19 right there. And actually before we hit test step, I've realized that we're
221:22 missing something here. So back in the documentation, we can see that there's
221:25 one thing up here which is required, which is a header: X-Runway-Version.
221:30 And then we need to set the value to this. So I'm going to copy the header name.
221:35 And we have to enable headers. I deleted it earlier, but we're going to
221:37 enable that. So we have the version header, and then I'm just going to go copy the value
221:41 that it needs to be set to, and we'll paste that in there as the value.
221:44 Otherwise, this would not have worked. Okay, so that should be configured.
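For reference, the request this node now makes looks roughly like this; the endpoint, headers, and body fields mirror the imported curl command, though treat the exact version date as an assumption and check Runway's docs:

```ts
// Sketch of the image-to-video request, mirroring the imported curl.
// The version date is an assumption; check Runway's docs for the current one.
async function startVideoTask(imageUrl: string, promptText: string): Promise<string> {
  const res = await fetch("https://api.dev.runwayml.com/v1/image_to_video", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.RUNWAY_API_KEY}`,
      "X-Runway-Version": "2024-11-06", // the required header we almost forgot
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      promptImage: imageUrl,  // the public ImgBB URL
      promptText,             // the slow-pan / slow-rotate prompt
      model: "gen4_turbo",
      duration: 10,           // the API only accepts 5 or 10
      ratio: "960:960",       // square, from the accepted-ratio list
    }),
  });
  const json = await res.json();
  return json.id; // a task ID; it means nothing until we poll it
}
```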
221:48 But before we test it out, I want to show you guys how I set up the polling flow
221:52 like this that you saw in the demo. So what we're going to do here is we need
221:57 to go see like, okay, once we send over our request right here to get a video
222:01 from our image, it's going to return an ID and that doesn't mean anything to us.
222:06 So what we have to do is get our task. Basically, we send over
222:10 the ID that it gives us, and then it'll come back and say the status equals
222:14 pending or running, or it'll say completed. So what I'm going to do is
222:18 copy this curl command for getting task details. We're going to hook it up to
222:23 this node as an HTTP request. We're going to import that curl. Now that's pretty much set up. We
222:28 have our authorization, which I'm going to delete, because as you know we
222:32 just configured that earlier as a header auth. So, I'm just going to come in here
222:37 and grab my Runway API key credential. There it is; I couldn't find it for a second. Next,
222:41 we have the version set up. And now all we have to do is drag in the actual ID
222:45 from the previous one. So, real quick, I'm just going to make this an
222:48 expression. Delete ID. And now we're pretty much set up. So, first of all,
222:51 I'm going to test this one, which is going to send off that request to runway
222:54 and say, "Hey, here's our image. Here's the prompt. Make a video out of it." And
222:59 as you can see, we got back an ID. Now I'm going to use this next node and I'm
223:03 going to drag in that ID from earlier. And now it's saying, okay, we're going
223:06 to check in on the status of this specific task. And if I hit test step,
223:09 what we're going to see is that it's not yet finished. So it's going to come back
223:13 and say, okay, status of this run or status of this task is running. So
223:17 that's why what I'm going to do is add an if. And this if is going to be saying,
223:24 okay, does this status field right here equal RUNNING in all caps?
223:28 Because that's what it equals right now. If yes, what we're going to do is we are
223:32 going to basically wait for a certain amount of time. So here's the true
223:36 branch. I'm going to wait and let's just say it's 5 seconds. So I'll just call
223:41 this five seconds. I'm going to wait for 5 seconds and then I'm going to come
223:44 back here and try again. So as you saw in the demo, it basically tried again
223:48 like seven or eight times. And this just ensures that it's never going to move on
223:53 until we actually have a finished video. So what you could also do is basically
223:56 say does status equal completed or whatever it means when it completes.
223:59 That's another way to do it. You just have to be careful to make sure that
224:01 whatever you're setting here as the check is always 100% going to work. And
224:07 then what you do is you would continue the rest of the logic down this path
224:10 once that check has been completed. And then of course you probably don't want
224:13 to have this check run like 10 times every single execution. So what you would do is
224:17 you'd add a wait step here, and once you know about how long it takes, you'd
224:21 set it accordingly. So last time I had it at 30 seconds and it waited like eight
224:23 times. So let's just say I'm going to wait 60 seconds here. So then when this
224:27 flow actually runs, it'll wait for a minute, check, and if it's still not done,
224:30 it'll continuously loop through here and wait 5 seconds every time until we're
224:34 done.
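The whole if/wait loop collapses to something like this in plain code; a sketch that assumes Runway reports status values like RUNNING and SUCCEEDED and returns the video URL in an output array, so double-check the task schema in their docs:

```ts
// The if/wait polling loop in plain code: one long initial wait, then
// re-check every 5 seconds while the task is still running.
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function waitForVideo(taskId: string): Promise<string> {
  await sleep(60_000); // initial wait, since it takes about a minute
  while (true) {
    const res = await fetch(`https://api.dev.runwayml.com/v1/tasks/${taskId}`, {
      headers: {
        Authorization: `Bearer ${process.env.RUNWAY_API_KEY}`,
        "X-Runway-Version": "2024-11-06", // assumed version date, as above
      },
    });
    const task = await res.json();
    if (task.status === "SUCCEEDED") return task.output[0]; // the video URL
    if (task.status === "FAILED") throw new Error("Runway task failed");
    await sleep(5_000); // still PENDING/RUNNING: wait 5 seconds, try again
  }
}
```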
224:37 Okay, there we go. So now the status is succeeded, and what I'm going to do is just view this video real quick. Hopefully this one came out nicely.
224:41 Let's take a look. Wow, this is awesome. Super clean. It's rotating really slowly. It's a full
224:49 10-second video. You can tell it's like a 3D image. This is awesome. Okay, cool.
224:54 So now if we test this if branch, we'll see that it's going to go down the other
224:57 one which is the false branch because it's actually completed. And now we can
225:01 with confidence shoot off the email with our materials. So I'm going to grab a
225:04 Gmail node. I'm going to click send a message. And we are going to have this
225:08 configured hopefully because you've already set up your Google stuff. And
225:11 now who do we send this to? We're going to go grab that email from the original
225:14 form submission which is all the way down here. We're going to make the
225:19 subject, which I'm just going to say marketing materials, and then a colon.
225:24 And we'll just drag in the actual title of the product, which in here was
225:28 cologne. I'm changing the email type to text just because I prefer it. We're
225:32 going to make the body an expression. And we're just going to say like,
225:39 hey, here is your photo. And obviously this can be customized however you want.
225:43 But for the photo, what we have to do is grab that public URL that we generated
225:47 earlier. So right here there is the photo URL. Here is your video. And for
225:52 the video, we're going to drag in the URL we just got from the output of that
225:58 Runway get-task check. So there is the video URL. And then I'm just going
226:02 to say cheers. The last thing I want to do is, down here, add the append n8n attribution
226:07 option and turn that off. This just ensures that the email doesn't say this email
226:12 was sent by n8n. And now if we hit test step right here, this is pretty much the
226:15 end of the process. And we can go ahead and check. Uh-oh. Okay, so not
226:19 authorized. Let me fix that real quick. Okay, so I just switched my credential
226:21 because I was using one that had expired. So now this should go through
226:25 and we'll go take a look at the email. Okay, so I did something wrong. I can
226:28 already tell what happened: this is supposed to be an expression and
226:31 dynamically come through as the title of the product, but we accidentally
226:35 left off a curly brace. So, if I come back into here and add one more
226:38 curly brace right here to the description, or sorry, the subject, now
226:42 we should be good. I'll hit test step again. And now we'll go take a look at
226:46 that email. Okay, there we go. Now, we have the cologne and we have our photo
226:49 and our video. So, let's click into the video real quick. I'm just so amazed; this
226:55 is just so much fun. Look at the lighting and the reflections; it's
227:00 all just perfect. And then we'll click into the photo just in case we want to see the
227:05 actual image. And there it is. This also looks awesome. All right, so that's
227:09 going to do it for today's video. I hope you guys enjoyed this style of walking
227:12 step by step through some of the API calls and sort of my thought process as
227:16 to how I set up this workflow. Okay, at this point I think you guys probably
227:19 have a really good understanding of how these AI workflows actually function and
227:22 you're probably getting a little bit antsy and want to build an actual AI
227:26 agent now. So we're about to get into building your first AI agent step by
227:30 step. But before that, just wanted to drive home the concept of AI workflows
227:35 versus AI agents one more time and the benefits of using workflows. But of
227:38 course, there are scenarios where you do need to use an agent. So, let's break it
227:42 down real quick. Everyone is talking about AI agents right now, but the truth
227:46 is most people are using them completely wrong and admittedly myself included.
227:50 It's such a buzzword right now, and it's really cool in n8n to visually see your
227:54 agents think about which tools they have and which ones to call. So, a lot of
227:57 people are just kind of forcing AI agents into processes that don't
228:01 really need them. But in reality, a simple AI workflow is not only going to be
228:04 easier to build, it's going to be more cost-effective and also more reliable
228:07 in the long run. If you guys don't know me, my name's Nate. And for a while now,
228:10 I've been running an agency where we deliver AI solutions to clients. And
228:14 I've also been teaching people from any background how to build out these things
228:17 practically and apply them to their business through deep dive courses as
228:21 well as live calls. So, if that sounds interesting to you, definitely check out
228:23 the community with the link in the description. But let's get into the
228:26 video. So, we're going to get into n8n and I'm going to show you guys some
228:29 mistakes where I've built agents when I should have been building AI
228:31 workflows. But before that, I just wanted to lay out the foundations here.
228:35 So, we all know what ChatGPT is. At its core, it's a large language model that
228:39 we talk to with an input and then it basically just gives us an output. So,
228:42 if we wanted to leverage ChatGPT to help us write a blog post, we would ask it to
228:46 write a blog post about a certain topic. It would do that and then it would give
228:48 us the output which we would then just copy and paste somewhere else. And then
228:52 came the birth of AI agents, which is when we actually were able to give tools
228:56 to our LLM so that they could not only just generate content for us, but they
228:59 could actually go post it or go do whatever we wanted to do with it. AI
229:02 agents are great and there's definitely a time and a place for them because they
229:05 have different tools and basically the agent will use its brain to understand,
229:08 okay, I have these three tools based on what the user is asking me. Do I call
229:12 this one and then do I output or do I call this one then this one or do I need
229:17 to call all three simultaneously? It has that option and it has the variability
229:19 there. So, this is going to be a non-deterministic workflow. But the
229:23 reality is most of the processes that we're trying to enhance for our clients
229:28 are pretty deterministic workflows that we can build out with something more
229:30 linear where we still have the same tools. We're still using AI, but we have
229:34 everything going step one, step two, step three, step four, step five, step
229:38 six, which is going to reduce the variability there. It's going to be very
229:42 deterministic and it's going to help us with a lot of things. So stick with me
229:45 because I'm going to show you guys an AI agent video that I made on YouTube a few
229:49 months back and I started re-evaluating it. Like why would I ever build out the
229:52 system like that? It's so inefficient. So I'll show you guys that in a sec. But
229:55 real quick, let's talk about the pros of AI workflows over AI agents. And I
229:59 narrowed it down to four main points. The first one is reliability and
230:02 consistency. One of the most important concepts of building an effective AI
230:05 agent is the system prompt because it has to understand what its tools are,
230:09 when to use each one, and what the end goal is. And it's on its own to figure
230:12 out which ones do I need to call in order to provide a good output. But with
230:15 a workflow, we're basically keeping it on track and there's no way that the
230:18 process can sort of deviate from the guardrails that we've set up because it
230:22 has to happen in order and it can't really go anywhere else. So this makes
230:25 systems more reliable because there's never going to be a transfer of data
230:28 between workflows where things may get messed up or incorrect mappings being
230:32 sent across, you know, agent to a different agent or agent to tool. We're
230:36 just basically able to go through the process linearly. So the next one is
230:40 going to be cost efficiency. When we're using an agent and it has different
230:44 tools, every time it hits a tool, it's going to go back to its brain. It's
230:46 going to rerun through its system prompt and it's going to think about what is my
230:49 next step here. And every time you're accessing that AI agent's brain, it
230:53 costs you money. So if we're able to eliminate that aspect of decision-making and
230:57 just say, okay, you finished step two, now you have to go on to step
231:00 three. There's no decision to be made. We don't have to make that extra API
231:04 call to think about what comes next, and we're saving money. Number three is
231:08 easier debugging and maintenance. When we have an AI workflow, we can see
231:12 exactly which node errors. We can see exactly what mappings are incorrect and
231:16 what happened here. Whereas with an AI agent workflow, it's a little bit
231:18 tougher because there's a lot of manipulating the system prompt and
231:21 messing with different tool configurations. And like I said, there's
231:25 data flowing between agent to tool or between agent to subworkflow. And that's
231:28 where a lot of things can happen that you don't really have full visibility
231:31 into. And then the final one is scalability, which kind of piggybacks right off
231:35 of number three. But if you wanted to add more nodes and more functionality to
231:38 a workflow, it's as simple as, you know, plugging in a few more blocks here and
231:41 there or adding on to the back. But when you want to increase the functionality
231:44 of an AI agent, you're probably going to have to give it more tools. And when you
231:47 give it more tools, you're going to have to refine and add more lines to the
231:52 system prompt, which could work great initially, but then previous
231:55 functionality, the first couple tools you added, those might stop working or
231:59 those may become less consistent. So basically, the more control that we have
232:02 over the entire workflow, the better. AI is great. There are times when we need
232:05 to make decisions and we need that little bit of flexibility. But if a
232:09 decision doesn't have to be made, why would we leave that up to the AI to
232:13 hallucinate 5 or 10% of the time when we could basically say, "Hey, this is going
232:16 to be 100% consistent." Anyways, I've made a video that talks a little bit
232:19 more about this stuff, as well as other things I've learned over the first 6
232:22 months of building agents. If you want to watch that, I'll link it up here. But
232:25 let's hop into n8n and take a look at some real examples. Okay, so the first
232:29 example I want to share with you guys is a typical sort of RAG agent. And for
232:32 some reason it always seems like the element of RAG has to be associated with
232:36 an agent, but it really doesn't. So what we have is a workflow where we're
232:39 putting a document from Google Drive into Pinecone. We have a customer
232:42 support agent, and then we have a customer support AI workflow. And the
232:46 blue box and the green box both do the exact same thing, but this one's
232:49 going to be more efficient and we also have more control. So let's break this
232:52 down. Also, if you want to download this template to play around with, you can
232:54 get it for free if you go to my free school community. The link for that's
232:57 down in the description as well. You'll come into here, click on YouTube
233:00 resources, and click on the post associated with this video. And then the
233:03 workflow will be right here for you to download. Okay, so anyways, here is the
233:06 document that we're going to be looking at. It has policy and FAQ information.
233:10 We've already put it into Pinecone. As you can see, it's created eight vectors.
233:13 And now what we're going to do is we're going to fire off an email to the
233:16 customer support agent to see how it handles it. Okay, so we just sent off,
233:20 do you offer price matching or bulk discounts? We'll come back into the
233:23 workflow, hit run, and we should see the customer support agent is hitting the
233:26 vector database, and it's also hitting its reply email tool. But what you'll
233:29 notice is that it hit its brain. So, Google Gemini 2.0 Flash in this case,
233:33 not a huge deal because it's free. But if you were using something else, it's
233:36 going to have hit that API three different times, which would be three
233:40 separate costs. So, let's check and see if it did this correctly. So, in our
233:43 email, we got the reply, "We do not offer price matching currently, but we
233:46 do run promotions and discounts regularly. Yes, bulk orders may qualify
233:50 for a discount. Please contact our sales team at sales@techhaven.com for
233:54 inquiries. So, let's go validate that that's correct. So, in the FAQ section
233:57 of this doc, we have that they don't offer price matching, but they do run
234:00 promotions and discounts regularly. And then for bulk discounts, you have to
234:04 hit up the sales team. So, it answered correctly. Okay. So, now we're going to
234:08 run the customer support AI workflow down here. It's going to grab the email.
234:11 It's going to search Pine Cone. It's going to write the email. I'll explain
234:13 what's going on here in a sec. And then it responds to the customer. So, there's
234:17 four steps here: an email trigger, searching the
234:19 knowledge base, writing the email, and then responding to the customer
234:23 in an email. So, why would we leave it up to the agent to decide what it needs
234:27 to do if it's always going to happen in those four steps every time?
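In plain code, that deterministic version is literally just a few statements in a row, as sketched below; searchPinecone, writeEmail, and sendReply are hypothetical stand-ins for the Pinecone, OpenAI, and Gmail nodes, not real APIs:

```ts
// Hypothetical stand-ins for the Pinecone, OpenAI, and Gmail nodes.
declare function searchPinecone(query: string): Promise<string>;
declare function writeEmail(inquiry: string, context: string): Promise<string>;
declare function sendReply(messageId: string, body: string): Promise<void>;

// The deterministic version: steps 2, 3, and 4 always run in this order,
// with no model deciding which "tool" to call next.
async function handleSupportEmail(email: { text: string; messageId: string }) {
  const context = await searchPinecone(email.text);    // search knowledge base
  const reply = await writeEmail(email.text, context); // write the email
  await sendReply(email.messageId, reply);             // respond to customer
}
```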
234:30 All right, here's the email we just got in reply. As you can see, compared to the one the
234:32 agent wrote, this one looks a lot better. Hello, thank you for reaching
234:36 out to us. In response to your inquiry, we currently do not offer price
234:39 matching. However, we do regularly run promotions and discounts, so be sure to
234:42 keep an eye out for those. That's accurate. Regarding bulk discounts, yes,
234:47 they may indeed qualify for a discount. So reach out to our sales team. If you
234:50 have any other questions, please feel free to reach out. Best regards, Mr.
234:53 Helpful, TechHaven. And obviously, I told it to sign off like that. So, now
234:57 that we've seen that, let's actually break down what's going on. So, it's the
235:00 same trigger. You know, we're getting an email, and as you can see, we can find
235:03 the text of the email right here, which was, "Do you guys offer price matching
235:07 or bulk discounts?" We're feeding that into a Pinecone node. So, if you guys
235:10 didn't know, you don't even need these to be tools only; you can have them just
235:14 be nodes, where we're searching with the prompt that is, do you guys offer price
235:18 matching or bulk discounts? And maybe you might want an AI step between the
235:22 trigger and the search to maybe like formulate a query out of the email if
235:25 the email is pretty long. But in this case, that's all we did. And now we can
235:29 see we got those four vectors back, same way we would have with the agent. But
235:32 what's cool is we have a lot more control over it. So as you can see, we
235:37 have a vector and then we have a score, which basically ranks how relevant
235:40 the vector was to the query that we sent off. And so we have some pretty low ones
235:44 over here, but what we can do is say, okay, we only want to keep a result if the score
235:48 is greater than 0.4. So it's only going to keep these two, as you can see,
235:51 and it's getting rid of these two that aren't super relevant. And this is
235:54 something that's a lot easier to control in this linear flow compared to having
235:59 the agent try to filter through vector results up here.
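The score filter sitting between the search and the email step boils down to something like this; the 0.4 threshold is the one from the video, while the Match shape is assumed:

```ts
// Keep only sufficiently relevant vectors, then merge them into one
// context block for the OpenAI node that writes the email.
interface Match { text: string; score: number }

function buildContext(matches: Match[], minScore = 0.4): string {
  return matches
    .filter((m) => m.score > minScore) // drop the low-relevance results
    .map((m) => m.text)
    .join("\n\n");                     // aggregate into a single string
}
```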
236:02 Anyways, then we're just aggregating however many results it pulls back. If it's four, if it's three,
236:06 or if it's just one, it's still just going to aggregate them together so that
236:09 we can feed it into our OpenAI node that's going to write the email. So
236:12 basically, in the user prompt, we said, "Okay, here's the customer inquiry.
236:15 Here's the original email, and here's the relevant knowledge that we found.
236:18 All you have to do now is write an email." And so by giving this AI node
236:22 just one specific goal, it's going to produce higher-quality, more consistent
236:26 outputs. The agent, by contrast, had multiple jobs: it had to not only write
236:30 the email, but it also had to figure out how to search through information and
236:33 figure out what the next step was. So this node, it only has to focus on one
236:37 thing. It has the knowledge handed to it on a silver platter to write the email
236:40 with. And basically, we said, you're Mr. Helpful, a customer support rep for Tech
236:44 Haven. Your job is to respond to incoming customer emails with accurate
236:47 information from the knowledge base. You must only answer using relevant
236:50 knowledge provided to you. Don't make anything up. We gave it the tone and
236:53 then we said only output the body in a clean format. It outputs that body, and
236:57 then all it had to do is map in the correct message ID and the correct
237:02 message content. Simple as that. So, I hope this makes sense. Obviously, it's a
237:05 lot cooler to watch the agent do something like that up here, but this is
237:08 basically the exact same flow and I would argue that it's going to be a lot
237:12 better, more consistent, and cheaper. Okay, so now to show an example where I
237:15 released this as a YouTube video and a couple weeks later I was like, why did I
237:19 do it like that? So, what we have here is a technical analyst. And so basically
237:23 we're talking to it through Telegram and it has one tool which is basically going
237:27 to get a chart image and then it's going to analyze the chart image and then it
237:30 sends it back to us in Telegram. And this is the workflow that it's actually
237:33 calling right here, where we're making an HTTP request to chart-img. We're
237:37 getting the chart, downloading it, analyzing the image, sending it back,
237:40 and then responding back to the agent. So there's basically like two transfers
237:44 of data here that we don't need because as you can see down here, we have the
237:49 exact same process as one simple AI workflow. So there's going to be much
237:52 much less room for error here. But first of all, let's demo how this works and
237:56 then we'll demo the actual AI workflow. Okay, so it should be listening to us
237:59 now. I'm going to ask it to analyze Microsoft. And as you can see, it's now
238:02 hitting that tool. We won't see this workflow actually in real time just
238:06 because it's like calling a different execution, but this is the workflow that
238:08 it's calling down here. I can actually just it's basically calling this right
238:12 here. Um, so what it's going to do is it's going to send us an image and then
238:16 a second or two later it's going to send us an actual analysis. So there is
238:20 Microsoft's stock chart and now it's creating that analysis as you can see
238:22 right up here and then it's going to send us that analysis. We just got it.
238:26 So if you want to see the full video that I made on YouTube, I'll tag it
238:29 right up here. But I'm not going to dive too much into what's actually happening; I
238:32 just want to prove that we can do the exact same thing down here with a simple
238:36 workflow. Although right here, I did evolve this workflow a little bit, so
238:39 it's not only looking at NASDAQ, but it can also choose different
238:43 exchanges and feed that into the API call. But anyways, let's make this
238:47 trigger down here active and let's just show off that we can do the exact same
238:51 thing with the workflow and it's going to be better. So, test workflow. This
238:56 should be listening to us. Now, I'm just going to ask it to do a
239:00 different one: analyze Bank of America. So, now it's getting it. It is
239:04 going to be downloading the chart. Actually, I want to open up Telegram so we
239:07 can see it: downloading the chart, analyzing the image. It's going to send us that
239:11 image and then pretty much immediately after it should be able to send us that
239:15 analysis. So we don't have that awkward 2 to 5 second wait. Obviously we're
239:19 waiting here. But as soon as this is done, we should get both the image
239:22 and the text simultaneously. There you go. And so you can see the results are
239:27 basically the same. But this one is just going to be more consistent. There's no
239:30 transfer of data between workflows. There's no need to hit an AI model to
239:33 decide what tool I need to use. It is just going to be one seamless flow. You
239:37 can also get this workflow in the free school community if you want to
239:39 play around with it. Just wanted to throw that out there. Anyways, that's
239:43 going to wrap us up here. I just wanted to close off with this isn't me bashing
239:46 on AI agents. Well, I guess a little bit it was. AI agents are super powerful.
239:50 They're super cool. It's really important to learn prompt engineering
239:54 and giving them different tools, but it's just about understanding, am I
239:57 forcing an agent into something that doesn't need it? Am I exposing myself to
240:02 the risk of lower quality outputs, less consistency, more difficult time scaling
240:06 this thing? Things along those lines. And so that's why I think it's super
240:08 important to get into something like Excalidraw and wireframe out the solution
240:12 you're looking to build. Understand what are all the steps here. What are the
240:16 different API calls or different people involved? What could happen here? Is
240:21 this deterministic, or is there an aspect of decision-making and variability here?
240:24 Essentially, is every flow going to be the same or not the same? Cool. So now
240:29 that we have that whole concept out of the way, I think it's really important
240:31 to understand that so that when you're planning out what type of system you're
240:34 going to build, you're actually doing it the right way from the start. But now
240:38 that we understand that, let's finally set up our first AI agent together.
240:42 Let's move into that video. All right, so at this point you guys are familiar
240:45 with n8n. You've built a few AI workflows and now it's time to actually
240:49 build an AI agent, which gets even cooler. So before we actually hop into
240:52 there and do that, just want to do a quick refresher on this little diagram
240:55 we talked about at the beginning of this video, which is the anatomy of an AI
240:59 agent. So we have our input, we have our actual AI agent, and then we have an
241:03 output. The AI agent is connected to different tools, and that's how it
241:06 actually takes action. And in order to understand which tools do I need to use,
241:10 it will look at its brain and its instructions. The brain comes in the
241:13 form of a large language model, which in this video we'll be using OpenRouter
241:17 to connect to, so we can use as many different models as we want. And you guys have already set
241:20 up your OpenRouter credentials. Then we also have access to memory, which I will
241:23 show you guys how to set up in n8n. Then finally it uses its
241:27 instructions in order to understand what to do and that is in the form of a
241:31 system prompt, which we will also see in n8n. So all of these elements that
241:34 we've talked about will directly translate to something in n8n, and I will
241:38 show you guys and call out exactly where these are so there's no confusion. So
241:43 we're going to hop into n8n, and you guys know that a new workflow always starts
241:46 with a trigger. So, I'm going to hit tab and I'm going to type in a chat trigger
241:50 because we want to just basically be able to talk to our AI agent right here
241:55 in the native n8n chat. So, there is our trigger, and what I'm going to do is
241:59 click the plus and add an AI agent right after this trigger so we can actually
242:02 talk to it. And so, this is what it looks like. You know, we have our AI
242:04 agent right here, but I'm going to click into it so we can just talk about the
242:07 difference between a user message up here and a system message that we can
242:11 add down here. So going back to the example with ChatGPT and to our diagram:
242:17 when we're talking to ChatGPT in our browser, every single time we type and
242:21 say something to ChatGPT, that is a user message, because that message coming in
242:25 is dynamic every time. So you can see right here the source for the prompt
242:29 that the AI agent will be listening for, as if it was ChatGPT, is the connected
242:33 chat trigger node. So we're set up right here and the agent will be reading that
242:37 every time. If we were feeding in information to this agent that wasn't
242:40 coming from the chat message trigger, we'd have to change that. But right now,
242:43 we're good. And if we go back to our diagram, this is basically the input
242:47 that we're feeding into the AI agent. So, as you can see, input goes into the
242:50 agent. And that's exactly what we have right here. Input going into the agent.
242:54 And then we have the system prompt. So, I'm going to click back into the agent.
242:57 And we can see right here, we have a system message, which is just telling
243:00 this AI agent, you are a helpful assistant. So, right now, we're just
243:03 going to leave it as that. And back in our diagram that is right here, its
243:07 instructions, which is called a system prompt. So the next thing we can see
243:10 that we need is we need to give our AI agent a brain, which will be a large
243:14 language model, and also memory. So I'm going to flip back into n8n. And you can
243:18 see we have two options right here. The first one is chat model. So I'm first of
243:21 all just going to click on the plus for chat model. I'm going to choose open
243:24 router. And we've already connected to open router. And now I just get to
243:27 choose from all of these different chat models to use. So I'm just going to go
243:34 ahead and choose GPT-4.1 mini. And I'm just going to rename this node
243:38 GPT-4.1 mini, just so we know which one we're using. Cool. So now we have our input,
243:45 our AI agent, and a brain. But let's give it some memory real quick, which is
243:48 as simple as just clicking the plus under memory. And I'm just going to for
243:52 now choose simple memory, which stores it in n8n. There are no credentials
243:56 required. And as you can see, the session ID is looking for the connected
244:00 chat trigger node. Because we're using the connected chat trigger node, we
244:03 don't have to change anything. We are good to go. So, this is basically the
244:07 core part of the agent, right? So, what I can do is I can actually talk to this
244:10 thing. So, I can say, "Hey," and we'll see what it says back. It's going to use
244:15 its memory. It's going to use its brain to actually answer us. And it
244:18 says, "Hello, how can I assist you?" I can say, "My name is Nate. I am 23 years
244:25 old." And now what I'm going to basically test is that it's storing all
244:29 of this as memory and it's going to know that. So now it says, "Nice to meet you,
244:31 Nate. How can I help you?" Now I'm going to ask it, you know, what's my name and how old am I?
244:40 So we'll send that off. And now it's going to be able to answer us. Your name
244:43 is Nate and you are 23 years old. How can I assist you further? So first of
244:47 all, the reason it's being so helpful is because its system message says you're a
244:51 helpful assistant. The next piece would be it's using its brain to answer us and
244:55 it's using its memory to make sure it's not forgetting stuff about our current
245:00 conversation. So those are the three parts right there. Input, AI agent,
245:04 brain, and instructions. And now it's time to add the tools. So in this
245:08 example, we're going to build a super simple personal assistant AI agent that
245:12 can do three things. It's going to be able to look in our contact database in
245:17 order to grab contact information. With that contact information, it's going
245:20 to be able to send an email, and it's going to be able to create a calendar event.
245:24 So, first thing we're going to do is we're going to set up our contact
245:27 database. And what I'm going to do for that is just I have this Google sheet.
245:31 Really simple. It just says name and email. This could be maybe you have your
245:34 contacts in Google contacts. You could connect that or an Air Table base or
245:38 whatever you want. This is just the actual tool, the actual integration that
245:42 we want to make to our AI agent. So, what I'm going to do is throw in a few
245:46 rows of example names and emails in here. Okay. So, we're just going to
245:48 stick with these three. We've got Michael Scott, Ryan Reynolds, and Oprah
245:51 Winfrey. And now, what we're going to be able to do is have our AI agent look at
245:55 this contact database whenever we ask it to send an email to someone or make a
245:59 calendar event with someone. If I go back into n8n, the first thing we
246:02 have to do is add a tool to actually access this Google sheet. So, I'm going
246:05 to click on tool. I'm going to type in Google sheet. It's as simple as that.
246:08 And you can see we have a Google Sheets tool. So, I'm going to click on that.
246:11 And now we have to set up our credential. You guys have already
246:15 connected to Google Sheets in the previous workflow, so it shouldn't be
246:17 too difficult. So choose your credential. And then the first thing is
246:20 a tool description. What we're going to do is we are going to just set this
246:24 automatically. And this basically describes to the AI agent what does this
246:28 tool do. So we could set it manually and describe ourselves, but if you just set
246:32 it automatically, the AI is going to be pretty good at understanding what it
246:35 needs to do with this tool. The next thing is a resource. So what are we
246:38 actually looking for? We're looking for a sheet within a document, not an entire
246:41 document itself. Then the operation is we want to just get rows. So I'm going to leave it
246:47 all as that. And then what we need to do is actually choose our document and then
246:50 the sheet within that document that we want to look at. So for document, I'm
246:54 going to choose contacts. And for sheet, there's only one. I'm just going to
246:57 choose sheet one. And then the last thing I want to do is just give this
247:01 actual tool a pretty intuitive name. So I'm just going to call this
247:06 contacts database. There you go. So now it should be super clear to this AI
247:09 agent when to use this tool. We may have to do some system prompting actually to
247:12 say like, hey, here are the different tools you have. But for now, we're just
247:15 going to test it out and see if it works. So what I'm going to do is open
247:19 up the chat and just ask it: can you please get Oprah Winfrey's contact
247:23 information. There we go. We'll send that off and we will watch it basically
247:27 think. And then there we go. Boom. It hit the Google Sheet tool that we wanted
247:31 it to. And if I open up the chat, it says Oprah Winfrey's contact
247:35 information is the email oprah@winfrey.com. If we go into the base, we can see that
247:39 is exactly what we put for her contact information. Okay, so we've confirmed
247:43 that the agent knows how to use this tool and that it can properly access
247:46 Google Sheets. The next step now is to add another tool to be able to send
247:49 emails. So, I'm going to move this thing over. I'm going to add another tool and
247:53 I'm just going to search for Gmail and click on Gmail tool. Once again, we've
247:57 already covered credentials. So, hopefully you guys are already logged in
248:00 there. And then what we need to do is just configure the rest of the tool. So
248:04 tool description: set automatically; resource: message; operation: send. And then
248:10 we have to fill out the To, the Subject, the email type, and the Message. What
248:14 we're able to do with our AI agents and tools is something super super cool. We
248:19 can let our AI agent decide how to fill out these three fields that will be
248:23 dynamic. And all I have to do is click on this button right here to the right
248:26 that says let the model define this parameter. So I'm going to click on that
248:29 button. And now we can see that it says defined automatically by the model. So
248:33 basically, if I said, hey, can you send an email to Oprah Winfrey saying this
248:39 and this, it would then interpret our message, our user input, and it would then
248:45 fill out who this is going to, what the subject is, and what the message is. So I'll
248:47 show you guys an example of that. It's super cool. So I'm just going to click
248:51 on this button for subject and also this button for message. And now we can see
248:56 the actual AI use its brain to fill out these three fields.
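Under the hood, clicking that button fills the field with one of n8n's $fromAI expressions, roughly like the ones below; the exact keys and descriptions here are illustrative, and the signature may differ, so inspect the generated expression in your own node:

```ts
// Illustrative $fromAI expressions as they'd appear in the tool's fields;
// the agent resolves them from your chat message at run time.
const toField = "={{ $fromAI('To', 'Recipient email address', 'string') }}";
const subjectField = "={{ $fromAI('Subject', 'Email subject line', 'string') }}";
const messageField = "={{ $fromAI('Message', 'Plain-text email body', 'string') }}";
```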
249:01 And then also, I'm just going to change the email type to text, because I like how it comes through as text. So real
249:06 quick, I just want to change this name to send email. And all we have to do now is
249:10 we're going to chat with our agent and see if it's able to send that email. All
249:13 right. So I'm sending off this message that asks to send an email to Oprah
249:17 asking how she's doing and if she has plans this weekend. And what happened is
249:21 it went straight to the send email tool. And the reason it did that is because in
249:25 its memory, it remembered that it already knows Oprah Winfrey's contact
249:29 information. So if I open chat, it says the email's been sent asking how she's
249:32 doing and if she has plans this weekend. Is there anything else that you would
249:36 like to do? So real quick before we go see if the email actually did get sent,
249:39 I'm going to click into the tool. And what we can see is on this left hand
249:44 side, we can see exactly how it chose to fill out these three fields. So for the
249:48 two, it put oprah@winfrey.com, which is correct. For the subject, it put
249:52 checking in. And for the message, it put hi Oprah. I hope this weekend finds you
249:55 well. How are you doing? Do you have any plans? Best regards, Nate. And another
250:00 thing that's really cool is the only reason that it signed off right here as
250:03 best regards Nate is because once again, it used its memory and it remembers that
250:07 our name is Nate. That's how it filled out those fields. Let me go over to my
250:11 email and we'll take a look. So, in our sent, we have the checking in subject.
250:15 We have the message that we just read in n8n. And then we have this little
250:18 thing at the bottom that says this email was automatically sent by n8n. We can
250:23 easily turn that off if we go into n8n, open up the tool, and add an option at
250:26 the bottom that says append n8n attribution. And then we just turn off
250:30 the append n8n attribution. And as you can see, if we click on add options,
250:33 there are other things that we can do as well. Like we can reply to the sender
250:36 only. We can add a sender name. We can add attachments. All this other stuff.
250:41 But at a high level and real quick setup, that is the send email tool. And
250:45 keep in mind, we still haven't given our agent any sort of system prompt besides
250:48 saying you're a helpful assistant. So, super cool stuff. All right, cool. And
250:52 now for the last tool, what we want to do is add a create calendar event. So, I'm going
250:59 to search calendar and grab a Google calendar node. We already should be set
251:03 up. Or if you're not, actually, all you have to do is just create new credential
251:06 and sign in real quick because you already went and created your whole
251:10 Google Cloud thing. We're going to leave the description as automatic. The resource is an event. The
251:15 operation is create. The calendar is going to be one that we choose from our
251:19 account. And now we have a few things that we want to fill out for this tool.
251:23 So basically, it's asking what time is the event going to start and what time
251:26 is the event going to end. So real quick, I'm just going to do the same
251:29 thing. I'm going to let the model decide based on the way that we interact with
251:33 it with our input. And then real quick, I just want to add one more field, which
251:36 is going to be a summary. And basically whatever gets filled in right here for
251:39 summary is what's going to show up as the name of the event in Google
251:42 calendar. But once again we're going to let the model automatically define this
251:49 field. So let's call this node create event. And actually one more thing I
251:52 forgot to do is we want to add an attendee. So we can actually let the
251:56 agent add someone to an event as well. So that is the new tool. We're going to
252:00 hit save. And remember no system prompts. Let's see if we can create a
252:04 calendar event with Michael Scott. All right. So, we're asking for
252:08 dinner with Michael at 6 p.m. What's going to happen is... okay, so
252:12 we're going to have to do some prompting, because we don't know Michael Scott's
252:15 contact information yet, but it went ahead and tried to create that event anyway.
252:19 So, it said that it created the event and let's click into the tool and see
252:23 what happened. So, it tried to send the event invite to michael.scott@example.com. So, it
252:29 completely made that up, because in our contacts base, Michael Scott's email is
252:33 mike@greatscott.com. So, it got that wrong. That's the first thing it got
252:38 wrong. The second thing it got wrong was the actual start and end date. So, yes,
252:42 it made the event for 6 p.m., but it made it for 6 p.m. on April 27th, 2024,
252:48 which was over a year ago. So, we can fix this by using the system prompt. So,
252:52 what I'm going to do real quick is go into the system prompt, and I'm just
252:55 going to make it just an expression and open it up full screen real quick. What
253:00 I'm going to say next is you must always look in the contacts database before
253:05 doing something like creating an event or sending an email. You need the
253:10 person's email address in order to do one of those actions. Okay, so that's a really simple
253:16 thing we can add. And then also what I want to tell it is what is today's
253:19 current date and time? So that if I say create an event for tomorrow or create
253:22 an event for today, it actually gets the date right. So, I'm just going to say
253:28 here is the current date/time. And all I have to do to give it access to
253:31 the current date and time is type two curly braces. And then right here you
253:35 can see $now, which says: a date-time representing the current
253:39 moment. So if I click on that, on the right-hand side in the result panel you
253:42 can see it's going to show the current date and time. So we're happy with that.
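Put together, the upgraded system prompt looks roughly like this, with the {{ $now }} expression resolving to the current date and time on every run; the wording is paraphrased, not an exact copy of the field:

```ts
// Paraphrased system prompt; {{ $now }} is an n8n expression that resolves
// to the current date/time each time the agent runs.
const systemPrompt = `
You are a helpful assistant.
You must ALWAYS look in the contacts database before creating an event or
sending an email; you need the person's email address to do either action.
Never make up someone's email address.
Here is the current date/time: {{ $now }}
`;
```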
253:45 Our system prompt has been a little bit upgraded and now we're going to just try
253:49 that exact same query again and we'll see what happens. So, I'm going to click
253:53 on this little repost message button. Send it again. And hopefully now, there
253:57 we go. It hits the contact database to get Michael Scott's email. And then it
254:00 creates the calendar event with Michael Scott. So, down here, it says, I've
254:04 created a calendar event for dinner with Michael Scott tonight at 6. If you need
254:07 any more assistance, feel free to ask. So, if I go to my calendar, we can see
254:11 we have a 2-hour long dinner with Michael Scott. If I click onto it, we
254:16 can see that the guest that was invited was mike@greatscott.com, which is exactly
254:21 what we see in our contact database. And so, you may have noticed it made this
254:23 event for 2 hours because we didn't specify. If I said, "Hey, create a
254:27 15-minute event," it would have only made it 15 minutes. So, what I'm going
254:31 to do real quick is a loaded prompt. Okay, so fingers crossed. We're saying,
254:35 "Please invite Ryan Reynolds to a party tonight that's only 30 minutes long at 8
254:39 p.m. and send him an email to confirm." So, what happened here? It went to go
254:43 create an event and send an email, but it didn't get Ryan Reynolds' email first.
254:47 So, if we click into this, we can see that it sent an email to
254:50 ryan.reynolds@example.com. That's not right. And it went to create an event with
254:54 ryan.reynolds@example.com, and that's not right either. But the good news is, if we
254:58 go to calendar, we can see that it did get the party right, as far as it's 8
255:02 p.m. and only 30 minutes. So, because it didn't take the right action, it's not
255:06 that big of a deal. We know now that we have to go and refine the system prompt.
255:10 So to do that, I'm going to open up the agent. I'm going to click into the
255:14 system prompt. And we are going to fix some stuff up. Okay. So I added two
255:17 sentences that say, "Never make up someone's email address. You must look
255:21 in the contact database tool." So as you guys can see, this is pretty natural
255:24 language. We're just instructing someone how to do something as if we were
255:27 teaching an intern. Okay. So what I'm going to do real quick is clear this
255:30 memory. So I'm just going to reset the session. And now we're starting from a
255:33 clean slate. And I'm going to ask that exact same query to do that multi-step
255:37 thing with Ryan Reynolds. All right. Take two. We're inviting Ryan Reynolds
255:40 to a party at 9:00 p.m. There we go. It's hitting the contacts database. And
255:43 now it's going to hit the create event and the send email tool at the same
255:47 time. Boom. I've scheduled a 30-minute party tonight at 9:00 p.m. and invited
255:51 Ryan Reynolds. So, let's go to our calendar. We have a 9 p.m. party for 30
255:55 minutes long, and it is ryan@deadpool.com, which is exactly what we
255:59 see in our contacts database. And then, if we go to our email, we can see now
256:03 that we have a party invitation for tonight to ryan@deadpool.com. But what you'll
256:08 notice is now it didn't sign off as Nate because I cleared that memory. So this
256:13 would be a super simple fix. We would just want to go to the system prompt and
256:15 say, "Hey, when you're sending emails, make sure you sign off as Nate." So
256:19 that's going to be it for your first AI agent build. This one is very simple,
256:24 but also hopefully really opens your eyes to how easy it is to plug in these
256:27 different tools. And it's really just about your configurations and your
256:31 system prompts because system prompting is a really important skill and it's
256:34 something that you kind of have to just try out a lot. You have to get a lot of
256:38 reps and it's a very iterative process. But anyways, congratulations. You just
256:42 built your first AI agent in probably less than 20 minutes. Now go add on a
256:46 few more tools, play around with a few more parameters, and just see how this
256:49 kind of stuff works. In this section, what I'm going to talk about is dynamic
256:54 memory for your AI agents. So if you remember, we had just set up this agent
256:58 and we were using simple memory and this was basically helping us keep
257:02 conversation history. But what we didn't yet talk about was the session ID and
257:07 what that exactly means. So basically think of a session ID as some sort of
257:12 unique identifier that identifies each separate conversation. So, if I'm
257:18 talking to you, person A, and you ask me something, I'm going to go look at
257:22 the history of our conversation, person A and Nate, and then I can read
257:25 that for context and then respond to you. But if person B talks to me, I'm
257:29 going to go look at my conversation history with person B before I respond
257:34 to them. And that way, I keep two people and two conversations completely
257:37 separate. So, that's what a session ID is. If we had some sort of
257:41 AI agent that was being triggered by an email, we would basically want to set
257:46 the session ID to the email address coming in, because then we know that the
257:50 agent's going to be uniquely responding to whoever actually sent the email that
257:55 triggered it.
257:58 just manipulate the session ID a little bit. So, I'm going to come into here and
258:02 I'm going to instead of using the chat trigger node for the session ID, I'm
258:06 going to just define it below. And I'm just going to do that exact example that
258:09 I just talked to you guys about with person A and person B. So I'm just going
258:13 to put a lowercase A in there as the session ID key. So once I save that, what I'm going
258:20 to do is just say hi. Now it's going to respond to me. It's going to update the
258:23 conversation history and say hi. I'm going to say my name is
258:27 um Bruce. I don't know why I thought of Bruce, but my name's Bruce. And now it
258:31 says nice to meet you Bruce. How can I assist you? Now what I'm going to do is
258:36 I'm going to change the session ID to B. We'll hit save. And I'm just going to
258:39 say what's my name? What's my name? And it's going to say I don't have access to your name
258:46 directly. If you'd like, you can provide your name or any other details you want
258:50 me to know. How can I assist you today? So person A is Bruce. Person B is no
258:54 name. And what I'm going to do is go back to putting the key as A. Hit save.
259:01 And now if I say, "What is my name?" with a misspelled my, it's going to say,
259:04 "Hey, Bruce." There we go. Your name is Bruce. How can I assist you further? And so
259:10 that's just a really quick demo of how you're able to actually have
259:15 dynamic conversations with multiple users in one single agent flow, because
259:20 you can make this field dynamic. To show you guys a
259:23 practical use of this, let's say you're wanting to connect your agent to Slack
259:27 or to Telegram or to WhatsApp or to Gmail. You want the memory to be dynamic,
259:31 and you want it to be unique for each person that's interacting with it.
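To make that concrete, here is a minimal TypeScript sketch of the idea (illustrative only, not n8n's internals): memory is a map from session key to message history, and the key could be a chat session, an email sender, or a Telegram user ID.

```typescript
// Minimal sketch of session-keyed memory (illustrative, not n8n's implementation).
type Message = { role: "user" | "assistant"; content: string };

const memory = new Map<string, Message[]>();

// sessionId might be "a", "b", or an email address from an incoming message.
function remember(sessionId: string, message: Message): void {
  const history = memory.get(sessionId) ?? [];
  history.push(message);
  memory.set(sessionId, history);
}

function getContext(sessionId: string): Message[] {
  // Each session only sees its own history, so person A and person B stay separate.
  return memory.get(sessionId) ?? [];
}
```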
259:35 So, what I have here is a Gmail trigger. I'm going to hit test workflow, which should
259:38 just pull in an email. So, when we open up this email, we can see like the
259:41 actual body of the email. We can see, you know, like history. We can see a
259:45 thread ID, all this kind of stuff. But what I want to look at is who is the
259:48 email from? Because then if I feed this into the AI agent and first of all we
259:53 would have to change the actual um user message. So we are no longer talking to
259:57 our agent with the connected chat trigger node, right? We're connecting to
260:01 it with Gmail. So I'm going to click to find below. The user message is
260:04 basically going to be whatever you want the agent to look at. So don't even
260:09 think about n8n right now. If you had an agent to help you with your emails, what
260:11 would you want it to read? You'd want it to read maybe a combination of the
260:15 subject and the body. So that's exactly what I'm going to do. I'm just going to
260:19 type in subject. Okay, here's the subject down here. And I'm going to drag
260:22 that right in there. And then I'm just going to say body. And then I would drag
260:27 in the actual body snippet. And it's a snippet right now because in the actual
260:31 Gmail trigger, we have this flipped on as simplified. If we turn that off, it
260:34 would give us not a snippet. It would give us a full email body. But for right
260:37 now, for simplicity, we'll leave it simplified. But now you can see that's
260:40 what the agent's going to be reading every time, not the connected chat
260:44 trigger node. And before we hit test step, what we want to do is we want to
260:47 make the sender of this email also the session key for the simple memory. So
260:54 we're going to define below and what I'm going to do is find the from field which
260:59 is right here and drag that in. So now whenever we get a new email, we're going
261:02 to be looking at conversation history from whoever sent that email to trigger
261:06 this whole workflow. So I'll hit save and basically what I'm going to do is
261:10 just run the agent. And what it's going to do is update the memory. It's going
261:13 to be looking at the correct thing and it's taking some action for us. So,
261:17 we'll take a look at what it does. But basically, it said the invitation email
261:20 for the party tonight has been sent to Ryan. If you need any further
261:23 assistance, please let me know. And the reason why it did that is because the
261:27 actual user message basically was saying we're inviting Ryan to a party. So,
261:31 hopefully that clears up some stuff about dynamic user messages and
261:36 dynamic memory. And now you're on your way to building some pretty cool agentic
261:39 workflows. And something important to touch on real quick is the memory
261:43 within the actual node. What you'll notice is that there is a context window
261:47 length parameter. And this says how many past interactions the model receives as
261:50 context. So this is definitely more of the short-term memory because it's only
261:53 going to be looking at the past five interactions before it crafts its
261:57 response. And this is not just with a simple memory node. What we have here is
262:01 if I delete this connection and click on memory, you can see there are other
262:04 types of memory we can use for our AI agents. Let's say, for example, we're
262:07 doing Postgres, which later in this course you'll see how to set up.
262:10 In Postgres, you can see that there's also a context window length. So
262:14 just to show you guys an example of like what that actually looks like. What
262:16 we're going to do is just connect back to here. I'm going to drag in our chat
262:20 message trigger which means I'm going to have to change the input of the AI
262:23 agent. So we're going to get rid of this whole um defined below with the subject
262:27 and body. We're going to drag in the connected chat trigger node. Go ahead
262:31 and give this another save. And now I'm just going to come into the chat and
262:37 say, "Hello, Mr. Agent. What is going on here? We have the memory is messed up."
262:41 So remember, I just changed the session ID from our chat trigger to the Gmail
262:48 trigger um the address, the email address of whoever just sent us the
262:50 email. So I'm going to have to go change that again. I'm just going to simply
262:54 choose connected chat trigger node. And now it's referencing the correct session
262:57 ID. Our variable is green. We're good to go. We'll try this again. Hello, Mr.
263:01 Agent. It's going to talk to us. So, I just said that I'm Nate. Okay. Nice to meet you, Nate. How
263:14 can I assist you? My favorite color is blue. And I'm going to say, you know,
263:21 tell me about myself. Okay. So, it's using all that memory, right? We
263:24 basically saw a demo of this, but it basically says, other than your name and
263:27 your favorite color is blue, what else is there about you? So if I go into the
263:31 agent and I click over here into the agent logs, we can see basically the
263:36 order of operations that the agent took in order to answer us. So the first
263:40 thing that it does is it uses its simple memory. And that's where you can see
263:43 down here, these are basically the past interactions that we've had, which was
263:49 um hello Mr. Agent, my name is Nate, my favorite color is blue. And this would
263:52 basically cap out at five interactions. So that's all we're basically saying in
263:57 this context window length right here. So, just wanted to throw that out there
264:00 real quick. This is not going to be absolutely unlimited memory that remembers
264:04 everything you've ever said to your agent. We would have to set that up in a
264:07 different way.
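Before moving on, that context-window behavior is easy to picture in code. A minimal sketch, assuming one interaction means a user message plus the assistant's reply (illustrative only, not n8n's internals):

```typescript
// Illustrative sketch of a context window over chat history (not n8n's internals).
type Message = { role: "user" | "assistant"; content: string };

// With contextWindowLength = 5, only the five most recent interactions
// (ten messages) are passed to the model; anything older is simply not seen.
function windowedContext(history: Message[], contextWindowLength = 5): Message[] {
  return history.slice(-contextWindowLength * 2);
}
```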
264:10 All right, so you've got your agent up and running. You have your
264:15 simple memory set up, but something that I alluded to in that video was setting
264:17 up memory outside of n8n, which could be something like Postgres. So in this
264:21 next one, we're going to walk through the full setup of creating a Supabase
264:25 account and connecting your Postgres and your Supabase, so that you can have
264:28 short-term memory with Postgres and also connect a vector
264:30 database with Supabase. So let's get started. So today I'm going to be
264:34 showing you guys how to connect PostgreSQL and Supabase to n8n. So
264:36 what I'm going to be doing today is walking through signing up for an
264:40 account, creating a project, and then connecting them both to n8n so you guys
264:43 can follow every step of the way. But real quick: Postgres is an open-source
264:46 relational database management system where you're able to use plugins like
264:49 pgvector if you want vector similarity search. In this case, we're just going
264:52 to be using Postgres as the memory for our agent. And then Supabase is a
264:55 backend-as-a-service built on top of Postgres. And in today's example, we're going to be using that as the vector database. But I don't
264:58 want to waste any time. Here we are in n8n. And what we know we're going to
265:01 do here for our agent is give it memory with Postgres and access to a vector
265:05 database in Supabase. So for memory, I'm going to click on this plus and
265:07 click on Postgres chat memory. And then we'll set up this credential. And then
265:10 over here we want to click on the plus for tool. We'll grab a Supabase vector
265:13 store node, and this is where we'll hook up our Supabase credential. So
265:16 whenever we need to connect to these third-party services, what we have to do
265:19 is come into the node, go to our credential, and then we want to create a
265:22 new one. And then we have all the stuff to configure, like our host, our username,
265:26 our password, our port, all this kind of stuff. So we have to hop into Supabase
265:30 first, create an account, create a new project, and then we'll be able to access
265:33 all this information to plug in. So here we are in Supabase. I'm going to be
265:35 creating a new account like I said just so we can walk through all of this step
265:38 by step for you guys. So, first thing you want to do is sign up for a new
265:41 account. So, I just got my confirmation email. So, I'm going to go ahead and
265:43 confirm. Once you do that, it's going to have you create a new organization. And
265:46 then within that, we create a new project. So, I'm just going to leave
265:48 everything as is for now. It's going to be personal. It's going to be free. And
265:52 I'll hit create organization. And then from here, we are creating a new
265:54 project. So, I'm going to leave everything once again as is. This is the
265:57 organization we're creating the project in. Here's the project name. And then
266:00 you need to create a password. And you're going to have to remember this
266:02 password to hook up to our Supabase node later. So, I've entered my password. I'm
266:06 going to copy this because like I said, you want to save this so you can enter
266:08 it later. And then we'll click create new project. This is going to be
266:11 launching up our project. And this may take a few minutes. So, um, just have to
266:15 be patient here. As you can see, we're in the screen. It's going to say setting
266:18 up project. So, we pretty much are just going to wait until our project's been
266:21 set up. So, while this is happening, we can see that there's already some stuff
266:23 that may look a little confusing. We've got project API keys with a service role
266:27 secret. We have configuration with a different URL and some sort of JWT
266:30 secret. So, I'm going to show you guys how you need to access what it is and
266:35 plug it into the right places in n8n. But, as you can see, we got launched to
266:38 a different screen. The project status is still being launched. So, just going
266:41 to wait for it to be complete. So, everything just got set up. We're now
266:44 good to connect to n8n. And what you want to do, typically, is come down
266:47 to project settings and you click on database. And this is where everything
266:50 would be to connect. But it says connection string has moved. So, as you
266:52 can see, there's a little button up here called connect. So, we're going to click
266:55 on this. And now, this is where we're grabbing the information that we need
266:58 for Postgress. So this is where it gets a little confusing because there's a lot
267:01 of stuff that we need for Postgress. We need to get a host, a username, our
267:05 password from earlier when we set up the project, and then a port. So all we're
267:08 looking for are those four things, but we need to find them in here. So what
267:11 I'm going to do is change the type to PostgreSQL. And then I'm going to go
267:15 down to the transaction pooler, and this is where we're going to find the things
267:17 that we need. The first thing that we're looking for is the host, which if you
267:20 set it up just like me, is going to be after the -h. So it's going to be aws,
267:24 then our region, then .pooler.supabase.com. So we're going to grab that, copy it, and then we're
267:29 going to paste that into the host section right there. So that's what it
267:32 should look like for host. Now we have a database and a username to set up. So if
267:36 we go back into that Supabase page, we can see we have a -d and a -U. So the database is
267:40 going to stay as postgres, but for the user, we're going to grab everything
267:43 after the -U, which is going to be postgres. followed by a string of
267:47 characters (your project ref). So I'm going to paste that in here under the user. And
267:50 for the password, this is where you're going to paste in the password that you
267:53 use to set up your Supabase project. And then finally at the bottom, we're
267:56 looking for a port, which is by default 5432. But in this case, we're going to
267:59 grab the port from the transaction pooler right here, which is following
268:04 the lowercase P. So we have 6543. I'm going to copy that, paste that into here
268:07 as the port. And then we'll hit save, and we'll see if we get connection
268:10 tested successfully. There we go. We got green. And then I'm just going to rename
268:13 this so I can keep it organized.
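Collected in one place, the four values we just hunted down look something like this. All values here are hypothetical placeholders; yours come from the Connect panel and the password you set when creating the project:

```typescript
// Hypothetical Postgres credential values for n8n (placeholders only).
const postgresCredential = {
  host: "aws-0-us-east-1.pooler.supabase.com", // transaction pooler host (the -h value)
  database: "postgres",                        // the -d value, left as-is
  user: "postgres.abcdefghijklmnop",           // the -U value: postgres. plus your project ref
  password: "<your-project-password>",         // set when you created the Supabase project
  port: 6543,                                  // pooler port (the -p value); direct connections default to 5432
};
```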
268:16 So there we go. We've connected to Postgres as our chat memory. We can see that it is going to be using the
268:19 connected chat trigger node; that's how it's getting the key to store
268:22 this information. And it's going to be storing it in a table in Supabase called
268:25 n8n_chat_histories. So real quick, I'm going to talk to the agent. I'm just
268:28 going to disconnect the Supabase tool so we don't get any errors. So now when I send
268:31 off hello AI agent, it's going to respond to us with something like hey,
268:34 how can I help you today? Hello, how can I assist you? And now you can see that
268:37 there were two things stored in our Postgress chat memory. So we'll switch
268:40 over to Supabase. And now we're going to come up here on the left and go to the
268:43 table editor. We can see we have a new table that we just created called
268:46 n8n_chat_histories. And then we have two messages in here. So the first one, as
268:49 you can see was a human type and the content was hello AI agent which is what
268:53 we said to the AI agent and then the second one was a type AI and this is the
268:58 AI's response to us. So it said hello how can I assist you today. So this is
269:01 where all of your chats are going to be stored based on the session ID and just
269:05 once again this session ID is coming from the connected chat trigger node. So
269:08 it's just coming from this node right here. As you can see, there's the
269:11 session ID that matches the one in our chat memory table. And that is how
269:16 it's using it to store sort of like the unique chat conversations. Cool. Now
269:20 that we have Postgres chat memory set up, let's hook up our Supabase vector
269:24 store. So, we're going to drag it in. And now we need to go up here and
269:27 connect our credentials. So, I'm going to create a new credential. And we can see
269:31 that we need two things: a host and a service role secret. And the host is not
269:34 going to be the same one as the host that we used to set up our Postgres. So
269:37 let's hop into Supabase and grab this information. So back in Supabase, we're
269:41 going to go down to the settings. We're going to click on Data API, and then we
269:45 have our project URL and our service role secret. So this is all
269:49 we're using for the URL. We're going to copy this, go back to n8n, and then we'll
269:52 paste this in as our host. As you can see, it's supposed to be https://
269:56 and then your Supabase project ref, ending in .supabase.co. So we'll paste that in, and you can see
270:00 that's what we have. Also, keep in mind this is because I launched an
270:03 organization and a project in Supabase's cloud. If you were to
270:06 self-host this, it would be a little different, because you'd have to access
270:09 your local host. And then of course, we need our service role secret. So back in
270:13 Supabase, I'm going to reveal, copy, and then paste it into n8n. So let me
270:16 do that real quick. And as you can see, I got that huge token. Just paste it in.
270:19 So what I'm going to do now is save it. Hopefully it goes green. There we go. We
270:22 have connection tested successfully. And then once again, I'm just going to rename
270:25 this. The next step from here would be to create our Supabase vector store
270:28 within the platform that we can actually push documents into. So you're going to
270:31 click on docs right here. You are going to go to the quick start for setting up
270:35 your vector store, and then all you have to do right here is copy this command.
270:38 So in the top right, copy this script. Come back into Supabase. You'll go on
270:42 the left-hand side to the SQL editor. You'll paste that command in here. You
270:44 don't change anything at all. You'll just hit run. And then you should
270:48 see down here: success, no rows returned. And then in the table editor, we'll have
270:51 a new table over here called documents. So this is where, when we're actually
270:54 vectorizing our data, it's going to go into this table. Okay. So, I'm
270:57 just going to do a real quick example of putting a Google Doc into our Supabase
271:00 vector database, just to show you guys that everything's connected the way it
271:03 should be and working as it should. So, I'm going to grab a Google Drive node
271:06 right here. I'm going to click download file. I'm going to select a file to
271:10 download, which in this case is just going to be body shop services terms
271:13 and conditions, and then hit test step. And we'll see the binary data, which is a
271:17 doc file over here. And now we have that information. And what we want to do with
271:21 it is add it to the Supabase vector store. So, I'm going to type in
271:25 Supabase. We'll see vector store. The operation is going to be add documents
271:28 to vector store. And then we have to choose the right credential, because we
271:31 have to choose the table to put it in. In this case, we already made
271:34 a table; as you can see in our Supabase, it's called documents. So back in here,
271:38 I'm going to choose the credential I just made. I'm going to choose insert
271:41 documents, and I'm going to choose the table to insert into: not the
271:45 n8n_chat_histories table. We want to insert this into documents, because this one is set up for
271:49 vectorization. From there I have to choose our document loader as well as
271:52 our embeddings. So I'm not really going to dive into exactly what this all means
271:55 right now. If you're kind of confused and you're wanting a deeper dive on RAG
271:58 and building agents, definitely check out my paid community. We've got
272:01 different deep dive topics about all this kind of stuff. But I'm just going
272:03 to set this up real quick so we can see the actual example. I'm just choosing
272:07 the binary data to load in here. I'm choosing the embedding and I'm choosing
272:10 our text splitter which is going to be recursive. And so now all I have to do
272:13 here is hit run. It's going to be taking that binary data of that body shop file.
272:17 It split it up. And as you can see there's three items. So if we go back
272:20 into our Supabase vector store and we hit refresh, we now see three items in our
272:25 vector database and we have the different content and all this
272:28 information here like the standard oil change, the synthetic oil change is
272:31 coming from our body shop document that I have right here that we put in there
272:35 just to validate the RAG setup. And we know that this is a vector database
272:38 rather than a relational one because we can see we have our vector embedding
272:41 over here which is all the dimensions. And then we have our metadata. So we
272:44 have stuff like the source and um the blob type, all this kind of stuff. And
272:47 this is where we could also go ahead and add more metadata if we wanted to.
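As a side note, that documents table can also be queried outside of n8n. Here's a hedged sketch using the supabase-js client and the quick-start's match_documents function; the names assume the default quick-start script, and queryEmbedding would come from the same embedding model used at insert time:

```typescript
import { createClient } from "@supabase/supabase-js";

// Hypothetical values: use your project URL and service role secret from the Data API settings.
const supabase = createClient("https://<project-ref>.supabase.co", "<service-role-secret>");

// Assumes the quick-start SQL created the `documents` table and a `match_documents` function.
async function searchDocuments(queryEmbedding: number[]) {
  const { data, error } = await supabase.rpc("match_documents", {
    query_embedding: queryEmbedding, // embedding vector for the user's question
    match_count: 3,                  // top-k chunks to return
  });
  if (error) throw error;
  return data; // matched chunks with content and metadata
}
```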
272:51 Anyways, now that we have vectors in our documents table, we can hook up the
272:55 actual agent to the correct table. So in here, what I'm going to call this is um
273:00 body shop. For the description, I'm going to say use this to get information
273:06 about the body shop. And then from the table name, we have to choose the
273:08 correct table, of course. So we know that we just put all this into something
273:11 called documents. So I'm going to choose documents. And finally, we just have to
273:15 choose our embeddings, of course, so that it can embed the query and pull
273:18 stuff back accurately. And that's pretty much it. We have our AI agent set up.
273:22 So, let's go ahead and do a test and see what we get back. So, I'm going to go
273:25 ahead and say what brake services are offered at the body shop. It's going to
273:29 update the Postgres memory. So, now we'll be able to see that query. It hit
273:32 the Supabase vector store in order to retrieve that information and then
273:36 create an augmented generated answer for us. And now we have the body shop offers
273:40 the following brake services. 120 per axle for replacement, 150 per axle for
273:45 rotor replacement, and then full brake inspection is 30 bucks. So, if we click
273:49 back into our document, we can see that that's exactly what it just pulled. And
273:53 then, if we go into our vector database within Supabase, we can find that
273:56 information in here. But then we can also click on n8n_chat_histories, and we
274:00 can see we have two more chats. So, the first one was a human, which is what we
274:03 said. What brake services are offered at the body shop? And then the second one
274:07 was an AI message, which is: the body shop offers the following brake services,
274:10 blah blah blah. And this is exactly what it just responded to us with within n8n
274:14 down here, as you can see. And so keep in mind, this AI agent has zero prompting.
274:17 We didn't even open up the system message. All that's in here is "you are a
274:20 helpful assistant." But if you are setting this up, what you want to do is
274:23 explain its role, and you want to tell it: you have access to a
274:28 vector database, it is called X, it has information about X, Y, and Z, and you
274:32 should use it when a client asks about X, Y, and Z.
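For what that could look like in practice, here's an illustrative system message template; the tool name and topics are placeholders, not taken from the video:

```typescript
// Illustrative system message template (tool name and topics are placeholders).
const systemMessage = `
You are a helpful assistant for the body shop.

You have access to a vector database tool called "body_shop".
It contains information about services, pricing, and terms and conditions.
Use it whenever a client asks about services, pricing, or policies.
Never make up an answer; if the tool returns nothing relevant, say so.
`;
```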
274:35 Anyways, that's going to be it for this one. Supabase and Postgres are super powerful tools to
274:38 connect up as a database for your agents, whether it's going to be
274:41 relational or vector databases, and you've got lots of options with, you
274:44 know, self-hosting and some good options for security and scalability there. Now
274:47 that you guys have built an agent and you see the way that an agent is able to
274:50 understand what tools it has and which ones it needs to use, what's really
274:55 cool and powerful about n8n is that we can have a tool for an AI agent
274:59 be a custom workflow that we built out in n8n, or we can build out a custom
275:04 agent in n8n and then give our main agent access to call on that lower
275:08 agent. So what I'm about to share with you guys next is an architecture you can
275:11 use when you're building multi-agent systems. It's basically called having an
275:15 orchestrator agent and sub-agents, or parent agents and child agents. So,
275:18 let's dive into it. I think you guys will think it's pretty cool. So, a
275:22 multi-agent system is one where we have multiple autonomous AI agents working
275:26 together in order to get the job done, and they're able to talk to each other
275:28 and they're able to use the tools that they have access to. What we're going to
275:31 be talking about today is a type of multi-agent system called the
275:34 orchestrator architecture. And basically what that means is that we have one agent
275:38 up here. I call it the parent agent and then I call these child agents. But we
275:41 have an orchestrator agent that's able to call on different sub-agents. And the
275:46 best way to think about it is this agent's only goal is to understand the
275:50 intent of the user. Whether that's through Telegram or through email,
275:53 whatever it is, understanding that intent and then understanding: okay, I
275:57 have access to these four agents and here is what each one is good at. Which
276:01 one or which ones do I need to call in order to actually achieve the end goal?
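As a mental model (plain TypeScript, not n8n code), an orchestrator is roughly a loop that classifies intent and delegates. A minimal sketch, where classifyIntent and callAgent are hypothetical helpers:

```typescript
// Conceptual orchestrator sketch: classifyIntent and callAgent are hypothetical helpers.
type AgentName = "contact" | "contentCreator" | "calendar" | "email";

declare function classifyIntent(query: string): Promise<AgentName[]>;
declare function callAgent(name: AgentName, input: string): Promise<string>;

async function orchestrate(userQuery: string): Promise<string> {
  // The orchestrator's only job: decide which sub-agents the request needs.
  const plan = await classifyIntent(userQuery); // e.g. ["contact", "contentCreator", "calendar", "email"]

  let context = userQuery;
  for (const agent of plan) {
    // Each sub-agent does its specialized work and returns a result the
    // orchestrator can pass along (a contact's email, a drafted blog, etc.).
    context += "\n" + (await callAgent(agent, context));
  }
  return context; // final response summarizing what was done
}
```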
276:05 So, in this case, if I'm saying to the agent, can you please write me a quick
276:11 blog post about dogs and send that to Dexter Morgan, and can you also create a
276:14 dinner event for tonight at 6 p.m. with Michael Scott? And thank you. Cool. So,
276:20 this is a pretty loaded task, right? And can you imagine if this one agent had
276:25 access to all of these like 15 or however many tools and it had to do all
276:29 of that itself, it would be pretty overwhelmed and it wouldn't be able to
276:32 do it very accurately. So, what you can see here is it is able to just
276:35 understand, okay, I have these four agents. They each have a different role.
276:38 Which ones do I need to call? And you can see what it's doing is it called the
276:41 contact agent to get the contact information. Right now, it's calling the
276:44 content creator agent. And now that that's finished up, it's probably going
276:47 to call the calendar agent to make that event. And then it's going to call the
276:50 email agent in order to actually send that blog that we had the content
276:54 creator agent make. And then you can see it also called this little tool down
276:56 here called Think. If you want to see a full video where I broke down what that
276:59 does, you can watch it right up here. But we just got a response back from the
277:03 orchestrator agent. So, let's see what it said. All right, so it said, "The
277:06 blog post about dogs has been sent to Dexter Morgan. A dinner event for
277:09 tonight at 6 p.m. with Michael Scott has been created. And if you need anything
277:12 else, let me know." And just to verify that that actually went through, you can
277:14 see we have a new event for dinner at 6 p.m. with Michael Scott. And then in our
277:18 email, in our sent folder, we can see that we have a full blog post sent to Dexter
277:22 Morgan. And you can see that we also have a link right here that we can click
277:24 into, which means that the content creator agent was able to do some
277:28 research, find this URL, create the blog post, and send that back to the
277:31 orchestrator agent. And then the orchestrator agent remembered, okay, so
277:34 I need to send a blog post to Dexter Morgan. I've got his email from the
277:37 contact agent. I have the blog post from the content creator agent. Now all I
277:40 have to do is pass it over to the email agent to take care of the rest. So yes,
277:44 it's important to think about the tools because if this main agent had access to
277:47 all those tools, it would be pretty overwhelming. But also think about the
277:50 prompts. So, in this ultimate assistant prompt, it's pretty short, right? All I
277:54 had to say was, "You're the ultimate assistant. Your job is to send the
277:57 user's query to the correct tool. You should never be writing emails or ever
278:00 creating summaries or doing anything. You just need to delegate the task." And
278:04 then what we did is we said, "Okay, you have these six tools. Here's what
278:06 they're called. Here's when you use them." And it's just super super clear
278:10 and concise. There's almost no room for ambiguity. We gave it a few rules, an
278:14 example output, and basically that's it. And now it's able to interpret any query
278:17 we might have, even if it's a loaded query. As you can see, in this case, it
278:20 had to call all four agents, but it still got it right. And then when it
278:24 sends over something to like the email agent, for example, we're able to give
278:27 this specific agent a very, very specific system prompt because we only
278:31 have to tell it about you only have access to these email tools. And this is
278:35 just going back to the whole thing about specialization. It's not confusing. It
278:39 knows exactly what it needs to do. Same thing with these other agents. You know,
278:41 the calendar agent, of course, has its own prompts with its own set of calendar
278:45 tools. The contact agent has its own prompt with its own set of contact
278:48 tools. And then of course we have the content creator agent which has to know
278:52 how to not only do research using its Tavily tool, but also format the
278:57 blog post with proper HTML. As you can see here, there was a title,
279:00 there were headings, there were inline links, all that kind of stuff. And
279:04 so because we have all of this specialization, can you imagine if we had
279:08 all of that system prompt thrown into one agent and gave it access to all
279:12 the tools? It just wouldn't be good. And if you're still not convinced, think about
279:15 the fact that for each of these different tasks, because we know what
279:18 each agent is doing, we're able to give it a very specific chat model because,
279:21 you know, like for something like content creation, I like to use Claude
279:24 3.7, but I wouldn't want to use something as expensive as Claude 3.7 just
279:28 to get contacts or to add contacts to my contact database. So that's why I went
279:32 with Flash here. And then for these ones, I'm using 4.1 Mini. So you're able
279:36 to have a lot more control over exactly how you want your agents to run. And so
279:39 I pretty much think I hit on a lot of that, but you know, benefits of
279:43 multi-agent system, more reusable components. So now that we have built
279:46 out, you know, an email agent, whenever I'm building another agent ever, and I
279:49 realize, okay, maybe it would be nice for this agent to have a couple email
279:53 functions. Boom, I just give it access to the email agent because we've already
279:56 built it and this email agent can be called on by as many different workflows
279:59 as we want. And when we're talking about reusable components, that doesn't have
280:04 to just mean these agents are reusable. It could also be workflows that are
280:07 reusable. So, for example, if I go to this AI marketing team video, if you
280:09 haven't watched it, I'll leave a link right up here. These tools down here,
280:14 none of these are agents. They're all just workflows. So, for example, if I
280:17 click into the video workflow, you can see that it's sending data over to this
280:20 workflow. And even though it's not an agent, it still is going to do
280:24 everything it needs to do and then send data back to that main agent. Similarly,
280:28 with this create image tool, if I was to click into it real quick, you can see
280:31 that this is not an agent, but what it's going to do is it's going to take
280:34 information from that orchestrator agent and do a very specific function. That
280:38 way, this main agent up here, all it has to do is understand, I have these
280:41 different tools, which one do I need to use. So, reusable components and also
280:45 we're going to have model flexibility, different models for different agents.
280:48 We're going to have easier debugging and maintenance because like I said with the
280:51 whole prompting thing, if you tried to give that main agent access to 25 tools
280:56 and in the prompt you have to say here's when you use all 25 tools and it wasn't
280:59 working, you wouldn't know where to start. You would feel really overwhelmed
281:02 as to how you would even fix this prompt. So by splitting things up into
281:07 small tasks and specialized areas, it's going to make it so much easier.
281:10 Exactly like I just covered: point number four, clearer prompt logic and better
281:14 testability. And finally, it's a foundation for multi-turn agents and
281:17 agent memory. Just because we're sending data from main agent to sub-agent
281:20 doesn't mean we're losing the context of, like, we're talking to Nate right now
281:24 or we're talking to Dave right now. We can still have that memory pass between
281:28 workflows. So things get really powerful, and it's just pretty cool.
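One way to picture that: the payload a parent workflow sends to a sub-workflow can carry the session key along with the task itself. A hedged sketch of such a payload shape (field names are illustrative, not prescribed by n8n):

```typescript
// Illustrative parent-to-sub-workflow payload (field names are not prescribed by n8n).
interface SubWorkflowPayload {
  query: string;  // the task the orchestrator is delegating
  chatId: string; // session key (e.g., a Telegram chat ID or an email sender)
                  // so the sub-workflow can read and write the same conversation memory
}

const example: SubWorkflowPayload = {
  query: "Create an image for the blog post about dogs",
  chatId: "telegram:123456789",
};
```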
281:32 Okay, so we've seen a demo. I think you guys understand the benefits here. Just
281:35 one thing I wanted to throw out before we get into like a live build of a
281:40 multi-agent system is: just because this is cool and there are benefits doesn't
281:44 mean it's always the right thing to do. So if you're forcing a multi-agent
281:47 orchestrator framework into a process that could be a simple single agent or a
281:52 simple AI workflow, all you're going to be doing is you're going to be
281:54 increasing the latency. You're going to be increasing the cost because you're
281:58 making more API calls, and you're probably going to be increasing the
282:02 amount of error, because the golden rule is that you want to eliminate as
282:06 much data transfer between workflows as you can; that's where you can run
282:10 into issues. But of course, there are times when you do need
282:13 dedicated agents for certain functions. So, let's get into a new workflow and
282:17 build a really simple example of an orchestrator agent that's able to call
282:21 on a sub agent. All right. So, what we're going to be doing here is we're
282:24 going to build an orchestrator agent. So, I'm going to hit tab. I'm going to
282:27 type in AI agent and we're going to pull this guy in. And we're just going to be
282:29 talking to this guy using that little chat window down here for now. So, first
282:33 thing we need to do as always is connect a brain. I'm going to go ahead and grab
282:36 an OpenRouter node. And we're just going to throw in a 4.1 mini. And I'll just
282:40 change this name real quick so we can see what we're using. And from here,
282:44 we're basically just going to connect to a subworkflow. And then we'll go build
282:48 out that actual subworkflow agent. So the way we do it is we click on this
282:52 plus under tool. And what we want to do is Call n8n Workflow Tool, because you
282:56 can see it says: uses another n8n workflow as a tool; allows packaging any
283:00 n8n node as a tool. So it's super cool. That's how we can send data to
283:03 these like custom things that we built out. As you saw earlier when I showed
283:06 that little example of the marketing team agent, that's how we can do it. So
283:10 I'm going to click on this. And basically when you click on this,
283:13 there's a few things to configure. The first one is a description of when do
283:17 you use this tool. You'll kind of tell the agent that here and you'll also be
283:20 able to tell a little bit in a system prompt, but you have to tell it when to
283:23 use this tool. And then the next thing is actually linking the tool. So you can
283:27 see we can choose from a list of our different workflows in n8n. You can see
283:30 I have a ton of different workflows here, but all you have to do is you have
283:32 to choose the one that you want this orchestrator agent to send data to. And
283:36 one thing I want to call attention to here is this text box which says the
283:40 tool will call the workflow you defined below and it will look in the last node
283:44 for the response. The workflow needs to start with an execute workflow trigger.
283:48 So what does this mean? Let's just go build another workflow and we will see
283:51 exactly what it means. So I'm going to open up a new workflow which is going to
283:54 be our sub agent. So, I'm going to hit tab to open up the nodes. And it's
283:57 obviously prompting us to choose a trigger. And we're going to choose this
284:00 one down here that says when executed by another workflow, runs the flow when
284:04 called by the execute workflow node from a different tool. So, basically, the
284:07 only thing that can access this node and send data to this node is one of these
284:12 bad boys right here. So, these two things are basically just connected and
284:15 data is going to be sending between them. And what's interesting about this
284:18 node is you can have a couple ways that you accept data. So, by default, I
284:21 usually just put it on accept all data. And this will put things into a field
284:25 right here called query. But if you wanted to, you could also have it send
284:29 over specific fields. So, if you wanted to only get like, you know, a phone
284:32 number and you wanted to get a name and you wanted to get an email and you
284:36 wanted those all to already be in three separate fields, that's how you could do
284:39 that. And a practical example of that would be in my marketing team right here
284:43 in the create image. You can see that I'm sending over an image title, an
284:47 image prompt, and a chat ID. And that's another good example of being able to
284:51 send, you know, like memory over because I have a chat ID coming from here, which
284:55 is memory to the agent right here. But then I can also send that chat ID to the
284:59 next workflow if we need memory to be accessed down here as well. So in this
285:02 case, just to start off, we're not going to be sending over specified fields.
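Sketched as types, those two input modes look roughly like this (shapes are illustrative, not n8n's exact internals; the defined-fields example mirrors the marketing-team tool mentioned above):

```typescript
// Illustrative input shapes for the "when executed by another workflow" trigger.

// Mode 1: "accept all data" -- everything arrives in a single field, typically `query`.
type AcceptAllInput = { query: string };

// Mode 2: defined fields -- the caller maps each value into its own named field.
type DefinedFieldsInput = {
  imageTitle: string;
  imagePrompt: string;
  chatId: string; // memory key passed along, as in the marketing-team example
};
```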
285:06 We're just going to do accept all data and let us connect an AI agent to this
285:10 guy. So I'm going to type in AI agent. We'll pull this in. The first thing we
285:14 need to do is we need to change this because we're not going to be talking
285:17 through the connected chat trigger node as we know because we have this trigger
285:21 right here. So what we're going to do is save this workflow. So now it should
285:24 actually register an end that we have this workflow. I'm going to go back in
285:27 here, and we're just going to connect it. So we know that our workflow is called sub-
285:32 agent. So grab that right there. And now you can see it says the sub-workflow is
285:35 set up to receive all input data. Without specific inputs, the agent will
285:39 not be able to pass data to this tool. You can define the specific inputs in
285:42 the trigger. So that's exactly what I just showed you guys with changing that
285:45 right there. So what I want to do is show how data gets here so we can
285:49 actually map it so the agent can read it. So what we need to do before we can
285:52 actually test it out is we need to make sure that this orchestrator agent
285:56 understands what this tool will do and when to use it. So let's just say that
285:59 this one's going to be an email agent. First thing I'm going to do is just
286:03 intuitively name this thing "email agent." I'm then going to type in the
286:07 description: "call this tool to take any email actions." So now it should
286:12 basically, you know, signal to this guy whenever I see any sort of query come in
286:16 that has to do with email. I'm just going to pass that query right off to
286:19 this tool. So as you can see, I'm not even going to add a system message to
286:22 this AI agent yet. We're just going to see if we can understand. And I'm going
286:26 to come in here and say, "Please send an email to Nate asking him how he's
286:30 doing." So, we fire that off and hopefully it's going to call this tool and then we'll
286:35 be able to go in there and see the query that we got. The reason that this
286:38 errored is because we haven't mapped anything. So, what I'm going to do is
286:41 click on the tool. I'm going to click on view subexecution. So, we can pop open
286:45 like the exact error that just happened. And we can see exactly what happened is
286:49 that this came through in a field called query. But the agent node is not looking
286:53 for a field called query; it's looking for a field called chatInput. So I'm
286:57 just going to click on debug in editor so we can actually pull this in. Now all
287:00 I have to do is come in here, change this to define below, and then just drag
287:04 in the actual query. And now we know that this sub-agent is always going to
287:09 receive the orchestrator agent's message.
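The fix itself is just a mapping. In n8n's expression syntax, the sub-agent's "define below" prompt field ends up holding something like this (the exact path can vary with how the data arrives):

```typescript
// The agent's prompt parameter, as it would appear in an exported workflow,
// assuming "accept all data" delivered the orchestrator's message as `query`.
const promptExpression = "={{ $json.query }}";
```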
287:12 But what you'll notice here is that the orchestrator agent sent over a message that says, "Hi Nate, just wanted to
287:15 check in and see how you're doing. Hope all is well." So there's a mistake here
287:19 because this main agent ended up basically like creating an email and
287:23 sending it over. All we wanted to do is basically just pass the message along.
287:27 So what I would do here is come into the system prompt and I'm just going to
287:31 say overview. You are an orchestrator agent. Your only job is to delegate the task to
287:40 the correct tool. No need to write emails or create summaries. There we go. So just with a
287:45 very simple line, that's all we're going to do. And before we shoot that off, I'm
287:48 just going to go back into the sub workflow and we have to give this thing
287:52 an actual brain so that it can process messages. We're just going to go with a
287:57 4.1 mini once again. Save that, so it actually reflects on this main agent.
288:00 And now, let's try to send off this exact same query. And we'll see what it
288:04 does this time. So, it's calling the email agent tool. It shouldn't error,
288:09 because we fixed it. But, as you can see now, it just called that tool
288:12 twice. So, we have to understand why it just called the sub-agent twice. First
288:16 thing I'm going to do is click into the main agent and I'm going to click on
288:19 logs. And we can see exactly what it did. So when it called the email agent
288:23 once again it sent over a subject which is checking in and an actual email body.
288:26 So we have to fix the prompting there right? But then the output which is
288:30 basically what that sub workflow sent back said here's a slightly polished
288:33 version of your message for a warm and clear tone blah blah blah. And then for
288:36 some reason it went and called that email agent again. So now it says please
288:40 send the following email, and it sends it over again. And then the sub-agent says,
288:45 I can't send emails directly, but here's the email you can send. So, they're both
288:48 in this weird loop of thinking they're creating an email, but not actually
288:51 being able to send it. So, let's take a look and see how we can fix that. All
288:55 right, so back in the sub workflow, what we want to do now is actually let this
288:58 agent have the ability to send emails. Otherwise, they're just going to keep
289:01 doing that endless loop. So, I'm going to add a tool and type in Gmail. We're
289:05 going to change this to a send message operation. I'm just going to rename this
289:10 send email. And we're just going to have the To field be defined by the model. We're
289:14 going to have the subject be defined by the model. and we're going to have the
289:17 message be defined by the model. And all this means is that ideally, you know,
289:21 this query is going to say, hey, send an email to nate@example.com asking what's up.
289:26 The agent would then interpret that and it would fill out, okay, who is it going
289:29 to? What's the subject and what's the message? It would basically just create
289:33 it all itself using AI. And the last thing I'm going to do is just turn off
289:36 the n8n attribution right there. And now let's give it another shot. And keep
289:39 in mind, there's no system prompt in this actual agent. And I actually want
289:43 to show you guys a cool tip. So when you're building these multi-agent
289:47 systems and you're doing things like sending data between flows, if you don't
289:51 want to always go back to the main agent to test out like how this one's working,
289:55 what you can do is come into here and we can just edit this query and just like
289:59 set some mock data as if the main agent was sending over some stuff. So I'm
290:02 going to say, like we're pretending the orchestrator agent sent over to the sub-agent: send an email to
290:12 nate@example.com asking what's up. And we'll just get rid of that R. And then
290:15 now you can see that's the query. That's exactly what this agent's going to be
290:18 looking at right here. And if we hit play above this AI agent, we'll see that
290:22 hopefully it's going to call that send email tool and we'll see what it did. So
290:25 it just finished up. We'll click into the tool to see what it did. And as you
290:29 can see, it sent it to nate@example.com. It made the subject "checking in," and then
290:32 the message was, "Hey Nate, just wanted to check in and see what's up. Best, your
290:36 name." So, my thought process right now is, let's get everything working
290:39 the way we want it with this agent before we go back to that orchestrator
290:43 agent and fix the prompting there. So, one thing I don't like is that it's
290:46 signing off with best your name. So, we have a few options here. We could do
290:50 that in the system prompt, but same thing with like um specialization. If
290:54 this tool is specialized in sending emails, we might as well instruct it how
290:58 to send emails in this tool. So for the message, I'm going to add a description
291:01 and I'm going to say always sign off emails as Bob. And that really should do it. So
291:08 because we have this mock data right here, I don't have to go and, you know,
291:12 send another message. I can just test it out again and see what it's going to do.
291:15 So it's going to call the send email tool. It's going to make that message.
291:18 And now we will go ahead and look and see if it's signed off in a better way.
291:21 Right here, we can see now it's signing off best, Bob. So, let's just say right
291:25 now we're happy with the way that our sub agent's working. We can go ahead and
291:29 come back into the main agent and test it out again. All right. So, I'm just
291:32 going to shoot off that same message again that says, "Send an email to Nate
291:35 asking him how he's doing." And this will be interesting. We'll see what it
291:38 sends over. It was one run and it says, "Could you please provide Nate's email
291:42 address so I can send the message?" So, what happened here was the subexecution
291:46 realized we don't have Nate's email address. And that's why it basically
291:49 responded back to this main agent and said, "I need that if I need to send the
291:53 message." So if I click on subexecution, we will see exactly what it did and why
291:57 it did that and it probably didn't even call that send email tool. Yeah. So it
292:01 actually failed and it failed because it tried to fill out the two as Nate and it
292:05 realized that's not like a valid email address. So then because this sub agent
292:09 responds with could you please provide Nate's email address so I can send the
292:13 message. That's exactly what the main agent saw right here in the response
292:16 from this agent tool. So that's how they're able to talk to each other, go
292:19 back and forth, and then you can see that the orchestrator agent prompted us
292:22 to actually provide Nate's email address. So now we're going to try,
292:25 please send an email to nate@example.com asking him how the project is coming
292:29 along. We'll shoot that off and everything should go through this time
292:32 and it should basically say, oh, which project are you referring to? This will
292:35 help me provide you with the most accurate and relevant update. So once
292:39 again, the sub agent is like, okay, I don't have enough information to send
292:42 off that message, so I'm going to respond back to that orchestrator agent.
292:46 And just because we actually need one to get through, let me shoot off one more
292:49 example. Okay, hopefully this one's specific enough. We have an email
292:52 address. We have a specified name of a project. And we should see that
292:55 hopefully it's going to send this email this time. Okay, there we go. The email
292:58 asking Nate how Project Pan is coming along. It's been sent. Anything else you
293:02 need? So, at this point, it would be okay. Which other agents could I add to
293:06 the system to make it a bit easier on myself? The first thing naturally to do
293:09 would be I need to add some sort of contact agent. Or maybe I realize that I
293:14 don't need a full agent for that. Maybe that needs to just be one tool. So
293:17 basically what I would do then is I'd add a tool right here. I would grab an
293:20 air table because that's where my contact information lives. And all I
293:24 want to do is go to contacts and choose contacts. And now I just need to change
293:30 this to search. So now this tool's only job is to return all of the contacts in
293:34 my contact database. I'm just going to come in here and call this contacts. And
293:38 now keep in mind once again there's still nothing in the system prompt about
293:41 here are the tools you have and here's what you do. I just want to show you
293:44 guys how intelligent these models can be before you even prompt them. And then
293:47 once you get in there and say, "Okay, now you have access to these seven
293:50 agents. Here's what each of them are good at, it gets even cooler." So, let's
293:54 try one more thing and see if it can use the combination of contact database and
293:58 email agent. Okay, so I'm going to fire this off. Send an email to Dexter Morgan
294:02 asking him if he wants to get lunch. You can see that right away it used the
294:05 contacts database, pulled back Dexter Morgan's email address, and now we can
294:08 see that it sent that email address over to the email agent, and now we have all
294:12 of these different data transfers talking to each other, and hopefully it
294:15 sent the email. All right, so here's that email. Hi, Dexter. Would you like
294:18 to get lunch sometime soon? Best Bob. The formatting is a little off. We can
294:21 fix that within the tool for the email agent. But let's see if we sent
294:24 that to the right email, which is dexter@miami.com. If we go into our
294:28 contacts database, we can see right here we have Dexter Morgan, dexter@miami.com.
294:32 And like I showed you guys earlier, what you want to do is get pretty good at
294:35 reading these agent logs. So you can see how your agents are thinking and what
294:38 data they're sending between workflows. And if we go to the logs here, we can
294:42 see, first of all, it used its GPT-4.1 mini model brain to understand what to
294:46 do. It understood, okay, I need to go to the contacts table. So I got my contact
294:50 information. Then I need to call the email agent. And what I sent over to the
294:55 email agent was: send an email to dexter@miami.com asking him if he wants
294:59 to get lunch. And that was perfect. All right, so that's going to do
295:02 it for this one. Hopefully this opened your eyes to the possibilities of these
295:05 multi-agent systems in n8n, and also hopefully it taught you some stuff,
295:08 because I know all of this stuff can get really buzzwordy sometimes, with all
295:12 these agents, agents, agents. But there are use cases where it really is the best
295:15 path. It's all about understanding what the end goal is and
295:18 how you want to evolve the workflow, and then deciding what's the best
295:23 architecture or system to use. So that was one type of architecture for a
295:26 multi-agent system called the orchestrator architecture. But that's
295:30 not the only way to have multiple agents within a workflow or within a system. So
295:33 in this next section, I'm going to break down a few other architectures that you
295:37 can use so that you can understand what's possible and which one fits your
295:42 use case best. So let's dive right in. So in my ultimate assistant video, we
295:45 utilize an agentic framework called parent agent. So as you can see, we have
295:48 a parent agent right here, which is the ultimate assistant that's able to send
295:52 tasks to its four child agents down here, which are different workflows that
295:56 we built out within n8n. If you haven't seen that video, I'll tag it right up
295:58 here. But how it works is that the ultimate assistant could get a query
296:01 from the human and decide that it needs to send that query to the email agent,
296:04 which looks like this. And then the email agent will be able to use its
296:08 tools in Gmail and actually take action. From there, it responds to the parent
296:11 agent and then the parent agent is able to take that response back from this
296:14 child agent and then respond to us in Telegram. So, it's a super cool system.
296:17 It allows us to delegate tasks and also these agents can be activated in any
296:20 specific order. It doesn't always have to be the same. But is this framework
296:24 always the most effective? No. So today I'm going to be going over four
296:26 different agentic frameworks that you can use in your n8n workflows. The first
296:29 one we're going to be talking about is prompt chaining. The second one is
296:32 routing. The third one is parallelization. And the fourth one is
296:36 evaluator optimizer. So we're going to break down how they all work, what
296:38 they're good at. But make sure you stick around to the end because this one, the
296:41 evaluator optimizer, is the one that's got me most excited. So before we get
296:45 into this first framework, if you want to download these four templates for
296:48 free so you can follow along, you can do so by joining my free school community.
296:51 You'll come in here, click on YouTube resources, click on the post associated
296:54 with this video, and then you'll have the workflow right here to download. So,
296:57 the link for that's down in the description. There's also going to be a
297:00 link for my paid community, which is if you're looking for a more hands-on
297:03 approach to learning n8n. We've got a great community of members who are also
297:06 dedicated to learning n8n, sharing resources, sharing challenges, sharing
297:09 projects, stuff like that. We've got a classroom section where we're going over
297:11 different deep dive topics like building agents, vector databases, APIs, and HTTP
297:15 requests. And I also just launched a new course where I'm doing the step-by-step
297:18 tutorials of all the videos that I've shown on YouTube. And finally, we've got
297:21 five live calls per week to make sure that you're getting questions answered,
297:24 never getting stuck, and also networking with individuals in the space. We've
297:27 also got guest speakers coming in in February, which is super exciting. So,
297:30 I'd love to see you guys in these calls. Anyways, back to the video here. The
297:33 first one we're going to be talking about is prompt chaining. So, as you can
297:36 see, the way this works, we have three agents here, and what we're doing is
297:39 we're passing the output of an agent directly as the input into the next
297:44 agent, and so on and so forth. So, here are the main benefits of this type of
297:47 workflow. It's going to lead to improved accuracy and quality because each step
297:51 focuses on a specific task which will help reduce errors and hallucinations.
297:54 Greater control over each step. We can refine the system prompt of the outline
297:57 writer and then we can refine the prompt of the evaluator. So we can really tweak
298:01 what's going on and how data is being transferred. Specialization is going to
298:05 lead to more effective agents. So as you can see in this example, we're having
298:09 one agent write the outline. One of them evaluates the outline and makes
298:12 suggestions. And then finally, we pass that revised outline to the blog writer
298:15 who's in charge of actually writing the blog. So this is going to lead to a much
298:20 more cohesive, thought-through actual blog in the end compared to if we had
298:24 just fed all of this system prompt into one agent. And then finally, with
298:27 this type of framework, we've got easier debugging and optimization because it's
298:30 linear. We can see where things are going wrong. Finally, it's going to be
298:32 more scalable and reusable as we're able to plug in different agents wherever we
298:35 need them. Okay, so what we have to do here is we're just going to enter in a
298:40 keyword, a topic for this um blog. So, I'm just going to enter in coffee, and
298:43 we'll see that the agents start going to work. So, the first one is an outline
298:46 writer. Um, one thing that's also really cool about this framework and some of
298:49 the other ones we're going to cover is that because we're splitting up the
298:52 different tasks, we're able to utilize different large language models. So, as
298:55 you can see, the outline writer, we gave Gemini 2.0 Flash because it's free.
298:59 It's powerful, but not super powerful, and we just need a brief
299:02 outline to be written here. And then we can pass this on to the next one that
299:04 uses GPT-4o Mini. It's a little more powerful, a little more expensive, but
299:08 still not too bad. And we want this more powerful chat model to be doing the
299:12 evaluating and refining of the outline. And then finally, for the actual blog
299:15 writing content, we want to use something like Claude 3.5 or even DeepSeek
299:18 R1 because it's going to be more powerful and it's going to take that
299:21 revised outline and then structure a really nice blog post for us. So that's
299:25 just part of the specialization. Not only can we split up the tasks, but we
299:29 can plug and play different chat models where we need to rather than feeding
299:33 everything through one, you know, one DeepSeek, one blog writer at the very
299:36 beginning. So, this one's finishing up here. It's about to get pushed into a
299:39 Google doc where we'll be able to go over there and take a look at the blog
299:43 that it got for us about coffee. So, looks like it just finished up. Here we
299:47 go. Detailed blog post based on option one, a comprehensive guide to coffee.
299:51 Here's our title. Um, we have a rich history of coffee from bean to cup. We
299:55 have um different methods. We have different coffee varieties. We have all
299:59 this kind of stuff, health benefits and risks. Um, and as you can see, this
300:03 pretty much was a four-page blog. We've got a conclusion at the end. Anyways,
300:06 let's dive into what's going on here. So, the concept is passing the output
300:10 into the input and then taking that output and passing it into the next
300:14 input. So, here we have: "Here's the topic to write a blog about," and all it got
300:17 here was the word coffee. That's what we typed in. The system message is that you
300:20 are an expert outline writer. Your job is to generate a structured outline for
300:24 a blog post with section titles and key points. So, here's the first draft at
300:28 the outline using Gemini 2.0 Flash. Then, we pass that into an outline evaluator
300:32 that's using GPT-4o Mini. We said, here's the outline. We gave it the outline of
300:36 course and then the system message is you're an expert blog evaluator. Your
300:39 job is to revise this outline and make sure it hits these four criteria which
300:43 are engaging introduction, clear section breakdown, logical flow, and then a
300:46 conclusion. So we told it to only output the revised outline. So now we have a
300:50 new outline over here. And finally, we're sending that into a Claude 3.5
300:54 blog writer where we gave it the revised outline and just said, "You're an expert
300:58 blog writer. Generate a detailed blog post using this outline with well
301:01 structured paragraphs and engaging content." So that's how this works. You
301:04 can see it will be even more powerful once we hook up, you know, some
301:07 internet search functionality, and if we added an editor at the end before
301:10 it actually pushed it into the Google Doc or whatever it is. But that's
301:14 how this framework works.
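To make the chaining pattern concrete outside of n8n, here's a minimal TypeScript sketch. It assumes a hypothetical callLLM(model, system, user) helper that wraps whatever chat-completion API you're using and returns the reply text; the model names are illustrative stand-ins for the three tiers discussed above.

```typescript
// Prompt chaining: each step's output becomes the next step's input,
// and each step can use a differently sized model.
async function writeBlog(
  topic: string,
  callLLM: (model: string, system: string, user: string) => Promise<string>,
): Promise<string> {
  // Step 1: a cheap, fast model drafts the outline.
  const outline = await callLLM(
    "gemini-2.0-flash",
    "You are an expert outline writer. Generate a structured outline for a blog post with section titles and key points.",
    `Here is the topic to write a blog about: ${topic}`,
  );

  // Step 2: a mid-tier model revises the outline against fixed criteria.
  const revisedOutline = await callLLM(
    "gpt-4o-mini",
    "You are an expert blog evaluator. Revise this outline so it has an engaging introduction, a clear section breakdown, logical flow, and a conclusion. Only output the revised outline.",
    `Here is the outline:\n${outline}`,
  );

  // Step 3: the most capable model writes the final post from the revised outline.
  return callLLM(
    "claude-3-5-sonnet",
    "You are an expert blog writer. Generate a detailed blog post using this outline, with well-structured paragraphs and engaging content.",
    `Here is the revised outline:\n${revisedOutline}`,
  );
}
```

Because the steps are linear, a failure is easy to localize: whichever step produced the bad intermediate text is the one to re-prompt.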
301:17 But let's move into agentic framework number two. Now we're going to talk about the routing framework. In this case, we have an
301:21 initial LLM call right here to classify incoming emails. And based on that
301:24 classification, it's going to route it up as high priority, customer support,
301:28 promotion, or finance and billing. And as you can see, there are different
301:31 actions that are going to take place. We have different agents depending on what
301:34 type of message comes through. So the first agent, which is the text
301:37 classifier here, basically just has to decide, okay, which agent do I need to
301:41 send this email off to? Anyways, why would you want to use routing? Because
301:43 you're going to have an optimized response handling. So as we can see in
301:46 this case, we're able to set up different personas for each of our
301:49 agents here rather than having one general AI response agent. Then this can
301:54 be more scalable and modular. It's going to be faster and more efficient. And
301:56 then you can also introduce human escalation for critical issues like we
302:00 do up here with our high priority agent. And finally, it's just going to be a
302:03 better user experience for your team and also your customers. So I hit test
302:07 step. What we're getting here is an email that I just sent to myself that
302:10 says, "Hey, I need help logging into my account. Can you help me?" So this email
302:13 classifier is going to label this as customer support. As soon as we hit
302:16 play, it's going to send it down the customer support branch right here. As
302:19 you can see, we got one new item. What's going on in this step is that we're just
302:22 labeling it in our Gmail as a customer support email. And then finally, we're
302:25 going to fire it off to the customer support agent. In this case, this one is
302:29 trained on customer support activities. Um, this is where you could hook up a
302:32 customer support database if you needed. And then what it's going to do is it's
302:35 going to create an email draft for us in reply to the email that we got. So,
302:38 let's go take a look at that. So, here's the email we got. Hey, I need help
302:41 logging into my account. As you can see, our agent was able to label it as
302:44 customer support. And then finally, it created this email, which was, "Hey,
302:46 Nate, thanks for reaching out. I'd be happy to assist you with logging into
302:49 your account. Please provide me with some more details um about the issue
302:52 you're experiencing, blah blah blah." And then this one signs off, "Best
302:55 regards, Kelly, because she's the customer support rep." Okay, let's take
302:58 a look at a different example. Um, we'll pull in the trigger again and this time
303:01 we're going to be getting a different email. So, as you can see, this one
303:03 says, "Nate, this is urgent. We need your outline tomorrow or you're fired."
303:06 So, hopefully this one gets labeled as high priority. It's going to go up here
303:10 to the high priority branch. Once again, we're going to label that email as high
303:13 priority. But instead of activating an email draft reply tool, this one has
303:17 access to a Telegram tool. So, what it's going to do is text us immediately and
303:20 say, "Hey, this is the email you got. You need to take care of this right
303:23 away." Um, and obviously the logic you can choose of what you want to happen
303:26 based on what route it is, but let's see. We just got telegram message,
303:29 urgent email from Nate Herkelman stating that an outline is needed by tomorrow or
303:33 there will be serious consequences, potential termination. So that way it
303:36 notifies us right away. We're able to get into our email manually, you know,
303:39 get caught up on the thread and then respond how we need to. And so pretty
303:42 much the same thing for the other two. Promotional email will get labeled as
303:45 promotion. We come in here and see that we are able to set a different persona
303:49 for the promotion agent, which is: you're in charge of promotional
303:52 opportunities. Your job is to respond to inquiries in a friendly, professional
303:55 manner and use this email tool to send a reply to the customer. Always sign off as Meredith
303:59 from ABC Corp. So, each agent has a different sort of persona that it's able
304:03 to respond with. In the finance agent, we have this agent signing off as
304:08 Angela from ABC Corp. Anyways, what I did here was I hooked them all up to
304:11 the same chat model and I hooked them all up to the same tool because they're
304:16 all going to be sending an email draft here. As you can see, we're using
304:19 fromAI to determine the subject, the message, and the thread ID, which it's
304:23 going to pull in from the actual Gmail trigger. Or sorry, the thread ID is
304:26 not using fromAI. We're mapping in the Gmail trigger because every time
304:30 an email comes through, it can just look at that email in order to determine
304:36 the thread ID for sending out an email.
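For reference, here's roughly what those model-filled tool fields look like as n8n expressions; the parameter names and the trigger node's name are illustrative. The subject and message use the $fromAI expression so the agent decides them at runtime, while the thread ID is a plain mapping from the trigger's output:

```
Subject:   {{ $fromAI('subject', 'Subject line for the reply email', 'string') }}
Message:   {{ $fromAI('message', 'Body of the reply email', 'string') }}
Thread ID: {{ $('Gmail Trigger').item.json.threadId }}
```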
304:38 But you don't have to connect them up to the same tool. I just did it this way because then I only had to create one
304:41 tool. Same thing with the different chat models based on the, you know,
304:44 importance of what's going through each route. You could switch out the chat
304:47 models. We could have even used a cheaper, easier one for the
304:50 classification if we wanted to, but in this case, I just hooked them all up to
304:54 a GPT-4o Mini chat model. Anyways, this was just a really simple example of routing.
304:57 You could have 10 different routes, you could have just two different routes,
305:00 but the idea is that you're using one agent up front to determine which way to
305:04 send off the data.
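If you prefer to see the shape of it in code, here's a minimal TypeScript sketch of routing, assuming a hypothetical callLLM(system, user) helper plus stub functions standing in for the Telegram and Gmail-draft tools:

```typescript
// Routing: one cheap classification call up front decides which
// specialized handler (persona + tools) processes the email.
type Route = "high_priority" | "customer_support" | "promotion" | "finance_billing";

async function routeEmail(
  email: { from: string; subject: string; body: string },
  callLLM: (system: string, user: string) => Promise<string>,
): Promise<void> {
  const label = (
    await callLLM(
      "Classify this email as exactly one of: high_priority, customer_support, promotion, finance_billing. Output only the label.",
      `From: ${email.from}\nSubject: ${email.subject}\n\n${email.body}`,
    )
  ).trim() as Route;

  switch (label) {
    case "high_priority":
      return notifyOnTelegram(email); // escalate to a human immediately
    case "customer_support":
      return draftReplyAs("Kelly", email); // each branch has its own persona
    case "promotion":
      return draftReplyAs("Meredith", email);
    case "finance_billing":
      return draftReplyAs("Angela", email);
    default:
      throw new Error(`Unexpected label from classifier: ${label}`);
  }
}

// Hypothetical stubs standing in for the Telegram and Gmail-draft tools.
async function notifyOnTelegram(email: { subject: string }): Promise<void> {
  /* send a Telegram message about the urgent email */
}
async function draftReplyAs(persona: string, email: { subject: string }): Promise<void> {
  /* call an LLM with the persona's system prompt, then create a Gmail draft */
}
```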
305:06 Moving on to the third framework, we've got parallelization. What we're going to do here is use three different agents,
305:09 and then we're going to merge their outputs, aggregate them together, and
305:12 then feed them all into a final agent to sort of, you know, throw it all into one
305:16 response. So what this is going to do is give us faster analysis rather than
305:20 processing everything linearly. So in this case we're going to be sending in
305:22 some input and then we're going to have one agent analyze the emotion behind it,
305:26 one agent do the intent behind it, and then one agent analyze any bias rather
305:30 than doing it one by one. They're all going to be working simultaneously and
305:33 then throwing their outputs together. So it can decrease the latency there.
305:36 They're going to be specialized, which means we could have specialized system
305:39 prompts like we do here. We also could do specialized large language models
305:42 again, where we could plug in different models if we wanted to: maybe
305:46 feed through the same prompt, use Claude up here, OpenAI down here, and then
305:49 DeepSeek down here, and then combine them together to make sure we're getting
305:53 the best thought-out answer. A comprehensive review, and then more
305:56 scalability as well. But how this one's going to work is we're putting in an
305:59 initial message which is I don't trust the mainstream media anymore. They
306:02 always push a specific agenda and ignore real issues. People need to wake up and
306:05 stop believing everything they see on the news. So, we're having an emotion
306:08 agent, first of all, analyze the emotional tone, categorize it
306:13 as positive, neutral, negative, or mixed with a brief explanation. The intent
306:17 agent is going to analyze the intent behind this text, and then finally, the
306:20 bias agent is going to analyze this text for any potential bias. So, we'll hit
306:24 this off. We're going to get those three separate
306:28 analyses, and then we're going to be sending those into a final agent that's
306:32 going to basically combine all those outputs and then write a little bit of
306:36 report based on our input. So, as you can see, right now, it's waiting here
306:40 for um the input from the bias agent. Once that happens, it's going to get
306:42 aggregated, and now it's being sent into the final agent, and then we'll take a
306:47 look at um the report that we got in our Google doc. Okay, just finished up.
306:50 Let's hop over to Docs. We'll see we got an emotional tone, intent, and bias
306:55 analysis report. Overview is that um the incoming text has strong negative
306:59 sentiment towards mainstream media. Yep. Emotional tone is negative sentiment.
307:03 Intent is persuasive goal. Um, the bias analysis has political bias,
307:07 generalization, emotional language, lack of evidence. Um, it's got
307:10 recommendations for how we can make this text more neutral, revised message, and
307:13 then let's just read off the conclusion. The analysis highlights a significant
307:17 level of negativity and bias in the original message directed towards
307:20 mainstream media. By implementing the suggested recommendations, the author
307:23 can promote a more balanced and credible perspective that encourages critical
307:26 assessment of media consumption, blah blah blah. So, as you can see, that's
307:30 going to be a much better, you know, comprehensive analysis than if we would
307:34 have just fed the initial input into an agent and said, "Hey, can you analyze
307:37 this text for emotion, intent, and bias?" But now, we got that split up,
307:41 merged together, put into the final one for, you know, a comprehensive review
307:45 and an output. And it's going to make the whole data-in to data-out
307:49 process a lot more efficient.
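Here's the same idea as a TypeScript sketch, again assuming a hypothetical callLLM(system, user) helper; Promise.all is what makes the three analyses run concurrently instead of one after another:

```typescript
// Parallelization: three specialist calls run at the same time,
// then one aggregator call merges their outputs into a single report.
async function analyzeText(
  text: string,
  callLLM: (system: string, user: string) => Promise<string>,
): Promise<string> {
  const [emotion, intent, bias] = await Promise.all([
    callLLM(
      "Analyze the emotional tone of this text. Categorize it as positive, neutral, negative, or mixed, with a brief explanation.",
      text,
    ),
    callLLM("Analyze the intent behind this text.", text),
    callLLM("Analyze this text for any potential bias.", text),
  ]);

  // The final agent combines the three specialist outputs into one report.
  return callLLM(
    "You write analysis reports. Combine the emotion, intent, and bias analyses below into one cohesive report with recommendations.",
    `Emotion:\n${emotion}\n\nIntent:\n${intent}\n\nBias:\n${bias}`,
  );
}
```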
307:53 Finally, the one that gets me the most excited: the evaluator-optimizer framework, where we're going
307:56 to have an evaluator agent decide if what's passing through is good or not.
308:00 If it's good, we're fine, but if it's not, it's going to get optimized and
308:03 then sent back to the evaluator for more evaluation. And this is going to be an
308:06 endless loop until the evaluator agent says, "Okay, finally, it's good enough.
308:10 We'll send it off." So, if you watch my human in the loop video, it's going to
308:13 be just like that where we were providing feedback and we were the ones
308:16 basically deciding if it was good to go or not. But in this case, we have an
308:19 agent that does that. So it's going to be optimizing all your workflows on the
308:23 back end without you being in the loop. So obviously the benefits here are that
308:26 it's going to ensure high quality outputs. It's going to reduce errors and
308:29 manual review. It's going to be flexible and scalable. And then it's going to
308:32 optimize the AI's performance because it's sort of an iterative approach that
308:36 um you know focuses on continuous improvement from these AI generated
308:40 responses. So what we're doing here is we have a biography agent. What we told
308:45 this agent to do is um basically write a biography. You're an expert biography
308:48 writer. You'll receive information about a person. Your job is to create an
308:51 entire profile using the information they give you. And I told it you're
308:55 allowed to be creative. From there, we're setting the bio. And we're just
308:58 doing this here so that we can continue to feed this back over and over. That
309:02 way, if we have five revisions, the most
309:06 recent version will still get passed to the agent every time, and the most recent version, once it's approved,
309:09 will get pushed up here to the Google Doc. Then we have the evaluator agent.
309:14 What we told this agent to do is um evaluate the biography. Your job is to
309:18 provide feedback. We gave a criteria. So, make sure that it includes a quote
309:21 from the person. Make sure it's light and humorous and make sure it has no
309:25 emojis. It only needs to output the feedback. If the biography is finished
309:29 and all criteria are met, then all you need to output is finished. So, then we
309:33 have a check to say, okay, does the output from the evaluator agent say
309:36 finished or is it feedback? If it's feedback, it's going to go to the
309:39 optimizer agent and continue on this loop until it says finished. Once it
309:42 finally says finished, as you can see, we check that $json.output, which is the
309:47 output from the evaluator agent, equals "finished". When that happens, it'll go up
309:51 here and then we'll see it in our Google doc. But then what we have in the actual
309:54 optimizer agent is we're giving it the biography and this is where we're
309:58 referencing the Set field from earlier, right here, where we set the bio.
310:01 This way the optimizer agent's always getting the most updated version of the
310:05 bio. And then we're also going to get the feedback. So this is going to be the
310:08 output from the evaluator agent, because if it does go down this path from the
310:12 evaluator agent, it means that it output feedback rather than saying finished. So
310:16 it's getting feedback, it's getting the biography, and then we're saying you're
310:19 an expert reviser. Your job is to take the biography and optimize it based on
310:22 the feedback. So it gets all it needs in the user message, and then it outputs us
310:26 a better, optimized version of that biography.
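Here's the loop as a TypeScript sketch, with the usual hypothetical callLLM(system, user) helper. One thing worth adding that the n8n version doesn't have: a cap on iterations, so a picky evaluator can't loop forever.

```typescript
// Evaluator-optimizer: generate, evaluate, and revise in a loop
// until the evaluator outputs exactly "finished".
async function writeBiography(
  info: string,
  callLLM: (system: string, user: string) => Promise<string>,
  maxRounds = 5,
): Promise<string> {
  // Initial draft from the biography agent.
  let bio = await callLLM(
    "You are an expert biography writer. You'll receive information about a person. Create an entire profile using that information. You are allowed to be creative.",
    info,
  );

  for (let round = 0; round < maxRounds; round++) {
    const verdict = await callLLM(
      "You are a biography evaluator. Criteria: includes a quote from the person, is light and humorous, and has no emojis. Output only feedback. If all criteria are met, output only the word: finished",
      bio,
    );
    // This is the same check the IF node does on the evaluator's output.
    if (verdict.trim().toLowerCase() === "finished") break;

    // The optimizer agent revises the latest bio against the feedback, then we loop back.
    bio = await callLLM(
      "You are an expert reviser. Take the biography and optimize it based on the feedback.",
      `Biography:\n${bio}\n\nFeedback:\n${verdict}`,
    );
  }
  return bio; // in the n8n version, this is what gets pushed to the Google Doc
}
```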
310:30 Okay, so let's do an example real quick. If you remember, in the biography agent, all we have to do
310:33 is give it a, you know, some information about a person to write a biography on.
310:36 So, I'm going to come in here and I'm just going to say: Jim, 43,
310:44 lives by the ocean. Okay, so that's all we're going to put in. We'll see
310:47 that it's writing a brief biography right now. And then we're going to see
310:50 it get evaluated. We're going to see if it, you know, met those criteria. If it
310:53 doesn't, it's going to get sent to the optimizer agent. The optimizer agent is
310:58 going to get um basically the criteria it needs to hit as well as the original
311:02 biography. So here's the evaluator agent. Look at that. It decides that it
311:05 wasn't good enough. Now it's being sent to the optimizer agent who is going to
311:09 optimize the bio, send it back and then hopefully on the second run it'll go up
311:12 and get published in the docs. If it's not good enough yet, then it will come
311:15 back to the agent and it will optimize it once again. But I think that this
311:18 agent will do a good job. There we go. We can see it just got pushed up into
311:21 the doc. So let's take a look at our Google doc. Here's a biography for Jim
311:26 Thompson. He lives in California. He's 43. Um, ocean enthusiast, passion,
311:31 adventure, a profound respect for nature. It talks about his early life,
311:35 and obviously he's making all this up. Talks about his education, talks about
311:38 his career, talks about his personal life. Here we have a quote from Jim,
311:42 which is, "I swear the fish are just as curious about me as I am about them."
311:45 We've even got another quote. Um, a few dad jokes along the way. Why did the
311:49 fish blush? Because it saw the ocean's bottom. So, not sure I completely get
311:53 that one. Oh, no, I get that one. Anyways, then hobbies, philosophy, legacy,
311:57 and a conclusion. So this is a pretty optimized biography. It meets all
312:00 the criteria that we had put into our agents as far as you know this is what
312:03 you need to evaluate for. It's very light. There's no emojis. Threw some
312:06 jokes in there and then it has some quotes from Jim as well. So as you can
312:11 see, all we put in was "Jim, 43, lives by the ocean," and we basically got a whole
312:14 story written about this guy. And once again just like all of these frameworks
312:17 pretty much you have the flexibility here to change out your model wherever
312:20 you want. So let's say we don't really mind up front: we could use something
312:24 really cheap and quick, and then maybe for the actual optimizer agent we want
312:27 to plug in something with a little more of a reasoning aspect, like
312:31 DeepSeek R1, potentially. Anyways, that's all I've got for you guys today. Hope
312:34 this one was helpful. Hope this one, you know, sparked some ideas for next time
312:37 you're going into n8n to build an agentic workflow. Maybe looking at it, you'll think: I
312:40 could actually have structured my workflow in this framework and it would
312:43 have been a little more efficient than the current way I'm doing it. Like I
312:46 said, these four templates will be in the free Skool community if you want to
312:48 download them and just play around with them to understand what's going on,
312:52 understand, you know, when to use each framework, stuff like that. All right,
312:55 so we understand a lot of the components that actually go into building an
312:58 effective agent or an effective agent system, but we haven't really yet spent
313:02 a lot of time on prompting, which is like 80% of an agent. It's so, so
313:05 important. So, in this next section, we're going to talk about my methodology
313:08 when it comes to prompting a tools agent, and we're going to do a quick
313:12 little live prompting session near the end. So, if that sounds good to you,
313:15 let's get started. Building AI agents and hooking up different tools is fun
313:18 and all, but the quality and consistency of the performance of your agents
313:21 directly ties back to the quality of the system prompt that you put in there.
313:24 Anyways, today what we're going to be talking about is what actually goes into
313:27 creating an effective prompt so that your agents perform as you want them to.
313:30 I'm going to be going over the most important thing that I've learned while
313:33 building out agents and prompting them that I don't think a lot of people are
313:36 doing. So, let's not waste any time and get straight into this one. All right,
313:39 so I've got a document here. If you want to download this one to follow along or
313:42 just have it for later, you can do so by joining my free Skool community. The
313:44 link for that's down in the description. You'll just click on YouTube resources
313:47 and find the post associated with this video and you'll be able to download the
313:51 PDF right there. Anyways, what we're looking at today is how we can master
313:55 reactive prompting for AI agents in n8n. And the objective of this document here
313:58 is to understand what prompting is, why it matters, develop a structured
314:02 approach to reactive prompting when building out AI agents, and then learn
314:05 about the essential prompt components. So, let's get straight into it and start
314:09 off with just a brief introduction. What is prompting? Make sure you stick around
314:12 for this one because once we get through this doc, we're going to hop into n8n and
314:15 do some live prompting examples. So, within our agents, we're giving them a
314:18 system prompt. And this is basically just coding them on how to act. But
314:22 don't be scared of the word code because we're just using natural language
314:25 instead of something like Python or JavaScript. A good system prompt is
314:28 going to ensure that your agent is behaving in a very clear, very specific,
314:32 and a very repeatable way. So, instead of us programming some sort of Python
314:36 agent, what we're doing is we're just typing in, "You're an email agent. Your
314:39 job is to assist the user by using your tools to take the correct action."
314:42 Exactly as if we were instructing an intern. And why does prompting matter?
314:46 I'm sure by now you guys already have a good reason in your head of why
314:49 prompting matters, and it's pretty intuitive, but let's think about it like
314:53 this as well. Agents are meant to be running autonomously, and they don't
314:56 allow that back and forth interaction like ChatGPT. Now, yes, there can be some
315:00 human in the loop within your sort of agentic workflows, but ideally you put
315:05 in an input, it triggers the automation, triggers the agent to do something, and
315:08 then we're getting an output. Unlike ChatGPT, where you ask it to help you
315:10 write an email, and you can say, "Hey, make that shorter," or you can say,
315:14 "Make it more professional." We don't have that um luxury here. We just need
315:17 to trust that it's going to work consistently and at a high quality. So, our
315:21 goal as prompters is to get the prompts right the first time so that the agent
315:25 functions correctly every single time it's triggered. So, the key rule here is
315:28 to keep the prompts clear, simple, and actionable. You don't want to leave any
315:32 room for misinterpretation. Um, and also, less is more. Sometimes I'll see
315:35 people just throw in a novel, and that's just obviously going to be more
315:38 expensive for you, and also just more room to confuse the agent. So, less is
315:42 more. So, now let's get into the biggest lesson that I've learned while prompting
315:46 AI agents, which is prompting needs to be done reactively. I see way too many
315:51 people doing this proactively, throwing in a huge system message, and then just
315:54 testing things out. This is just not the way to go. So let's dive into what that
315:57 actually means to be prompting reactively. First of all, what is
316:01 proactive prompting? This is just writing a long detailed prompt up front
316:04 after you have all your tools configured and all of the sort of, you know,
316:08 standard operating procedures configured and then you start testing it out. The
316:12 problem here is that you don't know all the possible edge cases and errors in
316:16 advance and debugging is going to be a lot more difficult because if something
316:19 breaks, you don't know which part of the prompt is causing the issue. You may try
316:22 to fix something in there, and then the issue you were originally having is
316:26 fixed, but now you've caused a new issue, and it's just going to be really messy as
316:28 you continue to add more and more and you end up just confusing both yourself
316:32 and the agent. Now, reactive prompting on the other hand is just starting with
316:35 absolutely nothing and adding a tool, testing it out, and then slowly adding
316:39 sentence by sentence. And as you've seen in some of my demos, we're able to get
316:43 like six tools hooked up, have no prompt in there, and the agent's still working
316:46 pretty well. At that point, we're able to start adding more lines to make the
316:49 system more robust. But the benefits here of reactive prompting are pretty clear. The first
316:54 one is easier debugging. You know exactly what broke the agent. Whether
316:58 that's I added this sentence and then the automation broke. All I have to do
317:01 is take out that sentence or I added this tool and I didn't prompt the tool
317:04 yet. So that's what caused the automation to break. So I'm just going
317:06 to add a sentence in right here about the tool. This is also going to lead to
317:10 more efficient testing because you can see exactly what happens before you hard
317:14 prompt in fixes. And essentially, you know, I'll talk about hard prompting
317:17 more later, but essentially what it is is you're seeing an error
317:22 and then you're hard prompting in the error within the system prompt and
317:25 saying, "Hey, like you just did this. That was wrong. Don't do that again."
317:28 And we can only do that reactively because we don't know how the agent's
317:31 going to react before we test it out. Finally, we have the benefit that it
317:34 prevents over complicated prompts that are hard to modify later. If you have a
317:39 whole novel in there and you're getting errors, you're not going to know where
317:40 to start. You're going to be overwhelmed. So, taking it step by step,
317:44 starting with nothing and adding on things slowly is the way to go. And so,
317:48 if it still isn't clicking yet, let's look at a real world example. Let's say
317:51 you're teaching a kid to ride a bike. If you took a proactive approach, you'd be
317:56 trying to correct the child's behavior before you know what he or she is going
318:00 to do. So, if you're telling the kid to keep your back straight, lean forward,
318:03 you know, don't tilt a certain way, that's going to be confusing because now
318:06 the kid is trying to adjust to all these things you've said, and he or she doesn't even
318:10 know what he or she was going to do in the
318:13 beginning. But if you're taking a reactive approach, and obviously maybe
318:16 this wasn't the best example cuz you don't want your kid to fall, but you let
318:20 them ride, you see what they're doing, you know, if they're leaning too much to
318:23 the left, you're going to say, "Okay, well, maybe you need to lean a little
318:26 more to the right to center yourself up." um and only correct what they
318:30 actually need to have corrected. This is going to be more effective, fewer
318:33 unnecessary instructions, and just more simple and less overwhelming. So, the
318:37 moral of the story here is to start small, observe errors, and fix one
318:41 problem at a time. So, let's take a look at some examples of reactive prompting
318:44 that I've done in my ultimate assistant workflow. As you can see right here, I'm
318:46 sure you guys have seen that video by now. If you haven't, I'll link it right
318:50 up here. But, I did a ton of reactive prompting in here because I have one
318:53 main agent calling four different agents. And then within those sub
318:56 agents, they all have different tools that they need to call. So this was very
319:00 very reactive when I was prompting this workflow or this system of agents. I
319:04 started with no persistent prompt at all. I just connected a tool and I
319:07 tested it out to see what happened. So an example would be: I hooked up an
319:10 email agent, but I didn't give it any instructions, and I ran the AI to see
319:14 if it would call the tool automatically. A lot of times it will, and it's only
319:17 when you add another, different agent that you need to prompt in: hey,
319:20 these are the two agents you have. Here's when to use each one. So anyways,
319:24 adding prompts based on errors. Here I have my system prompts. So if you guys
319:27 want to pause it and read through, you can take a look. But you can see it's
319:30 very very simple. I've got one example. I've got basically one brief rule and
319:34 then I just have all the tools it has and when to use them. And it's very very
319:38 concise and not overwhelming. And so what I want you guys to pay attention to
319:41 real quick is in the overview right here. I said, you know, you're the
319:44 ultimate personal assistant. Your job is to send the user's query to the correct
319:48 tool. That's all I had at first. And then I was getting this error where I
319:51 was saying, "Hey, write an email to Bob." And what was happening is it
319:56 wasn't sending that query to the email tool, which it's supposed to do. It itself
319:59 was trying to write an email even though it has no tool to write an email. So
320:03 then I reactively came in here and said, "You should never be writing emails or
320:06 creating event summaries. You just need to call the correct tool." And that's
320:09 not something I could have proactively put in there because I didn't really
320:12 expect the agent to be doing that. So I saw the error and then I basically
320:16 hardcoded in what it should not be doing and what it should be doing. So another
320:19 cool example of hard coding stuff in is using examples. You know, we all
320:22 understand that examples are going to help the agent understand what it needs
320:26 to do based on certain inputs and how to use different tools. And so right here
320:29 you can see I added this example, but we'll also look at it down here because
320:32 I basically copied it in. What happened was the AI failed in a very specific
320:36 scenario. So I added a concrete example where I gave it an input, I showed the
320:40 actions it should take, and then I gave it the output. So in this case, what
320:43 happened was I asked it to write an email to Bob and it tried to send an
320:46 email, or it tried to hit the send email agent, but it didn't actually have
320:49 Bob's email address. So the email didn't get sent. So what I did here was I put
320:53 in the input, which was send an email to Bob asking him what time he wants to
320:56 leave. I then showed the two actions it needs to take. The first one was use the
321:00 contact agent to get Bob's email. Send this email address to the email agent
321:03 tool. And then the second action is use the email agent to send the email. And
321:07 then finally, the output that we want the personal assistant to say back to
321:10 the human is, "The email has been sent to Bob. Anything else I can help you
321:13 with?"
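Written out in the system prompt, that hard-coded example might look something like this (the formatting is illustrative, not the verbatim prompt):

```
## Example
Input: "Send an email to Bob asking him what time he wants to leave."
Actions:
1. Use the contact agent to get Bob's email address. Send this email address
   to the email agent tool.
2. Use the email agent to send the email.
Output: "The email has been sent to Bob. Anything else I can help you with?"
```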
321:16 The idea here is you don't need to put in examples that are pretty intuitive and that the agent's going to get right already. You only want to put
321:19 in examples where you're noticing common themes of the agent failing to do
321:23 the same thing every time; then you may as well hardcode in that example input, output, and tool
321:28 calls. So, step four is to debug one error at a time. Always change one thing
321:32 and one thing only at a time so you know exactly what you changed that broke the
321:36 automation. Too often I'll see people just get rid of an entire section
321:40 and then start running things and now it's like okay well we're back at square
321:43 one because we don't know exactly what happened. So you want to get to the
321:46 point where you're adding one sentence, you're hitting run and it's either
321:49 fixing it or it's not fixing it and then you know exactly what to do. You know
321:52 exactly what broke or fixed your automation. And so one thing honestly I
321:55 want to admit here is I created that system prompt generator on my free
321:59 Skool community. And really, the idea there was just to help you with the
322:02 formatting, because I don't really use that thing anymore, given that
322:06 doing that is very proactive in the sense that we're dropping a sort of
322:10 query into ChatGPT, the custom GPT I built, it's giving us a system prompt
322:13 and then we're putting that whole thing in the agent and then just running it
322:16 and testing it. And in that case, you don't know exactly what you should
322:19 change to fix little issues. So, just wanted to throw that out there. I don't
322:22 really use that system prompt generator anymore. I now always handcraft my
322:27 prompts. Anyways, from there, what you want to do is scale up slowly. So once
322:30 you confirm that the agent is consistently working with its first tool
322:34 and its first rule in its prompt, then you can slowly add more tools and more
322:38 prompt rules. So here's an example. You'll add a tool. You'll add a sentence
322:42 in the prompt about the tool. Test out a few scenarios. If it's working well, you
322:45 can then add another tool and keep testing out and slowly adding pieces.
322:48 But if it's not, then obviously you'll just hard prompt in the changes of what
322:52 it's doing wrong and how to fix that. From there, you'll just test out a few
322:55 more scenarios. Um, and then you can just kind of rinse and repeat until you
322:58 have all the functionality that you're looking for. All right, now let's look
323:01 at the core components of an effective prompt. Each agent you design should
323:04 follow a structured prompt to ensure clarity, consistency, and efficiency.
323:09 Now, there's a ton of different types of prompting you can do based on the role
323:12 of the agent. Ultimately, they're going to fall under one of these three buckets:
323:16 tool-based prompting, conversational prompting (or
323:19 content-creation-type prompting), and then categorization/evaluation prompting. And
323:23 the reason I wanted to highlight that is because obviously if we're creating like
323:26 a content creation agent, we're not going to say what tools it has if it has
323:29 no tools. But um yeah, I just wanted to throw that out there. And another thing
323:32 to keep in mind is I really like using markdown formatting for my prompts. As
323:35 you can see these examples, we've got like different headers with pound signs
323:39 and we're able to specify like different sections. We can use bolded lists. We
323:42 can use numbered lists. I've seen some people talk about using XML for
323:45 prompting. I'm not a huge fan of it because, as far as human readability goes,
323:49 I think markdown just makes a lot more sense. So that's what I do. Anyways, now
323:53 let's talk about the main sections that I include in my prompts. The first one
323:56 is always a background. So whether this is a role or a purpose or a context, I
324:01 typically call it something like an overview. But anyways, just giving it
324:04 some sort of background that defines who the agent is, what its overall goal is.
324:08 And this really sets the foundation of, you know, sort of identifying their
324:12 persona, their behavior. And if you don't have this section, the agent is
324:16 kind of going to lack direction and it's going to generate really generic or
324:21 unfocused outputs. So set its role and this could be really simple. You can
324:23 kind of follow this template of you are a blank agent designed to do blank. Your
324:27 goal is blank. So you are a travel planning AI assistant that helps users
324:31 plan their vacations. Your goal is to provide detailed, personalized travel
324:35 itineraries based on the user's input. Then we have tools. This is obviously
324:38 super super important when we're doing sort of non-deterministic agent
324:41 workflows where they're going to have a bunch of different tools and they have
324:44 to use their brain, their chat model to understand which tool does what and when
324:48 to use each one. So, this section tells the agent what tools it has access to
324:52 and when to use them. It ensures the AI selects the right tool for the right
324:55 task. And a well structured tools section prevents confusion and obviously
324:59 makes AI more efficient. So, here's an example of what it could look like. We
325:02 have like the markdown header of tools and then we have like a numbered list.
325:05 We're also showing that the tools are in bold. This doesn't have to be the way
325:08 you do it, but sometimes I like to show them in bold. And, as you can see,
325:12 it's really simple. It's not too much. It's not overwhelming. It's
325:16 just very clear. Google search: use this tool when the
325:19 user asks for real-time information. Email sender, use this tool when the
325:22 user wants to send a message. Super simple. And what else you can do is you
325:26 can define when to use each tool. So right here we say we have a contact
325:29 database. Use this tool to get contact information. You must use this before
325:34 using the email generator tool because otherwise it won't know who to send the
325:36 email to. So you can actually define these little rules. Keep it very clear
325:41 within the actual tool layer of the prompt. And then we have instructions. I
325:44 usually call them rules as you can see. Um, you could maybe even call it like a
325:48 standard operating procedure. But what this does, it outlines specific rules
325:52 for the agent to follow. It dictates the order of operations at a high level.
325:55 Just keep in mind, you don't want to say do this in this order every time because
325:58 then it's like, why are you even using an agent? The whole point of an agent is
326:01 that it's, you know, it's taking an input and something happens in this
326:04 black box where it's calling different tools. It may call this one twice. It
326:07 may call this one three times. It may call them none at all. Um, the idea is
326:11 that it's variable. It's not deterministic. So, if you're saying do
326:15 it and this every time, then you should just be using a sequential workflow. It
326:18 shouldn't even be an agent. But obviously, the rules section helps
326:22 prevent misunderstandings. So, here's like a high level instruction, right?
326:25 You're greeting the user politely. If the user provides incomplete
326:27 information, you ask follow-up questions. Use the available tools only
326:31 when necessary. Structure your response in clear, concise sentences. So, this
326:34 isn't saying like you do this in this order every time. It's just saying when
326:37 this happens, do this. If this happens, do that. So, here's an example for AI
326:41 task manager. When a task is added, you confirm with the user. If a deadline is
326:45 missing, ask the user to specify one. If a task priority is high, send a
326:48 notification. Store all tasks in the task management system. So, it's very
326:52 clear, too. Um, we don't need all these extra filler words because remember, the
326:56 AI can understand what you're saying as long as it has the actual context
326:59 words that have meaning. You don't need all these little fillers. Um, you don't
327:03 need these long sentences. So, moving on to examples, which are, you know, sample
327:07 inputs and outputs, and also the actions in between the inputs and
327:11 outputs. This helps the AI understand expectations by showing real
327:14 examples. And these are the things that I love to hard code in there, hard
327:18 prompt in there. Because like I said, there's no point in showing an example
327:21 if the AI was already going to get that input and output right every time. You
327:24 just want to see what it's messing up on and then put an example in and show it
327:28 how to fix itself. So more clear guidance and it's going to give you more
327:31 accurate and consistent outputs. Here's an example where we get the input that
327:34 says, can you generate a trip plan for Paris for 5 days? The action you're
327:37 going to take is first call the trip planner tool to get X, Y, and Z. Then
327:41 you're going to take another action which is calling the email tool to send
327:44 the itinerary. And then finally, the output should look something like this.
327:49 Here's a 5-day Paris itinerary. Day 1, day 2, day 3, day 4, day 5. And then I
327:53 typically end my prompts with like a final notes or important reminders
327:56 section, which just has like some miscellaneous but important reminders.
327:59 It could be current date and time, it could be rate limits, it could be um
328:03 something as simple as like don't put any emojis in the output. Um, and
328:09 sometimes why I do this is because something can get lost within your
328:13 prompt. And sometimes I've thrown today's date up top, but then the agent
328:16 only actually registers it when it's at the bottom. So playing around with the
328:19 actual location of your things can sometimes help it out. And so
328:23 having a final notes section at the bottom, not with too many notes, but
328:26 just some quick things to remember like always format responses as markdown.
328:30 Here's today's date. If unsure about an answer, say I don't have that
328:33 information." So, just little miscellaneous things like that.
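Pulling those sections together, a condensed skeleton of the structure described above might look like this. The section names and example content are taken from this chapter; none of it is a fixed standard, just one workable layout:

```
# Overview
You are a travel planning AI assistant. Your goal is to provide detailed,
personalized travel itineraries based on the user's input.

## Tools
1. **Google Search** - use this tool when the user asks for real-time information.
2. **Email Sender** - use this tool when the user wants to send a message.

## Rules
- Greet the user politely.
- If the user provides incomplete information, ask follow-up questions.
- Use the available tools only when necessary.

## Examples
Input: "Can you generate a trip plan for Paris for 5 days?"
Actions: call the trip planner tool, then call the email tool to send the itinerary.
Output: "Here's a 5-day Paris itinerary: Day 1... Day 5."

## Final Notes
- Always format responses as markdown.
- Here is today's date: {{ $now }}
- If unsure about an answer, say "I don't have that information."
```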
328:36 Now, I wanted to quickly talk about some honorable mentions because, like I said
328:40 earlier, the prompt sections and components vary based on the actual
328:44 type of agent you're building. So, in the case of like a content creator agent
328:48 that has no tools, um you wouldn't give it a tool section, but you may want to
328:51 give it an output section. So, here's an output section that I had recently done
328:55 for my voice travel agent. Um, which if you want to see that video, I'll drop a
328:59 link right here. But what I did was I just basically included rules for the
329:01 output because the output was very specific with HTML format and it had to
329:05 be very structured and I wanted horizontal lines. So I created a whole
329:09 section dedicated towards output format as you can see. And because I used three
329:13 pound signs for these subsections, the agent was able to understand that all
329:17 this rolled up into the format of the output section right here. So anyways, I
329:21 said the email should be structured as HTML that will be sent through email.
329:25 Use headers to separate each section. Add a horizontal line to each section.
329:28 I said what should be in the subject. I said what should be in the
329:31 introduction section. I said how you should list these departure dates,
329:34 return dates, flights for the flight section. Um, here's something where I
329:39 basically gave it like the HTML image tag and I showed how to put the image in
329:42 there. I showed to make I said like make a inline image rather than um an
329:47 attachment. I said to have each resort with a clickable link. I also was able
329:51 to adjust the actual width percentage of the image by specifying that here in the
329:56 prompt. Um, so yeah, this was just getting really detailed about the way we
329:59 want the actual format to be structured.
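To give a feel for that, here's the gist of the output section, reconstructed from the description above rather than quoted verbatim; the image tag and width value are illustrative:

```
### Output Format
- The email should be structured as HTML that will be sent through email.
- Use headers to separate each section, and add a horizontal line to each section.
- List the departure dates, return dates, and flights in the flight section.
- Embed images inline rather than as attachments, e.g.:
  <img src="IMAGE_URL" alt="Resort photo" width="40%">
- Make each resort name a clickable link.
```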
330:02 You can see here we have "activities" that I actually misspelled in my agent, but it didn't matter. And then finally,
330:06 just a sign off. And then just some final additional honorable mentions,
330:10 something like memory and context management, um, some reasoning, some
330:14 error handling, but typically I think that these can be just kind of one or
330:17 two sentences that can usually go in like the rules or instructions section,
330:21 but it depends on the use case, like I said. So, if it needs to be pretty
330:24 robust, then creating an individual section at the bottom called memory or
330:28 error handling could be worth it. It just depends on, like I said, the actual
330:32 use case and the goal of the agent. Okay, cool. So, now that we've got
330:35 through that document, let's hop into n8n and we'll just do some really quick
330:38 examples of some reactive live prompting. Okay, so I'm going to hit
330:42 tab. I'm going to type in AI agent. We're going to grab one and we're going
330:44 to be communicating with it through this connected chat trigger node. Now, I'm
330:47 going to add a chat model real quick just so we can get set up and
330:51 running. We have our GPT-4o Mini. We're good to go. And just a reminder, there is
330:54 zero system prompt in here. All it is is that you are a helpful assistant. So,
330:58 the first thing to do is we want to add a tool and test it out. So, I'm
331:03 going to add a um Google calendar tool. I'm just going to obviously select my
331:07 calendar to pull from. I'm going to, you know, fill in those parameters using the
331:10 model by clicking that button. And I'm just going to say this one's called
331:14 create event. So, we have create event. And so, now we're going to do our test
331:17 and see if the tool is working properly. I'm going to say create an event for
331:23 tonight at 7:00 p.m. So send this off. We should see that the agent understands
331:27 to use this create event tool because it's using an automatic description. But
331:31 now we see an issue. It created the start time for October 12th, 2023 and
331:36 the end time also for October 12th, 2023. So this is our first instance of
331:39 reactive prompting. It's calling the tool correctly. So we don't really need
331:43 to prompt in the actual tool name yet. It's probably best practice
331:47 to do so, but first, I'm just going to give an overview and say you
331:52 are a calendar. Actually, no. I'm just going to say you are a helpful assistant
331:56 because that's all it is right now. And we don't know what else we're adding
331:59 into this guy. But now we'll just say tools is create event just so it's
331:59 aware. Use this to create an event. And then we want to say final notes: here
332:06 is the current date and time. Because that's where it messed up: it
332:14 didn't know the current date and time, even though it was able to call the
332:17 correct tool.
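At this point, the entire system prompt is still only a few lines, something like the following; the {{ $now }} expression is one way to inject the current date and time in n8n:

```
# Overview
You are a helpful assistant.

## Tools
- create event: use this to create an event.

## Final Notes
Here is the current date and time: {{ $now }}
```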
332:20 So now we'll just send this same thing off again, and that should have fixed it. We reactively fixed the error, and we're just making
332:28 sure that it is working as it should now. Okay, there we go. It just hit the
332:31 tool and it says the event has been created for tonight at 7 p.m. And if I
332:35 click into my calendar, you can see right there we have the event that was
332:38 just created. So cool. Now that's working. What we're going to do now is
332:40 add another tool. So, we'll drag this one over here. And let's say we want to
332:44 do a send email tool. We're going to send a message. We're going to change
332:48 the name to send email. And just so you guys are aware of how it's able to
332:52 know: right here, the tool description, we're setting automatically. If we set
332:55 manually, we would just say, you know, use this tool to send an email. But we
332:59 can just keep it simple. Leave it as set automatically. I'm going to turn on the To,
333:04 subject, and message as defined by the model. And that's going to be it. So now
333:07 we just want to test this thing again before we add any prompts. We'll say
333:12 send an email to bob@example.com asking what's up. We'll send this off. Hopefully it's hitting
333:17 the right tool. So we should see there we go. It hit the send email tool and
333:21 the email got sent. We can come in here and check everything was sent correctly.
333:24 Although what we noticed is it's signing off as "Best, [Your Name]" with a placeholder, and we
333:28 don't want to do that. So let's come in here and let's add a tool section for
333:33 this tool and we'll tell it how to how to act. So send email. That's another
333:37 tool it has. And we're going to say use this to send an email. Then we're going to say sign off
333:45 emails as Frank. Okay. So that's reactively fixing an error we saw. I'm
333:48 just now going to send off that same query. We already know that it knows how
333:51 to call the tool. So it's going to do that once again. There we go. We see the
333:55 email was sent. And now we have a sign off as Frank. So that's two problems
333:58 we've seen. And then we've added one super short line into the system prompt
334:02 and fixed those problems. Now let's do something else. Let's say in Gmail we
334:08 want to be able to label an email. And in order to label an email, as you can
334:13 see, add label to a message, we need a message ID and we need a label name or
334:17 an ID for that label. And we could choose from a list, but more
334:21 realistically, we want the label ID to be pulled in dynamically. So if we need
334:24 to get these two things, what we have to do is first get emails and also get
334:28 labels. So first I'm going to do get many. I'm going to say this is using,
334:32 you know, we're calling this tool get emails. And then we don't want
334:37 to return all. We want to do a limit. And we also want to choose from a
334:40 sender. So we'll have this also be dynamically chosen. So cool. We don't
334:45 have a system prompt in here about this tool, but we're just going to say get my
334:52 last email from Nate Herkelman. So we'll send that off. It should be hitting the
334:55 get emails tool, filling in Nate Herkelman as the sender. And now we can see that
334:59 we just got this email with a subject hello. We have the message ID right
335:03 here. So that's perfect. And now what we need to do is we need to create a tool
335:06 to get the label ID. So I'm going to come in here and I'm going to say um get
335:11 many and we're going to go to label. We're going to do um actually we'll just
335:16 return all. That works. There's not too many labels in there. Um and we have to
335:19 name this tool of course. So we're going to call this get labels. So once again
335:24 there's no prompt in here about these two tools at all. And we're
335:29 going to say get my email labels. We'll see if it hits the right tool. There we go. It did. And it
335:36 is going to basically just tell us, you know, here they are. So, here are our
335:40 different labels. Um, and here are the ones that we created. So, promotion,
335:43 customer support, high priority, finance, and billing. Cool. So, now we
335:48 can try to actually label an email. So, that email that we just got from
335:54 Nate Herkelman that said hello, let's try to label that one. So, I'm going to add
335:57 another Gmail tool, and this one's going to be add a label to a message. And we
336:01 need the message ID and the label ID. So, I'm just going to fill these in with
336:05 the model parameter, and I'm going to call this tool add label. So, there's no
336:12 prompting for these three tools right here, but we're going to try it out
336:17 anyway and see what happens. So, add a promotion label to my last email
336:22 from Nate Herkelman. Send that off. See what happens? It's getting emails. It tried
336:29 to add a label before. So, now we kind of got in that weird loop. As
336:32 you can see, it tried to add a label before it got labels. So, it didn't know
336:36 what to do, right? Um, we'll click into here. We'll see that I don't really
336:40 exactly know what happened. Category promotions. Looking in my inbox,
336:43 anything sent from Nate Herkelman, we have the email right here, but it wasn't
336:47 accurately labeled. So, let's go back into our agent and prompt this thing a
336:50 little bit better to understand how to use these tools. So, I'm going to
336:53 basically go into the tools section here and I'm going to tell it about some more
336:57 tools that it has. So, get emails, right? This one was already
337:01 working properly, and we're just saying use this to get emails. Now, we have to
337:07 add get labels. We're just saying use this to get labels. Um, and we know that we want it
337:14 to use this before actually trying to add a label, but we're not going to add
337:17 that yet. We're going to see if it can work with a more minimalistic prompt.
337:21 And then finally, I'm going to say add labels. And this one is use this tool to
337:28 add a label to an email. Okay. So now that we just have very basic tool
337:32 descriptions in here, we don't actually say like when to use it or how. So I'm
337:35 going to try this exact same thing again. Add a promotion label to my last
337:39 email from Nate Herkelman. Once again, it tried to use add label before, and it
337:42 tried to just call it twice as you can see. So not working. So back in email, I
337:47 just refreshed and you can see the email is still not labeled correctly. So,
337:51 let's do some more reactive prompting. What we're going to do now is just say
337:55 in order to add labels, so in the description of the add label tool, I'm
337:59 going to say you must first use get emails to get the message ID. And
338:07 actually, I want to make sure that it knows that this is a tool. So, what I'm
338:10 going to do is I'm going to put it in a quote, and I'm going to make it the
338:13 exact same capitalization as we defined over here. So you must first use get
338:17 emails to get the message ID of the email to label. Then you must use get
338:29 labels to get the label ID of the email to label.
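So after that round of reactive prompting, the tools section of the system prompt reads roughly like this, with the dependency spelled out under the add labels tool:

```
## Tools
- create event: use this to create an event.
- send email: use this to send an email. Sign off emails as Frank.
- get emails: use this to get emails.
- get labels: use this to get labels.
- add labels: use this tool to add a label to an email. You must first use
  "get emails" to get the message ID of the email to label. Then you must use
  "get labels" to get the label ID of the email to label.
```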
338:33 Okay. So we added in this one line. So if it's still not working, we know that this line wasn't enough. I'm
338:36 going to hit save and I'm going to try the exact same thing again. Add a
338:39 promotion label to my last email. So now it's getting labels, and
338:43 now it still had an error with adding labels. So, we'll take a look in here.
338:46 Um, it said that it did it successfully, but obviously didn't. It filled in label
338:53 127 blah blah blah. So, I think the message ID is correct, but the label ID
338:57 is not. So, what I'm going to try now is reactively prompting in here. I'm going
339:03 to say the label ID of the email to label. We'll try that. We'll see if that
339:07 fixes it. It may not. We'll have to keep going. So, now we'll see. It's going to
339:10 at least it fixed the order, right? So, it's getting emails and getting labels
339:14 first. And now look at that. We successfully got a labeled email. As you
339:19 can see, we have our um maybe we didn't. We'll have to go into Gmail and actually
339:22 check. Okay, never mind. We did. As you can see, we got the promotion label on
339:26 this one from Nate Herkelman that says hello. And um yeah, that's just going to
339:32 be a really cool simple example of how we sort of take on the process of
339:36 running into errors, adding lines, and being able to know exactly what caused
339:38 what. So, I know the video was kind of simple and I went through it pretty
339:41 fast, but I think that it's going to be a good lesson to look back on as far as
339:45 the mindset you have when approaching reactive prompting and adding
339:48 different tools and testing things, because at the end of the day, building
339:52 agents is a super test-heavy, iterative refining process of
339:57 build, test, change, build, test, change, all that kind of stuff. All
339:59 right, so these next sections are a little bit more miscellaneous, but cool
340:02 little tips that you can play around with with your AI agents. We're going to
340:06 be talking about output parsing, human in the loop, error workflows, and having
340:11 an agent have a dynamic brain. So, let's get into it. All right, so output
340:15 parsing. Let's talk about what it actually means and why you need to use
340:18 it. So, just to show you guys what we're working with, I'm just going to come in
340:21 here real quick and ask our agent to create an email for us. And when it does
340:25 this, the idea is that it's going to create a subject and a body so that we
340:30 could drag this into a Gmail node. So actually before I ask it to do that,
340:33 let's just say we're dragging in a Gmail node and we want to have this guy
340:38 send an email, if I can find this node, which is right up here. Okay, send
340:42 a message. Now what you can see is that we have different fields that we need to
340:46 configure: the to, the subject, and the message. So ideally when we're asking
340:51 the agent to create an email, it will be able to output those three different
340:54 things. So let me just show an example of that. Please send an email to
341:01 nate@example.com asking what's up. We need the to, the subject, and the
341:06 email. Okay, so ideally we wouldn't say that every time because um we would have
341:11 that in the system prompt. The issue is this workflow doesn't let you run if a
341:14 node has errored, and it's errored because we didn't fill out stuff. So I'm just
341:17 going to resend this message. But let me show you guys exactly why we need to use
341:21 an output parser. So it outputs the to, the subject, and the message. And if
341:24 we actually click into it though, the issue is that it comes through all in
341:30 one single, you know, item called output. And so you can see we have the
341:33 to, the subject, and the message. And now if I went into here to actually map
341:37 these variables, I couldn't have them separated or I would need to separate
341:41 them in another step because I want to drag in the dynamic variable, but I can
341:45 only reference all of it at once. So that's not good. That's not what we
341:48 want. That's why we need to connect an output parser. So I'm going to click
341:52 into here. And right here there's an option that says require specific output
341:55 format. And I'm going to turn that on. What that does is it just gave us
341:59 another option to our AI agent. So typically we basically right here have
342:03 chat model, memory, and tool. But now we have another one called output parser.
342:06 So this is awesome. I'm going to click onto the output parser. And you can see
342:10 that we have basically three options. 99.9% of the time you are just going to
342:14 be using a structured output parser which means you're able to give your
342:20 agent basically a defined JSON schema, and it will always output stuff in that
342:25 schema. If you need the output to be automatically fixed with AI, that's
342:29 what you would use the auto-fixing output parser for; like I said, I almost
342:33 never have to use it. So if I click on the structured output parser,
342:37 what happens is, right now we see a JSON example. So if we were to talk to our
342:41 agent and say, hey, can you tell me some information about
342:44 California, it would output the state in one string item called state, and then
342:50 would also output an array of cities: LA, San Francisco, San Diego. So what we
342:54 want to do is quickly define to our AI agent how to output
342:58 information, and we know that we want it to output based on this node. We need a
343:03 to, we need a subject, and we need a message. So don't worry, you're not
343:07 going to have to write any JSON yourself. I'm going to go to ChatGPT and
343:12 say, "Help me write a JSON example for a structured output parser in n8n. I need
343:18 the AI agent to output a to field, a subject field, and a body field." We'll
343:24 just go ahead and send this off. And as you guys know, all LLMs are trained
343:28 really well on JSON. It's going to know exactly what I'm asking for here. And
343:31 all I'm going to have to do is copy this and paste that in. So once this finishes
343:36 up, it's very simple: to, subject, body. And it's being a little extra right now
343:39 and giving me a whole example body, but I just have to copy that, go
343:43 into here, and just replace that JSON example. Super simple.
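(In case you want to type it by hand instead, the pasted example is just a tiny JSON object along these lines; the field values are placeholders, and ChatGPT's version will differ:)

    {
      "to": "recipient@example.com",
      "subject": "Example subject line",
      "body": "Example email body text."
    }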
343:49 And now, hopefully, I don't even have to prompt this guy at all. We'll give it a try. But if
343:53 it's not working, what we would do is we would prompt in here and say, "Hey, here
343:56 is basically how we want you to output stuff. Here's your job." All that kind
343:59 of stuff, right? But let me just resend this message. We'll take a look. We'll
344:02 see that it called its output parser because this is green. And now let's
344:07 activate the Gmail node and click in. Perfect. So what we see on this left
344:11 hand side now is we have a to, we have a subject, and we have a body, which
344:14 makes this so much easier to actually map out over here and drag in. So in
344:21 different agents in this course, you're going to see me using different
344:24 structured output parsers, whether that is to get a to, subject, and body, whether
344:28 that is to create different stories, stuff like that. Let me just show one
344:31 more quick example of like a different way you could use this. I'm going to
344:35 delete this if I can actually delete it. And we are going to just change up the
344:40 structured output parser. So let's say we want an AI agent to create a story for us. So I'm going
344:48 to just talk to this guy again. Help me write a different JSON example where I
344:54 want to have the agent output a title of the story, an array of characters, and
345:00 then three different scenes. Okay. So we'll send that off and see what it
345:03 does. And just keep in mind it's creating this JSON basically a template
345:07 that's telling your agent how to output information. So we would basically say,
345:12 "Hey, create me a story about um a forest." And it would output a title,
345:16 three characters, and three different scenes as you can see here. So we'll
345:22 copy this. We'll paste this into here.
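(Again, roughly what that story schema looks like; the titles and descriptions here are invented placeholders:)

    {
      "title": "An Example Story Title",
      "characters": ["Character One", "Character Two", "Character Three"],
      "scenes": [
        { "scene": 1, "description": "Opening scene description." },
        { "scene": 2, "description": "Middle scene description." },
        { "scene": 3, "description": "Closing scene description." }
      ]
    }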
345:25 And once again, I'm not even going to prompt the agent. Let's see how it does. Please create me a story about an
345:31 airplane. Okay, we'll go ahead and take a look at what this is going to do. This one's
345:36 going to spin a little bit longer. Oh, wow. Didn't even take too long. So, it
345:40 called the structured output parser. And now, let's click into the agent and see
345:44 how it output. Perfect. So, we have the title is the adventure of Skyward the
345:49 airplane. We have four characters, Skyward, Captain Jane, Navigator Max,
345:54 and ground engineer Leo. And then you can see we have four different scenes
345:57 that each come with a scene number and a description. So if we wanted to, we
346:01 could have this be like, you know, maybe we want an image prompt for each of
346:04 these scenes. So we can feed that into an image generation model and we would
346:07 just have to go into ChatGPT and say, "Hey, for each scene, add another
346:11 field called image prompt." And it would just basically take care of it. So just
346:14 wanted to show you how this works, how easy it is to set up these different
346:18 JSON structured output parsers and why it's actually valuable to do within n8n.
346:22 So hopefully that opened your eyes a little bit. Okay, our workflow is actively listening
346:30 for us in Telegram, and I'm going to ask it to make an X post about coffee at
346:33 night. So, as you can see, this first agent is going to search the internet
346:37 using Tavily and create that initial X post for us. Now, we just got a message
346:41 back in our Telegram that says, "Hey, is this post good to go?" Drinking coffee
346:44 at night can disrupt your sleep since caffeine stays in your system for hours,
346:48 often leading to poorer sleep quality. So, what I'm going to do is click on
346:50 respond. And this gives us the ability to give our agent feedback on the post
346:54 that it initially created. So, here is that response window and I'm going to
346:56 provide some feedback. So, I'm telling the agent to add at the end of the tweet
347:00 unless it's decaf. And as soon as I hit submit, we're going to see this go down
347:03 the path. It's going to get classified as a denial message. And now the
347:07 revision agent just made those changes and we have another message in our
347:10 telegram with a new X post. So now, as you can see, we have a new post. I'm
347:13 going to click on respond and open up that window. And what we can see here is
347:16 now we have the changes made that we requested. At the end, it says unless
347:20 it's decaf. So now all we have to do is respond good to go. And as soon as we
347:23 submit this, it's going to go up down the approval route and it's going to get
347:27 submitted and posted to X. So, here we go. Let's see that in action. I'll hit
347:29 submit and then we're going to watch it get posted onto X. And let's go check
347:33 and make sure it's there. So, here's my beautiful X profile. And as you can see,
347:35 I was playing around with some tweets earlier. But right here, we can see
347:38 drinking coffee at night can disrupt your sleep. We have the most recent
347:41 version because it says unless it's decaf. And then we can also click into
347:45 the actual blog that Tavily found to pull this information from. So, now that
347:48 we've seen this workflow in action, let's break it down. So the secret that
347:51 we're going to be talking about today is the aspect of human in the loop, which
347:55 basically just means somewhere along the process of the workflow. In this case,
347:58 it's happening right here. The workflow is going to pause and wait for some sort
348:02 of feedback from us. That way, we know before anything is sent out to a client
348:06 or posted on social media, we've basically said that we 100% agree that
348:10 this is good to go. And if the initial message is not good to go, we have the
348:14 ability to have this unlimited revision loop where it's going to revise the
348:18 output over and over until we finally agree that it's good to go. So, we have
348:21 everything color coded and we're going to break it down as simple as possible.
348:24 But before we do that here, I just wanted to do a real quick walkthrough of
348:28 a more simple human in the loop because what's going on up here is it's just
348:31 going to say, "Do you like this?" Yes or no compared to down here where we
348:35 actually give text-based feedback. So, we'll break them both down, but let's
348:37 start up here real quick. And by the way, if you want to download the
348:40 template for free and play around with either of these flows, you can get that
348:43 in my free school community. The link for that will be down in the
348:45 description. And when it comes to human in the loop in n8n, if you click on
348:48 the plus, you can see down here, human in the loop, wait for approval or human
348:53 input before continuing. You click on it, you can see there's a few options,
348:56 and they all just use the operation called send and wait for response. So
348:59 obviously there's all these different integrations, and I'm sure more will
349:03 even start to roll out, but in this example, we're just using Telegram.
349:05 Okay, so taking a look at this more simple workflow, we're going to send off
349:08 the message: make an X post about AI voice agents. What's happening is the
349:11 exact same thing as the demo where this agent is going to search the web and
349:14 then it's going to create that initial content for us. And now we've hit that
349:18 human in the loop step. As you can see, it's spinning here purple because it's
349:21 waiting for our approval. So in our Telegram, we see the post. It asks us if
349:25 this is good to go. And let's just say that we don't like this one, and we're
349:27 going to hit decline. So when I hit decline, it goes down this decision
349:31 point where it basically says, you know, did the human approve? Yes or no. If
349:34 yes, we'll post it to X. If no, it's going to send us a denial message, which
349:38 basically just says post was denied. Please submit another request. And so
349:41 that's really cool because it gives us the ability to say, okay, do we like
349:44 this? Yes. And it will get posted. Otherwise, just do nothing with it. But
349:48 what if we actually want to give it feedback so that it can take this post?
349:51 We can give it a little bit of criticism and then it will make another one for us
349:55 and it just stays in that loop rather than having to start from square one. So
349:58 that's exactly what I did down here with the human in the loop 2.0, where we're
350:02 able to give text-based feedback instead of just saying yes or no. So now we're
350:05 going to break down what's going on within every single step here. So what
350:08 I'm going to do is I'm going to click on executions. I'm going to go to the one
350:12 that we did in the live demo and bring that into the workflow so we can look at
350:15 it. So what we're going to do is just do another live run and walk through step
350:20 by step the actual process of this workflow. So I'm going to hit test
350:23 workflow. I'm going to pull up Telegram and then I'm going to ask it to make us
350:26 an X post. Okay, so I'm about to fire off: make me an X post about crocodiles.
350:31 So, sent that off. This X post agent is using its GPT-4.1 model as well as Tavily
350:37 search to do research, create that post, and now we have the human in the loop
350:40 waiting for us. So before we go look at that, let's break down what's going on
350:43 up front. So the first phase is the initial content. This means that we have
350:46 a telegram trigger, and that's how we're communicating with this workflow. And
350:50 then it gets fed into the first agent here, which is the X post agent. Let's
350:53 click into the X post agent and just kind of break down what's going on
350:56 here. So, the first thing to notice is that we're looking for some sort of
350:59 prompt. The agent needs some sort of user message that it's going to look at.
351:03 In this case, we're not doing the connected chat trigger node. We're
351:06 looking within our Telegram node because that's where the text is actually coming
351:09 through. So, on this left-hand side, we can see all I basically did was: right
351:13 here is the text that we typed in, make me an X post about crocodiles. And all I
351:17 did was I dragged this right into here as the user message. And that is what
351:20 the agent is actually looking at in order to take action. And then the other
351:23 thing we did was gave the agent a system message which basically defines its
351:27 behavior. And so here's what we have. The overview is you are an AI agent
351:31 responsible for creating X posts based on a user's request. Your instructions are
351:35 to always use the Tavily search tool to find accurate information. Write an
351:39 informative engaging tweet, include a brief reference to the source directly
351:43 in the tweet, and only output the tweet. We listed its tools, of which it only has one,
351:46 called Tavily search, and we told it to use this for real-time web search, and
351:50 then just gave it an example basically saying okay here's an input you may get
351:54 here's the action you will take and then here's the output that we want you to
351:57 output. And then we just gave it final notes. And I know I maybe read through
352:01 this pretty quick, but keep in mind you can download the template for free, and
352:03 the prompt will be in there and then what you could do is you can click on
352:06 the logs for an agent and you can basically look at its behavior so we can
352:11 see that it used its chat model GPT-4.1, read through the system prompt, and
352:14 decided: "Okay, I need to go use Tavily search." So, here's how it searched for
352:18 crocodile information. And then it used its model again to actually create that
352:22 short tweet right here. And then we'll just take a quick look at what's going
352:25 on within the actual Tavily search tool here. So, if you download this template,
352:28 all you'll have to do is plug in your own credential. Everything else should
352:32 be set up for you. But, let me just break it down real quick. So, if you go
352:35 to tavily.com and create an account, you can get 1,000 free searches per month.
352:39 So, that's the kind of plan I'm on. But anyways, here is the documentation. You
352:42 can see right here we have the Tavily search endpoint, which is right here. All
352:46 we have to do is authorize ourselves. So we'll have an authorization as a header
352:50 parameter and then we'll do bearer space our API token. So that's how you'll set
352:53 up your own credential. And then all I did was I copied this data field into
352:57 the HTTP request. And this is where you can do some configuration. You can look
353:00 through the docs to see how you want to make this request. But all I wanted to
353:03 do here was just change the search query. So back in n8n, you can see in my
353:08 request body I changed the query by using a placeholder. Right here it says
353:11 use a placeholder for any data to be filled in by the model. So I changed the
353:15 query to a placeholder called search term. And then down here I defined the
353:18 search term placeholder as what the user is searching for. So what this means is
353:22 the agent is going to interpret our query that we sent in telegram. It's
353:27 then going to use this Tavily tool and basically use its brain to figure out
353:30 what should I search for. And in this case, on the left-hand side, you can see
353:33 that it filled out the search term with latest news or facts about crocodiles.
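(To make that concrete, the tool's HTTP request ends up shaped something like this. The endpoint and bearer-token header follow Tavily's docs; the placeholder name searchTerm is just what I called it here, and extra body fields Tavily supports, like max_results, are optional:)

    POST https://api.tavily.com/search
    Authorization: Bearer YOUR_TAVILY_API_KEY

    {
      "query": "{searchTerm}"
    }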
353:38 And then we get back our response with information and a URL. And then it uses
353:42 all of this in order to actually create that post. Okay. So, here's where it may
353:45 seem like it's going to get a little tricky, but it's not too bad. Just bear
353:48 with me. And I wanted to do some color coding here so we could all sort of stay
353:52 on the same page. So, what we're doing now is we're setting the post. And this
353:56 is super important because we need to be able to reference the post later in the
353:59 workflow, whether that's when we're actually sending it over to X or when
354:04 we're making a revision and we need the revision agent to look at the original
354:07 post as well as the feedback from the human. So in the set node, all we're
354:11 doing is we're basically setting a field called post and we're dragging in a
354:15 variable, {{ $json.output }}. And this just means that it's going to be
354:19 grabbing the output from this agent or the revision agent no matter what. As
354:22 you can see, it's looped back into this set because if we're defining a variable
354:26 using $json, it means that we're going to be looking for whatever
354:29 node immediately finished right before this one. And so that's why we have to
354:32 keep this one kind of flexible because we want to make sure that at the end of
354:36 the day, if we made five or six or seven revisions, that only the most recent
354:41 version will actually be posted on X.
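(In other words, the Set node boils down to one assignment, sketched here; the field name is whatever you pick:)

    post = {{ $json.output }}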
354:44 So then we move into the human in the loop phase of this workflow. And as you can see, it's still spinning. It's been
354:46 spinning this whole time while we've been talking, but it's waiting for our
354:50 response. So anyways, it's a send and wait for a response operation. As you
354:54 can see right here, the chat ID is coming from our Telegram trigger. So if
354:57 I scroll down in the Telegram trigger on the left-hand side, you can see that I
355:00 have a chat ID right here. And all I did was I dragged this in right here.
355:03 Basically just meaning, okay, whoever communicates with this workflow, we need
355:07 to send and get feedback from that person. So that's how we can make this
355:10 dynamic. And then I just made my message basically say, hey, is this good to go?
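(As a loose sketch, the whole node config looks something like this; the trigger field path depends on your own node names, so treat it as approximate:)

    Operation:     Send and Wait for Response
    Chat ID:       {{ $('Telegram Trigger').item.json.message.chat.id }}
    Message:       Hey, is this good to go?

                   {{ $json.post }}
    Response Type: Free Text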
355:14 And then I'm dragging in the post that we set earlier. So this is another
355:18 reason why it's important is because we want to request feedback on the most
355:21 recent version as well, not the first one we made. And then like I mentioned
355:24 within all of these human in the loop nodes, you have a few options. So you
355:28 can do free text, which is what we're doing here. Earlier what we did was
355:31 approval, which is basically you can say, hey, is there an approve button? Is
355:34 there an approve and a denial button? How do you want to set that up? But this
355:37 is why we're doing free text because it allows for us to actually give feedback,
355:40 not just say yes or no. Cool. So what we're going to do now is actually give
355:45 our feedback. So, I'm going to come into here. We have our post about crocodiles.
355:49 So, I'm going to hit respond and it's going to open up this new page. And so,
355:51 yes, it's a little annoying that this form has to pop up in the browser rather
355:56 than natively in Telegram or whatever, you know, Slack, Gmail, wherever you're
355:59 doing the human in the loop, but I'm sure that'll be a fix that'll come soon.
356:01 But it's just, right now, I think it's coming through a webhook. So, they just
356:04 kind of have to do it like this. Anyways, let's say that we want to
356:07 provide some feedback and say make this shorter. So, I'm going to say make this
356:11 shorter. And as I submit it, you're going to see it go to this decision
356:14 point and then it's going to move either up or down. And this is pretty clearly a
356:18 denial message. So we'll watch it get denied and go down to the revision agent
356:22 as you can see. And just like that that quickly, we already have another one to
356:26 look at. So before we look at it and give feedback, let's just look at what's
356:29 actually going on within this decision point. So in any automation, you get to
356:32 a point where you have to make a decision. And what's really cool about
356:36 AI automation is now we can use AI to make a decision that typically a
356:39 computer couldn't because a typical decision would be like is this number
356:42 greater than 10 or less than 10. But now it can read this text that we submitted.
356:46 Make this shorter and it can say okay is this approved or declined. And basically
356:50 I just gave it some short definitions of like what an approval message might look
356:53 like, and what a denial message might look like. And you can look through that
356:56 if you download the template.
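(If you're writing your own, the two category descriptions can be as short as something like this; paraphrased, not the exact template text:)

    Approved: the human indicates the post is good to go, e.g. "yes",
              "send it off", "looks good".
    Declined: the human requests any change, e.g. "make this shorter",
              "add a source".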
357:00 But as you can see here, it pushed this message down the declined branch because we asked it to make a revision. And so it goes down
357:03 the denial branch which leads into the revision agent. And this one's really
357:07 really simple. All we did here was we gave it two things as the user message.
357:12 We said: here's the post to revise. As you can see, this is the initial
357:15 post that the first agent made for us. And then here is the human feedback. So
357:18 it's going to look at this. It's going to look at this and then it's going to
357:21 make those changes because all we said in the system prompt was you're an
357:24 expert Twitter writer. Your job is to take an incoming post and revise it
357:27 based on the feedback that the human submitted. And as you can see here is
357:31 the output: it made the tweet a lot shorter. And that's the beauty of using
357:34 the set node is because now we loop that back in. The most recent version has
357:38 been submitted to us for feedback. So, let's open that up real quick in our
357:41 Telegram. And now you can see that the shorter tweet has been submitted to us.
357:45 And it's asking for a response. So, at this point, let's say we're good to go
357:47 with this tweet. I'm going to click respond. Open up this tab. Recent
357:50 crocodile attacks in Indonesia highlight the need for caution in their
357:54 habitats. Stay safe. We've got a few emojis. And I'm just going to say, let's
357:58 just say send it off because it can interpret multiple ways of saying like
358:02 yes, it's good to go. So, as soon as I hit submit, we're going to watch it go
358:05 through the decision point and then post on our X. So, you see that right here,
358:09 text classifier, and now it has been posted to X. And I just gave our X
358:12 account a refresh. You can see that we have that short tweet about recent
358:16 crocodile attacks. Okay, so now that we've seen another example of a live run
358:19 through, a little more detailed, let me talk about why I made the color coding
358:23 like this. So the set node here, its job is basically just: I'm going to be
358:27 grabbing the most recent version of the post because then I can feed it into the
358:31 human in the loop. I can then feed that into the revision if we need to make
358:34 another revision because you want to be able to make revisions on top of
358:38 revisions. You don't want to be only making revisions on the first one.
358:40 Otherwise, you're going to be like, what's the point? And then also, of
358:44 course, you want to post the most recent version, not the original one, because
358:47 again, what's the point? So in here, you can see there's two runs. The first one
358:52 was the first initial content creation and then the second one was the revised
358:55 one. Similarly, if we click into the next node which was request feedback,
358:59 the first time we said make this shorter and then the second time we said send it
359:02 off. And then if we go into the next node which was the text classifier, we
359:06 can see the first time it got denied because we said make this shorter and
359:10 the second time it said send it off and it got approved. And that's basically
359:14 the flow of you know initial creation. We're setting the most recent version.
359:18 We're getting feedback. We're making a decision using AI. And as you can tell
359:22 for the text classifier, I'm using Gemini 2.0 Flash rather than GPT-4.1. And then of
359:27 course, if it's approved, it gets posted. If it's not, it makes revisions.
359:30 And like I said, this is unlimited revisions. And it's revisions on top of
359:34 revisions. So when it comes to Human in the Loop, you can do it in more than
359:37 just Telegram, too. So if you click on the plus, you can see right here, Human
359:40 in the Loop, wait for approval or human input before continuing. We've got
359:45 Discord, Gmail, Chat, Outlook, Telegram, Slack. We have a lot of stuff you can
359:48 do. However, so far with my experience, it's been limited to one workflow. And
359:52 what do I mean by that? It's kind of tough to do this when you're actually
359:56 giving an agent a tool that's supposed to be waiting for human approval. So,
359:59 let me show you what I mean by that. Okay, so here's an agent where I tried
360:02 to do a human in the loop tool because we have the send and wait message
360:06 operation as a tool for an agent. So, let me show you what goes on here. We'll
360:09 hit test workflow. Okay, so I'm going to send off get approval for this message.
360:13 Hey John, just wanted to see if you had the meeting minutes. And you're going to
360:15 watch that it's going to call the get approval tool, but here's the issue. So,
360:21 it's waiting for a response, right? And we have the ability to respond, but the
360:25 waiting is happening at the agent level. It really should be waiting down here
360:28 for the tool because, as you saw in the previous example, the response from this
360:32 should be the actual feedback from the human. And we haven't submitted that
360:35 yet. And right now, the response from this tool is literally just the message.
360:40 So, what you'll see here is if I go back into Telegram and I click on respond and
360:43 we open up this tab, it basically just says no action required. And if I go
360:47 back into the workflow, you can see it's still spinning here and there's no way
360:51 for this to give another output. So, it just doesn't really work. And so, what I
360:54 was thinking was, okay, why don't I just make another workflow where I just use
360:57 the actual node like we saw on the previous one. That should work fine
361:00 because then it should just spin down here on the tool level until it's ready.
361:04 And so, let me show you what happens if we do that. Okay, so like I said, I
361:07 built a custom tool down here which is called get approval. It would be sending
361:10 the data to this workflow, it would send off an approval message using the send
361:14 and wait, and it doesn't really work. I even tried adding a wait here. But what
361:18 happens typically is when you use a workflow to call another one, it's going
361:21 to be waiting and looking in the last node of that workflow for the response,
361:26 but it doesn't work yet with these operations. And I'll show you guys why.
361:29 I'm sure n8n will fix this soon, but it's just not there yet. So, I'm just
361:32 going to send off the exact same query, get approval for this message. We'll see
361:35 it call the tool, and basically, as you can see, it finished up instantly. Now
361:40 it's waiting here, and we already did get a message back in Telegram, which
361:43 basically said: ready to go? Hey John, just wanted to see if you had the meeting
361:45 minutes. And it gives us the option to approve or deny. And if we click into the
361:50 subworkflow, the one that it actually sent data to, we can see that the
361:53 execution is waiting. So this workflow is properly working because it's waiting
361:57 here for human approval. But if we go back into the main flow it's waiting
362:01 here at the agent level rather than waiting here. So, there's no way for the
362:05 agent to actually get our live feedback and use that to take action how it needs
362:09 to. So, I just wanted to show you guys that I had been experimenting with this
362:12 as a tool. It's not there yet, but I'm sure it will be here soon. And when it
362:16 is, you can bet that I'll have a video on it. Today, I'm going to be showing you
362:22 guys how you can set up an error workflow in n8n so that you can log all
362:26 of your errors as well as get notified every time one of your active workflows
362:29 fails. The cool part is all we have to do is set up one error workflow and then
362:32 we can link that one to all of our different active workflows. So I think
362:35 you'll be pretty shocked how quick and easy this is to get set up. So let's get
362:38 into the video. All right, so here's the workflow that we're going to be using
362:41 today as our test workflow that we're going to purposely make error and then
362:44 we're going to capture those errors in a different one and feed that into a
362:47 Google sheet template as well as some sort of Slack or email notification. And
362:51 if you haven't seen my recent video on using the new Think tool in n8n, then
362:54 I'll tag it right up here. Anyways, in order for a workflow to trigger an error
362:58 workflow, it has to be active. So, first things first, I'm going to make this
363:00 workflow active. There we go. This one has been activated. And now, what I'm
363:04 going to do is go back out to my n8n dashboard. We're going to create a new workflow.
363:07 And this is going to be our error logger workflow. Okay. So, you guys are going
363:10 to be pretty surprised by how simple this workflow is going to be. I'm going
363:13 to add a first step. And I'm going to type an error. And as you can see,
363:16 there's an error trigger, which says triggers the workflow when another
363:18 workflow has an error. So, we're going to bring this into the workflow. We
363:21 don't have to do anything to configure it. You can see that what we could do is
363:24 we could fetch a test event just to see what information could come back. But
363:28 what we're going to do is just trigger a live one because we're going to get a
363:31 lot more information than what we're seeing right here. So, quickly pay
363:34 attention to the fact that I named this workflow error logger. I'm going to go
363:38 back into my ultimate assistant active workflow. Up in the top right, I'm going
363:41 to click on these three dots, go down to settings, and then right here, there's a
363:45 setting called error workflow, which as you can see, a second workflow to run if
363:48 the current one fails. The second workflow should always start with an
363:52 error trigger. And as you saw, we just set that up. So, all I have to do is
363:55 choose a workflow. I'm going to type an error and we called it error logger. So
363:59 I'm going to choose that one, hit save. And now these two workflows are
364:02 basically linked so that if this workflow ever has an error that stops
364:06 the workflow, it's going to be captured in our second one over here with the
364:09 information. So let's see a quick example of that. Okay, so this workflow
364:13 is active. It has a telegram trigger as you can see. So I'm going to drag in my
364:16 telegram and I'm just going to say, "Hey." And what's going to happen is
364:19 obviously we're going to get a response back because this workflow is active and
364:23 it says, "How can I assist you today?" Now, what I'm going to do is I'm just
364:26 going to get rid of the chat model. So, this agent essentially has no brain. I'm
364:29 going to hit save. We're going to open up Telegram again, and we're going to
364:33 say, "Hey." And now, we should see that we're not going to get any response back
364:36 in Telegram. If we go into the executions of this ultimate assistant,
364:39 you can see that we just got an error right now. And that was when we just
364:42 sent off the query that said, "Hey," and it errored because the chat model wasn't
364:46 connected. So, if we hop into our error logger workflow and click on the
364:49 executions, we should see that we just had a new execution. And if we click
364:53 into it, we'll see all the information that came through. So what it's going to
364:57 tell us is the ID of the execution, the URL of the workflow, the name of the
365:01 workflow, and then we'll also see what node errored and the error message. So
365:05 here under the node object, we can see different parameters. We can see
365:08 kind of how the node's configured. We can see the prompt even. But what we're
365:11 interested in is down here we have the name, which is ultimate assistant. And
365:14 then we have the message, which was a chat model sub node must be connected
365:19 and enabled. So anyways, we have our sample data. I'm going to hit copy to
365:22 editor, which just brings in that execution into here so we can play with
365:26 it. And now what I want to do is map up the logic of first of all logging it in
365:30 a Google sheet. So here's the Google sheet template I'm going to be using.
365:32 We're going to be putting in a timestamp, a workflow name, the URL of
365:36 the workflow, the node that errored, and the error message. If you guys want to
365:39 get this template, you can do so by joining my free school community. The
365:42 link for that's down in the description. Once you join the community, all you
365:44 have to do is search for the title of the video up top, or you can click on
365:48 YouTube resources, and you'll find the post. And then in the post is where
365:51 you'll see the link to the Google sheet template. Anyways, now that we have this
365:54 set up, all we have to do is go back into our error logger. We're going to
365:57 add a new node after the trigger. And I'm going to grab a sheets node. What we
366:02 want to do is append a row in sheets. Um I'm just going to call this log error.
366:05 Make sure I choose the right credential. And then I'm going to choose the sheet
366:08 which is called error logs. And so now we can see we have the values we need to
366:11 send over to these different columns. So for time stamp, all I'm going to do is
366:14 I'm actually going to make an expression, and I'm just going to use
366:18 $now. And this is basically just going to send over to Google Sheets the current
366:21 time whenever this workflow gets triggered. And if you don't like the way
366:23 this is coming through, you can play around with formatting after
366:27 $now. And then you'll be able to configure it a little bit more. And you
366:30 can also ask ChatGPT to help you out with this JavaScript function. And feel free
366:34 to copy this if you want. I'm pulling in the full year, month, day, and then the
366:37 time. Okay, cool.
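(In case you want to copy the idea rather than my exact string: $now in n8n is a Luxon DateTime, so a formatted version looks something like this, and you can swap in whatever Luxon format tokens you prefer:)

    {{ $now.toFormat('yyyy-MM-dd HH:mm:ss') }}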
366:40 Then we're just pretty much going to drag and drop the other information we need. So the first thing is the workflow name. And to get to
366:43 that, I'm going to close out of the execution. And then we'll see the
366:45 workflow and we can pull in the name right there which is ultimate personal
366:49 assistant. For the URL, I'm going to open back up execution and grab the URL
366:54 from here. For node, all I have to do is look within the node object. We're going
366:57 to scroll down until we see the name of the node which is right here, the
367:01 ultimate assistant. And then finally, the error message, which should be right
367:04 under that name right here. Drag that in, which says a chat model sub-node
367:08 must be connected and enabled.
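(Putting the whole append-row mapping together, the five columns end up as expressions roughly like these; the exact JSON paths come from what the error trigger shows on your left-hand side, so double-check them against your own data:)

    Timestamp:     {{ $now.toFormat('yyyy-MM-dd HH:mm:ss') }}
    Workflow Name: {{ $json.workflow.name }}
    URL:           {{ $json.execution.url }}
    Node:          {{ $json.execution.error.node.name }}
    Error Message: {{ $json.execution.error.message }}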
367:10 So now that we're good to go here, I'm going to test the step, and then we'll check our Google sheet and make sure that that
367:13 stuff comes through correctly. And as you can see, it just got
367:16 populated and we have the URL right here, which if we clicked into, it would
367:20 take us to that main ultimate personal assistant workflow, as you can see when
367:23 this loads up. And it takes us to the execution that actually failed as well,
367:27 so we can sort of debug. So now we have an error trigger that will update a
367:31 Google sheet and log the information. But maybe we also want to get notified
367:34 when there's an error. So I'm just going to drag this off right below and I'm
367:38 going to grab the Slack node and I'm going to choose to send a message right
367:41 here. And then we'll just configure what we want to send over. Okay, so we're
367:44 going to be sending a message to a channel. I'm going to choose the channel
367:48 all awesome AI stuff. And then we just need to configure what the actual
367:51 message is going to say. So I'm going to change this to an expression, make this
367:55 full screen and let's fill this out. So pretty much just customize this however
367:58 you want. Let's say I want to start off with workflow error and we will put the
368:04 name of the workflow. So I'll just close out of here, throw that in there. So now
368:06 it's going to come through saying workflow error ultimate personal
368:10 assistant. And then I'm going to say like what node errored at what time and
368:14 what the error message was. So first let's grab the name of the node. I
368:17 just have to scroll down to name right here. So: ultimate assistant errored at,
368:23 and I'm going to use that same $now function. So it says ultimate
368:27 assistant errored at 2025-04-17. And then I'm just going to go down and say the error message was, and
368:36 we're just going to drag in the error message. There we go. And then finally, we'll just provide a
368:43 link to the workflow. So see this execution here. And then we'll drag in
368:48 the link, which is all the way up top. And we should be good to go.
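(So the finished Slack message, written out as one expression-filled template, reads something like this; same caveat that the field paths are approximate:)

    Workflow error: {{ $json.workflow.name }}
    {{ $json.execution.error.node.name }} errored at {{ $now.toFormat('yyyy-MM-dd HH:mm') }}.
    The error message was: {{ $json.execution.error.message }}
    See this execution here: {{ $json.execution.url }}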
368:50 And then, if you want to make sure you're not sending over a little message at the bottom that
368:54 says this was sent from n8n, you're going to add an option. You're going to
368:56 click on include link to workflow. And then you're going to turn that off. And
368:59 now we hit test step. And we hop into Slack. And we can see we got workflow
369:02 error ultimate personal assistant. We have all this information. We can click
369:05 into this link. And we don't have this little message here that says automated
369:09 with this n8n workflow. So maybe you could just set up a channel dedicated
369:12 towards error logging, whatever it is. Okay, so let's save this real quick and
369:17 um let's just do another sort of like example. Um one thing to keep in mind is
369:20 there's a difference between the workflow actually erroring out and going
369:24 red and just something not working correctly. And I'll show you exactly
370:27 what I mean by that. So this example actually triggered the error workflow
369:32 because the execution on this side is red and it shows an error. But what
370:35 happens is, for example, with our Tavily tool right here, I have no
369:38 authentication pulled up. So this tool is not going to work. But if I come into
370:42 Telegram and say search the web for apples, it's going to work. This
370:46 workflow is going to go green even though this tool is not going to work.
369:49 And we'll see exactly why. So as you can see, it says I'm currently unable to
369:52 search the web due to a connection error. So if we go into the execution,
369:56 we can see that this thing went green even though it didn't work the way we
369:59 wanted it to. But what happened is the tool came back, and it was green, and it
370:02 basically just didn't work because our authentication wasn't correct. And then
370:06 you can even see in the think node, it basically said the web search function
370:09 is encountering an authentication error. I need to let the user know the search
370:11 isn't currently available and offer alternative ways to help blah blah blah.
370:15 But all of these nodes actually went green and we're fine. So this example
370:19 did not trigger the error logger. As you can see, if we check here, there's
370:22 nothing. We check in Slack, there's nothing. So what we can do is we'll
370:26 actually make something error. So I'll go into this memory and it's going to be
370:29 looking for a session ID within the telegram trigger and I'm just going to
370:33 add an extra d. So this variable is not going to work. It's probably going to
370:36 error out and now we'll actually see something happen with our error
370:40 workflow. So I'm just going to say hey, and we will watch... basically, nothing will happen
370:47 here. Okay, that confused me, but I realized I didn't save the workflow. So
370:49 now that we've saved it, this is not going to work. So let's once again say
370:52 hey. And we should see that nothing's going to come back over here. Um, I
370:56 believe if we go into our error logger, we should see something pop through. We
370:59 just got that row and you can see the node changed, the error message changed,
371:02 all that kind of stuff. And then in Slack, we got another workflow error at
371:07 a new time and it was a different node. And finally, we can just come into our
371:09 error logger workflow and click on executions. And we'll see the newest run
371:14 was the one that we just saw in our logs and in our Slack, which was the memory
371:19 node that errored. As you can see right here, simple memory. And so really the
371:22 question then becomes, okay, well, what happens if this workflow in itself
371:26 errors too? I really don't foresee that happening unless you're doing some sort
371:30 of crazy AI logic over here. It really just needs to be as simple as
371:32 mapping variables from here to somewhere else. So you really shouldn't
371:36 see any issues. Maybe an authentication issue, but I don't know. Maybe if
371:39 this workflow is erroring for you a ton, you probably are just doing
371:42 something wrong. Anyways, that's going to do it for this one. I know it was a
371:45 quicker one, but hopefully, if you didn't know about this, it's something you can start using.
371:53 If you've ever wondered which AI model to use for your agents and you're tired
371:56 of wasting credits or overpaying for basic tasks, then this video is for you
371:59 because today I'm going to be showing you a system where the AI agent picks
372:04 its brain dynamically based on the task. This is not only going to save you
372:06 money, but it's going to boost performance and we're also getting full
372:09 visibility into the models that it's choosing based on the input and we'll
372:13 see the output. That way, all we have to do is come back over here, update the
372:16 prompt, and continue to optimize the workflow over time. As you can see,
372:19 we're talking to this agent in Slack. So, what I'm going to do is say, "Hey,
372:22 tell me a joke." You can see my failed attempts over there. And it's going to
372:25 get this message. As you can see, it's picking a model, and then it's going to
372:29 answer us in Slack, as well as log the output. So, we can see we just got the
372:35 response: why don't scientists trust atoms? Because they make up everything.
372:35 And if I go to our model log, we can see we just got the input, we got the
372:38 output, and then we got the model which was chosen, which in this case was
372:42 Google Gemini's 2.0 Flash. And the reason it chose Flash is because this
372:45 was a simple input with a very simple output and it wanted to choose a free
372:48 model so we're not wasting credits for no reason. All right, let's try
372:51 something else. I'm going to ask it to create a calendar event at 1 p.m. today
372:55 for lunch. Once this workflow fires off, it's going to choose the model. As you
372:58 can see, it's sending that over to the dynamic agent to create that calendar
373:02 event. It's going to log that output and then send us a message in Slack. So,
373:06 there we go: I have created the calendar event for lunch at 1 p.m.
373:09 today; if you need anything else, just let me know. We click into the calendar
373:12 real quick. There is our lunch event at one. And if we go to our log, we can see
373:16 that this time it used OpenAI's GPT-4.1 mini. All right, we'll just do one more
373:20 and then we'll break it down. So, I'm going to ask it to do some research on
373:23 AI voice agents and create a blog post. Here we go. It chose a model. It's going
373:27 to hit Tavily to do some web research. It's going to create us a blog post, log
373:31 the output, and send it to us in Slack. So, I'll check in when that's done. All
373:35 right, so it just finished up and as you can see, it called the Tavily tool four
373:38 times. So, it did some in-depth research. It logged the output and we
373:42 just got our blog back in Slack as you can see. Wow. It is pretty thorough. It
373:47 talks about AI voice agents, the rise of voice agents. Um there's key trends like
373:51 emotionally intelligent interactions, advanced NLP, real-time multilingual
373:55 support, all this kind of stuff. Um that's the whole blog, right? It ends
373:57 with a conclusion. And if you're wondering what model it used for this
374:00 task, let's go look at our log. We can see that it ended up using Claude 3.7
374:04 Sonnet. And like I said, it knew it had to do research. So it hit the Tavily
374:08 tool four different times. The first time it searched for AI voice agents
374:12 trends, then it searched for case studies, then it searched for growth
374:16 statistics, and then it searched for ethical considerations. So, it made us a
374:21 pretty like holistic blog. Anyways, now that you've seen a quick demo of how
374:24 this works, let's break down how I set this up. So, the first things first,
374:27 we're talking to it in Slack and we're getting a response back in Slack. And as
374:31 you can see, if I scroll up here, I had a few fails at the beginning when I
374:34 was setting up this trigger. So, if you're trying to get it set up in Slack,
374:37 um it can be a little bit frustrating, but I have a video right up here where I
374:40 walk through exactly how to do that. Anyways, the key here is that we're
374:44 using OpenRouter as the chat model. So, if you've never used OpenRouter, it's
374:47 basically a chat model provider that you can connect to, and it will let you
374:50 route to any model that you want. So, as you can see, there's 300-plus models
374:54 that you can access through OpenRouter. So, the idea here is that we have the
374:57 first agent, which is using a free model like Gemini 2.0 Flash. We have this one
375:02 choosing which model to use based on the input. And then whatever this model
375:05 chooses, we're using down here dynamically for the second agent to
375:09 actually use in order to use its tools or produce some sort of output for us.
375:12 And just so you can see what that looks like, if I come in here, you can see
375:15 we're using a variable. But if I got rid of that and we change this to fixed, you
375:18 can see that we have all of these models within our OpenRouter dynamic brain to
375:23 choose from. But what we do is instead of just choosing from one of these
375:26 models, we're basically just pulling the output from the model selector agent
375:30 right into here. And that's the one that it uses to process the next steps. Cool.
375:34 So let's first take a look at the model selector. What happens in here is we're
375:38 feeding in the actual text that we sent over in Slack. So that's pretty simple.
375:42 We're just sending over the message. And then in the system message here, this is
375:45 where we actually can configure the different models that the AI agent has
375:49 access to. So I said, "You're an agent responsible for selecting the most
375:52 suitable large language model to handle a given user request. Choose only one
375:55 model from the list below based strictly on each model's strengths." So we told
375:59 it to analyze the request and then return only the name of the model. Down
376:02 here, under available models and strengths, we gave it four models and
376:05 basically defined what each one's good at. Obviously, you could give it more
376:09 than four if you wanted to, but just for the sake of the demo, I only gave it
376:12 four. And then
376:16 we basically said, return only one of the following strings. And as you can
376:19 see in this example, it returned anthropic/claude-3.7-sonnet.
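(To give you the shape of it, the system message reads along these lines. The model IDs have to match OpenRouter's catalog exactly, and the strengths below are paraphrased rather than quoted:)

    You are an agent responsible for selecting the most suitable large language
    model to handle a given user request. Analyze the request and return ONLY
    the name of the model.

    Available models and strengths:
    - google/gemini-2.0-flash-001: simple chat and quick answers; cheapest option
    - openai/gpt-4.1-mini: everyday tool use like calendar and email tasks
    - anthropic/claude-3.7-sonnet: long-form writing and research
    - openai/o1: multi-step reasoning, logic puzzles, riddles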
376:23 And one quick thing to note here is, when you use Gemini 2.0 Flash, for some reason it
376:28 likes to output a newline after a lot of these strings. So all I had to do later was
376:33 clean up this newline, and I'll show you exactly what I mean by that. But now we
376:37 have the output of our model and then we move on to the actual Smarty Pants
376:40 agent. So in this one, we're giving it the same user message as the previous
376:44 agent where we're just basically coming to our Slack trigger and we're dragging
376:47 in the text from Slack. And what I wanted to show you guys is that here we
376:51 have a system message and all I gave it was the current date and time. So I
376:54 didn't tell it anything about using Tavily for web search. I didn't tell it how to
376:57 use its calendar tools. This is just going to show you that it's choosing a
377:00 model intelligent enough to understand the tools that it has and how to use
377:04 them. And then of course the actual dynamic brain part. We looked at this a
377:07 little bit, but basically all I did is I pulled in the output of the previous
377:12 agent, the model selector agent. And then, like I said, we had to just trim
377:16 up the end, because if you just dragged this in and OpenRouter was trying to
377:20 reference a model that had a newline character after it, it would basically
377:23 just fail and say this model isn't available. So, I trimmed up the end, and that's why.
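(Concretely, the model field on the OpenRouter chat node is an expression shaped like this; "Model Selector" stands in for whatever you named that first agent:)

    {{ $('Model Selector').item.json.output.trim() }}

The .trim() call strips that trailing newline so OpenRouter gets a clean model ID.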
377:26 And you can see in my OpenRouter account, if I go to my activity,
377:31 we can see which models we've used and how much they've cost. So, anyways,
377:35 Gemini 2.0 Flash is a free model, but if we use it through OpenRouter, they have
377:38 to take a little bit of a cut, you know, they've got to get some kickback there. So,
377:41 it's not exactly free, but it's really, really cheap. But the idea here is, you
377:44 know, Claude 3.7 sonnet is more expensive and we don't need to use it
377:48 all the time, but if we want our agent to have the capability of using Claude
377:51 at some point, then we probably would just have to plug in Claude. But now, if
377:55 you use this method, if you want to talk to the agent just about some general
377:58 things or looking up something on your calendar or sending an email, you don't
378:01 have to use Claude and waste these credits. You could go ahead and use a
378:05 free model like 2.0 Flash, or still a very powerful cheap model like GPT-4.1
378:09 mini. And that's not to say that 2.0 Flash isn't super powerful. It's just
378:12 more of a lightweight model. It's very cheap. Anyways, that's just another cool
378:15 thing about Open Router. That's why I've gotten in the habit of using it because
378:19 we can see the tokens, the cost, and the breakdown of different models we've
378:22 used. From there, we're feeding the output into a Google sheet template,
378:24 which by the way, you can download this workflow as well as these other ones
378:28 down here that we'll look at in a sec. You can download all this for free by
378:31 joining my Free School community. All you have to do is go to YouTube
378:34 resources or search for the title of this video and when you click on the
378:37 post associated with this video, you'll have the JSON, which is the n8n workflow
378:41 to download as well as you'll see this Google sheet template somewhere in that
378:44 post so that you can just basically copy it over and then you can plug everything
378:48 into your environment. Anyways, just logging the output of course and we're
378:51 sending over a timestamp. So I just said, you know, whatever time this
378:54 actually runs, you're going to send that over. The input is the Slack message
378:57 that triggered this workflow. The output, I'm basically just bringing the
379:01 output from the Smarty Pants agent right here. And then the model is the
379:04 output from the model selector agent. And then all that's left to do is send
379:08 the response back to the human in Slack where we connected to that same channel
379:11 and we're just sending the output from the agent. So hopefully this is just
379:15 going to open your eyes to how you can set up a system so that your actual main
379:19 agent is dynamically picking a brain to optimize your cost and performance. And
379:23 in a space like AI where new models are coming out all the time, it's important
379:26 to be able to test out different ones for their outputs and see like what's
379:30 going on here, but also to be able to compare them. So, two quick tools I'll
379:34 show you guys. This first one is Vellum, which is an LLM leaderboard. You can
379:38 look at like reasoning, math, coding, tool use. You have all this stuff. You
379:41 can compare models right here where you can select them and look at their
379:45 differences. And then also down here is model comparison with um all these
379:49 different statistics you can look at. You can look at context, window, cost,
379:53 and speed. So, this is a good website to look at, but just keep in mind it may
379:55 not always be completely up to date. Right here, it was updated on April
380:00 17th, and today is the 30th, so doesn't have like the 4.1 models. Anyways,
380:04 another one you could look at is this LM Arena. So, I'll leave the link for this
380:07 one also down in the description. You can basically compare different models
380:09 by chatting with them like side by side or direct. People give ratings and then
380:13 you can look at the leaderboard for like an overview or for text or for vision or
380:17 for whatever it is. just another good tool to sort of compare some models.
380:21 Anyways, we'll just do one more quick test before we go on to the example down
380:24 below, because we haven't used the reasoning model yet, and those are
380:28 obviously more expensive. So, I'm asking it a riddle. I said: you have three
380:31 boxes. One has only apples, one has only oranges, and one has a mix of both.
380:35 They're all incorrectly labeled, and you can pick one fruit from a box without
380:38 looking. How can you label all boxes correctly? So, let's see what it does.
380:42 Hopefully, it's using the reasoning model. Okay, so it responded: a
380:45 succinct way to see it is to pick one piece of fruit from the box labeled
380:49 apples and oranges. Since that label is wrong, the box must actually contain
380:53 only apples or only oranges. Whatever fruit you draw tells you which single
380:56 fruit box that really is. Once you know which box is purely apples or purely
381:00 oranges, you can use the fact that all labels are incorrect to deduce the
381:04 proper labels for the remaining two boxes. And obviously, I had ChatGPT
381:07 give me that riddle, and that's basically the answer it gave back. So,
381:11 real quick, let's go into our log and we'll see which model it used. And it
381:16 used OpenAI's o1 reasoning model. And of course, we can just verify that by
381:18 looking right here. And we can see it is OpenAI o1. So, one thing I wanted to
381:22 throw out there real quick is that OpenRouter does have sort of an auto
381:26 option. You can see right here, openrouter/auto, but it's not going to give
381:30 you as much control over which models you can choose from, and it may not be
381:34 as cost-efficient as being able to define: here are the four models you have, and
381:37 here's when to use each one.
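If you were calling OpenRouter directly instead of through the n8n node, the auto router would just be a different model string. Here's a rough JavaScript sketch; the endpoint and the openrouter/auto id come from OpenRouter's public API, while the key and messages are placeholders:

```javascript
// Rough sketch of a direct OpenRouter call using the auto router.
const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`, // your key
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openrouter/auto", // let OpenRouter pick a model per request
    // model: "openai/gpt-4.1-mini", // or pin one yourself for full control
    messages: [{ role: "user", content: "Hey" }],
  }),
});

const data = await res.json();
// The response reports which model the auto router actually used.
console.log(data.model, data.choices[0].message.content);
```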
381:40 So, just to show you guys what that would do: if I said, "Hey," it's going to pick a model based
381:43 on the input. And here you can see that it used GPT-4o mini. And then if I go
381:47 ahead and send in that same riddle that I sent in earlier, remember earlier it
381:51 chose the reasoning model, but now it's probably not going to choose the
381:54 reasoning model. So anyways, looks like it got the riddle right. And we can see
381:57 that the model it chose here was just GPT-4o. So I guess the argument is:
382:00 yes, this is cheaper than using o1. So if you want to just test out your
382:04 workflows by using the auto function, go for it. But if you do want more control
382:07 over which models to use, when to use each one, and you want to get some
382:10 higher-quality outputs in certain scenarios, then you probably want to take the more
382:13 custom route. Anyways, just thought I'd drop that in there. But let's get back
382:16 to the video. All right, so now that you've seen how this agent can choose
382:19 between all those four models, let's look at like a different type of example
382:22 here. Okay, so down here we have a RAG agent. And this is a really good use
382:25 case in my mind, because sometimes you're going to be chatting with a knowledge
382:28 base and it could be a really simple query like, can you just remind me what
382:31 our shipping policy is? Or something like that. But if you wanted to have
382:35 a comparison or a deep lookup for something in the knowledge base,
382:38 you'd probably want a more intelligent model. So we're doing
382:41 a very similar thing here, right? This agent is choosing the model using a free
382:45 model, and then it's going to feed that selection into the dynamic brain for
382:49 the RAG agent to do its lookup. And what I did down here is I just put a
382:52 very simple flow, if you wanted to load a file into Supabase, just so
382:57 you can test out this Supabase RAG agent up here. But let's chat with this
383:00 thing real quick. Okay, so here is my policy and FAQ document, right? And then
383:04 I have my Supabase table, where I have these four vectors in the documents
383:07 table. So what we're going to do is query this agent for stuff that's in
383:11 that policy and FAQ document. And we're going to see which model it uses based
383:15 on how complex the query is. So if I go ahead and fire off what is our shipping
383:18 policy, we'll see that the model selector is going to choose a model,
383:22 send it over, and now the agent is querying Supabase, and it's going to
383:25 respond with: here's Tech Haven's shipping policy. Orders are processed
383:28 within one to two days, standard shipping takes three to seven business days, blah
383:31 blah blah. And if we compare that with the actual documentation, you can see
383:35 that that is exactly what it should have responded with. And you'll also notice
383:38 that in this example, we're not logging the outputs, just because I
383:41 wanted to show a simple setup. But we can see the model that it chose right
383:46 here was GPT 4.1 mini. And if we look in this actual agent, you can see that we
383:50 only gave it two options, which were GPT 4.1 mini and Anthropic Claude 3.5 Sonnet,
383:54 just because I wanted to show a simple example. But you could expand
383:58 this to more models if you'd like. And just to show that this is working
384:02 dynamically, I'm going to say: what's the difference between our privacy policy
384:05 and our payment policy? And what happens if someone wants to cancel their order
384:08 or return an item? So, we'll see. Hopefully, it's choosing the Claude model,
384:11 because this is a little bit more complex. It just searched the vector
384:15 database. We'll see if it has to go back again or if it's writing an answer. It
384:18 looks like it's writing an answer right now. And we'll see if this is accurate.
384:22 So, privacy versus payment: privacy focuses on data protection,
384:26 payment covers accepted payment methods. What happens if someone wants to
384:28 cancel the order? We have: orders can be cancelled within 12
384:32 hours. And we have a refund policy as well. And if we go in here, we can
384:36 validate that all this information is on here. And we can see this is how you
384:41 cancel. And then this is how you refund. Oh yeah, right here. Visit our returns
384:44 and refund page. And we'll see what it says is that here is our return and
384:47 refund policy. And all this information matches exactly what it says down here.
384:52 Okay. So those are the two flows I wanted to share with you guys today.
384:54 Really, I just hope that this is going to open your eyes to the fact that you
384:58 can have models be dynamic based on the input, which really in the long run will
385:02 save you a lot of tokens for your different chat models. All right, so now
385:05 we're going to move on to webhooks, which I remember seemed really
385:08 intimidating as well, just like APIs and HTTP requests, but they're really even
385:12 simpler. So, we're going to dive into what exactly they are and show an
385:16 example of how that works in n8n. And then I'm going to show you guys two
385:19 workflows that are triggered by n8n webhooks, where we then send data back to that
385:22 webhook. So, don't worry, you guys will see exactly what I'm talking about.
385:27 Let's get into it. Okay, webhooks. So, I remember when I first learned about
385:31 APIs and HTTP requests, and then I was like, what in the world is a webhook?
385:36 They're pretty much the same thing, except think about it like this:
385:41 with a webhook, rather than us sending off data somewhere, like sending off an
385:46 API call, we are the one that's waiting for an API call. We're just waiting and
385:51 listening for data.
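Conceptually, a webhook endpoint is just a small server sitting there waiting for someone else's POST. Here's a hypothetical Express sketch of the idea; it's not what n8n runs internally, since the webhook trigger node handles all of this for you:

```javascript
const express = require("express");
const app = express();
app.use(express.json());

// We never send a request ourselves; we sit here and wait for one.
app.post("/webhook/my-path", (req, res) => {
  console.log("Data arrived:", req.body);
  res.json({ status: "received" }); // reply to whoever called us
});

app.listen(3000);
```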
385:55 So let me show you an example of what that actually looks like. Here you can
385:58 see we have a webhook trigger, and a webhook is always going to come in the
386:03 form of a trigger, because essentially our n8n workflow is waiting for data to
386:07 be sent to it. Whether a form is submitted and the webhook gets it, or whatever
386:10 it is, we are waiting for the data here. So when I click into the webhook, what
386:15 we see is we have a URL, and this is basically the URL that wherever we're
386:21 sending data from is going to send data to. So later in this course, I'm going
386:25 to show you an example where I do an ElevenLabs voice agent, and our n8n
386:29 webhook URL right here is where ElevenLabs is sending data to. Or I'll show
386:33 you an example with Lovable where I build a little app, and then our app is
386:37 sending data to this URL. So that's how it works, right? An important thing to
386:40 remember is you still have to set up your method. So if you are setting up
386:44 some sort of request on a different service and you're sending data to this
386:47 webhook, it's probably going to be a POST. So, you'll want to change that and
386:51 make sure that these actually align. Anyways, what we're going to do is change
386:54 this to a POST, because I just know it's going to be a POST. And this is our
386:59 webhook URL, which is a test URL. So, I'm going to click on the button to copy
387:02 it. And what I'm going to do is take this and go into Postman, which is kind
387:06 of like an API platform that I use to show some demos of how we can send these
387:10 requests. So, I'm going to click on send an API request in here. This basically
387:14 just lets us test out and see if our webhook's working. So what I'm going to
387:18 do is change the request to POST and enter the webhook URL from my n8n
387:23 webhook. Okay. And now basically we have the ability to send over certain
387:27 information. So I'm just going to go to Body, and I'm going to send over form
387:31 data. And you can see, just like JSON, it's key-value pairs. So I'm just going
387:35 to send over a field called text. And the actual value is
387:39 going to be hello.
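In code, the test Postman is about to do amounts to this sketch, with the URL standing in for your own instance's test webhook URL:

```javascript
// POST one form field ("text" = "hello") at the n8n test webhook.
const form = new FormData();
form.append("text", "hello");

const res = await fetch(
  "https://YOUR-INSTANCE.app.n8n.cloud/webhook-test/my-path", // placeholder
  { method: "POST", body: form } // fetch sets the multipart header for us
);
console.log(res.status, await res.text());
```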
387:43 Okay, so that's it. I'm going to click on send. But what happens is we get a
387:48 response back, which is a 404 error. It says the webhook is not registered, and
387:53 there's a hint that says click the test workflow button. The webhook is supposed
387:57 to be listening right now, but it's not. And the reason it's not listening is
388:00 because we are in an inactive workflow, like we've talked
388:03 about before, and we haven't clicked listen for test event. So if I click
388:08 listen, now you can see it is listening at this URL. Okay. So, I go back into
388:13 Postman. I hit send. And now it's going to say workflow was started. I come back
388:17 into here, and we can see that the node has executed. And what we have is the text
388:21 that we entered right there in the body, which was text equals hello. We also get
388:24 all this other stuff that we don't really need. I don't even know
388:28 what some of it stands for. We can see the host was our n8n cloud
388:33 account, all this kind of stuff. But that's not super important for us right
388:36 now, right? I just wanted to show that that's how it works. So, they're both
388:41 configured as POST. Postman is sending data to this address, and we saw
388:47 that it worked, right?
388:52 So what comes next is the fact that we can respond to this webhook. Right now
388:56 we're set to respond immediately, which would be sending data back right away.
389:00 But later you'll see an example with ElevenLabs and my n8n workflow where we
389:04 want to grab the data, do something with it, and send something back. How that
389:08 would work is we would change this setting to using the Respond to Webhook
389:11 node, and then we'd add, say, an AI agent right here, and then after the AI
389:15 agent we would just go Respond to Webhook, and as you can see, it returns data
389:18 to that same webhook address. So that would be sending data back to Lovable,
389:23 ElevenLabs, Postman, wherever we actually got the initial request from. We'd do
389:26 something with that data and send it back to that webhook.
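Extending the earlier receiver sketch, the respond-to-webhook pattern just means doing some work before you answer; runAgent here is a made-up stand-in for whatever the workflow actually does:

```javascript
// Do something with the incoming data, then send the result back
// to whoever called the webhook: Lovable, ElevenLabs, Postman, etc.
app.post("/webhook/my-path", async (req, res) => {
  const answer = await runAgent(req.body); // hypothetical agent call
  res.json({ output: answer });            // the caller receives this
});
```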
389:31 So really, it's the exact same way that we think about APIs, except we are now
389:35 the API server rather than the person who's sending the request. So if you
389:39 think about the example of being at a restaurant: we would look
389:44 at the menu, we would send off a request through the waiter to the kitchen, and
389:47 then the kitchen would get that request via a webhook. They would make the
389:52 food and then send it back to the actual person by responding to the
389:56 webhook. So that's all it is: we are just on the other side of the table now.
390:00 And then one last important thing to know about webhooks in n8n is that
390:05 eventually you have to make the workflow active. So we would click on active,
390:09 and what it's going to tell us is that you can now make calls to your
390:14 production webhook URL, no longer the test URL. If you remember, in here we
390:18 have test and production, and what I copied into Postman was the test. So if
390:23 I go back into Postman and I hit send, nothing happens, right? But if I now go
390:28 to the production URL, copy the production one, go into Postman, paste it in,
390:33 and hit send: there we go. Workflow was started, because that's the active
390:38 workflow's production URL. So I can click on executions and we can see this is
390:41 the newest one we just got when we made our workflow active, and it did in fact start.
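For reference, the two URLs differ only in their path prefix. On n8n cloud they look roughly like this, with the instance name and path as placeholders:

```
Test URL (only live while listening for a test event):
  https://YOUR-INSTANCE.app.n8n.cloud/webhook-test/my-path

Production URL (live whenever the workflow is active):
  https://YOUR-INSTANCE.app.n8n.cloud/webhook/my-path
```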
390:45 And just to prove to you that that's how it works, let me change the body so
390:49 that instead of saying hello, it's going to say active run. Oops. And we'll
390:58 send that off. Go into n8n, and we'll basically just watch a new execution pop
391:02 up, because it's actively listening. I'm not doing anything. There you go. You
391:06 see it pop up. And if we click into the actual webhook data, we will see right
391:11 here the text says active run. Cool. So that's webhooks. Hopefully that clears
391:13 up any confusion that you may have had regarding webhooks. But let's dive into
391:17 some examples in n8n where we have a webhook trigger and we're also
391:20 configuring the other third-party app that's going to send data to our
391:29 webhook. Hey there, I'm Archer. What can I do for you? Hey Archer, can you
391:32 help me send an email? Of course, I'd be delighted to help. Who's the email for and what's it
391:40 about? Just give me the details and I'll get it sent out. Awesome. Yeah, I just
391:44 wanted to send one to Phil. I wanted to ask about an update for the project that
391:49 we discussed last week. Got it. So, you want to send an email to Phil asking
391:53 about a project update? No problem. I'll handle that. Just hang
391:58 tight while I send this off. All done. It's been sent to Phil. He'll be getting
392:02 that email any minute. Anything else I can help you with? Yeah, could you
392:04 actually go ahead and help me send another one? Of course. I'm always happy to lend a
392:10 hand, or rather an email address. Who is this email for and what's it about? Just
392:15 give me the details and I'll whip it up. Just another one to Phil. I forgot to
392:18 mention that I wanted to see if he wants... All right, another email to Phil. This
392:26 time to ask about lunch tomorrow. I'm on it. Just a sec. Gotcha. The email is on
392:31 its way to Phil. He'll be sure to see your invitation for lunch tomorrow. Let
392:35 me know if you need anything else. No, that's going to be it. Thanks, Archer.
392:39 You're very welcome. Always happy to help, especially when it comes to
392:42 emails. Happy to handle any other messages you need to send in the future.
392:46 Have a great day. Okay, so that was a quick demo of the super cool
392:49 conversational voice agent that we built using ElevenLabs and n8n. His name's
392:54 Archer and he helps us send emails. As you can tell, he was very enthusiastic
392:58 about sending emails. So, here's that first email that we asked Archer to send
393:01 to Phil. Hi, Phil. I hope this message finds you well. I'm writing to inquire
393:03 about the project update we discussed last week. Could you please share the
393:06 latest developments at your earliest convenience? Looking forward to your
393:09 response. Best, Nate. And then we asked Archer to send another email to Phil,
393:12 just asking if he wants to get lunch tomorrow. So: hi Phil. I was wondering if
393:16 you're available for lunch tomorrow. Let me know what works for you. Best, Nate.
393:19 So, now that we've seen a quick demo, we heard the voice, we've seen the emails
393:21 actually come through. We're going to hop back into n8n and we're going to
393:24 explain what's going on here so that you guys can get this sort of system up and
393:27 running for yourselves. Okay, so there are a few things that I want to break
393:30 down here. First of all, just within n8n, whenever you're building an AI
393:33 agent, as you guys should know, there's going to be an input, and then that input
393:37 is going to be fed into the agent. The agent's going to use its system prompt
393:41 and its brain to understand what tools it needs to hit. It's going to use those
393:44 tools to take action, and then there's going to be some sort of output. So, in
393:47 the past when we've done tutorials on personal assistants, email agents,
393:51 whatever it was, RAG agents, usually the input that we've been using has
393:56 been something like Telegram or Gmail or even just the n8n chat trigger. Pretty
393:59 much all we're switching out here for the input and the output is ElevenLabs. So,
394:04 we're going to be getting a POST request from ElevenLabs, which is going to send
394:07 over the body parameters, like who the email is going to and what the message
394:11 is going to say, stuff like that. And then the agent, once it actually does
394:14 that, is going to respond using this Respond to Webhook node. So, we'll get
394:17 into ElevenLabs and I'll show you guys how I prompted the agent and everything
394:21 over there. But first, let's take a quick look at what's going on in the
394:25 super simple agent setup here in n8n. So, these are tools that I've used multiple
394:28 times on videos on my channel. The first one is contact data. So, it's just a
394:31 simple Google sheet. This is what it looks like. Here's Phil's information
394:34 with the correct Gmail that we were having information sent to. And then I
394:37 just put other ones in here as dummy data. But all we're doing is
394:42 hooking up the Google Sheets tool. The operation is Get Row(s) in Sheet
394:46 within the document, and we link the document. That's pretty much all we
394:49 had to do. And then we just called it contact data so that, when we're
394:52 prompting the agent, it knows when to use this tool and what it has. And then the
394:56 actual tool that sends emails is the send email tool. So in here we're
395:01 connecting a Gmail tool. In this one we're using the $fromAI
395:05 functions, which makes it really, really easy. We're sending a message, of
395:09 course, and the $fromAI function basically takes the query coming in from
395:14 the agent, and dynamically the AI figures out: okay, what's the
395:17 email address based on the user's message? We grab the email address and
395:20 put it in the To parameter. How can we make a subject out
395:24 of this message? We'll put it here. And then how can we actually construct an
395:28 email body? We put it there. So that's all that's going on here.
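In the Gmail tool's fields, that looks something like the following n8n expressions. The keys and descriptions here are illustrative rather than the exact ones on my canvas:

```
// To:
{{ $fromAI('to', 'Email address of the recipient', 'string') }}

// Subject:
{{ $fromAI('subject', 'Short subject line for the email', 'string') }}

// Message:
{{ $fromAI('body', 'Professional email body, signed off as Nate', 'string') }}
```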
395:31 Here we've got our tools. We've obviously got a chat model; in this case, we're
395:34 just using GPT-4o. And then we have what's actually taking place within the
395:41 agent. So obviously there's an input coming in, and that's where we define this
395:46 information: input, agent, output, and then the actual system message for the
395:49 agent. The system message is a little bit different than the user message. The
395:53 system message defines the role: this is your job as an agent, this is what you
395:57 should be doing, these are the tools you have. The user message changes with
396:01 each execution: each run, each time that we interact with the agent through
396:05 ElevenLabs, it's going to be a different user message coming in, but the system
396:07 message always remains the same, as it's the prompt for the AI
396:11 agent's behavior. Anyways, let's take a look at the prompt that we have here.
396:15 First, the overview is that you are an AI agent responsible for drafting and
396:19 sending professional emails based on the user's instructions. You have access to
396:23 two tools, contact data to find email addresses and send email to compose and
396:27 send emails. Your objective is to identify the recipient's contact
396:31 information, draft a professional email, and sign off as Nate before sending.
396:36 The tools you have: obviously, contact data, which retrieves email addresses
396:39 based on the name. So we have an example input, John Doe, and an example
396:43 output, an email address. And then send email, which sends an email with a
396:47 subject and a body. The example input here is an email address, a subject, and
396:52 a body, with an example email subject and body. So that's what we have for the
396:57 system message. And then for the user message, as you can see, we're basically
397:01 just saying: okay, the email is going to be for this person, and the email
397:04 content is going to be this. So in this case, in this execution, the email was
397:08 for Phil and the email content was asking about lunch tomorrow. That's all
397:13 we're being fed from ElevenLabs. The agent then takes that information to grab
397:17 the contact information, and then it uses its AI brain to make the email
397:22 message. Finally, it basically just responds to the webhook with: the email to
397:26 Phil regarding lunch tomorrow has been successfully sent. ElevenLabs
397:30 captures that response back, and then it can respond to us with: gotcha, we
397:33 were able to send that off for you. Is there anything else you need? So, that's
397:37 pretty much all that's going on here. If you look in the actual webhook, what
397:41 we're getting here is, you know, different things coming back. We have
397:44 different little technical parameters, all this kind of stuff. All that we want
397:48 to configure, and I'll show you guys how we configure this in ElevenLabs, is
397:52 the JSON body of the request that's being sent over. So we're in table format
397:56 here. If we went to JSON, we could see down here we're looking at body. In the
398:00 body, we set up two fields to send over from ElevenLabs to n8n using that POST
398:05 request webhook. The first field that we set up was to. And as you can see,
398:11 that's where the ElevenLabs model, based on what we say, figures out who the
398:15 email is going to and puts that there. Then it figures out the email content:
398:17 what do you want me to say in this email? And it throws that in here.
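So the body that ElevenLabs POSTs to the webhook boils down to two fields, something like this; the exact field names are whatever you define in the ElevenLabs tool config:

```
{
  "to": "Phil",
  "email content": "Ask about lunch tomorrow"
}
```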
398:22 So that's how that's going to work. As far as setting up the actual webhook
398:27 node right here: we wanted to switch this to a POST method because ElevenLabs
398:32 is sending us information. We have a test URL and a production URL. The test
398:36 one we use for now, and we have to manually have n8n listen for a test
398:41 event. I will show an example of what happens if we don't actually do this
398:44 later in the video. But when you push the app into production, you make the
398:48 workflow active, and you would want to put this webhook in ElevenLabs as the
398:52 production URL rather than the test URL so that you can make sure that the
398:56 stuff's actually coming over. We put our path as n8n just to clean up this URL;
399:01 all that does is change the URL. For authentication we put none. And
399:04 then finally, for respond, instead of immediately or when last node
399:08 finishes, we want to do using Respond to Webhook node. That way we get the
399:12 information, the agent does its thing and then responds, and all we have here
399:15 is Respond to Webhook. So it's very simple. As you can see, it's really only
399:19 four nodes: the agent, its brain, the two tools, and the
399:24 webhooks. So hopefully that all made sense. We are going to hop into ElevenLabs
399:29 and start playing around with this stuff. Also, quick side note, if you
399:32 want to hop into this workflow, check out the prompts, play around with how I
399:36 configured things, you'll be able to download this workflow for free in the
399:39 Free School community. The link for that will be down in the description. You'll
399:41 just come into here, you'll click on YouTube resources, you will click on the
399:45 post associated with this video, and then you're able to download the
399:48 workflow right here. Once you download the workflow, you can import it from
399:52 file, and then you will have this exact canvas pop up on your screen. Then, if
399:55 you're looking to take your skills with n8n a little bit further, feel free to
399:58 check out my paid community. The link for that will also be down in the
400:01 description. Great community in here. A lot of people are learning n8n
400:05 and asking questions, sharing builds, sharing resources. We've got a great
400:09 classroom section going over client builds and some deep dive topics, as well
400:13 as five live calls per week. So, you can always make sure you're getting your
400:15 questions answered. Okay. Anyways, back to the video. So, in ElevenLabs, this is
400:19 the email agent. This is just the test environment where we're going to be
400:22 talking to it to try things out. So, we'll go back and we'll see how we
400:25 actually configured this agent. And if you're wondering why I named him
400:28 Archer, it's just because his actual voice is Archer. So, that wasn't my
400:32 creativity there. Anyways, once we are in the configuration section of the
400:35 actual agent, we need to set up a few things. So, first is the first message.
400:39 When we click on call, the agent is going to say, "Hey
400:42 there, I'm Archer. What can I do for you?" Otherwise, if we leave this
400:45 blank, then we will be the ones to start the conversation. But from there, you
400:49 will set up a system prompt. So the prompt I have in here is: you are a
400:53 friendly and funny personal assistant who loves helping the user with tasks in
400:56 an upbeat and approachable way. Your role is to assist the user with sending
401:00 emails. When the user provides details like who the email is for and what it's
401:04 about, you will pass that information to the n8n tool and wait for its response.
401:08 I'll show you guys in a sec how we configure the n8n tool and how all that
401:12 works. But anyways, once you get confirmation from n8n that the email was
401:15 sent, cheerfully let the user know it's done and ask if there's anything else
401:19 you can help with. Keep your tone light, friendly, and witty while remaining
401:22 efficient and clear in your responses. So, as you can see in the system prompt,
401:25 I didn't even really put in anything about the way it should be conversing,
401:30 as far as sounding natural and using filler words. Sometimes
401:34 I do that to make it sound more natural, but this voice I found just sounded
401:38 pretty good just as is. Then, we're setting up the large language model.
401:42 Right now we're using Gemini 1.5 Flash, just because it says it's the fastest.
401:44 You have other things you can use here, but I'm just sticking with this one.
401:48 This is what it uses to extract information out of the
401:52 conversation to pass to n8n, or to figure out how it's going to respond to
401:55 you. That's what's going on here. And then with temperature, I talked
401:59 about how I like to put it a little bit higher, especially for fun use cases
402:02 like this. Basically, this is just the randomness and creativity of the
402:05 responses generated, so it's always going to be a little different, and it's
402:08 going to be a little more fun the higher you put it. But if you wanted it to be
402:11 more consistent, and you were trying to get some sort
402:15 of information back exactly the way you want it, then you would probably want
402:19 to lower this a little bit. And then you have stuff like the knowledge base. So
402:24 if this was maybe a customer support agent, you'd be able to put a knowledge
402:27 base in there. Or if you watched my previous voice video about doing voice
402:32 RAG, you could still do that: sending it to n8n, hitting a vector database from
402:35 n8n, and then getting the response back. But anyways, in this case,
402:39 this is where we set up the tool that we were able to call up here as you saw in
402:44 the system prompt. So for the tool n8n, this is where you're putting the
402:49 webhook URL from n8n. That's where you're putting that, right
402:52 here. As you can see: webhook-test/n8n. The method is going to be a POST
402:56 so we can send information from ElevenLabs to n8n. And we just named it n8n to
403:01 make the system prompt make more sense, just for me when I was
403:05 creating this. It makes sense to send something to the tool called n8n.
403:09 Anyways, as you can see, the description is: use this tool to take action upon
403:12 the user's request. And so we can pretty much just leave it as that. We don't
403:15 have any headers or authorization going in here, but we do need to send over
403:19 body parameters. Otherwise, if we didn't have this, nothing would be sent
403:23 over to n8n at all. So the description of the body parameters is: in a friendly
403:28 way, ask the user to provide the name of the recipient and what the email is
403:31 about, unless they already provided that information. So the LLM understands,
403:36 when it's conversing with the human, that it needs to extract the name of the
403:41 person to send an email to, and then what the email is actually going to say.
403:44 As you can see, we didn't have to put in "what's the subject of the email,"
403:49 because our AI agent in n8n can use its brain to figure out a subject. So we
403:53 have two parameters that we're sending over. If you remember, when we were in
403:56 here, we got the two parameters from the body right here, which were to and
404:00 email content. And that's what we set up here, as you can see. The first one
404:04 was a string, called to: this is who the email is going to. And then
404:07 the second one was also a string, called email content: this is what
404:12 the email is about. So we hit save changes. Now we have a tool that gets
404:16 called whenever the voice agent realizes it has information to send over
404:20 to n8n to send an email, and it has the right parameters to do so. So now we
404:23 have this voice agent pretty much set up here with its tools and its system
404:26 prompt. We're going to test the agent. And remember earlier I talked about the
404:30 fact that right now we're testing. So we have to have the webhook actively
404:33 listening. Once this is in production, it's going to be listening all the time.
404:36 But right now we have to manually hit test event before we call it. Otherwise,
404:39 it's not going to work. So I'm going to talk to Archer, ask him to send an
404:42 email. It's going to go through, and then we're going to ask him to send another
404:45 email where we don't hit this webhook test event, and then we'll see what he
404:49 says. He's probably just going to say he couldn't access the database or he
404:52 couldn't find the contact information. Something like that. Hey there, I'm
404:56 Archer. What can I do for you? Hey Archer, can you help me send an email?
405:00 Of course. I'd love to help. Who's the email for and what's it about? Tell me
405:03 the details and I'll whip it up in a jiffy. Sweet. Um, it's going to be for
405:09 Phil. Okay, so the email is for Phil. What's it about? I'm ready to get this
405:13 email sent out. Yeah, just let him know that I'm caught in traffic and I'm going
405:17 to be late to the meeting. Got it. So, you want to let Phil know
405:21 you're running late because of traffic. I'll get that email sent out right away.
405:26 All done. I just sent the email to Phil letting him know you're running late.
405:30 Anything else I can help you with? Yeah, I'm actually going to need to send
405:34 another one. No problem. Who's this one for and what's it about? I'm ready to
405:38 send email number two. Yeah. Can you let Phil know that um I actually just drove
405:43 my car off a bridge? Oh my goodness. I can't send an email like that. Maybe you should call Phil
405:49 instead. Is there someone else you want... Um, no. Can you send an email to Phil?
405:58 I'm so sorry, but I can't send an email like that. It sounds like you're in a
406:02 difficult situation. Is there someone else you can reach out to for help?
406:07 Okay, didn't expect that. So, I guess the model doesn't want to say something
406:11 like that. We'll try again. Um, workflow is not active. So, let's see. Hey there,
406:16 I'm Archer. What can I do for you? Hey, Archer, can you send an email?
406:21 Of course, I'd be happy to help you with that. Who is the email for and what is
406:25 it about? I'll get it sent out in a jiffy. Just wanted to touch base with Phil
406:32 about um the meeting later today. Okay. So, you want to send an email to
406:36 Phil about the meeting later today. Got it. I'll get this done right away.
406:39 Anything else I can help with today? Oops. Looks like there was an error
406:43 sending that email. It's possible there's a typo in the email address or
406:47 something is wrong on my end. Let me try again. Can you please double check the
406:50 email address? Okay. Anyway, so that's how that works. As you can see, the
406:54 second time the webhook wasn't active, so it wasn't able to send that; the
407:03 n8n call didn't pass through. So, that's going to be it for this one. I hope
407:05 that everything made sense. It's just really cool how easy it is to
407:09 switch out an input and have the agent function the same. Obviously, a
407:12 few things would change as you start to add more tools. Your user message would
407:15 have to be tweaked a little bit. You'd have to tweak the actual system prompts
407:18 a little bit. But, as you can see, in this one we kept it very, very simple.
407:21 Basically, we just told it its role and gave it the two tools and how to use
407:25 them. And as you can see, it was pretty seamless as far as being able to
407:29 have the agent fill in things and make the emails easily. Today, we're going to be talking
407:36 about how you can build anything with Lovable and n8n. So we're going to be
407:40 doing a live build of spinning up a web page with Lovable and then also
407:44 building the backend in n8n. But first of all, I wanted to go over, high level,
407:47 what this architecture looks like. So right here is Lovable. This is what
407:50 we're starting off with, and this is where we're going to be creating the
407:53 interface that the user is interacting with. What we do here is we type in a
407:57 prompt in natural language, and Lovable basically spins up that app in seconds.
408:01 And then we're able to talk back and forth and have it make minor fixes for
408:05 us. So what we can do is, when the user inputs information into our Lovable
408:09 website, it can send that data to n8n. The n8n workflow that we're going to
408:13 set up can use an agent to take action in something like Gmail or Slack or
408:17 Airtable or QuickBooks, and then n8n can send the data back to Lovable and
408:21 display it to the user. And this is really just the tip of the iceberg; there
408:25 are also some really cool integrations with Lovable and Supabase or Stripe or
408:29 Resend, so there are a lot of ways you can really use Lovable to develop a
408:32 full web app. And while we're talking high level, I just wanted to show you an
408:35 example of what this n8n flow could look like, where we're capturing the
408:40 information the user is sending from Lovable via webhook. We're feeding that
408:43 to a large language model to create some sort of content for us, and then we're sending that back, and it
408:46 will be displayed in the Lovable web app. So let's head over to Lovable and
408:49 get started. So if you've never used Lovable before, don't worry. I'm going
408:52 to show you guys how simple it is. You can also sign up using the link in the
408:56 description for double the credits. Okay, so this is all I'm going to start
408:58 with just to show you guys how simple this is. I said, "Help me create a web
409:02 app called Get Me Out of This, where a user can submit a problem they're
409:05 having." Then I said to use this image as design inspiration. So, I Googled
409:09 landing page design inspiration, and I'm just going to take a quick screenshot of
409:12 this landing page, copy that, and then paste it into Lovable. And then we'll
409:16 fire this off. Cool. So, I just sent that off. And on the right hand side,
409:19 we're seeing it's going to spin up a preview. So, this is where we'll see the
409:22 actual web app that it's created and get to interact with it. Right now, it's
409:25 going to come through and start creating some code. And then on the left-hand
409:27 side is where we're going to have that back-and-forth chat window to talk to
409:31 Lovable in order to make changes for us. So right now, as you can see, it's going
409:34 to be creating some of this code. We don't need to worry about this. Let's go
409:37 into n8n real quick and get this workflow ready to send data to. Okay, so
409:42 here we are in n8n. If you haven't used this app before, there'll be a link
409:44 for it down in the description. It's basically a workflow
409:47 builder, and you can get a free trial just to get started. So you can see I
409:50 have different workflows here. We're going to come in and create a new one.
409:53 And what I'm going to do is add a first step that's basically
409:56 saying: okay, what actually triggers this workflow? So I'm going to grab a
410:00 webhook. A webhook looks like this, and it is
410:04 basically just a trigger that's going to be actively listening for something to
410:08 send data to it. Data is received at this URL. And right now
410:11 there's a test URL and there's a production URL. Don't worry about that.
410:15 We're going to click on this URL to copy it to our clipboard. And basically we
410:18 can give this to Lovable and say, "Okay, whenever a user puts in a problem
410:22 they're having, you're going to send the data to this webhook." Cool. So hopping
410:26 over to Lovable. As you can see, it's still coding away and looks like it's
410:29 finishing up right now. And it's saying, "I've created a modern problem-solving
410:32 web app with a hero section, submission form, and feature section in blue
410:36 color." Um, looks like there's an error. So all we have to do is click on try to
410:39 fix, and it should go back in there and continue to spin up some more code.
410:42 Okay, so now it looks like it finished that up. And as you can see, we have the
410:46 website filled out. And it created all of this with just an image as
410:50 inspiration as well as just me telling it one sentence, help me create a web
410:53 app called get me out of this where a user can submit a problem they're
410:55 having. So hopefully this should already open your eyes to how powerful this is.
410:59 But let's say for the sake of this demo, we don't want all this. We just kind of
411:02 want one simple landing page where they send a problem in. So all I'd have to do
411:05 is, on this left-hand side, scroll down here and say: make this page more simple.
411:12 We only need one field which is what problem can we help with? So we'll just
411:21 send that off. Very simple query as if we were just kind of talking to a
411:24 developer who was building this website for us and we'll see it modify the code
411:27 and then we'll see what happens. So down here you can see it's modifying the code
411:31 and now we'll see what happens. It's just one interface right here. So it's
411:34 created like a title. It has these different buttons and we could easily
411:36 say like, okay, when someone clicks on the home button, take them here. Or when
411:40 someone clicks on the contact button, take them here. And so there's all this
411:42 different stuff we can do, but for the sake of this video, we're just going to
411:45 be worrying about this interface right here. And just to give it some more
411:49 personality, what we could do is add in a logo. So I can go to Google and search
411:54 for a thumbs up logo PNG. And then I can say add this logo in the top left. So
412:01 I'll just paste in that image. We'll fire this off to Lovable. And it should
412:05 put that either right up here or right up here. We'll see what it does. But
412:08 either way, if it's not where we like it, we can just tell it where to put it.
412:11 Cool. So, as you can see, now we have that logo right up there. And let's say
412:14 we didn't like this, all we'd have to do is come up to a previous version, hit on
412:18 these three dots, and hit restore. And then it would just basically remove
412:21 those changes it just made. Okay. So, let's test out the functionality over
412:24 here. Let's say a problem is we want to get out of... Oh, looks like the font is
412:28 coming through white, so we need to fix that. And boom, we just told it to change the
412:43 text to black and now it's black and we can see it. So anyways, I want to say
412:49 get me out of a boring meeting. So we'll hit get me out of this and we'll see
412:53 what happens. It says submitting and nothing really happens. Even though it
412:56 told us, you know, we'll get back to you soon. Nothing really happened. So, what
413:00 we want to do is we want to make sure that it knows when we hit this button,
413:03 it's going to send that data to our n8n webhook. So, we've already copied
413:06 that webhook to our clipboard, but I'm just going to go back into n8n. We
413:09 have the webhook. We'll click on this right here, then go back into Lovable,
413:13 basically just saying: when I click get me out of this, so this button right
413:16 here, send the data to this webhook. And also, what we want to do is say: as a
413:22 POST request, because it's going to be sending data. So, we're going to send
413:25 that off. And while it's making that change to the code, real quick, we want
413:29 to go into n8n and make sure that our method for this webhook is indeed
413:32 POST. I don't want to dive too much into what that means, but Lovable
413:37 is going to be sending a POST request to our webhook, meaning there's going to
413:40 be stuff within this webhook like body parameters and different things. And so
413:43 if this wasn't configured as a POST request, it might not work. You'll
413:47 see once we actually get the data and catch it in n8n. But anyways,
413:51 now when the user clicks on get me out of this, the form will send the problem
413:55 description to your webhook via a POST request. So let's test it out. We're
413:58 going to say I forgot to prepare a brief for my meeting. We're going to go back
414:01 into n8n real quick and make sure we hit listen for test event. So now our
414:05 webhook is actively listening. Back in Lovable, we'll click get me out of this
414:08 and we will see what happens. We can come into n8n, and we can now see we
414:12 got this information. So here's the body I was talking about, where we're
414:15 capturing a problem, which is: I forgot to prepare a brief for my meeting. So, we
414:19 now know that Lovable is able to send data to n8n. And now it's on us to
414:24 configure what we want to happen in n8n so we can send the data back to Lovable.
414:27 Cool. So, what I'm going to do is I'm going to click on the plus that's coming
414:30 off of the webhook, and I'm going to grab an AI agent. What this is going to
414:34 do is allow us to connect a chat model, and then the agent's going to
414:38 be able to take this problem and produce a response. And I'm going to walk
414:41 through this step by step, but if you don't really want to worry about this
414:43 and you just want to worry about the Lovable side of things, you can download
414:47 the finished template from my Free School community. I'll link that down in
414:50 the description. That way, you can just plug in this workflow and just give
414:53 Lovable your n8n webhook and you'll be set up. But anyways, if you join the
414:56 Free School community, you'll click on YouTube resources, click on the post
415:00 associated with this video, and you'll be able to download the JSON right here.
415:03 And then when you have that JSON, you can come into n8n, open up a new
415:07 workflow, click these three dots on the top, and then click import from file.
415:10 And when you open that up, it'll just have the finished workflow for you right
415:13 here. But anyways, what I'm going to do is click into the AI agent. And the
415:17 first thing is we have to configure what information the agent is going to
415:21 actually read. So first of all, we're going to set that up as the user prompt.
415:24 We're going to change this from connected chat trigger node to define
415:28 below, because we don't have a connected chat trigger node; we're using a
415:31 webhook, as we all know. So we're going to click on define below, and we are
415:34 basically just going to scroll down within the webhook node to where the
415:38 actual data we want to look at is, which is just the problem that was submitted
415:42 by the user. So down here in the body we have problem, and we can just drag that
415:46 right in there, and that's basically all we have to do. And maybe we just want
415:48 to define for the agent what it's looking at, so we'll just write the problem
415:53 and then put a colon. So now you can see in the result panel this is what
415:57 the agent will be looking at.
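The user prompt field then just holds an n8n expression pointing into the webhook data, roughly like this, assuming the field is named problem as Lovable set it up here:

```
The problem: {{ $json.body.problem }}
```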
416:00 Next, we need to give it a system message to understand what it's doing. So,
416:04 I'm going to click on add option, open up a system message, and I am going to
416:07 basically tell it what to do. So, here's a system message that I came up with
416:11 just for a demo: you're an AI excuse generator. Your job is to create clever, creative, and context-appropriate
416:15 excuses that someone could use to avoid or get out of a situation. And then we
416:19 told it to only return the excuse and also to add a touch of humor to the
416:22 excuses. So, now before we can actually run this to see how it's working, we
416:26 need to connect its brain, which is going to be an AI chat model. So, what
416:28 I'm going to do is I'm going to click on this plus under chat model. For this
416:33 demo, we'll do an OpenAI chat model. And you have to connect a credential if you
416:36 haven't done so already. So, you would basically come into here, click create
416:39 new credential, and you would just have to insert your API key. So, you can just
416:44 Google OpenAI API. You'll click on API platform. You can log in, and once
416:47 you're logged in, you just have to go to your dashboard, and then on the left,
416:50 you'll have an API key section. All you'll have to do is create a new key.
416:56 We can call this one test lovable. And then when you create that, you just
416:59 copy the value, go back into n8n, and paste it right here. And then when you
417:03 hit save, you are now connected to OpenAI's API. And we can finally run
417:07 this agent real quick. If I come in here and hit test step, we will see that
417:11 it's going to create an excuse for I forgot to prepare a brief for my meeting,
417:15 which is: sorry, I was too busy trying to bond with my coffee machine. Turns out it
417:19 doesn't have a prepare briefs setting. So basically what we have is we're
417:22 capturing the problem that a user had. We're using an AI agent to create an
417:26 excuse. And then we need to send the data back to Lovable. So all we have to
417:30 do here is hit the plus coming off of the agent, and we're going to add a
417:34 Respond to Webhook node. We're just going to respond with the first incoming
417:37 item, which is going to be the actual response from the agent. But all we have
417:41 to do also to configure this is: back in the webhook node, there's a section
417:45 right here that says respond. Instead of responding immediately, we want to
417:49 respond using the Respond to Webhook node. So now it will be looking over
417:52 here, and that's how it's going to send data back to Lovable. So this is pretty
417:56 much configured the way we need it, but we have to configure Lovable now to wait
418:00 for this response. Okay. So what I'm telling Lovable is when the data gets
418:04 sent to the webhook, we wait for the response from the webhook, then output
418:08 that in a field that says here is your excuse. So we'll send this off to
418:12 Lovable and see what it comes up with. Okay, so now it said that I've added a
418:14 new section that displays here is your excuse along with the response message
418:18 from the webhook when it's received. So let's test it out. First, I'm going to
418:21 go back into n8n and hit test workflow, so the webhook is
418:24 now listening for us. Then we'll come into our Lovable web app and say I want
418:30 to skip a boring meeting. We'll hit get me out of this. So now that data
418:34 should be captured in n8n. It's running. And now the output is: I just
418:38 realized my pet goldfish has a life-altering decision to make regarding his
418:41 tank decorations, and I simply cannot miss this important family meeting. So
418:45 it doesn't look great, but it worked. And if we go into n8n, we can see that
418:48 this run did indeed finish up. And the output over here was I just realized my
418:52 pet goldfish has a life-altering decision, blah blah blah. So basically what's
418:56 happening is the webhook is returning JSON, which is coming through in a field
418:59 called output, and then we have our actual response, which is exactly what
419:03 Lovable sent through. So it's not very pretty, and we can basically just tell
419:06 it to clean that up. So what I just did is I said: only return the output
419:11 field's value from the webhook response, not the raw JSON. We just wanted to
419:15 output this right here, which is the actual excuse.
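Putting the two halves together, the front-end handler we're describing to Lovable amounts to something like this. It's a hypothetical sketch: Lovable generates its own version of it, and the URL is a placeholder:

```javascript
// Send the user's problem to the n8n webhook, wait for the reply,
// and show only the excuse text, not the raw JSON wrapper.
async function getExcuse(problem) {
  const res = await fetch(
    "https://YOUR-INSTANCE.app.n8n.cloud/webhook-test/my-path", // placeholder
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ problem }),
    }
  );
  const data = await res.json();
  return data.output; // the agent's excuse comes back in the output field
}
```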
419:18 Some of you guys may not even have had this problem pop up. I did a demo of
419:21 this earlier just for testing, and I basically walked through these same steps
419:26 and this wasn't happening. But, you know, sometimes it happens. Anyways, now it
419:29 says the form only displays the value from the output field. So let's give it
419:32 another try. Back in n8n we're going to hit test workflow, so it's listening
419:35 for us. In Lovable, we're going to give it a problem. I'm saying: I overslept
419:38 and I'm running late. I'm going to click get me out of this. And we'll see the
419:41 workflow just finished up. And now we have the response in a clean format,
419:44 which is: I accidentally hit the snooze button until it filed for a restraining
419:48 order against me for harassment. Okay. So now that we know that the
419:51 functionality within n8n is working and it's sending data back, we want to
419:55 customize our actual interface a little bit. So the first thing I want to do,
419:58 just for fun, is create a level system. So every time someone submits a
420:01 problem, they're going to get a point. And if they get five points, they'll
420:04 level up. If they get enough total points, they'll level up again. Okay. So I
420:08 just sent off: create a dynamic level system. Every time a user submits a
420:12 problem, they get a point. Everyone starts at level one, and after five
420:15 points, they reach level two. Then after 50 more points, they reach level
420:18 three. And obviously, we'd have to bake in the rest of the levels and how many
420:21 points you need. But this is just to show you that this is going to increase
420:24 every time that we submit a problem. And also, you'd want to have some sort of
420:28 element where people actually log in and get authenticated. And you can store
420:31 that data in Supabase or in Firebase, whatever it is, so that everyone's
420:36 levels are being saved and it's specific to that person.
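As described in that prompt, the level logic is just point thresholds. A tiny sketch using the prompt's numbers, five points to reach level two and fifty more for level three:

```javascript
// Thresholds from the prompt: 5 points for level 2, 50 more (55 total) for level 3.
function levelFor(points) {
  if (points >= 55) return 3;
  if (points >= 5) return 2;
  return 1;
}
```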
420:40 Okay, so it looks like it just created a level system. It's reloading our
420:43 preview so we can see what that looks like now. Looks like there may have been
420:47 an error, but now, as you can see right here, we have a level system. So,
420:50 let's give it another try. I'm going to go into n8n, and we're going to hit
420:53 test workflow. So, it's listening once again, and we're going to describe a
420:56 problem. I'm saying: my boss is mean, I don't want to talk to him. We're going
420:59 to hit submit. The n8n workflow is running right now on the back end. And we
421:02 just got a message back, which is: I'd love to chat, but I've got a hot date with my couch, binge-watching the entire
421:05 season of Awkward Bosses. And you can see that we got a point. So, four more
421:09 points to unlock level two. But before we continue to throw more prompts so
421:12 that we get up to level two, let's add in one more cool functionality. Okay, so I'm just firing
421:18 off this message that says: add a drop-down after "what problem can we help
421:22 with" that gives the user the option to pick a tone for the response. So the
421:26 options can be realistic, funny, ridiculous, or outrageous. And this data, of
421:31 course, should be passed along in that webhook to n8n, because then we can
421:35 tell the agent: okay, here's the problem and here's the tone of excuse the
421:39 user is requesting, and now it can craft a response for us. So it looks like
421:44 it's making that change right now. So now we can see our drop-down menu that
421:47 has realistic, funny, ridiculous, and outrageous. As you can see, before you
421:50 click on it, it's maybe not super clear that this is actually a drop-down. So
421:53 let's make that more clear. What I'm going to do is take a
421:56 screenshot of this section right here. I'm going to copy this and I'm just
422:00 going to paste it in here and say: make it more clear that this is a drop-down
422:06 selection and we'll see what it does here. Okay, perfect. So, it just added a
422:10 little arrow as well as a placeholder text. So, that's way more clear. And now
422:13 what we want to do is test this out. Okay, so now to test this out, we're
422:16 going to hit test workflow. But just keep in mind that this agent isn't yet
422:20 configured to also look at the tone. So this tone won't be accounted for yet.
422:23 But what we're going to do is we have I overslept and the response is going to
422:28 be funny. We'll hit generate, or sorry, 'Get me out of this.' So we have a
422:32 response and our level went up. We got another point. But if we go into n8n, we
422:36 can see that it didn't actually account for the tone yet. So all we have to do
422:40 is, in the actual user message, open this up and also add a tone. We can scroll
422:43 all the way down here and grab the tone from the web hook's body (in n8n, that's
422:48 an expression along the lines of {{ $json.body.tone }}). And now it's getting the problem as well as the tone.
422:51 And now in the system prompt, which is basically just defining to the agent its
422:55 role. We have to tell it how to account for different tones. Okay, so here's
422:58 what I came up with. I gave it some more instructions and I said, "You're going
423:02 to receive a problem as well as a tone. And here are the possible tones, which
423:05 are realistic, funny, ridiculous, and outrageous." And I kind of said what
423:09 that means. And then I said, "Your excuse should be one to three sentences
423:12 long, and match the selected tone." So that's all we're going to do. We're
423:15 going to hit save. Okay. So now that it's looking at everything, we're going
423:18 to hit test workflow. The web hook's listening. We'll come back into here and
423:20 we're going to submit. I broke my friend's iPhone and the response tone
423:23 should be outrageous. So, we're going to send that off. And it's loading because
423:27 our n8n workflow is triggering. And now we just got it. We also got a message
423:31 that says we earned a point. So, right here, we now only need two more for
423:34 level two. But the excuse is I was trying to summon a unicorn with my
423:38 telekinetic powers and accidentally transformed your iPhone into a rogue
423:41 toaster that launched itself off the counter. I swear it was just trying to
423:45 toast a bagel. Okay, so obviously that's pretty outrageous and that's how we know
423:48 it's working. So, I'm sure you guys are wondering what you would do if you
423:51 didn't want to come in here every single time and hit, you know, test
423:55 workflow. What you would do is you'd switch this to an active
423:59 workflow. Now, basically, we're not going to see the executions live anymore
424:03 with all these green outlines. But what's happening now is it's using the
424:06 production URL. So, we're going to have to copy the production URL, come back
424:11 into Lovable, and basically say, 'I switched the web hook to this.' We'll paste
424:17 that in there, and it should just change the URL. The logic should all be
424:21 exactly the same because we've already built that into this app; we're just
424:25 switching the web hook. So now we don't have to click test workflow every
424:28 time in n8n.
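(For reference, the only thing that changes between test and production mode is the URL prefix n8n gives you; a sketch with placeholder hosts and paths:)

    // n8n exposes two URLs per web hook trigger (paths here are placeholders).
    const TEST_URL = "https://your-n8n-host/webhook-test/excuse"; // only works while "Test workflow" is listening
    const PROD_URL = "https://your-n8n-host/webhook/excuse"; // works whenever the workflow is active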
424:32 And super excited. We have two more problems to submit and then we'll be level two. So now it says
424:36 the web hook URL has been updated. So let's test it out. As you can see in
424:39 here, we have an active workflow. We're not hitting test workflow. We're going
424:42 to come in here and submit a new problem. So we are going to say um I
424:51 want to take four weeks off work, but my boss won't let me. We are going to make
424:56 the response tone. Let's just do a realistic one. And we'll click get me
425:00 out of this. It's now calling that workflow that's active and it's
425:02 listening. So we got a point. We got our response which is I've been dealing with
425:06 some unforeseen family matters that need my attention. I believe taking 4 weeks
425:09 off will help me address them properly. I plan to use this time to
425:12 ensure everything is in order so I can return more focused and productive. I
425:15 would definitely say that that's realistic. What we can do is come into
425:19 n8n. We can click up here on our executions and we can see what just
425:22 happened. So this is our most recent execution and if we click into here it
425:26 should have been getting the problem which was I want to take four weeks off
425:30 work and the tone which was realistic. Cool. So, now that we know that
425:33 our active new web hook is working, let's just do one more query and let's
425:38 earn our level two status. I'm also curious to see, you know, we haven't
425:41 worked in any logic of what happens when you hit level two. Maybe there's some
425:44 confetti. Maybe it's just a little notification. We're about to find out.
425:47 Okay, so I said I got invited on a camping trip, but I hate nature. We're
425:50 going to go with ridiculous and we're going to send this off. See what we get
425:55 and see what level two looks like. Okay, so nothing crazy. We could have worked
425:58 in like, hey, you know, make some confetti pop up. All we do is we get
426:01 promoted to level two up here. But, you know, as you can see, the bar was
426:04 dynamic. It moved and it did promote us to level two. But the excuse is, I'd
426:08 love to join, but unfortunately, I just installed a new home system that detects
426:11 the presence of grass, trees, and anything remotely outdoorsy. If I go
426:15 camping, my house might launch an automated rescue mission to drag me back
426:19 indoors. So, that's pretty ridiculous. And also, by the way, up in the preview,
426:22 you can make it mobile. So, we can see what this would look like on mobile.
426:24 Obviously, it's not completely optimized yet, so we'd have to work on that. But
426:28 that's the ability to do both desktop and mobile. And then when you're finally
426:31 good with your app, up in the top right, we can hit publish, which is just going
426:34 to show us that we can connect it to a custom domain or we can publish it at
426:38 this domain that is made for us right here. Anyways, that is going to be it
426:42 for today's video. This is really just the tip of the iceberg. You know,
426:45 n8n already has basically unlimited capabilities. But when you connect that
426:49 to a custom front end, you don't need any sort of coding
426:52 knowledge. As you can see, all of these prompts that I used in here were just me
426:56 talking to it as if I was talking to a developer. And it's really, really cool
426:59 how quick we spun this up. All right, hopefully you guys thought that was
427:02 cool. I think that ElevenLabs is awesome, and it's cool to integrate agents with
427:06 that, as well as Lovable or Bolt or these other vibe coding apps that let
427:09 you build things. That would have taken so much longer and you would have kind
427:13 of had to know how to code. So, really cool. So, we're nearing the end of the
427:16 course, but it would be pretty shameful if I didn't at least cover what MCP
427:20 servers are because they're only going to get more commonly used as we evolve
427:25 through the space. So, we're going to talk about MCP servers. We're going to
427:28 break it down as simple as possible. And then I'm going to do a live setup where
427:31 I'm self-hosting n8n in front of you guys step by step. And then I'm going to
427:36 connect to a community node in n8n that lets us access some MCP servers. So,
427:40 let's get into it. Okay, so Model Context Protocol. I swear the past week
427:45 it's been the only thing I've seen in YouTube comments, YouTube videos,
427:49 Twitter, LinkedIn, it's just all over. And I don't know about you guys, but
427:52 when I first started reading about this kind of stuff, I was kind of intimidated
427:56 by it. I didn't completely understand what was going on. It was very techy
428:00 and, you know, kind of abstract. I also felt like I was getting different
428:04 information based on every source. So, we're going to break it down as simple
428:08 as possible how it makes AI agents more intelligent. Okay, so we're going to
428:11 start with the basics here. Let's just pretend we're going back to ChatGPT,
428:15 which is, you know, a large language model. What we have is an input on the
428:19 left. We're able to ask it a question. You know, help me write this email, tell
428:22 me a joke, whatever it is. We feed in an input. The LLM thinks about it and
428:26 provides some sort of answer to us as an output. And that's really all that
428:30 happens. The next evolution was when we started to give LLMs tools, and that's
428:36 when we got AI agents, because now we could ask one to do something like write
428:39 an email, but rather than just writing the email and giving it back to us, it
428:42 could call a tool to actually send that email and then tell us, there
428:46 you go, the job's done. And so this really started to expand the
428:49 capabilities of these LLMs, because they could actually take action on our behalf
428:53 rather than just sort of assisting us and getting us 70% of the way there. And
428:57 so before we start talking about MCP servers and how they enhance our agents'
429:01 abilities, we need to talk about how these tools work and sort of the
429:05 limitations of them.
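(To make that concrete, here's a minimal sketch of giving an LLM a tool, using OpenAI-style function calling; the send_email tool and its schema are illustrative, not from the video:)

    import OpenAI from "openai";

    const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Email John and tell him I'm running late." }],
      tools: [
        {
          type: "function",
          function: {
            name: "send_email", // illustrative tool
            description: "Send an email on the user's behalf",
            parameters: {
              type: "object",
              properties: {
                to: { type: "string" },
                subject: { type: "string" },
                body: { type: "string" },
              },
              required: ["to", "subject", "body"],
            },
          },
        },
      ],
    });

    // Instead of plain text, the model can return a structured tool call,
    // and our code is what actually executes it.
    console.log(completion.choices[0].message.tool_calls);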
429:08 Okay, so sticking with that email example, let's pretend this is an email agent that's helping us take action in email. What
429:12 it's going to do is each tool has a very specific function. So this first tool
429:15 over here, you can see this one is going to label emails. The second tool in the
429:18 middle is going to get emails and then this third one on the right is going to
429:21 send emails. So, if you watched my ultimate assistant video (if you haven't,
429:24 I'll tag it right up here), you saw we had a main agent and
429:27 then it was calling on a separate agent that was an email agent. And as you can
429:31 see here was all of its different tools and each one had one very specific
429:35 function that it could do. And it was basically just up to the email agent
429:38 right here to decide which one to use based on the incoming query. And so the
429:42 reason that these tools aren't super flexible is because within each of these
429:46 configurations, we basically have to hardcode in what is the operation I'm
429:50 doing here and what's the resource. And then we can feed in some dynamic things
429:54 like different message IDs or label IDs. Over here, you know, the operation is
429:58 get and the resource is message. So that won't change. And then over here the
430:02 operation is that we're sending a message. And so this was really cool because agents were
430:06 able to use their brains whatever large language model we had plugged into them
430:09 to understand which tool do I need to use. And it still works pretty well. But
430:12 when it comes to being able to scale this up and you want to interact with
430:15 multiple different things, not just Gmail and Google Calendar, you also want
430:19 to interact with a CRM and different databases, that's where it starts to get
430:24 a little confusing. So now we start to interact with something called an MCP
430:28 server. And it's basically just going to be a layer between your agent and
430:32 between the tools that you want to hit, which would be right here. And so when
430:35 the agent sends a request to the specific MCP server, in this case, let's
430:39 pretend it's Notion, it's going to get more information back than just, hey, what
430:42 tools do I have and what's the functionality here? It's also going to
430:45 get information about like what are the resources there, what are the schemas
430:48 there, what are the prompts there, and it uses all of that to understand how to
430:53 actually take the action that we asked for back here in the whole input that
430:56 triggered the workflow. When it comes to different services talking to each
431:01 other, so in this case n8n and Notion, there's been, you know, a standard in the
431:04 way that we send data across and we get data back, which has been REST APIs,
431:08 and these standards are really important because we have to understand how can we
431:12 actually format our data and send it over and know that it's going to be
431:16 received in the way that we intend it to be. And so that's exactly what was going
431:19 on back up here where every time that we wanted to interact with a specific tool
431:24 we were hitting a specific endpoint. So the endpoint for labeling emails was
431:27 different from the endpoint for sending emails. And besides just those endpoints
431:31 or functions being different, there were also different things that we had to
431:35 configure within each tool call. So over here you can see what we had to do was
431:39 in order to send an email, we have to give it who it's going to, what the
431:43 subject is, the email type, and the message, which is different from the
431:45 information we need to send to this tool, which is what's the message ID you
431:49 want to label, and what is the label name or ID to give to that message. By
431:54 going through the MCP server, it's basically going to be a universal
431:57 translator that takes the information from the LLM and it enriches that with
432:01 all of the information that we need in order to hit the right tool with the
432:04 right schema, fill in the right parameters, access the right resources,
432:08 all that kind of stuff. The reason I put Notion here as an example of an MCP
432:13 server is because within your Notion, you'll have multiple different databases
432:16 and within those databases, you're going to have tons of different columns and
432:20 then all of those, you know, are going to have different pages. So, being able
432:23 to have the MCP server translate back to your agent, here are all of the
432:27 databases you have. Here is the schema or the different fields or columns that
432:32 are in each of your databases. Um, and also here are the actions you can take.
432:35 Now, using that information, what do you want to do? Real quick, hopping back to
432:39 the example of the ultimate assistant. What we have up here is the main agent
432:43 and then it had four child workflows, child agents that it could hit that had
432:48 specializations in certain areas. So the Gmail agent, which we talked about right
432:51 down here, the Google Calendar agent, the contact agent, which was Airtable,
432:56 and then the content creator agent. So all that this agent had to do was
432:59 understand, okay, based on what's coming in, based on the request from the human,
433:03 which of these different tools do I actually access? And we can honestly
433:07 kind of think of these as MCP servers. Because once the query gets passed off
433:11 to the Gmail agent, the Gmail agent down here is the one that understands here
433:14 are the tools I have, here are the different like, you know, parameters I
433:17 need to fill out. I'm going to take care of it and then we're going to respond
433:21 back to the main agent. This system made things a little more dynamic and
433:24 flexible because then we didn't have to have the ultimate assistant hooked up to
433:28 like 40 different tools, you know, all the combinations of all of these. And it
433:31 made its job a little easier by just delegating the work to different
433:35 MCP servers. And obviously, these aren't MCP servers, but it's kind of the same
433:39 concept. The difference here is that let's say all of a sudden Gmail adds
433:42 more functionality. We would have to come in here and add more tools in this
433:46 case. But what's going on with the MCP servers is whatever MCP server that
433:51 you're accessing, it's on them to continuously keep that server updated so
433:56 that people can always access it and do what they need to do. By this point, it
433:59 should be starting to click, but maybe it's not 100% clear. So, we're going to
434:03 look at an actual example of like what this really looks like in action. But
434:06 before we do, just want to cover one thing, which is, you know, the agent
434:10 sending over a request to a server. the server translates it in order to get all
434:14 this information and get the tool calls, all that kind of stuff. Um, and what's
434:18 going on here is called MCP protocol. So, we have the client, which is just
434:21 the interface that we're using. In this case, it's n8n. It could be Claude
434:25 or your, you know, coding window, whatever it is. And then we're sending
434:29 over something to the MCP server, and that's called MCP protocol. Also, one
434:32 thing to keep in mind here that I'm not going to dive into, but if you were to
434:35 create your own MCP server and it had access to all of your own resources,
434:39 your schemas, your tools, all that kind of stuff, you just got to be careful
434:42 there. There's some security concerns because if anyone was getting into that
434:45 server, they could basically ask for anything back. So, that's something that
434:49 was brought up in my paid community. We were having a great discussion about MCP
434:53 and stuff like that. So, just keep it in mind. So, let's look more at an example
434:57 in n8n once again. So coming down here, let's pretend that we have this
435:01 beautiful Airtable agent that we built out in n8n. As you can see, it has these
435:06 um, seven different tools, which are get record, update record, get bases, create
435:10 record, search record, delete record, and get bases schema. The reason we
435:14 needed all of these different tools is because, as you know, they each have
435:17 different operations inside of them, and then they each have different parameters
435:20 to be filled out. So the agent takes care of all of that. But this could be a
435:24 lot leaner of a system if we were able to access Airtable's MCP server, as
435:28 you can see what we're doing right here, because this is able to list all the
435:32 tools that we have available in Airtable. So here you can see I asked the
435:36 Airtable agent, what actions do I have? It then listed these 13 different
435:39 actions that we have which are actually more than the seven we had built out
435:42 here. And we can see we have list records, search records, and then 11
435:47 more. And this is actually just the agent telling us, the human, what we have
435:51 access to. But what the actual agent would look at in order to use the tools
435:54 is a list where, for each one, it's here's the name, here's the description
435:57 of when to use this tool, and then here's the schema of what you need to
436:01 send over to this tool. Because when we're listing records, we have to send
436:04 over different information like the base ID, the table ID, max records, and how
436:08 we want to filter, which is different than if we want to list tables, because
436:12 there we just need a base ID and a detail level.
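(In other words, each tool the MCP server lists comes back as roughly this shape: a name, a description of when to use it, and a JSON schema for its input. The Airtable-flavored field names below are illustrative:)

    // Roughly the shape of a listed MCP tool (MCP calls the schema "inputSchema").
    interface McpToolDescriptor {
      name: string;
      description: string;
      inputSchema: Record<string, unknown>; // JSON Schema for the tool's arguments
    }

    // Illustrative example for a "list records" style tool.
    const listRecords: McpToolDescriptor = {
      name: "list_records",
      description: "List records from a table in a specific base",
      inputSchema: {
        type: "object",
        properties: {
          baseId: { type: "string" },
          tableId: { type: "string" },
          maxRecords: { type: "number" },
          filterByFormula: { type: "string" },
        },
        required: ["baseId", "tableId"],
      },
    };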
436:17 So all of this information coming back from the MCP server tells the agent how
436:19 it needs to fill out all of these parameters that we were talking about earlier,
436:22 where for sending an email you have different things to fill out than for labeling emails. So once
436:27 the agent gets this information back from the MCP server, it's going to say
436:31 okay well I know that I need to use the search records tool because the user
436:35 asked me to search for records with the name Bob in it. So I have this schema
436:39 that I need to use and I'm going to use my Airtable execute tool in order
436:43 to do so. And basically what it's going to do is going to choose which tool it
436:46 needs based on the information it was fed previously. So in this case the
436:50 Airtable execute tool would search records and it would do it by filling in this
436:53 schema of information that we need to pass over to Airtable. So now I
436:57 hope you can see how basically what's going on in this tool is all 13 tools
437:01 wrapped up into one and then what's going on here is just feeding all the
437:04 information we need in order to make the correct decision. So, this is the
437:07 workflow we were looking at for the demo. We're not going to dive into this
437:10 one because it's just a lot to look at. I just wanted to put a ton of MCP
437:14 servers in one agent and see that even if we had no system prompt, if it could
437:18 understand which one to use and then still understand how to call its tools.
437:20 So, I just thought that was a cool experiment. Obviously, what's next is
437:23 I'm going to try to build some sort of huge, you know, personal type assistant
437:27 with a ton of MCP servers. But for now, let's just kind of break it down as
437:30 simple as possible by looking at an individual MCP agent. And so, I don't
437:35 know why I called it an MCP agent. In this case, it's just kind of like a
437:39 Firecrawl agent with access to Firecrawl's MCP server. So yeah. Okay. So taking a
437:43 look at the Firecrawl agent, we're going to ask, what tools do you have? It's hitting
437:48 the Firecrawl actions right now in order to pull back all of the resources. And
437:51 as you can see, it's going to come back and say, hey, we have these, you know,
437:54 nine actions you can take. I don't know if it's nine, but it's going to be
437:57 something like that. It was nine. So as you can see, we have access to scrape,
438:01 map, crawl, batch scrape, all this other stuff. And what's really cool is that if
438:04 we click into here, we can see that we have a description for when to use each
438:07 tool and what you actually need to send over. So if we want to scrape, it's a
438:11 different schema to fill out than if we want to do something like extract. So
438:14 let's try actually asking it to do something. So let's say, extract the
438:23 rewards program name from chipotle.com. So we'll see what it does
438:27 here. Obviously, it's going to do the same thing, listing its actions, and
438:31 then it should be using the firecrawl extract method. So, we'll see what comes
438:36 back out of that tool. Okay, it went green. Hopefully, we actually got a
438:39 response. It's hitting it again. So, we'll see what happened. We'll dive into
438:43 the logs after this. Okay, so on the third run, it finally says the rewards
438:47 program is called Chipotle Rewards. So, let's take a look at run one. It used
438:51 firecrawl extract and it basically filled in the prompt extract the name of
438:54 the rewards program. It put it in as a string. We got a request failed with
438:58 status code 400. So, not sure what happened there. Run two, it did a
439:01 Firecrawl scrape. We also got a status code 400. And then run three, what it
439:06 did was a Firecrawl scrape once again, and it was able to scrape the entire
439:09 thing. And then it used its brain to figure out what the rewards program was
439:12 called. Taking a quick look at the Firecrawl documentation, we can see that
439:16 a 400 error code means that the parameters weren't filled out correctly.
439:20 So what happened here was basically that it just didn't fill out the schema
439:23 exactly correctly, like the prompt and everything to send over. And so
439:26 really, these kinds of issues just come down to a matter of, you know, making the
439:31 tool parameters more robust and adding more prompting within the actual
439:34 Firecrawl agent itself. But it's pretty cool that it was able to understand, okay,
439:37 this didn't work. Let me just try some other things. Okay, so real quick just
439:41 wanted to say, if you want to hop into n8n and test out these MCP nodes, you're
439:44 going to have to self-host your environment, because you need to use the
439:48 community nodes, and those are only available on self-hosted instances. Today we're going to be
439:54 going through the full setup of connecting MCP servers to n8n. I'm going
439:58 to walk through how you self-host your n8n. I'm going to walk through how you
440:01 can install the community node and then how to actually set up the community
440:04 node. The best part is you don't have to open up a terminal or a shell and type
440:08 in any install commands. All we have to do is connect to the servers through
440:11 n8n. So, if that sounds like something that interests you, let's dive into it.
440:14 Make sure you guys stick around for the end of this one because we're going to
440:17 talk about the current limitations of these MCP nodes. We're going to talk
440:20 about some problems you may face that no one else is talking about and really
440:23 what does it mean to actually be able to scale these agents with MCP servers.
440:26 Now, there are a ton of different platforms that you can use to host n8n.
440:29 The reason I'm using Elestio is because it's going to be really simple for
440:32 deploying and managing open-source software. Especially with something like
440:35 n8n, you can pretty much deploy it in just a few clicks and it's going to take
440:39 care of installation, configuration, security, backups, updates, all this
440:43 kind of stuff. So, there's no need for you to have that DevOps knowledge
440:47 because I certainly don't. So, that's why we're going with Elestio. It's also
440:51 SOC 2 and GDPR compliant. So, that's important. Anyways, like I said, we're
440:54 going to be going through the full process. So, I'm going to click on free
440:57 trial. I'm going to sign up with a new account. So, I'm going to log out and
441:00 sign up as a new user. Okay. Now that I entered that, we're already good to go.
441:02 The first thing I'm going to do is set up a payment method so that we can
441:05 actually spin up a service. So, I went down to my account on the left-hand
441:08 side and then I clicked on payment options and I'm going to add a card real
441:11 quick. Now that that's been added, it's going to take a few minutes for our
441:14 account to actually be approved. So, I'm just going to wait for that. You can see
441:16 we have a support ticket that got opened up, which is just waiting for account
441:21 activation. Also, here's the approval email I got. Just keep in mind it says
441:23 it'll be activated in a few minutes during business hours, but if you're
441:26 doing this at night or on the weekends, it may take a little longer. Okay, there
441:29 we go. We are now activated. So, I'm going to come up here to services and
441:32 I'm going to add a new service. What we're going to do is just type in n8n
441:35 and it's going to be a really quick one-click install. Basically, I'm going
441:39 to just be deploying on Hetzner as a cloud provider. I'm going to switch to
441:43 my region and then you have different options for service plans. So, these
441:46 options obviously have different numbers of CPUs, different amounts of RAM and
441:50 storage. I'm going to start right now on just the medium. I would keep in mind
441:53 that MCP servers can be kind of resource intensive. So, if you are running
441:57 multiple of them and your environment is crashing, then you're probably just
441:59 going to want to come in here and upgrade your service plan. So, we can
442:02 see down here, here is the estimated hourly price. Here's the plan we're
442:05 going with. And I'm going to go ahead and move forward. Now, we're going to
442:08 set up the name. So, this will pop up as your domain for your n8n environment.
442:11 Then, I went ahead and called this n8n-demo. What you can do here is you can
442:15 add more volume. So, if you wanted to, you could increase the amount of
442:17 storage. And as you can see down here, it's going to increase your hourly
442:20 price. I'm not going to do that, but you do have that option. And then of course
442:24 you have some advanced configuration for software updates and system updates.
442:26 Once again, I'm just going to leave that as is. And then you can also choose the
442:30 level of support that you need. You can scan through your different options
442:32 here. Obviously, you'll have a different price associated with it. But on the
442:36 default plan, I'm just going to continue with level one support. And now I'm
442:40 going to click on create service. Okay. So, I don't have enough credits to
442:42 actually deploy this service. So, I'm going to have to go add some credits in
442:46 my account. So, back in the account, I went to add credits. And now that I have
442:49 a card, I can actually add some credits. So, I'm going to agree to the terms and
442:52 add funds. Payment successful. Nice. We have some money to play with. Down here,
442:55 we can see 10 credits. This is also where we'll see how much we're spending
442:59 per hour once we have this service up and running. Unfortunately, we have to
443:01 do that all again. So, let me get back to the screen we were just on. Okay, now
443:04 we're back here. I'm going to click create service. We're deploying your
443:07 service. Please wait. And this is basically just going to take a few
443:10 minutes to spin up. Okay, so now what we see is the service is currently running.
443:13 We can click into the service and we should be able to get a link that's
443:16 going to take us to our n8n instance. So, here's what it looks like. We can see
443:18 it's running. And we have all these different tabs and all these different
443:20 things to look through. We're going to keep it simple today and not really dive
443:23 into it. But what we're going to do is come down here to our network and this
443:27 is our actual domain to go to. So if I select all of this and I just click go
443:31 to this app, it's going to spin up our n8n environment, and because this is the
443:34 first time we're visiting it, we just have to do the setup. Okay. So once we
443:37 have that configured, we're going to hit next. We have to do some fun little onboarding
443:39 where it's going to ask us some questions right here. So then when
443:42 you're done with that, you just got to click get started and you now have this
443:45 option to get some paid features for free. I'm going to hit skip and we're
443:49 good to go. We are in n8n. So what's next is we need to install a community
443:52 node. So if you come down here to your settings and you click on settings, um
443:56 you can see you're on the community plan. We can go all the way down here to
443:59 community nodes. And now we have to install a community node. So in the
444:02 description we have the GitHub repository for this n8n-nodes-mcp package that
444:06 was made by nerding-io. And you can see there's some information on how to
444:09 actually install this. But all we have to do is basically just copy this line
444:12 right here. I'm just going to copy n8n-nodes-mcp. Click on install community
444:17 node. Put that in there. Hit understand. And so the reason you can only do this
444:20 on self-hosted is because these nodes are not native verified nodes from
444:24 n8n. So it's just like, you know, we're downloading it from a public
444:27 source, at least the code. And then we hit install. Package installed. We can
444:31 now see we have one community node, which is the MCP. Cool. So I'm going to
444:35 leave my settings and I'm going to open up a new workflow. And we're just going
444:38 to hit tab to see if we have the actual node. So if I type in MCP, we can see
444:42 that we have MCP client and we have this little block, which just means that it
444:45 is part of the community node. So, I'm going to click into here, and we can see
444:48 we have some different options. We can execute a tool. We can get a prompt
444:52 template. We can list available prompts, list available resources, list available
444:56 tools, and read a resource. Right now, let's go with list available tools. Um,
444:59 the main one we'll be looking at is listing tools and then executing tools.
445:03 So, quick plug for the school community. If you're looking for a more hands-on
445:06 learning experience, as well as wanting to connect with over 700 members who are
445:10 also dedicated to this rapidly evolving space, then definitely give this
445:13 community a look. We have great discussions, great guest speakers as you
445:16 can see. We also have a classroom section with stuff like building agents,
445:20 vector databases, APIs and HTTP request, step-by-step builds. All the live calls
445:23 are recorded, all this kind of stuff. So, if this sounds interesting to you,
445:26 then I'd love to see you in a live call. Anyways, let's get back to the video.
445:29 So, obviously, we have the operation, but we haven't set up a credential yet.
445:32 So now what you're going to do is go to a different link in the description
445:36 which is the GitHub repository for different MCP servers, and we can pretty
445:40 much connect to any of these, like I said, without having to run any code in our
445:44 terminal or install anything, at least because we're hosting in the cloud. If
445:47 we're hosting locally, it may be a little different. Okay, so I've seen a ton of
445:50 tutorials go over like Brave Search or Firecrawl. Um so let's try to do
445:53 something a little more fun. I think first we'll start off with Airbnb
445:56 because this one is going to be free. You don't even have to go get an API
445:59 key. So that's really cool. So, I'm going to click into this Airbnb MCP
446:02 server. There's a bunch of stuff going on here. And if you understand GitHub
446:06 and repositories and some code, you can look through like the Docker file and
446:09 everything, which is pretty cool. But for us non techies, all we have to do is
446:13 come down here. It's going to tell us what tools are available. But we just
446:16 need to look at how to actually install this. And so, all we're looking for is
446:21 the npx-type installer. And so after my testing, I tried this one first, but it
446:25 wouldn't let us execute the tool, because we need to use this flag that is
446:28 ignore-robots-txt, which basically just lets us actually access the platform. So you
446:32 can see here we have a command, which is npx, and then we have an array of
446:36 arguments, which is -y, this @openbnb package, and then also the ignore-robots-txt
446:39 flag. So first of all, I'm just going to grab the command, which is npx. Copy
446:43 that. Go back into n8n, and we're going to create a new credential. This
446:47 one's going to be for Airbnb. So I'm just going to name this so we have it
446:49 saved. And then we're just going to paste the command right in there: npx. Now
446:53 we can see we have arguments to fill out. So I'm going to go back into that
446:57 documentation. We can see the arguments are -y. And then the next one is the
447:01 @openbnb package. And then the next one is ignore-robots-txt. So we're going to
447:05 put them in one by one. So first is the -y. Now I'm going to go back and grab
447:09 the @openbnb package name, put a space, and then paste it in there. And then I'm going to put another
447:14 space. And then we're going to grab this last part which is the ignore-robots-
447:18 txt. So once we paste that in there, we can basically just hit save. As you can
447:21 see, we've connected successfully. Um the credential is now in our space and
447:24 we didn't have to type in anything in a terminal. And now if we hit test step,
447:28 we should be able to pull in the tools that this MCP server gives us access to.
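(If you're curious what the node is doing under the hood, you could drive the same server from code with the official TypeScript SDK, spawning the exact npx command we just put in the credential. A sketch; the specific tool name passed to callTool is an assumption, so check the listTools() output first:)

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Same command/arguments as the n8n credential we just created.
    const transport = new StdioClientTransport({
      command: "npx",
      args: ["-y", "@openbnb/mcp-server-airbnb", "--ignore-robots-txt"],
    });

    const client = new Client({ name: "airbnb-demo", version: "1.0.0" });
    await client.connect(transport);

    // Equivalent of the node's "list available tools" operation.
    const { tools } = await client.listTools();
    console.log(tools.map((t) => t.name)); // expected: a search tool and a listing-details tool

    // Equivalent of the "execute tool" operation we'll do next.
    const result = await client.callTool({
      name: "airbnb_search", // assumed tool name; confirm against listTools()
      arguments: { location: "Los Angeles" },
    });
    console.log(result);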
447:32 So it's as easy as that. As you can see, there are two tools. The first one is
447:35 Airbnb search. Here's when you use it, and then here's the schema to send over
447:39 to that tool. And then the second one is Airbnb listing details. Here's when you
447:43 want to use that, and then here's the schema that you would send over. And now
447:46 from here, which is really cool, we can click on another node, which is going to
447:49 be an MCP client once again. And this time, we want to execute a tool. We
447:53 already have our credential set up. We just did that together. And now all we
447:56 have to do is configure the tool name and the tool parameters. So just as a
447:59 quick demo that this actually works. The tool name, we're going to drag in Airbnb
448:03 search, as you can see. And then for the parameter, we can see these are the
448:05 different things that we need to fill out. And so all I'm going to do is just
448:10 send over a location. So I obviously hardcoded in location equals Los
448:13 Angeles. That's all we're going to try. And now we're going to hit test step and
448:16 we should see that we're getting Airbnbs back that are in Los Angeles. There we
448:21 go. We have, um, a ton of different items here. So, let's actually take a look at
448:25 this listing. So, if I just copy this into the browser, we should see an
448:29 Airbnb arts district guest house. This is in Culver City, California. And
448:32 obviously, we could make our search more refined if we were able to also put in
448:36 like a check-in and checkout date, how many adults, how many children, how many
448:40 pets. We could specify the price, all this kind of stuff. Okay, cool. So that
448:43 was an example of how we search through an MCP server to get the tools and then
448:46 how we can actually execute upon that tool. But now if we want to give our
448:51 agent access to an MCP server, what we would do is obviously we're going to add
448:54 an AI agent. We are first of all going to come down here, give it a chat input
448:58 so we can actually talk to the agent. So we'll add that right here. And now we
449:01 obviously need to connect a chat model so that the agent has a brain and then
449:05 give it the MCP tools. So first of all, just to connect a chat model, I'm just
449:08 going to grab an OpenAI one. I'm sorry for being boring, but all we have to do is
449:11 create a credential. So, you go to your OpenAI account and grab an API key.
449:15 So, here's my account. As you can see, I have a ton of different keys, but I'm
449:17 just going to create a new one. This is going to be MCP test and then all we
449:21 have to do is copy that key. Come back into n8n and we're going to paste
449:25 that right in here. So, there's our key. Hit save. It'll go green. We're good to
449:29 go. We're connected to OpenAI. And now we can choose our model. So, 4o-mini is
449:32 going to work just fine here. Now, to add a tool once again, we're going to
449:35 add the MCP client tool right here. And let's just do Airbnb one more time. So,
449:40 we're connected to Airbnb list tools and I'm just going to say what tools do I
449:46 have, and what's going to happen is it errors, because the n8n-nodes-mcp tool
449:50 is not recognized yet, even though the MCP nodes are. So, we have to go back
449:55 into Elestio real quick and change one thing. So, coming back into the GitHub
449:59 repository for the n8n MCP node, we can see it gives us some installation
450:03 information, right? But if we go all the way down to how to use it as a tool, um
450:06 if I can scroll all the way down here. So here is an example of using it as a
450:10 tool. You have to set up the environment variable within your hosting
450:14 environment. So whether it's Elestio or Render or DigitalOcean or wherever
450:17 you're doing it, it'll be a little different, but you just have to navigate
450:19 down to where you have environment variables. We have to set
450:26 N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE, and we have to set that to equal true. So I'm going to come
450:30 back into our Elestio service. And right here we have the software, which is n8n
450:33 version latest. And what we can do is we can, you know, restart, view app logs. We
450:37 can change the version here or we can update the config which if we open this
450:41 up it may look a little intimidating but all we're looking for is right here we
450:45 have environment, and we can see we have, like, different stuff with our Postgres,
450:49 with our web hook tunnel URLs, all this kind of stuff. And so at the bottom I'm
450:52 just going to add a new line and I'm just going to paste in that variable we
450:55 had, which was N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE, and then instead of an equals sign I'm
450:59 going to put a colon, and now we have that N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE is
451:03 set to true. I'm just adding a space after the colon so it's formatted correctly. And all
451:07 we're just going to do is hit update and restart. And so this is going to respin
451:10 up our instance. Okay, so it looks like we are now finished up. I'm going to go
451:13 ahead and close out of this. We can see that our instance is running. So now I'm
451:16 going to come back into here and I actually refresh this. So our agent's
451:20 gone. So let me get him back real quick. All right, so we have our agent back.
451:23 We're going to go ahead and add that MCP tool once again. Right here we are going
451:27 to have our credential already set up. The operation is list tools. And now
451:31 let's try one more time asking it, what tools do you have? And it knows to use this node because
451:37 the operation here is list tools. So it's going to be pretty intelligent
451:40 about it. Now it's able to actually call that tool because we set up that
451:43 environment variable. So let's see what Airbnb responds with as far as what
451:47 tools it actually can use. Cool. So I have access to the following tools.
451:50 Airbnb search and listing details. Now let's add the actual tool that's going
451:56 to execute on that tool. So Airbnb um once again we have a credential already
451:58 set up. The operation we're going to choose execute tool instead. And now we
452:02 have to set up what is going on within this tool. So the idea here is that when
452:06 the client responds with, okay, I have Airbnb search and I have Airbnb listing
452:10 details, the agent will then figure out, based on what we asked, which one to
452:15 use, and the agent has to pass that over to this next one, which is actually going
452:19 to execute. So what we want to do here is the tool name cannot be fixed, because
452:23 we want to make this dynamic. So, I'm going to change this to an expression,
452:26 and I'm going to use the handy $fromAI function here, which basically tells
452:30 the AI agent, okay, based on what's going on, you choose which tool to use
452:34 and put it in here. So, I'm going to put in quotes 'tool' and then define what
452:38 that means: in quotes after a comma, I'm going to say 'the tool selected', so the
452:43 whole expression reads something like {{ $fromAI('tool', 'the tool selected') }}. So, we'll just leave
452:47 it as simple as that. And then what's really cool is for the tool parameters,
452:51 this is going to change based on the actual tool selected because there's
452:54 different schemas or parameters that you can send over to the different tools. So
452:58 we're going to start off by just hitting this button, which lets the model define
453:02 this parameter. It's going to get back not only what tool am I using, but
453:05 what schema do I need to send over. So it should be intelligent enough to
453:09 figure it out for simple queries. So let's change this name to Airbnb
453:14 execute. I'm going to change this other one to Airbnb tools and then we'll have
453:18 the agent try to figure out what's going on. And just a reminder, there's no
453:22 system prompt in here. It literally just says 'You're a helpful assistant.' So, we'll
453:25 see how intelligent this stuff is. Okay, so I'm asking it to search for Airbnbs
453:29 in Chicago for four adults. Let's send that off. We should obviously be using
453:34 the Airbnb search tool. And then we want to see if it can fill out the parameters
453:37 with a location, but also how many adults are going to be there because
453:41 earlier all we did was location. So, we got a successful response already back.
453:45 Once this finishes up, we should see potentially a few links down here that
453:49 actually link to places. So, here we go. Um, luxury designer penthouse Gold
453:53 Coast. It's an apartment. It has three bedrooms, eight beds. So, that
453:56 definitely fits four guests. And you can also see it's going to give us the price
453:59 per night as well as, you know, the rating and just some other information.
454:02 So, let's click into this one real quick and we'll take a look. Make sure it
454:05 actually is in Chicago and it has all the stuff. This one does have 10 guests.
454:09 So, awesome. And we can see we got five total listings. So without having to
454:14 configure, you know, here's the API documentation and here's how we set up
454:18 our HTTP request, we're already able to do some pretty cool Airbnb searches. So
454:21 let's take a look in the Airbnb execute tool. We can see that what it sent over
454:25 was a location as well as a number of adults, which is absolutely perfect. The
454:29 model was able to determine how to format that and send it over as JSON.
454:33 And then we got back our actual search results. And now we're going to do
454:35 something where you actually do need an API key because most of these you are
454:39 going to need an API key. So we're going to go ahead and do Brave search because
454:42 you can search the web um using Brave Search API. So we're going to click into
454:46 this and all we have to do is once again we can see the tools here but we want to
454:49 scroll down and see how you actually configure it. So the first step is to go
454:53 to Brave Search and get an API key. You can click on this link right here and
454:56 you'll be able to sign in and get 2,000 free queries and then you'll grab your
454:59 API key. So I'm going to log in real quick. So it may send you a code to your
455:03 email to verify it. You'll just put in the code, of course, and then we're
455:06 here. As you can see, I've only done one request so far. I'm going to click on
455:10 API keys on the left-hand side, and we're just going to copy this token, and
455:12 then we can put it into our configuration. So, let's walk through
455:15 how we're going to do that. So, I'm going to come in here and add a new
455:18 tool. We're going to add another MCP client tool, and we're going to create a
455:21 new credential because we're no longer connecting to Airbnb's server. We're
455:26 connecting to Brave Search's server. So, create new credential. Let me just name
455:28 this one real quick so we don't get confused. And then of course we have to
455:32 set up our command, our arguments, and our environments. And this is where
455:35 we're going to put our actual API key. Okay, so first things first, the
455:38 command. Coming back into the Brave Search MCP server documentation, we can
455:42 see that we can either do Docker, but what we're doing every time we're
455:46 connecting to this in n8n is going to be npx. So our command once again is npx.
455:51 Copy that, paste it into the command. And now let's go back and get our
455:53 arguments, which are always going to start off with -y. Then after that, put a space.
455:59 We're going to connect to this MCP server, which is the Brave Search. And
456:02 then you can see that's it. In the Airbnb one, we had to add the ignore-robots-txt
456:05 flag. In this one, we didn't. So, everyone is going to configure a little
456:08 bit differently, but all you have to do is just read through the command, the
456:11 arguments, and then the environment variables. And in this case, unlike the
456:15 Airbnb one, we actually do need an API key. So, what we're going to do is we're
456:19 going to put in all caps BRAVE_API_KEY. So, in the environment
456:22 variables, I'm going to change this to an expression just so we can actually
456:27 see: BRAVE_API_KEY. And then I'm going to put an equals sign, and then it says to put
456:30 your actual API key. So that's where we're going to paste in the API key from
456:34 Brave Search. Okay. So I put in my API key. Obviously I'm going to remove that
456:37 after this video gets uploaded. But now we'll hit save and we'll make sure that
456:40 we're good to go. Cool. And now we're going to actually test this out. So I'm
456:44 going to call this Brave Search Tools. Um and then before we add the actual execute tool, I'm just
456:52 going to ask and make sure it works. So what Brave Search tools do you have?
456:58 And it knows of course to hit the Brave Search tools node because we gave it a good name.
457:01 And it should be pulling back with its different functions which I believe
457:04 there are two. Okay. So we have Brave web search and we have Brave local search. We also
457:09 have, you know, of course the description of when to use each one and
457:13 the actual schemas to send over. So let's add a tool and make sure that it's
457:16 working. We're going to click on the plus. We're going to add an MCP client
457:19 tool. We already have our Brave credential connected. We're going to
457:23 change the operation to execute tool. And once again, we're going to fill in
457:26 the tool name and the parameters. So for the tool name, same exact thing. We're
457:30 going to do from AI. And once again, this is just telling the AI what to fill
457:34 in here. So we're going to call it tool. We're going to give it a very brief
457:37 description of the tool selected. And then we are just going to
457:41 enable the tool parameters to be filled out by the model automatically. Final
457:46 thing is just to call this Brave search execute. Cool. There we go. So now we
457:52 have um two functions for Airbnb, two for Brave search, and let's make sure
457:54 that the agent can actually distinguish between which one to use. So I'm going
458:00 to say search the web for information about AI agents. So we'll send that off.
458:05 Looks like it's going straight to the Brave Search execute. So we may have to
458:08 get into the system prompt and tweak it a little bit. Now it's going back to the
458:12 Brave Search tools to understand, okay, what actions can I take? And now it's
458:15 going back to the Brave Search execute tool. And hopefully this time it'll get
458:19 it right. So, it looks like it's going to compile an answer right now based on
458:23 its search result and then we'll see exactly what happened. There we go. So,
458:27 we have, looks like, oh wow, it gave us nine different articles. Um, what are AI
458:32 agents by IBM? We can click into here to read more. So, this takes us straight to
458:36 IBM's article about AI agents. We have one also from AWS. We can click into
458:40 there. There's Amazon. And let's go all the way to the bottom. We also have one
458:44 on agents from Cloudflare. So, let's click into here. And we can see it took
458:48 us exactly to the right place. So super cool. We didn't have to configure any
458:51 sort of API documentation. As you can see in Brave Search, if we wanted to
458:55 connect to this a different way, we would have had to copy this curl
458:58 command, statically set up the different headers and the parameters. But now with
459:02 this server, we can just hit it right away.
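(For contrast, here's a sketch of that "manual" route: calling Brave's REST API directly with the same key, based on Brave's public docs, instead of letting the MCP server handle the schema for you:)

    // Direct REST call to Brave Search (no MCP server in between).
    async function braveWebSearch(query: string, count = 10) {
      const url = new URL("https://api.search.brave.com/res/v1/web/search");
      url.searchParams.set("q", query);
      url.searchParams.set("count", String(count));

      const res = await fetch(url, {
        headers: {
          Accept: "application/json",
          "X-Subscription-Token": process.env.BRAVE_API_KEY!, // the same key we just created
        },
      });
      if (!res.ok) throw new Error(`Brave API error: ${res.status}`);
      return res.json();
    }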
459:05 So let's take a look in the agent logs, though, because we want to see what happened. So the first time, it
459:09 tried to go straight to the execute tool and as you can see it filled in the
459:12 parameters incorrectly as well as the actual tool name because it didn't have
459:16 the information from the server. Then it realized okay I need to go here first so
459:20 that I can find out what I can do. It tried to use a tool called web search, as
459:24 you can see earlier. But what it needed to do was use a tool called
459:28 brave web search. So now on the second try back to the tool it got it right and
459:32 it said brave web search. It also filled out some other information like how many
459:35 articles are we looking for and what's the offset. So if we were to come back
459:41 in here and say get me one article on dogs. Let's see what it would do. So
459:44 hopefully it's going to fill in the count as one. Once again it went
459:48 straight to the tool. And, I was going to say, if we had memory in the
459:51 agent, it probably would have worked, because it would have seen that it used
459:55 brave web search previously, but there's no memory here. So, it did the exact
459:58 same pattern, and we would basically just have to prompt this agent, hey,
460:03 search the MCP server to get the tools before you try to execute a tool. But
460:07 now we can see it found one article. It's just Wikipedia. So, we
460:10 can click in here and see it's the Dog article on Wikipedia. But if we click into the
460:14 actual Brave search execute tool, we can see that what it filled out for the
460:17 query was dogs and it also knew to make the count one rather than last time it
460:21 was 10. Okay, so something I want you guys to keep in mind is when you're
460:25 connecting to different MCP servers, the setup will always be the same where
460:29 you'll look in the GitHub repository, you'll look at the command, which will
460:32 be npx, you'll look at the arguments, which will be -y, space, the name of
460:36 the server, and then sometimes there'll be more. And then after that, you'll do
460:39 your environment variable, which is going to be a credential, some sort of
460:43 API key. So here, what we did was we asked Airtable to list its actions. And
460:46 in this case, as you can see, it has 13 different actions. And within each
460:49 action, there's going to be different parameters to send over. So, when you
460:52 start to scale up to some of these MCP servers that have more actions and more
460:56 parameters, you're going to have to be a little more specific with your
461:00 prompting. As you can see in this agent, there's no prompting going on. It's just
461:03 'You're a helpful assistant.' And what I'm going to try is, in my Airtable, I have
461:07 a base called contacts, a table called leads, and then we have this one record.
461:11 So, let's try to ask it to get that record. Okay. So, I'm asking it to get
461:14 the records in my Airtable base called contacts, in my table called leads.
461:19 Okay, so we got the error 'received tool input did not match expected schema.' And
461:23 so this really is because what has to happen here is kind of complex. It has
461:26 to first of all go get the bases to grab the base ID and then it has to go grab
461:31 the tables in that base to get the table ID, and then it has to formulate that
461:35 into a response over here. As you can see, if the operation was to list
461:38 records, it has to fill out the base ID and the table ID in order to actually
461:42 get those records back. So that's why it's having trouble with that. And so a
461:45 great example of that is within my email agent for my ultimate assistant. In
461:50 order to do something like label emails, we have to send over the message ID of
461:54 the email that we want to label. And we have to send over the ID of the label to
462:01 actually add to that message. And in order to do those two things, we first
462:04 have to get all emails to get the message ID, and then we have to get
462:08 labels to get the label ID. So it's a multi-step process.
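(Sketched in code, the chain the agent has to pull off for a request like "get the records in my base called contacts" looks something like this. The package name, env var, tool names, and the two helper functions are assumptions based on the Airtable MCP server shown earlier:)

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Hypothetical helpers: pull IDs out of each tool's response.
    declare function findBaseIdByName(result: unknown, name: string): string;
    declare function findTableIdByName(result: unknown, name: string): string;

    const transport = new StdioClientTransport({
      command: "npx",
      args: ["-y", "airtable-mcp-server"], // assumed package name
      env: { AIRTABLE_API_KEY: process.env.AIRTABLE_API_KEY! }, // assumed env var
    });
    const client = new Client({ name: "airtable-demo", version: "1.0.0" });
    await client.connect(transport);

    // Step 1: list bases to find the "contacts" base ID.
    const bases = await client.callTool({ name: "list_bases", arguments: {} });
    const baseId = findBaseIdByName(bases, "contacts");

    // Step 2: list tables in that base to find the "leads" table ID.
    const tables = await client.callTool({ name: "list_tables", arguments: { baseId } });
    const tableId = findTableIdByName(tables, "leads");

    // Step 3: only now can the records actually be listed.
    const records = await client.callTool({ name: "list_records", arguments: { baseId, tableId } });
    console.log(records);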
462:12 And that's why it's a little bit tough for this agent, with minimal prompting
462:16 and not a super robust parameter in here; it's literally just defined by the
462:20 model. But if I said something like get my Airtable bases, we'll see if it can handle that because
462:20 that's more of a one-step function. And it looks like it's having trouble
462:23 because if we click into these actions, we can see that the operation of listing
462:28 bases sends over an empty array. So it's having trouble being able to, like, send
462:31 that data over. Okay, so I'm curious though. I went into my Airtable and I
462:35 grabbed a base ID. Now I'm going to ask what tables are in this Airtable base
462:39 ID, and I gave it the base ID directly, so it won't have to do that list bases
462:42 function. And now we can see it actually is able to call the tool, hopefully. So
462:46 it's green and it probably put in that base ID and we'll be able to see what
462:50 it's doing here. But this just shows you there are still obviously some
462:53 limitations, and I'm hoping that n8n will make a native, you know, MCP server node.
462:57 But look at what it was able to do now: it says here are the tables within the
463:01 Airtable base ID that we provided, and these are the four tables, and this is
463:04 correct. And so now I'm asking it what records are in the Airtable base ID of
463:09 this one and the table ID of this one. And it should be able to actually use
463:12 its list records tool now in order to fill in those different parameters. And
463:16 hopefully we can see our record back which should be Robert California. So we
463:20 got a successful tool execute as you can see. Let's wait for this to pop back
463:24 into the agent and then respond to us. So now we have our actual correct
463:28 record. Robert California, Sabre, custom AI solution, all this kind of stuff. And
463:31 as you can see, that's exactly what we're looking at within our actual
463:34 Airtable base. And so, I just thought that it would be important to show off here
463:38 how this is like really cool, but it's not fully there yet. So, I definitely
463:41 think it will get there, especially if we get some more native integrations
463:43 with n8n. But I thought that was a good demo to show the way
463:46 that it needs to fill in these parameters in order to get records. And,
463:51 you know, this type of example applies to tons of different things that you'll
463:55 do within MCP servers.
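To make that chaining concrete, here's a rough Python sketch of the three REST calls an MCP server has to string together for a request like mine. The endpoints follow Airtable's public Web API; the token and the base/table names are placeholders:

```python
import requests

TOKEN = "pat_..."  # placeholder Airtable personal access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Step 1: list bases to find the base ID for "contacts"
bases = requests.get("https://api.airtable.com/v0/meta/bases", headers=HEADERS).json()
base_id = next(b["id"] for b in bases["bases"] if b["name"] == "contacts")

# Step 2: list tables in that base to find the table ID for "leads"
tables = requests.get(
    f"https://api.airtable.com/v0/meta/bases/{base_id}/tables", headers=HEADERS
).json()
table_id = next(t["id"] for t in tables["tables"] if t["name"] == "leads")

# Step 3: only now can "list records" be called with both IDs filled in
records = requests.get(
    f"https://api.airtable.com/v0/{base_id}/{table_id}", headers=HEADERS
).json()
print(records["records"])  # should include our Robert California record
```

With minimal prompting, the agent has to figure out this whole sequence on its own, which is exactly where it stumbled.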
463:58 So, there's one more thing I want to show you guys real quick, just so you will not be banging your head against the wall the way I was
464:01 a couple days ago when I was trying to set up Perplexity. So, because you have
464:03 all these different servers to choose from, you may just trust that they're
464:06 all going to be the exact same and they're going to work the same. So, when
464:10 I went to set up the Perplexity Ask MCP server, I was pretty excited. The command
464:13 was npx. I put in my arguments. I put in my environment variable, which was my
464:17 Perplexity API key. And you can see I set this up exactly as it should be. My
464:20 API key's in there. I triple-checked to make sure it was correct. And then when
464:23 I went to test the step, basically what happened was: couldn't connect to the MCP
464:27 server. Connection closed.
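For reference, the credential fields looked roughly like this. The package name is the one from Perplexity's MCP quickstart, so treat it as an assumption and double-check the server's README:

```
Command:       npx
Arguments:     -y server-perplexity-ask
Environment:   PERPLEXITY_API_KEY=pplx-...   (your actual key)
```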
464:31 And so after digging into what this meant, because I set up all these other ones, as you can see in here, I did these and I have more
464:34 that I've connected to. The reason why this one isn't working, I imagine, is
464:38 because on the server side of things, on Perplexity's side of things, it's
464:42 either undergoing maintenance or it's not fully published yet. And it's not
464:45 anything wrong with the way that you're deploying it. So, I just wanted to throw
464:48 that out there because there may be some other ones in this big list that are not
464:52 fully there yet. So, if you are experiencing that error and you know
464:54 that you're filling out the npx command and the arguments and the
464:58 environment variable correctly, then that's probably why. Don't spend all day
465:01 on it. Just wanted to throw that out there because, you know, I had a
465:06 moment the other day. Well, it's been a fun journey. I appreciate you guys
465:08 spending all this time with me. We've got one more section to close off on and
465:12 this is going to be kind of just the biggest lessons that I had learned over
465:16 the first six months of me building AI agents as a non-programmer. Let's go.
465:20 Because everyone's talking about this kind of stuff, there's a lot of hype and
465:22 there's a lot of noise to cut through. So, the first thing I want to do is talk
465:25 about the hard truths about AI agents and then I'll get into the seven lessons
465:28 that I've learned over the past six months building these things. So, the
465:32 first one is that most AI agent demos online are just that, they're demos. So,
465:35 the kind of stuff that you're going to see on LinkedIn, blog posts, YouTube,
465:40 admittedly, my own videos as well, these are not going to be production-ready
465:43 builds or production-ready templates that you could immediately start to plug into
465:46 your own business or try to sell to other businesses. You'll see all sorts
465:50 of cool use cases like web researchers, salespeople, travel agents. Just for
465:54 some context, these are screenshots of some of the videos I've made on YouTube.
465:57 This one is like a content creator. This one is a human-in-the-loop calendar agent.
466:01 We've got a technical analyst. We have a personal assistant with all its agents
466:04 over here. Stuff like that. But the reality is that all of these pretty much
466:07 are just going to be, you know, proof of concepts. They're meant to open
466:11 everyone's eyes to what this kind of stuff looks like visually, how you can
466:14 spin this kind of stuff up, the fundamentals that go into building these
466:17 workflows. And at least me personally, my biggest motivation in making these
466:21 videos is to show you guys how you can actually start to build some really cool
466:24 stuff with zero programming background. And so why do I give all those templates
466:27 away for free? It's because I want you guys to download them, hit run, see the
466:31 data flow through and understand what's going on within each node rather than
466:34 being able to sell that or use it directly in your business because
466:37 everyone has different integrations. Everyone's going to have different
466:40 system prompting and different little tweaks that they need for an automation
466:44 to be actually high value for them. Besides that, a lot of this is meant to
466:47 be within a testing environment, but if you push it into production and you
466:50 expose it to all the different edge cases and tons of different users,
466:53 things are going to come through differently and the automation is going
466:56 to break. And what you need to think about is even these massive companies in
467:00 the space like Apple, Google, Amazon, they're also having issues with AI
467:04 reliability, like what we saw with Apple Intelligence having to be delayed. So if
467:07 a company like this with a massive amount of resources is struggling with
467:10 some of these production-ready deployments, then it's kind of
467:14 unrealistic to think that a beginner or non-programmer in these tools can spin
467:18 up something in a few days that would be fully production-ready. And by that I
467:21 just mean the stuff you see online. You could easily get into n8n, build
467:25 something, test it, and get it really robust in order to sell it. I'm not
467:28 saying you can't do that at all. Just that the stuff you see online isn't there yet.
467:32 Now, the second thing is being able to understand the difference between AI
467:36 agents and AI workflows. And it's one of those buzzwords that everyone's kind of
467:39 calling everything an agent when in reality that's not the truth. So, a lot
467:42 of times people are calling things AI agents, even if they're just sort of
467:46 like an AI powered workflow. Now, what's an AI powered workflow? Well, as you can
467:49 see right down here, this is one that I had built out. And this is an AI powered
467:53 workflow because it's very sequential. As you can see, the data moves from here
467:57 to here to here to here to here to here. And it goes down that process every
468:00 time. Even though there are some elements in here using AI like this
468:04 basic chain and this email writing agent. Now, this has a fixed amount of
468:07 steps and it flows in this path every single time. Whereas something over here
468:11 like an AI agent, it has different tools that it's able to call and based on the
468:14 input, we're not sure if it's going to call each one once or it's going to call
468:17 this one four times or if it's going to call this one and then this one. So
468:20 that's more of a non-deterministic workflow. And that's when you need to
468:24 use something like an AI agent. The difference here is that it's choosing
468:27 its own steps. The process is not predefined, meaning every time we throw
468:31 an input, we're not sure what's going to happen and what we're going to get back.
468:35 And then the agent also loops, calls its tools, it observes what happens, and
468:38 then it reloops and thinks about it again until it realizes, okay, based on
468:43 the input, I've done my job. Now I'm going to spit something out. And so
468:46 here's just a different visualization of, you know, an AI agent with an input,
468:49 the agent has decision, and then there's an output or this AI workflow where we
468:54 have an input, tool one, LLM call, tool two, tool three, output where it's going
468:58 to happen in that process every single time.
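Here's a minimal sketch of that difference in Python, where every function name is a stand-in:

```python
# AI-enhanced WORKFLOW: a fixed number of steps, same order, every run.
def run_workflow(form_response):
    data = clean_data(form_response)                         # step 1, always
    email = call_llm("Write a personalized email: " + data)  # step 2, always
    update_crm(data)                                         # step 3, always
    send_email(email)                                        # step 4, always
    return "done"

# AGENT: the model picks its own next tool, loops, observes, and re-plans.
def run_agent(user_input, tools):
    history = [("user", user_input)]
    while True:
        action = choose_next_action(history, tools)   # LLM decides what to do next
        if action.name == "finish":                   # model judges the job done
            return action.final_answer
        observation = tools[action.name](**action.args)
        history.append(("observation", observation))  # observe, then loop again
```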
469:02 And the truth is that most problems don't require true AI agents. They can simply be solved with building
469:07 a workflow that is enhanced with AI. And a common mistake, and I think it's just
469:09 because of all the hype around AI agents, is that people are opting
469:13 straight away to set up an agent. Like in this example right here, let's say
469:16 the input is a form trigger where we're getting a form response. We're using
469:19 this tool to clean up the data. We're using this LLM call. So it's an AI
469:23 enhanced workflow to actually write a personalized email. We're using this to
469:26 update the CRM and then we're using this to send the email and then we get the
469:30 output back as the human. We could also set this up as an AI agent where we're
469:34 getting the form response. We're sending this agent the information and it can
469:37 choose, okay, first I'm going to clean the data and then I'm going to come back
469:40 here and think about it and then I'm going to update the CRM and then I'm
469:43 going to create an email and then I'm going to send the email and then I'm
469:46 going to output and respond to the human and tell it that, you know, we did
469:50 that job for you. But because this process is pretty linear, it's going to
469:53 be a lot more consistent if we do a workflow. It's going to be easier to
469:56 debug. Whereas over here, the agent may mess up some tool calls and do things in
470:00 the wrong order. So it's better to just structure it out like that. And so if we
470:03 start approaching using these no code tools to build AI workflows first, then
470:07 we can start to scale up to agents once we need more dynamic decision-making and tool
470:11 calling. Okay, but that's enough of the harsh truths. Let's get into the seven
470:15 most important lessons I've learned over the six months of building AI agents as
470:19 a non-programmer. So the first one is to build workflows first. And notice I
470:23 don't even say AI workflows here, I say workflows. So, over the past six months
470:27 of building out these systems and hopping on discovery calls with clients
470:30 where I'm trying to help them implement AI into their business processes, we
470:34 always start by, you know, having them explain to me some of their pain points
470:38 and we talk through processes that are repetitive and processes that are a big
470:42 time suck. And a lot of times they'll come in, you know, thinking they need an
470:45 AI agent or two. And when we really start to break down this process, I
470:48 realize this doesn't need to be an agent. This could be an AI workflow. And
470:51 then we break down the process even more and I'm like, we don't even need AI
470:55 here. We just need rule-based automation and we're going to send data from A to B
470:59 and just do some manipulation in the middle. So let's look at this flowchart
471:02 for example. Here we have a form submission. We're going to store data.
471:05 We're going to route it based on if it's sales, support, or general. We'll have
471:09 that ticket or notification. Send an automated acknowledgement. And then
471:13 we'll end the process. So this could be a case where we don't even need AI. If
471:16 we're having the forms come through and these three types are already
471:19 predefined, which are either sales, support, or general, that's a really easy
471:24 rules-based automation. Meaning, does inquiry type equal sales? If yes, we'll
471:28 go this way and so on and so forth. Now, maybe there's AI we need over here to
471:32 actually send that auto acknowledgement or it could be as simple as an automated
471:35 message that we're able to define based on the inquiry type. Now, if the
471:41 form submission is just a block of text and we need an AI to read it,
471:45 understand it and decide if it's sales, support, or general, then we would need
471:49 AI right here. And that's where we would have to assess what the data looks like
471:52 coming in and then what we need to do with the data. So, it's always important
471:56 to think about, do we even need AI here? Because a lot of times when we're trying
471:59 to cut off some of that low-hanging fruit, when we realize that we're doing
472:02 some of this stuff too manually, we don't even need AI yet. We're just going
472:06 to create a few workflow automations and then we can start getting more advanced
472:09 with adding AI in certain steps. So hopefully this graphic adds a little
472:12 more color here. On the left we're looking at a rule-based sort of filter
472:16 and on the right we're looking at an AI powered filter. So let's take a look at
472:20 the left one first. We have incoming data. So let's just say we're routing
472:24 data off based on if someone's budget is greater than 10 or less than 10.
472:28 Hopefully it's greater than 10. Um, so the filter here is: is X greater than 10?
472:34 If yes, we'll send it up towards tool one. If no, we're going to send it down
472:37 towards tool two. And those are the only two options because those are the only
472:41 two buckets that a number can fit into. Unless I guess it's exactly equal to 10.
472:45 I probably should have made this sign a greater than or equal to, but anyways,
472:48 you guys get the point. Now, over here, if we're looking at an AI powered sort
472:51 of filter right here, we're using a large language model to evaluate the
472:55 incoming data, answer some sort of question, and then route it off based on
473:00 criteria. So the incoming data, we have to look at, or sorry, not we, the AI is
473:04 looking at what type of email this is, because this uses some element of
473:09 reasoning or logic or decision-making, something that actually needs to be able
473:12 to read the context and understand the meaning of what's coming through in
473:15 order to make that decision. This is where before AI we would have to have a
473:19 human in the loop. We'd have to have a human look at this data and analyze
473:22 which way it's going to go rather than being able to write some sort of code or
473:28 filter to do so because it's more than just like what words exist. It's
473:31 actually like when these words come together in sentences and paragraphs,
473:35 what does it mean? So AI is able to read that and understand it and now it can
473:39 decide if it's a complaint, if it's billing or if it's promotion and then
473:42 based on what type it is, we'll send it off to a different tool to take the next
473:46 action.
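Side by side, the two filters look something like this sketch, where the tool, handler, and LLM functions are stand-ins:

```python
# Rule-based filter: a plain comparison. No AI required.
def route_by_budget(record):
    if record["budget"] >= 10:      # ">= 10" covers the exactly-10 case noted above
        return tool_one(record)
    return tool_two(record)

# AI-powered filter: the model has to read the text and judge its meaning.
def route_email(email_text):
    label = call_llm(
        "Classify this email as exactly one word: complaint, billing, or promotion.\n\n"
        + email_text
    ).strip().lower()
    handlers = {"complaint": handle_complaint,
                "billing": handle_billing,
                "promotion": handle_promotion}
    return handlers[label](email_text)
```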
473:51 So the big takeaway here is to find the simplest approach first. You may not even need an agent at all. So why would you add more complexity if you
473:54 don't have to? And also if you start to learn the fundamentals of workflow
473:59 automation, data flow, logic, creative problem solving, all that kind of stuff,
474:02 it's going to make it so much easier when you decide to scale up and start
474:05 building out these multi-agentic systems as far as, you know, sending
474:09 data between workflows and understanding routing. Your life's going to be a lot
474:13 easier. So only use AI where it actually is going to provide value. And also
474:18 using AI and hitting an LLM typically isn't free. I mean, unless you're
474:22 self-hosting, but anyways, it's not free. So why would you want to spend
474:24 that extra money in your workflow if you don't have to? You can scale up when you
474:28 need the system to decide the steps on its own, when you need it to handle more
474:32 complex multi-step reasoning, and when you need it to control usage dynamically.
474:36 And I highlighted those three words because that's very like human sounding,
474:41 right? Decide, reason, dynamic. Okay, moving on to lesson number two. This is
474:45 to wireframe before you actually get in there and start building. One of the
474:48 biggest mistakes that I made early on and that I see a ton of people making
474:51 early on is jumping straight into their builder, whatever it is, and trying to
474:56 get the idea in their head onto a canvas without mapping it out at all. And this
475:00 causes a lot of problems. So, the three main ones here are: you start to
475:03 create these messy, over complicated workflows because you haven't thought
475:06 out the whole process yet. You're going to get confused over where you actually
475:10 need AI and where you don't. And you may end up spending hours and hours
475:15 debugging, trying to revise, all this kind of stuff, because you didn't
475:18 consider either a certain integration up front or a certain functionality up
475:21 front or you didn't realize that this could be broken down into different
475:24 workflows and it would make the whole thing more efficient. I can't tell you
475:27 how many times when I started off building these kind of things that I got
475:31 almost near the end and I realized I could have done this with like 20 fewer
475:34 nodes or I could have done this in two workflows and made it a lot simpler. So,
475:37 I end up just deleting everything and restarting. So what we're looking at
475:40 right here are different Excalidraw wireframes that I had done. As you can
475:43 see, I kind of do them differently each time. There's not really a, you know,
475:47 defined way that you need to do this correctly or correct color coding or
475:51 shapes. The idea here is just to get your thoughts from your head onto a
475:56 paper or onto a screen and map it out before you get into your workflow
476:00 builder because then in the process of mapping things out, you're going to
476:02 understand, okay, there may be some complexities here or I need all of this
476:05 functionality here that I didn't think of before. And this isn't really to say
476:08 that there's one correct way to wireframe. As you can see, sometimes I
476:12 do it differently. Um, there's not like a designated schema or color type or
476:16 shape type that you should be using. Whatever makes sense to you really. But
476:19 the idea here is even if you don't want to wireframe and visually map stuff out,
476:23 it's just about planning before you actually start building. So, how can you
476:28 break this whole project down? You know, a lot of people ask me, I have an input
476:31 and I know what that looks like and I know what I want to get over here, but
476:34 in between I have no idea what that looks like. So, how can we break this
476:39 whole project into workflows? And each workflow is going to have like
476:42 individual tasks within that workflow. So, breaking it down to as many small
476:46 tasks as possible makes it a lot easier to handle. Makes it a lot less
476:49 overwhelming than looking at the entire thing at once and thinking, how do I get
476:53 from A to Z? And so, what that looks like to either wireframe or to just
476:57 write down the steps is you want to think about what triggers this workflow.
477:01 How does this process start? And what does the data look like coming in that
477:05 triggers it? From there, how does the data move? Where does it go? Am I able
477:09 to send it down one path? Do I have to send it off different ways based on some
477:13 conditional logic? Do I need some aspect of AI to make decisions based on the
477:17 different types of data coming through? You know, what actions have to be taken?
477:21 Where do we need RAG or API calls involved? Where do we need to go out
477:26 somewhere to get more external data to enrich the context going through to the
477:29 next LLM? What integrations are involved? So, if you ask yourself these
477:33 kind of questions while you're writing down the steps or while you're
477:36 wireframing out the skeleton of the build, you are going to answer so many
477:39 more questions, especially if it comes to, you know, you're trying to work with
477:42 a client and you're trying to understand the scope of work and understand what
477:46 the solution is going to look like. If you wireframe it out, you're going to
477:49 have questions for them that they might not have thought of either, rather
477:52 than you guys agree on a scope of work and you start building this thing out
477:54 and then all of a sudden there's all these complexities. Maybe you priced way
477:58 too low. Maybe you don't know the functionality. And the idea here is just
478:02 to completely align on what you're trying to build and what the client
478:06 wants or what you're trying to build and what you're actually going to do in your
478:09 canvas. So there's multiple use cases, but the idea here is that it's just
478:13 going to be so so helpful. And because you're able to break down every single
478:17 step and every task involved, you'll have a super clear idea on if it's going
478:20 to be an agent or if it's going to be a workflow because you'll see if the stuff
478:23 happens in the same order or if there's an aspect of decision-making involved. So,
478:28 when I'm approaching a client build or an internal automation that I'm trying
478:31 to build for myself, there is no way that more than half my time is spent in
478:36 the builder. Pretty much upfront, I'm doing all of the wireframing and
478:38 understanding what this is going to look like because the goal here is that we're
478:42 basically creating a step-by-step instruction manual of how to put the
478:45 pieces together. You should think of it as if you're putting together a Lego
478:48 set. So, you would never grab all the pieces from your bag of Legos, rip it
478:52 open, and just start putting them together and trying to figure out where
478:55 what goes where. You're always going to have right next to you that manual where
478:58 you're looking at like basically the step-by-step instructions and flipping
479:01 through. So, that's what I do with my two monitors. On the left, I have my
479:04 wireframe. On the right, I have my n8n, and I'm just looking back and forth and
479:07 connecting the pieces where I know the integrations are supposed to be. You
479:10 need a clear plan. Otherwise, you're not going to know how everything fits
479:14 together. It's like you were trying to, you know, build a 500-piece puzzle, but
479:17 you're not allowed to look at the actual picture of a completed puzzle, and
479:20 you're kind of blindly trying to put them together. You can do it. It can
479:24 work, but it's going to take a lot longer. Moving on to number three, we
479:27 have context is everything. The AI is only going to be as good as the
479:31 information that you provide it. It is really cool. The tech has come so far.
479:34 These AI models are super super intelligent, but they're pre-trained.
479:37 So, they can't just figure things out, especially if they're operating within a
479:41 specific domain where there's, you know, industry jargon or your specific
479:45 business processes. It needs your subject matter expertise in order to
479:48 actually be effective. It doesn't think like we do. It doesn't have past
479:52 experiences or intuition, at least right away. We can give it stuff like that. It
479:56 only works with the data it's given. So, garbage in equals garbage out. So, what
479:59 happens if you don't provide high quality context? Hallucination. The AI
480:03 is going to start to make up stuff. Tool misuse. It's not going to use the tools
480:06 correctly and it's going to fail to achieve your tasks that you need it to
480:10 do. And then vague responses. If it doesn't have clear direction and a clear
480:13 sight of like what is the goal? What am I trying to do here? It is just not
480:16 going to be useful. It's going to be generic and it's going to sound very
480:20 obviously like it came from ChatGPT. So, a perfect example here is the
480:24 salesperson analogy. Let's say you hire a superstar salesman who is amazing,
480:29 great sales technique. He understands how to build rapport, closing
480:32 techniques, communication skills, just like maybe you're taking a GPT-4o model
480:36 out of the box and you're plugging it into your agent. Now, no matter how good
480:41 that model is or the salesperson is, there are going to be no closed sales
480:45 without the subject matter expertise, the business process knowledge, you
480:48 know, understanding the pricing, the features, the examples, all that kind of
480:52 stuff. So, the question becomes, how do you actually provide your AI agents with
480:56 better context? And there are three main ways here. The first one is within your
480:59 agent you have a system prompt. So this is kind of like the fine-tuning of the
481:03 model where we're training it on this is your role. This is how you should
481:05 behave. This is what you're supposed to do. Then we have the sort of memory of
481:09 the agent which is more of the short-term memory we're referring to
481:13 right here where it can understand like the past 10 interactions it had with the
481:17 user based on the input stuff like that. And then the final aspect which is very
481:21 very powerful is the RAG aspect, where it's able to go retrieve information
481:24 that it doesn't currently have but it's able to understand what do I need to go
481:28 get and where can I go get it. So it can either hit different APIs to get
481:31 real-time data or it can hit its knowledge base that hopefully is syncing
481:35 dynamically and is updated. So either way it's reaching outside of its system
481:40 prompt to get more information from these external sources. So anyways,
481:44 preloaded knowledge: this is basically where you tell the agent its job, its
481:48 description, its role. As if on day one of a summer internship, you told the
481:52 intern, "Okay, this is what you're going to do all summer." You would define its
481:55 job responsibilities. You would give key facts about your business, and you would
481:59 give it rules and guidelines to follow. And then we move on to the user specific
482:02 context, which is just sort of its memory based on the person it's
482:06 interacting with. So, this reminds the AI what the customer has already asked,
482:09 previous troubleshooting steps that have been taken, maybe information about the
482:14 customer. And without this user-specific context and memory, the AI is going to ask
482:17 the same questions over and over. It's going to forget what's already been
482:21 conversated about, and it'll probably just annoy the end user with repetitive
482:26 information and not very tailored information. So we're able to store
482:29 these past interactions so that the AI can see it before it responds and before
482:33 it takes action so that it's actually more seamless like a human conversation.
482:36 It's more natural and efficient. And then we have the aspect of the real-time
482:40 context. This is because there's some information that's just too dynamically
482:44 changing or too large to fit within the actual system prompt of the agent. So
482:48 instead of relying on this predefined knowledge, we can retrieve this context
482:51 dynamically. So maybe it's as simple as we're asking the agent what the weather
482:55 is. So, it hits that weather API in order to go access real-time current
482:58 information about the weather. It pulls it back and then it responds to us. Or
483:01 it could be, you know, we're asking about product information within a
483:04 database. So, it could go hit that knowledge base that has all of our
483:07 product information and it will search through it, look for what it needs, and
483:11 then pull it back and then respond to us with it. So, that's the aspect of RAG,
483:15 and it's super, super powerful.
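Putting those three context sources together, one chat turn looks roughly like this sketch; the LLM call and the knowledge-base object are placeholders:

```python
SYSTEM_PROMPT = "You are a support agent for ..."  # preloaded knowledge: role, rules

def answer(user_msg, memory, knowledge_base):
    # Real-time context (RAG): fetch what the model doesn't already know.
    docs = knowledge_base.search(user_msg, top_k=3)

    messages = (
        [{"role": "system", "content": SYSTEM_PROMPT}]   # 1. preloaded knowledge
        + memory[-10:]                                   # 2. last ~10 turns of memory
        + [{"role": "user",                              # 3. retrieved context + input
           "content": user_msg + "\n\nRetrieved context:\n" + "\n".join(docs)}]
    )
    reply = call_llm(messages)  # placeholder chat-completion call
    memory += [{"role": "user", "content": user_msg},
               {"role": "assistant", "content": reply}]
    return reply
```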
483:19 Okay, and this is a great segue from RAG. Now we're talking about vector databases and when not to use a vector database. So, I
483:23 think something similar happened here with vector databases as the same way it
483:26 happened with AI agents is that it was just some cool magic buzzword and it
483:31 sounded like almost too good to be true. So, everyone just started overusing them
483:35 and overusing the term. And that's definitely something that I have to
483:39 admit that I fell victim to because when I first started building this stuff, I
483:42 was taking all types of data, no matter what it was, and I was just chucking it
483:46 into a vector database and chatting with it. And because, you know, 70% of the time
483:50 I was getting the right answers, I was like, this is so cool. Because,
483:53 you know, as you can see based on this illustration, it is sort of like that
483:58 multi-dimensional data representation. It's a multi-dimensional space where the
484:02 data points that you're storing are stored as these little vectors, these
484:05 little dots everywhere. And they're not just placed in there. They're placed in
484:09 there intelligently, because the actual context of the chunk that you're putting
484:13 into the vector database is placed based on its meaning. So, it's embedded
484:17 based on all these numerical representations of data. As you can see,
484:21 like right up here, this is what the sort of um embedding dimensions look
484:25 like. And each point has meaning. And so, it's placed somewhere where other
484:29 things are placed that are similar. They're placed near them. So, over here
484:32 we have like, you know, animals, cat, dog, wolf, those are placed similarly.
484:35 We have like fruits over here, but also like tech stuff because Google's here
484:39 and Apple, which isn't the fruit, but it's also the tech brand. So, you know,
484:43 it also kind of shifts as you embed more vectors in there. So, it's just
484:46 always changing. It's very intelligent, and the agent's able to scan
484:50 everything and grab back all the chunks that are relevant really quickly. And
484:53 like I said, it's just kind of one of those buzzwords that's super cool.
484:58 However, even though it sounds cool, after building these systems for a
485:01 while, I learned that vector databases are not always necessary for most
485:05 business automation needs. If your data is structured and it needs exact
485:09 retrieval, which a lot of times company data is very structured and you do need
485:13 exact retrieval, a relational database is going to be much better for that use
485:17 case. And you know, just because it's a buzzword, that's exactly what it is, a
485:21 buzzword. So that doesn't always mean it's the best tool for the job. So
485:24 because in college I studied business analytics, I've had a little bit of a
485:28 background with like databases, relational databases, and analytics. Um,
485:32 but if you don't really understand the difference between structured and
485:35 unstructured data and what a relational database is, we'll go over it real
485:39 quick. Structured data is basically anything that can fit into rows and
485:44 columns because it has an organized sort of predictable schema. So in this
485:47 example, we're looking at customer data and we have two different tables and
485:51 this is relational data because over here we have a customer ID column. So
485:56 customer ID 101 is Alice and we have Alice's email right here. Customer ID
486:00 102 is Bob. We have Bob's email and then we have a different table that is able
486:04 to relate back to this customer lookup table because we match on the fields
486:08 customer ID. Anyways, this is an order table it looks like. So we have order
486:12 one by customer ID 101 and the product was a laptop. And we may think okay well
486:16 we're looking at order one. Who was that? We can relate it back to this
486:20 table based on the customer ID and then we can look up who that user was.
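That lookup is exactly what a relational join does. Here's a runnable toy version of those two tables using Python's built-in sqlite3 (the email addresses are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT, email TEXT);
    CREATE TABLE orders    (order_id INTEGER PRIMARY KEY, customer_id INTEGER, product TEXT);
    INSERT INTO customers VALUES (101, 'Alice', 'alice@example.com'),
                                 (102, 'Bob',   'bob@example.com');
    INSERT INTO orders    VALUES (1, 101, 'Laptop');
""")

# "Who placed order 1?" -- relate orders back to customers on customer_id.
row = conn.execute("""
    SELECT c.name, c.email
    FROM   orders o
    JOIN   customers c ON c.customer_id = o.customer_id
    WHERE  o.order_id = 1
""").fetchone()
print(row)  # ('Alice', 'alice@example.com')
```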
486:23 So there's a lot of use cases out there. When I said, you know, a lot of business
486:27 data is going to be structured like user profiles, sales records, you know,
486:32 invoice details, all this kind of stuff. You know, even if it's not a relational
486:35 aspect of linking two tables together, if it's structured data, which is going
486:39 to be, you know, a lot of chart stuff, number stuff, um, Excel sheets, Google
486:44 Sheets, all that kind of stuff, right? If it is, it's going
486:47 to be a lot more efficient to query it using SQL rather than trying to
486:51 vectorize it and put it into a vector database for semantic search.
486:56 So, as a non-programmer, you know, I'm sure you've been
486:59 hearing about SQL querying and maybe you don't understand exactly what it is. This is
487:03 what it is, right? So we're almost kind of using natural language to extract
487:07 information, but we could have, you know, half a million records in a table.
487:10 And so it's just a quicker way to actually filter through that stuff to
487:14 get what we need. So in this case, let's say the SQL query we're doing is based
487:19 on the user question of can you check the status of my order for a wireless
487:23 mouse placed on January 10th. On the left, we have an orders table. And this
487:26 is the information we need. These are the fields, but there may be 500,000
487:30 records. So we have to filter through it really quickly. And how we would do this
487:33 is we would say, okay, first we're going to do a select statement, which just
487:36 means, okay, we just want to see order ID, order date, order status, because
487:39 those are the only columns we care about. We want to grab it from the
487:42 orders table. So, this table and then now we set up our filters. So, we're
487:46 just looking for only rows where product name equals wireless mouse because
487:50 that's the product she bought. And the order date is January 10,
487:57 2024. So, we're just saying whenever these two conditions are met, that's
488:01 when we want to grab those records and actually look at them. So, that's an
488:04 example of what a SQL query is doing.
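Written out, that query is only a few lines. It's shown here through Python's built-in sqlite3; the shop.db file and exact column names are assumptions based on the example:

```python
import sqlite3

conn = sqlite3.connect("shop.db")  # hypothetical database holding the orders table
rows = conn.execute("""
    SELECT order_id, order_date, order_status      -- only the columns we care about
    FROM   orders                                  -- the table with ~500,000 records
    WHERE  product_name = 'Wireless Mouse'         -- filter 1: the product she bought
      AND  order_date   = '2024-01-10'             -- filter 2: the order date
""").fetchall()
print(rows)  # just the matching orders, not half a million rows
```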
488:08 And then on the other side of things, we have unstructured data. Based on my
488:11 experience, the best use case for unstructured data going into a vector database
488:16 is just vectorizing a ton of text. So big walls of text, chunking them up,
488:19 throwing them into a vector database, and they're placed, you know, based on
488:21 the meaning of those chunks, and then can be grabbed back semantically,
488:26 intelligently by the agent. But anyways, this is a quick visualization that I
488:29 made right over here. Let's say we have tons of PDFs, and they're just
488:33 basically policy information. We take that text, we chunk it up. So, we're
488:36 just splitting it based on the characters within the actual content.
488:40 And then each chunk becomes a vector, which is just one of these dots
488:43 in this three-dimensional space. And they're placed in different areas, like
488:48 I said, based on the actual meaning of these chunks. So, super cool stuff, right?
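The chunking step itself can be very simple. Here's a bare-bones character splitter; the sizes are arbitrary, and real splitters also try to break on sentence or paragraph boundaries:

```python
def chunk_text(text, size=800, overlap=100):
    """Split text into fixed-size character chunks with a little overlap."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # overlap preserves context across chunk boundaries
    return chunks

# Each chunk would then be embedded and stored as one vector (one "dot").
```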
488:52 So then when the agent wants to, you know, look in the vector database to
488:55 pull some stuff back, it basically makes a query and vectorizes that query
488:59 because it will be placed near other things that are related and then it will
489:02 grab like everything that's near it and that's how it pulls back if we're doing
489:05 like a nearest-neighbor search. But I don't want to get too technical here. I
489:09 wanted to show an example of like why that's beneficial. So on the left we
489:15 have product information about blankets and on the right we also have product
489:18 information about blankets and we just decided on the right it's a vector
489:22 database on the left it's a relational database and so let's say we hooked this
489:27 up to you know a customer chatbot on a website and the customer asked I'm
489:32 looking for blankets that are fuzzy now if it was a relational database the
489:36 agent would be looking through and querying for you know where the
489:40 description contains the word fuzzy, or maybe material contains the word
489:44 fuzzy. And because there are no instances of the word fuzzy right here, we may get
489:49 nothing back. But on the other side of things, when we have the vector
489:52 database, because each of these vectors are placed based on the meaning of their
489:56 description and their material, the agent will be able to figure out, okay,
490:00 if I go over here and I pull back these vectors, these are probably fuzzy
490:03 because I understand that it's cozy fleece or it's um, you know, handwoven
490:08 cotton. So that's like why there's some extra benefits there because maybe it's
490:12 not a word for word match, but the agent can still intelligently pull back stuff
490:15 that's similar based on the actual context of the chunks and the meaning.
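Under the hood, "near" just means a high similarity score between vectors. Here's a toy version with made-up three-dimensional embeddings; real embeddings have hundreds or thousands of dimensions and come from an embedding model:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up vectors: imagine an embedding model placed these.
query        = [0.9, 0.1, 0.3]   # "fuzzy blanket"
cozy_fleece  = [0.8, 0.2, 0.3]   # "cozy fleece throw blanket"
office_chair = [0.1, 0.9, 0.4]   # "ergonomic office chair"

print(cosine_similarity(query, cozy_fleece))   # high score -> gets retrieved
print(cosine_similarity(query, office_chair))  # low score  -> gets ignored
```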
490:20 Okay, moving on to number five. Why prompting is critical for AI agents. Um,
490:24 we already talked about it a little bit, I guess, in the context is everything
490:28 section because prompting is giving it more context, but this should be a whole
490:34 lesson in itself because it is truly an art. And you have to find that fine line
490:38 between not over-prompting it, so you minimize your token
490:40 usage, but also giving it enough information. But, um, when people
490:44 think of prompting, they think of ChatGPT, as you can see right here,
490:47 where you have the luxury of talking to chat, it's going to send you something
490:50 back. You can tell it, hey, make that shorter, or hey, make it more
490:53 professional. It'll send it back and you can keep going back and forth and making
490:57 adjustments until you're happy with it and then you can finally accept the
491:00 output. But when we're dealing with AI agents and we're trying to make these
491:04 systems autonomous, we only have one shot at it. So, we're going to put in a
491:07 system prompt right here that the agent will be able to look at every time
491:10 there's like an input and we have to trust that the output and the actions
491:14 taken before the output are going to be high quality. And so, like I said, this
491:18 is a super interesting topic and if you want to see a video where I did more of
491:20 a deep dive on it, you can check it out. I'll tag it right here. Um, where I
491:24 talked about like this lesson, but the biggest thing I learned building these
491:28 agents over the past six months was reactive prompting is way better than
491:33 proactive prompting. Admittedly, when I started prompting, I did it all wrong. I
491:38 was lazy and I would just like grab a custom GPT that I saw someone use on
491:42 YouTube for, you know, a prompt generator that generates the most
491:45 optimized prompts for your AI agents. I think that that's honestly a bunch of
491:49 garbage. I even have created my own AI agent system prompt architect and I
491:52 posted it in my community and people are using it, but I wouldn't recommend
491:57 using it, to be honest. Um, nowadays I think that the best practice is to write
492:01 all of your prompts from scratch by hand from the beginning and start with
492:04 nothing. So, that's what I meant by saying reactive prompting. Because if
492:07 you're grabbing a whole, you know, let's say you have 200 lines of prompt, and
492:10 you throw it in here into your system prompt and then you just start testing
492:14 your agent, you don't know what's going on and why the agent's behaving as it
492:19 is. You could have an issue pop up and you add a different line in the system
492:23 prompt and the issue that you originally were having is fixed, but now a new
492:26 issue's popped up and you're just going to be banging your head against the wall
492:30 trying to debug this thing by taking out lines, testing, adding lines, testing.
492:34 It's just going to be such a painful process when in reality what you should
492:38 do is reactive prompt. So start with nothing in the system prompt. Give your
492:42 agent a tool and then test it. Throw in a couple queries and see if you're
492:45 liking what's coming back. You're going to observe that behavior and then you
492:49 have the ability to correct the system prompt reactively. So based on what you
492:53 saw, you can add in a line and say, "Hey, don't do that." Or, you know, this
492:57 worked. Let's add another tool and add another prompt now or another line in
493:01 the prompt. Because what we know right now is that it's working based on what
493:06 we have. That way if we do add a line and then we test and then we observe the
493:10 behavior and we see that it broke, we know exactly what broke this automation
493:13 and we can pinpoint it rather than if we threw in a whole pre-generated system
493:17 prompt. So that's the main reason why I don't do that anymore. And then it's
493:22 just that loop of test, reprompt, test again, reprompt. Um, and what's super
493:27 cool about this is that you can basically hard-prompt your agent with
493:31 things in the system prompt because you're able to show it examples of, you
493:35 know, hey, I just asked you this and you took these steps. That was wrong. Don't
493:39 do that again. This is what you should have done. And basically, if you give it
493:42 that example within the system prompt, you're training this thing to not behave
493:45 like that. And you're only improving the consistency of your agent's performance.
493:50 So, the key elements of a strong AI agent prompt. And it isn't like every
493:54 single time these are the five things you must have, because every agent's
493:57 different. For example, if you're creating a context creation agent, you
494:01 wouldn't really need a tool section if it doesn't have any tools.
494:03 You'd just be prompting it about its output and about its role. But anyways,
494:07 the first one that we're talking about is role. This is sort of just like
494:10 telling the AI who it is. So this could be as simple as like you're a legal
494:13 assistant specializing in corporate law. Your job is to summarize contracts in
494:18 simple terms and flag risky clauses. Something like that. It gives the AI
494:22 clear purpose and it helps the model understand the tone and the scope of its
494:26 job. Without this, the AI is not going to know how to frame responses and
494:28 you're going to get either very random outputs or you're going to get very
494:32 vague outputs that are very clearly generated by AI. Then of course you have
494:36 the context which is going to help the agent understand, you know, what is
494:39 actually coming in every time because essentially you're going to have
494:41 different inputs every time even though the system prompt is the same. So saying
494:44 like this is what you're going to receive, this is what you're going to do
494:48 with it, um this is your end goal. So that helps tailor the whole process and
494:51 make it more seamless as well. That's one common mistake I actually see with
494:54 people's prompting when they start is they forget to define what are you going
494:57 to be getting every time? Because the agent's going to be getting a
494:59 ton of different emails or maybe a ton of different articles, but it needs to
495:02 know, okay, this information that you're throwing at me, what is it? Why am I
495:06 getting it? Then of course the tool instructions. So when you're building a
495:09 tools agent, this is the most important thing in my mind. Yes, it's good to add
495:13 rules and show, like, when to use each thing, but having an actual section for
495:17 your tools is going to increase the consistency a lot. At least that's what
495:21 I found because this tells the AI exactly what tools are available, when
495:26 to use them, how they work. Um, and this is really going to ensure correct tool
495:29 usage, rather than the AI trying to go off of these sort of vague guidelines,
495:34 because it's a non-deterministic workflow, and trying to guess
495:39 which one will do what. So yeah, have a tool section and define your tools.
495:42 Then you've got your rules and constraints and this is going to help
495:45 prevent hallucination. It's going to help the agent stick to sort of like a
495:49 standard operating procedure. Now, you just have to be careful here because you
495:52 don't want to say something like do all of these in this order every time
495:56 because then it's like why are you even using an agent? You should just be using
495:59 a workflow, right? But anyways, just setting some foundational like if X do
496:06 Y, if Z do A, like that sort of thing. And then finally, examples, which I
496:10 think are super super important, but I would never just put these in here
496:13 blind. I would only use examples to directly counter and correct
496:18 something that's happened. So what I alluded to earlier with the hard
496:20 prompting. So let's say you give the agent an input. It calls tool one and
496:24 then it gives you an output that's just incorrect, completely incorrect. You'd
496:28 want to give it an example. You could show, okay, here's the input I
496:30 just gave you. Now here's what you should have done. Call tool two and then
496:34 call tool three and then give me the output. And then it knows like, okay,
496:37 that's what I did. This is what I should have done. So if I ever get an input
496:40 similar to this, I can just call these two tools because I know that's an
496:44 example of like how it should behave. So hard prompting is really really going to
496:48 come in handy and not just in the examples but also just with the rest of
496:52 your system prompt.
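Pulling those five sections together, a skeleton system prompt might look like the sketch below. The contents are placeholder examples (the company and the tools are invented), not a template to paste in wholesale; per the reactive-prompting lesson, build yours up line by line:

```python
# Skeleton of the five sections, assembled as one system-prompt string.
SYSTEM_PROMPT = """\
## Role
You are an email assistant for Acme Corp. You label incoming emails and draft replies.

## Context
Each input is one incoming email (sender, subject, body). Your end goal is a
correctly labeled email and, when requested, a draft reply.

## Tools
- get_labels: returns every label name and ID. Call it before labeling anything.
- label_email(message_id, label_id): applies one label to one message.

## Rules
- Never send anything; only draft.
- If you are missing an ID, fetch it with a tool. Do not guess IDs.

## Examples
Input: an invoice email -> Wrong: called label_email with a guessed label_id.
                        -> Right: call get_labels first, then label_email.
"""
```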
496:57 All right, moving on to number six: scaling agents can be a nightmare. And this is all part of one of the hard truths I talked
497:00 about earlier where a lot of the stuff you see online is a great proof of
497:04 concept, a great MVP, but if you were to try to push this into production in your
497:07 own business, you're going to notice it's not there yet. Because when you
497:10 first build out these AI agents, everything can seem to work fine. It's a
497:13 demo. It's cool. It really opens your eyes to, you know, the capabilities, but
497:17 it hasn't usually gone through that rigorous testing and evaluation and
497:23 setting up these guard rails um and all of that, you know, continuous monitoring
497:26 that you need to do to evaluate its performance before you can push it out
497:30 to all, you know, 100 users that you want to eventually push it out to. You
497:34 know, on a single user level, if you have a few hallucinations every once in
497:38 a while, it's not a huge deal. But as you scale the use case, you're just
497:42 going to be scaling hallucinations and scaling problems and scaling all these
497:45 failures. So that's where it gets tricky. You can start to get retrieval
497:48 issues as your database grows. It's going to be harder for your agent to cut
497:51 through the noise and grab what it needs. So you're going to get more, you
497:54 know, inaccuracies. You're going to have some different performance bottlenecks
497:57 and the agent's, you know, latency is going to increase. You're going to start
498:00 to get inconsistent outputs and you're going to experience all those edge cases
498:04 that you hadn't thought of when you were the only one testing because, you know,
498:06 there's just an infinite amount of scenarios that an agent could be exposed
498:10 to. So, a good little lesson learned here would be to scale vertically before
498:14 you start to try to scale horizontally, which we'll break down. And I
498:17 made this little visualization so we can see what that means. So, let's say we
498:21 want to help this business with their customer support, sales, inventory, and
498:26 HR management. Rather than building out little building blocks of tools within
498:30 each of these processes, let's try to perfect one area first vertically and
498:34 then we'll start to move across the organization and look at doing other
498:37 things and scaling to more users. So, because we can focus on this one area,
498:41 we can set up a knowledge base and set up like the data sources and build that
498:45 automated pipeline. We can set up how this kind of stuff gets organized with
498:48 our sub workflows. We can set up, you know, an actual agent that's going to
498:52 have different tool calls. And within this process, what we have over here are
498:57 evaluations, monitoring performance, and then setting up those guard rails
499:01 because we're testing throughout this, you know, ecosystem vertically and
499:04 getting exposure to all these different edge cases before we try to move into
499:08 other, you know, areas where we need to basically start this whole process
499:11 again. We want to have done the end-to-end system first, understand, you
499:16 know the problems that may arise, how to make it robust and how to evaluate and
499:20 you know iterate over it before we start making more automations. And so like I
499:24 alluded to earlier, if you try to start scaling horizontally too quickly before
499:28 you've done all this testing, you are going to notice that hallucinations are
499:31 going to increase. Your retrieval quality is going to drop as more
499:35 users come in. The agent's handling more memory, it's handling more
499:39 knowledge in its database to try to cut through, your response times are going to
499:42 slow, and then you're just going to get more inconsistent results. And so you
499:45 can do things like, you know, setting strict retrieval rules and guard rails.
499:49 You could do stuff like segmenting your data into different vector databases
499:53 based on the context, or different namespaces, or different, you know,
499:56 relational databases. You could use stuff like asynchronous processing or
500:00 caching in order to improve that response time. And then you could also
500:04 look at doing stuff like, you know, having an agent evaluate and making
500:08 sure that the confidence on these responses is above a certain threshold;
500:12 otherwise, we'll request human help and the agent won't actually respond
500:17 itself.
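That last guard rail can be as simple as a threshold check around the agent's draft answer. A sketch, where the threshold value and the helper functions are assumptions to tune per use case:

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; tune this for your use case

def deliver(draft_reply, confidence):
    """Only auto-respond when the evaluator's confidence clears the bar."""
    if confidence >= CONFIDENCE_THRESHOLD:
        send_to_user(draft_reply)        # placeholder: post the agent's answer
    else:
        escalate_to_human(draft_reply)   # placeholder: route to a human instead
```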
500:21 So the seventh one is that no-code tools like n8n have their limits. They're super great. They're really empowering
500:25 non-developers to get in here and spin up some really cool automations. And
500:29 it's really like the barrier to entry is so low. You can learn how to build this
500:32 stuff really quickly, which is why I think it's awesome and you know, it's
500:35 the main tool that I use um personally and also within my agency. But when you
500:39 start to get into what we just talked about with lesson number six, scaling
500:43 and making these things really robust and production ready, you may notice
500:47 some limits with no-code tools like n8n. Now, the reason I got into building
500:51 stuff like this is because, you know, I'm obviously a non-programmer and it has a
500:55 really nice drag and drop interface where you can build these workflows very
500:59 visually without writing scripts. So, n8n is, you know, basically open source,
501:03 because you can self-host it. The code is accessible. And it's built on top
501:07 of LangChain, which is basically a framework that helps connect
501:10 to different things and create these agents. And because of that,
501:14 it's just wrapped up really pretty for us in a graphical user interface where
501:18 we can interact with it in that drag and drop way without having to get in there
501:23 and, hands on keyboard, write code. And it has a ton of pre-built integrations
501:26 as you can see right here. Connect anything to everything. Um, I think
501:29 there's like a thousand integrations right here. And all of these different
501:33 integrations are API calls. They're just wrapped up once again in a pretty way
501:37 for us in a user interface. And like I talked about earlier, when I started, I
501:40 didn't even know what an API was. So the barrier to entry was super low. I was able
501:43 to configure this stuff easily. And besides these built-in integrations,
501:46 they have these really simple tool calls. So development is really fast
501:49 with building workflows compared to traditional coding. um the modularity
501:53 aspect because you can basically build out a workflow, you can save that as a
501:57 tool and then you can call that tool from any other workflow you want. So
502:00 once you build out a function once, you've basically got it there forever
502:03 and it can be reused which is really cool. And then my favorite aspect about
502:07 n8n and the visual interface is the visual debugging, because rather than having 300
502:11 lines of code and you know you're getting errors in line 12 and line 45,
502:15 you're going to see it's going to be red or it's going to be green and you know
502:18 if it's green you're good and if you see red there's an error. So you know
502:22 usually exactly where the problem's coming from, and you're
502:25 able to get in there look at the execution data and get to the bottom of
502:29 it pretty quick. But overall these no code platforms are great. They allow us
502:32 to connect APIs. We can connect to pretty much anything because we have an
502:37 HTTP request within n8n. They're going to be really, really good for
502:40 rule-based decision-making, like super solid. If we're creating workflows that are
502:43 just going to do some data manipulation and transferring data around you know
502:47 your typical ETL based on structured logic, super robust. You can make some
502:51 really really awesome basic AI powered workflows where you're integrating
502:54 different LLMs. You've got all the different chat models basically that you
502:57 can connect to um for you know different classification or content generation or
503:01 outreach anything like that. Um your multi-agentic workflows because like I
503:05 said earlier you have the aspect of creating different tools um as workflows
503:09 as well as creating agents as workflows that you can call on from multiple
503:12 agents. So you can really get into some cool multi-agentic inception thing going
503:16 on, with agents calling agents calling agents, and passing data
503:21 between different workflows and then just the orchestration of AI services,
503:25 coordinating multiple AI tools within a single process. So that's the kind of
503:29 stuff that n8n is going to be super, super good at. And now the hidden limitations
503:34 of these no-code AI workflow/agent builders. Let's get into it. Now, in my
503:38 opinion, this stuff really just comes down to when we're trying to get into
503:41 like enterprise solutions at scale with a ton of users and a ton of
503:44 authentication and a ton of data. Because if you're building out your own
503:48 internal automations, you're going to be solid. Like there's not going to be
503:50 limitations. If you're building out, you know, proof of concepts and MVPs, um,
503:55 YouTube videos, creating content, like you can do it all, I would say. But when
503:58 you need to start processing, you know, massive data sets that are going to
504:03 scale to thousands or millions of users, your performance can slow down or even
504:06 fail. And that's maybe where you'd want to rely on some custom code backend to
504:11 actually spin up these more robust functionalities. In these agentic
504:14 systems, tool calling is really, really critical. The agent needs to be able to
504:18 decide which one to use and do it efficiently. And like I talked about,
504:22 n8n is built on top of LangChain. It provides a structured way to call AI
504:26 models and APIs, but it lacks some of that flexibility of writing really
504:30 custom code within there for complex decision-making. And then when it comes to
504:34 authentication at scale, it can struggle with like secure large scale
504:38 authentication and data access control. Obviously, you can hook up to external
504:41 systems to help with some of that processing, but when it comes to maybe
504:45 like handling OAuth tokens and all these encrypted credentials and session
504:49 management, not that it's not doable with n8n, it just seems like getting
504:54 in there with some custom code, it could be quicker and more robust. And also,
504:59 that's coming from someone who doesn't actually do that myself. Um, just some
505:03 stuff I've heard and you know, with what's going on within the agency. Now
505:06 ultimately it seems like if you are delivering this stuff at scale for some
505:10 big clients, the best approach is going to be a mix, a hybrid of no-code
505:15 and custom code, because you can use n8n to spin up stuff really quick. You've
505:20 got that modularity, you can orchestrate, automate, you know, connect to anything
505:23 you need, but then work in every once in a while some custom Python script for
505:27 some of that complex you know large scale processing and data handling. And
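To make that concrete, here's a minimal, hypothetical sketch of the kind of custom Python script you might pair with n8n, invoked from the workflow and handed a file path. The file name and column names are made up for illustration:

```python
# Hypothetical helper for the hybrid approach: n8n stays the orchestrator,
# and this script does the bulk data crunching that gets slow in no-code nodes.

import csv
import json
import sys
from collections import defaultdict

def summarize(path):
    """Aggregate order amounts per customer from a (potentially huge) CSV."""
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["customer_id"]] += float(row["amount"])
    return dict(totals)

if __name__ == "__main__":
    # n8n would call this with a file path and pick the JSON back up from stdout.
    print(json.dumps(summarize(sys.argv[1])))
```

The exact script doesn't matter; the point is that the no-code side keeps the orchestration and a few lines of custom code handle the heavy processing.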
505:31 And when you combine these two together, you're going to be able to spin some
505:34 stuff up a lot quicker, and that's going to be pretty robust and powerful. Thanks for watching.
9:03 start building some stuff. All right. So, before we dive into actually building AI agents, I
9:08 want to share some eyeopening research that underscores exactly why you're
9:11 making such a valuable investment in yourself today. This research report
9:14 that I'm going to be walking through real quick will be available for free in
9:17 my Skool community if you want to go ahead and take a look at it. It's got a
9:21 total of 48 sources that are all from within the past year. So, you know it's
9:24 real, you know it's relevant, and it was completely generated for me using
9:27 Perplexity, which is an awesome AI tool. So, just a year ago, AI was still
9:31 considered experimental technology for most businesses. Now, it's become the
9:35 core driver of competitive advantage across every industry and business size.
9:39 What we're witnessing isn't just another tech trend. It's a fundamental business
9:44 transformation. Let me start with something that might surprise you. 75%
9:48 of small businesses now use AI tools. That's right. This isn't just enterprise
9:52 technology anymore. In fact, the adoption rates are climbing fastest
9:55 among companies generating just over a million dollars in revenue at 86%.
9:59 What's truly remarkable is the investment threshold. The median annual
10:03 AI investment for small businesses is just $1,800. That's less than $150
10:08 per month to access technology that was science fiction just a few years
10:13 ago. Now, I know some of you might be skeptical about AI's practical value.
10:16 Let's look at concrete outcomes businesses are achieving. Marketing
10:20 teams are seeing a 22% increase in ROI for AI-driven campaigns. Customer service
10:25 AI agents have reduced response time by 60% while resolving 80% of inquiries
10:29 without human intervention. Supply chains optimized with AI have cut
10:34 transportation costs by 5 to 10% through better routing and demand forecasting.
10:37 These are actual measured results from implementations over the past year. Now,
10:40 for those of you from small organizations, consider these examples.
10:44 Henry's House of Coffee used AI-driven SEO tools to improve their product
10:48 descriptions, resulting in a 200% improvement in search rankings and 25%
10:53 revenue increase. Vanisec insurance implemented custom chat bots that cut
10:57 client query resolution time from 48 hours to just 15 minutes. Small
11:01 businesses using Zapier automations saved 10 to 15 hours weekly on routine
11:06 data entry and CRM updates. What's revolutionary here is that none of these
11:09 companies needed to hire AI specialists or data scientists to achieve these
11:15 results. The economic case for AI skills is compelling. 54% of small and medium
11:19 businesses plan to increase AI spending this year. 83% of enterprises now
11:23 prioritize AI literacy in their hiring decisions. Organizations with AI trained
11:27 teams are seeing 5 to 8% higher profitability than their peers. But
11:31 perhaps most telling is this. Small businesses using AI report 91% higher
11:37 revenue growth than non-AI adopters. That gap is only widening. So, the opportunity
11:42 ahead: the truth is, mastering AI is no longer optional. It's becoming the price
11:45 of entry for modern business competitiveness. Those who delay risk
11:48 irrelevance while early adopters are already reaping the benefits of
11:52 efficiency, innovation, and market share gains. Now, the good news is that we're
11:56 still in the early stages. By developing these skills now, you're positioning
11:59 yourself at the forefront of this transformation and going to be in
12:02 extremely high demand over the next decade. So, let's get started building
12:10 agents. All right, so here we are on n8n's website. You can get here using
12:14 the link in the description. And what I'm going to do is go ahead and sign up
12:17 for a free trial with you guys. And this is exactly the process you're going to
12:19 take. And you're going to get two weeks of free playing around. And like I said,
12:23 by the end of those two weeks, you're already going to have automations up and
12:26 running and tons of templates imported into your workflows. And I'm not going
12:29 to spend too much time here, but basically n8n just lets you automate
12:33 anything. Any business process that you have, you can automate it visually with
12:37 no code, which is why I love it. So here you can see n8n lets you automate
12:40 business processes without limits on your logic. It's a very visual builder.
12:44 We have a ton of different integrations. We have the ability to use code if you
12:48 want to. Lots of native nodes to do data transformation. And we have tons of
12:52 different triggers, tons of different AI nodes. And we're going to dive into this
12:55 so you can understand what's all going on. But there's also hundreds of
12:58 templates to get you started. Not only on the n8n website itself, but also in
13:02 my free Skool community. I have almost 100 templates in there that you can plug
13:05 in right away. Anyways, let's scroll back up to the top and let's get started
13:08 here with a new account. All right. So, I put in my name, my email,
13:11 password, and I give my account a name, which will basically be up in the top
13:16 search bar. It'll be like nateherkdemo.app.n8n.cloud. So, that's what
13:19 your account name means. And you can see I'm going to go ahead and start our
13:22 14-day free trial. Just have to do some quick little onboarding. So, it asks us
13:26 what type of team are we on. I'm just going to put product and design. It asks
13:29 us the size of our company. It's going to ask us which of these things do we
13:32 feel most comfortable doing. These are all pretty technical. I just want to put
13:35 none of them, and that's fine. And how did you hear about n8n? Let's go ahead
13:39 with YouTube and submit that off. And now you have the option to invite other
13:42 members to your workspace if you want to collaborate and share some credentials.
13:44 For now, I'm just going to go ahead and skip that option. So from here, our
13:48 workspace is already ready. There's a little quick start guide you could watch
13:50 from n8n's YouTube channel, but I'm just going to go ahead and click on
13:53 start automating. All right, so here we are. This is what n8n looks like. And
13:57 let's just familiarize with this dashboard a little bit real quick. So up
14:00 in the top left, we can see we have 14 days left in our free trial and we've
14:05 used zero out of 1,000 executions. An execution just basically means when you
14:09 run a workflow from end to end that's going to be an execution. So we can see
14:12 on the left-hand side we have overview. We have like a personal set of projects.
14:15 We have things that have been shared with us. We have the ability to add a
14:19 project. We have the ability to go to our admin panel where we can upgrade our
14:23 instance of n8n. We can turn it off. That sort of stuff. So here's my admin
14:26 panel. You can see how many executions I have, how many active workflows I have,
14:29 which I'll explain what that means later. We have the ability to go ahead
14:33 and manage our n8n versions. And this is where you could kind of upgrade your
14:36 plan and change your billing information, stuff like that. But you'll
14:39 notice that I didn't even have to put any billing details to get started with
14:43 my two-week free trial. But then if I want to get back into my workspace, I'm just
14:45 going to click on open right here. And that will send us right back into this
14:49 dashboard that we were just on. Cool. So right here we can see we can either
14:53 start from scratch, a new workflow, or we can test a simple AI agent example.
14:56 So let's just click into here real quick and break down what is actually going on
15:00 here. So, in order for us to actually access this demo where we're going to
15:03 just talk to this AI agent, it says that we have to start by saying hi. So,
15:06 there's an open chat button down here. I'm going to click on open chat and I'm
15:11 just going to type in here, hi. And what happens is our AI agent fails because
15:15 this is basically the brain that it needs to use in order to think about our
15:18 message and respond to us. And what happens is we can see there's an error
15:21 message. So, because these things are red, I can click into it and I can see
15:25 what is the error. It says error in subnode OpenAI model. So that would be
15:29 this node down here which is called OpenAI model. I would click into this
15:33 node and we can basically see that the error is there is no credentials. So
15:38 when you're in n8n what happens is in order to access any sort of API which
15:41 we'll talk about later but in order to access something like your Gmail or
15:47 OpenAI or your CRM you always need to import some sort of credential which is
15:50 just a fancy word for a password in order to actually like get into that
15:54 information. So right here we can see there's 100 free credits from OpenAI.
15:58 I'm going to click on claim credits. And now we're just using our n8n free
16:02 OpenAI API credits and we're fine on this front. But don't worry, later in
16:05 this video I'm going to cover how we can actually go to OpenAI and get an API key
16:10 and create our own password in here. But for now, we've claimed 100 free credits,
16:13 which is great. And what I'm going to do is just go ahead and resend this message
16:16 that says hi. So I can actually go to this hi text and I can just click on
16:20 this button which says repost message. And that's just going to send it off
16:23 again. And now our agent's going to actually be able to use its brain and
16:27 respond to us. So what it says here is welcome to n8n. Let's start with the
16:30 first step to give me memory. Click the plus button on the agent that says
16:34 memory and choose simple memory. Just tell me once you've done that. So sure,
16:37 why not? Let's click on the plus button under memory. And we'll click on simple
16:41 memory real quick. And we're already set up. Good to go. So now I'm just going to
16:46 come down here and say done. Now we can see that our agent was able to use its
16:49 memory and its brain in order to respond to us. So now it can prompt us to add
16:54 tools. It can do this other stuff, but we're going to break that down later in
16:57 this video. Just wanted to show you real quick demo of how this works. So, what I
17:01 would do is up in the top right, I can click on save just to make sure that
17:05 what we've done is actually going to be saved. And then to get back out to the
17:08 main screen, I'm going to click on either overview or personal. But if I
17:11 click on overview, that just takes us back to that home screen. But now, let's
17:15 talk about some other stuff that happens in a workflow. So, up in the top right,
17:19 I'm going to click on create workflow. You can see now this opens up a new
17:22 blank page. And then you have the option up here in the top left to name it. So
17:26 I'm just going to call this one demo. Now we have this new workflow that's
17:31 saved in our n8n environment called demo. So a couple things before we actually
17:34 drag in any nodes is up here. You can see where is this saved. If you have
17:37 different projects, you can save workflows in those projects. If you want
17:40 to tag them, you can tag different things like if you have one for customer
17:45 support or you have stuff for marketing, you can give your workflows different
17:48 tags just to keep everything organized. But anyways, every single workflow has
17:53 to start off with some sort of trigger. So when I click on add first step, it
17:56 opens up this panel on the right that says what triggers this workflow. So we
18:00 can have a manual trigger. We can have a certain event like a new message in
18:04 Telegram or a new row in our CRM. We can have a schedule, meaning we can set this
18:08 to run at 6 a.m. every single day. We can have a web hook call, form
18:11 submission, chat message like we saw earlier. There's tons of ways to
18:15 actually trigger a workflow. So for this example, let's just say I'm going to
18:18 click on trigger manually, which literally just gives us this button
18:21 where if we click test workflow, it goes ahead and executes. Cool. So this is a
18:26 workflow and this is a node, but this is a trigger node. What happens after a
18:29 trigger node is different types of nodes, whether that's like an action
18:34 node or a data transformation node or an AI node, some sort of node. So what I
18:39 would do is if I want to link up a node to this trigger, I would click on the
18:42 plus button right here. And this pulls up a little panel on the right that says
18:46 what happens next. Do you want to take action with AI? Do you want to take
18:49 action within a certain app? Do you want to do data transformation? There's all
18:52 these other different types of nodes. And what's cool is let's say we wanted
18:55 to take action within an app. If I clicked on this, we can see all of the
18:58 different native integrations that n8n has. And once again, in order to connect
19:02 to any of these tons of different tools that we have here, you always need to
19:06 get some sort of password. So let's say Google Drive. Now that I've clicked into
19:09 Google Drive, there's tons of different actions that we can take and they're all
19:12 very intuitive. You know, would you want to copy a file, would you want to share a
19:16 file, do you want to create a shared drive? It's all very natural language. And
19:19 let's say, for example, I want to copy a file. In order for n8n to tell Google
19:24 Drive which file we want to copy, we first of all have to provide a
19:27 credential. So for every app you'll have to provide some sort of credential, and then
19:31 you have basically like a configuration panel right here in the middle which
19:35 would be saying what is the resource you want what do you want to do what is the
19:38 file, all this kind of stuff. So whenever you're in a node in n8n, what you're
19:42 going to have is on the left you have an input panel which is basically any data
19:45 that's going to be feeding into this current node. In the middle you'll have
19:49 your configuration which is like the different settings and the different
19:52 little levers you can tweak in order to do different things. And then on the
19:56 right is going to be the output panel of what actually comes out of this node
20:00 based on the way that you configured it. So every time you're looking at a node
20:03 you're going to have three main places input configuration and output. So,
20:07 let's just do a quick example where I'm going to delete this Google Drive node
20:11 by clicking on the delete button. I'm going to add an AI node because there's
20:14 a ton of different AI actions we can take as well. And all I'm going to do is
20:17 I'm just going to talk to OpenAI's, kind of like ChatGPT. So, I'll click on that
20:21 and I'm just going to click on message a model. So, once that pulls up, we're
20:25 going to be using our n8n free OpenAI credits that we got earlier. And as you
20:30 can see, we have to configure this node. What do we want to do? The resource is
20:34 going to be text. It could be image, audio, assistant, whatever we want. The
20:38 operation we're taking is we want to just message a model. And then of
20:42 course, because we're messaging a model, we have to choose from this list of
20:47 OpenAI models that we have access to. And actually, it looks like these n8n free
20:51 credits only actually give us access to a chat model. And this is a bit
20:54 different. Not exactly sure why. Probably just because they're free
20:57 credits. So, what we're going to do real quick is head over to OpenAI and get a
21:01 credential so I can just show you guys how this works with input configuration
21:06 and output. So, basically, you'd go to openai.com. You'd come in here and you'd
21:09 create an account if you don't already have one. If you have a ChatGPT account
21:12 and you're on like maybe the 20 bucks a month plan, that is different than
21:17 creating an OpenAI API account. So, you'd come in here and create an OpenAI
21:20 account. As you see up here, we have the option for ChatGPT login or API platform
21:25 login, which is what we're looking for here. So, now that you've created an
21:29 account with OpenAI's API, what you're going to do is come up to your dashboard
21:34 and you're going to go to your API keys. And then all you'd have to do is click
21:38 on create new key. Name this one whatever you want. And then you have a
21:42 new secret key. But keep in mind, in order for this key to work, you have to
21:45 have put in some billing information in your OpenAI account. So, throw in a few
21:49 bucks. They'll go a lot longer than you may think. And then you're going to take
21:52 that key that we just copied, come back into n8n, and under the credential
21:56 section, we're going to click on create new credential. All I had to do now was
22:00 paste in that API key right there. And then you have the option to name this
22:02 credential if you have a ton of different ones. So I can just say, you
22:07 know, like demo on May 21st. And now I have my credential saved and named
22:11 because now we can tell the difference between our demo credential and our n8n
22:15 free OpenAI credits credential. And now hopefully we have the ability to
22:18 actually choose a model from the list. So, as you can see, we can access
22:25 ChatGPT-4o latest, 3.5 Turbo, 4, 4.1 mini, all this kind of stuff. I'm going to
22:28 choose 4.1 mini, but as you can see, you can come back and change this whenever
22:31 you want. And I'm going to keep this really simple. In the prompt, I'm just
22:35 going to type in, tell me a joke. So now, when this node executes, it's
22:39 basically just going to be sending this message to OpenAI's model, which is
22:44 GPT-4.1 Mini, and it's just going to say, "Tell me a joke." And then what we're
22:48 going to get on the output panel is the actual joke.
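And just for reference, here's roughly what that node is doing under the hood, sketched with OpenAI's Python SDK. This assumes `pip install openai`, and the placeholder key stands in for your real secret key:

```python
# Rough equivalent of the "Message a model" node, using OpenAI's Python SDK.

from openai import OpenAI

client = OpenAI(api_key="sk-...")  # hypothetical placeholder for your API key

response = client.chat.completions.create(
    model="gpt-4.1-mini",  # the model we picked in the node
    messages=[{"role": "user", "content": "Tell me a joke."}],
)

# The node's output panel mirrors this structure: the reply text lives at
# choices[0].message.content.
print(response.choices[0].message.content)
```

So what I can do is come up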
22:52 right here and click on test step. This is going to run this node and then we
22:56 get an output over here. And as you can see both with the input and the output
23:00 we have three options of how we want to view our data. We can click on schema,
23:05 we can click on table or we can click on JSON. And this is all the exact same
23:09 data. It's just like a different way to actually look at it. I typically like to
23:13 look at schema. I think it just looks the most simple and natural language.
23:17 But what you can see here is the message that we got back from this OpenAI
23:21 model was, "Sure, here's a joke for you. Why don't scientists trust atoms?
23:25 Because they make up everything." And what's cool about schemas is that this
23:29 is all drag and drop. So now once we have this output, we could basically
23:32 just use it however we want. So if I click out of here and I open up another
23:36 node after this, and for now I'm just going to grab a set node just to show
23:39 you guys how we can drag and drop. What I would do is let's say we wanted to add
23:43 a new field and I'm just going to call this OpenAI's response. So we're
23:49 creating a field called OpenAI's response. And as you can see it says
23:52 drag an input field from the left to use it here. So as we know, every node,
23:57 we have input, configuration, output. On the input, we can basically choose which one
24:01 of these things we want to use. I just want to reference this content
24:04 which is the actual thing that OpenAI said to us. So I would drag this from
24:08 here right into the value. And now we can see that we have what's called a
24:12 variable. So anything that's wrapped in these two curly braces and
24:16 is green is a variable. And it's coming through as json.message.content,
24:20 which is basically just something that represents whatever is
24:24 coming from the previous node in the field called content. So we can see
24:29 right here with json.message.content: we have message, and within message we have
24:33 basically a subfolder called content, and that's where we access this actual
24:37 result, this real text. And you can see if I click into this variable, if I make
24:41 it full screen, we have an expression, which is our JSON variable, and then we
24:45 have our result, which is the actual text that we want back.
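And for reference, what that drag-and-drop actually wrote out is a real n8n expression. The field we just built in the Set node boils down to something like this (the field name is the one from this demo):

```
OpenAI's response: {{ $json.message.content }}
```

So now if I go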
24:49 ahead and test this step, we can see that the only thing output to us is OpenAI's
24:54 response, which is the text we want. Okay, so this would basically be a
24:58 workflow because we have a trigger and then we have our nodes that are going to
25:02 execute when we hit test workflow. So if I hit test workflow, it's going to run
25:05 the whole thing. And as you can see, super visual. We saw that OpenAI was
25:09 thinking and then we come over here and we get our final output which was the
25:13 actual joke. And now let me show you one more example of how we can map our
25:16 different variables without using a manual trigger. So let's say we don't
25:19 want a manual trigger. I'm just going to delete that. But now we have no way to
25:22 run this workflow because there's no sort of trigger. So I'm just going to
25:25 come back in here and grab a chat trigger just so we can talk to this
25:29 workflow in Naden. I'm going to hook it up right here. I would just basically
25:33 drag this plus into the node that I want. So I just drag it into OpenAI. And
25:37 now these two things are connected. So if I went into the chat and I said
25:41 hello, it's going to run the whole workflow, but it's not really going to
25:44 make sense because I said hello and now it's telling me a joke about why don't
25:48 scientists trust atoms. So what I would want to do is I'd want to come into this
25:52 OpenAI node right here. And I'm just going to change the actual prompt. So
25:56 rather than asking it to tell me a joke, what I would do is I'd just delete this.
26:00 And what I want to do is I want OpenAI to go ahead and process whatever I type
26:05 in this chat, the same way it would work if we were in ChatGPT in our browser and
26:10 whatever we type OpenAI responds to. So all I would have to do to do that is I
26:14 would grab the chat input variable right here. I would drag that into the prompt
26:20 section. And now if I open this up, it's looking at the expression called
26:23 json.chatInput, because this field right here is called chatInput. And then the
26:27 result is going to be whatever we type, anytime. Even if it's different 100
26:31 times in a row, it's always going to come back as a result that's different,
26:34 but it's always going to be referenced as the same exact expression. So, just
26:38 to actually show you guys this, let's save this workflow. And I'm going to
26:42 say, "My name is Nate. I like to eat ice cream. Make up a funny story about me."
26:53 Okay, so we'll send this off and the response that we should get will be one
26:57 that is actually about me and it's going to have some sort of element of a story
27:00 with ice cream. So let's take a look. So it said, "Sure, Nate, here's a funny
27:03 story for you." And actually, because we're setting it, it's coming through a
27:06 little weird. So let's actually click into here to look at it. Okay, so here
27:09 is the story. Let me just make this a little bigger. I can go ahead and drag
27:12 the configuration panel around by doing this. I can also make it larger or
27:16 smaller if I do this. So let's just make it small. We'll move it all the way to
27:20 the left and let's read the story. So, it said, "Sure, Nate. Here's a funny
27:24 story just for you. Once upon a time, there was a guy named Nate who loved ice
27:26 cream more than anything else in the world. One day, Nate decided to invent
27:31 the ultimate ice cream. A flavor so amazing that it would make the entire
27:34 town go crazy." So, let's skip ahead to the bottom. Basically, what happens is
27:38 from that day on, Nate's stand became the funniest spot in town. A place where
27:41 you never knew if you'd get a sweet, savory, or plain silly ice cream. And
27:45 Nate, he became the legendary ice cream wizard. That sounds awesome. So that's
27:49 exactly how you guys can see what happened was in this OpenAI node. We
27:54 have a dynamic input which was us talking to this thing in a chat trigger.
27:59 We drag in that variable that represents what we type into the user prompt. And
28:03 this is going to get sent to OpenAI's model of GPT 4.1 Mini because we
28:08 configured this node to do so. And the reason we were able to actually
28:11 successfully do that is because we put in our API key or our password for
28:17 OpenAI. And then on the right we get this output which we can look at either
28:22 in schema view, table view or JSON view. But they all represent the same data. As
28:26 you can see, this is the exact story we just read. Something I wanted to talk
28:29 about real quick that is going to be super helpful for the rest of this
28:32 course is just understanding what is JSON. And JSON stands for JavaScript
28:37 Object Notation. And it's just a way to identify things. And the reason why it's
28:40 so important to talk about is because over here, right, we all kind of know
28:43 what schema is. It's just kind of like the way something's broken down. And as
28:47 you can see, we have different drill downs over here. And we have different
28:50 things to reference. Then we all understand what a table is. It's kind of
28:53 like a table view of different objects with different things within them. Kind
28:56 of like the subfolders. And once again, you can also drag and drop from table
29:00 view as well. And then we have JSON, which also you can drag and drop. Don't
29:04 worry, you can drag and drop pretty much this whole platform, which is why it's
29:08 awesome. But this may look a little more code-y or intimidating, but I want to talk
29:13 about why it is not. So, first of all, JSON is so so important because
29:17 everything that we do is pretty much going to be built on top of JSON. Even
29:21 the workflows that you're going to download later, when you'll see like,
29:24 hey, you can download this template for free. When you download that, it's going
29:28 to be a JSON file, which means the whole workflow in n8n is basically represented
29:32 as JSON.
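Just to give you a feel for it, a downloaded workflow file is one big JSON object along these lines. This is a heavily trimmed, illustrative sketch, not a complete export, and the node type strings are assumptions on my part:

```json
{
  "name": "demo",
  "nodes": [
    { "name": "When clicking Test workflow", "type": "n8n-nodes-base.manualTrigger" },
    { "name": "OpenAI", "type": "n8n-nodes-base.openAi" }
  ],
  "connections": {}
}
```

And so, hopefully that doesn't confuse you guys, but what it is, is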
29:39 literally just key-value pairs. So what I mean by that is, like, over here the key
29:44 is index and index equals zero, and then we have, like, the role of the OpenAI
29:48 assistant: role is the key, and the value of the role is assistant. So it's
29:52 very, very natural language if you really break it down. What is the content that
29:55 we're looking at? The content that we're looking at is this actual content over
29:59 here.
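Here's that same response chunk laid out as JSON. The story text is shortened, but the keys match what we just walked through:

```json
{
  "index": 0,
  "message": {
    "role": "assistant",
    "content": "Sure, Nate, here's a funny story for you..."
  }
}
```

But like I said, the great thing about JSON is that pretty much every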
30:03 single large language model, like ChatGPT or Claude 3.5, is trained on
30:08 JSON and understands it, because it's universal. So, right
30:11 here on the left, we're looking at JSON. If I was to just copy this entire JSON,
30:17 go into ChatGPT and say, "Hey, help me understand this JSON." And then I just
30:21 basically pasted that in there, it's going to be able to tell us exactly like
30:24 which keys are in here and what those values are. So, it says this JSON
30:28 represents the response from an AI model like ChatGPT in a structured format. Let
30:32 me break it down for you. So, basically, it's going to explain what each part of
30:36 this JSON means. We can see the index is zero. That means it's the first
30:39 response. We can see the role equals assistant. We can see that the content
30:44 is the funny story about Nate. We can see all this stuff and it basically is
30:48 able to not only break it down for us, but let's say we need to make JSON. We
30:52 could say, "Hey, I have this natural language. Can you make that into JSON
30:55 for me?" Hey, can you help me make a JSON body where my name is Nate? I'm 23
31:02 years old. I went to the University of Iowa. I like to play pickleball. We'll
31:08 send that off and basically it will be able to turn that into JSON for us. So
31:13 here you go. We can see name Nate, age 23, education, University of Iowa,
31:18 interests, pickleball.
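Written out as an actual JSON body, what ChatGPT handed back looks like this (the exact formatting is my reconstruction from what's on screen):

```json
{
  "name": "Nate",
  "age": 23,
  "education": "University of Iowa",
  "interests": ["pickleball"]
}
```

And so don't let it overwhelm you. If you ever need help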
31:22 either making JSON or understanding JSON, throw it into ChatGPT and it will do
31:26 a phenomenal job for you. And actually, just to show you guys that I'm not
31:29 lying, let's just copy this JSON that ChatGPT gave us. Go back into our workflow
31:33 and I'm just going to add a set field just to show you guys. And instead of
31:36 manual mapping, I'm just going to set some data using JSON. So I'm going to
31:41 delete this, paste in exactly what ChatGPT gave me. Hit test step. And what do we
31:44 see over here? We see the name of someone named Nate. We see their age. We
31:47 see their education. And we see their interests, in either schema, table, or JSON
31:53 view. So hopefully that gives you guys some reassurance. And just once again,
31:57 JSON's super important. And it's not even code. That is just a really quick
32:03 foundational understanding of a trigger, different nodes, action nodes, AI nodes.
32:08 You have a ton to play with. And that's kind of the most overwhelming
32:12 part about n8n: you know what you need to do in your brain, but you don't know
32:16 maybe which is the best n8n node to actually get that job done. So that's
32:19 kind of the tough part; it's a lot of just getting the reps in, understanding
32:24 what node is best for what. But I assure you, by the time your two-week trial is up,
32:27 you'll have mastered pretty much all that. All right, but something else I
32:30 want to show you guys is now what we're looking at is called the editor. So if
32:34 you look at the top middle right here, we have an editor. And this is where we
32:37 can, you know, zoom out, we can move around, we can basically edit our
32:41 workflow right here. And it moves from left to right, as you guys saw, the same
32:46 way we read from left to right. And now, because we've done a few runs and
32:49 we've tested out these different nodes, what we'll click into is executions. And
32:53 this will basically show us the different times we've ran this workflow.
32:57 And what's cool about this is it will show us the data that has moved through.
33:01 So let's say you set up a workflow that every time you get an email, it's going
33:04 to send some sort of automated response. You could come into this workflow, you
33:07 could click on executions, and you could go look at what time they happened, what
33:11 actually came through, what email was sent, all that kind of stuff. So if I go
33:15 all the way down to this third execution, we can remember that what I
33:19 did earlier was I asked this node to tell us a joke. We also had a manual
33:23 trigger rather than a chat trigger. And we can see this version of the workflow.
33:28 I could now click into this node and I could see this is when we had it
33:32 configured to tell us a joke. And we could see the actual joke it told us
33:35 which was about scientists not trusting atoms. And obviously we can still
33:39 manipulate this stuff, look at schema, look at table and do the same thing on
33:42 that left-hand side as well. So I wanted to talk about how you can import
33:46 templates into your own n8n environment because it's super cool and, like I said,
33:49 they're all kind of built on top of JSON. So, I'm going to go to n8n's
33:53 website and we're going to go to product and we're going to scroll down here to
33:56 templates. And you can see there's over 2100 workflow automation templates. So,
34:00 let's scroll down. Let's say we want to do this one with cloning viral TikToks
34:04 with AI avatars. And we can use this one for free. So, I'll click on use for
34:07 free. And what's cool is we can either copy the template to clipboard or since
34:10 we're in the cloud workspace, we could just import it right away. And so, this
34:14 is logged into my other, kind of my main, cloud instance, but I'll still show you
34:16 guys how this works. I would click on this button. it would pull up this
34:19 screen where I just get to set up a few things. So, there's going to be
34:22 different things we'd have to connect to. So, you would basically just select
34:25 your different credentials if you already had them set up. If not, you
34:27 could create them right here. And then you would just basically be able to hit
34:32 continue. And as this loads up, you see we have the exact template right there
34:36 to play with. Or let's say you're scrolling on YouTube and you see just a
34:39 phenomenal Nate Herk YouTube video that you want to play around with. All you
34:42 have to do is go to my free Skool community and you will come into YouTube
34:46 resources or search for the title of the video. And let's say you wanted to play
34:49 with this shorts automation that I built. What you'll see right here is a
34:52 JSON file that you'll have to download. Once you download that, you'll go back
34:56 into n8n, create a new workflow, and then when you import that from file, if
34:59 you click on this button right here, you can see the entire workflow comes in.
35:02 And then all you're going to have to do is follow the setup guide in order to
35:05 connect your own credentials to these different nodes. All right. And then the
35:08 final thing I wanted to talk about is inactive versus active workflows. So you
35:11 may have noticed that none of our executions actually counted up from
35:16 zero. And the reason is because this is counting active workflow executions. And
35:20 if we come up here to the top right, we can see that we have the ability to make
35:24 a workflow active, but it has to have a trigger node that requires activation.
35:27 So real quick, let's say that we come in here and we want a workflow to start
35:32 when we have a schedule trigger. So I would go to schedule and I would
35:35 basically say, okay, I want this to go off every single day at midnight as we
35:38 have here. And what would happen is while this workflow is inactive, it's
35:42 only actually going to run if we hit test workflow and then it runs. But if
35:47 we were to flick this on as active now, it says your schedule trigger will now
35:51 trigger executions on the schedule you have defined. These executions will not
35:55 show up immediately in the editor, but you can see them in the execution list.
35:59 So this is basically saying two things. It's saying now that we have the
36:01 schedule trigger set up to run at midnight, it's actually going to run at
36:05 midnight because it's active. If we left this inactive, it would not actually
36:09 run. And all it meant by the second part is if we were sitting in this workflow
36:13 at midnight, we wouldn't see it execute and go spinning and green and red in
36:18 live real time, but it would still show up as an execution. But if it's an
36:22 active workflow, you just don't get to see them live visually running and
36:26 spinning anymore. So that's the difference between an active workflow
36:29 and an inactive workflow. Let's say you have a trigger that's like um let's say
36:33 you have a HubSpot trigger where you want this basically to fire off the
36:37 workflow whenever a new contact is created. So you'd connect to HubSpot and
36:42 you would make this workflow active so that it actually runs if a new contact's
36:46 created. If you left this inactive, even though it says it's going to trigger on
36:50 new contact, it would not actually do so unless this workflow was active. So
36:53 that's a super important thing to remember. All right. And then one last
36:57 thing I want to talk about which we were not going to dive into because we'll see
37:01 examples later is there is one more way that we can see data rather than schema
37:05 table or JSON and it's something called binary. So binary basically just means
37:11 an image or maybe a big PDF or a word doc or a PowerPoint file. It's basically
37:15 something that's not explicitly text-based. So let me show you exactly
37:19 what that might look like. What I'm going to do is I'm going to add another
37:22 trigger under this workflow and I'm going to click on tab. And even though
37:25 it doesn't say like what triggers this workflow, we can still access different
37:28 triggers. So I'm just going to type in form. And this is going to give us a
37:32 form submission that basically is an n8n native form. And you can see
37:35 there's an option at the bottom for triggers. So I'm going to click on this
37:38 trigger. Now basically what this pulls up is another configuration panel, but
37:42 obviously we don't have an input because it's a trigger, but we are going to get
37:46 an output. So anyways, let me just set up a quick example form. I'm just going
37:50 to say the title of this form is demo. The description is binary data. And now
37:55 what happens if I click on test step, it's going to pull up this form. And as
37:58 you can see, we haven't set up like any fields for people to actually submit
38:02 stuff. So the only option is to submit. But when I hit submit, you can see that
38:06 the node has been executed. And now there's actually data in here. Submitted
38:09 at with a timestamp. And then we have different information right here. So let
38:13 me just show you guys. We can add a form element. And when I'm adding a form
38:17 element, we can basically have this be, you know, date, it can be a drop down,
38:20 it can be an email, it can be a file, it can be text. So, real quick, I'm just
38:23 going to show you an example where, let's say we have a form where someone
38:27 has to submit their name. We have the option to add a placeholder or make it
38:30 required. And this isn't really the bulk of what I'm trying to show you guys. I
38:34 just want to show you binary data. But anyways, let's say we're adding another
38:37 field that's going to be a file. I'm just going to say file. And this will
38:41 also be required. And now if I go ahead and hit test step, it's going to pull up
38:45 a new form for us with a name parameter and a file parameter. So what I did is I
38:49 put my name and I put in just a YouTube short that I had published. And you can
38:53 see it's an MP4 file. So if I hit submit, we're going to get this data
38:56 pulled into n8n as you can see in the background. Just go ahead and watch. The
39:00 form is going to actually capture this data. There you go. Form submitted. And
39:05 now what we see right here is binary data. So this is interesting, right? We
39:09 still have our schema. We still have our table. We still have our JSON, but what
39:13 this is showing us is basically, okay, the name that the person submitted was
39:17 Nate. The file, here is some information about it as far as the name
39:21 of it, the mime type, and the size, but we don't actually access the file
39:25 through table or JSON or schema view. The only way we can access a video file
39:29 is through binary. And as you can see, if I clicked on view, it's my actual
39:33 video file right here. And so that's all I really wanted to show you guys was
39:36 when you're working with PDFs or images or videos, a lot of times they're going
39:39 to come through as binary, which is a little confusing at first, but it's not
39:42 too bad. And we will cover an example later in this tutorial where we look at
39:47 a binary file and we process it. But as you can see now, if we were doing a next
39:52 node, we would have schema, table, JSON, and binary. So we're still able to work
39:55 with the binary. We're still able to reference it. But I just wanted to throw
39:58 out there, when you see binary, don't get scared. It just basically means it's
40:02 a different file type. It's not just text-based. Okay, so that's going to do
40:05 it for just kind of setting up the foundational knowledge and getting
40:09 familiar with the dashboard and the UI a little bit. And as you move into these
40:12 next tutorials, which are going to be some step by steps, I'm going to walk
40:15 through every single thing with you guys setting up different accounts with
40:19 Google and something called Pinecone. And we'll talk about all this stuff step
40:22 by step. But hopefully now it's going to be a lot better moving into those
40:25 sections because you've seen, you know, some of the input stuff and how you
40:29 configure nodes and just like all this terminology that you may not have been
40:33 familiar with like JSON, JavaScript variables, workflows, executions, that
40:38 sort of stuff. So, like I said, let's move into those actual step-by-step
40:41 builds. And I can assure you guys, you're going to feel a lot more
40:43 comfortable after you have built a workflow end to end. All right, we're
40:48 going to talk about data types in n8n and what those look like. It's really
40:50 important to get familiar with this before we actually start automating
40:53 things and building agents and stuff like that. So, what I'm going to do is
40:57 just pull in a set node. As you guys know, this just lets us modify, add, or
41:01 remove fields. And it's very, very simple. We basically would just click on
41:05 this to add fields. We can add the name of the field. We choose the data type,
41:08 and then we set the value, whether that's a fixed value, which we'll be
41:13 looking at here, or if we're dragging in some sort of variable from the left-
41:15 hand side. But clearly, right now, we have no data incoming. We just have a
41:20 manual trigger. So, what I'm going to do is zoom in on the actual browser so we
41:24 can examine this data on the output a bit bigger and I don't have to just keep
41:27 cutting back and forth with the editing. So, as you can see, there's five main
41:31 data types that we have access to in n8n. We have a string, which is
41:35 basically just a fancy name for a word. Um, as you can see, it's represented by
41:40 a little a, a letter a. Then we have a number, which is represented by a pound
41:43 sign or a hashtag, whatever you want to call it. Um, it's pretty
41:47 self-explanatory. Then we have a boolean which is basically just going to be true
41:50 or false. That's basically the only thing it can be represented by a little
41:54 checkbox. We have an array which is just a fancy word for list. And we'll see
41:58 exactly what this looks like. And then we have an object, which is probably the
42:01 most confusing one, which basically means it's just this big block that can have
42:05 strings in it, numbers in it. It can have booleans in it. It can have
42:09 arrays in it. And it can also have nested objects within objects. So we'll
42:12 take a look at that.
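Before we go through them one by one, here's a sketch of all five side by side in a single JSON item. The values are just examples based on what we're about to set: name is a string, age is a number, adult is a boolean, names is an array, and project is an object.

```json
{
  "name": "Nate",
  "age": 50,
  "adult": true,
  "names": ["Nate", "John"],
  "project": { "title": "Demo", "status": "active" }
}
```

Let's just start off real quick with the string. So let's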
42:17 say a string would be a name and that would be my name. So if I hit test step
42:21 on the right hand side in the JSON, it comes through as key value pair like we
42:26 talked about. Name equals Nate. Super simple. You can tell it's a string
42:30 because right here we have two quotes around the word Nate. So that represents
42:34 a string. Or you could go to the schema and you can see that with name equals
42:38 Nate, there's the little letter A and that basically says, okay, this is a
42:41 string. As you see, it matches up right here. Cool. So that's a string. Let's
42:46 switch over to a number. Now we'll just say we're looking at age and we'll throw
42:51 in the number 50. Hit test step. And now we see age equals 50 with the pound sign
42:55 right here as the symbol in the schema view. Or if we go to JSON view, we have
43:01 the key value pair age equals 50. But now there are no double quotes around
43:05 the actual number. It's green. So that's how we know it's not a string. This is a
43:10 number. And um that's where you may run into some issues where if you had like
43:13 age coming through as a string, you wouldn't be able to like do any
43:17 summarizations or filters, you know, like if age is greater than 50, send it
43:21 off this way. If it's less than 50, send it that way. In order to do that type of
43:24 filtering and routing, you would need to make sure that age is actually a number
43:30 variable type or data type. Cool. So there's age. Let's go to a boolean. So
43:35 we're going to basically just say adult. And that can only be true or false. You
43:39 see, I don't have the option to type anything here. It's only going to be
43:42 false or it's only going to be true. And as you can see, it'll come through.
43:45 It'll look like a string, but there's no quotes around it. It's green. And that's
43:49 how we know it's a boolean. Or we could go to schema, and we can see that
43:53 there's a checkbox rather than the letter A symbol. Now, we're going to move on to
43:58 an array. And this one's interesting, right? So, let's just say we want to
44:01 have a list of names. So, if I have a list of names and I was typing in my
44:05 name and I tried to hit test step, this is where you would run into an error
44:09 because it's basically saying, okay, the field called names, which we set right
44:13 here, it's expecting to get an array, but all we got was Nate, which is
44:17 basically a string. So, to fix this error, change the type for the field
44:21 names or you can ignore type conversions, whatever. Um, so if we were
44:25 to come down to the option and ignore type conversions. So when we hit ignore
44:29 type conversions and tested the step, it basically just converted the field
44:32 called names to a string because it just could understand that this was a string
44:35 rather than an array. So let's turn that back off and let's actually see how we
44:39 could get this to work if we wanted to make an array. So like we know an array
44:44 just is a fancy word for a list. And in order for us to actually send this through
44:48 n8n and say, okay, this is a list, we have to wrap it in square brackets like
44:53 this. But we also have to wrap each item in the list in quotes. So I have to go
44:58 like this and go like that. And now this would pass through as a list of
45:02 different strings. And those are names. And so if I wanted to add another one
45:06 after the first item, I would put a comma. I put two quotes. And then inside
45:10 that I could put another name. Hit test step. And now you can see we're getting
45:14 this array that's made up of different strings and they're all going to be
45:17 different names. So I could expand that. I could close it out. Um, we could drag
45:21 in different names. And in JSON, what that looks like is we have our key and
45:25 then we have square brackets, which is basically exactly what
45:29 we typed right here. So that's how it's being
45:32 represented within these square brackets.
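So written out exactly the way the node wants it, that field is just this (the second name is whatever you add):

```json
{ "names": ["Nate", "John"] }
```

Okay, cool. So the final one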
45:36 we have to talk about is an object. And this one's a little more complex. So if
45:40 I was to hit test step here, it's going to tell us names expects an object, but
45:44 we got an array. So once again, you could come in here, ignore type
45:47 conversions, and then it would just basically come through as a string, but
45:50 it's not coming through as an array. So that's not how we want to do it. And I
45:55 don't want to mess with the actual like schema of typing in an object. So what
45:58 I'm going to do is go to ChatGPT. I literally just said, give me an example
46:02 JSON object to put into n8n. It gives me this example JSON object. I'm going
46:06 to copy that. Come into the set node, and instead of manual mapping, I'm just
46:09 going to customize it with JSON. Paste the one that chat just gave us. And when
46:15 I hit test step, what we now see first of all in the schema view is we have one
46:18 item with, you know, this object and all the different stuff that makes it up. So we
46:24 have a string, which is the name, Nate Herk. We have a string, which is the email,
46:27 nate@example.com. We have a string, which is the company, True Horizon. Then we have an
46:33 array of interests within this object. So I could close this out. I could open
46:36 it up. And we have three interests: AI automation, n8n, and YouTube content. And
46:40 this is, you know, ChatGPT's long-term memory about me making this. And then we
46:45 also have an object within our object which is called project. And the interesting difference
46:51 here with an object or an array is that when you have an array of interests,
46:54 every single item in that array is going to be called interest 0, interest
46:58 1, interest 2. And by the way, this is three interests, but computers start
47:01 counting from zero. So that's why it says 0, 1, 2. But with an object, it
47:06 doesn't all have to be the same thing. So you can see in this project object
47:11 we have one string called title, we have one string called
47:15 status, and we have one string called deadline, and this all makes up
47:18 its own object.
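Reassembled, the example object ChatGPT generated looks roughly like this. The project values are stand-ins, since they scroll by quickly on screen:

```json
{
  "name": "Nate Herk",
  "email": "nate@example.com",
  "company": "True Horizon",
  "interests": ["AI automation", "n8n", "YouTube content"],
  "project": {
    "title": "Demo Build",
    "status": "in progress",
    "deadline": "2025-06-01"
  }
}
```

As you can see, if we went to table view, this is literally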
47:22 just one item that's really easy to read. And you can tell that this is an
47:26 array because it goes 0, 1, 2. And you can tell that this is an object because it
47:29 has different fields in it. This is one item. It's one object. It's got
47:33 strings up top. It has no numbers actually. So the date right here, this
47:37 is coming through as a string variable type. We can tell because it's not
47:40 green. We can tell because it has double quotes around it. And we can also tell
47:43 because in schema it comes through with the letter A. But this is just how you
47:47 can see there's these different things that make up, um, this object. And you can
47:52 even close them down in JSON view. We can see interests is an array that has
47:55 three items. We could open that up. We can see project is an object because
47:59 it's wrapped in, um, curly braces, not the square brackets, as you
48:05 can see. So, there's a difference. And I know this wasn't super detailed and it's
48:09 just something really really important to know heading into when you actually
48:13 start to build stuff out because you're probably going to get some of those
48:15 errors where you're like, you know, blank expects an object but got this or
48:19 expects an array and got this. So, just wanted to make sure I came in here and
48:23 threw that module at you guys and hopefully it'll save you some headaches
48:26 down the road. Real quick, guys, if you want to be able to download all the
48:29 resources from this video, they'll be available for free in my free Skool
48:32 community, which will be the link in the pinned comment. There'll be a zip file
48:36 in there that has all 23 of these workflows, as you can see, and also two
48:40 PDFs at the bottom, which are covered in the video. So, like I said, join the
48:43 free Skool community. Not only does it have all of my YouTube resources, but
48:46 it's also a really quickly growing community of people who are obsessed
48:49 with AI automation and using n8n every day. All you'll have to do is search for
48:53 the title of this video using the search bar or you can click on YouTube
48:56 resources and find the post associated with this video. And then you'll have
48:59 the zip file right here to download which once again is going to have all 23
49:04 of these n8n workflow JSON files and two PDFs. And there may even be some bonus files
49:07 in here. You'll just have to join the free Skool community to find out. Okay,
49:11 so we talked about AI agents. We talked about AI workflows. We've gotten into
49:14 n8n and set up our account. We understand workflows, nodes, triggers,
49:19 JSON, stuff like that, and data types. Now, it's time to use all that stuff
49:21 that we've talked about and start applying it. So, we're going to head
49:24 into this next portion of this course, which is going to be about step-by-step
49:26 builds, where I'm going to walk you through every single step live, and
49:31 we'll have some pretty cool workflows set up by the end. So, let's get into
49:34 it. Today, we're going to be looking at three simple AI workflows that you can
49:37 build right now to get started learning n8n. We're going to walk through
49:40 everything step by step, including all of the credentials and the setups. So,
49:43 let's take a look at the three workflows we're going to be building today. All
49:46 right, the first one is going to be a RAG pipeline and chatbot. And if you
49:50 don't know what RAG means, don't worry. We're going to explain it all. But at a
49:52 high level, what we're doing is we're going to be using Pinecone as a vector
49:55 database. If you don't know what a vector database is, we'll break it down.
49:58 We're going to be using Google Drive. We're going to be using Google Docs. And
50:01 then something called OpenRouter, which lets us connect to a bunch of different
50:05 AI models like OpenAI's models or Anthropic's models. The second workflow
50:08 we're going to look at is a customer support workflow that's kind of going to
50:11 be building off of the first one we just built. Because in the first workflow,
50:14 we're going to be putting data into a Pinecone vector database. And in this
50:17 one, we're going to use that data in there in order to respond to customer
50:21 support-related emails. So, we'll already have had Pinecone set up, but
50:24 we're going to set up our credentials for Gmail. And then we're also going to
50:28 be using an n8n AI agent as well as OpenRouter once again. And then finally,
50:31 we're going to be doing LinkedIn content creation. And in this one, we'll be
50:35 using an n8n AI agent and OpenRouter once again, but we'll have two new
50:38 credentials to set up. The first one being Tavily, which is going to let us
50:41 search the web. And then the second one will be Google Sheets where we're going
50:44 to store our content ideas, pull them in, and then have the content written
50:49 back to that Google sheet. So by the end of this video, you're going to have
50:51 three workflows set up and you're going to have a really good foundation to
50:55 continue to learn more about n8n. You'll already have gotten a lot of
50:57 credentials set up and understand what goes into connecting to different
51:00 services. One of the trickiest being Google. So we'll walk through that step
51:03 by step and then you'll have it configured and you'll be good. And then
51:05 from there, you'll be able to continuously build on top of these three
51:08 workflows that we're going to walk through together because there's really
51:11 no such thing as a finished product in the space. Different AI models keep
51:14 getting released and keep getting better. There's always ways to improve
51:17 your templates. And the cool thing about building workflows in NAN is that you
51:20 can make them super customized for exactly what you're looking for. So, if
51:24 this sounds good to you, let's hop into that first workflow. Okay, so for this
51:27 first workflow, we're building a RAG pipeline and chatbot. And so if that
51:31 sounds like a bunch of gibberish to you, let's quickly understand what RAG is and
51:36 what a vector database is. So RAG stands for retrieval-augmented generation. And
51:40 in the simplest terms, let's say you ask me a question and I don't actually know
51:43 the answer. I would just kind of Google it and then I would get the answer from
51:47 my phone and then I would tell you the answer. So in this case, when we're
51:50 building a RAG chatbot, we're going to be asking the chatbot questions and it's
51:53 not going to know the answer. So it's going to look inside our vector
51:56 database, find the answer, and then it's going to respond to us. And so when
52:00 we're combining the elements of RAG with a vector database, here's how it works.
52:03 So the first thing we want to talk about is actually what is a vector database.
52:07 So essentially this is what a vector database would look like. We're all
52:11 familiar with, like, an x- and y-axis graph where you can plot points on a
52:14 two-dimensional plane. But a vector database is a multi-dimensional graph of
52:19 points. So in this case, you can see this multi-dimensional space with all
52:23 these different points or vectors. And each vector is placed based on the
52:27 actual meaning of the word or words in the vector. So over here you can see we
52:31 have wolf, dog and cat. And they're placed similarly because the meaning of
52:35 these words are all like animals. Whereas over here we have apple and
52:38 banana, which the meaning of the words are food, more specifically fruits. And that's
52:42 why they're placed over here together. So when we're searching through the
52:46 database, we basically vectorize a question the same way we would vectorize
52:50 any of these other points. And in this case, we were asking for a kitten. And
52:53 then that query gets placed over here near the other animals and then we're
52:56 able to say okay well we have all these results now. So what that looks like and
53:00 what we'll see when we get into n8n is we have a document that we want to
53:03 vectorize. We have to split the document up into chunks because we can't put, like,
53:07 a 50-page PDF in as one chunk. So it gets split up and then we're going to run it
53:10 through something called an embeddings model which basically just turns text
53:15 into numbers. Just as simple as that. And as you can see, in this case let's
53:18 say we had a document about a company. We have company data, finance data, and
53:22 marketing data. And they all get placed differently because they mean different
53:26 things. And the context of those chunks is different. And then this
53:30 visual down here is just kind of how an LLM or in this case, this agent takes
53:34 our question, turns it into its own question. We vectorize that using the
53:38 same embeddings model that we used up here to vectorize the original data. And
53:42 then because it gets placed here, it just grabs back any vectors that are
53:46 nearest, maybe like the nearest four or five, and then it brings it back in
53:49 order to respond to us. So I don't want to dive too much into this or
53:53 overcomplicate it, but hopefully this all makes sense.
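To make that idea concrete in code, here's a minimal sketch of "meaning as numbers," assuming Node.js with the official openai package. The model name matches what we'll pick later in Pinecone; the word list is just for illustration:

```typescript
// Sketch: embed a query and rank candidate texts by semantic closeness.
import OpenAI from "openai";

const openai = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

// Cosine similarity: how closely two vectors point in "meaning space".
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function nearest(query: string, docs: string[]) {
  // Embed the query and the documents with the SAME embeddings model.
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: [query, ...docs],
  });
  const [q, ...vecs] = res.data.map((d) => d.embedding);
  // Rank documents by similarity to the query, most similar first.
  return docs
    .map((text, i) => ({ text, score: cosine(q, vecs[i]) }))
    .sort((x, y) => y.score - x.score);
}

// nearest("kitten", ["wolf", "dog", "cat", "apple", "banana"]) would rank
// "cat" near the top; Pinecone runs this same nearest-neighbor idea at scale.
```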
53:57 Cool. So now that we understand that, let's actually start building this workflow. So what we're
53:59 going to do here is we are going to click on add first step because every
54:03 workflow needs a trigger that basically starts the workflow. So, I'm going to
54:08 type in Google Drive because what we're going to do is we are going to pull in a
54:12 document from our Google Drive in order to vectorize it. So, I'm going to choose
54:15 a trigger which is on changes involving a specific folder. And what we have to
54:19 do now is connect our account. As you can see, I'm already connected, but what
54:22 we're going to do is click on create new credential in order to connect our
54:25 Google Drive account. And what we have to do is go get a client ID and a
54:29 secret. So, what we want to do is click on open docs, which is going to bring us
54:33 to n8n's documentation on how to set up this credential. We have a prerequisite
54:37 which is creating a Google Cloud account. So I'm going to click on Google
54:40 Cloud account and we're going to set up a new project. Okay. So I just signed
54:43 into a new account and I'm going to set up a whole project and walk through the
54:46 credentials with you guys. You'll click up here. You'll probably have something
54:49 up here that says like new project and then you'll click into new project. All
54:54 we have to do now is name it, and you'll be able to start for free, so
54:56 don't worry about that yet. So I'm just going to name this one demo and I'm
55:00 going to create this new project. And now up here in the top right you're
55:02 going to see that it's kind of spinning up this project, and then we'll move
55:06 forward. Okay, so it's already done and now I can select this project. So now
55:10 you can see up here I'm in my new project called demo. I'm going to click
55:15 on these three lines in the top left and what we're going to do first is go to
55:18 APIs and services and click on enabled APIs and services. And what we want to
55:22 do is add the ones we need. And so right now all I'm going to do is add Google
55:27 Drive. And you can see it's going to come up with Google Drive API. And then
55:31 all we have to do is really simply click enable. And there we go, I just enabled it.
55:35 So you can see here the status is enabled. And now we have to set up
55:37 something called our OAuth consent screen, which is basically what's going to
55:43 let Google know that n8n and Google Drive are allowed to talk to each other
55:46 and have permissions. So right here, I'm going to click on OAuth consent screen.
55:49 We don't have one yet, so I'm going to click on get started. I'm going to give
55:53 it a name. So we're just going to call this one demo. Once again, I'm going to
55:57 add a support email. I'm going to click on next. Because I'm not using a Google
56:01 Workspace account, I'm just using a, you know, nate88@gmail.com. I'm going to have to
56:05 choose external. I'm going to click on next. For contact information, I'm
56:08 putting the same email as I used to create this whole project. Click on next
56:12 and then agree to terms. And then we're going to create that OAuth consent
56:17 screen. Okay, so we're not done yet. The next thing we want to do is we want to
56:20 click on audience. And we're going to add ourselves as a test user. So we
56:23 could also make the app published by publishing it right here, but I'm just
56:26 going to keep it in test. And when we keep it in test mode, we have to add a
56:30 test user. So I'm going to put in that same email from before. And this is
56:32 going to be the email of the Google Drive we want to access. So I put in my
56:36 email. You can see I saved it down here. And then finally, all we need to do is
56:40 come back into here. Go to clients. And then we need to create a new client.
56:45 We're going to click on web app. We're going to name it whatever we want. Of
56:47 course, I'm just going to call this one demo once again. And now we need to
56:52 basically add a redirect URI. So if you click back into n8n, we have one right
56:57 here. So, we're going to copy this, go back into Google Cloud, and we're going to add
57:00 a URI and paste it right in there, and then hit create. And once that's created,
57:06 it's going to give us an ID and a secret. So, all we have to do is copy
57:10 the ID, go back into n8n and paste it right here. And then we need to go grab our
57:16 secret from Google Cloud, and then paste that right in there. And now we have a
57:19 little button that says sign in with Google. So, I'm going to open that up.
57:22 It's going to pull up a window to have you sign in. Make sure you sign in with
57:25 the same account that you just had yourself as a test user. That one. And
57:30 then you'll have to continue. And then here it's basically asking what
57:33 permissions n8n will have to your Google Drive. So I'm just going
57:36 to select all. I'm going to hit continue. And then we should be good.
57:39 Connection successful and we are now connected. And you may just want to
57:43 rename this credential so you know which email it is. So now I've
57:47 saved my credential, and we should be able to access the Google Drive now.
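For the curious, here's roughly what that sign-in button is doing behind the scenes. This is a generic sketch of Google's OAuth2 authorization-code flow, not n8n's actual internals, and the redirect path below is a placeholder:

```typescript
// Sketch of the OAuth2 dance that the client ID, secret, and redirect URI enable.
const CLIENT_ID = "your-client-id.apps.googleusercontent.com"; // from Google Cloud
const CLIENT_SECRET = "your-client-secret";                    // from Google Cloud
const REDIRECT_URI = "https://your-n8n-host/oauth2/callback";  // placeholder path

// Step 1: the consent screen lives at a URL like this; after the user
// approves, Google redirects back to REDIRECT_URI with a one-time ?code=...
const consentUrl =
  "https://accounts.google.com/o/oauth2/v2/auth" +
  `?client_id=${CLIENT_ID}` +
  `&redirect_uri=${encodeURIComponent(REDIRECT_URI)}` +
  "&response_type=code" +
  "&scope=" + encodeURIComponent("https://www.googleapis.com/auth/drive");

// Step 2: the app exchanges that one-time code for tokens it can use on the API.
async function exchangeCode(code: string) {
  const res = await fetch("https://oauth2.googleapis.com/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      code,
      client_id: CLIENT_ID,
      client_secret: CLIENT_SECRET,
      redirect_uri: REDIRECT_URI,
      grant_type: "authorization_code",
    }),
  });
  return res.json(); // { access_token, refresh_token, expires_in, ... }
}
```

n8n handles all of this for you; the point is just that the client ID, the secret, and the redirect URI each have a job in that exchange.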
57:49 So, what I'm going to do is click on this list, and it's going to
57:52 show me the folders that I have in Google Drive. So, that's awesome. Now,
57:56 for the sake of this video, I'm in my Google Drive and I'm going to create a
57:59 new folder. So, new folder. We're going to call this one FAQ. Create this one
58:05 because we're going to be uploading an FAQ document into it. So, here's my FAQ
58:09 folder right here. And then what I have is down here I made a policy and
58:14 FAQ document which looks like this. We have some store policies and then we
58:17 also have some FAQs at the bottom. So, all I'm going to do is I'm going to drag
58:21 in my policy and FAQ document into that new FAQ folder. And then if we come into
58:27 n8n, we click on the new folder that we just made. So, it's not here yet. I'm
58:30 just going to click on these dots and click on refresh list. Now, we should
58:35 see the FAQ folder. There it is. Click on it. We're going to click on what are
58:38 we watching this folder for. I'm going to be watching for a file created. And
58:43 then, I'm just going to hit fetch test event. And now we can see that we did in
58:47 fact get something back. So, let's make sure this is the right one. Yep. So,
58:50 there's a lot of nasty information coming through. I'm going to switch over
58:52 here on the right hand side. This is where we can see the output of every
58:55 node. I'm going to click on table and I'm just going to scroll over and there
59:00 should be a field called file name. Here it is. Name. And we have policy and FAQ
59:04 document. So, we know we have the right document in our Google Drive. Okay. So,
59:08 perfect. Every time we drop in a new file into that Google folder, it's going
59:11 to start this workflow. And now we just have to configure what happens after the
59:15 workflow starts. So, all we want to do really is we want to pull this data into
59:20 n8n so that we can put it into our Pinecone database. So, off of this trigger,
59:24 I'm going to add a new node and I'm going to grab another Google Drive node
59:28 because what happened is basically we have the file ID and the file name, but
59:32 we don't have the contents of the file. So, we're going to do a download file
59:35 node from Google Drive. I'm going to rename this one and just call it
59:38 download file just to keep ourselves organized. We already have our
59:41 credential connected and now it's basically saying what file do you want
59:45 to download. We have the ability to choose from a list. But if we choose
59:48 from the list, it's going to be this file every time we run the workflow. And
59:52 we want to make this dynamic. So we're going to change from list to by ID. And
59:56 all we have to do now is we're going to look on the left-hand side for that
59:59 file that we just pulled in. And we're going to be looking for the ID of the
60:02 file. So I can see that I found it right down here in the spaces array because we
60:06 have the name right here and then we have the ID right above it. So, I'm
60:10 going to drag ID and put it right there in this field. It's coming through as a
60:14 variable, {{ $json.id }}. And that's just basically referencing, you know,
60:17 whenever a file comes through on the Google Drive trigger, I'm going to use
60:21 the variable {{ $json.id }}, which will always pull in the file's ID. So, then I'm going
60:25 to hit test step and we're going to see that we're going to get the binary data
60:28 of this file over here that we could download. And this is our policy and FAQ
60:33 document. Okay. So, there's step two. We have the file downloaded in n8n. And
60:37 now it's just as simple as putting it into Pinecone. So before we do that,
60:41 let's head over to pinecone.io. Okay, so now we are in pinecone.io, which is
60:45 a vector database provider. You can get started for free. And what we're going
60:48 to do is sign up. Okay, so I just got logged in. And once you get signed up,
60:52 you should see a page similar to this. It's a get started page. And what
60:55 we want to do is you want to come down here and click on, you know, begin setup
60:59 because we need to create an index. So I'm going to click on begin setup. We
61:03 have to name our index. So you can call this whatever you want. We have to
61:08 choose a configuration for an
61:11 embeddings model, which is sort of what I talked about right in here. This is
61:15 going to turn our text chunks into a vector. So what I'm going to do is I'm
61:19 going to choose text-embedding-3-small from OpenAI. It's the most cost-
61:23 effective OpenAI embedding model. So I'm going to choose that. Then I'm going to
61:26 keep scrolling down. I'm going to keep mine as serverless. I'm going to keep
61:29 AWS as the cloud provider. I'm going to keep this region. And then all I'm going
61:33 to do is hit create index.
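As a rough code equivalent, creating that same index with Pinecone's TypeScript client would look something like this. It's a sketch: the index name and region are whatever you picked in the UI, and 1536 is the output dimension of text-embedding-3-small:

```typescript
// Sketch: what "Begin setup" configures, via @pinecone-database/pinecone.
import { Pinecone } from "@pinecone-database/pinecone";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

await pc.createIndex({
  name: "sample",
  dimension: 1536, // must match the embeddings model's output size
  metric: "cosine",
  spec: { serverless: { cloud: "aws", region: "us-east-1" } },
});
```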
61:36 Once you create your index, it'll show up right here. But we're not done yet. You're going to click into that index. And so I
61:39 already obviously have stuff in my vector database. You won't have this.
61:41 What I'm going to do real quick is just delete this information out of it. Okay.
61:45 So this is what yours should look like. There's nothing in here yet. We have no
61:48 namespaces and we need to get this configured. So on the left-hand side, go
61:53 over here to API keys and you're going to create a new API key. Name it
61:58 whatever you want, of course. Hit create key. And then you're going to copy that
62:02 value. Okay, back in n8n, we have our API key copied. We're going to add a new
62:07 node after the download file node and we're going to type in Pinecone and we're
62:10 going to grab a Pinecone vector store. Then we're going to select add documents
62:14 to a vector store and we need to set up our credential. So up here, you won't
62:18 have these and you're going to click on create new credential. And all we need
62:21 to do here is just an API key. We don't have to get a client ID or a secret. So
62:24 you're just going to paste in that API key. Once that's pasted in there and
62:27 you've given it a name so you know what this means. You'll hit save and it
62:30 should go green and we're connected to Pinecone, and you can make sure that
62:34 you're connected by clicking on the index and you should have the name of
62:37 the index right there that we just created. So I'm going to go ahead and
62:40 choose my index. I'm going to click on add option and we're going to be
62:43 basically adding this to a Pinecone namespace. Back in Pinecone,
62:48 if I go into my database, my index, and I click in here, you can see
62:51 that we have something called namespaces. And this basically lets us
62:55 put data into different folders within this one index. So if you don't specify
62:59 a namespace, it'll just come through as default, and that's going to be fine. But
63:02 we want to get into the habit of having our data organized. So I'm going to go
63:05 back into n8n and I'm just going to name this namespace FAQ because that's
63:10 the type of data we're putting in. And now I'm going to click out of this node.
63:13 So you can see the next thing that we need to do is connect an embeddings
63:17 model and a document loader. So let's start with the embeddings model. I'm
63:20 going to click on the plus and I'm going to click on Embeddings OpenAI. And
63:23 actually, one thing I left out of the Excalidraw diagram is that we also will need
63:27 to go get an OpenAI key. So, as you can see, when we need to connect a
63:30 credential, you'll click on create new credential and we just need to get an
63:33 API key. So, you're going to type in OpenAI API. You'll click on this first
63:37 link here. If you don't have an account yet, you'll sign in. And then once you
63:40 sign up, you want to go to your dashboard. And then on the left-hand
63:44 side, very similar to Pinecone, you'll click on API keys. And then we're
63:47 just going to create a new key. So you can see I have a lot. We're going to
63:49 make a new one. And I'm calling everything demo, but this is going to be
63:53 demo number three. Create new secret key. And then we have our key. So we're
63:56 going to copy this and we're going to go back into n8n. Paste that right here. We
63:59 paste in our key. We've given it a name. And now we'll hit save and we
64:03 should go green. Just keep in mind that you may need to top up your account with
64:06 a few credits in order for you to actually be able to run this model. Um,
64:10 so just keep that in mind. So then what's really important to remember is
64:13 when we set up our Pinecone index, we used the embedding model text-embedding-
64:17 3-small from OpenAI. So that's why we have to make sure this matches right
64:20 here, or this automation is going to break. Okay, so we're good with the
64:24 embeddings and now we need to add a document loader. So I'm going to click
64:27 on this plus right here. I'm going to click on default data loader and we have
64:31 to just basically tell Pinecone the type of data we're putting in. And so
64:35 you have two options, JSON or binary. In this case, it's really easy because we
64:39 downloaded a Google Doc, which is on the left-hand side. You can tell it's
64:42 binary because up top right here on the input, we can switch between JSON and
64:47 binary. And if we were uploading JSON, all we'd be uploading is this gibberish
64:51 nonsense information that we don't need. We want to upload the binary, which is
64:55 the actual policy and FAQ document. So, I'm just going to switch this to binary.
64:58 I'm going to click out of here. And then the last thing we need to do is add a
65:01 text splitter. So, this is what I was talking about back in the Excalidraw: we
65:05 have to split the document into different chunks. And so that's what
65:08 we're doing here with this text splitter. I'm going to choose a
65:12 recursive character text splitter. There's three options and I won't dive
65:15 into the difference right now, but recursive character text splitter will
65:18 help us keep context of the whole document as a whole, even though we're
65:22 splitting it up. So for now, chunk size is 1,000. That's just basically how
65:25 many characters am I going to put in each chunk? And then, is there going to
65:29 be any overlap between our chunks of characters? So right now I'm just going
65:33 to leave it at the defaults of 1,000 and zero.
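To tie the pieces together, here's a hedged sketch of what the splitter, embeddings model, and Pinecone node add up to, assuming the openai and @pinecone-database/pinecone packages plus the sample index and FAQ namespace from above. The fixed-size splitter is a crude stand-in for the recursive character text splitter:

```typescript
// Sketch of the ingestion pipeline: split -> embed -> upsert.
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI();
const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

// Naive splitter: 1,000-character chunks with 0 overlap, like our defaults.
// (The real recursive splitter is smarter about paragraph/sentence breaks.)
function split(text: string, size = 1000, overlap = 0): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += size - overlap) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

async function ingest(docText: string) {
  const chunks = split(docText);
  // Embed every chunk with the model the index was configured for.
  const { data } = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: chunks,
  });
  // One vector per chunk, stored in the FAQ namespace with its source text.
  await pc.index("sample").namespace("FAQ").upsert(
    data.map((d, i) => ({
      id: `chunk-${i}`,
      values: d.embedding,
      metadata: { text: chunks[i] },
    }))
  );
}
```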
65:37 So that's it. You just built your first automation for a RAG pipeline. And now we're just going to click on the play
65:40 button above the Pinecone vector store node in order to see it get vectorized.
65:43 So we're going to basically see that we have four items that have left this
65:47 node. So this is basically telling us that our Google doc that we downloaded
65:51 right here. So this document got turned into four different vectors. So if I
65:55 click into the text splitter, we can see we have four different responses and
65:59 this is the contents that went into each chunk. So we can just verify this by
66:03 heading real quick into Pinecone. We can see we have a new namespace that we
66:07 created called FAQ. The number of records is four. And if we head over to the
66:10 browser, we can see that we do indeed have these four vectors. And then the
66:13 text field right here, as you can see, are the characters that were put into
66:17 each chunk. Okay, so that was the first part of this workflow, but we're going
66:21 to real quick just make sure that this actually works. So we're going to add a
66:24 RAG chatbot. Okay. So, what I'm going to do now is hit tab, or I could also
66:27 have just clicked on the plus button right here, and I'm going to type in AI
66:31 agent, and that is what we're going to grab and pull into this workflow. So, we
66:35 have an AI agent, and let's actually just put him right over here. And
66:41 now what we need to do is we need to set up how are we actually going to talk to
66:44 this agent. And we're just going to use the default n8n chat window. So, once
66:47 again, I'm going to hit tab. I'm going to type in chat. And we have a chat
66:51 trigger. And all I'm going to do is over here, I'm going to grab the plus and I'm
66:54 going to drag it into the front of the AI agent. So basically now whenever we
66:58 hit open chat and we talk right here, the agent will read that chat message.
67:02 And we know this because if I click into the agent, we can see the user message
67:07 is looking for one in the connected chat trigger node, which we have right here
67:10 connected. Okay, so the first step with an AI agent is we need to give it a
67:14 brain. So we need to give it some sort of AI model to use. So we're going to
67:18 click on the plus right below chat model. And what we could do now is we
67:22 could set up an OpenAI chat model because we already have our API key from
67:25 OpenAI. But what I want to do is click on OpenRouter because this is going to
67:30 allow us to choose from all different chat models, not just OpenAI's. So we
67:33 could do Claude, we could do Google, we could do Perplexity. We have all these
67:36 different models in here which is going to be really cool. And in order to get
67:39 an OpenRouter account, all you have to do is go sign up and get an API key. So
67:43 you'll click on create new credential and you can see we need an API key. So
67:47 you'll head over to openrouter.ai. You'll sign up for an account. And then all you
67:50 have to do is in the top right, you're going to click on keys. And then once
67:54 again, kind of the same as all the other ones. You're going to create a new
67:57 key. You're going to give it a name. You're going to click create. You have a
68:01 secret key. You're going to click copy. And then we go back into n8n and
68:04 paste it in here, give it a name. And then hit save. And we should go green.
68:07 We've connected to OpenRouter. And now we have access to any of these different
68:12 chat models. So, in this case, let's use Claude 3.5
68:19 Sonnet. And this is just to show you guys you can connect to different ones.
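Under the hood, OpenRouter exposes an OpenAI-compatible API, which is why one credential unlocks models from many vendors. A minimal sketch, assuming the openai package:

```typescript
// Sketch: calling Claude 3.5 Sonnet through OpenRouter's compatible endpoint.
import OpenAI from "openai";

const openrouter = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
});

const reply = await openrouter.chat.completions.create({
  // Swap the model string to switch vendors: openai/..., google/..., etc.
  model: "anthropic/claude-3.5-sonnet",
  messages: [{ role: "user", content: "hello" }],
});
console.log(reply.choices[0].message.content);
```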
68:22 But anyways, now we can click on open chat. And actually, let me make sure you
68:26 guys can see him. If we say hello, it's going to use its brain, Claude 3.5 Sonnet.
68:30 And now it responded to us. Hi there. How can I help you? So, just to validate
68:34 that our information is indeed in the Pinecone vector store, we're going to
68:38 click on a tool under the agent. We're going to type in Pinecone and grab a
68:43 Pinecone vector store, and we're going to grab the account that we just
68:46 selected. So, this was the demo I just made. We're going to give it a name. So,
68:50 in this case, I'm just going to say knowledge base. We're going to give a description.
68:59 Call this tool to access the policy and FAQ database. So, we're basically just
69:03 describing to the agent what this tool does and when to use it. And then we
69:08 have to select the index and the namespace for it to look inside of. So the
69:11 index is easy. We only have one. It's called sample. But now this is important
69:14 because if you don't give it the right namespace, it won't find the right
69:19 information. So we called ours FAQ. If you remember, in our Pinecone we
69:23 have a namespace called FAQ right here. So that's why we're doing FAQ. And
69:26 now it's going to be looking in the right spot. So before we can chat with
69:30 it, we have to add an embeddings model to our Pinecone vector store. Same
69:33 thing as before: we're going to grab OpenAI and we're going to use text-
69:37 embedding-3-small and the same credential you just made. And now we're going to be
69:41 good to go to chat with our RAG agent.
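Here's a sketch of what that tool call boils down to, under the same assumptions as the ingestion sketch earlier; a topK of 4 mirrors the "nearest four or five" idea from the vector database explanation:

```typescript
// Sketch: embed the question, pull the nearest chunks from the FAQ namespace.
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI();
const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

async function retrieve(question: string, topK = 4) {
  const { data } = await openai.embeddings.create({
    model: "text-embedding-3-small", // must match the model used at ingest time
    input: question,
  });
  const res = await pc.index("sample").namespace("FAQ").query({
    vector: data[0].embedding,
    topK,
    includeMetadata: true, // bring back the original chunk text
  });
  return res.matches.map((m) => m.metadata?.text);
}

// retrieve("What is our warranty policy?") returns the chunks the agent
// stuffs into its context before writing an answer.
```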
69:44 So looking back in the document, we can see we have some different stuff. So I'm going to ask this chatbot what the
69:48 warranty policy is. So I'm going to open up the chat window and say what is our
69:54 warranty policy? Send that off. And we should see that it's going to use its
69:57 brain as well as the vector store in order to create an answer for us because
70:00 it didn't know by itself. So there we go. It just finished up and it said, based on the information
70:06 from our knowledge base, here's the warranty policy. We have one-year
70:10 standard coverage. We have, you know, this email for claims processes. You
70:14 must provide proof of purchase and for warranty exclusions that aren't covered,
70:18 damage due to misuse, water damage, blah blah blah. Back in the policy
70:22 documentation, we can see that that is exactly what we have in our knowledge
70:26 base for warranty policy. So, just because I don't want this video to go
70:29 too long, I'm not going to do more tests, but this is where you can get in
70:31 there and make sure it's working. One thing to keep in mind is that within the
70:34 agent, we didn't give it a system prompt. And what a system prompt is, is
70:38 just basically a message that tells the agent how to do its job. So what you
70:42 could do, if you're having issues here, is tell the agent in the
70:45 system prompt what the name of your tool is, which is knowledge base.
70:49 You could say: hey, your job is to help users answer questions about
70:54 our policy database. You have a tool called knowledge base. You
70:58 need to use that in order to help them answer their questions. And that will
71:01 help you refine the behavior of how this agent acts.
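As a sketch, a minimal system prompt along those lines might look like this; the wording is illustrative, so adapt the tool name and details to your own build:

```text
You are a customer support assistant for our store.
Your job is to help users answer questions about our policy and FAQ database.

You have one tool, "knowledge base", which searches that database.
Always call it before answering; do not answer from memory.

Keep answers short and friendly, and only use information the tool returns.
```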
71:05 All right, so the next one we're doing is a customer support workflow. And as always, you have to
71:08 figure out what is the trigger for my workflow. In this case, it's going to be
71:12 triggered by a new email received. So I'm going to click on add first step.
71:16 I'm going to type in Gmail. Grab that node. And we have a trigger, which is on
71:19 message received right here. And we're going to click on that. So what we have
71:23 to do now is obviously authorize ourselves. So we're going to click on
71:26 create new credential right here. And all we have to do here is use OAuth2. So
71:30 all we have to do is click on sign in. But before we can do that, we have to
71:33 come over to our Google Cloud once again. And now we have to make sure we
71:36 enable the Gmail API. So we'll click on Gmail API. And it'll be really simple.
71:40 We'll just have to click on enable. And now we should be able to do that OAuth
71:43 connection and actually sign in. You'll click on the account that you want to
71:46 access the Gmail. You'll give it access to everything. Click continue. And then
71:50 we're going to be connected as you can see. And then you'll want to name this
71:54 credential as always. Okay. So now we're using our new credential. And what I'm
71:57 going to do is if I hit fetch test event. So now we are seeing an email
72:01 that I just got in this inbox, which in this case was n8n Cloud was granted
72:05 access to your Google account, blah blah blah. So that's what we just got.
72:09 Okay. So I just sent myself a different email and I'm going to fetch that email
72:13 now from this inbox. And we can see that the snippet says what is the privacy
72:17 policy? I'm concerned about my data and passwords. And what we want to do is we
72:21 want to turn off simplify because what this button is doing is it's going to
72:24 take the content of the email and basically, you know, cut it off. So in
72:28 this case, it didn't matter, but if you're getting long emails, it's going
72:30 to cut off some of the email. So if we turn off simplify and fetch the test event once
72:34 again, we're now going to get a lot more information about this email, but we're
72:37 still going to be able to access the actual content, which is right here. We
72:41 have the text, what is privacy policy? I'm concerned about my data and
72:44 passwords. Thank you. And then you can see we have other data too like what the
72:48 subject was, who the email is coming from, what their name is, all this kind
72:52 of stuff. But the idea here is that we are going to be creating a workflow
72:56 where if someone sends an email to this inbox right here, we are going to
72:59 automatically look up the customer support policy and respond back to them
73:02 so we don't have to. Okay. So the first thing I'm actually going to do is pin
73:05 this data just so we can keep it here for testing. Which basically means
73:08 whenever we rerun this, it's not going to go look in our inbox. It's just going
73:12 to keep this email that we pulled in, which helps us for testing, right? Okay,
73:16 cool. So, the next step here is we need to have AI basically filter to see is
73:21 this email customer support related? If yes, then we're going to have a response
73:24 written. If no, we're going to do nothing because maybe the use case would
73:28 be okay, we're going to give it an access to an inbox where we're only
73:32 getting customer support emails. But sometimes maybe that's not the case. And
73:35 let's just say we wanted to create this as sort of like an inbox manager where
73:38 we can route off to different logic based on the type of email. So that's
73:41 what we're going to do here. So I'm going to click on the plus after the
73:44 Gmail trigger and I'm going to search for a text classifier node. And what
73:49 this does is it's going to use AI to read the incoming email and then
73:53 determine what type of email it is. So because we're using AI, the first thing
73:56 we have to do is connect a chat model. We already have our open router
73:59 credential set up. So I'm going to choose that. I'm going to choose the
74:01 credential, and for this one, let's just keep it with GPT-4o mini. And now
74:07 this AI node actually has AI, and I'm going to click into the text classifier.
74:09 And the first thing we see is that there's a text to classify. So all we
74:13 want to do here is we want to grab the actual content of the email. So I'm
74:17 going to scroll down. I can see here's the text, which is the email content.
74:20 We're going to drag that into this field. And now every time a new email
74:25 comes through, the text classifier is going to be able to read it because we
74:28 put in a variable which basically represents the content of the email. So
74:32 now that it has that, it still doesn't know what to classify it as or what its
74:35 options are. So we're going to click on add category. The first category is
74:39 going to be customer support. And then basically we need to give it a
74:42 description of what a customer support email could look like. So I wanted to
74:46 keep this one simple. It's pretty vague, but you could make this more detailed,
74:49 of course. And I just said: an email that's related to helping out a
74:52 customer. They may be asking questions about our policies or questions about
74:56 our products or services. And what we can do is we can give it specific
74:59 examples of like here are some past customer support emails and here's what
75:02 they've looked like. And that will make this thing more accurate. But in this
75:05 case, that's all we're going to do. And then I'm going to add one more category
75:08 that's just going to be other. And then for now, I'm just going to say: any email
75:14 that is not customer support related. Okay, cool.
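Conceptually, the text classifier node is doing something like the following with its chat model; this is a hypothetical reconstruction for intuition, not n8n's actual internal prompt:

```typescript
// Sketch: an LLM-backed two-way email classifier.
import OpenAI from "openai";

const openai = new OpenAI();

async function classify(emailText: string): Promise<"customer_support" | "other"> {
  const res = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          "Classify the email into exactly one category:\n" +
          "- customer_support: questions about our policies, products, or services\n" +
          "- other: any email that is not customer support related\n" +
          "Answer with only the category name.",
      },
      { role: "user", content: emailText },
    ],
  });
  const label = res.choices[0].message.content?.trim();
  return label === "customer_support" ? "customer_support" : "other";
}
```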
75:18 So now when we click out of here, we can see we have two different branches coming off of this node, which
75:21 means when the text classifier decides, it's either going to send it off this
75:24 branch or it's going to send it down this branch. So let's quickly hit play.
75:28 It's going to be reading the email using its brain. And now you can see it has
75:32 output in the customer support branch. We can also verify by clicking into
75:35 here. And we can see the customer support branch has one item and the other branch has
75:39 no items. And just to keep ourselves organized right now, I'm going to click
75:42 on the other branch and I'm just going to add an operation that says do nothing
75:46 just so we can see, you know, what would happen if it went this way for now. But
75:49 now is where we want to configure the logic of having an agent be able to read
75:55 the email, hit the vector database to get relevant information and then help
75:58 us write an email. So I'm going to click on the plus after the customer support
76:02 branch. I'm going to grab an AI agent. So this is going to be very similar to
76:05 the way we set up our AI agent in the previous workflow. So, it's kind of
76:08 building on top of each other. And this time, if you remember in the previous
76:12 one, we were talking to it with a connected chat trigger node. And as you
76:15 can see here, we don't have a connected chat trigger node. So, the first thing
76:19 we want to do is change that to define below. And this is where you
76:22 would think, okay, what do we actually want the agent to read? We want it to
76:25 read the email. So, I'm going to do the exact same thing as before. I'm going to
76:29 go into the Gmail trigger node, scroll all the way down until we can find the
76:32 actual email content, which is right here, and just drag that right in.
76:35 That's all we're going to do. And then we definitely want to add a system
76:39 message for this agent. We are going to open up the system message and I'm just
76:42 going to click on expression so I can expand this up full screen. And we're
76:46 going to write a system prompt. Again, for the sake of the video, keeping this
76:49 prompt really concise, but if you want to learn more about prompting, then
76:52 definitely check out my communities linked down below as well as this video
76:56 up here and all the other tutorials on my channel. But anyways, what we said
76:59 here is we gave it an overview and instructions. The overview says you are
77:03 a customer support agent for TechHaven. Your job is to respond to incoming
77:06 emails with relevant information using your knowledge base tool. And so when we
77:10 do hook up our Pinecone vector database, we're just going to make sure
77:13 to call it knowledge base, because that's what the agent thinks it has access to.
77:17 And then for the instructions, I said your output should be friendly and use
77:20 emojis and always sign off as Mr. Helpful from TechHaven Solutions. And
77:24 then one more thing I forgot to do actually is we want to tell it what to
77:27 actually output. So if we didn't tell it, it would probably output like a
77:31 subject and a body. But what's going to happen is we're going to reply to the
77:34 incoming email. We're not going to create a new one. So we don't need a
77:38 subject. So I'm just going to say output only the body content of the email. So
77:44 then we'll give it a try and see what that prompt looks like. We may have to
77:47 come back and refine it, but for now we're good. And as you know, we have
77:51 to connect a chat model and then we have to connect our Pinecone. So first of
77:54 all, chat model: we're going to use OpenRouter. And just to show you guys, we
77:58 can use a different type of model here. Let's use something else. Okay. So,
78:01 we're going to go with Google Gemini 2.0 Flash. And then we need to add the
78:04 Pinecone database. So, I'm going to click on the plus under tool. I'm going to search
78:09 for Pinecone Vector Store. Grab that. And the operation is going to be
78:13 retrieving documents as a tool for an AI agent. We're going to call this
78:20 one knowledgeBase, capital B. And we're going to once again just say: call this tool to
78:27 access policy and FAQ information. We need to set up the index as well as the
78:31 namespace. So sample, and then we're going to call the namespace
78:35 FAQ, because that's what it's called in our Pinecone right here, as you can see.
78:38 And then we just need to add our embeddings model and we should be good
78:42 to go, which is OpenAI's text-embedding-3-small. So we're going to
78:46 hit the play above the AI agent and it's going to be reading the email. As you
78:49 can see once again the prompt user message. It's reading the email. What is
78:52 the privacy policy? I'm concerned about my data and my passwords. Thank you. So
78:56 we're going to hit the play above the agent. We're going to watch it use its
78:59 brain. We're going to watch it call the vector store. And we got an error. Okay.
79:04 So, I'm getting this error, right? And it says provider returned error. And
79:09 it's weird, because basically why it's erroring is because of our chat
79:12 model. And it's weird because it goes green, right? So, anyways, what I
79:16 would do here is, if you're experiencing that error, it means there's something
79:19 wrong with your key. So, I would go reset it. But for now, I'm just going to
79:22 show you the quick fix. I can connect an OpenAI chat model real quick. And I
79:27 can run this here and we should be good to go. So now it's going to actually
79:31 write the email and output. Super weird error, but I'm honestly glad I caught
79:34 that on camera to show you guys in case you face that issue because it could be
79:37 frustrating. So we should be able to look at the actual output, which is,
79:41 "Hey there, thank you for your concern about privacy policy. At Tech Haven, we
79:45 take your data protection seriously." So then it gives us a quick summary with
79:48 data collection, data protection, cookies. If we clicked into here and
79:51 went to the privacy policy, we could see that it is in fact correct. And then it
79:55 also was friendly and used emojis like we told it to right here in the system
79:59 prompt. And finally, it signed off as Mr. Helpful from Tech Haven Solutions,
80:02 also like we told it to. So, we're almost done here. The last thing that we
80:05 want to do is we want to have it actually reply to this person that
80:09 triggered the whole workflow. So, we're going to click on the plus. We're going
80:13 to type in Gmail. Grab a Gmail node and we're going to do reply to a message.
80:17 Once we open up this node, we already know that we have it connected because
80:20 we did that earlier. We need to configure the message ID, the message
80:24 type, and the message. And so all I'm going to do is first of all, email type.
80:28 I'm going to do text. For the message ID, I'm going to go all the way down to
80:31 the Gmail trigger. And we have an ID right here. This is the ID we want to
80:35 put into the message ID so that it responds in line on Gmail rather than
80:39 creating a new thread. And then for the message, we're going to just drag in the
80:43 output from the agent that we just had write the message. So, I'm going to grab
80:46 this output, put it right there. And now you can see this is how it's going to
80:49 respond in email. And the last thing I want to do is click on add option, append
80:55 n8n attribution, and then just untick that. So then at the bottom of the
80:59 email, it doesn't say this was sent by n8n. So finally, we'll hit this test
81:04 step. We will see we get a success message that the email was sent. And
81:07 I'll head over to the email to show you guys. Okay, so here it is. This is the
81:11 one that we sent off to that inbox. And then this is the one that we just got
81:13 back. As you can see, it's in the same thread and it has basically the privacy
81:19 policy outlined for us. Cool. So, that's workflow number two. Couple ways we
81:22 could make this even better. One thing we could do is we could add a node right
81:25 here. And this would be another Gmail one. And we could basically add a label
81:31 to this email. So, if I grab add label to message, we would do the exact same
81:34 thing. We'd grab the message ID the same way we grabbed it earlier. So, now it
81:38 has the message ID of the label to actually create. And then we would just
81:41 basically be able to select the label we want to give it. So in this case, we
81:44 could give it the customer support label. We hit test step, we'll get
81:48 another success message. And then in our inbox, if we refresh, we will see that
81:51 that just got labeled as customer support. So you could add on more
81:55 functionality like that. And you could also down here create more sections. So
81:59 we could have finance, you know, a logic built out for finance emails. We could
82:02 have logic built out for all these other types of emails, and plug them into
82:07 different knowledge bases as well. Okay. So the third one we're going to do is a
82:11 LinkedIn content creator workflow. So, what we're going to do here is click on
82:14 add first step, of course. And ideally, you know, in production, what this
82:17 workflow would look like is a schedule trigger, you know. So, what you could do
82:20 is basically say every day I want this thing to run at 7:00 a.m. That way, I'm
82:23 always going to have a LinkedIn post ready for me at, you know, 7:30. I'll
82:27 post it every single day. And if you wanted it to actually be automatic,
82:30 you'd have to flick this workflow from inactive to active. And, you know, now
82:34 it says: your schedule trigger will now trigger executions on the schedule
82:37 you have defined. So now it would be working, but for the sake of this video,
82:40 we're going to turn that off and we are just going to be using a manual trigger
82:44 just so we can show how this works. But it's the same concept, right? It
82:48 would just start the workflow. So what we're going to do from here is we're
82:51 going to connect a Google sheet. So I'm going to grab a Google sheet node. I'm
82:55 going to click on get rows and sheet and we have to create our credential once
82:58 again. So we're going to create new credential. We're going to be able to do
83:02 OAuth to sign in, but we're going to have to go back to Google Cloud and we're
83:05 going to have to make sure that we have the Google Sheets API
83:08 enabled. So, we'll come in here, we'll click enable, and now once this is good
83:12 to go, we'll be able to sign in using OAuth2. So, very similar to what we just
83:16 had to do for Gmail in that previous workflow. But now, we can sign in. So,
83:20 once again, choosing my email, allowing it to have access, and then we're
83:23 connected successfully, and then giving this a good name. And now, what we can
83:26 do is choose the document and the sheet that it's going to be pulling from. So,
83:29 I'm going to show you. I have one called LinkedIn posts, and I only have one
83:33 sheet, but let's show you the sheet real quick. So, LinkedIn posts, what we have
83:38 is a topic, a status, and a content. And we're just basically going to be pulling
83:42 in one row where the status equals to-do, and then we are going to
83:46 create the content, upload it back in right here, and then we're going to
83:50 change the status to created. So, then this same row doesn't get pulled in
83:52 every day. So, how this is going to work is that we're going to create a filter.
83:56 So the first filter is going to be looking within the status column and it
84:01 has to equal to-do. And if we click on test step, we should see that we're
84:04 going to get like all of these items where there's a bunch of topics. But we
84:08 don't want that. We only want to get the first row. So at the bottom here, add
84:12 option. I'm going to say return only first matching row. Check that on. We'll
84:15 test this again. And now we're only going to be getting that top row to
84:19 create content on. Cool. So we have our first step here, which is just getting
84:23 the content from the Google sheet. Now, what we're going to do is we need to do
84:27 some web search on this topic in order to create that content. So, I'm going to
84:30 add a new node. This one's going to be called an HTTP request. So, we're going
84:34 to be making a request to a specific API. And in this case, we're going to be
84:38 using Tavily's API. So, go on over to tavily.com and create a free account.
84:42 You're going to get 1,000 searches for free per month. Okay, here we are in my
84:46 account. I'm on the free researcher plan, which gives me a thousand free
84:49 credits. And right here, I'm going to add an API key. We're going to name it,
84:54 create a key, and we're going to copy this value. And so, you'll start to get
84:56 to the point when you connect to different services, you always need to
85:00 have some sort of like token or API key. But anyways, we're going to grab this in
85:03 a sec. What we need to do now is go to the documentation that we see right
85:06 here. We're going to click on API reference. And now we have right here.
85:10 This is going to be the API that we need to use in order to search the web. So,
85:14 I'm not going to really dive into like everything about HTTP requests right
85:17 now. I'm just going to show you the simple way that we can get this set up.
85:21 So the first thing we're going to do: we obviously see that we're using an
85:25 endpoint called Tavily search, and we can see it's a POST request, which is
85:28 different than, like, a GET request, and we have all these different things we need
85:31 to configure, and it can be confusing. So all we want to do is on the top right we
85:35 see this curl command. We're going to click on the copy button. We're going to
85:40 go back into our n8n, hit import cURL, paste in the cURL command, hit
85:46 import, and now the whole node magically just basically filled itself in. So
85:50 that's really awesome. And now we can sort of break down what's going on. So
85:53 for every HTTP request, you have to have some sort of method. Typically, when
85:58 you're sending over data to a service, which in this case, we're going to be
86:01 sending over data to Tavily. It's going to search the web and then bring data
86:05 back to us. That's a POST request because we're sending over body data. If
86:09 we were just trying to access
86:14 something like bestbuy.com and we just wanted to scrape the information, that
86:17 could just be a simple GET request because we're not sending anything over.
86:20 Anyways, then we're going to have some sort of base URL and endpoint, which is
86:24 right here. The base URL we're hitting is api.tavily.com, and then the endpoint
86:30 we're hitting is /search. So back in the documentation you can see right
86:34 here we have /search, but if we were doing, like, an extract, we would do
86:37 /extract. So that's how you can kind of see the difference with the endpoints.
86:40 And then we have a few more things to configure. The first one of course is
86:44 our authorization. So in this case, we're doing it through a header
86:46 parameter. As you can see right here, the cURL command set it up. Basically
86:51 all we have to do is replace this token with our API key from Tavily. So I'm
86:56 going to go back here and copy that key. In n8n, I'm going to get rid of token and
87:00 just make sure that you have a space after the word bearer. And then you can
87:03 paste in your token. And now we are connected to Tavily. But we need to
87:07 configure our request before we send it off. So right here are the parameters
87:11 within our body request. And I'm not going to dive too deep into it. You can
87:13 go to the documentation if you want to understand them. The main thing
87:17 really is the query, which is what we're searching for. But we have other things
87:20 like the topic. It can be general or news. We have search depth. We have max
87:24 results. We have a time range. We have all this kind of stuff. Right now I'm
87:28 just going to leave everything here as default. We're only going to be getting
87:31 one result. And we're going to be doing a general topic. We're going to be doing
87:34 basic search. But right now, if we hit test step, we should see that this is
87:37 going to work. But it's going to be searching for who is Leo Messi. And
87:40 here's sort of like the answer we get back as well as a URL. So this is an
87:45 actual website we could go to about Lionel Messi and then some content from
87:51 that website. Right? So we are going to change this to an expression so that we
87:54 can put a variable in here rather than just a static hard-coded who is Leo
87:58 Messi. We'll delete that query. And all we're going to do is just pull in our
88:02 topic. So, I'm just going to simply pull in the topic of AI image generation.
88:06 Obviously, it's a variable right here, but this is the result. And then we're
88:09 going to test step. And this should basically pull back an article about AI
88:14 image generation. And you know, so here is a Deep AI link. We'll go to it.
88:19 And we can see this is an AI image generator. So maybe this isn't exactly
88:23 what we're looking for. What we could do is basically just say like, you know, we
88:28 could hardcode in search the web for. And now it's going to be saying search
88:31 the web for AI image generation. We could come in here and say yeah actually
88:34 you know let's get three results not just one. And then now we could test
88:37 that step and we're going to be getting a little bit different of a search
88:42 result: AI image generation uses text descriptions to create unique visuals.
88:45 And then now you can see we got three different URLs rather than just one.
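For reference, the request this node sends after the cURL import boils down to something like this; the field names follow Tavily's public /search API, and the key value is a placeholder:

```typescript
// Sketch: the same Tavily search the HTTP Request node performs.
const TAVILY_API_KEY = "tvly-..."; // placeholder; use your own key

async function searchWeb(topic: string) {
  const res = await fetch("https://api.tavily.com/search", {
    method: "POST", // we send body data, hence POST rather than GET
    headers: {
      Authorization: `Bearer ${TAVILY_API_KEY}`, // note the space after "Bearer"
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      query: `search the web for ${topic}`,
      topic: "general",
      search_depth: "basic",
      max_results: 3,
    }),
  });
  return res.json(); // { answer?, results: [{ title, url, content, ... }] }
}

// searchWeb("AI image generation") mirrors the node with the hard-coded
// "who is Leo Messi" query swapped for our dynamic topic variable.
```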
88:49 Anyways, so that's our web search. And now that we have a web search based on
88:53 our defined topic, we just need to write that content. So I'm going to click on
88:58 the plus. I'm going to grab an AI agent. And once again, we're not giving it the
89:01 connected chat trigger node to look at. That's nowhere to be found. We're going
89:05 to feed in the research that was just done by Tavily. So, I'm going to click on
89:10 expression to open this up. I'm going to say article one with a colon and I'm
89:15 just going to drag in the content from article one. I'm going to say article 2
89:20 with a colon and just drag in the content from article 2. And then I'm
89:25 going to say article 3 colon and just drag in the content from the third
89:29 article. So now it's looking at all three article contents.
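Concretely, the prompt field ends up holding expressions along these lines, assuming Tavily's response shape of a results array with a content field per result; dragging fields from the left-hand panel generates these for you:

```text
Article 1: {{ $json.results[0].content }}
Article 2: {{ $json.results[1].content }}
Article 3: {{ $json.results[2].content }}
```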
89:32 And now we just need to give it a system prompt on how to write a LinkedIn post. So open this
89:36 up. Click on add option. Click on system message. And now let's give it a prompt
89:41 about turning these three articles into a LinkedIn post. Okay. So I'm heading
89:45 over to my custom GPT for prompt architect. If you want to access this,
89:48 you can get it for free by joining my free school community. You'll join
89:51 that; it's linked in the description, and then you can just search for prompt
89:54 architect and you should find the link. Anyways, real quick, it's just asking
89:58 for some clarification questions. So, anyways, I'm just shooting off a quick
90:01 reply and now it should basically be generating our system prompt for us. So,
90:05 I'll check in when this is done. Okay, so here is the system prompt. I am going
90:09 to just paste it in here and, you know, disclaimer: this is
90:12 not perfect at all. Like, I don't even want this tool section at all because we
90:16 don't have a tool hooked up to this agent. We're obviously just going to
90:19 give it a chat model real quick. So, in this case, what I'm going to do is I'm
90:22 going to use Claude 3.5 Sonnet just because I really like the way that it
90:25 writes content. So, I'm using Claude through OpenRouter. And now, let's give
90:28 it a run and we'll just see what the output looks like. I'll just click
90:31 into here while it's running and we should see that it's going to read those
90:34 articles and then we'll get some sort of LinkedIn post back. Okay, so here it is.
90:39 The creative revolution is here and it's AI powered. Gone are the days of hiring
90:42 expensive designers or struggling with complex software. Today's entrepreneurs
90:46 can transform ideas into stunning visuals instantly using AI image
90:50 generators. So, as you can see, we have a few emojis. We have some relevant
90:53 hashtags. And then at the end, it also said this post, you know, it kind of
90:56 explains why it made this post. We could easily get rid of that. If all we want
90:59 is the content, we would just have to throw that in the system prompt. But now
91:03 that we have the post that we want, all we have to do is send it back into our
91:07 Google sheet and update that it was actually made. So, we're going to grab
91:14 another Sheets node. We're going to do update row in sheet. And this one's a
91:18 little different. It's not just grabbing stuff from a row; we're trying
91:18 to update stuff. So, we have to say what document we want, what sheet we want.
91:21 But now, it's asking us what column do we want to match on. So, basically, I'm
91:25 going to choose topic. And all we have to do is go all the way back down to the
91:28 sheet. We're going to choose the topic and drag it in right here. Which is
91:32 basically saying, okay, when this node gets called, whenever the topic equals
91:37 AI image generation, which is a variable, obviously, whatever
91:40 topic triggered the workflow is what's going to pop up here. We're going to
91:44 update that status. So, back in the sheets, we can see that the status is
91:47 currently to-do, and we need to change it to created in order for it to go
91:51 green. So, I'm just going to type in created, and obviously, you have to
91:53 spell this correctly the same way you have it in your Google Sheets. And then
91:56 for the content, all I'm going to do is we're just going to drag in the output
92:00 of the AI agent. And as you can see, it's going to be spitting out the
92:03 result. And now if I hit test step and we go back into the sheet, we'll
92:06 basically watch this change. Now it's created. And now we have the content of
92:10 our LinkedIn post as well with some justification for why it created the
92:14 post like this. And so like I said, you could basically have this be some sort
92:17 of, you know, LinkedIn content making machine where every day it's going to
92:21 run at 7:00 a.m. It's going to give you a post. And then what you could do also
92:24 is you can automate this part of it where you're basically having it create
92:27 a few new rows every day if you give it a certain sort of like general topic to
92:32 create topics on and then every day you can just have more and more pumping out.
92:35 So that is going to do it for our third and final workflow. Okay, so that's
92:39 going to do it for this video. I hope that it was helpful. You know, obviously
92:42 we connected to a ton of different credentials and a ton of different
92:46 services. We even made an HTTP request to an API called Tavily. Now, if you found
92:49 this helpful and you liked this sort of live step-by-step style and you're also
92:53 looking to accelerate your journey with n8n and AI automations, I would
92:56 definitely recommend to check out my paid community. The link for that is
92:58 down in the description. Okay, so hopefully those three workflows taught
93:02 you a ton about connecting to different services and setting up credentials.
93:05 Now, I'm actually going to throw in one more bonus step-by-step build, which is
93:09 actually one that I shared in my paid community a while back, and I wanted to
93:12 bring it to you guys now. So, definitely finish out this course, and if you're
93:14 still looking for some more and you like the way I teach, then feel free to check
93:17 out the paid community. The link for that's down in the description. We've
93:19 got a course in there that's even more comprehensive than what you're watching
93:22 right now on YouTube. We've also got a great community of people that are using
93:25 n8n to build AI automations every single day. So, I'd love to see you guys
93:28 in that community. But, let's move ahead and build out this bonus workflow. Hey
93:34 guys. So, today I wanted to do a step by step of an invoice workflow. And this is
93:39 because there's different ways to approach stuff like this, right? There's
93:42 the conversation of OCR. There's the conversation of maybe extracting text
93:46 from PDFs. There's the conversation of if you're always getting invoices in
93:50 the exact same format, you probably don't need AI because you could use like
93:54 a code node to extract the different parameters and then push that through.
93:58 So, that's the kind of stuff we're going to talk about today. And I haven't shown
94:01 this one on YouTube. It's not like a YouTube build, but it's not an agent.
94:04 It's an AI powered workflow. And I also wanted to talk about like just the
94:07 foundational elements of connecting pieces, thinking about the workflow. So,
94:11 what we're going to do first actually is we're going to hop into Excalidraw real
94:14 quick, and I'm going to create a new one. And we're just going to real quickly
94:20 wireframe out what we're doing. So, the first thing we're going to draw out here
94:24 is the trigger. So, we'll make this one yellow. We'll call this the
94:30 trigger. And what this is going to be is invoice. Sorry, we're going to do new
94:41 um Google Drive. So the Google Drive node, it's going to be triggering the
94:45 workflow and it's going to be when a new invoice gets dropped into
94:49 um the folder that we're watching. So that's the trigger. From there, and like
94:53 I said, this is going to be a pretty simple workflow. From there, what we're
94:56 going to do is handle a PDF. So the first thing to
95:02 understand is... actually, let me just put Google Drive over here. So the first
95:07 thing to understand from here is: what do the invoices
95:12 look like? These are the questions that we're going to have. So the first one is: what
95:19 do the invoices look like? Because that determines what happens next. If
95:23 they are PDFs that come in every single time in the same
95:27 format, then we could just extract the
95:33 text from them and use a code node to pull out the information we
95:39 need per parameter. Now, if it is a scanned invoice, where we're maybe
95:44 not able to extract text from it or turn it into a text
95:48 doc, we'll probably have to do some OCR element. But if it's a PDF that's
95:54 generated by a computer, we can extract the text, but the invoices aren't going
95:57 to come through the same every time, which is what we have in this case. I
96:00 have two example invoices. So, overall we know we're looking for things like
96:04 business name, client name, invoice number, invoice date, due date, payment
96:07 method, bank details, stuff like that, right? But both of these are
96:10 formatted very differently. They have the same information, but they're
96:15 formatted differently. So that's why we want to use an AI
96:19 information-extractor node. So that's one of the main questions. The
96:22 other ones we'd think about would be: where do they go? So once we get
96:30 them, where do they go? Also, the frequency of them coming in, and then
96:34 really any other action. So, building off of "where do they go?", it's
96:42 also: what actions will we take? Are we just going to
96:46 throw it in a CRM, or maybe a database,
96:49 or are we also going to send an automated follow-up based on
96:53 the email that we extract from it and say, "Hey, we received your invoice.
96:56 Thanks."? So, what does that look like? Those are the questions we
96:59 were initially going to ask. And then that helps us pretty much plan out the
97:05 next steps. So, because we found out that we want the same set of
97:21 fields every time, but the formats may not be
97:31 consistent, we will use an AI information extractor. That is just a long
97:43 sentence, so let me make it a little
97:46 smaller. Okay, so we have that. Then the extracted fields will be
98:08 pushed to our Google Sheet, which I'll just call the invoice database, and then
98:14 an email, an internal email, will be sent.
98:22 So, an email will be sent to the internal billing team. Okay, so this is what we've got, right?
98:30 We have our questions. We've kind of answered the questions. So, now we know
98:32 what the rest of the flow is going to look like. We already know this is not
98:35 going to be an agent. It's going to be a workflow. So, what we're going to do is
98:38 we're going to add another node right here, which is going to be, you know,
98:43 PDF comes in. And what we want to do is extract the text from that
98:51 PDF. Let's make this text smaller. So we're going to extract the text, and
98:55 we'll do this by using an extract-text node. Okay, cool. Now,
99:01 once we have the text extracted, what do we need to do? Let me just
99:09 move over these initial questions. So: we have the text
99:13 extracted. What comes next? What comes next is we need to
99:18 decide on the fields to extract. And how do we get this? We get this
99:29 from our invoice database. So let's quickly set up the invoice database. I'm
99:33 going to do this by opening up a Google Sheet, which we are just going to call
99:41 invoice DB. Oops. So now we need to figure out what we actually want to put
99:48 into our invoice DB. So the first thing: we're pretending
99:53 that our business is called Green Grass. So we don't need that. We don't need the
99:55 business information. We really just need the client information. So invoice
100:00 number will be the first thing we want. So, we're just setting up our database
100:04 here. So, invoice number. From there, we want to get client name,
100:08 client email,
100:21 client address, and then we want client phone. Okay, so we have those
100:29 five things. And let's see what else we want: probably the amount, so total
100:43 amount, and then invoice date and due date. Okay.
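(For reference, here's a small Python sketch of setting up that same header row programmatically. This assumes the gspread library with a service-account credential already configured, and a sheet named "invoice DB" that already exists; none of that is from the video, which does it by hand.)

    import gspread

    # Open the invoice DB sheet and write the eight column headers.
    gc = gspread.service_account()
    ws = gc.open("invoice DB").sheet1
    ws.append_row([
        "invoice number", "client name", "client email", "client address",
        "client phone", "total amount", "invoice date", "due date",
    ])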
100:52 So, we have these, what are these? Eight. Eight fields. And I'm just going
100:57 to change these colors so it looks visually better for us. So, here are the
101:01 fields we have and this is what we want to extract from every single invoice
101:04 that we are going to receive. Cool. So, we know we have these
101:09 eight things. So, we have our eight
101:17 fields to extract, and they're going to be pushed to the invoice DB. And then,
101:23 once we have these fields, we can basically create our email. So this
101:29 is going to be an AI node that's going to do the info extraction. So it's going to
101:34 extract the eight fields that we have over here. So we're going to send the data
101:39 into there and it's going to extract those fields. Once we extract those
101:47 fields, we probably don't need to set the data, because coming out of
101:51 this will basically be those eight fields. So, you know, every time, what's
101:57 going to happen is... actually, sorry, let me add another node here so we can
102:01 connect these. So what's going to come out of here is one item which will be
102:05 the one PDF and then what's coming out of here will be eight items every time.
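(To make those shapes concrete, here's a rough Python sketch of the data moving between nodes. n8n passes data around as a list of {"json": ...} items; all values below are placeholders, and, as gets clarified later in the build, the extractor really emits one item carrying eight properties rather than eight separate items.)

    # One item in: the full text of the one invoice PDF.
    item_in = [{"json": {"text": "full text of the one invoice PDF..."}}]

    # One item out, carrying the eight extracted fields (placeholder values):
    item_out = [{"json": {"output": {
        "invoice number": "INV-1001",
        "client name": "ABC Tech Solutions",
        "client email": "finance@example.com",
        "client address": "123 Example St",
        "client phone": "(555) 555-0100",
        "total amount": "$14,175",
        "invoice date": "March 8th",
        "due date": "March 22nd",
    }}}]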
102:09 So that's what we've got. We might also want to think about what happens if two
102:12 invoices get dropped in at the same time: do we handle that with a loop,
102:16 or just push through? But we won't worry about that yet. So we've got one item
102:19 coming in here. The node that's extracting the info will push out the
102:22 eight items and the eight items only. And then what we can do from there is
102:31 update the invoice DB. And then from there, and this could be, out of
102:35 here we do two things, or it could be sequential, if that makes
102:39 sense. So, what else do we need to do? We know that we also need
102:43 to email the billing team. And what I was saying there is we could either have it like this
102:51 where at the same time it branches off and it does those two things. And it
102:54 really doesn't matter the order because they're both going to happen either way.
102:57 So, for now, to keep the flow simple, we'll just do it sequentially: we're going to
103:02 email the billing team. And what's going to happen is, because this is
103:12 internal, we already know the billing email. So,
103:21 billing@example.com. This is what we're going to feed in, because we already know
103:23 the billing email. We don't have to extract it from anywhere. So we
103:30 have all the info we need. What else do we need to feed in
103:34 here? Some of these fields we'll have to filter in. So: some
103:42 of the extracted fields, because we want to say, hey, we got this invoice on this
103:50 date, to this client, and it's due on this date. So, we'll have some of the
103:53 extracted fields, we'll have the billing email, and then potentially
103:59 an email template. That's something we can think about, or we can just prompt it.
104:06 So, yeah. Okay. So what we want to
104:17 do here is actually this: the email has to be generated
104:22 somewhere. So before we feed into the email-the-billing-team node, let me actually
104:26 change this: we're going to have green nodes be AI, and blue nodes
104:30 are going to be not AI. So we're going to get another AI node right here which
104:35 is going to be craft email. So we'll connect these pieces once again.
104:43 And so I hope you guys can see that this is me trying to figure
104:46 out the workflow before we get into n8n, because then we can just plug in these
104:49 pieces, right? And I didn't even think about this part. Obviously, we would have
104:54 gotten into n8n and realized, okay, well, we need an email to actually
105:00 configure these next fields, but that's just how it works, right? So
105:05 anyways, this stuff is actually hooked up to the wrong place. We need it to
105:07 be hooked up over here, to the craft-email node. So, the email template will also be
105:14 hooked up here. And then the billing email will be hooked up... no, this one will still go here,
105:18 because that goes to the send-
105:21 email node, which is an action, and we'll be feeding in this as
105:31 well as the actual email. So the email that's written by AI will be fed in. And I
105:40 think that ends the process, right? So we'll just add a quick
105:47 yellow note over here. Oops. My colors always change, but I'm just trying to keep things
105:53 consistent. In here, we're just saying, okay, the process is going to
105:56 end now. Okay, so this is our workflow, right? New invoice PDF comes through. We
106:01 want to extract the text. We're using an extract text node which is just going to
106:05 be a static extract-from-PDF, convert-PDF-to-text type of thing.
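(For reference, a minimal Python sketch of what that extract-text step does for a computer-generated PDF. It assumes the pypdf library and a placeholder filename; the video uses n8n's built-in node instead.)

    from pypdf import PdfReader

    # Pull the text layer out of a computer-generated PDF.
    reader = PdfReader("invoice.pdf")  # placeholder filename
    text = "\n".join(page.extract_text() for page in reader.pages)
    print(text[:200])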
106:09 We'll get one item sent to an AI node to extract the eight fields we need. The
106:12 eight items will be fed into the next node which is going to update our Google
106:16 sheet. Um and I'll just also signify here this is going to be a Google sheet
106:19 because it's important to understand the integrations and like who's involved in
106:24 each process. So this is going to be AI. This is going to be AI and that's
106:29 going to be an extract node. This is going to be a Gmail node and then we
106:33 have the process end. Cool. So this is our wireframe. Now we can get into n8n
106:38 and start building it out. We can see that this is a very, very sequential flow. We
106:42 don't need an agent; we just need two AI nodes. So let's get into n8n and start
106:51 building this thing. So, we know what's starting this
106:55 process, which is a trigger. So, I'm going to grab a Google Drive
106:59 trigger. We're going to do "on changes to a specific file"... or, no, specific
107:04 folder, sorry: changes involving a specific folder. We're going to choose
107:08 our folder, which is going to be the projects folder, and we're going to be
107:12 watching for a file created. So, we've got our ABC Tech Solutions.
107:18 I'm going to download this as a PDF real quick. So, download as a PDF. I'm going
107:23 to go to my projects folder in the drive, and I'm going to drag this guy in
107:26 here. Um, there it is. Okay, so there's our PDF. We'll come in here and we'll hit
107:32 fetch test event. So, we should be getting our PDF. Okay, nice. We'll just make sure it's the
107:39 right one. So, we should see an ABC Tech Solutions invoice. Cool. So, I'm
107:42 going to pin this data just so we have it here. So, just for reference, pinning
107:46 data, all it does is just keeps it here. So, if we were to refresh this this
107:50 page, we'll still have our pinned data, which is that PDF to play with. But if
107:53 we would have not pinned it, then we would have had to fetch test event once
108:00 again. So, not a huge deal with something like this, but if you're maybe doing
108:03 webhooks or API calls, you don't want to have to do it every time. So you can pin
108:03 that data. Um or like an output of an AI node if you don't want to have to rerun the AI.
108:11 But anyway, so we have our PDF. We know what's next based on our wireframe. And
108:16 let me just call this the invoice flow wireframe. So we know next is we need to extract
108:23 text. So, perfect. We'll get right into n8n. We'll click on next and we will do
108:28 an extract from file. So let's see: we want to extract from PDF. And...
108:34 uh oh, what do we have here? We don't have any binary. So we
108:40 were on the right track here, but we forgot something: we
108:44 get the PDF's file ID from the trigger, but we don't actually have the file itself. So what we need to do
108:49 here first is download the file, because we need the binary to then feed into the extraction.
109:04 So we need the binary. Sorry if that's really small, but basically, in order to
109:11 extract the text, we need to download the file first to get the binary, and
109:16 then we can actually do that. So, a little thing we missed in the
109:19 wireframe, but not a huge deal, right? So, we're going to extend this one off.
109:23 We're going to do a Google Drive node once again, and we're going to look at
109:27 download file. So, now we can say, okay, we're downloading a file. Um, we can
109:32 choose from a list, but this has to be dynamic because it's going to be based
109:35 on that new trigger every time. So, I'm going to do it by ID. And now, on the left-
109:39 hand side, we can look for the file ID. So, I'm going to switch to schema real
109:44 quick, so we can find the ID of the file. We're just going to have to go
109:47 through. So, we have a permissions ID right here. I don't think that's the
109:51 right one. We have a spaces ID. I don't think that's the right one either. We're
109:56 looking for an actual file ID. So, let's see: parents, icon link, thumbnail link...
110:06 sometimes you just have to find it. I feel like I probably just
110:13 skipped right over it. Otherwise... IDs. Maybe it is this one. Yeah, I think...
110:21 okay, sorry, I think it is this one, because we see the name is right here
110:23 and the ID is right here. So, we'll try this. We're referencing that
110:27 dynamically. We also see in here we could do a Google file conversion, which
110:31 basically says: if it's Docs, convert it to HTML; if it's Drawings,
110:35 convert it to that format; and so on. There's not a PDF one, so
110:39 we'll leave this off, and we'll hit test step. So now we will see we got the
110:43 invoice. We can click view, and this is exactly what we're looking at here,
110:47 the invoice. So this is the correct one. Now, since we have it in our binary data
110:50 over here, we have binary. Now we can extract it from the file. So,
110:56 on the left are the inputs, and on the right is going to be our output. So we're
111:00 extracting from PDF. We're looking in the input binary field called data which
111:05 is right here. So I'll hit test step, and now we have text. So here's the actual
111:10 text, right: the invoice, the information we need. And this is
111:12 what we're going to pass over to the extractor. So let's go back to the
111:18 wireframe. We have our text extracted. Now, what we want to do is extract um
111:22 the specific eight fields that we need. So, hopping back into the workflow, we
111:26 know that this is going to be an AI node. So, it's going to be an
111:28 information extractor. First of all, we know that one item is
111:34 going in here, and that's right here for us in the table, which is the actual
111:37 text of the invoice. So, we can open this up and we can see this is the text
111:40 of the invoice. We want to define the fields using attribute descriptions. So, that's what it's
111:46 looking for. So, we can add our eight attributes. We know there's going to
111:48 be eight of them, right? So, we can create eight. But, let's just first of
111:52 all go into our database to see what we want. So, the first one's invoice
111:55 number. So, I'm going to copy this over here. Invoice number. And we just have
111:59 to describe what that is. So, I'm just going to say the number of the
112:03 invoice. And this is required. We're going to make them all required. So,
112:06 number of the invoice. Then we have the client name. Paste that in
112:11 here. These should all be pretty self-explanatory. So: the name of the
112:17 client, and we're going to make it required. Client email. This is going
112:22 to be a little bit repetitive, but: the email of the client. And let me just
112:31 quickly copy this for the next two. Client address. So, there's client
112:36 address, and we're going to make it required. And then what's the last one
112:48 here? Client phone. Paste that in there, which is obviously going to be the phone
112:53 number of the client. And here we can say, is this going to be a string or is
112:56 it going to be a number? I'm going to leave it right now as a string just
112:59 because over here on the left you can see the phone. We have parenthesis in
113:03 there. And maybe we want the format to come over with the parenthesis and the
113:07 little hyphen. So let's leave it as a string for now. We can always test and
113:10 we'll come back. But client phone, we're going to leave that. We have total
113:15 amount. Same reason here. I'm going to leave this one as a string because I
113:18 want to keep the dollar sign when we send it over to sheets and we'll see how
113:22 it comes over. But: the total amount of the invoice, required. What's coming next is invoice
113:31 date and due date. So, for invoice date and due date, we can say these are
113:36 going to be dates. So, we're changing the data type here. They're both
113:41 required. And: the date the invoice was sent. And then we're going to
113:47 say the date the invoice is due. So, we're going to make sure this works. If
113:49 we need to, we can get in here and make these descriptions more descriptive. But
113:53 for now, we're good. We'll see if we have any options: "You are an expert
113:56 extraction algorithm. Only extract relevant information from the text. If
113:59 you do not know the value of the attribute to extract, you may omit the
114:03 attribute's value." So, we'll just leave that as is, and we'll hit test step.
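(Conceptually, the information extractor boils down to something like this Python sketch: describe the attributes, ask a chat model for JSON back, and parse it. call_llm is a hypothetical stand-in for whatever model you connect; it is not an n8n API.)

    import json

    # The eight attribute descriptions from the build.
    ATTRIBUTES = {
        "invoice number": "the number of the invoice",
        "client name": "the name of the client",
        "client email": "the email of the client",
        "client address": "the address of the client",
        "client phone": "the phone number of the client, kept as a string",
        "total amount": "the total amount of the invoice, kept as a string",
        "invoice date": "the date the invoice was sent",
        "due date": "the date the invoice is due",
    }

    def extract_fields(invoice_text, call_llm):
        # call_llm: hypothetical function taking a prompt and returning text.
        prompt = (
            "You are an expert extraction algorithm. Only extract relevant "
            "information from the text. Return a JSON object with exactly "
            "these keys and their described meanings:\n"
            + json.dumps(ATTRIBUTES, indent=2)
            + "\n\nText:\n" + invoice_text
        )
        return json.loads(call_llm(prompt))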
114:08 It's going to be looking at this text. And of course, we're using AI, so we
114:12 have to connect a chat model. So, this will also alter the performance. Right
114:15 now, we're going to go with Google Gemini 2.0 Flash. We'll see if that's powerful
114:20 enough. I think it should be. And then, we're going to hit play once again. So,
114:22 now it's going to be extracting information using AI. And what's great
114:26 about this is that we already get everything out here in its own item. So,
114:30 it's really easy to map this now into our Google sheet. So, let's make sure
114:35 this is all correct. Invoice number: that looks good. I'm going to open up
114:38 the actual one. Yep. Client name, ABC: yep. Client email, finance at ABC
114:44 Tech: yep. Address and phone: we have address and phone. Perfect. We have total amount:
114:53 14,175. We have March 8th and March 22nd. If we go back up here: March 8th,
115:00 March 22nd. Perfect. So, that one extracted well. Okay, so we have one item coming out,
115:08 but technically there are eight properties in there. So, anyways, let's
115:12 go back to our wireframe. So, after we extracted the eight items, what
115:16 do we do next? We're going to put them into our Google Sheet um database. So,
115:21 what we know is we're going to grab a Google Sheets. We're going to do an
115:26 append row because we're adding a row. Um, we already have a credential
115:28 selected. So, hopefully we can choose our invoice database. It's just going to
115:32 be the first sheet, sheet one. And now what happens is we have to map the
115:36 columns. So, you can see these are draggable. We can grab each one. If I go
115:40 to schema, it's a little more apparent. So, we have these eight items. And it's
115:42 going to be really easy now that we use an information extractor because we can
115:46 just map, you know, invoice number to invoice number, client name, client
115:51 name, email, email. And it's referencing these variables because every time after
115:56 we do our information extractor, they're going to be coming out as JSON.output
115:59 and then invoice number. And then for client name, JSON.output client name. So
116:03 we have these dynamic variables that will happen every single time. And
116:06 obviously I'll show this when we do another example, but we can keep mapping
116:10 everything in. And we also did it in that order. So it's really really easy
116:14 to do. We're just dragging and dropping and we are finished. Cool. So if I hit
116:21 test step here, this is going to give us a message that basically says: here are the
116:25 fields. So, there are the fields; they're mapped correctly. Come
116:28 into the sheet: we now have this automatically updated in our
116:34 invoice database. And that's that. So let me just rename some of these nodes.
116:39 This is going to be "update database". This is "information extractor". This is "extract
116:44 from file". And I'm just going to say this one is "download binary". So now we know what's going on
116:50 in each step. And we'll go back to the wireframe real quick. What happens after
116:53 we update the database? Now we need to craft the email. And this is going to be
116:58 using AI. And what's going to go into this is some of the extracted fields and
117:01 maybe an email template. More realistically, what we're going to do is just a
117:08 system prompt. So, back into n8n, let's add an OpenAI "message a model" node. So
117:15 what we're going to do is choose our model to talk to. In this
117:19 case, we'll go with 4o mini. It should be powerful enough. And now we're going to
117:23 set up our system prompt and our user prompt. So at this point if you don't
117:27 understand the difference: the system prompt is the instructions. So we're telling this node
117:33 how to behave. So first, I'm going to change this node name to "create email",
117:38 because that's obviously what's going on, and it keeps us organized. And now,
117:41 how do we explain to this node what its role is? "You are an email
117:48 expert." Let me actually just open this up. "You will receive
117:58 invoice information. Your job is to notify the internal billing
118:07 team that an invoice was received/sent." Okay. So,
118:15 honestly, I'm going to leave it at that for now. It's really simple. If we
118:18 wanted to, we can get in here and change the prompting as far as like here is the
118:22 format. Here is the way you should be doing it. One thing I like to do is I
118:25 like to say, you know, this is like your overview. And then if we need to get
118:28 more granular, we can give it different sections like output or rules or
118:32 anything like that. I'm also going to say: "You are an email expert
118:38 for Green Grass Corp, named Greenie." Okay, so we have
118:49 Greenie from Green Grass Corp. That's our email expert that's going to email
118:53 the billing team every time this workflow happens. So that's the
118:58 overview. Now, in the user prompt, think of this as when you're talking to
119:01 ChatGPT. (Obviously, I had ChatGPT create these invoices.) In ChatGPT, when we say hello,
119:08 that's a user message, because it's an interaction, and it's
119:12 going to change every time. But behind the scenes in ChatGPT, OpenAI has a
119:16 system prompt that's basically like: you're a helpful assistant, you
119:19 help users answer questions. So this window right here that we type in
119:24 is our user message, and behind the scenes, telling the node how to act, is
119:30 our system prompt. Cool. So, in here, I like to have dynamic information go into
119:33 the user message, while I like to have static information in the actual system
119:37 prompt. The usual exception is
119:42 like giving it the current time and day because that's an expression. So
119:46 anyways, let's change this to an expression. Let's make this full screen.
119:50 We are going to be giving it the invoice information that it needs to write the
119:55 email, because that's what it's expecting. In the system prompt, we
119:59 said you will receive invoice information. So, first thing is going to
120:05 be invoice number. We are going to grab invoice number and just drag it in.
120:10 We're going to grab client name and just drag it in. So, it's going
120:14 to dynamically get these different things every time, right?
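(What those drag-ins amount to is simple string interpolation. A Python sketch, with placeholder values standing in for the extracted fields; in n8n the same thing happens through drag-in expressions.)

    # Build the user message from the extracted fields.
    fields = {
        "invoice number": "INV-1001",
        "client name": "ABC Tech Solutions",
        "client email": "finance@example.com",
        "total amount": "$14,175",
        "invoice date": "March 8th",
        "due date": "March 22nd",
    }
    user_message = "\n".join(f"{name}: {value}" for name, value in fields.items())
    print(user_message)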
120:19 So, let's say maybe we don't need the client email. Okay, maybe we
120:24 do; we want client email, so we'll give it that. But the billing team right now doesn't need
120:30 the address or phone, let's just say that. But we do want them
120:36 to know the total amount of that invoice. And we definitely want them to
120:40 know the invoice date and the invoice due date. So we can now drag in these two
120:48 things. So this was us just being able to customize what the AI node sees. Just
120:53 keep in mind if we don't drag anything in here, even if it's all on the input,
120:59 the AI node doesn't see any of it. So let's hit test step and we'll see the
121:02 type of email we get. We're going to have to make some changes. I already
121:04 know because you know we have to separate everything. But what it did is
121:08 it created a subject which is new invoice received and then the invoice
121:12 number. Dear billing team, I hope this message finds you well. We've received
121:15 an invoice that requires your attention and then it lists out some information
121:18 and then it also signs off: Greenie, Green Grass Corp. So, the first thing we want to do: if
121:25 we go back to our wireframe, what we have to send in, and we didn't document this well enough,
121:35 actually, but what goes into here is: in order to send an email,
121:40 we need a To, we need a subject, and we need the email body.
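(To make "To, subject, body" concrete, here's a minimal Python sketch of a send step using the standard library. The addresses, sender, and SMTP server are placeholders; in the real flow the subject and body come from the AI node.)

    import smtplib
    from email.message import EmailMessage

    subject = "New invoice received: INV-1001"                  # dynamic in the real flow
    body = "Dear billing team, we have received a new invoice..."  # dynamic in the real flow

    msg = EmailMessage()
    msg["To"] = "billing@example.com"    # hardcoded internal address
    msg["From"] = "greenie@example.com"  # placeholder sender
    msg["Subject"] = subject
    msg.set_content(body)

    with smtplib.SMTP("smtp.example.com") as server:  # placeholder server
        server.send_message(msg)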
121:48 So, those are the three things we need. The To is coming from here, so
121:52 we know that. And the subject and email are going to come from the craft-
121:58 email node. So, we have the To. And actually, I'm going to move this up
122:01 here, so now we can just see where we're getting all of our pieces from.
122:04 So, the To is coming from internal knowledge. This can be hardcoded. But
122:07 the subject and email are going to be dynamic, from the AI node. Cool. So, what
122:13 we want to do now is add a section called "output", where we tell it how to output the information:
122:26 output the following parameters separately, and we're just
122:33 going to say subject and email. So now it should be outputting two parameters
122:37 separately, but it's not going to because even though it says here's the
122:40 subject and then it gives us a subject and then it says here's the email and
122:43 gives us an email, they're still in one field. Meaning if we hook up another
122:48 node, which would be a Gmail send email, as we have here. Okay, so now this is the next
122:58 node. Here are the fields we need. But as you can see, coming out of the create
123:03 email AI node, we have this whole parameter called content which has the
123:07 subject and the email. And we need to get these split up so that we can drag
123:10 one into the subject and one into the message. Right? So, first of all, I'm just
123:14 making these expressions just so we can drag stuff in later. And so that's
123:20 what we need to do. And our fix there is: we come into here and we just check this
123:25 switch that says "output content as JSON", and then we'll rerun. And now
123:29 we'll get the subject and the email in two different
123:34 fields right here, which is awesome, because then we can open up our
123:38 send email node. We can grab our subject. It's going to be dynamic. And
123:41 we can grab our email. It's going to be dynamic. Perfect. We're going to change
123:45 this to text. And we're going to add an option down here. And we're just going
123:49 to say "append n8n attribution" and turn that off, because we just don't want to see the
123:55 message at the bottom that says this was sent by n8n. And if we go back to our
124:00 wireframe wherever that is over here, we know that this is the email that's going
124:03 to be coming through or we're going to be sending to every time because we're
124:06 sending internally. So we can put that right in here, not as a variable. Every
124:10 time, this is going to be sending to billing@example.com. So this really
124:13 can be fixed; it doesn't have to be an expression. Cool. So we will now hit test step, and we can
124:21 see that we got this email sent. So let me open up a new
124:26 tab and go into our Gmail. I will go to the sent items, and
124:34 we will see we just got this billing email. So obviously it was a fake email
124:37 but this is what it looks like. We've received a new invoice from ABC Tech.
124:40 Please find the details below. We got invoice number, client name, client
124:45 email, total amount, invoice date, due date. "Please process this
124:48 invoice accordingly." So that's perfect. We could also, if
124:54 we wanted to, we could prompt it a little bit differently to say, you know,
124:59 like this has been updated within the database and, um, you can check it out
125:03 here. So, let's do that real quick. What we're going to do,
125:06 because we've already updated the database, is come into our
125:09 Google Sheet, copy this link, and basically bake it into the prompt.
125:19 So, I'm going to give it a section called
125:26 "email": inform the billing team of the invoice; let them know we have also updated this
125:38 in the invoice database and they can view it here. And we'll just give them
125:43 this link to that Google Sheet. So every time, they'll just be able to send that
125:45 over. So I'm going to hit test step. We should see a new email over here, which
125:50 is going to include that link I hope. So there's the link. We'll run this email
125:55 tool again to send a new email. Hop back into Gmail. We got a new one. And
126:01 now we can see we have this link. So you can view it here. We've already updated
126:05 this in the invoice database. We click on the link. And now we have our
126:09 database as well. So cool. Now let's say at this point we're happy with our
126:13 prompting. We're happy with the email. This is done. If we go back to the
126:17 wireframe, the email is the last node. So, maybe just to make it look
126:21 consistent, we will just add something over here that just says "nothing".
126:25 And now we know that the process is done, because there's nothing to do. So this is
126:28 basically what we wireframed out. So we know that we're
126:32 happy with this process. We understand what's going on. But now, let's unpin
126:37 this data real quick and drop in another invoice, one that's formatted
126:41 differently. This XYZ one is formatted differently, but the
126:45 AI should still be able to extract all the information that we need. So I'm
126:48 going to come in here and download this one as a PDF. We have it right there. I'm going
126:54 to drop it into our Google Drive. So, we have XYZ Enterprises now. Come back into
127:00 the workflow and we'll hit fetch test event. Let's just make sure this is
127:03 the right one. So, XYZ Enterprises. Nice. And I'm just going to hit test workflow and
127:10 we'll watch it download, extract, get the information, update the database,
127:13 create the email, send the email, and then nothing else should happen after
127:19 that. So, boom, we're done. Let's click into our email. Here we have our new
127:23 invoice received. So it updated differently; the subject's dynamic,
127:27 because it was from XYZ, with a different invoice number. As you remember, the ABC
127:31 one started with "TH AI" and this one starts with "INV". So that's why the subject is different.
127:38 Dear billing team, we have received a new invoice from XYZ Enterprises. Please
127:42 find the details below. There's the number, the name, all this kind of
127:46 information. The total amount was 13,856. Let's go make sure that's right:
127:51 total amount, 13,856. March 8th, March 22nd once again. Is that
128:01 correct? March 8th, March 22nd. Nice. And finance at XYZ: XYZ. Perfect. Okay. The
128:05 invoice has been updated in the database. You can view it here. So let's
128:08 click on that link. Nice. We got our information populated into the
128:12 spreadsheet. As you can see, it all looks correct to me as well. Our strings
128:15 are coming through nice and our dates are coming through nice. So I'm going to
128:18 leave it as is. Now, keep in mind because these are technically coming
128:23 through as strings, um, that's fine for phone, but Google Sheets automatically
128:27 made these numbers, I believe. So, if we wanted to, we could sum these up because
128:31 they're numbers. Perfect. Okay, cool. So, that's how that works,
128:36 right? That's the email. We wireframed it out. We tested it with two
128:40 different types of invoices. Their formatting wasn't consistent, which
128:43 means we probably couldn't have used a code node, but the AI is able to read
128:47 this and extract it. As you can see right here, we got the same eight items
128:51 extracted that we were looking for. So, that's perfect. Cool. So,
129:00 I will attach the actual flow, and I will attach
129:06 just a picture of this wireframe, I suppose, in this post. And by now, you
129:12 guys have already seen that, I'm sure. But yeah, I hope this was helpful: the
129:16 whole process, the way that I approached it. And I know this was a 35-minute
129:20 build, so it's not the same as building something more complex. But as
129:24 far as a general workflow goes, you know, this is a pretty solid one to get
129:27 started with. It shows elements of using AI within a simple workflow that's
129:35 sequential, and it shows the way we have to reference our
129:38 variables and how we have to drag things in, and obviously the component of
129:42 wireframing out in the beginning to understand the full flow, at least 80 to
129:48 85% of it, before you get in there. So, cool. Hope you guys enjoyed this
129:52 one, and I will see you guys in the community. Thanks. All right,
129:55 I hope you guys enjoyed those step-by-step builds. Hopefully, right
129:57 now, you're feeling like you're in a really good spot with n8n and
130:01 everything starting to piece together. This next video we're going to move into
130:05 is about APIs because in order to really get more advanced with our workflows and
130:08 our AI agents, we have to understand the most important thing, which is APIs.
130:12 They let our n8n workflows connect to anything that you actually want to use.
130:15 So, it's really important to understand how to set them up. And when you understand
130:18 it, the possibilities are endless. And it's really not even that difficult. So,
130:21 let's break it down. If you're building AI agents, but you don't really
130:24 understand what an API is or how to use them, don't worry. You're not alone. I
130:27 was in that exact same spot not too long ago. I'm not a programmer. I don't know
130:31 how to code, but I've been teaching tens of thousands of people how to build real
130:34 AI systems. And what changed the game for me was when I understood how to use
130:38 APIs. So, in this video, I'm going to break it down as simple as possible, no
130:41 technical jargon, and by the end, you'll be confidently setting up API calls
130:45 within your own Agentic workflows. Let's make this easy. So the purpose of this
130:48 video is to understand how to set up your own requests so you can access any
130:52 API, because that's where the power truly comes in. And before we get into n8n and
130:56 we set up a couple live examples and I show you guys my thought process when
131:00 I'm setting up these API calls. First I thought it would just be important to
131:04 understand what an API really is. And APIs are so, so powerful. Because, let's
131:08 say we're building agents within n8n: basically, we can only do things within
131:12 n8n's environment unless we use an API to access some sort of server. So,
131:16 whether that's Gmail or HubSpot or Airtable, whatever we want to access
131:20 that's outside of n8n's own environment, we have to use an API call
131:24 to do so. And so that's why at the end of this video, when you completely
131:27 understand how to set up any API call you need, it's going to be a complete
131:31 game-changer for your workflows, and it's also going to unlock pretty much
131:35 unlimited possibilities. All right, now that we understand why APIs are
131:38 important, let's talk about what they actually do. So API stands for
131:42 application programming interface. And at the highest level in the most simple
131:45 terms, it's just a way for two systems to talk to each other: n8n and
131:49 whatever other system we want to use in our automations. So, keeping it limited
131:53 to us, it's our n8n AI agent and whatever we want it to interact with.
131:56 Okay, so I said we're going to make this as simple as possible. So let's do it.
132:00 What we have here is just a scenario where we go to a restaurant. So this is
132:03 us right here. And what we do is we sit down and we look at the menu and we look
132:06 at what food that the restaurant has to offer. And then when we're ready to
132:10 order, we don't talk directly to the kitchen or the chefs in the kitchen. We
132:13 talk to the waiter. So we'd basically look at the menu. We'd understand what
132:16 we want. Then we would talk to the waiter and say, "Hey, I want the chicken
132:19 parm." The waiter would then take our order and deliver it to the kitchen. And
132:23 after the kitchen sees the request that we made and they understand, okay, this
132:26 person wants chicken parm. I'm going to grab the chicken parm, not the salmon.
132:30 And then we're basically going to feed this back down the line through the
132:33 waiter all the way back to the person who ordered it in the first place. And
132:37 so that's how you can see we use an HTTP request to talk to the API endpoint and
132:42 receive the data that we want. And so now a little bit more of a technical
132:45 example of how this works in n8n. Okay, so here is our AI agent. And when it
132:48 wants to interact with a service, it first has to look at that service's API
132:53 documentation to see what is offered. Once we understand that, we'll read that
132:56 and we'll be ready to make our request and we will make that request using an
133:00 HTTP request. From there, that HTTP request will take our information and
133:04 send it to the API endpoint. The endpoint will look at what we ordered
133:07 and it will say, "Okay, this person wants this data. So, I'm going to go
133:10 grab that and send it back to the HTTP request." And
133:14 then the HTTP request is actually what delivers us back the data that we asked
133:17 for, and we know that it was available because we had to look at the API
133:21 documentation first. So, I hope that helps. I think that looking at it
133:24 visually makes a lot more sense, especially when you hear, you know, HTTP
133:28 API endpoint, all this kind of stuff. But really, it's just going to be this
133:31 simple. So now, let me show you an example of what this actually looks like in n8n,
133:34 and when you would use one and when you wouldn't need to use one. So here we
133:37 have two examples where we're accessing a service called OpenWeatherMap, which
133:41 basically just lets us grab the weather data from anywhere in the world. And so
133:44 on the left what we're doing is we're using open weather's native integration
133:48 within nadn. And so what I mean by native integration is just that when we
133:51 go into nadn and we click on the plus button to add an app and we want to see
133:54 like you know the different integrations. It has air tableable it
133:58 has affinity it has airtop it has all these AWS things. It has a ton of native
134:02 integrations and all that a native integration is is an HTTP request but
134:07 it's just like wrapped up nicely in a UI for us to basically fill in different
134:11 parameters. And so once you realize that it really clears everything up because
134:14 the only time you actually need to use an HTTP request is if the service you
134:19 want to use is not listed in this list of all the native integrations. Anyways,
134:23 let me show you what I mean by that. So like I said on the left we have Open
134:27 Weather Maps native integration. So basically what we're doing here is we're
134:29 sending over: okay, I'm using OpenWeatherMap, and I'm going to put in the
134:32 latitude and the longitude of the city that I'm looking for. And as you can see
134:36 over here, what we get back is Chicago as well as a bunch of information about
134:39 the current weather in Chicago. And so if you were to fill this out, it's super
134:42 super intuitive, right? All you do is put in the lat and long, you choose your
134:46 format as far as imperial or metric, and then you get back data. And that's the
134:49 exact same thing we're doing over here where we use an HTTP request to talk to
134:54 Open Weather's API endpoint. This just looks a little more scary and
134:57 intimidating, because we have to set it up ourselves. But if we zoom in, we can
135:00 see it's pretty simple. We're making a GET request to OpenWeatherMap's URL
135:05 endpoint. And then we're putting over the lat and the long, which is basically
135:09 the exact same from the one on the left. And then, as you can see, we get the
135:13 same information back about Chicago, and then some weather information about
135:16 Chicago. And so the purpose of that was just to show you guys that these native
135:20 integrations, all we're doing is we're accessing some sort of API endpoint. it
135:24 just looks simpler and easier and there's a nice user interface for us
135:27 rather than setting everything up manually. Okay, so hopefully that's
135:30 starting to make a little more sense. Let's move down here to the way that I
135:34 think about setting up these HTTP requests, which is we're basically just
135:37 setting up filters and making selections. All we're doing is we're
135:42 saying, okay, I want to access server X. When I access server X, I need to tell
135:45 it basically, what do I want from you? So, it's the same way when you're going
135:48 to order some pizza. You have to first think about which pizza shop do I want
135:52 to call? And then once you call them, it's like, okay, I need to actually
135:54 order something. It has to be small, medium, large. It has to be pepperoni or
135:58 cheese. You have to tell it what you want and then they will send you the
136:01 data back that you asked for. So when we're setting these up, we basically
136:04 have five main things to look out for. The first one you have to set every
136:08 time, which is the method. And the two most common are going to be a GET or a
136:11 POST. Typically, a GET is when you're just going to access an endpoint and you
136:14 don't have to send over any information; you're just going to get something back.
136:17 But a POST is when you're going to send over certain parameters and certain data
136:21 and say okay using this information send me back what I'm asking for. The great
136:24 news is, and I'll show you later when we get into n8n to actually do a live
136:29 example, the documentation will always tell you whether it's a GET or a POST. Then
136:32 the next thing is the endpoint. You have to tell it like which website or you
136:35 know which endpoint you want to actually access which URL. From there we have
136:38 three different parameters to set up. And also just realize that this one
136:43 should say body parameters. But this used to be the most confusing part to
136:46 me, but it's really not too bad at all. So, let's break it down. So, keep in
136:49 mind when we're looking at that menu, that API documentation, it's always
136:52 going to basically tell us, okay, here are your query parameters, here are your
136:55 header parameters, and here are your body parameters. So, as long as you
136:57 understand how to read the documentation, you'll be just fine. But
137:01 typically, the difference here is that when you're setting up query parameters,
137:04 this is basically just adding a few filters. So if you search pizza on
137:08 Google, you'll get google.com/search, which would be Google's endpoint. And then we would have a
137:14 question mark, then q equals, and then a bunch of different filters. So as you
137:16 can see right here, the first filter is just q equals pizza. And the q, you
137:21 know, stands for query. And you don't even have to understand
137:23 that. That's just me showing you a real example of how that works.
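(A tiny Python sketch of the same idea, building a URL with a query parameter:)

    from urllib.parse import urlencode

    # Query parameters are key=value filters appended to the endpoint
    # after a question mark, exactly like the Google example.
    endpoint = "https://www.google.com/search"
    url = endpoint + "?" + urlencode({"q": "pizza"})
    print(url)  # https://www.google.com/search?q=pizza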
137:26 From there, we have to set up a header parameter, which is pretty much always
137:30 going to exist. And I basically just think of header parameters as, you know,
137:34 authorizing myself. So, usually when you're doing some sort of API where you
137:37 have to pay, you have to get a unique API key and then you'll send that key.
137:40 And if you don't put your key in, then you're not going to be able to get the
137:43 data back. So, like if you're ordering a pizza and you don't give them your
137:46 credit card information, they're not going to send you a pizza. And usually
137:49 an API key is something you want to keep secret because let's say, you know, you
137:53 put 10 bucks into some sort of API that's going to create images for you.
137:56 If that key gets leaked, then anyone could use that key and could go create
138:00 images for themselves for free, but they'd be running down your credits. And
138:03 these can come in different forms, but I just wanted to show you a really common
138:06 one is, you know, you'll have your key value pairs where you'll put
138:10 authorization as the name and then in the value you'll put bearer space your
138:15 API key or in the name you could just put API_key and then in the value you'd
138:19 put your API key. But once again, the API documentation will tell you how to
138:23 configure all this. And then finally, the body parameters if you need to send
138:26 something over to get something back. Let's say we're, you know, making an API
138:31 call to our CRM and we want to get back information about John. We could send
138:35 over something like name equals John. The server would then grab all records
138:39 that have name equal John and then it would send them back to you. So those
138:42 are basically like the five main things to look out for when you're reading
138:45 through API documentation and setting up your HTTP request. But the beautiful
138:51 thing about living in 2025 is that we now have the most beautiful thing in the
138:55 world, which is a curl command. And so what a curl command is is it lets you
138:59 hit copy, and then you can basically just import that curl into n8n, and it will
139:03 pretty much set up the request for you. Then at that point it really is just
139:07 like putting in your own API key and tweaking a few things if you want to. So
139:11 let's take a look at this curl statement for a service called Tavily. As you can
139:14 see, the endpoint is api.tavily.com. All this basically does is
139:18 it lets you search the internet. So you can see here this curl statement tells
139:21 us pretty much everything we need to know to use this. So it's telling us
139:24 that it's going to be a post. It's showing us the API endpoint that we're
139:27 going to be accessing. It shows us how to set up our header. So that's going to
139:30 be authorization and then it's going to be bearer space API token. It's
139:34 basically just telling us that we're going to get this back in JSON format.
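(Putting those pieces together, here's a hedged Python sketch of the request that curl statement describes: method, endpoint, a header parameter for auth, and body parameters as JSON. The parameter names follow this walkthrough; check Tavily's live docs before relying on them, and the key is a placeholder.)

    import requests

    response = requests.post(
        "https://api.tavily.com/search",
        headers={"Authorization": "Bearer YOUR_API_KEY"},  # header parameter
        json={                                             # body parameters, the "filters"
            "query": "who is Leo Messi?",
            "topic": "general",
            "search_depth": "basic",
            "max_results": 1,
        },
    )
    print(response.json())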
139:37 And then you can see all of these different key value pairs right here in
139:40 the data section. And these are basically going to be body parameters
139:43 where we can say, you know, query is "who is Leo Messi?" So that's what we'd be
139:47 searching the internet for. We have topic equals general. We have search
139:50 depth equals basic. So hopefully you can see all of these are just different
139:54 filters where we can choose okay do we want you know do we want one max result
139:58 or do we want four or do we have a time range or do we not. So this is really
140:02 just at the end of the day. It's basically like ordering Door Dash
140:05 because what we would have up here is you know like what actual restaurant do
140:09 we want to order food from? We would put in our credit card information. We would
140:12 say do we want this to be delivered to us? Do we want to pick it up? We would
140:15 basically say you know do you want a cheeseburger? No pickles. No onions.
140:18 Like what are the different things that you want to flip? Do you want a side? Do
140:21 you want fries? Or do you want salad? Like, what do you want? And so once you
140:25 get into this mindset where all I have to do is understand this documentation
140:28 and just tweak these little things to get back what I want, it makes setting
140:33 up API calls so much easier. And if another thing that kind of intimidates
140:37 you is the aspect of JSON, it shouldn't because all it is is
140:41 a key value pair like we kind of talked about. You know, this is JSON right here
140:44 and you're going to send your body parameter over as JSON and you're also
140:48 going to get back JSON. So the more and more you use it, you're going to
140:51 recognize like how easy it is to set up. So anyways, I hope that that made sense
140:54 and broke it down pretty simply. Now that we've seen like how it all works,
140:58 it's going to be really valuable to get into n8n. I'm going to open up an API
141:02 documentation and we're just going to set up a few requests together and we'll
141:06 see how it works. Okay, so here's the example. You know, I did OpenWeather's
141:09 native integration, and then also OpenWeather as an HTTP request. And you can
141:13 see it was like basically the exact same thing. Um, so let's say that what we
141:17 want to do is we want to use Perplexity, which if you guys don't know what
141:20 Perplexity is, it is basically, you know, kind of similar to ChatGPT, but it
141:23 has really good like internet search and research. So let's say we wanted to use
141:28 this and hook it up to an AI agent, so it can do web search for us. But as you
141:33 can see, if I type in Perplexity, there's no native integration for
141:37 Perplexity. So that basically signals to us, okay, we can only access Perplexity
141:41 using an HTTP request. And real quick side note, if you're ever thinking to
141:45 yourself, hm, I wonder if I can have my agent interact with blank. The answer is
141:49 yes, if there's API documentation. And all you have to do typically to find out
141:53 if there's API documentation is just come in here and be like, you know,
141:56 Gmail API documentation. And then we can see the Gmail API is a RESTful API, which
142:01 means it has an API, and we can use it within our automations. Anyways, getting
142:04 back to this example of setting up a Perplexity HTTP request: we have our
142:08 HTTP request right here and it's left completely blank. So, as you can see, we
142:12 have our method, we have our endpoint, we have query, header, and body
142:16 parameters, but nothing has been set up yet. So, what we need to do is we would
142:19 head over to Perplexity, as you can see right here. And at the bottom, there's
142:22 this little thing called API. So, I'm going to click on that. And this opens
142:26 up this little page. And so, what I have here is Perplexity's API. If I click on
142:30 developer docs, and then right here, I have API reference, which is integrate
142:34 the API into your workflows, which is exactly what we want to do. This page is
142:38 where people might get confused and it looks a little bit intimidating, but
142:40 hopefully this breakdown will show you how you can understand any API doc,
142:44 especially if there's a curl command. So, all I'm going to do first of all is
142:48 I'm just going to right away hit copy and make sure you know you're in curl.
142:51 If you're in Python, it's not the same. So, click on curl. I copied this curl
142:54 command, and I'm going to come back into n8n, hit import curl, and all I have to do
142:59 is paste. Click import and basically what you're going to see is this HTTP
143:02 request is going to get basically populated for us. So now we have the
143:06 method has been changed to post. We have the correct URL which is the API
143:10 endpoint which basically is telling this node, okay, we're going to use
143:13 Perplexity's API. You can see that the curl had no query parameters. So that's
143:17 left off. It did turn on headers which is basically just having us put our API
143:21 key in there. And then of course we have the JSON body that we need to send over.
143:24 Okay. So at this point what I would do is now that we have this set up, we know
143:28 we just need to put in a few things of our own. So the first thing to tackle is
143:32 how do we actually authorize ourselves with Perplexity? All right.
143:35 So, I'm back in Perplexity. I'm going to go to my account and click on settings.
143:37 And then, all I'm going to do is find where I can get
143:41 my API key. So, on the left-hand side, if I go all the way down, I can see API
143:45 keys. So, I'll click on that. And all this is going to do is this shows my API
143:48 key right here. So, I'll have to click on this, hit copy, come back into n8n,
143:52 and then all I'm going to do is I'm going to just delete where this says
143:55 token, but I'm going to make sure to leave a space after bearer and hit
144:00 paste. So now this basically authorizes us to use Perplexity's endpoint.
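(For reference, roughly the same request as a Python sketch: the curl import set the method, endpoint, and body, and the only thing we added was the key. The key and model name are placeholders; use whatever the current Perplexity docs list.)

    import requests

    response = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": "Bearer YOUR_API_KEY"},  # note the space after "Bearer"
        json={
            "model": "sonar",  # placeholder model name
            "messages": [
                {"role": "system", "content": "Be precise and concise."},
                {"role": "user", "content": "How many stars are there in our galaxy?"},
            ],
        },
    )
    print(response.json())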
144:04 And now if we look down at the body request, we can see we have this thing
144:08 set up for us already. So if I hit test step, this is real quick going to make a
144:12 request over. It just hit Perplexity's endpoint. And as you can see, it came
144:15 back with data. And what this did is it basically searched Perplexity for how
144:20 many stars are there in our galaxy. And that's where right here we can see the
144:23 Milky Way galaxy, which is our galaxy, is estimated to contain between 100
144:27 billion and 400 billion stars, blah blah blah. So we know basically okay if we
144:30 want to change how this endpoint works what data we're going to get back this
144:34 right here is where we would change our request and if we go back into the
144:37 documentation we can see what else we have to set up. So the first thing to
144:40 notice is that there's a few things that are required and then some things that
144:43 are not. So right here we have you know authorization that's always required.
144:47 The model is always required like which perplexity model are we going to use?
144:51 The messages are always required. So this is basically a mix of a system
144:55 message and a user message. So here the example is be precise and concise and
144:58 then the user message is how many stars are there in the galaxy. So if I came
145:03 back here and I said you know um be funny in your answer. So I'm basically
145:08 telling this model how to act and then instead of how many stars are there in
145:11 the galaxy I'm just going to say how long do cows live? And I'll make another
145:16 request off to perplexity. So you can see what comes back is this longer
145:20 content. So it's not being precise and concise and it says so you're wondering
145:25 how long cows live. Well, let's move into the details. So, as you can see,
145:28 it's being funny. Okay, back into the API documentation. We have a few other
145:31 things that we could configure, but notice how these aren't required the
145:35 same way that these ones are. So, we have max tokens. We could basically put
145:39 in an integer and say how many tokens do you want to use at the maximum. We could
145:42 change the temperature, which is, you know, like how random the response would
145:45 be. And this one says, you know, it has to be between zero and two. And as we
145:48 keep scrolling down, you can see that there's a ton of other little levers
145:51 that we can just tweak a little bit to change the type of response that we get
145:55 back from Perplexity. And so once you start to read more and more API documentation,
145:58 you can understand how you're really in control of what you get back from the
146:02 server. And also you can see like, you know, sometimes you have to send over
146:05 booleans, which is basically just true or false. Sometimes you can only send
146:09 over numbers. Sometimes you can only send over strings. And sometimes it'll
146:12 tell you, you know, like what this value will default to, and also what are
146:15 the only accepted values that you actually could fill out. So for example,
146:18 if we go back to this temperature setting, we can see it has to be a
146:22 number and if you don't fill that out, it's going to be 0.2. But we can also
146:26 see that if you do fill this out, it has to be between zero and two. Otherwise,
146:30 it's not going to work.
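As a rough sketch, a body that also sets those optional levers might look like this (the max_tokens and temperature values here are just illustrative, not recommendations):

```json
{
  "model": "sonar",
  "messages": [
    {"role": "system", "content": "Be funny in your answer."},
    {"role": "user", "content": "How long do cows live?"}
  ],
  "max_tokens": 500,
  "temperature": 0.7
}
```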
146:34 Okay, cool. So that's basically how it works. We just set up an HTTP request and we change the system
146:36 prompt and the user prompt, and that's how we can customize this thing to work for us. And that's
146:40 really cool as a node because we can set up, you know, a workflow to pass over
146:44 some sort of variable into this request. So it searches the web for something
146:47 different every time. But now let's say we want to give our agent access to this
146:50 tool and the agent will decide what am I going to search the web for based on
146:54 what the human asks me. So it's pretty much the exact same process. We'll click
146:57 on add a tool and we're going to add an HTTP request tool, only because Perplexity
147:01 doesn't have a native integration. And then once again you can see we have an
147:04 import curl button. So if I click on this and I just import that same curl
147:07 that we did last time once again it fills out this whole thing for us. So we
147:11 have post, we have the perplexity endpoint, we have our authorization
147:14 bearer, but notice we have to put in our token once again. And so a cool little
147:18 hack is let's say you know you're going to use perplexity a lot. Rather than
147:22 having to go grab your API key every single time, what we can do is we can
147:25 just send it over right here in the authentication tab. So let me show you
147:29 what I mean by that. If I click into authentication, I can click on generic
147:33 credential type. And then from here, I can basically choose, okay, is this a
147:37 basic auth, a bearer auth, all this kind of stuff. A lot of times it's just going to
147:39 be header auth. So that's why we know right here we can click on header auth.
147:42 And as you can see, we know that because we're sending this over as a header
147:45 parameter and we just did this earlier and it worked. So as you can see, I have
147:50 header auths already set up. I probably already have a Perplexity one set up right
147:53 here. But I'm just going to go ahead and create a new one with you guys to show
147:56 you how this works. So I just create a new header auth and all we have to do is
148:01 the exact same thing that we had down in the request that we just sent over which
148:04 means in the name we're just going to type in authorization with a capital A
148:08 and once again we can see in the API docs this is how you do it. So
148:11 authorization and then we can see that the value has to be capital-B Bearer,
148:16 space, API token. So I'm just going to come in here and type Bearer, space, API token,
148:21 and then all I have to do is, you know, first of all, name this so we can
148:26 save it. And then if I hit save, now every single time we want to use Perplexity's
148:30 endpoint we already have our credentials saved. So that's great and then we can
148:33 turn off the headers down here because we don't need to send it over twice. So
148:37 now all we have to do is change this body request a little bit just to make
148:41 it more dynamic. So in order to make it dynamic the first thing we have to do is
148:44 change this to an expression. Now, we can see that we can basically add a
148:48 variable in here. And what we can do is we can add a variable that basically
148:52 just tells the AI model, the AI agent, here is where you're going to send over
148:57 your internet search query. And so we already know that all that that is is
149:01 the user content right here. So if I delete this and basically if I do two
149:06 curly braces and then within the curly braces I do a dollar sign and I type in
149:10 from, I can grab a fromAI function. And this fromAI function just indicates to
149:15 the AI agent I need to choose something to send over here. And you guys will see
149:19 an example and it will make more sense. I also did a full video breaking this
149:21 down. So if you want to see that, I'll tag it right up here. Anyways, as you
149:25 can see, all we have to really do is enter in a key. So I'm just going to do,
149:28 you know, two quotes and within the quote, I'm going to put in search term.
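So the user content in the JSON body ends up holding an n8n expression roughly like this (search_term is just the key we typed; $fromAI can also take extra arguments like a description, depending on your n8n version):

```
"messages": [
  {"role": "system", "content": "Be precise and concise."},
  {"role": "user", "content": "{{ $fromAI('search_term') }}"}
]
```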
149:32 And so now the agent will be reading this and say, okay, whenever the user
149:35 interacts with me and I know I need to search the internet, I'm just going to
149:39 fill this whole thing in with the search term. So, now that that's set up, I'm
149:43 just going to change this request to um actually I'm just going to call it web
149:46 search to make that super intuitive for the AI agent. And now what we're going
149:49 to do is we are going to talk to the agent and see if it can actually search
149:53 the web. Okay, so I'm asking the AI agent to search the web for the best
149:57 movies. It's going to think about it. It's going to use this tool right here
150:00 and then we'll basically get to go in there and we can see what it filled in
150:03 in that search term placeholder that we gave it. So, first of all, the answer it
150:09 gave us was IMDb, top 250, um, Rotten Tomatoes, all this kind of stuff, right?
150:12 So, that's movie information that we just got from Perplexity. And what I can
150:16 do is click into the tool and we can see in the top left it filled out search
150:20 term with best movies. And we can even see that in action. If we come down to
150:23 the body request that it sent over and we expand this, we can see on the right
150:27 hand side in the result panel, this is the JSON body that it sent over to
150:32 Perplexity and it filled it in with best movies. And then of course what we got
150:36 back was our content from Perplexity, which is, you know, here are some of the
150:40 best movies across major platforms. All right, then real quick before we wrap up
150:43 here, I just wanted to talk about some common responses that you can get from
150:47 your HTTP requests. So the rule of thumb to follow is if you get data back and
150:52 you get a 200, you're good. Sometimes you'll get a response back, but you
150:55 won't explicitly see a 200 message. But if you're getting the data back, then
150:59 you're good to go. And a quick example of this is down here we have that HTTP
151:02 request which we went over earlier in this video where we went to the OpenWeatherMap
151:06 API and you can see down here we got code 200 and there's data coming
151:11 back and 200 is good that's a success code. Now if you get a request in the
151:15 400s that means that you probably set up the request wrong. So 400 bad request
151:19 that could mean that your JSON's invalid. It could just mean that you
151:22 have, like, an extra quotation mark or, you know, an extra comma, something
151:26 as silly as that. So let me show a quick example of that. We're going to test
151:29 this workflow and what I'm doing is I'm trying to send over a query to Tavily. And
151:32 you can see what we get is an error that says JSON parameter needs to be valid
151:36 JSON. And this would be a 400 error. And the issue here is if we go into the JSON
151:40 body that we're trying to send over, you can see in the result panel, we're
151:44 trying to send over a query that has basically two sets of quotation marks.
151:47 If you can see that. But here's the great news about JSON. It is so
151:51 universally used and it's been around for so long that we could basically just
151:55 copy the result over to ChatGPT. Paste it in here and say, I'm getting an error
151:59 message that says JSON parameter needs to be valid JSON. What's wrong with my
152:02 JSON? And as you can see, it says the issue with your JSON is the use of
152:06 double quotes around the string value in this line. So now we'd be able to go fix
152:09 that. And if we go back into the workflow, we take away these double
152:13 quotes right here. Test the step again. Now you can see it's spinning and it's
152:16 going to work. And we should get back some information about pineapples on
152:19 pizza. Another common error you could run into is a 401, meaning unauthorized.
152:23 This typically just means that your API key is wrong. You could also get a 403,
152:26 which is forbidden. That just means that maybe your account doesn't have access
152:30 to this data that you're requesting or something like that. And then another
152:33 one you could get is a 404, which sometimes you'll get that if you type in
152:36 a URL that doesn't exist. It just means this doesn't exist. We can't find it.
152:39 And a lot of times when you're looking at the actual API documentation that you
152:43 want to set up a request to. So here's an example with Tavily, it'll show you
152:47 what typical responses could look like. So here's one where you know we're using
152:50 Tavily to search for who is Leo Messi. This was an example we looked at
152:54 earlier. And with a 200 response, we are getting back like a query, an answer
152:58 results, stuff like that. We could also see we could get a 400 which would be
153:02 for bad request, you know, invalid topic. We could have a 401 which means
153:06 invalid API key. We could get all these other ones like 429, 432, but in general
153:12 400 is bad. And then even worse is a 500. And this just basically means
153:15 something's wrong with the server. Maybe it doesn't exist anymore or there's a
153:19 bug on the server side. But the good news about a 500 is it's not your fault.
153:22 You didn't set up the request wrong. It just means something's wrong with the
153:26 server. And it's really important to know that because if you think you did
153:28 something wrong, but it's really not your fault at all, you may be banging
153:32 your head against the wall for hours.
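To put the whole rule of thumb from this section in one place:

- 200 (and the 2xx range): success, you got your data back.
- 400 bad request: usually malformed JSON or a wrong parameter on your end.
- 401 unauthorized: your API key is missing or wrong.
- 403 forbidden: your account doesn't have access to that resource.
- 404 not found: the URL or resource doesn't exist.
- 429 too many requests: you're being rate limited, so slow down or wait.
- 500 (and the 5xx range): a server-side problem, not your fault.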
153:35 So anyways, what I wanted to highlight here is there's never just, like, a one-size-fits-all where I know how to set up
153:39 this one API call so I can just set up every single other API call the exact
153:42 same way. The key is to really understand how do you read the API
153:45 documentation? How do you set up your body parameters and your different
153:48 header parameters? And then if you start to run into issues, the key is
153:52 understanding and actually reading the error message that you're getting back
153:55 and adjusting from there. All right, so that's going to do it for this video.
153:58 Hopefully this has left you feeling a lot more comfortable with diving into
154:02 API documentation, walking through it just step by step using those curl
154:06 commands and really just understanding all I'm doing is I'm setting up filters
154:09 and levers here. I don't have to get super confused. It's really not that
154:12 technical. I'm pretty much in complete control over what my API is going to
154:16 send me back. The same way I'm in complete control when I'm, you know,
154:19 ordering something on DoorDash or ordering something on Amazon, whatever
154:23 it is. Hopefully by now the concept of APIs and HTTP requests makes a lot more
154:27 sense. But really, just to drive it home, what we're going to do is hop into
154:30 some actual setups in n8n of connecting to some different popular APIs and walk
154:35 through a few more step by steps just to really make sure that we understand the
154:38 differences that can come with different API documentation and how you read it
154:41 and how you set up stuff like your credentials and the body requests. So,
154:45 let's move into this next part, which I think is going to be super valuable to
154:49 see different API calls in action. Okay, so in n8n, when we're working with a
154:52 large language model, whether that's an AI agent or just like an AI node, what
154:57 happens is we can only access the information that is in the large
155:00 language models training data. And a lot of times that's not going to be super
155:04 up-to-date and real time. So what we want to do is access different APIs that
155:08 let us search the web or do real-time search. And what we saw earlier in that
155:13 third step-by-step workflow was we used a tool called Tavily and we accessed it
155:17 through an HTTP request node which as you guys know looks like this. And we
155:21 were able to use this to communicate with Tavily's API server. So if we ever
155:25 want to access real-time information or do research on certain search terms, we
155:30 have to use some sort of API to do that. So, like I said, we talked about Tavily,
155:34 but in this video, I'm going to help you guys set up Perplexity, which if you
155:38 don't know what it is, it's kind of like ChatGPT, but it's really, really good
155:42 for web search and in-depth research. And it has that same sort of like, you
155:46 know, chat interface as ChatGPT, but what you also have is access to the API. So,
155:50 if I click on API, we can see this little screen, but what we want to go to
155:54 is the developer docs. And in the developer docs, what we're looking for
155:57 is the API reference. We can also click on the quick start guide right here
156:00 which just shows you how you can set up your API key and get all that kind of
156:03 stuff. So that's exactly what we're going to do is set up an API call to
156:08 Perplexity. So I'm going to click on API reference and what we see here is the
156:12 endpoint to access Perplexity's API. And so what I'm going to do is just grab
156:15 this curl command from that right-hand side, go back into our n8n and I'm just
156:19 going to import the curl right into here. And then all we have to do from
156:22 there is basically configure what we want to research and put in our own API
156:27 key. So there we go. We have our node pretty much configured. And now the
156:30 first thing we see we need to set up is our authorization API key. And what we
156:34 could do is set this up in here as a generic credential type and save it. But
156:37 right now we're just going to keep things as simple as possible where we
156:40 imported the curl. And now I'm just going to show you where to plug in
156:43 little things. So we have to go back over to Perplexity and we need to go get
156:46 an API key. So I'm going to come over here to the left and I'm going to click
156:49 on my settings. And hopefully in here we're able to find where our API key
156:53 lives. Now we can see in the bottom left over here we have API keys. And what I'm
156:56 going to do is come in here and just create a new secret key. And we just got
156:59 a new one generated. So I'm just going to click on this button, click on copy,
157:03 and all we have to do is replace the word right here that says token. So I'm
157:07 just going to delete that. I'm going to make sure to leave a space after the
157:10 word bearer. And I'm going to paste in my Perplexity API key. So now we should
157:14 be connected. And now what we need to do is we need to set up the actual body
157:17 request. So if I go back into the documentation, we can see this is
157:21 basically what we're sending over. So that first thing is a model, which is
157:24 the name of the model that will complete your prompt. And if we wanted to look at
157:27 different models, we could click into here and look at other supported models
157:31 from Perplexity. So it took us to the screen. We click on models, and we can
157:35 see we have Sonar Pro or Sonar. We have Sonar Deep Research. We have some
157:38 reasoning models as well. But just to keep things simple, I'm going to stick
157:41 with the default model right now, which is just sonar. Then we have an object
157:45 that we're sending over, which is messages. And within the messages
157:49 object, we have a few things. So first of all, we're sending over content, which
157:53 is the contents of the message in a turn of the conversation. It can be a string
157:57 or an array of parts. And then we have a role, which is going to be the role of
158:00 the speaker in the conversation. And the available options are system, user, or
158:05 assistant. So what you can see in our request is that we're sending over a
158:08 system message as well as a user message. And the system message is
158:11 basically the instructions for how this AI model on Perplexity should act. And
158:15 then the user message is our dynamic search query that is going to change
158:19 every time. And if we go back into the documentation, we can see that there are
158:22 a few other things we could add, but we don't have to. We could tell Perplexity
158:26 what is the max tokens we want to use, what is the temperature we want to use.
158:30 We could have it only search for things in the past week or day. So, this
158:34 documentation is basically going to be all the filters and settings that you
158:38 have access to in order to customize the type of results that you want to get
158:41 back. But, like I said, keeping this one really simple. We just want to search
158:45 the web. All I'm going to do is keep it as is. And if I disconnect this real
158:48 quick and we come in and test step, it's basically going to be searching
158:51 perplexity for how many stars are there in our galaxy. And then the AI model of
158:56 sonar is the one that's going to grab all of these five sources and it's going
159:00 to answer us. And right here it says the Milky Way galaxy, which is our home
159:04 galaxy, is estimated to contain between 100 billion and 400 billion stars. This
159:08 range is due to the difficulty blah blah blah blah blah. So that's basically how
159:11 it was able to answer us because it used an AI model called sonar. So now if we
159:15 wanted to make this search a little bit more dynamic, we could basically plug
159:19 this in and you can see in here what I'm doing is I'm just setting a search term.
159:23 So let's test this step. What happens is the output of this node is a research
159:27 term. Then we could reference that variable of research term right in here
159:31 in our actual body request to perplexity. So I would delete this fixed
159:35 message which is how many stars are there in our galaxy. And all I would do
159:38 is I'd drag in research term from the left, put it in between the two quotes,
159:43 and now it's coming over dynamically as Anthropic latest developments.
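In other words, the user content in the body is now an n8n expression reading the previous node's output, something roughly like this (the field name here just mirrors the research term field from this example):

```
"content": "{{ $json['research term'] }}"
```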
159:47 And all I'd have to do now is hit test step. And we will get an answer from Perplexity
159:51 about Anthropic's recent developments. There we go. It just came back. We can
159:54 see there's five different sources right here. It went to Anthropic, it went to
159:57 YouTube, it went to TechCrunch. And what we get is that today, May 22nd, so real-time
160:03 information, Claude Opus 4 was released. And that literally came out
160:07 like 2 or 3 hours ago. So that's how we know this is searching the web in real
160:11 time. And then all we'd have to do is have, you know, maybe an AI model is
160:14 changing our search term or maybe we're pulling from a Google sheet with a bunch
160:17 of different topics we need to research. But whatever it is, as long as we are
160:21 passing over that variable, this actual search result from Perplexity is going
160:25 to change every single time. And that's the whole point of variables, right?
160:28 They vary. They're dynamic. So, I know that one was quick, but Perplexity
160:32 is a super super versatile tool and probably an API that you're going to be
160:35 calling a ton of times. So, I just wanted to get that one in there. So, Firecrawl is going to allow us
160:43 to turn any website into LLM-ready data in a matter of seconds. And as you can
160:46 see right here, it's also open source. So, once you get over to Firecrawl, click
160:49 on this button and you'll be able to get 500 free credits to play around with. As
160:52 you can see, there's four different things we can do with Firecrawl. We can
160:57 scrape, we can crawl, we can map or we can do this new extract which basically
161:01 means we can give Firecrawl a URL and also a prompt, like, can you please
161:04 extract the company name and the services they offer and an icebreaker
161:08 out of this URL. So there's some really cool use cases that we can do with
161:11 Firecrawl. So in this video we're going to be mainly looking at extract, but I'm
161:14 also going to show you the difference between scrape and extract. And we're
161:17 going to get into n8n and connect it up so you can see how this works. But the
161:20 playground is going to be a really good place to understand the difference
161:23 between these different endpoints. All right, so for the sake of this video,
161:26 this is the website we're going to be looking at. It's called quotes to
161:29 scrape. And as you can see, it's got like 10 on this first page and it also
161:32 has different pages of different categories of quotes. And as you can
161:35 see, if we click into them, there are different quotes. So what I'm going to
161:37 do is go back to the main screen and I'm going to copy the URL of this website
161:41 and we're going to go into n8n. We're going to open up a new node, which is
161:45 going to be an HTTP request. And this is just to show you what a standard get
161:49 request to a static website looks like. So we're going to paste in the URL, hit
161:53 test step, and on the right hand side, we're going to get all the HTML back
161:57 from the quotes to scrape website. Like I said, what we're looking at here is a
162:00 nasty chunk of HTML. It's pretty hard for us to read, but basically what's
162:04 going on here is this is the code that goes to the website in order to have it
162:07 be styled and different fonts and different colors. So right here, what
162:10 we're looking at is the entire first page of this website. So if we were to
162:14 search for Harry, if I copy this, we go back into n8n and we Ctrl+F this.
162:19 You can see there is the exact quote that has the word Harry. So everything
162:22 from the website's in here, it's just wrapped up in kind of an ugly chunk of
162:26 HTML. Now hopping back over to the Firecrawl playground using the scrape
162:30 endpoint, we can replace that same URL. We'll run this and it's going to output
162:33 markdown formatting. So now we can see we actually have everything we're
162:36 looking for with the different quotes, and it's a lot more readable for a human. So
162:41 that's what a web scrape is, right? We get the information back, whether that's
162:44 HTML or markdown, but then we would typically feed that into some sort of
162:47 LLM in order to extract the information we're looking for. In this case, we'd be
162:52 looking for different quotes. But what we can do with extract is we can give it
162:56 the URL and then also say, hey, get all of the quotes on here. And using this
162:59 method, we can say, not just these first 10 on this page. I want you to crawl
163:03 through the whole site and basically get all of these quotes, all of these
163:06 quotes, all of these quotes, all of these quotes. So it's going to be really
163:09 cool. So I'm going to show how this works in Firecrawl and then we're going
163:11 to plug it into n8n. All right. So what we're doing here is we're saying
163:14 extract all of the quotes and authors from this website. I gave it the website
163:18 and now what it's doing is it's going to generate the different parameters that
163:22 the LLM will be looking to extract out of the content of the website. Okay. So
163:26 here's the run we're about to execute. We have the URL and then we have our
163:29 schema for what the LLM is going to be looking for. And it's looking for text
163:33 which would be the quote and it's a string. And then it's also going to be
163:35 looking for the author of that quote which is also a string. And then the
163:39 prompt we're feeding here to the LLM is extract all quotes and their
163:43 corresponding authors from the website. So we're going to hit run and we're
163:46 going to see that it's not only going to go to that first URL, it's basically
163:49 going to take that main domain, which is quotes.toscrape.com, and it's going to
163:53 be crawling through the other sections of this website in order to come back
163:56 and scrape all the quotes on there. Also, quick plug, go ahead and use code
164:00 herk10 to get 10% off the first 12 months on your Firecrawl plan. Okay, so
164:04 it just finished up. As you can see, we have 79 quotes. So down here we have a
164:09 JSON response where it's going to be an object called quotes. And in there we
164:12 have a bunch of different items which has you know text author text author
164:17 text author and we have pretty much everything from that website now. Okay
164:20 cool. But what we want to do is look at how we can do this in n8n so that if we have,
164:25 you know, a list of 20, 30, 40 URLs that we want to extract information from, we can
164:28 just loop through and send off that automation rather than having to come in
164:33 here and type that out in firecrawl. Okay. So what we're going to do is go
164:35 back into edit end. And I apologize because there may be some jumping around
164:38 here, but we're basically just gonna clear out this HTTP request and grab a
164:42 new one. Now, what we're going to do is we want to go into Firecrawl's
164:45 documentation. So, all we have to do is import the curl command for the extract
164:48 endpoint rather than trying to figure out how to fill out these different
164:51 parameters. So, back in Firecrawl, once you set up your account, up in the top
164:54 right, you'll see a button called docs. You want to click into there. And now,
164:57 we can see a quick start guide. We have different endpoints. And what we're
165:00 going to do is on the left, scroll down to features and click on extract. And
165:04 this is what we're looking for. So, we've got some information here. The
165:07 first thing to look at is when we're using the extract, you can extract
165:10 structured data from one or multiple URLs, including wild cards. So, what we
165:14 did was we didn't just scrape one single page. We basically scraped through all
165:18 of the pages that had the main base domain of, um, quotes.toscrape.com or
165:23 something like that. And if you put an asterisk after it, it's going to basically
165:26 mean this is a wild card and it's going to go scrape all pages that are after it
165:30 rather than just scraping this one predefined page. As you can see right
165:34 here, it'll automatically crawl and parse all the URLs it can discover, then
165:38 extract the requested data. And we can see that's how it worked because if we
165:41 come back into the request we just made, we can see right here that it added a
165:45 slash with an asterisk after quotes.toscrape.com. Okay. Anyway, so what we're
165:48 looking for here is this curl command. This is basically going to fill out the
165:51 method, which is going to be a post request. It's going to fill out the
165:54 endpoint. It'll fill out the content type, and it'll show us how to set up
165:58 our authorization. And then we'll have a body request that we'll need to make
166:02 some minor changes to. So in the top right I'm going to click copy and I'm
166:05 going to come back into n8n. Hit import curl. Paste that in there. Hit
166:09 import. And as you can see, everything pretty much just got populated. So like
166:12 I said, the method is going to be a post. We have the endpoint already set up.
166:15 And what I want to do is show you guys how to set up this authorization so that we
166:18 can keep it saved forever rather than having to put it in here in the
166:22 configuration panel every time. So first of all, head back over to your
166:26 Firecrawl. Go to API keys on the left-hand side. And you're just going to want
166:29 to copy that API key. So once you have that copied, head back into n8n. And now
166:33 let's look at how we actually set this up. So typically what you do is we have
166:37 this as a header parameter. Not all authorizations are headers, but this one
166:41 is a header. And the key or the name is authorization and the value is bearer
166:46 space your API key. So what you'd typically do is just paste in your API
166:50 key right there and you'd be good to go. But what we want to do is we want to
166:53 save our firecrawl credential the same way you'd save, you know, a Google
166:58 Sheets credential or a Slack credential. So, we're going to come into
167:01 authentication, click on generic. We're going to click on generic credential type and
167:04 choose header auth because we know down here it's a header auth. And then you can see
167:07 I have some other credentials already saved. We're going to create a new one.
167:11 I'm just going to name this Firecrawl to keep ourselves organized. For the name,
167:14 we're going to put authorization. And for the value, we're going to type
167:18 bearer with a capital B space and then paste in our API key. And we'll hit
167:21 save. And this is going to be the exact same thing that we just did down below,
167:25 except for now we have it saved. So, we can actually flick this field off. We
167:28 don't need to send headers because we're sending them right here. And now we just
167:32 need to figure out how to configure this body request. Okay, so I'm going to
167:35 change this to an expression and open it up just so we can take a look at it. The
167:38 first thing we notice is that by default there are three URLs in here that we
167:41 would be extracting from. We don't want to do that here. So I'm going to grab
167:44 everything within the array, but I'm going to keep the two quotation marks.
167:47 Now all we need to do is put the URL that we're looking to extract
167:49 information from in between these quotation marks. So here I just put in
167:53 quotes.toscrape.com. But what we want to do, if you remember, is we want to
167:57 put an asterisk after that so that it will go and crawl all of the pages, not
168:01 just that first page, which would only have like nine or 10 quotes. And
168:04 now the rest is going to be really easy to configure because we already did this
168:07 in the playground. So we know exactly what goes where. So I'm going to click
168:10 back into our playground example. First thing is this is the prompt that
168:13 Firecrawl sent off. So I'm going to copy that. Go back into n8n and I'm just
168:17 going to replace the prompts right here. We don't want the company mission blah
168:20 blah blah. We want to paste this in here and we're looking to extract all quotes
168:24 and their corresponding authors from the website. And then next is basically
168:27 telling the LLM, what are you pulling back? So, we just told it it's pulling
168:31 back quotes and authors. So, we need to actually make the schema down here in
168:36 the body request match the prompt. So, all we have to do is go back into our
168:39 playground. Right here is the schema that we sent over in our example. And
168:42 I'm just going to click on JSON view and I'm going to copy this entire thing
168:46 which is wrapped up in curly braces. We'll come back into n8n and we'll start
168:51 after schema colon space. Replace all this with what we just had in, um,
168:55 Firecrawl. And actually, I think I've noticed the way that this copied over, it's not
168:58 going to work. So let me show you guys that real quick. If we hit test step,
169:01 it's going to say JSON parameter needs to be valid JSON. So what I'm going to
169:05 do is I'm going to copy all of this. Now I came into ChatGPT and I'm just saying
169:08 fix this JSON. What it's going to do is it's going to just basically push these
169:12 over. When you copy it over from Firecrawl, it kind of aligns them on the
169:15 left, but you don't want that. So, as you can see, it just basically pushed
169:18 everything over. We'll copy this into our n8n right there. And all it did
169:21 was bump everything over once. And now we should be good to go. So, real quick
169:24 before we test this out, I'm just going to call this extract. And then we'll hit
169:28 test step. And we should see that it's going to be pulling. And it's going to
169:33 give us a message that says um true. And it gives us an ID. And so now what we
169:37 need to do next is pull this ID back to see if our request has been fulfilled
169:41 yet. So I'm back in the documentation. And now we are going to look at down
169:45 here asynchronous extraction and status checking. So this is how we check the
169:49 status of a request. As you saw, we just made one. So here I'm going to click on
169:52 copy this curl command. We're going to come back into n8n and we're going
169:56 to add another HTTP request and we're going to import that in there. And you
170:00 can see this one is going to be a GET request. It's going to have a different
170:02 endpoint. And what we need to do if you look back at the documentation is at the
170:07 end of the extract slash we have to put the extract ID that we're looking to
170:13 check the status of. So back in n the ID is going to be coming from the left hand
170:16 side the previous node every time. So I'm just going to change the URL field
170:21 to an expression. Put a slash and then I'm going to grab the ID and pull it
170:25 right in there and we're good to go.
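That status check is a much simpler request, roughly this (a sketch based on the docs; EXTRACT_ID stands in for whatever ID the extract step returned):

```bash
curl https://api.firecrawl.dev/v1/extract/EXTRACT_ID \
  -H "Authorization: Bearer YOUR_API_KEY"
```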
170:28 Except we need to set up our credential. And this is why it's great: we already set this up as a generic header auth.
170:32 And now we can just easily pull in our Firecrawl auth and hit test step. So
170:37 what happens now is our request hasn't been done yet. So as you can see it
170:41 comes back as processing and the data is an empty array. So what we're going to
170:44 set up real quick is something called polling where we're basically checking
170:48 in on a specific ID which is this one right here. And we're going to check and
170:51 if it's empty, if the data field is empty, then that means we're going to
170:55 wait a certain amount of time and come back and try again.
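Outside of n8n, that polling pattern is the same idea as a little shell loop like this (a sketch; it assumes the response carries that data field and uses jq, which you'd need installed, to inspect it):

```bash
# Keep checking the extract status until the data field is non-empty.
while true; do
  RESULT=$(curl -s https://api.firecrawl.dev/v1/extract/EXTRACT_ID \
    -H "Authorization: Bearer YOUR_API_KEY")
  # length works for both an empty array and a populated object.
  if [ "$(echo "$RESULT" | jq '.data | length')" != "0" ]; then
    break
  fi
  sleep 5  # wait, then try again
done
```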
170:59 So after the request, I'm going to add an if node. So, this is just basically going to help us
171:02 create our filter. So, we're dragging in $json.data, which as you can see is an
171:06 empty array, and we're just going to say is empty. But one thing you have to keep
171:10 in mind is this doesn't match. As you can see, we're dragging in an array, and
171:14 we were trying to do a filter of a string. So, we have to go to array and
171:18 then say is empty. And we'll hit test step. And this is going to say true. The
171:23 data field is empty. And so, if true, what we want to do is we're going to add
171:27 a wait. And this will wait for, you know, in this case we'll just
171:30 say five seconds. So if we hit test step, it's going to wait for five
171:33 seconds. And, um, I wish I had switched the logic so that this would be
171:37 on the bottom, but whatever. And then we would just drag this right back into
171:41 here. And we would try it again. So now after 5 seconds had passed or however
171:45 much time, we would try this again. And now we can see that we have our item
171:48 back and the data field is no longer empty because we have our quotes object
171:53 which has 83 quotes. So we even got more than the time we did it in the
171:56 playground. And I'm thinking this is just because, you know, the extract is
171:59 kind of still in beta. So it may not be super consistent, but that's still way
172:03 better than if we were to just do a simple GET request. And then as you
172:07 can see now, if we ran this next step, this would come out. Ah, but this is interesting. So
172:13 before it knows what it's pulling back, the $json.data field is an array. And so
172:17 we're able to set up is the array empty? But now it's an object. So we can't put
172:21 it through the same filter because we're looking at a filter for an array. So
172:25 what I'm thinking here is we could set up this continue using error output. So
172:29 because this this node would error, we could hit test step and we could see now
172:33 it's going to go down the false branch. And so this basically just means it's
172:36 going to let us continue moving through the process. And we could do then
172:38 whatever we want to do down here. Obviously this isn't perfect because I
172:41 just set this up to show you guys and ran into that. But that's typically sort
172:45 of the way we would think is how can we make this a little more dynamic because
172:49 it has to deal with empty arrays or potentially full objects. Anyways, what
172:52 I wanted to show you guys now is back in our request if we were to get rid of
172:56 this asterisk. What would happen? So, we're just going to run this whole
172:59 process again. I'll hit test workflow. And now it's going to be sending that
173:04 request only to, you know, one URL rather than the other one. Aha. And I'm
173:08 glad we are doing live testing because I made the mistake of putting this in as
173:12 $json.id, which doesn't exist if we're pulling from the wait node. So all we
173:16 have to do in here is get rid of $json.id and pull in, basically, you know, a
173:21 node reference variable. So we're going to do two curly braces. We're going to
173:25 be pulling from the extract node. And now we just want to say
173:29 item.json.id, and we should be good to go now.
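In n8n expression syntax, that node reference ends up looking something like this (assuming the first node is literally named extract):

```
{{ $('extract').item.json.id }}
```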
173:33 So I'm just going to refresh this, and we'll completely do it again. So test workflow, we're doing the exact
173:37 same thing. It's not ready yet. So, we're going to wait 5 seconds and then
173:40 we're going to go check again. We hopefully should see, okay, it's not
173:42 ready still. So, we're going to wait five more seconds. Come check again. And
173:46 then whenever it is ready now, as you can see, it goes down this branch. And
173:50 we can see that we actually get our items back. And what you see here is
173:54 that this time we only got 10 quotes. Um, you know, it says nine, but
173:57 computers count from zero. But we only got 10 quotes because um we didn't put
174:04 an asterisk after the URL. So, Firecrawl didn't know I need to go scrape
174:07 everything out of this whole base URL. I'm only going to be scraping this one
174:11 specific page, which is this one right here, which does in fact only have 10
174:15 quotes. And by the way, super simple template here, but if you want to try it
174:18 out and just plug in your API key and different URLs, you can grab that in the
174:22 free Skool community. You'll hop in there, you will click on YouTube
174:24 resources and click on the post associated with this video, and you'll
174:28 have the JSON right there to download. Once you download that, all you have to
174:31 do is import it from file right up here, and you'll have the workflow. So,
174:34 there's a lot of cool use cases for firecrawl. It'd be cool to be able to
174:38 pull from a sheet, for example, of 30 or 40 or 50 URLs that we want to run
174:42 through and then update based on the results. You could do some really cool
174:45 stuff here, like researching a ton of companies and then having it also create
174:49 some initial outreach for you. So, I hope you guys enjoyed that one.
174:51 Firecrawl is a super cool tool. There's lots of functionality there and there's
174:55 lots of uses of AI in Firecrawl, which is awesome. We're going to move into a
174:57 different tool that you can use to scrape pretty much anything, which is
175:00 called Apify, which has a ton of different actors, and you can scrape,
175:04 like I said, almost anything. So, let's go into the setup video. So, Apify is
175:08 like a marketplace for actors, which essentially lets us scrape anything on
175:10 the internet. As you can see right here, we're able to explore 4,500 plus
175:14 pre-built actors for web scraping and automation. And it's really not that
175:17 complicated. An actor is basically just a predefined script that was already
175:20 built for us that we can just send off a certain request to. So, you can think of
175:23 it like a virtual assistant where you're saying, "Hey, I want to
175:26 use the TikTok virtual assistant and I want you to scrape, you know, videos
175:30 that have the hashtag of AI content." Or you could use the LinkedIn job scraper
175:33 and you could say, "I want to find jobs that are titled business analyst." So,
175:36 there's just so many ways you could use Apify. You could get leads from Google
175:39 Maps. You could get Instagram comments. You could get Facebook posts. There's
175:43 just almost unlimited things you can do here. You can even tap into Apollo's
175:46 database of leads and just get a ton. So today I'm just going to show you guys in
175:50 n8n the easiest way to set up this Apify actor where you're going to start the
175:53 actor and then you're going to just grab those results. So what you're going to
175:56 want to do is head over to Apify using the link in the description and then use
176:01 code 30 Nate Herk to get 30% off. Okay, like I said, what we're going to be
176:03 covering today is a two-step process where you make one request to Apify to
176:07 start up an actor and then you're going to wait for it to finish up and then
176:10 you're just going to pull those results back in. So let me show you what that
176:12 looks like. What I'm going to do is hit test workflow and this is going to start
176:15 the Google Maps actor. And what we're doing here is we're asking for dentists
176:18 in New York. And then if I go to my Apify console and I go over here to
176:22 actors and click on the Google Maps extractor one, if I click on runs, we
176:25 can see that there's one currently finishing up right now. And now that
176:28 it's finished, I can go back into our workflow. I can hook it up to the get
176:32 results node. Hit test step. And this is going to pull in those 50 dentists that
176:36 we just scraped in New York. And you can see this contains information like their
176:39 address, their website, their phone number, all this kind of stuff. So you
176:43 can just basically scrape these lists of leads. So anyways, that's how this
176:46 works, but let's walk through a live setup. So once you're in your Apify
176:49 console, you click on the Apify store, and this is where you can see all the
176:52 different actors. And let's do an example of like a social media one. So
176:55 I'm going to click on this TikTok scraper since it's just the first one
176:58 right here. And this may seem a little bit confusing, but it's not going to be
177:01 too bad at all. We get to basically do all this with natural language. So let
177:04 me show you guys how this works. So basically, we have this configuration
177:07 panel right here. When you open up any sort of actor, they won't always all be
177:11 the same, but in this one, what we have is videos with this hashtag. So, we can
177:14 put something in. I put in AI content to play around with earlier. And then you
177:17 can see it asks, how many videos do you want back? So, in this case, I put 10.
177:21 Let's just put 25 for the sake of this demo. And then you have the option to
177:24 add more settings. So, down here, we could do, you know, we could add certain
177:26 profiles that we want to scrape. We could add a different search
177:29 functionality. We could even have it download the videos for us. So, once
177:32 you're good with this configuration, and just don't over complicate it. Think of
177:35 it the same way you would like put in filters on an e-commerce website or the
177:39 same way you would, you know, fill in your order when you're door dashing some
177:42 food. So, now that we have this filled out the way we want it, all I'm going to
177:45 do is come up to the top right and hit API and click API endpoints. The first
177:49 thing we're going to do is we're going to use this endpoint called run actor.
177:52 This is the one that's basically just going to send a request to Apify and
177:55 start this process, but it's not going to give us the live results back. That's
177:59 why the second step later is to pull the results back. What you could do is you
178:02 could run the actor synchronously, meaning it's going to send it off and
178:05 it's just going to spin in n8n until we're done and until it has the
178:09 results. But I found this way to be more consistent. So anyways, all you have to
178:12 do is click on copy and it's already going to have copied over your Apify API
178:17 key. So it's really, really simple. All we're going to do here is open up a new
178:20 HTTP request. I'm going to just paste in that URL that we just copied right here.
178:24 And that's basically all we have to do except for we want to change this method
178:28 to post because as you can see right here, it says post. And so this is
178:31 basically just us putting in the actor's phone number. And so we're giving it a
178:35 call. But now what we have to do is actually tell it what we want. So right
178:38 here, we've already filled this out. I'm going to click on JSON and all I have to
178:42 do is just copy this JSON right here. Go back into n8n. Flick this on to send a
178:47 body and we want to send over just JSON. And then all I have to do is paste that
178:49 in there. So, as you can see, what we're sending over to this TikTok scraper is
178:54 I want AI content and I want 25 results. And then all this other stuff is false.
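So the start-the-actor request is roughly this shape (a sketch; ACTOR_ID and the token are placeholders for whatever Apify copied for you, and the body keys will match whatever your actor's JSON view shows):

```bash
curl -X POST "https://api.apify.com/v2/acts/ACTOR_ID/runs?token=YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "hashtags": ["ai content"],
    "resultsPerPage": 25
  }'
```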
178:57 So, I'm just going to hit test step. And so, this basically returns us with an ID
179:00 and says, okay, the actor started. If we go back into here and we click on runs,
179:04 we can see that this crawler is now running. and it's going to basically
179:07 tell us how much it cost, how long it took, and all this kind of stuff. And
179:11 now it's already done. So, what we need to do now is we need to click on API up
179:14 in the top right. Click on API endpoints again and scroll all the way down to the
179:19 bottom where we can see get last run data set items. So, all I need to do is
179:23 hit this copy button right here. Go back into n8n and then open up another HTTP
179:27 request. And then I'm just going to paste that URL right in there once
179:30 again. And I don't even have to change the method because if we go in here, we
179:34 can see that this is a GET.
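That last-run results request is roughly this (same placeholders as before; no body needed since it's a GET):

```bash
curl "https://api.apify.com/v2/acts/ACTOR_ID/runs/last/dataset/items?token=YOUR_API_TOKEN"
```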
179:38 So, all I have to do is hit test step. And this is going to pull in those 25 results from our TikTok scrape based on the search
179:42 term AI content. So, you can see right here it says 25 items. And just to show
179:46 you guys that it really is 25 items, I'm just going to grab a set field. We're
179:49 going to just drag in the actual text from here and hit test step. And it
179:53 should Oh, we have to connect a trigger. So, I'm just going to move this trigger
179:57 over here real quick. And um what you can do is because we already have our
180:00 data here, I can just pin it so we don't actually have to run it again. But then
180:03 I'll hit test step. And now we can see we're going to get our 25 items right
180:08 here, which are all of the text content. So I think just the captions or the
180:11 titles of these TikToks. And we have all 25 TikToks, as you can see. So I
180:15 just showed you guys the two-step method. And why I've been using it
180:18 because here's an example where I did the synchronous run. So all I did was I
180:22 came to the Google Maps actor and I went to API endpoints and then I wanted to do
180:26 run actor synchronously, which basically means that it would run it in n8n and it
180:30 would spin until the results were done and then it should feed back the output.
180:34 So I copied that I put it into here and as you can see I just ran it with the
180:37 Google Maps actor looking for plumbers and we got nothing back. So that's why we're
180:40 taking this two-step approach where as you can see here we're going to do that
180:43 exact same request. We're doing a request for plumbers and we're going to
180:47 fire this off. And so nothing came back in n8n. But if we go to our actor and
180:50 we go to runs, we can see right here that this was the one that we just made
180:54 for plumbers. And if we click into it, we can see all the plumbers. So that's
180:57 why we're taking the two-step approach. I'm going to make the exact same request
181:00 here for New York plumbers. And what I'm going to do is just run this workflow.
181:03 And now I wanted to talk about what we have to do because what happens is we
181:07 started the actor. And as you can see, it's running right now. And then it went
181:10 to grab the results, but the results aren't done yet. So that's why it comes
181:13 back and says this is an item, but it's empty. So, what we want to do is we want
181:17 to go to our runs and we want to see how long this is taking on average for 50
181:21 leads. As you can see, the most amount of time it's ever taken was 19 seconds.
181:24 So, I'm just going to go in here and in between the start actor and grab
181:28 results, I'm going to add a wait, and I'm just going to tell this thing to
181:31 wait for 22 seconds just to be safe. And now, what I'm going to do is just run
181:33 this thing again. It's going to start the actor. It's going to wait for 22
181:37 seconds. So, if we go back into Apify, you can see that the actor is once again
181:41 running. After about 22 seconds, it's going to pass over and then we should
181:45 get all 50 results back in our HTTP request. There we go. Just finished up.
181:48 And now you can see that we have 50 items which are all of the plumbers that
181:53 we got in New York. So from here, now that you have these 50 leads and
181:56 remember if you want to come back into Apify and change up your input, you can
182:00 change how many places you want to extract. So if you changed this to 200
182:03 and then you clicked on JSON and you copied in that body, you would now be
182:07 searching for 200 results. But anyways, that's the hard part is getting the
182:11 leads into n8n. But now we have all this data about them and we can just, you
182:14 know, do some research, send them off an email, whatever it is, we can just
182:18 basically have this thing running 24/7. And if you wanted to make this workflow
182:21 more advanced to handle a more dynamic number of results, what
182:25 you'd want to use is a technique called polling. So basically, you'd wait, you
182:28 check in, and then if the results were all done, you continue down the process.
182:32 But if they weren't all done, you would basically wait again and come back. And
182:36 you would just loop through this until you're confident that all of the results
182:39 are done. So that's going to be it for this one. I'll have this template
182:42 available in my free Skool community if you want to play around with it. Just
182:44 remember you'll have to come in here and you'll have to switch out your own API
182:47 key. And don't forget when you get to Apify, you can use code 30 Nate Herk to
182:51 get 30% off. Okay, so those were some APIs that we can use to actually scrape
182:54 information. Now, what if we want to use APIs to generate some sort of content?
182:58 We're going to look at an image generation API from OpenAI and we're
183:02 going to look at a video generation API called Runway. So these next two
183:05 workflows will explain how you set up those API calls and also how you can
183:09 bake them into a workflow to be a little bit more practical. So let's take a
183:13 look. So this workflow right here, all I had to do was enter in ROI on AI
183:17 automation and it was able to spit out this LinkedIn post for me. And if you
183:21 look at this graphic, it's insane. It looks super professional. It even has a
183:24 little LinkedIn logo in the corner, but it directly calls out the actual
183:28 statistics that are in the post based on the research. And for this next one, all
183:31 I typed in was mental health within the workplace and it spit out this post.
183:34 According to Deloitte Insights, organizations that support mental health
183:38 can see up to 25% increase in productivity. And as you can see down
183:42 here, it's just a beautiful graphic. So, a few weeks ago when ChatGPT came out
183:45 with their image generation model, you probably saw a lot of stuff on LinkedIn
183:48 like this where people were turning themselves into action figures or some
183:51 stuff like this where people were turning themselves into Pixar animation
183:55 style photos or whatever it is. And obviously, I had to try this out myself.
183:58 And of course, this was very cool and everyone was getting really excited. But
184:01 then I started to think about how could this image generation model actually be
184:06 used to save time for a marketing team because this new image model is actually
184:09 good at spelling and it can make words that don't look like gibberish. It opens
184:13 up a world of possibilities. So here's a really quick example of me giving it a
184:16 one-sentence prompt and it spits out a poster that looks pretty solid. Of
184:20 course, we were limited to having to do this in ChatGPT and coming in here and
184:24 typing, but now the API is released, so we can start to save hours and hours of
184:27 time. And so, the automation I'm going to show with you guys today is going to
184:30 help you turn an idea into a fully researched LinkedIn post with a graphic
184:34 as well. And of course, we're going to walk through setting up the HTTP request
184:39 to OpenAI's image generation model. But what you can do is also download this
184:42 entire template for free and you can use it to post on LinkedIn or you can also
184:46 just kind of build on top of it to see how you can use image generation to save
184:50 you hours and hours within some sort of marketing process. So this workflow
184:54 right here, all I had to do was enter in ROI on AI automation and it was able to
184:58 spit out this LinkedIn post for me. And if you look at this graphic, it's
185:01 insane. It looks super professional. It even has a little LinkedIn logo in the
185:05 corner, but it directly calls out the actual statistics that are in the post
185:09 based on the research. So 74% of organizations say their most advanced AI
185:13 initiatives are meeting or exceeding ROI expectations right here. And on the
185:17 other side, we can see that only 26% of companies have achieved significant
185:21 AI-driven gains so far, which is right here. And I was just extremely impressed
185:24 by this one. And for this next one, all I typed in was mental health within the
185:28 workplace. And it spit out this post. According to Deloitte Insights,
185:31 organizations that support mental health can see up to 25% increase in
185:35 productivity. And as you can see down here, it's just a beautiful graphic.
185:38 something that would probably take me 20 minutes in Canva. And if you can now
185:42 push out these posts in a minute rather than 20 minutes, you can start to push
185:45 out more and more throughout the day and save hours every week. And because the
185:49 post is backed by research, and the graphic is backed by that researched
185:53 post, you're not just polluting the internet with what a lot of people in my
185:56 comments call AI slop. Anyways, let's do a quick live run of this workflow and
185:59 then I'll walk through step by step how to set up this API call. And as always,
186:03 if you want to download this workflow for free, all you have to do is join my
186:06 free school community. The link is down in the description, and then you can search
186:09 for the title of the video. You can go into YouTube resources. You need to find
186:13 the post associated with this video and then when you're in there, you'll be
186:16 able to download this JSON file and that is the template. So you download the
186:20 JSON file. You'll go back into n8n. You'll open up a new workflow and in the
186:25 top right you'll go to import from file. Import that JSON file and then there'll
186:28 be a little sticky note with a setup guide just sort of telling you what you
186:31 need to plug in to get this thing to work for you. Okay, quick disclaimer
186:33 though. I'm not actually going to post this to LinkedIn. You certainly could,
186:37 but I'm just going to basically send the post as well as the attachment to my
186:41 email because I don't want to post on LinkedIn right now. Anyways, as you can
186:45 see here, this workflow is starting with a form submission. So, if I hit test
186:48 workflow, it's going to pop up with a form where we have to enter in our email
186:53 for the workflow to send us the results. Topic of the post and then also I threw
186:57 in here a target audience. So, you could have these posts be kind of flavored
187:00 towards a specific audience if you want to. Okay, so this form is waiting for
187:04 us. I put in my email. I put the topic of morning versus night people and the
187:07 target audience is working adults. So, we'll hit submit, close out of here, and
187:10 we'll see the LinkedIn post agent is going to start up. It's using Tavily here
187:14 for research and it's going to create that post and then pass the post on to
187:19 the image prompt agent. And that image prompt agent is going to read the post
187:22 and basically create a prompt to feed into OpenAI's image generator. And as you can
187:28 see, it's doing that right now. We're going to get that back as a base64
187:32 string. And then we're just converting that to binary so we can actually post
187:36 that on LinkedIn or send that in email as an attachment and we'll break down
187:39 all these steps. But let's just wait and see what these results look like here.
187:43 Okay, so all that just finished up. Let me pop over to email. So in email, we
187:46 got our new LinkedIn post. Are you a morning lark or a night owl? The science
187:49 of productivity. I'm not going to read through this right now exactly, but
187:53 let's take a look at the image we got. When are you most productive? In the
187:57 morning, plus 10% productivity or night owls thrive in flexibility. I mean, this
188:00 is insane. This is a really good graphic. Okay, so now that we've seen
188:04 again how good this is, let's just break down what's going on. We're going to
188:07 start off with the LinkedIn post agent. All we're doing is we're feeding in two
188:11 things from the form submission, which was what is the topic of the post, as
188:14 well as who's the target audience. So right here, you can see morning versus
188:18 night people and working adults. And then we move into the actual system
188:21 prompt, which I'm not going to read through this entire thing. If you
188:23 download the template, the prompt will be in there for you to look at. But
188:26 basically I told it you are an AI agent specialized in creating professional
188:30 educational and engaging LinkedIn posts based on a topic provided by the user.
188:34 We told it that it has a tool called Tavily that it will use to search the web
188:38 and gather accurate information and that the post should be written to appeal to
188:42 the provided target audience. And then basically just some more information
188:45 about how to structure the post, what it should output and then an example which
188:49 is basically you receive a topic. You search the web, you draft the post and
188:53 you format it with source citations, clean structure, optional hashtags and a
188:57 call to action at the end. And as you can see what it outputs is a super clean
189:01 LinkedIn post right here. So then what we're going to do is basically we're
189:05 feeding this output directly into that next agent. And by the way, they're both
189:10 using GPT-4.1 through OpenRouter. All right, but before we look at the
189:12 image prompt agent, let's just take a look at these two things down here. So
189:15 the first one is the chat model that plugs into both image prompt agent and
189:15 the LinkedIn post agent. So all you have to do is go to OpenRouter, get an API
189:19 key, and then you can choose from all these different models. And in here, I'm
189:24 using GPT-4.1.
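For reference, here's roughly what that OpenRouter call looks like outside of n8n. This is a minimal sketch assuming OpenRouter's OpenAI-compatible chat completions endpoint; the API key and the exact model ID ("openai/gpt-4.1") are placeholders you'd confirm against their model list.

```python
# Hypothetical sketch of the chat model call the agent makes via OpenRouter.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "openai/gpt-4.1",  # one ID among the many models OpenRouter exposes
        "messages": [
            {"role": "system", "content": "You write professional, research-backed LinkedIn posts."},
            {"role": "user", "content": "Topic: morning versus night people. Audience: working adults."},
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])  # the drafted post
```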
189:31 And then we have the actual tool that the LinkedIn agent uses for its research,
189:35 which is Tavily. And what we're doing here is we're sending off a POST request using an HTTP request tool to the Tavily endpoint. So this is where
189:39 people typically start to feel overwhelmed when trying to set up these
189:42 requests because it can be confusing when you're trying to look through that
189:45 API documentation, which is exactly why in my paid community I created an APIs
189:49 and HTTP requests deep dive because truthfully you need to understand how to
189:54 set up these requests because being able to connect to different APIs is where
189:58 the magic really happens. So Tavily just lets your LLM connect to the web and
190:02 it's really good for web search and it also gives you a thousand free searches
190:05 per month. So that's the plan that I'm on. Anyways, once you're in here and you
190:08 have an account and you get an API key, all I did was go to the Tavily search
190:12 endpoint and you can see we have a curl statement right here where we have this
190:17 endpoint. We have POST as the method, we have how we authorize ourselves,
190:20 and this is all going to be pretty similar to the way that we set up the
190:23 actual request to OpenAI's image generation API. So, I'm not going to
190:26 dive into this too much. When you download this template, all you have to
190:30 do is plug in your Tavily API key. But later in this video when we walk through
190:35 setting up the request to OpenAI, this should make more sense. Anyways, the
190:38 main thing to take away from this tool is that we're using a placeholder for
190:41 the request, because in the request we send over to Tavily, we basically say,
190:44 okay, here's the search query that we're going to search the internet for. And
190:47 then we have all these other little settings we can tweak like the topic,
190:51 how many results, how many chunks per source, all this kind of stuff. All we
190:55 really want to touch right now is the query. And as you can see, I put this in
190:59 curly braces, meaning it's a placeholder. I'm calling the placeholder
191:02 search term. And down here, I'm defining that placeholder as what the user is
191:06 searching for. So, as you can see, this data in the placeholder is going to be
191:09 filled in by the model. So, based on our form submission, when we asked it to
191:13 create a LinkedIn post about morning versus night people, it fills
191:17 out the search term with 'latest research on productivity, morning people versus
191:21 night people,' and that's basically how it searches the internet.
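If you'd rather see that request spelled out, here's a minimal sketch of the same Tavily search call, based on their /search endpoint; treat the exact field names (topic, max_results, chunks_per_source) as assumptions to verify against Tavily's docs. In n8n, the query value is the {searchTerm} placeholder the model fills in.

```python
# Hypothetical sketch of the Tavily web-search request the agent's tool sends.
import os
import requests

resp = requests.post(
    "https://api.tavily.com/search",
    headers={"Authorization": f"Bearer {os.environ['TAVILY_API_KEY']}"},
    json={
        "query": "latest research on productivity, morning people versus night people",
        "topic": "general",      # one of the tweakable settings
        "max_results": 5,        # how many results to return
        "chunks_per_source": 3,  # how many chunks per source
    },
    timeout=30,
)
resp.raise_for_status()
for result in resp.json()["results"]:
    print(result["title"], result["url"])  # the research the post gets built from
```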
191:24 And then we get our results back, and now it creates a LinkedIn post that
191:29 we're ready to pass off to the next agent. So the output of this one gets fed into this next one,
191:32 which all it has to do is read the output. As you can see right here, we
191:36 gave it the LinkedIn post, which is the full one that we just got spit out. And
191:39 then our system message is basically telling it to turn that into an image
191:43 prompt. This one is a little bit longer. Not too bad, though. I'm not going to
191:46 read the whole thing, but essentially we're telling it that it's going to be
191:50 an AI agent that transforms a LinkedIn post into a visual image prompt for a
191:56 text-to-image AI generation model. So, we told it to read the post, identify the
192:00 message, identify the takeaways, and then create a compelling graphic prompt
192:04 that can be used with a text-to-image generator. We gave it some output
192:06 instructions like, if there are numbers, try to work those into the
192:10 prompt. You can use text, charts, icons, shapes, overlays,
192:14 anything like that. And then the very bottom here, we just gave it sort of
192:17 like an example prompt format. And you can see what it spits out is an image
192:21 prompt. So it says a dynamic split-screen infographic-style graphic. The left
192:25 side has a sunrise, it's bright yellow, and it has morning larks plus 10%
192:29 productivity. And the right side is a night sky, cool blue gradients,
192:33 a crescent moon, all this kind of stuff. And that is exactly what we saw back in
192:38 here when we look at our image. And so this is just so cool to me because first
192:41 of all, I think it's really cool that it can read a post and kind of use its
192:44 brain to say, "Okay, this would be a good, you know, graphic to be looking at
192:47 while I'm reading this post." But then on top of that, it can actually just go
192:51 create that for us. So, I think this stuff is super cool. You know, I
192:53 remember back in September, I was working on a project where someone
192:57 wanted me to help them with LinkedIn automated posting and they wanted visual
193:00 elements as well, and I was like, I don't know, that might have to be a
193:04 couple-months-away thing when we have some better models, and now we're here.
193:07 So, it's just super exciting to see. But anyways, now we're going to feed that
193:12 output, the image prompt into the HTTP request to OpenAI. So, real quick, let's
193:16 go take a look at OpenAI's documentation. So, of course, we have
193:20 the GPT Image API, which lets you create, edit, and transform images.
193:24 You've got different styles, of course. You can do memes with text.
193:29 You can do creative things. You can turn other images into different images. You
193:31 can do all this kind of stuff. And this is where it gets really cool, these
193:35 posters and the visuals with words because that's the kind of stuff where
193:39 typically AI image gen just wasn't there yet. And one thing real quick: in your
193:42 OpenAI account, which is different from your ChatGPT account, this is where you
193:46 add the billing for your OpenAI API calls. You have to have your
193:50 organization verified in order to actually be able to access this model
193:54 through the API right now. It took me 2 minutes. You basically just have to
193:57 submit an ID and it has to verify that you're human and then you'll be verified
194:00 and then you can use it. Otherwise, you're going to get an error message
194:02 that looks like this that I got earlier today. But anyways, the verification
194:06 process does not take too long. Anyways, then you're going to head over to the
194:08 API documentation that I will have linked in the description where we can
194:12 see how we can actually create an image in n8n. So, we're going to dive deeper
194:16 into this documentation in the later part of this video where I'm walking
194:19 through a step-by-step setup of this. But we're using the endpoint which
194:23 is going to create an image. So, we have this URL right here. We're going to be
194:27 creating a post request and then we just obviously have our things that we have
194:30 to configure like the prompt in the body. We have to obviously send over
194:35 some sort of API key. We can choose the size. We can
194:37 choose the model. All this kind of stuff. So back in n8n, you can see that
194:41 I'm sending a post request to that endpoint. For the headers, I set up my
194:44 API key right here, but I'm going to show you guys a better way to do that in
194:47 the later part of this video. And then for the body, we're saying, okay, I want
194:51 to use the GPT Image model. Here's the actual prompt to use for the image, which
194:54 we dragged in from the image prompt agent. And then finally, the size we just
194:59 left as that 1024×1024 square image.
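That whole node boils down to one POST request. Here's a minimal sketch of it, assuming OpenAI's image generations endpoint as shown in their docs; the key and prompt are placeholders (in the workflow, the prompt comes from the image prompt agent).

```python
# Hypothetical sketch of the HTTP request node that generates the image.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-image-1",
        "prompt": "A dynamic split-screen infographic-style graphic...",  # from the agent
        "size": "1024x1024",
    },
    timeout=300,
)
resp.raise_for_status()
b64_image = resp.json()["data"][0]["b64_json"]  # a huge base64 string, not a URL
```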
195:03 And this is interesting because what we get back is a massive base64
195:08 string. This thing is huge; I can't even scroll right now, my screen's kind of frozen.
195:12 There it goes, it just lagged. But we got back this massive
195:15 file. We can see how many tokens this was. And then what we're going to do is
195:20 we're going to convert that to binary data. So that's how we can actually get
195:23 the file as an image. As you can see now after we turn that nasty string into a
195:28 file, we have the binary image right over here. So all I did was I basically
195:32 just dragged in this field right here with that nasty string. And then when
195:36 you hit test step, you'll get that binary data. And then from there, you
195:39 have the binary data, you have the LinkedIn post. All you have to do is,
195:43 you know, activate LinkedIn, drag it right in there. Or you can just do what
195:47 I did, which is I'm sending it to myself in email. And of course, before you guys
195:50 yell at me, let's just talk about how much this run cost me. So, this was
195:55 4,273 tokens. And if we look at this API and we go down to the pricing section,
195:59 we can see that for image output tokens, which are the generated images, it's going
196:03 to be 40 bucks for a million tokens, which comes out to about 17 cents
196:06 (4,273 tokens × $40 / 1,000,000 ≈ $0.17), hopefully I did the math right. But really, for the
196:09 quality, and for the industry standard I've seen for price, that's on
196:13 the cheaper end. And as you can see down here, it translates roughly to 2 cents,
196:17 7 cents, and 19 cents per generated image for low, medium, and high quality
196:20 settings. But anyways, now that that's out of the way, let's just set up an HTTP
196:25 request to that API and generate an image. So, I'm going to add a first
196:28 step. I'm just going to grab an HTTP request. So, I'm just going to head over
196:31 to the actual API documentation from OpenAI on how to create an image and how
196:35 to hit this endpoint. And all we're going to do is we're going to copy this
196:38 curl command over here on the right. If you're not seeing a curl command, if
196:41 you're seeing Python, just change that to curl. Copy that. And then we're going
196:45 to go back into n8n. Hit import curl. Paste that in there. And then once we
196:49 hit import, we're almost done. So that curl statement basically just
196:52 auto-populated almost everything we need to do. Now we just have a few minor tweaks.
196:55 But as you can see, it changed the method to post. It gave us the correct
196:59 URL endpoint already. It has us sending a header, which is our authorization,
197:02 and then it has our body parameters filled out where all we'd really have to
197:06 change here is the prompt. And if we wanted to, we can customize this kind of
197:09 stuff. And that's why it's going to be really helpful to be able to understand
197:13 and read API documentation so you know how to customize these different
197:16 requests. Basically, all of these little things here like prompt, background,
197:20 model, n, output format, they're just little levers that you can pull and
197:23 tweak in order to change your output. But we're not going to dive too deep
197:26 into that right now. Let's just see how we can create an image. Anyways, before
197:30 we grab our API key and plug that in, when you're in your OpenAI account, make
197:33 sure that your organization is verified. Otherwise, you're going to get this
197:35 error message and it's not going to let you access the model. Doesn't take long.
197:39 Just submit an ID. And then also make sure that you have billing information
197:43 set up so you can actually pay for um an image. But then you're going to go down
197:47 here to API keys. You're going to create new secret key. This one's going to be
197:53 called image test just for now. And then you're going to copy that API key. Now
197:56 back in n8n, it has this already set up for us, where all we need to do is
198:00 delete all this. We're going to keep the space after bearer. And we can paste in
198:03 our API key like that, and we're good to go. But if you want a better method to
198:08 be able to save this key in n8n so you don't have to go find it every time,
198:12 what you can do is come to authentication, go to generic, and then you're
198:17 going to choose header auth, and we know it's header because right here we're
198:20 sending headers as a header parameter, and this is where we're authorizing
198:22 ourselves. So we're just going to do the same up here with the header auth. And
198:26 then we're going to create a new one. I'm just going to call this one openai
198:30 image just so we can keep ourselves organized. And then you're going to do the same
198:34 thing as what we saw down in that header parameter field. Meaning the
198:39 authorization is the name and then the value was Bearer, space, API key. So
198:44 that's all I'm going to do. I'm going to hit save. We are now authorized to
198:49 access this endpoint. And I'm just going to turn off sending headers because
198:53 we're technically sending headers right up here with our authentication. So we
198:57 should be good now. Right now we'll be getting an image of a cute baby sea
199:01 otter. And I'm just going to say making pancakes. And we'll hit test
199:05 step. And this should be running right now. Okay, so: bad request. Please
199:09 check your parameters. Invalid type for n. It expected an integer, but it got a
199:13 string instead. So, if you go back to the API documentation, we can see n
199:18 right here. It should be integer or null, and it's also optional. So, I'm
199:21 just going to delete that. We don't really need that. And I'm going to hit test step.
199:26 And while that's running real quick, we'll just look back at n. And this
199:29 basically says the number of images to generate must be between 1 and 10. So
199:32 that's like one of those little levers you could tweak like I was talking about
199:36 if you want to customize your request. But right now by default it's only going
199:40 to give us one. Looks like this HTTP request is working. So I'll check in
199:45 with you guys in 20 seconds when this is done. Okay. So now that that finished
199:49 up, didn't take too long. We have a few things and all we really need is this
199:52 base64. But we can see again this one cost around 17 cents. And now we just have
199:57 to turn this into binary so we can actually view an image. So I'm going to
200:01 add a plus after the HTTP request. I'm just going to type in binary. And we can
200:06 see convert to file, which is going to convert JSON data to binary data. And
200:11 all we want to do here is move a base64 string to file because this is base64
200:15 JSON. And this basically represents the image. So I'm going to drag that into
200:19 there. And then when I hit test step, we should be getting a binary image output
200:24 in a field called data. As you can see right here, this is our image
200:28 of a cute sea otter making pancakes.
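Under the hood, that convert-to-file node is just doing a base64 decode. A minimal sketch, where b64_image is the b64_json string coming out of the HTTP request node:

```python
# What "convert to file" does: base64 text in, raw image bytes out.
import base64

def b64_to_file(b64_image: str, path: str) -> None:
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_image))  # now it's a viewable image file

# e.g. b64_to_file(b64_image, "sea_otter_pancakes.png")
```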
200:32 As you can see, it's not super realistic, and that's because the prompt didn't have any photorealistic
200:36 or hyperrealistic elements in there, but you can easily make it do so. And of
200:39 course, I was playing around with this earlier, and just to show you guys, you
200:41 can make some pretty cool realistic images. Here was a post I made about
200:47 if ancient Rome had access to iPhones. And obviously, this is not
200:51 a real Twitter account. And this one is dinosaurs evolved into modern-day
200:55 influencers. This was just me testing an automation using this
200:59 API and auto posting, but not as practical as like these LinkedIn
201:02 graphics. But if you guys want to see a video sort of like this, let me know. Or
201:05 if you also want to see a more evolved version of the LinkedIn posting flow and
201:09 how we can make it even more robust and even more automated, then definitely let
201:15 me know as well. Okay. So, all I have to do in this form submission is enter in a
201:19 picture of a product, enter in the product name, the product description,
201:22 and my email address. And we'll send this off, and we'll see the workflow
201:26 over here start to fire off. So, we're going to upload the photo. We're going
201:29 to get an image prompt. We're going to download that photo. Now, we're creating
201:32 a professional graphic. So, after our image has been generated, we're
201:35 uploading it to an API to get a public URL so we can feed that URL of the image
201:40 into Runway to generate a professional video. Now, we're going to wait 30
201:42 seconds and then we'll check in to see if the video is done. If it's not done
201:45 yet, we're going to come down here and poll, wait five more seconds, and then
201:48 go check in. And we're going to do this infinitely until our video is actually
201:52 done. So, anyways, it just finished up. It ended up hitting this check eight
201:55 times, which indicates I should probably increase the wait time over here. But
201:58 anyways, let's go look at our finished products. So, we just got this new
202:01 email. Here are the requested marketing materials for your toothpaste. So,
202:04 first, let's look at the video cuz I think that's more exciting. So, let me
202:06 open up this link. Wow, we got a 10-second video. It's spinning. It's 3D.
202:10 The lighting is changing. This looks awesome. And then, of course, it also
202:13 sends us that image, in case we want to use that as well. And one of the steps
202:16 in the workflow is that it's going to upload your original image to your
202:19 Google Drive. So here you can see this was the original and then this was the
202:22 finished product. So now you guys have seen a demo. We're going to build this
202:25 entire workflow step by step. So stick with me because by the end of this
202:28 video, you'll have this exact system up and running. Okay. So when we're setting
202:32 up a system where we're creating an image from text and then we're creating
202:35 a video from that image, the two most important things are going to be that
202:38 image prompt and that video prompt. So what we're going to do is head over to
202:41 my school community. The link for that will be down in the description. It's a
202:43 free school community. And then what you're going to do is either search for
202:46 the title of this video or click on YouTube resources and find the post
202:50 associated with this video. And when you click into there, there'll be a doc that
202:54 will look like this or a PDF and it will have the two prompts that you'll need in
202:57 order to run the system. So head over there, get that doc, and then we can hop
203:00 into the step by step. And that way we can start to build this workflow and you
203:03 guys will have the prompts to plug right in. Cool. So once you have those, let's
203:07 get started on the workflow. So as you guys know, a workflow always has to
203:11 start with some sort of trigger. So in this case, we're going to be triggering
203:14 this workflow with a form submission. So I'm just going to grab the native n8n
203:18 form trigger, 'On new form event'. So we're going to configure what this form is going to
203:20 look like and what it's going to prompt a user to input. And then whenever
203:24 someone actually submits a response, that's when the workflow is going to
203:27 fire off. Okay. So I'm going to leave the authentication as none. The form
203:31 title, I'm just putting go to market. For the form description, I'm going to
203:36 say give us a product photo, title, and description, and we'll get back to you
203:40 with professional marketing materials. And if you guys are interested in what I
203:43 just used to dictate that text, there'll be a link for Whisper Flow down in the
203:46 description. And now we need to add our form elements. So the first one is going
203:50 to be not text; we're going to have them actually submit a file. So click on
203:54 file. This is going to be required. I only want them to be allowed to upload
203:57 one file. So I'm going to switch off multiple files. And then for the field
204:01 name, we're just going to say product photo. Okay. So now we're going to add
204:04 another one, which is going to be the product title. So I'm just going to
204:07 write product title. This is going to be text. For placeholder, let's just put
204:10 toothpaste since that was the example. This will be a required field. So, the
204:13 placeholder is just going to be the gray text that fills in the text box so
204:17 people know what to put in. Okay, we're adding another one
204:21 called product description. We'll make this one required. We'll just leave the
204:24 placeholder blank cuz you don't need it. And then finally, what we need to get
204:27 from them is an email, but instead of doing text, we can actually make it
204:31 require a valid email address. So, I'm just going to call it email and we'll
204:34 just say something like name@example.com so they know what a valid email looks like. We'll make that
204:38 required because we have to send them an email at the end with their materials.
204:42 And now we should be good to go. So if I hit test step, we'll see that it's going
204:45 to open up a form submission and it has everything that we just configured. And
204:48 now let me put in some sample data real quick. Okay, so I put a picture of a
204:52 cologne bottle. The title's cologne. I said the cologne smells very clean and fresh
204:55 and it's a very sophisticated scent because we're going to have that
204:59 description be used to sort of help create that text image prompt. And then
205:02 I just put my email. So I'm going to submit this form. We should see that
205:05 we're going to get data back right here in n8n, which is the binary photo.
205:08 This is the product photo that I just submitted. And then we have our actual
205:13 table of information like the title, the description, and the email. And so when
205:17 I'm building stuff step by step, what I like to do is I get the data in here,
205:20 and then I pretty much will just build node by node, testing the data all the
205:23 way through, making sure that nothing's going to break when variables are being
205:27 passed from left to right in this workflow. Okay, so the next thing that
205:30 we need to do is we have this binary data in here and binary data is tough to
205:34 reference later. So what I'm going to do is I'm just going to upload it straight
205:37 to our Google Drive so we can pull that in later when we need it to actually
205:41 edit that image. Okay, so that's our form trigger. That's what starts the
205:44 workflow. And now what we're going to do next is we want to upload that original
205:48 image to Google Drive so we can pull it in later and then use it to edit the
205:51 image. So what I'm going to do is I'm going to click on the plus. I'm going to
205:54 type in Google Drive. And we're going to grab a Google Drive operation. That is
205:59 going to be upload file. So, I'll click on upload file. And at this point, you
206:02 need to connect your Google Drive. So, I'm not going to walk through that step
206:05 by step, but I have a video right up here where I do walk through it step by
206:08 step. But basically, you're just going to go to the docs. You have to open up a
206:12 sort of Google Cloud profile or a console, and then you just have to
206:15 connect yourself and enable the right credentials and APIs. Um, but like I
206:19 said, that video will walk through it. Anyways, now what we're doing is we have
206:23 to upload the binary field right here to our Google Drive. So, it's not called
206:27 data. We can see over here it's called product photo. So, I'm just going to
206:29 copy and paste that right there. So, it's going to be looking for that
206:32 product photo. And then we have to give it a name. So, that's why we had the
206:36 person submit a title. So, all I'm going to do is for the name, I'm going to make
206:40 this an expression instead of fixed because this name is going to change
206:43 based on the actual product coming through. I'm going to drag in the
206:47 product title from the left right here. So now the photo in Google Drive is
206:51 going to be called cologne, and then I'm just going to say, in parentheses,
206:55 original. So because this is an expression, it basically means whenever
206:58 someone submits a form, whatever the title is, it's going to be title and
207:01 then it's going to say original. And that's how we sort of control that to be
207:05 dynamic. Anyways, then I'm just choosing what folder to go in. So in my drive,
207:08 I'm going to choose it to go to a folder that I just made called product
207:12 creatives. So once we have that configured, I'm going to hit test step.
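As an aside, if you ever wanted to reproduce this upload step outside n8n, here's a minimal sketch using the google-api-python-client package, assuming you already have OAuth credentials set up; the folder ID and file paths are placeholders.

```python
# Hypothetical sketch of the "upload with a dynamic name" step.
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

def upload_original(creds, product_title: str, photo_path: str, folder_id: str) -> str:
    drive = build("drive", "v3", credentials=creds)
    metadata = {
        "name": f"{product_title} (original)",  # mirrors the dynamic-name expression
        "parents": [folder_id],                 # e.g. the "product creatives" folder
    }
    media = MediaFileUpload(photo_path, mimetype="image/jpeg")
    created = drive.files().create(body=metadata, media_body=media, fields="id").execute()
    return created["id"]  # kept so we can download the file again later
```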
207:15 We're going to wait for this to spin; that means it's trying to upload it
207:18 right now. And then once we get that success message, we'll quickly go to our
207:21 Google Drive and make sure that the image is actually there. So there we go.
207:25 It just came back. And now I'm going to click into Google Drive, click out of
207:27 the toothpaste, and we can see we have cologne. And that is the image that we
207:31 just submitted in n8n. All right. Now that we've done that, what we want to do
207:35 is we want to feed the data into an AI node so that it can create a text image
207:38 prompt. So I'm going to click on the plus. I'm going to grab an AI agent. And
207:43 before we do anything in here, I'm first of all going to give it its brain. So,
207:45 I'm going to click on the plus under chat model. I'm personally going to grab
207:49 an OpenRouter chat model, which basically lets you connect to a ton of
207:53 different things. Let me see: openrouter.ai. It basically lets you connect
207:57 your agents to all the different models. So, if I click on models up here, we can
208:00 see that it just lets you connect to Gemini, Anthropic, OpenAI, Deepseek. It
208:04 has all these models all in one place. So, go to OpenRouter, get an API
208:08 key, and then once you come back into here, all you have to do is connect your
208:11 API key. And what I'm going to use here is going to be GPT-4.1. And then I'm just
208:15 going to name this so we know which one I'm using here. And then we now have our
208:20 agent accessing GPT-4.1. Okay. So now you're going to go to that PDF that I have in the school
208:26 community and you're just going to copy this product photography prompt. Grab
208:31 that. Go back to the AI agent and then you're going to click on add option. Add
208:35 a system message. And then I'm going to
208:38 click on expression and expand this full screen so you guys can see it better.
208:40 But I'm just going to paste that prompt in here. And this is going to tell the
208:44 AI agent how to take what we're giving it and turn it into an optimized
208:51 text-to-image prompt for professional-style studio photography. So, we're
208:55 not done yet because we have to actually give it the dynamic information from our
208:59 form submission every time. So, that's a user message. That's basically what it's
209:02 going to look at. So, the user message is what the agent's going to look at
209:05 every time. And the system message is basically like here are your
209:09 instructions. So for the user message, we're not going to be using a connected
209:11 chat trigger node. We're going to define below. And when we want to make sure
209:15 that this changes every time, we have to make sure it's an expression. And then
209:19 I'm just going to drill down over here to the form submission. And I'm going to
209:21 say, okay, here's what we're going to give this agent. It's going to get the
209:26 product, which the person submitted to us in the form, and we can drag in the
209:31 product, which was cologne, as you can see on the right. And then they also
209:36 gave us a description. So, all I have to do now is drag in the product
209:38 description. And so, now every time the agent will be looking at whatever
209:42 product and description that the user submitted in order to create its prompt.
209:46 So, I'm going to hit test step. We'll see right now it's using its chat model
209:50 GPT4.1. And it's already created that prompt for us. So, let's just give it a
209:53 quick read. Hyperrealistic photo of sophisticated cologne bottle,
209:56 transparent glass, sleek minimalistic design, silver metal cap, all this. But
210:00 what we have to do is we have to make sure that the image isn't being created
210:04 just on this. It has to look at this, but it also has to look at the actual
210:08 original image. So that's why our next step is going to be to redownload this
210:11 file and then we're going to push it over to the image generation model. So
210:15 at this point, you may be wondering like why are we going to upload the file if
210:18 we're just going to download it again? And the reason why I had to do that is
210:21 because when we get the file in the form of binary, we want to send the binary
210:27 data into the HTTP request right here that actually generates the image. And
210:30 we can't reference the binary way over here if it's only coming through over
210:34 here. So, we upload it so that we can then download it and then send it right
210:37 back in. And so, if that doesn't make sense yet, it probably will once we get
210:41 over to that stage. But that's why. Anyways, the next step is we're going to
210:44 download that file. So, I'm going to click on this plus. We're going to be
210:47 downloading it from Google Drive and we're going to be using the operation
210:51 download file. So, we already should be connected because we've set up our
210:54 Google credentials already. The operation is going to be download, the
210:57 resource is a file, and instead of choosing from a list, we're going to choose by
211:00 ID. And all we're going to do is download that file that we previously
211:03 uploaded every time. So I'm going to come over here, the Google Drive, upload
211:08 photo node, drag in the ID, and now we can see that's all we have to do. If we
211:11 hit test step, we'll get back that file that we originally uploaded. And we can
211:15 just make sure it's the cologne bottle. Okay, but now it's time to basically use
211:19 that downloaded file and the image prompt and send that over to an API
211:23 that's going to create an image for us. So we're going to be using OpenAI's
211:27 image generator. So here is the documentation. we have the ability to
211:30 create an image or we can create an image edit which is what we want to do
211:33 because we want it to look at the photo in our request. So typically what you
211:38 can do in this documentation is you can copy the curl command but this curl
211:41 command is actually broken so we're not going to do that. If you copied this one
211:44 up here to actually just create an image that one would work fine but there's
211:47 like a bug with this one right now. So anyways, I'm going to go into n8n. I'm
211:51 going to hit the plus. I'm going to grab an HTTP request and now we're going to
211:57 configure this request. So, I'm going to walk through how I'm reading the API
212:00 documentation right here to set this up. I'm not going to go super super
212:03 in-depth, but if you get confused along the way, then definitely check out my
212:06 paid course. The link for that down in the description. I've got a full course
212:10 on deep diving into APIs and HTTP requests. Anyways, the first thing we
212:13 see is we're going to be making a post request to this endpoint. So, the first
212:16 thing I'm going to do is copy this endpoint. We're going to paste that in.
212:19 And then we're also going to make sure the method is set to post. So, the next
212:23 thing that we have to do is authorize ourselves somehow. So over here I can
212:27 see that we have a header and the name is going to be authorization and then
212:31 the value is going to be Bearer, space, our OpenAI key. So that's why I set up a
212:35 header authentication already. So in authentication I went to generic and
212:39 then I went to header and then you can see I have a bunch of different headers
212:42 already set up. But what I did here is I chose my OpenAI one where basically all
212:46 I did was I typed in here authorization and then in the value I typed in bearer
212:50 space and then I pasted my API key in there. And now I have my OpenAI
212:54 credential saved forever. Okay. So the first thing we have to do in our body
212:59 request over to OpenAI is we have to send over the image to edit. So that's
213:02 going to be in a field called image. And then we're sending over the actual
213:05 photo. So what I'm going to do is I'm going to click on send body. I'm going
213:10 to use form data. And now we can set up the different names and values to send
213:13 over. So the first thing is we're going to send over this image right here on
213:16 the lefth hand side. And this is in a field called data. And it's binary. So,
213:19 I'm going to choose, instead of form data, to send over an n8n
213:23 binary file. The name is going to be image because that's what it said in the
213:26 documentation. And the input data field name is data. So, I'm just going to copy
213:30 that, paste it in there. And this basically means, okay, we're sending
213:34 over this picture. The next thing we need to send over is a prompt. So, the
213:37 name of this field is going to be prompt. I'm just going to copy that, add
213:42 a new parameter, and call it prompt. And then for the value, we want to send over
213:45 the prompt that we had our AI agent write. So, I'm going to click into
213:47 schema and I'm just going to drag over the output from the AI agent right
213:51 there. And now that's an expression. So, the next thing we want to send over is
213:54 what model do we want to use? Because if we don't put this in, it's going to
213:58 default to DALL-E 2, but we want to use gpt-image-1. So, I'm going to copy
214:04 gpt-image-1. We're going to come back into here, and I'm going to paste
214:08 that in as the value, but then the name is model because, as you can see in
214:11 here, right there, it says model. So hopefully you guys can see that when
214:15 we're sending over an API call, we just have all of these different options
214:18 where we can sort of tweak different settings to change the way that we get
214:22 the output back. And then you have some other options, of course, like quality
214:25 or size. But right now, we're just going to leave all that as default and just go
214:28 with these three things to keep it simple.
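Outside of n8n, those three body fields map onto a single multipart POST. A minimal sketch assuming OpenAI's image edits endpoint from the docs; the key, file path, and prompt are placeholders.

```python
# Hypothetical sketch of the image-edit request: binary image + prompt + model.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/images/edits",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    files={"image": open("cologne_original.png", "rb")},  # the n8n binary field
    data={
        "model": "gpt-image-1",  # without this it would default to DALL-E 2
        "prompt": "Hyperrealistic studio photo of a sophisticated cologne bottle...",
    },
    timeout=300,
)
resp.raise_for_status()
b64_image = resp.json()["data"][0]["b64_json"]  # again a base64 string, not a URL
```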
214:32 So I'm going to hit test step and we'll see if this is working. Okay, never mind. I got an error and I was
214:35 like, okay, I think I did everything right. The reason I got the error is
214:38 because I don't have any more credits. So, if you get this error, go add some
214:42 credits. Okay, so I added more credits. I'm going to try this again and I'll
214:45 check back in. But before I do that, I wanted to say: clearly, I've been
214:50 spamming this thing with creating images cuz it's so cool. It's so fun. But
214:53 everyone else in the world has also been doing that. So, if you're ever getting
214:56 some sort of error like a 500-type error, which means
215:00 something's going on on the server side of things, or you're seeing some
215:04 sort of rate limit stuff, keep in mind that there's a limit on how many
215:07 images you can send per minute. I don't think that's been clearly defined for
215:13 gpt-image-1. But also, if the OpenAI server is receiving way too many
215:16 requests, that is also another reason why your request may be failing. So,
215:20 just keep that in mind. Okay, so now it worked. We just got that back. But what
215:23 you'll notice is we don't see an image here or like an image URL. So, what we
215:27 have to do is we have this base64 string and we have to turn that into
215:32 binary data. So, what I'm going to do is after this node, I'm going to add one
215:37 that says convert to file. So we're going to convert JSON data to binary
215:41 data and we're going to do base64. So all I have to do now is show this data on
215:45 the left-hand side, grab the base64 string. And then when we hit test step,
215:48 we should get a binary file over here, which if we click into it, this should
215:52 be our professional looking photo. Wow, that looks great. It even got the
215:55 wording and like the same fonts right. So that's awesome. And by the way, if we
216:00 click into the results of the create image where we did the image edit, we
216:04 can see the tokens. And with this model, it is basically $10 for a million input
216:09 tokens and $40 for a million output tokens. So right here, you can see the
216:12 difference between our input and output tokens. And this one was pretty cheap. I
216:15 think it was like 5 cents. Anyways, now that we have that image right here as
216:19 binary data, we need to turn that into a video using an API called Runway. And so
216:23 if we go into Runway, first of all, let's look at the price. For a
216:27 5-second video, 25 cents. For a 10-second video, 50 cents. So that's the one we're
216:30 going to be doing today. But if we go to the API reference to read how we can
216:34 turn an image into a video, what we need to look at is how we actually send over
216:38 that image. And what we have to do here is send over an HTTPS URL of the image.
216:43 So we somehow have to get this binary data in n8n to a public image that
216:48 runway can access. So the way I'm going to be doing that is with this API that's
216:53 free called ImgBB. And it's a free image hosting service. And what we can
216:57 do is basically just use its API to send over the binary data and we'll get back
217:02 a public URL. So come here, make a free account. You'll grab your API key from
217:05 up top. And then we basically have here's how we set this up. So what I'm
217:08 going to do is I'm going to copy the endpoint right there. We're going to go
217:12 back into n8n and I'm going to add an HTTP request. And let me just configure
217:17 this up. We'll put it over here just to keep everything sort of square. But now
217:20 what I'm going to do in here is paste that endpoint in as our URL. You can
217:24 also see that it says this call can be done using POST or GET. But since GET
217:28 requests are limited by the max URL length, you should probably do POST.
217:30 So I'm just going to go back in here and change this to a POST. And then there
217:33 are basically two things that are required. The first one is our API key.
217:37 And then the second one is the actual image. Anyways, this documentation is
217:41 not super intuitive. I can sort of tell that this is a query parameter because
217:45 it's being attached at the end of the endpoint with a question mark and all
217:47 this kind of stuff. And that's just because I've looked at tons of API
217:51 documentation. So, what I'm going to do is go into n8n. We're going to add a
217:55 generic credential type. It's going to be a query auth. Where was query?
217:59 There we go. And then you can see I've already added my ImgBB one. But all
218:02 you're going to do is you would add the name as a key. And then you would just
218:05 paste in your API key. And that's it. And now we've authenticated ourselves to
218:09 the service. And then what's next is we need to send over the image in a field
218:12 called image. So I'm going to go back in here. I'm going to send over a body
218:15 because this allows us to actually send over n8n binary fields. And I'm not going
218:20 to do n8n binary; I'm going to do form data because then we can name the field
218:23 we're sending over. Like I said, not going to deep dive into how that all
218:26 works, but the name is going to be image and then the input data field name is
218:30 going to be data because that's how it's seen over here. And this should be it.
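For reference, the whole node is equivalent to something like this, assuming ImgBB's v1 upload endpoint; the key and file path are placeholders.

```python
# Hypothetical sketch of the ImgBB upload that turns binary data into a public URL.
import os
import requests

resp = requests.post(
    "https://api.imgbb.com/1/upload",
    params={"key": os.environ["IMGBB_API_KEY"]},           # the query-auth credential
    files={"image": open("generated_product.png", "rb")},  # form field named "image"
    timeout=60,
)
resp.raise_for_status()
public_url = resp.json()["data"]["url"]  # the publicly accessible URL for Runway
```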
218:33 So, real quick, I'm just going to change this to get URL. And then we're going to
218:37 hit test step, which is going to send over that binary data to image BB. And
218:42 it hopefully should be sending us back a URL. And it sent back three of them. I'm
218:45 going to be using the middle one that's just called URL because it's like the
218:48 best size and everything. You can look at the other ones if you want on your
218:52 end, but this one is going to load up and we should see it's the image that we
218:55 got generated for us. It takes a while to load up on that first time, but as
218:59 you can see now, it's a publicly accessible URL and then we can feed it
219:03 into Runway. So that's exactly our next step. We're going to add another request
219:07 right here. It's going to be an HTTP and this one we're going to configure to hit
219:11 Runway. So here's a good example of where we can actually use a curl command. So I'm
219:14 going to click on copy over here when I'm in the Runway 'Generate a video from
219:19 image' reference. Come back into n8n, hit import curl, and paste that in there and hit
219:22 import. And this is going to basically configure everything we need. We just
219:26 have to tweak a few things. Typically, most API documentation nowadays will
219:29 have a curl command. The edit image one that we set up earlier was just a little
219:33 broken. ImgBB is just a free service, so sometimes they don't always have one. But let's
219:37 configure this node. So, the first thing I see is we have a header auth right
219:40 here. And I don't want to send it like this; I want to set it up as a generic
219:44 type so I can save it. Otherwise, you'd have to go get your API key every time
219:47 you wanted to use Runway. So, as you can see, I've already set up my Runway API
219:51 key. So, I have it plugged in, but what you would do is you'd go get your API
219:55 key from Runway. And then you'd see, okay, how do we actually send over
219:58 authentication? It comes through with the name authorization. And then the
220:03 header value is Bearer, space, API key. So, similar to the last one. And then that's
220:06 all you would do in here when you're setting up your runway credential.
220:11 Authorization, then Bearer, space, my API key. And then because we have ourselves
220:14 authenticated up here, we can flick off those headers. And all we have to do now
220:17 is configure the actual body. Okay, so first things first, what image are we
220:21 sending over to get turned into a video, in that field named promptImage? We're going
220:25 to get rid of that value, and I'm just going to drag in the URL
220:29 that we got from earlier, which was that picture I showed you guys. So now
220:33 runway sees that image. Next, we have the seed, which if you want to look at
220:36 the documentation, you can play with it, but I'm just going to get rid of that.
220:38 Then we have the model, which we're going to be using, Gen 4 Turbo. We then
220:42 have the prompt text. So, this is where we're going to get rid of this. And
220:46 you're going to go back to that PDF you downloaded from my free school, and
220:49 you're going to paste this prompt in there. So, this prompt basically gives
220:53 us that like 3D spinning effect where it just kind of does a slow pan and a slow
220:56 rotate. And that's what I was looking for. If you're wanting some other type
220:59 of video, then you can tweak that prompt, of course. For the duration, if
221:04 you look in the documentation, it'll say the duration only basically allows five
221:08 or 10. So, I'm just going to change this one to 10. And then the last one was
221:11 ratio. And I'm just going to make it square. So here are the accepted ratio
221:16 values. I'm going to copy 960 by 960. And we're just going to paste that in
221:19 right there. And actually before we hit test step, I've realized that we're
221:22 missing something here. So back in the documentation, we can see that there's
221:25 one thing up here which is required, which is a header: X-Runway-Version.
221:30 And then we need to set the value to this. So I'm going to copy the header.
221:35 And we have to enable headers. I deleted it earlier, but we're going to
221:37 enable that. So we have the version. And then I'm just going to go copy the value
221:41 that it needs to be set to and we'll paste that in there as the value.
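Put together, the configured node is sending something like the request below. This is a sketch assuming Runway's developer API endpoint and the version-date header value from their docs at the time; the key, image URL, and prompt text are placeholders.

```python
# Hypothetical sketch of the image-to-video request to Runway.
import os
import requests

resp = requests.post(
    "https://api.dev.runwayml.com/v1/image_to_video",
    headers={
        "Authorization": f"Bearer {os.environ['RUNWAY_API_KEY']}",
        "X-Runway-Version": "2024-11-06",  # required header; check the docs for the current value
    },
    json={
        "promptImage": "https://i.ibb.co/...your-public-url...",  # from ImgBB
        "promptText": "Slow cinematic pan and rotate around the product...",  # from the PDF
        "model": "gen4_turbo",
        "duration": 10,      # only 5 or 10 are accepted
        "ratio": "960:960",  # square output
    },
    timeout=60,
)
resp.raise_for_status()
task_id = resp.json()["id"]  # just an ID; the video isn't ready yet
```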
221:44 Otherwise, this would not have worked. Okay, so that should be configured. But
221:48 before we test it out, I want to show you guys how I set up the polling flow
221:52 like this that you saw in the demo. So what we're going to do here is we need
221:57 to go see like, okay, once we send over our request right here to get a video
222:01 from our image, it's going to return an ID and that doesn't mean anything to us.
222:06 So what we have to do is get our task. Basically, we send over
222:10 the ID that it gives us, and then it'll come back and say the status equals
222:14 pending or running, or it'll say completed. So what I'm going to do is
222:18 copy this curl command for getting task details. We're going to hook it up to
222:23 this node as an HTTP request. We're going to import that curl. Now that's pretty much set up. We
222:28 have our authorization, which I'm going to delete because, as you know, we
222:32 just configured that earlier as a header auth. So, I'm just going to come in here
222:37 and grab my Runway API key. There it is; I couldn't find it for some reason. Anyways,
222:41 we have the version set up. And now all we have to do is drag in the actual ID
222:45 from the previous one. So, real quick, I'm just going to make this an
222:48 expression. Delete ID. And now we're pretty much set up. So, first of all,
222:51 I'm going to test this one, which is going to send off that request to runway
222:54 and say, "Hey, here's our image. Here's the prompt. Make a video out of it." And
222:59 as you can see, we got back an ID. Now I'm going to use this next node and I'm
223:03 going to drag in that ID from earlier. And now it's saying, okay, we're going
223:06 to check in on the status of this specific task. And if I hit test step,
223:09 what we're going to see is that it's not yet finished. So it's going to come back
223:13 and say, okay, status of this run or status of this task is running. So
223:17 that's why what I'm going to do is add an if. And this if is going to be saying,
223:24 okay, does this status field right here equal RUNNING, in all caps?
223:28 Because that's what it equals right now. If yes, what we're going to do is we are
223:32 going to basically wait for a certain amount of time. So here's the true
223:36 branch. I'm going to wait and let's just say it's 5 seconds. So I'll just call
223:41 this five seconds. I'm going to wait for 5 seconds and then I'm going to come
223:44 back here and try again. So as you saw in the demo, it basically tried again
223:48 like seven or eight times. And this just ensures that it's never going to move on
223:53 until we actually have a finished video. So what you could also do is basically
223:56 say does status equal completed or whatever it means when it completes.
223:59 That's another way to do it. You just have to be careful to make sure that
224:01 whatever you're setting here as the check is always 100% going to work. And
224:07 then what you do is you would continue the rest of the logic down this path
224:10 once that check has been complete. And then of course you probably don't want
224:13 to have this check like 10 times every single time. So what you would do is
224:17 you'd add a wait step here. And once you know about how long it takes, you'd
224:21 add this here. So last time I had it at 30 seconds and it waited like eight
224:23 times. So let's just say I'm going to wait 60 seconds here. So then when this
224:27 flow actually runs, it'll wait for a minute, check. If it's still not done,
224:30 it'll continuously loop through here and wait 5 seconds every time until we're
224:34 done.
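The whole wait-and-check branch collapses to a plain polling loop if you write it out. A sketch assuming Runway's tasks endpoint and the status values described in their docs; treat the field names as assumptions.

```python
# Hypothetical sketch of the poll-until-done logic from the workflow.
import os
import time
import requests

def wait_for_video(task_id: str, initial_wait: int = 60, poll_every: int = 5) -> str:
    headers = {
        "Authorization": f"Bearer {os.environ['RUNWAY_API_KEY']}",
        "X-Runway-Version": "2024-11-06",
    }
    time.sleep(initial_wait)  # generation takes a while, so don't poll immediately
    while True:
        task = requests.get(
            f"https://api.dev.runwayml.com/v1/tasks/{task_id}",
            headers=headers,
            timeout=30,
        ).json()
        if task["status"] == "SUCCEEDED":
            return task["output"][0]  # URL of the finished video
        if task["status"] == "FAILED":
            raise RuntimeError("Runway task failed")
        time.sleep(poll_every)  # still pending/running, so wait and check again
```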
224:37 Okay, there we go. So now the status is SUCCEEDED. And what I'm going to do is just view this video real quick. Hopefully this one came out nicely.
224:41 Let's take a look. Wow, this is awesome. Super clean. It's rotating really slowly. It's a full
224:49 10-second video. You can tell it's like a 3D image. This is awesome. Okay, cool.
224:54 So now if we test this if branch, we'll see that it's going to go down the other
224:57 one which is the false branch because it's actually completed. And now we can
225:01 with confidence shoot off the email with our materials. So I'm going to grab a
225:04 Gmail node. I'm going to click send a message. And we are going to have this
225:08 configured hopefully because you've already set up your Google stuff. And
225:11 now who do we send this to? We're going to go grab that email from the original
225:14 form submission which is all the way down here. We're going to make the
225:19 subject, which I'm just going to say marketing materials, and then a colon.
225:24 And we'll just drag in the actual title of the product, which in here was
225:28 cologne. I'm changing the email type to text just because I want to. We're
225:32 going to make the body an expression. And we're just going to say like,
225:39 hey, here is your photo. And obviously this can be customized however you want.
225:43 But for the photo, what we have to do is grab that public URL that we generated
225:47 earlier. So right here there is the photo URL. Here is your video. And for
225:52 the video, we're going to drag in the URL we just got from the output of that
225:58 Runway get-task check. So there is the video URL. And then I'm just going
226:02 to say cheers. The last thing I want to do is find 'append n8n attribution' down here
226:07 and turn that off. This just ensures that the email doesn't say it
226:12 was sent by n8n.
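If you wanted to fire off that final email outside n8n, here's a minimal sketch using Python's standard library over SMTP; the addresses, server, and credentials are placeholders, and the URLs come from the earlier steps.

```python
# Hypothetical sketch of the final "send the materials" email.
import smtplib
from email.message import EmailMessage

product_title = "cologne"                    # from the form submission
form_email = "customer@example.com"          # from the form submission
photo_url = "https://i.ibb.co/...photo..."   # public URL from ImgBB
video_url = "https://...runway-output..."    # URL from the get-task check

msg = EmailMessage()
msg["Subject"] = f"Marketing materials: {product_title}"
msg["From"] = "me@example.com"
msg["To"] = form_email
msg.set_content(f"Hey, here is your photo: {photo_url}\nHere is your video: {video_url}\n\nCheers")

with smtplib.SMTP("smtp.example.com", 587) as server:
    server.starttls()
    server.login("me@example.com", "app-password")
    server.send_message(msg)
```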
226:15 And now if we hit test step right here, this is pretty much the end of the process. And we can go ahead and check. Uh-oh. Okay, so not
226:19 authorized. Let me fix that real quick. Okay, so I just switched my credential
226:21 because I was using one that had expired. So now this should go through
226:25 and we'll go take a look at the email. Okay, so I did something wrong. I can
226:28 already tell what happened is this is supposed to be an expression and
226:31 dynamically come through as the title of the product, but we accidentally somehow
226:35 left off a curly brace. So, if I come back into here and add one more
226:38 curly brace right here to the description, or sorry, the subject,
226:42 we should be good. I'll hit test step again. And now we'll go take a look at
226:46 that email. Okay, there we go. Now, we have the cologne and we have our photo
226:49 and our video. So, let's click into the video real quick. I'm just so amazed. This
226:55 is just so much fun. Look at the lighting and the reflections; it's
227:00 all just perfect. And then we'll click into the photo just in case we want to see the
227:05 actual image. And there it is. This also looks awesome. All right, so that's
227:09 going to do it for today's video. I hope you guys enjoyed this style of walking
227:12 step by step through some of the API calls and sort of my thought process as
227:16 to how I set up this workflow. Okay, at this point I think you guys probably
227:19 have a really good understanding of how these AI workflows actually function and
227:22 you're probably getting a little bit antsy and want to build an actual AI
227:26 agent now. So, we're about to get into building your first AI agent step by
227:30 step. But before that, just wanted to drive home the concept of AI workflows
227:35 versus AI agents one more time and the benefits of using workflows. But of
227:38 course, there are scenarios where you do need to use an agent. So, let's break it
227:42 down real quick. Everyone is talking about AI agents right now, but the truth
227:46 is most people are using them completely wrong and admittedly myself included.
227:50 It's such a buzzword right now and it's really cool in n8n to visually see your
227:54 agents think about which tools they have and which ones to call. So, a lot of
227:57 people are just kind of forcing AI agents into processes where you don't
228:01 really need it. But in reality, a simple AI workflow is not only going to be
228:04 easier to build, it's going to be more cost-effective and also more reliable
228:07 in the long run. If you guys don't know me, my name's Nate. And for a while now,
228:10 I've been running an agency where we deliver AI solutions to clients. And
228:14 I've also been teaching people from any background how to build out these things
228:17 practically and apply them to their business through deep dive courses as
228:21 well as live calls. So, if that sounds interesting to you, definitely check out
228:23 the community with the link in the description. But let's get into the
228:26 video. So, we're going to get into n8n and I'm going to show you guys some
228:29 mistakes of when I've built agents when I should have been building AI
228:31 workflows. But before that, I just wanted to lay out the foundations here.
228:35 So, we all know what ChatGPT is. At its core, it's a large language model that
228:39 we talk to with an input and then it basically just gives us an output. So,
228:42 if we wanted to leverage ChatGPT to help us write a blog post, we would ask it to
228:46 write a blog post about a certain topic. It would do that and then it would give
228:48 us the output which we would then just copy and paste somewhere else. And then
228:52 came the birth of AI agents, which is when we actually were able to give tools
228:56 to our LLM so that they could not only just generate content for us, but they
228:59 could actually go post it or go do whatever we wanted to do with it. AI
229:02 agents are great and there's definitely a time and a place for them because they
229:05 have different tools and basically the agent will use its brain to understand,
229:08 okay, I have these three tools based on what the user is asking me. Do I call
229:12 this one and then do I output or do I call this one then this one or do I need
229:17 to call all three simultaneously? It has that option and it has the variability
229:19 there. So, this is going to be a non-deterministic workflow. But the
229:23 reality is most of the processes that we're trying to enhance for our clients
229:28 are pretty deterministic workflows that we can build out with something more
229:30 linear where we still have the same tools. We're still using AI, but we have
229:34 everything going step one, step two, step three, step four, step five, step
229:38 six, which is going to reduce the variability there. It's going to be very
229:42 deterministic and it's going to help us with a lot of things. So stick with me
229:45 because I'm going to show you guys an AI agent video that I made on YouTube a few
229:49 months back and I started re-evaluating it. Like why would I ever build out the
229:52 system like that? It's so inefficient. So I'll show you guys that in a sec. But
229:55 real quick, let's talk about the pros of AI workflows over AI agents. And I
229:59 narrowed it down to four main points. The first one is reliability and
230:02 consistency. One of the most important concepts of building an effective AI
230:05 agent is the system prompt because it has to understand what its tools are,
230:09 when to use each one, and what the end goal is, and it's on its own to figure
230:12 out which ones do I need to call in order to provide a good output. But with
230:15 a workflow, we're basically keeping it on track and there's no way that the
230:18 process can sort of deviate from the guardrails that we've set up because it
230:22 has to happen in order and it can't really go anywhere else. So this makes
230:25 systems more reliable because there's never going to be a transfer of data
230:28 between workflows where things may get messed up or incorrect mappings being
230:32 sent across, you know, agent to a different agent or agent to tool. We're
230:36 just basically able to go through the process linearly. So the next one is
230:40 going to be cost efficiency. When we're using an agent and it has different
230:44 tools, every time it hits a tool, it's going to go back to its brain. It's
230:46 going to rerun through its system prompt and it's going to think about what is my
230:49 next step here. And every time you're accessing that AI agent's brain, it
230:53 costs you money. So if we're able to eliminate that aspect of decision-making and
230:57 just say, okay, you finished step two, now you have to go on to step
231:00 three. There's no decision to be made. We don't have to make that extra API
231:04 call to think about what comes next, and we're saving money. Number three is
231:08 easier debugging and maintenance. When we have an AI workflow, we can see
231:12 exactly which node errors. We can see exactly what mappings are incorrect and
231:16 what happened here. Whereas with an AI agent workflow, it's a little bit
231:18 tougher because there's a lot of manipulating the system prompt and
231:21 messing with different tool configurations. And like I said, there's
231:25 data flowing between agent to tool or between agent to subworkflow. And that's
231:28 where a lot of things can happen that you don't really have full visibility
231:31 into. And then the final one is scalability, which kind of piggybacks right off
231:35 of number three. But if you wanted to add more nodes and more functionality to
231:38 a workflow, it's as simple as, you know, plugging in a few more blocks here and
231:41 there or adding on to the back. But when you want to increase the functionality
231:44 of an AI agent, you're probably going to have to give it more tools. And when you
231:47 give it more tools, you're going to have to refine and add more lines to the
231:52 system prompt, which could work great initially, but then previous
231:55 functionality, the first couple tools you added, those might stop working or
231:59 those may become less consistent. So basically, the more control that we have
232:02 over the entire workflow, the better. AI is great. There are times when we need
232:05 to make decisions and we need that little bit of flexibility. But if a
232:09 decision doesn't have to be made, why would we leave that up to the AI to
232:13 hallucinate 5 or 10% of the time when we could basically say, "Hey, this is going
232:16 to be 100% consistent." Anyways, I've made a video that talks a little bit
232:19 more about this stuff, as well as other things I've learned over the first 6
232:22 months of building agents. If you want to watch that, I'll link it up here. But
232:25 let's hop into n8n and take a look at some real examples. Okay, so the first
232:29 example I want to share with you guys is a typical sort of RAG agent. And for
232:32 some reason it always seems like the element of RAG has to be associated with
232:36 an agent, but it really doesn't. So what we have is a workflow where we're
232:39 putting a document from Google Drive into Pinecone. We have a customer
232:42 support agent and then we have a customer support AI workflow. And both
232:46 of the blue box and the green box, they do the exact same thing, but this one's
232:49 going to be more efficient and we also have more control. So let's break this
232:52 down. Also, if you want to download this template to play around with, you can
232:54 get it for free if you go to my free school community. The link for that's
232:57 down in the description as well. You'll come into here, click on YouTube
233:00 resources, and click on the post associated with this video. And then the
233:03 workflow will be right here for you to download. Okay, so anyways, here is the
233:06 document that we're going to be looking at. It has policy and FAQ information.
233:10 We've already put it into Pinecone. As you can see, it's created eight vectors.
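For a sense of what that Drive-to-Pinecone step is doing under the hood, here is a rough chunk-embed-upsert sketch in Python using the official openai and pinecone clients. The index name, chunk size, and embedding model are placeholders, not necessarily what the video uses:

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
index = Pinecone(api_key="YOUR_PINECONE_KEY").Index("customer-support")  # placeholder name

def ingest(document_text: str, chunk_size: int = 1000) -> None:
    """Split the policy/FAQ doc into chunks, embed each one, and upsert the vectors."""
    chunks = [document_text[i:i + chunk_size]
              for i in range(0, len(document_text), chunk_size)]
    embeddings = openai_client.embeddings.create(
        model="text-embedding-3-small", input=chunks)
    index.upsert(vectors=[
        {"id": f"chunk-{i}", "values": e.embedding, "metadata": {"text": chunks[i]}}
        for i, e in enumerate(embeddings.data)])
```

Under this scheme, a document that splits into eight chunks lands as eight vectors, which lines up with what the Pinecone dashboard shows here.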
233:13 And now what we're going to do is we're going to fire off an email to the
233:16 customer support agent to see how it handles it. Okay, so we just sent off,
233:20 do you offer price matching or bulk discounts? We'll come back into the
233:23 workflow, hit run, and we should see the customer support agent is hitting the
233:26 vector database, and it's also hitting its reply email tool. But what you'll
233:29 notice is that it hit its brain. So, Google Gemini 2.0 Flash in this case,
233:33 not a huge deal because it's free. But if you were using something else, it's
233:36 going to have hit that API three different times, which would be three
233:40 separate costs. So, let's check and see if it did this correctly. So, in our
233:43 email, we got the reply, "We do not offer price matching currently, but we
233:46 do run promotions and discounts regularly. Yes, bulk orders may qualify
233:50 for a discount. Please contact our sales team at sales@techhaven.com for
233:54 inquiries. So, let's go validate that that's correct. So, in the FAQ section
233:57 of this doc, we have that they don't offer price matching, but they do run
234:00 promotions and discounts regularly. And then for bulk discounts, um you have to
234:04 hit up the sales team. So, it answered correctly. Okay. So, now we're going to
234:08 run the customer support AI workflow down here. It's going to grab the email.
234:11 It's going to search Pinecone. It's going to write the email. I'll explain
234:13 what's going on here in a sec. And then it responds to the customer. So, there's
234:17 four steps here. It's going to be an email trigger. It's going to search the
234:19 knowledge base. It's going to write the email and then respond to the customer
234:23 in an email. So, why would we leave that up to the agent to decide what it needs
234:27 to do if it's always going to happen in those four steps every time? All right,
234:30 here's the email we just got in reply. As you can see, this is the one that the
234:32 agent wrote, and this one looks a lot better. Hello, thank you for reaching
234:36 out to us. In response to your inquiry, we currently do not offer price
234:39 matching. However, we do regularly run promotions and discounts, so be sure to
234:42 keep an eye out for those. That's accurate. Regarding bulk discounts, yes,
234:47 they may indeed qualify for a discount. So reach out to our sales team. If you
234:50 have any other questions, please feel free to reach out. Best regards, Mr.
234:53 Helpful, TechHaven. And obviously, I told it to sign off like that. So, now
234:57 that we've seen that, let's actually break down what's going on. So, it's the
235:00 same trigger. You know, we're getting an email, and as you can see, we can find
235:03 the text of the email right here, which was, "Do you guys offer price matching
235:07 or bulk discounts?" We're feeding that into a Pinecone node. So, if you guys
235:10 didn't know, you don't even need these to be only tools. You can have them just
235:14 be nodes, where we're searching for the prompt that is, do you guys offer price
235:18 matching or bulk discounts? And maybe you might want an AI step between the
235:22 trigger and the search to maybe like formulate a query out of the email if
235:25 the email is pretty long. But in this case, that's all we did. And now we can
235:29 see we got those four vectors back, same way we would have with the agent. But
235:32 what's cool is we have a lot more control over it. So as you can see, we
235:37 have a vector and then we have a score, which basically ranks how relevant
235:40 the vector was to the query that we sent off. And so we have some pretty low ones
235:44 over here, but what we can do is say, okay, we only want to keep a result if the score
235:48 is greater than 0.4. So it's only going to keep these two, as you can see,
235:51 and it's getting rid of the two that aren't super relevant. And this is
235:54 something that's a lot easier to control in this linear flow compared to having
235:59 the agent try to filter through vector results up here.
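That threshold step is worth seeing in miniature. Here's a sketch of the same keep-if-the-score-clears-0.4 filter, with made-up match objects standing in for the four vectors from the demo:

```python
def filter_relevant(matches: list[dict], min_score: float = 0.4) -> list[dict]:
    """Keep only the vector matches whose similarity score clears the threshold."""
    return [m for m in matches if m["score"] > min_score]

demo = [{"id": "a", "score": 0.62}, {"id": "b", "score": 0.55},
        {"id": "c", "score": 0.18}, {"id": "d", "score": 0.09}]
print(filter_relevant(demo))  # only the 0.62 and 0.55 matches survive
```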
236:02 Anyways, then we're just aggregating however many results it pulls back. If it's four, if it's three,
236:06 or if it's just one, it's still just going to aggregate them together so that
236:09 we can feed it into our OpenAI node that's going to write the email. So
236:12 basically, in the user prompt, we said, "Okay, here's the customer inquiry.
236:15 Here's the original email, and here's the relevant knowledge that we found.
236:18 All you have to do now is write an email." And so by giving this AI node
236:22 just one specific goal, it's going to be more quality and consistent with its
236:26 outputs rather than we gave the agent multiple jobs. It had to not only write
236:30 the email, but it also had to figure out how to search through information and
236:33 figure out what the next step was. So this node, it only has to focus on one
236:37 thing. It has the knowledge handed to it on a silver platter to write the email
236:40 with. And basically, we said, you're Mr. Helpful, a customer support rep for Tech
236:44 Haven. Your job is to respond to incoming customer emails with accurate
236:47 information from the knowledge base. You must only answer using relevant
236:50 knowledge provided to you. Don't make anything up. We gave it the tone and
236:53 then we said only output the body in a clean format. it outputs that body and
236:57 then all it had to do is map in the correct message ID and the correct
237:02 message content. Simple as that. So, I hope this makes sense. Obviously, it's a
237:05 lot cooler to watch the agent do something like that up here, but this is
237:08 basically the exact same flow and I would argue that it's going to be a lot
237:12 better, more consistent, and cheaper. Okay, so now to show an example where I
237:15 released this as a YouTube video and a couple weeks later I was like, why did I
237:19 do it like that? So, what we have here is a technical analyst. And so basically
237:23 we're talking to it through Telegram and it has one tool which is basically going
237:27 to get a chart image and then it's going to analyze the chart image and then it
237:30 sends it back to us in Telegram. And this is the workflow that it's actually
237:33 calling right here where we're making an HTTP request to chart-img. We're
237:37 getting the chart, downloading it, analyzing the image, sending it back,
237:40 and then responding back to the agent. So there's basically like two transfers
237:44 of data here that we don't need because as you can see down here, we have the
237:49 exact same process as one simple AI workflow. So there's going to be much
237:52 much less room for error here. But first of all, let's demo how this works and
237:56 then we'll demo the actual AI workflow. Okay, so it should be listening to us
237:59 now. I'm going to ask it to analyze Microsoft. And as you can see, it's now
238:02 hitting that tool. We won't see this workflow actually in real time just
238:06 because it's like calling a different execution, but this is the workflow that
238:08 it's calling down here. It's basically calling this right
238:12 here. So what it's going to do is it's going to send us an image and then
238:16 a second or two later it's going to send us an actual analysis. So there is
238:20 Microsoft's stock chart and now it's creating that analysis as you can see
238:22 right up here and then it's going to send us that analysis. We just got it.
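Before we run the linear version, here is roughly what that four-step pipeline looks like as a plain Python script: fetch the chart, send the image, analyze it with a vision model, then send the text. The chart-img endpoint and its parameters are assumptions; the Telegram sendPhoto/sendMessage calls follow the documented Bot API:

```python
import base64
import requests
from openai import OpenAI

BOT = "https://api.telegram.org/bot<YOUR_BOT_TOKEN>"
CHART_URL = "https://api.chart-img.com/v1/tradingview/advanced-chart"  # params assumed

def analyze_ticker(symbol: str, chat_id: int) -> None:
    # 1. fetch the chart image (endpoint and params are placeholders)
    chart = requests.get(CHART_URL, params={"symbol": symbol},
                         headers={"x-api-key": "CHART_IMG_KEY"}).content
    # 2. send the image immediately -- no agent round-trip first
    requests.post(f"{BOT}/sendPhoto", data={"chat_id": chat_id},
                  files={"photo": ("chart.png", chart)})
    # 3. have a vision model write the technical analysis
    b64 = base64.b64encode(chart).decode()
    analysis = OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": [
            {"type": "text", "text": f"Give a technical analysis of this {symbol} chart."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}}]}],
    ).choices[0].message.content
    # 4. send the text right after the image
    requests.post(f"{BOT}/sendMessage", data={"chat_id": chat_id, "text": analysis})
```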
238:26 So if you want to see the full video that I made on YouTube, I'll tag it
238:29 right up here. But not going to dive too much into what's actually happening. I
238:32 just want to prove that we can do the exact same thing down here with a simple
238:36 workflow. Although right here, I did evolve this workflow a little bit. So
238:39 it's not only looking at NASDAQ, but it can also choose different
238:43 exchanges and feed that into the API call. But anyways, let's make this
238:47 trigger down here active and let's just show off that we can do the exact same
238:51 thing with the workflow and it's going to be better. So, test workflow. This
238:56 should be listening to us. Now, I'm just going to ask it to analyze a
239:00 different one: Bank of America. So, now it's getting it. It is
239:04 going to be downloading the chart. Actually, I want to open up Telegram so we
239:07 can see downloading the chart, analyzing the image. It's going to send us that
239:11 image and then pretty much immediately after it should be able to send us that
239:15 analysis. So we don't have that awkward 2 to 5 second wait. Obviously we're
239:19 waiting here. But as soon as this is done, we should get both the image
239:22 and the text simultaneously. There you go. And so you can see the results are
239:27 basically the same. But this one is just going to be more consistent. There's no
239:30 transfer of data between workflow. There's no need to hit an AI model to
239:33 decide what tool I need to use. It is just going to be one seamless flow. You
239:37 can also get this workflow in the free school community if you want to
239:39 play around with it. Just wanted to throw that out there. Anyways, that's
239:43 going to wrap us up here. I just wanted to close off with this isn't me bashing
239:46 on AI agents. Well, I guess a little bit it was. AI agents are super powerful.
239:50 They're super cool. It's really important to learn prompt engineering
239:54 and giving them different tools, but it's just about understanding, am I
239:57 forcing an agent into something that doesn't need it? Am I exposing myself to
240:02 the risk of lower quality outputs, less consistency, more difficult time scaling
240:06 this thing? Things along those lines. And so that's why I think it's super
240:08 important to get into something like Excalidraw and wireframe out the solution that
240:12 you're looking to build. Understand what are all the steps here. What are the
240:16 different API calls or different people involved? What could happen here? Is
240:21 this deterministic or is there an aspect of decision-making and variability here?
240:24 Essentially, is every flow going to be the same or not the same? Cool. So now
240:29 that we have that whole concept out of the way, I think it's really important
240:31 to understand that so that when you're planning out what type of system you're
240:34 going to build, you're actually doing it the right way from the start. But now
240:38 that we understand that, let's finally set up our first AI agent together.
240:42 Let's move into that video. All right, so at this point you guys are familiar
240:45 with n8n. You've built a few AI workflows and now it's time to actually
240:49 build an AI agent, which gets even cooler. So before we actually hop into
240:52 there and do that, just want to do a quick refresher on this little diagram
240:55 we talked about at the beginning of this video, which is the anatomy of an AI
240:59 agent. So we have our input, we have our actual AI agent, and then we have an
241:03 output. The AI agent is connected to different tools, and that's how it
241:06 actually takes action. And in order to understand which tools do I need to use,
241:10 it will look at its brain and its instructions. The brain comes in the
241:13 form of a large language model, which in this video, we'll be using OpenRouter
241:17 to connect to as many different ones as we want. And you guys have already set
241:20 up your OpenRouter credentials. Then we also have access to memory which I will
241:23 show you guys how we're going to set up in n8n. Then finally it uses its
241:27 instructions in order to understand what to do and that is in the form of a
241:31 system prompt which we will also see in n8n. So all of these elements that
241:34 we've talked about will directly translate to something in n8n and I will
241:38 show you guys and call out exactly where these are so there's no confusion. So
241:43 we're going to hop into n8n and you guys know that a new workflow always starts
241:46 with a trigger. So, I'm going to hit tab and I'm going to type in a chat trigger
241:50 because we want to just basically be able to talk to our AI agent right here
241:55 in the native n8n chat. So, there is our trigger and what I'm going to do is
241:59 click the plus and add an AI agent right after this trigger so we can actually
242:02 talk to it. And so, this is what it looks like. You know, we have our AI
242:04 agent right here, but I'm going to click into it so we can just talk about the
242:07 difference between a user message up here and a system message that we can
242:11 add down here. So going back to the example with ChatGPT and with our diagram,
242:17 when we're talking to ChatGPT in our browser, every single time we type and
242:21 say something to ChatGPT, that is a user message because that message coming in
242:25 is dynamic every time. So you can see right here the source for the prompt
242:29 that the AI agent will be listening for, as if it was ChatGPT, is the connected
242:33 chat trigger node. So we're set up right here and the agent will be reading that
242:37 every time. If we were feeding in information to this agent that wasn't
242:40 coming from the chat message trigger, we'd have to change that. But right now,
242:43 we're good. And if we go back to our diagram, this is basically the input
242:47 that we're feeding into the AI agent. So, as you can see, input goes into the
242:50 agent. And that's exactly what we have right here. Input going into the agent.
242:54 And then we have the system prompt. So, I'm going to click back into the agent.
242:57 And we can see right here, we have a system message, which is just telling
243:00 this AI agent, you are a helpful assistant. So, right now, we're just
243:03 going to leave it as that. And back in our diagram that is right here, its
243:07 instructions, which is called a system prompt. So the next thing we can see
243:10 that we need is we need to give our AI agent a brain, which will be a large
243:14 language model and also memory. So I'm going to flip back into n8n. And you can
243:18 see we have two options right here. The first one is chat model. So I'm first of
243:21 all just going to click on the plus for chat model. I'm going to choose
243:24 OpenRouter. And we've already connected to OpenRouter. And now I just get to
243:27 choose from all of these different chat models to use. So I'm just going to go
243:34 ahead and choose GPT-4.1 mini. And I'm just going to rename this node
243:38 GPT-4.1 mini just so we know which one we're using. Cool. So now we have our input,
243:45 our AI agent, and a brain. But let's give it some memory real quick, which is
243:48 as simple as just clicking the plus under memory. And I'm just going to for
243:52 now choose simple memory, which stores it in n8n. There's no credentials
243:56 required. And as you can see, the session ID is looking for the connected
244:00 chat trigger node. Because we're using the connected chat trigger node, we
244:03 don't have to change anything. We are good to go. So, this is basically the
244:07 core part of the agent, right? So, what I can do is I can actually talk to this
244:10 thing. So, I can say, "Hey," and we'll see what it says back. It's going to use
244:15 its memory. It's going to use its brain to actually answer us. And it
244:18 says, "Hello, how can I assist you?" I can say, "My name is Nate. I am 23 years
244:25 old." And now what I'm going to basically test is that it's storing all
244:29 of this as memory and it's going to know that. So now it says, "Nice to meet you,
244:31 Nate. How can I help you?" Now I'm going to ask it, you know, what's my name and how old am I?
244:40 So we'll send that off. And now it's going to be able to answer us. Your name
244:43 is Nate and you are 23 years old. How can I assist you further? So first of
244:47 all, the reason it's being so helpful is because its system message says you're a
244:51 helpful assistant. The next piece would be it's using its brain to answer us and
244:55 it's using its memory to make sure it's not forgetting stuff about our current
245:00 conversation. So those are the three parts right there. Input, AI agent,
245:04 brain, and instructions. And now it's time to add the tools. So in this
245:08 example, we're going to build a super simple personal assistant AI agent that
245:12 can do three things. It's going to be able to look in our contact database in
245:17 order to grab contact information. With that contact information, it's going to
245:20 be able to send an email and it's going to be able to create a calendar event.
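n8n wires all of this up visually, but conceptually, "giving an agent tools" just means handing the model a list of function schemas it may choose to call. Here's a minimal sketch in the OpenAI tool-calling style, with two of the three tools we're about to build; the names and fields are mine for illustration, not n8n's:

```python
from openai import OpenAI

tools = [
    {"type": "function", "function": {
        "name": "lookup_contact",
        "description": "Get a contact's email address from the contacts database.",
        "parameters": {"type": "object",
                       "properties": {"name": {"type": "string"}},
                       "required": ["name"]}}},
    {"type": "function", "function": {
        "name": "send_email",
        "description": "Send an email; the model fills in to/subject/body itself.",
        "parameters": {"type": "object",
                       "properties": {"to": {"type": "string"},
                                      "subject": {"type": "string"},
                                      "body": {"type": "string"}},
                       "required": ["to", "subject", "body"]}}},
]

resp = OpenAI().chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": "Email Oprah and ask about her weekend."}],
    tools=tools)
print(resp.choices[0].message.tool_calls)  # the model decides which tool(s) to call
```

The "let the model define this parameter" buttons we'll click in a minute are n8n's version of the model filling in those to/subject/body arguments itself.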
245:24 So, first thing we're going to do is we're going to set up our contact
245:27 database. And what I'm going to do for that is just I have this Google sheet.
245:31 Really simple. It just says name and email. This could be maybe you have your
245:34 contacts in Google contacts. You could connect that or an Air Table base or
245:38 whatever you want. This is just the actual tool, the actual integration that
245:42 we want to make to our AI agent. So, what I'm going to do is throw in a few
245:46 rows of example names and emails in here. Okay. So, we're just going to
245:48 stick with these three. We've got Michael Scott, Ryan Reynolds, and Oprah
245:51 Winfrey. And now, what we're going to be able to do is have our AI agent look at
245:55 this contact database whenever we ask it to send an email to someone or make a
245:59 calendar event with someone. If I go back and add it in, the first thing we
246:02 have to do is add a tool to actually access this Google sheet. So, I'm going
246:05 to click on tool. I'm going to type in Google sheet. It's as simple as that.
246:08 And you can see we have a Google Sheets tool. So, I'm going to click on that.
246:11 And now we have to set up our credential. You guys have already
246:15 connected to Google Sheets in the previous workflow, so it shouldn't be
246:17 too difficult. So choose your credential. And then the first thing is
246:20 a tool description. What we're going to do is we are going to just set this
246:24 automatically. And this basically describes to the AI agent what does this
246:28 tool do. So we could set it manually and describe ourselves, but if you just set
246:32 it automatically, the AI is going to be pretty good at understanding what it
246:35 needs to do with this tool. The next thing is a resource. So what are we
246:38 actually looking for? We're looking for a sheet within a document, not an entire
246:41 document itself. Then the operation is we want to just get rows. So I'm going to leave it
246:47 all as that. And then what we need to do is actually choose our document and then
246:50 the sheet within that document that we want to look at. So for document, I'm
246:54 going to choose contacts. And for sheet, there's only one. I'm just going to
246:57 choose sheet one. And then the last thing I want to do is just give this
247:01 actual tool a pretty intuitive name. So I'm just going to call this
247:06 contacts database. There you go. So now it should be super clear to this AI
247:09 agent when to use this tool. We may have to do some system prompting actually to
247:12 say like, hey, here are the different tools you have. But for now, we're just
247:15 going to test it out and see if it works. So what I'm going to do is open
247:19 up the chat and just ask it, can you please get Oprah Winfrey's contact
247:23 information. There we go. We'll send that off and we will watch it basically
247:27 think. And then there we go. Boom. It hit the Google Sheet tool that we wanted
247:31 it to. And if I open up the chat, it says Oprah Winfrey's contact
247:35 information is the email oprah@winfrey.com. If we go into the base, we can see that
247:39 is exactly what we put for her contact information. Okay, so we've confirmed
247:43 that the agent knows how to use this tool and that it can properly access
247:46 Google Sheets. The next step now is to add another tool to be able to send
247:49 emails. So, I'm going to move this thing over. I'm going to add another tool and
247:53 I'm just going to search for Gmail and click on Gmail tool. Once again, we've
247:57 already covered credentials. So, hopefully you guys are already logged in
248:00 there. And then what we need to do is just configure the rest of the tool. So
248:04 tool description set automatically resource message operation send and then
248:10 we have to fill out the To, the Subject, the Email Type, and the Message. What
248:14 we're able to do with our AI agents and tools is something super super cool. We
248:19 can let our AI agent decide how to fill out these three fields that will be
248:23 dynamic. And all I have to do is click on this button right here to the right
248:26 that says let the model define this parameter. So I'm going to click on that
248:29 button. And now we can see that it says defined automatically by the model. So
248:33 basically if I said, hey, can you send an email to Oprah Winfrey saying this
248:39 and this, it would then interpret our message, our user input, and it would then
248:45 fill out who this is going to, what the subject is, and what the message is. So I'll
248:47 show you guys an example of that. It's super cool. So I'm just going to click
248:51 on this button for subject and also this button for message. And now we can see
248:56 the actual AI use its brain to fill out these three fields. And then also I'm just going to change
249:01 the email type to text because I like it how it comes through as text. So real
249:06 quick, just want to change this name to send email. And all we have to do now is
249:10 we're going to chat with our agent and see if it's able to send that email. All
249:13 right. So I'm sending off this message that asks to send an email to Oprah
249:17 asking how she's doing and if she has plans this weekend. And what happened is
249:21 it went straight to the send email tool. And the reason it did that is because in
249:25 its memory, it remembered that it already knows Oprah Winfrey's contact
249:29 information. So if I open chat, it says the email's been sent asking how she's
249:32 doing and if she has plans this weekend. Is there anything else that you would
249:36 like to do? So real quick before we go see if the email actually did get sent,
249:39 I'm going to click into the tool. And what we can see is on this left hand
249:44 side, we can see exactly how it chose to fill out these three fields. So for the
249:48 To, it put oprah@winfrey.com, which is correct. For the subject, it put
249:52 checking in. And for the message, it put hi Oprah. I hope this weekend finds you
249:55 well. How are you doing? Do you have any plans? Best regards, Nate. And another
250:00 thing that's really cool is the only reason that it signed off right here as
250:03 best regards Nate is because once again, it used its memory and it remembers that
250:07 our name is Nate. That's how it filled out those fields. Let me go over to my
250:11 email and we'll take a look. So, in our sent, we have the checking in subject.
250:15 We have the message that we just read in n8n. And then we have this little
250:18 thing at the bottom that says this email was automatically sent by n8n. We can
250:23 easily turn that off if we go into n8n. Open up the tool. We add an option at
250:26 the bottom that says append n8n attribution. And then we just turn off
250:30 the append n8n attribution. And as you can see if we click on add options,
250:33 there are other things that we can do as well. Like we can reply to the sender
250:36 only. We can add a sender name. We can add attachments. All this other stuff.
250:41 But at a high level and real quick setup, that is the send email tool. And
250:45 keep in mind, we still haven't given our agent any sort of system prompt besides
250:48 saying you're a helpful assistant. So, super cool stuff. All right, cool. And
250:52 now for the last tool, what we want to do is add a create calendar event. So, I'm going
250:59 to search calendar and grab a Google calendar node. We already should be set
251:03 up. Or if you're not, actually, all you have to do is just create new credential
251:06 and sign in real quick because you already went and created your whole
251:10 Google Cloud thing. We're going to leave the description as automatic. The resource is an event. The
251:15 operation is create. The calendar is going to be one that we choose from our
251:19 account. And now we have a few things that we want to fill out for this tool.
251:23 So basically, it's asking what time is the event going to start and what time
251:26 is the event going to end. So real quick, I'm just going to do the same
251:29 thing. I'm going to let the model decide based on the way that we interact with
251:33 it with our input. And then real quick, I just want to add one more field, which
251:36 is going to be a summary. And basically whatever gets filled in right here for
251:39 summary is what's going to show up as the name of the event in Google
251:42 calendar. But once again we're going to let the model automatically define this
251:49 field. So let's call this node create event. And actually one more thing I
251:52 forgot to do is we want to add an attendee. So we can actually let the
251:56 agent add someone to an event as well. So that is the new tool. We're going to
252:00 hit save. And remember no system prompts. Let's see if we can create a
252:04 calendar event with Michael Scott. All right. So, we're asking for
252:08 dinner with Michael at 6 p.m. What's going to happen is... okay, so
252:12 we're going to have to do some prompting because we don't know Michael Scott's
252:15 contact information yet, but it went ahead and tried to create that event.
252:19 So, it said that it created the event and let's click into the tool and see
252:23 what happened. So, it tried to send the event invite to michael.scott@example.com. So, it
252:29 completely made that up because in our contacts base, Michael Scott's email is
252:33 mike@greatscott.com. So, it got that wrong. That's the first thing it got
252:38 wrong. The second thing it got wrong was the actual start and end date. So, yes,
252:42 it made the event for 6 p.m., but it made it for 6 p.m. on April 27th, 2024,
252:48 which was over a year ago. So, we can fix this by using the system prompt. So,
252:52 what I'm going to do real quick is go into the system prompt, and I'm just
252:55 going to make it just an expression and open it up full screen real quick. What
253:00 I'm going to say next is you must always look in the contacts database before
253:05 doing something like creating an event or sending an email. You need the
253:10 person's email address in order to do one of those actions. Okay, so that's a really simple
253:16 thing we can add. And then also what I want to tell it is what is today's
253:19 current date and time? So that if I say create an event for tomorrow or create
253:22 an event for today, it actually gets the date right. So, I'm just going to say
253:28 here is the current date/time. And all I have to do to give it access to
253:31 the current date and time is do two curly braces. And then right here you
253:35 can see dollar sign now which says a date time representing the current
253:39 moment. So if I click on that on the right hand side in the result panel you
253:42 can see it's going to show the current date and time. So we're happy with that.
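All the {{ $now }} expression is doing is interpolating the current timestamp into the prompt text at run time; the plain-Python equivalent is one f-string:

```python
from datetime import datetime

# Equivalent of the upgraded system prompt with {{ $now }} resolved at run time.
system_prompt = (
    "You must always look in the contacts database before creating an event "
    "or sending an email. You need the person's email address first.\n"
    f"Here is the current date/time: {datetime.now().isoformat()}"
)
print(system_prompt)
```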
253:45 Our system prompt has been a little bit upgraded and now we're going to just try
253:49 that exact same query again and we'll see what happens. So, I'm going to click
253:53 on this little repost message button. Send it again. And hopefully now, there
253:57 we go. It hits the contact database to get Michael Scott's email. And then it
254:00 creates the calendar event with Michael Scott. So, down here, it says, I've
254:04 created a calendar event for dinner with Michael Scott tonight at 6. If you need
254:07 any more assistance, feel free to ask. So, if I go to my calendar, we can see
254:11 we have a 2-hour long dinner with Michael Scott. If I click onto it, we
254:16 can see that the guest that was invited was mike@greatscott.com, which is exactly
254:21 what we see in our contact database. And so, you may have noticed it made this
254:23 event for 2 hours because we didn't specify. If I said, "Hey, create a
254:27 15-minute event," it would have only made it 15 minutes. So, what I'm going
254:31 to do real quick is a loaded prompt. Okay, so fingers crossed. We're saying,
254:35 "Please invite Ryan Reynolds to a party tonight that's only 30 minutes long at 8
254:39 p.m. and send him an email to confirm." So, what happened here? It went to go
254:43 create an event and send an email, but it didn't get Ryan Reynolds' email first.
254:47 So, if we click into this, we can see that it sent an email to
254:50 ryan.reynolds@example.com. That's not right. And it went to create an event at
254:54 ryan.reynolds@example.com. And that's not right either. But the good news is if we
254:58 go to calendar, we can see that it did get the party right as far as it's 8
255:02 p.m. and only 30 minutes. So, because it didn't take the right action, it's not
255:06 that big of a deal. We know now that we have to go and refine the system prompt.
255:10 So to do that, I'm going to open up the agent. I'm going to click into the
255:14 system prompt. And we are going to fix some stuff up. Okay. So I added two
255:17 sentences that say, "Never make up someone's email address. You must look
255:21 in the contact database tool." So as you guys can see, this is pretty natural
255:24 language. We're just instructing someone how to do something as if we were
255:27 teaching an intern. Okay. So what I'm going to do real quick is clear this
255:30 memory. So I'm just going to reset the session. And now we're starting from a
255:33 clean slate. And I'm going to ask that exact same query to do that multi-step
255:37 thing with Ryan Reynolds. All right. Take two. We're inviting Ryan Reynolds
255:40 to a party at 9:00 p.m. There we go. It's hitting the contacts database. And
255:43 now it's going to hit the create event and the send email tool at the same
255:47 time. Boom. I've scheduled a 30-minute party tonight at 9:00 p.m. and invited
255:51 Ryan Reynolds. So, let's go to our calendar. We have a 9 p.m. party for 30
255:55 minutes long, and it is ryan@pool.com, which is exactly what we
255:59 see in our contacts database. And then, if we go to our email, we can see now
256:03 that we have a party invitation for tonight to ryan@pool.com. But what you'll
256:08 notice is now it didn't sign off as Nate because I cleared that memory. So this
256:13 would be a super simple fix. We would just want to go to the system prompt and
256:15 say, "Hey, when you're sending emails, make sure you sign off as Nate." So
256:19 that's going to be it for your first AI agent build. This one is very simple,
256:24 but also hopefully really opens your eyes to how easy it is to plug in these
256:27 different tools. And it's really just about your configurations and your
256:31 system prompts because system prompting is a really important skill and it's
256:34 something that you kind of have to just try out a lot. You have to get a lot of
256:38 reps and it's a very iterative process. But anyways, congratulations. You just
256:42 built your first AI agent in probably less than 20 minutes and now add on a
256:46 few more tools. Play around with a few more parameters and just see how this
256:49 kind of stuff works. In this section, what I'm going to talk about is dynamic
256:54 memory for your AI agents. So if you remember, we had just set up this agent
256:58 and we were using simple memory and this was basically helping us keep
257:02 conversation history. But what we didn't yet talk about was the session ID and
257:07 what that exactly means. So basically think of a session ID as some sort of
257:12 unique identifier that identifies each separate conversation. So, if I'm
257:18 talking to you, person A, and you ask me something, I'm gonna go look at
257:22 conversations from our conversation, person A and Nate, and then I can read
257:25 that for context and then respond to you. But if person B talks to me, I'm
257:29 going to go look at my conversation history with person B before I respond
257:34 to them. And that way, I keep two people and two conversations completely
257:37 separate. So, that's what a session ID is. So, if we were having some sort of
257:41 AI agent that was being triggered by an email, we would basically want to set
257:46 the session ID as the email address coming in because then we know that the
257:50 agent's going to be uniquely responding to whoever actually sent that email that
257:55 triggered it. So, just to demonstrate how that works, what I'm going to do is
257:58 just manipulate the session ID a little bit. So, I'm going to come into here and
258:02 I'm going to instead of using the chat trigger node for the session ID, I'm
258:06 going to just define it below. And I'm just going to do that exact example that
258:09 I just talked to you guys about with person A and person B. So I'm just going
258:13 to put a lowercase A in there as the session ID key. So once I save that, what I'm going
258:20 to do is just say hi. Now it's going to respond to me. It's going to update the
258:23 conversation history and say hi. I'm going to say my name is
258:27 um Bruce. I don't know why I thought of Bruce, but my name's Bruce. And now it
258:31 says nice to meet you Bruce. How can I assist you? Now what I'm going to do is
258:36 I'm going to change the session ID to B. We'll hit save. And I'm just going to
258:39 say, what's my name? And it's going to say, I don't have access to your name
258:46 directly. If you'd like, you can provide your name or any other details you want
258:50 me to know. How can I assist you today? So person A is Bruce. Person B is no
258:54 name. And what I'm going to do is go back to putting the key as A. Hit save.
259:01 And now if I say, "What is my name?" with a misspelled my, it's going to say,
259:04 "Hey, Bruce." There we go. Your name is Bruce. How can I assist you further? And so
259:10 And so that's just a really quick demo of how you're able to actually have
259:15 dynamic conversations with multiple users in one single agent flow because
259:20 you can make this field dynamic. So, what I'm going to do to show you guys a
259:23 practical use of this, let's say you're wanting to connect your agent to Slack
259:27 or to Telegram or to WhatsApp or to Gmail. You want the memory to be dynamic
259:31 and you want it to be unique for each person that's interacting with it. So,
259:35 what I have here is a Gmail trigger. I'm going to hit test workflow, which should
259:38 just pull in an email. So, when we open up this email, we can see like the
259:41 actual body of the email. We can see, you know, like history. We can see a
259:45 thread ID, all this kind of stuff. But what I want to look at is who is the
259:48 email from? Because then if I feed this into the AI agent and first of all we
259:53 would have to change the actual um user message. So we are no longer talking to
259:57 our agent with the connected chat trigger node, right? We're connecting to
260:01 it with Gmail. So I'm going to click to find below. The user message is
260:04 basically going to be whatever you want the agent to look at. So don't even
260:09 think about end right now. If you had an agent to help you with your emails, what
260:11 would you want it to read? You'd want it to read maybe a combination of the
260:15 subject and the body. So that's exactly what I'm going to do. I'm just going to
260:19 type in subject. Okay, here's the subject down here. And I'm going to drag
260:22 that right in there. And then I'm just going to say body. And then I would drag
260:27 in the actual body snippet. And it's a snippet right now because in the actual
260:31 Gmail trigger, we have this flipped on as simplified. If we turn that off, it
260:34 would give us not a snippet. It would give us a full email body. But for right
260:37 now, for simplicity, we'll leave it simplified. But now you can see that's
260:40 what the agent's going to be reading every time, not the connected chat
260:44 trigger node. And before we hit test step, what we want to do is we want to
260:47 make the sender of this email also the session key for the simple memory. So
260:54 we're going to define below and what I'm going to do is find the from field which
260:59 is right here and drag that in. So now whenever we get a new email, we're going
261:02 to be looking at conversation history from whoever sent that email to trigger
261:06 this whole workflow. So I'll hit save and basically what I'm going to do is
261:10 just run the agent. And what it's going to do is update the memory. It's going
261:13 to be looking at the correct thing and it's taking some action for us. So,
261:17 we'll take a look at what it does. But basically, it said the invitation email
261:20 for the party tonight has been sent to Ryan. If you need any further
261:23 assistance, please let me know. And the reason why it did that is because the
261:27 actual user message basically was saying we're inviting Ryan to a party. So,
261:31 hopefully that clears up some stuff about dynamic user messages and
261:36 dynamic memory. And now you're on your way to building some pretty cool agentic
261:39 workflows. And something important to touch on real quick is with the memory
261:43 within the actual node. What you'll notice is that there is a context window
261:47 length parameter. And this says how many past interactions the model receives as
261:50 context. So this is definitely more of the short-term memory because it's only
261:53 going to be looking at the past five interactions before it crafts its
261:57 response.
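Under the hood, a context window length of five just means truncating the stored history before each model call, something like:

```python
def trim_history(history: list[dict], context_window: int = 5) -> list[dict]:
    """Keep only the last N interactions (a user turn plus an AI turn each)."""
    return history[-context_window * 2:]  # 5 interactions = at most 10 messages
```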
262:01 And this is not just with the simple memory node. If I delete this connection and click on memory, you can see there are other
262:04 types of memory we can use for our AI agents. Let's say for example we're
262:07 doing Postgres, which later in this course you'll see how to set this up.
262:10 But in Postgres you can see that there's also a context window length. So
262:14 just to show you guys an example of like what that actually looks like. What
262:16 we're going to do is just connect back to here. I'm going to drag in our chat
262:20 message trigger which means I'm going to have to change the input of the AI
262:23 agent. So we're going to get rid of this whole um defined below with the subject
262:27 and body. We're going to drag in the connected chat trigger node. Go ahead
262:31 and give this another save. And now I'm just going to come into the chat and
262:37 say, "Hello, Mr. Agent. What is going on here? We have the memory is messed up."
262:41 So remember, I just changed the session ID from our chat trigger to the Gmail
262:48 trigger, the email address of whoever just sent us the
262:50 email. So I'm going to have to go change that again. I'm just going to simply
262:54 choose connected chat trigger node. And now it's referencing the correct session
262:57 ID. Our variable is green. We're good to go. We'll try this again. Hello, Mr.
263:01 Agent. It's going to talk to us. So, I just said my name is Nate. Okay. Nice to meet you, Nate. How
263:14 can I assist you? My favorite color is blue. And I'm going to say, you know,
263:21 tell me about myself. Okay. So, it's using all that memory, right? We
263:24 basically saw a demo of this, but it basically says, other than your name and
263:27 your favorite color is blue, what else is there about you? So if I go into the
263:31 agent and I click over here into the agent logs, we can see the basically the
263:36 order of operations that the agent took in order to answer us. So the first
263:40 thing that it does is it uses its simple memory. And that's where you can see
263:43 down here, these are basically the past interactions that we've had, which was
263:49 hello Mr. Agent, my name is Nate, my favorite color is blue. And this would
263:52 basically cap out at five interactions. So that's all we're basically saying in
263:57 this context window length right here. So, just wanted to throw that out there
264:00 real quick. This is not going to be absolutely unlimited memory to remember
264:04 everything that you've ever said to your agent. We would have to set that up in a
264:07 different way. All right, so you've got your agent up and running. You have your
264:10 simple memory set up, but something that I alluded to in that video was setting
264:15 up memory outside of n8n, which could be something like Postgres. So in this
264:17 next one, we're going to walk through the full setup of creating a Supabase
264:21 account, connecting your Postgres and your Supabase so that you can have your
264:25 short-term memory with Postgres and then you can also connect a vector
264:28 database with Supabase. So let's get started. So today I'm going to be
264:30 showing you guys how to connect PostgreSQL and Supabase to n8n. So
264:34 what I'm going to be doing today is walking through signing up for an
264:36 account, creating a project, and then connecting them both to n8n so you guys
264:40 can follow every step of the way. But real quick, Postgres is an open-source
264:43 relational database management system where you're able to use plugins like
264:46 pgvector if you want vector similarity search. In this case, we're just going
264:49 to be using Postgres as the memory for our agent. And then Supabase is a
264:52 backend as a service that's kind of built on top of Postgres. And in
264:55 today's example, we're going to be using that as the vector database. But I don't
264:58 want to waste any time. Here we are in n8n. And what we know we're going to
265:01 do here for our agent is give it memory with Postgres and access to a vector
265:05 database in Supabase. So for memory, I'm going to click on this plus and
265:07 click on Postgres chat memory. And then we'll set up this credential. And then
265:10 over here we want to click on the plus for tool. We'll grab a Supabase vector
265:13 store node and then this is where we'll hook up our Supabase credential. So
265:16 whenever we need to connect to these thirdparty services what we have to do
265:19 is come into the node go to our credential and then we want to create a
265:22 new one. And then we have all the stuff to configure like our host, our username,
265:26 our password, our port, all this kind of stuff. So we have to hop into Supabase
265:30 first, create an account, create a new project, and then we'll be able to access
265:33 all this information to plug in. So here we are in Supabase. I'm going to be
265:35 creating a new account like I said just so we can walk through all of this step
265:38 by step for you guys. So, first thing you want to do is sign up for a new
265:41 account. So, I just got my confirmation email. So, I'm going to go ahead and
265:43 confirm. Once you do that, it's going to have you create a new organization. And
265:46 then within that, we create a new project. So, I'm just going to leave
265:48 everything as is for now. It's going to be personal. It's going to be free. And
265:52 I'll hit create organization. And then from here, we are creating a new
265:54 project. So, I'm going to leave everything once again as is. This is the
265:57 organization we're creating the project in. Here's the project name. And then
266:00 you need to create a password. And you're going to have to remember this
266:02 password to hook up to our Supabase node later. So, I've entered my password. I'm
266:06 going to copy this because like I said, you want to save this so you can enter
266:08 it later. And then we'll click create new project. This is going to be
266:11 launching up our project. And this may take a few minutes. So, um, just have to
266:15 be patient here. As you can see, we're in the screen. It's going to say setting
266:18 up project. So, we pretty much are just going to wait until our project's been
266:21 set up. So, while this is happening, we can see that there's already some stuff
266:23 that may look a little confusing. We've got project API keys with a service role
266:27 secret. We have configuration with a different URL and some sort of JWT
266:30 secret. So, I'm going to show you guys how you need to access what it is and
266:35 plug it into the right places in n8n. But, as you can see, we got launched to
266:38 a different screen. The project status is still being launched. So, just going
266:41 to wait for it to be complete. So, everything just got set up. We're now
266:44 good to connect to n8n. And what you want to do is typically you'd come down
266:47 to project settings and you click on database. And this is where everything
266:50 would be to connect. But it says connection string has moved. So, as you
266:52 can see, there's a little button up here called connect. So, we're going to click
266:55 on this. And now, this is where we're grabbing the information that we need
266:58 for Postgres. So this is where it gets a little confusing because there's a lot
267:01 of stuff that we need for Postgres. We need to get a host, a username, our
267:05 password from earlier when we set up the project, and then a port. So all we're
267:08 looking for are those four things, but we need to find them in here. So what
267:11 I'm going to do is change the type to PostgreSQL. And then I'm going to go
267:15 down to the transaction pooler, and this is where we're going to find the things
267:17 that we need. The first thing that we're looking for is the host, which if you
267:20 set it up just like me, it's going to be after the -h. So it's going to be aws,
267:24 then our region, then .pooler.supabase.com. So we're going to grab that, copy it, and then we're
267:29 going to paste that into the host section right there. So that's what it
267:32 should look like for host. Now we have a database and a username to set up. So if
267:36 we go back into that Supabase page, we can see we have a -d and a -U. So the database is
267:40 going to stay as postgres, but for user, we're going to grab everything
267:43 after the -U, which is going to be postgres. followed by these
267:47 different characters, your project reference. So I'm going to paste that in here under the user. And
267:50 for the password, this is where you're going to paste in the password that you
267:53 use to set up your Supabase project. And then finally at the bottom, we're
267:56 looking for a port, which is by default 5432. But in this case, we're going to
267:59 grab the port from the transaction pooler right here, which is following
268:04 the lowercase P. So we have 6543. I'm going to copy that, paste that into here
268:07 as the port. And then we'll hit save. And we'll see if we got connection
268:10 tested successfully. There we go. We got green. And then I'm just going to rename
268:13 this so I can keep it organized.
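If you want to sanity-check those four values outside n8n, the same credential can be tested with a quick Python connection. Everything below is a placeholder shaped like the transaction pooler screen, so substitute your own host, project reference, and password:

```python
import psycopg2

conn = psycopg2.connect(
    host="aws-0-us-east-1.pooler.supabase.com",  # the value after -h (yours will differ)
    dbname="postgres",                           # the -d value stays postgres
    user="postgres.your-project-ref",            # everything after -U
    password="YOUR_PROJECT_PASSWORD",            # set when you created the project
    port=6543,                                   # transaction pooler port, not 5432
)
with conn, conn.cursor() as cur:
    cur.execute("select now()")  # quick connection test
    print(cur.fetchone())
```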
268:16 So there we go. We've connected to Postgres as our chat memory. We can see that it is going to be using the
268:19 connected chat trigger node. That's how it's going to be using the key to store
268:22 this information. And it's going to be storing it in a table in Supabase called
268:25 n8n_chat_histories. So real quick, I'm going to talk to the agent. I'm just
268:28 going to disconnect the Supabase so we don't get any errors. So now when I send
268:31 off hello AI agent, it's going to respond to us with something like hey,
268:34 how can I help you today? Hello, how can I assist you? And now you can see that
268:37 there were two things stored in our Postgres chat memory. So we'll switch
268:40 over to Supabase. And now we're going to come up here on the left and go to
268:43 table editor. We can see we have a new table that we just created called NAN
268:46 chat histories. And then we have two messages in here. So the first one as
268:49 you can see, was a human type, and the content was "hello AI agent," which is what
268:53 we said to the AI agent. And then the second one was an AI type, and this is the
268:58 AI's response to us. So it said, "hello, how can I assist you today?" So this is
269:01 where all of your chats are going to be stored based on the session ID, and just
269:05 once again, this session ID is coming from the connected chat trigger node. So
269:08 it's just coming from this node right here. As you can see, there's the
269:11 session ID that matches the one in our chat memory table. And that is how
269:16 it's using it to store sort of like the unique chat conversations. Cool.
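To make the table concrete, here's a rough sketch of the shape of each row, inferred from what we just saw in the table editor. Field names are close to what n8n writes, but treat them as illustrative:

```typescript
// Approximate shape of a row in n8n_chat_histories, based on what the
// table editor showed: a session key plus a typed message payload.
type ChatHistoryRow = {
  id: number;
  session_id: string; // comes from the connected chat trigger node
  message: {
    type: "human" | "ai";
    content: string;  // e.g. "hello AI agent" or "Hello, how can I assist you today?"
  };
};
```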
269:20 Now that we have Postgres chat memory set up, let's hook up our Supabase vector
269:24 store. So, we're going to drag it in. And then now we need to go up here and
269:27 connect our credentials. So, I'm going to create new credential. And we can see
269:31 that we need two things, a host and a service role secret. And the host is not
269:34 going to be the same one as the host that we used to set up our Postgres. So
269:37 let's hop into Supabase and grab this information. So back in Supabase, we're
269:41 going to go down to the settings. We're going to click on data API and then we
269:45 have our project URL and then we have our service role secret. So this is all
269:49 we're using for URL. We're going to copy this, go back to n8n, and then we'll
269:52 paste this in as our host. As you can see, it's supposed to be https://
269:56 and then your Supabase project. So we'll paste that in and you can see
270:00 that's what we have, ending in .supabase.co. Also, keep in mind this is because I launched an
270:03 organization and a project in Supabase's cloud. If you were to
270:06 self-host this, it would be a little different because you'd have to access
270:09 your localhost. And then of course, we need our service role secret. So back in
270:13 Supabase, I'm going to reveal, copy, and then paste it into n8n. So let me
270:16 do that real quick. And as you can see, I got that huge token. Just paste it in.
270:19 So what I'm going to do now is save it. Hopefully it goes green. There we go. We
270:22 have connection tested successfully. And then once again, just going to rename
270:25 this. The next step from here would be to create our Supabase vector store
270:28 within the platform that we can actually push documents into. So you're going to
270:31 click on docs right here. You are going to go to the quick start for setting up
270:35 your vector store and then all you have to do right here is copy this command.
270:38 So in the top right, copy this script. Come back into Supabase. You'll come on
270:42 the left-hand side to the SQL editor. You'll paste that command in here. You
270:44 don't change anything at all. You'll just hit run. And then you should
270:48 see down here "Success. No rows returned." And then in the table editor, we'll have
270:51 a new table over here called documents. So this is where, when we're actually
270:54 vectorizing our data, it's going to go.
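Roughly, the quick-start script leaves you with a documents table shaped like this. It's sketched here as a type, the exact columns come from the script, and the embedding size depends on the embedding model you pick:

```typescript
// Approximate shape of a row in the documents table the quick-start
// script creates. With OpenAI's default embeddings the vector has
// 1536 dimensions; other models differ.
type DocumentRow = {
  id: number;
  content: string;                   // one chunk produced by the text splitter
  metadata: Record<string, unknown>; // e.g. { source: "...", blobType: "..." }
  embedding: number[];               // the vector itself
};
```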
270:57 Okay. So, I'm just going to do a real quick example of putting a Google Doc into our Supabase
271:00 vector database just to show you guys that everything's connected the way it
271:03 should be and working as it should. So, I'm going to grab a Google Drive node
271:06 right here. I'm going to click download file. I'm going to select a file to
271:10 download which in this case I'm just going to grab body shop services terms
271:13 and conditions and then hit test step. And we'll see the binary data which is a
271:17 doc file over here. And now we have that information. And what we want to do with
271:21 it is add it to the Supabase vector store. So, I'm going to type in
271:25 Supabase. We'll see vector store. The operation is going to be add documents
271:28 to vector store. And then we have to choose the right credential because we
271:31 have to choose the table to put it in. So in this case we already made
271:34 a table. As you can see in our Supabase it's called documents. So back in here
271:38 I'm going to choose the credential I just made. I'm going to choose insert
271:41 documents and I'm going to choose the table to insert it to, not the n8n chat
271:45 histories table. We want to insert this to documents because this one is set up for
271:49 vectorization. From there I have to choose our document loader as well as
271:52 our embeddings. So I'm not really going to dive into exactly what this all means
271:55 right now. If you're kind of confused and you're wanting a deeper dive on RAG
271:58 and building agents, definitely check out my paid community. We've got
272:01 different deep dive topics about all this kind of stuff. But I'm just going
272:03 to set this up real quick so we can see the actual example. I'm just choosing
272:07 the binary data to load in here. I'm choosing the embedding and I'm choosing
272:10 our text splitter which is going to be recursive. And so now all I have to do
272:13 here is hit run. It's going to be taking that binary data of that body shop file.
272:17 It split it up. And as you can see, there are three items. So if we go back
272:20 into our Supabase vector store and we hit refresh, we now see three items in our
272:25 vector database, and we have the different content and all this
272:28 information here. The standard oil change and the synthetic oil change are
272:31 coming from our body shop document that I have right here, that we put in there
272:35 just to validate the RAG setup. And we know that this is a vector store
272:38 rather than a relational one because we can see we have our vector embedding
272:41 over here, which is all the dimensions. And then we have our metadata. So we
272:44 have stuff like the source and the blob type, all this kind of stuff. And
272:47 this is where we could also go ahead and add more metadata if we wanted to.
272:51 Anyways, now that we have vectors in our documents table, we can hook up the
272:55 actual agent to the correct table. So in here, what I'm going to call this is
273:00 body shop. For the description, I'm going to say use this to get information
273:06 about the body shop. And then from the table name, we have to choose the
273:08 correct table, of course. So we know that we just put all this into something
273:11 called documents. So I'm going to choose documents. And finally, we just have to
273:15 choose our embeddings, of course, so that it can embed the query and pull
273:18 stuff back accurately. And that's pretty much it. We have our AI agent set up.
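Under the hood, the retrieval step works roughly like this. This is a hedged sketch, not n8n's internal code: the tool embeds the question with the same embeddings model used at insert time, then runs a similarity search (the quick-start script creates a match_documents function for exactly this):

```typescript
// Rough sketch of what the vector-store tool does when the agent calls it.
// Function and parameter names follow the Supabase quick-start script, but
// treat the details as illustrative.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  "https://<project-ref>.supabase.co", // your project URL
  "<service-role-secret>"              // the key we pasted into the credential
);

declare function embed(text: string): Promise<number[]>; // stand-in for the embeddings node

async function queryBodyShopStore(question: string) {
  const query_embedding = await embed(question); // must match the insert-time model
  return supabase.rpc("match_documents", { query_embedding, match_count: 4 });
}
```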
273:22 So, let's go ahead and do a test and see what we get back. So, I'm going to go
273:25 ahead and say what brake services are offered at the body shop. It's going to
273:29 update the Postgres memory. So, now we'll be able to see that query. It hit
273:32 the Supabase vector store in order to retrieve that information and then
273:36 create an augmented, generated answer for us. And now we have: the body shop offers
273:40 the following brake services. $120 per axle for replacement, $150 per axle for
273:45 rotor replacement, and then a full brake inspection is 30 bucks. So, if we click
273:49 back into our document, we can see that that's exactly what it just pulled. And
273:53 then, if we go into our vector database within Supabase, we can find that
273:56 information in here. But then we can also click on n8n_chat_histories, and we
274:00 can see we have two more chats. So, the first one was a human, which is what we
274:03 said: What brake services are offered at the body shop? And then the second one
274:07 was AI content, which is: the body shop offers the following brake services,
274:10 blah blah blah. And this is exactly what it just responded to us with within n8n
274:14 down here as you can see. And so keep in mind this AI agent has zero prompting.
274:17 We didn't even open up the system message. All that's in here is "you are a
274:20 helpful assistant." But if you are setting this up, what you want to do is
274:23 explain its role, and you want to tell it you have access to a
274:28 vector database. It is called X. It has information about X, Y, and Z, and you
274:32 should use it when a client asks about X, Y, and Z.
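Putting that advice together, a system message for this particular agent might look something like the sketch below. This is just one way to phrase it, with the tool name and topics filled in for the body shop example:

```typescript
// An illustrative system message following the advice above; adjust the
// tool name and topics to your own setup.
const systemMessage = `
You are a helpful assistant for a body shop.
You have access to a vector database tool called "body shop".
It contains the shop's services, prices, and terms and conditions.
Use it whenever a client asks about services, pricing, or policies.
`;
```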
274:35 Anyways, that's going to be it for this one. Supabase and Postgres are super powerful tools to use to
274:38 connect up as a database for your agents, whether it's going to be
274:41 relational or vector databases, and you've got lots of options with, you
274:44 know, self-hosting and some good options for security and scalability there. Now
274:47 that you guys have built an agent and you see the way that an agent is able to
274:50 understand what tools it has and which ones it needs to use, what's really
274:55 really cool and powerful about n8n is that we can have a tool for an AI agent
274:59 be a custom workflow that we built out in n8n, or we can build out a custom
275:04 agent in n8n and then give our main agent access to call on that lower
275:08 agent. So what I'm about to share with you guys next is an architecture you can
275:11 use when you're building multi-agent systems. It's basically called having an
275:15 orchestrator agent and sub-agents, or parent agents and child agents. So,
275:18 let's dive into it. I think you guys will think it's pretty cool. So, a
275:22 multi-agent system is one where we have multiple autonomous AI agents working
275:26 together in order to get the job done and they're able to talk to each other
275:28 and they're able to use the tools that they have access to. What we're going to
275:31 be talking about today is a type of multi-agent system called the
275:34 orchestrator architecture. And basically what that means is that we have one agent
275:38 up here. I call it the parent agent and then I call these child agents. But we
275:41 have an orchestrator agent that's able to call on different sub-agents. And the
275:46 best way to think about it is this agent's only goal is to understand the
275:50 intent of the user. Whether that's through Telegram or through email,
275:53 whatever it is, understanding that intent and then understanding, okay, I
275:57 have access to these four agents and here is what each one is good at. Which
276:01 one or which ones do I need to call in order to actually achieve the end goal?
276:05 So, in this case, if I'm saying to the agent, can you please write me a quick
276:11 blog post about dogs and send that to Dexter Morgan, and can you also create a
276:14 dinner event for tonight at 6 p.m. with Michael Scott? And thank you. Cool. So,
276:20 this is a pretty loaded task, right? And can you imagine if this one agent had
276:25 access to all of these like 15 or however many tools and it had to do all
276:29 of that itself, it would be pretty overwhelmed and it wouldn't be able to
276:32 do it very accurately. So, what you can see here is it is able to just
276:35 understand, okay, I have these four agents. They each have a different role.
276:38 Which ones do I need to call? And you can see what it's doing is it called the
276:41 contact agent to get the contact information. Right now, it's calling the
276:44 content creator agent. And now that that's finished up, it's probably going
276:47 to call the calendar agent to make that event. And then it's going to call the
276:50 email agent in order to actually send that blog that we had the content
276:54 creator agent make. And then you can see it also called this little tool down
276:56 here called Think. If you want to see a full video where I broke down what that
276:59 does, you can watch it right up here. But we just got a response back from the
277:03 orchestrator agent. So, let's see what it said. All right, so it said, "The
277:06 blog post about dogs has been sent to Dexter Morgan. A dinner event for
277:09 tonight at 6 p.m. with Michael Scott has been created. And if you need anything
277:12 else, let me know." And just to verify that that actually went through, you can
277:14 see we have a new event for dinner at 6 p.m. with Michael Scott. And then in our
277:18 email and our Sent folder, we can see that we have a full blog post sent to Dexter
277:22 Morgan. And you can see that we also have a link right here that we can click
277:24 into, which means that the content creator agent was able to do some
277:28 research, find this URL, create the blog post, and send that back to the
277:31 orchestrator agent. And then the orchestrator agent remembered, okay, so
277:34 I need to send a blog post to Dexter Morgan. I've got his email from the
277:37 contact agent. I have the blog post from the content creator agent. Now all I
277:40 have to do is pass it over to the email agent to take care of the rest. So yes,
277:44 it's important to think about the tools because if this main agent had access to
277:47 all those tools, it would be pretty overwhelming. But also think about the
277:50 prompts. So, in this ultimate assistant prompt, it's pretty short, right? All I
277:54 had to say was, "You're the ultimate assistant. Your job is to send the
277:57 user's query to the correct tool. You should never be writing emails or ever
278:00 creating summaries or doing anything. You just need to delegate the task." And
278:04 then what we did is we said, "Okay, you have these six tools. Here's what
278:06 they're called. Here's when you use them." And it's just super super clear
278:10 and concise. There's almost no room for ambiguity. We gave it a few rules, an
278:14 example output, and basically that's it. And now it's able to interpret any query
278:17 we might have, even if it's a loaded query. As you can see, in this case, it
278:20 had to call all four agents, but it still got it right. And then when it
278:24 sends over something to like the email agent, for example, we're able to give
278:27 this specific agent a very, very specific system prompt because we only
278:31 have to tell it that it only has access to these email tools. And this is
278:35 just going back to the whole thing about specialization. It's not confusing. It
278:39 knows exactly what it needs to do. Same thing with these other agents. You know,
278:41 the calendar agent, of course, has its own prompts with its own set of calendar
278:45 tools. The contact agent has its own prompt with its own set of contact
278:48 tools. And then of course we have the content creator agent which has to know
278:52 how to not only do research using its Tavily tool but it also has to format the
278:57 blog post with you know proper HTML. As you can see here there was like a title
279:00 there were headings there were you know inline links all that kind of stuff. And
279:04 so because we have all of this specialization, can you imagine if we had
279:08 all of that system prompt thrown into this one agent and gave it access to all
279:12 the tools? It just wouldn't be good. And if you're still not convinced, think about
279:15 the fact that for each of these different tasks, because we know what
279:18 each agent is doing, we're able to give it a very specific chat model because,
279:21 you know, like for something like content creation, I like to use Claude
279:24 3.7, but I wouldn't want to use something as expensive as Claude 3.7 just
279:28 to get contacts or to add contacts to my contact database. So that's why I went
279:32 with Flash here. And then for these ones, I'm using GPT-4.1 mini. So you're able
279:36 to have a lot more control over exactly how you want your agents to run. And so
279:39 I pretty much think I hit on a lot of that, but you know, benefits of
279:43 multi-agent system, more reusable components. So now that we have built
279:46 out, you know, an email agent, whenever I'm building another agent ever, and I
279:49 realize, okay, maybe it would be nice for this agent to have a couple email
279:53 functions. Boom, I just give it access to the email agent because we've already
279:56 built it and this email agent can be called on by as many different workflows
279:59 as we want. And when we're talking about reusable components, that doesn't have
280:04 to just mean these agents are reusable. It could also be workflows that are
280:07 reusable. So, for example, if I go to this AI marketing team video, if you
280:09 haven't watched it, I'll leave a link right up here. These tools down here,
280:14 none of these are agents. They're all just workflows. So, for example, if I
280:17 click into the video workflow, you can see that it's sending data over to this
280:20 workflow. And even though it's not an agent, it still is going to do
280:24 everything it needs to do and then send data back to that main agent. Similarly,
280:28 with this create image tool, if I was to click into it real quick, you can see
280:31 that this is not an agent, but what it's going to do is it's going to take
280:34 information from that orchestrator agent and do a very specific function. That
280:38 way, this main agent up here, all it has to do is understand, I have these
280:41 different tools, which one do I need to use. So, reusable components and also
280:45 we're going to have model flexibility, different models for different agents.
280:48 We're going to have easier debugging and maintenance because like I said with the
280:51 whole prompting thing, if you tried to give that main agent access to 25 tools
280:56 and in the prompt you have to say here's when you use all 25 tools and it wasn't
280:59 working, you wouldn't know where to start. You would feel really overwhelmed
281:02 as to like how do I even fix this prompt. So by splitting things up into
281:07 small tasks and specialized areas, it's going to make it so much easier.
281:10 Exactly like I just covered, point number four is clear prompt logic and better
281:14 testability. And finally, it's a foundation for multi-turn agents or
281:17 agent memory. Just because we're sending data from main agent to sub agent
281:20 doesn't mean we're losing that context of like we're talking to Nate right now
281:24 or we're talking to Dave right now. We can still have that memory pass between
281:28 workflows. So things get really really powerful and it's just pretty cool.
281:32 Okay, so we've seen a demo. I think you guys understand the benefits here. Just
281:35 one thing I wanted to throw out before we get into like a live build of a
281:40 multi-agent system is just because this is cool and there's benefits doesn't
281:44 mean it's always the right thing to do. So if you're forcing a multi-agent
281:47 orchestrator framework into a process that could be a simple single agent or a
281:52 simple AI workflow, all you're going to be doing is you're going to be
281:54 increasing the latency. You're going to be increasing the cost because you're
281:58 making more API calls and you're probably going to be increasing the
282:02 amount of error just because kind of the golden rule is you want to eliminate as
282:06 much data transfer between workflows as you can because that's where you can run
282:10 into like some issues. But of course there are times when you do need
282:13 dedicated agents for certain functions. So, let's get into a new workflow and
282:17 build a really simple example of an orchestrator agent that's able to call
282:21 on a sub agent. All right. So, what we're going to be doing here is we're
282:24 going to build an orchestrator agent. So, I'm going to hit tab. I'm going to
282:27 type in AI agent and we're going to pull this guy in. And we're just going to be
282:29 talking to this guy using that little chat window down here for now. So, first
282:33 thing we need to do as always is connect a brain. I'm going to go ahead and grab
282:36 an OpenRouter node. And we're just going to throw in a GPT-4.1 mini. And I'll just
282:40 change this name real quick so we can see what we're using. And from here,
282:44 we're basically just going to connect to a subworkflow. And then we'll go build
282:48 out that actual subworkflow agent. So the way we do it is we click on this
282:52 plus under tool. And what we want to do is Call n8n Workflow Tool because you
282:56 can see it says it uses another n8n workflow as a tool, allowing you to package any
283:00 n8n node as a tool. So it's super cool. That's how we can send data to
283:03 these like custom things that we built out. As you saw earlier when I showed
283:06 that little example of the marketing team agent, that's how we can do it. So
283:10 I'm going to click on this. And basically when you click on this,
283:13 there's a few things to configure. The first one is a description of when do
283:17 you use this tool. You'll kind of tell the agent that here and you'll also be
283:20 able to tell a little bit in a system prompt, but you have to tell it when to
283:23 use this tool. And then the next thing is actually linking the tool. So you can
283:27 see we can choose from a list of our different workflows in NAN. You can see
283:30 I have a ton of different workflows here, but all you have to do is you have
283:32 to choose the one that you want this orchestrator agent to send data to. And
283:36 one thing I want to call attention to here is this text box which says the
283:40 tool will call the workflow you defined below and it will look in the last node
283:44 for the response. The workflow needs to start with an execute workflow trigger.
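In other words, the tool is configured with roughly three pieces of information. Here's a hedged sketch (the field names are illustrative, and the description shown is the one we'll type in a moment):

```typescript
// Illustrative summary of what the Call n8n Workflow Tool needs: a name,
// a description telling the orchestrator when to use it, and the target
// workflow, which must start with an Execute Workflow trigger.
const emailAgentTool = {
  name: "email_agent",
  description: "Call this tool to take any email actions.",
  workflow: "<the sub-agent workflow you pick from the list>",
};
```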
283:48 So what does this mean? Let's just go build another workflow and we will see
283:51 exactly what it means. So I'm going to open up a new workflow which is going to
283:54 be our sub agent. So, I'm going to hit tab to open up the nodes. And it's
283:57 obviously prompting us to choose a trigger. And we're going to choose this
284:00 one down here that says when executed by another workflow, runs the flow when
284:04 called by the execute workflow node from a different workflow. So, basically, the
284:07 only thing that can access this node and send data to this node is one of these
284:12 bad boys right here. So, these two things are basically just connected and
284:15 data is going to be sending between them. And what's interesting about this
284:18 node is you can have a couple ways that you accept data. So, by default, I
284:21 usually just put it on accept all data. And this will put things into a field
284:25 right here called query. But if you wanted to, you could also have it send
284:29 over specific fields. So, if you wanted to only get like, you know, a phone
284:32 number and you wanted to get a name and you wanted to get an email and you
284:36 wanted those all to already be in three separate fields, that's how you could do
284:39 that. And a practical example of that would be in my marketing team right here
284:43 in the create image. You can see that I'm sending over an image title, an
284:47 image prompt, and a chat ID. And that's another good example of being able to
284:51 send, you know, like memory over because I have a chat ID coming from here, which
284:55 is memory to the agent right here. But then I can also send that chat ID to the
284:59 next workflow if we need memory to be accessed down here as well. So in this
285:02 case, just to start off, we're not going to be sending over specified fields.
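To picture the difference between the two modes, here's a rough sketch of the payload shapes. The accept-all field name (query) is what we'll see in a moment; the specified-field names are from the marketing-team example:

```typescript
// With "accept all data", the sub-workflow receives the orchestrator's
// message in a single field:
type AcceptAllPayload = { query: string };

// With specified fields, you define the schema yourself, e.g. the create
// image tool from the marketing-team example:
type CreateImagePayload = {
  imageTitle: string;
  imagePrompt: string;
  chatId: string; // lets session/memory context travel into the sub-workflow
};
```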
285:06 We're just going to do accept all data and let us connect an AI agent to this
285:10 guy. So I'm going to type in AI agent. We'll pull this in. The first thing we
285:14 need to do is we need to change this because we're not going to be talking
285:17 through the connected chat trigger node as we know because we have this trigger
285:21 right here. So what we're going to do is save this workflow. So now it should
285:24 actually register in n8n that we have this workflow. I'm going to go back in
285:27 here and we're just going to connect it. So we know that it's called sub
285:32 agent. So grab that right there. And now you can see it says the sub workflow is
285:35 set up to receive all input data. Without specific inputs, the agent will
285:39 not be able to pass data to this tool. You can define the specific inputs in
285:42 the trigger. So that's exactly what I just showed you guys with changing that
285:45 right there. So what I want to do is show how data gets here so we can
285:49 actually map it so the agent can read it. So what we need to do before we can
285:52 actually test it out is we need to make sure that this orchestrator agent
285:56 understands what this tool will do and when to use it. So let's just say that
285:59 this one's going to be an email agent. First thing I'm going to do is just
286:03 intuitively name this thing email agent. I'm then going to type in the
286:07 description call this tool to take any email actions. So now it should
286:12 basically, you know, signal to this guy whenever I see any sort of query come in
286:16 that has to do with email. I'm just going to pass that query right off to
286:19 this tool. So as you can see, I'm not even going to add a system message to
286:22 this AI agent yet. We're just going to see if we can understand. And I'm going
286:26 to come in here and say, "Please send an email to Nate asking him how he's
286:30 doing." So, we fire that off and hopefully it's going to call this tool and then we'll
286:35 be able to go in there and see the query that we got. The reason that this
286:38 errored is because we haven't mapped anything. So, what I'm going to do is
286:41 click on the tool. I'm going to click on view subexecution. So, we can pop open
286:45 like the exact error that just happened. And we can see exactly what happened is
286:49 that this came through in a field called query. But the main agent is not looking
286:53 for a field called query. It's looking for a field called chatInput. So I'm
286:57 just going to click on Debug in editor so we can actually pull this in. Now all
287:00 I have to do is come in here, change this to "define below," and then just drag
287:04 in the actual query. And now we know that this sub agent is always going to
287:09 receive the orchestrator agents message. But what you'll notice here is that the
287:12 orchestrator agent sent over a message that says, "Hi Nate, just wanted to
287:15 check in and see how you're doing. Hope all is well." So there's a mistake here
287:19 because this main agent ended up basically like creating an email and
287:23 sending it over. All we wanted to do is basically just pass the message along.
287:27 So what I would do here is come into the system prompt and I'm just going to
287:31 say overview. You are an orchestrator agent. Your only job is to delegate the task to
287:40 the correct tool. No need to write emails or create summaries. There we go. So just with a
287:45 very simple line, that's all we're going to do. And before we shoot that off, I'm
287:48 just going to go back into the sub workflow and we have to give this thing
287:52 an actual brain. so that it can process messages. We're just going to go with a
287:57 GPT-4.1 mini once again. Save that. So, it actually reflects on this main agent.
288:00 And now, let's try to send off this exact same query. And we'll see what it
288:04 does this time. So, it's calling the email agent tool. It shouldn't error
288:09 because we fixed it. But, as you can see now, it just called that tool
288:12 twice. So, we have to understand why it just called the sub agent twice. First
288:16 thing I'm going to do is click into the main agent and I'm going to click on
288:19 logs. And we can see exactly what it did. So when it called the email agent
288:23 once again it sent over a subject which is checking in and an actual email body.
288:26 So we have to fix the prompting there right? But then the output which is
288:30 basically what that sub workflow sent back said here's a slightly polished
288:33 version of your message for a warm and clear tone blah blah blah. And then for
288:36 some reason it went and called that email agent again. So now it says please
288:40 send the following email and it sends it over again. And then the sub agent says
288:45 I can't send emails directly but here's the email you can send. So, they're both
288:48 in this weird loop of thinking they are creating an email, but not actually
288:51 being able to send them. So, let's take a look and see how we can fix that. All
288:55 right, so back in the sub workflow, what we want to do now is actually let this
288:58 agent have the ability to send emails. Otherwise, they're just going to keep
289:01 doing that endless loop. So, I'm going to add a tool and type in Gmail. We're
289:05 going to change this to a send message operation. I'm just going to rename this
289:10 send email. And we're just going to have the To be defined by the model. We're
289:14 going to have the subject be defined by the model. And we're going to have the
289:17 message be defined by the model. And all this means is that ideally, you know,
289:21 this query is going to say, hey, send an email to nate@example.com asking what's up.
289:26 The agent would then interpret that and it would fill out, okay, who is it going
289:29 to? What's the subject and what's the message? It would basically just create it all itself using AI.
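Conceptually, "defined by the model" fills each parameter with an expression the agent's model resolves at run time (n8n calls this $fromAI, which comes up again later in the routing example). A hedged sketch of what those three fields amount to:

```typescript
// Illustrative only: each "defined by the model" parameter behaves like a
// slot the model fills when the tool is called. The exact expression
// syntax may differ across n8n versions.
const sendEmailToolParams = {
  to:      "={{ $fromAI('to') }}",
  subject: "={{ $fromAI('subject') }}",
  message: "={{ $fromAI('message') }}",
};
```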
289:33 And the last thing I'm going to do is just turn off
289:36 the n8n attribution right there. And now let's give it another shot. And keep
289:39 in mind, there's no system prompt in this actual agent. And I actually want
289:43 to show you guys a cool tip. So when you're building these multi-agent
289:47 systems and you're doing things like sending data between flows, if you don't
289:51 want to always go back to the main agent to test out like how this one's working,
289:55 what you can do is come into here and we can just edit this query and just like
289:59 set some mock data as if the main agent was sending over some stuff. So I'm
290:02 going to pretend the orchestrator agent sent over to the sub agent: send an email to
290:12 nate@example.com asking what's up, and we'll just get rid of that stray character. And then
290:15 now you can see that's the query. That's exactly what this agent's going to be
290:18 looking at right here.
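So the mock item pinned on the trigger is just the same shape the orchestrator would send. Something like:

```typescript
// Hypothetical mock input set on the Execute Workflow trigger so the
// sub-agent can be tested without round-tripping through the orchestrator.
const mockTriggerItem = {
  query: "send an email to nate@example.com asking what's up",
};
```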
290:22 And if we hit play above this AI agent, we'll see that hopefully it's going to call that send email tool, and we'll see what it did. So
290:25 it just finished up. We'll click into the tool to see what it did. And as you
290:29 can see, it sent it to nate@example.com. It made the subject "checking in" and then
290:32 the message was, "Hey Nate, just wanted to check in and see what's up. Best, your
290:36 name." So, my thought process right now is like, let's get everything working
290:39 the way we want it with this agent before we go back to that orchestrator
290:43 agent and fix the prompting there. So, one thing I don't like is that it's
290:46 signing off with best your name. So, we have a few options here. We could do
290:50 that in the system prompt, but it's the same thing with specialization. If
290:54 this tool is specialized in sending emails, we might as well instruct it how
290:58 to send emails in this tool. So for the message, I'm going to add a description
291:01 and I'm going to say always sign off emails as Bob. And that really should do it. So
291:08 because we have this mock data right here, I don't have to go and, you know,
291:12 send another message. I can just test it out again and see what it's going to do.
291:15 So it's going to call the send email tool. It's going to make that message.
291:18 And now we will go ahead and look and see if it's signed off in a better way.
291:21 Right here, we can see now it's signing off best, Bob. So, let's just say right
291:25 now we're happy with the way that our sub agent's working. We can go ahead and
291:29 come back into the main agent and test it out again. All right. So, I'm just
291:32 going to shoot off that same message again that says, "Send an email to Nate
291:35 asking him how he's doing." And this will be interesting. We'll see what it
291:38 sends over. It was one run and it says, "Could you please provide Nate's email
291:42 address so I can send the message?" So, what happened here was the subexecution
291:46 realized we don't have Nate's email address. And that's why it basically
291:49 responded back to this main agent and said, "I need that if I need to send the
291:53 message." So if I click on subexecution, we will see exactly what it did and why
291:57 it did that and it probably didn't even call that send email tool. Yeah. So it
292:01 actually failed, and it failed because it tried to fill out the To as Nate and it
292:05 realized that's not like a valid email address. So then because this sub agent
292:09 responds with could you please provide Nate's email address so I can send the
292:13 message. That's exactly what the main agent saw right here in the response
292:16 from this agent tool. So that's how they're able to talk to each other, go
292:19 back and forth, and then you can see that the orchestrator agent prompted us
292:22 to actually provide Nate's email address. So now we're going to try,
292:25 please send an email to nate@example.com asking him how the project is coming
292:29 along. We'll shoot that off and everything should go through this time
292:32 and it should basically say, oh, which project are you referring to? This will
292:35 help me provide you with the most accurate and relevant update. So once
292:39 again, the sub agent is like, okay, I don't have enough information to send
292:42 off that message, so I'm going to respond back to that orchestrator agent.
292:46 And just because we actually need one to get through, let me shoot off one more
292:49 example. Okay, hopefully this one's specific enough. We have an email
292:52 address. We have a specified name of a project. And we should see that
292:55 hopefully it's going to send this email this time. Okay, there we go. The email
292:58 asking Nate how Project Pan is coming along. It's been sent. Anything else you
293:02 need? So, at this point, it would be okay. Which other agents could I add to
293:06 the system to make it a bit easier on myself? The first thing naturally to do
293:09 would be I need to add some sort of contact agent. Or maybe I realize that I
293:14 don't need a full agent for that. Maybe that needs to just be one tool. So
293:17 basically what I would do then is I'd add a tool right here. I would grab an
293:20 Airtable node because that's where my contact information lives. And all I
293:24 want to do is go to contacts and choose contacts. And now I just need to change
293:30 this to search. So now this tool's only job is to return all of the contacts in
293:34 my contact database. I'm just going to come in here and call this contacts. And
293:38 now keep in mind once again there's still nothing in the system prompt about
293:41 here are the tools you have and here's what you do. I just want to show you
293:44 guys how intelligent these models can be before you even prompt them. And then
293:47 once you get in there and say, "Okay, now you have access to these seven
293:50 agents. Here's what each of them are good at, it gets even cooler." So, let's
293:54 try one more thing and see if it can use the combination of contact database and
293:58 email agent. Okay, so I'm going to fire this off. Send an email to Dexter Morgan
294:02 asking him if he wants to get lunch. You can see that right away it used the
294:05 contacts database, pulled back Dexter Morgan's email address, and now we can
294:08 see that it sent that email address over to the email agent, and now we have all
294:12 of these different data transfers talking to each other, and hopefully it
294:15 sent the email. All right, so here's that email. Hi, Dexter. Would you like
294:18 to get lunch sometime soon? Best Bob. The formatting is a little off. We can
294:21 fix that within the tool for the email agent. But let's see if we sent
294:24 that to the right email, which is dexter@miami.com. If we go into our
294:28 contacts database, we can see right here we have Dexter Morgan, dexter@miami.com.
294:32 And like I showed you guys earlier, what you want to do is get pretty good at
294:35 reading these agent logs. So you can see how your agents are thinking and what
294:38 data they're sending between workflows. And if we go to the logs here, we can
294:42 see first of all, it used its GPT-4.1 mini model brain to understand what to
294:46 do. It understood, okay, I need to go to the contacts table. So I got my contact
294:50 information. Then I need to call the email agent. And what I sent over to the
294:55 email agent was: send an email to dexter@miami.com asking him if he wants
294:59 to get lunch. And that was perfect. All right, so that's going to do
295:02 it for this one. Hopefully this opened your eyes to the possibilities of these
295:05 multi-agent systems in n8n and also hopefully it taught you some stuff
295:08 because I know all of this stuff is like really buzzwordy sometimes with all
295:12 these agents, agents, agents, but there are use cases where it really is the best
295:15 path but it's all about like understanding what is the end goal and
295:18 how do I want to evolve this workflow and then deciding like what's the best
295:23 architecture or system to use. So that was one type of architecture for a
295:26 multi-agent system called the orchestrator architecture. But that's
295:30 not the only way to have multiple agents within a workflow or within a system. So
295:33 in this next section, I'm going to break down a few other architectures that you
295:37 can use so that you can understand what's possible and which one fits your
295:42 use case best. So let's dive right in. So in my ultimate assistant video, we
295:45 utilize an agentic framework called parent agent. So as you can see, we have
295:48 a parent agent right here, which is the ultimate assistant that's able to send
295:52 tasks to its four child agents down here, which are different workflows that
295:56 we built out within n8n. If you haven't seen that video, I'll tag it right up
295:58 here. But how it works is that the ultimate assistant could get a query
296:01 from the human and decide that it needs to send that query to the email agent,
296:04 which looks like this. And then the email agent will be able to use its
296:08 tools in Gmail and actually take action. From there, it responds to the parent
296:11 agent and then the parent agent is able to take that response back from this
296:14 child agent and then respond to us in Telegram. So, it's a super cool system.
296:17 It allows us to delegate tasks and also these agents can be activated in any
296:20 specific order. It doesn't always have to be the same. But is this framework
296:24 always the most effective? No. So today I'm going to be going over four
296:26 different agentic frameworks that you can use in your n8n workflows. The first
296:29 one we're going to be talking about is prompt chaining. The second one is
296:32 routing. The third one is parallelization. And the fourth one is
296:36 evaluator optimizer. So we're going to break down how they all work, what
296:38 they're good at. But make sure you stick around to the end because this one, the
296:41 evaluator optimizer, is the one that's got me most excited. So before we get
296:45 into this first framework, if you want to download these four templates for
296:48 free so you can follow along, you can do so by joining my free school community.
296:51 You'll come in here, click on YouTube resources, click on the post associated
296:54 with this video, and then you'll have the workflow right here to download. So,
296:57 the link for that's down in the description. There's also going to be a
297:00 link for my paid community, which is if you're looking for a more hands-on
297:03 approach to learning n8n. We've got a great community of members who are also
297:06 dedicated to learning n8n, sharing resources, sharing challenges, sharing
297:09 projects, stuff like that. We've got a classroom section where we're going over
297:11 different deep dive topics like building agents, vector databases, APIs, and HTTP
297:15 requests. And I also just launched a new course where I'm doing the step-by-step
297:18 tutorials of all the videos that I've shown on YouTube. And finally, we've got
297:21 five live calls per week to make sure that you're getting questions answered,
297:24 never getting stuck, and also networking with individuals in the space. We've
297:27 also got guest speakers coming in in February, which is super exciting. So,
297:30 I'd love to see you guys in these calls. Anyways, back to the video here. The
297:33 first one we're going to be talking about is prompt chaining. So, as you can
297:36 see, the way this works, we have three agents here, and what we're doing is
297:39 we're passing the output of an agent directly as the input into the next
297:44 agent, and so on so forth. So, here are the main benefits of this type of
297:47 workflow. It's going to lead to improved accuracy and quality because each step
297:51 focuses on a specific task which will help reduce errors and hallucinations.
297:54 Greater control over each step. We can refine the system prompt of the outline
297:57 writer and then we can refine the prompt of the evaluator. So we can really tweak
298:01 what's going on and how data is being transferred. Specialization is going to
298:05 lead to more effective agents. So as you can see in this example, we're having
298:09 one agent write the outline. One of them evaluates the outline and makes
298:12 suggestions. And then finally, we pass that revised outline to the blog writer
298:15 who's in charge of actually writing the blog. So this is going to lead to a much
298:20 more cohesive, thought through actual blog in the end compared to if we would
298:24 just fed in all of this system prompt into one agent. And then finally, with
298:27 this type of framework, we've got easier debugging and optimization because it's
298:30 linear. We can see where things are going wrong. Finally, it's going to be
298:32 more scalable and reusable as we're able to plug in different agents wherever we
298:35 need them.
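Before running it, here's the whole chain in miniature. A hedged sketch, with the system messages paraphrased from the ones we look at below and a made-up llm helper standing in for the chat-model nodes:

```typescript
// Prompt chaining in miniature: each agent's output is the next one's input.
declare function llm(systemMessage: string, input: string): Promise<string>;

async function writeBlog(topic: string) {
  const outline = await llm("You are an expert outline writer...", topic);
  const revised = await llm("You are an expert blog evaluator. Revise this outline...", outline);
  return llm("You are an expert blog writer. Write a detailed post from this outline...", revised);
}
```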
298:40 Okay, so what we have to do here is we're just going to enter in a keyword, a topic for this blog. So, I'm just going to enter in coffee, and
298:43 we'll see that the agents start going to work. So, the first one is an outline
298:46 writer. Um, one thing that's also really cool about this framework and some of
298:49 the other ones we're going to cover is that because we're splitting up the
298:52 different tasks, we're able to utilize different large language models. So, as
298:55 you can see, the outline writer, we gave 2.0 Flash because it's free. It's
298:59 powerful, but not super powerful, and we just need a brief
299:02 outline to be written here. And then we can pass this on to the next one that
299:04 uses 4o mini. It's a little more powerful, a little more expensive, but
299:08 still not too bad, and we want this more powerful chat model to be doing the
299:12 evaluating and refining of the outline. And then finally, for the actual blog
299:15 writing content, we want to use something like Claude 3.5 or even DeepSeek
299:18 R1 because it's going to be more powerful and it's going to take that
299:21 revised outline and then structure a really nice blog post for us. So that's
299:25 just part of the specialization. Not only can we split up the tasks, but we
299:29 can plug and play different chat models where we need to rather than feeding
299:33 everything through, you know, one DeepSeek R1 blog writer at the very
299:36 beginning. So, this one's finishing up here. It's about to get pushed into a
299:39 Google doc where we'll be able to go over there and take a look at the blog
299:43 that it got for us about coffee. So, looks like it just finished up. Here we
299:47 go. Detailed blog post based on option one, a comprehensive guide to coffee.
299:51 Here's our title. Um, we have a rich history of coffee from bean to cup. We
299:55 have um different methods. We have different coffee varieties. We have all
299:59 this kind of stuff, health benefits and risks. Um, and as you can see, this
300:03 pretty much was a four-page blog. We've got a conclusion at the end. Anyways,
300:06 let's dive into what's going on here. So, the concept is passing the output
300:10 into the input and then taking that output and passing it into the next
300:14 input. So, here we have here's the topic to write a blog about which all it got
300:17 here was the word coffee. That's what we typed in. The system message is that you
300:20 are an expert outline writer. Your job is to generate a structured outline for
300:24 a blog post with section titles and key points. So, here's the first draft at
300:28 the outline using 20 flash. Then, we pass that into an outline evaluator
300:32 that's using for Mini. We said here's the outline. We gave it the outline of
300:36 course and then the system message is you're an expert blog evaluator. Your
300:39 job is to revise this outline and make sure it hits these four criteria which
300:43 are engaging introduction, clear section breakdown, logical flow, and then a
300:46 conclusion. So we told it to only output the revised outline. So now we have a
300:50 new outline over here. And finally, we're sending that into a Claude 3.5
300:54 blog writer where we gave it the revised outline and just said, "You're an expert
300:58 blog writer. Generate a detailed blog post using this outline with
301:01 well-structured paragraphs and engaging content." So that's how this works. You
301:04 can see it will be even more powerful once we hook up, you know, like some
301:07 internet search functionality and if we added like an editor at the end before
301:10 it actually pushed it into the Google doc or whatever it is. But that's
301:14 how this framework works. Let's move into agentic framework number two. Now we're
301:17 going to talk about the routing framework. In this case, we have an
301:21 initial LLM call right here to classify incoming emails. And based on that
301:24 classification, it's going to route it up as high priority, customer support,
301:28 promotion, or finance and billing. And as you can see, there's different
301:31 actions that are going to take place. We have different agents depending on what
301:34 type of message comes through. So the first agent, which is the text
301:37 classifier here, basically just has to decide, okay, which agent do I need to
301:41 send this email off to? Anyways, why would you want to use routing? Because
301:43 you're going to have an optimized response handling. So as we can see in
301:46 this case, we're able to set up different personas for each of our
301:49 agents here rather than having one general AI response agent. Then this can
301:54 be more scalable and modular. It's going to be faster and more efficient. And
301:56 then you can also introduce human escalation for critical issues like we
302:00 do up here with our high priority agent. And finally, it's just going to be a
302:03 better user experience for your team and also your customers.
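The whole pattern reduces to one classifier call up front and a hand-off. A rough sketch with invented helper names:

```typescript
// Routing in miniature: classify once, then hand the email to the
// specialized agent for that category. All names are illustrative.
type Category = "high_priority" | "customer_support" | "promotion" | "finance_billing";

declare function classify(email: string): Promise<Category>; // the text classifier node
declare const agents: Record<Category, (email: string) => Promise<void>>;

async function routeEmail(email: string) {
  const category = await classify(email);
  await agents[category](email); // each agent has its own persona and tools
}
```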
302:07 So I hit test step. What we're getting here is an email that I just sent to myself that
302:10 says, "Hey, I need help logging into my account. Can you help me?" So this email
302:13 classifier is going to label this as customer support. As soon as we hit
302:16 play, it's going to send it down the customer support branch right here. As
302:19 you can see, we got one new item. What's going on in this step is that we're just
302:22 labeling it in our Gmail as a customer support email. And then finally, we're
302:25 going to fire it off to the customer support agent. In this case, this one is
302:29 trained on customer support activities. Um, this is where you could hook up a
302:32 customer support database if you needed. And then what it's going to do is it's
302:35 going to create an email draft for us in reply to the email that we got. So,
302:38 let's go take a look at that. So, here's the email we got. Hey, I need help
302:41 logging into my account. As you can see, our agent was able to label it as
302:44 customer support. And then finally, it created this email, which was, "Hey,
302:46 Nate, thanks for reaching out. I'd be happy to assist you with logging into
302:49 your account. Please provide me with some more details um about the issue
302:52 you're experiencing, blah blah blah." And then this one signs off, "Best
302:55 regards, Kelly, because she's the customer support rep." Okay, let's take
302:58 a look at a different example. Um, we'll pull in the trigger again and this time
303:01 we're going to be getting a different email. So, as you can see, this one
303:03 says, "Nate, this is urgent. We need your outline tomorrow or you're fired."
303:06 So, hopefully this one gets labeled as high priority. It's going to go up here
303:10 to the high priority branch. Once again, we're going to label that email as high
303:13 priority. But instead of activating an email draft reply tool, this one has
303:17 access to a Telegram tool. So, what it's going to do is text us immediately and
303:20 say, "Hey, this is the email you got. You need to take care of this right
303:23 away." Um, and obviously the logic you can choose of what you want to happen
303:26 based on what route it is, but let's see. We just got telegram message,
303:29 urgent email from Nate Herkelman stating that an outline is needed by tomorrow or
303:33 there will be serious consequences, potential termination. So that way it
303:36 notifies us right away. We're able to get into our email manually, you know,
303:39 get caught up on the thread and then respond how we need to. And so pretty
303:42 much the same thing for the other two. Promotional email will get labeled as
303:45 promotion. We come in here and see that we are able to set a different persona
303:49 for the promotion agent, which is you're in charge of promotional
303:52 opportunities. Your job is to respond to inquiries in a friendly, professional
303:55 manner and use this email to send reply to customer. Always sign off as Meredith
303:59 from ABC Corp. So, each agent has a different sort of persona that it's able
304:03 to respond to. In the finance agent, we have this agent signing off as
304:08 Angela from ABC Corp. Anyways, what I did here was I hooked them all up to
304:11 the same chat model and I hooked them all up to the same tool because they're
304:16 all going to be sending an email draft here. As you can see, we're using
304:19 $fromAI to determine the subject, the message, and the thread ID, which it's
304:23 going to pull in from the actual Gmail trigger. Or sorry, the thread ID is
304:26 not using $fromAI; we're mapping in the Gmail trigger because every time
304:30 an email comes through, it can just look at that email in order to determine
304:36 the thread ID for sending out an email. But you don't have to connect them up to
304:38 the same tool. I just did it this way because then I only had to create one
304:41 tool. Same thing with the different chat models based on the, you know,
304:44 importance of what's going through each route. You could switch out the chat
304:47 models. We could have even used a cheaper, easier one for the
304:50 classification if we wanted to, but in this case, I just hooked them all up to
304:54 a 4o mini chat model. Anyways, this was just a really simple example of routing.
304:57 You could have 10 different routes, you could have just two different routes,
305:00 but the idea is that you're using one agent up front to determine which way to
305:04 send off the data. Moving on to the third framework, we've got
305:06 parallelization. What we're going to do here is be using three different agents,
305:09 and then we're going to merge their outputs, aggregate them together, and
305:12 then feed them all into a final agent to sort of, you know, throw it all into one
305:16 response. So what this is going to do is give us faster analysis rather than
305:20 processing everything linearly. So in this case we're going to be sending in
305:22 some input and then we're going to have one agent analyze the emotion behind it,
305:26 one agent do the intent behind it, and then one agent analyze any bias rather
305:30 than doing it one by one. They're all going to be working simultaneously and
305:33 then throwing their outputs together. So it can decrease the latency there.
305:36 They're going to be specialized, which means we could have specialized system
305:39 prompts like we do here. We also could do specialized um large language models
305:42 again, where we could plug in different models if we wanted to, maybe
305:46 feed through the same prompt and use Claude up here, OpenAI down here, and then, you
305:49 know, DeepSeek down here, and then combine them together to make sure we're getting
305:53 the best thought-out answer. Plus a comprehensive review, and then more
305:56 scalability as well.
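Since the three analyses are independent, they can literally run concurrently. A minimal sketch, again with a stand-in llm helper:

```typescript
// Parallelization in miniature: three specialized analyses run at once,
// then one final agent combines them into a report.
declare function llm(systemMessage: string, input: string): Promise<string>;

async function analyzeText(text: string) {
  const [emotion, intent, bias] = await Promise.all([
    llm("Analyze the emotional tone of this text...", text),
    llm("Analyze the intent behind this text...", text),
    llm("Analyze this text for any potential bias...", text),
  ]);
  return llm(
    "Combine these analyses into a single report.",
    JSON.stringify({ emotion, intent, bias })
  );
}
```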
305:59 But how this one's going to work is we're putting in an initial message, which is: I don't trust the mainstream media anymore. They
306:02 always push a specific agenda and ignore real issues. People need to wake up and
306:05 stop believing everything they see on the news. So, we're having an emotion
306:08 agent, first of all, analyze the emotional tone and categorize it
306:13 as positive, neutral, negative, or mixed with a brief explanation. The intent
306:17 agent is going to analyze the intent behind this text, and then finally, the
306:20 bias agent is going to analyze this text for any potential bias. So, we'll fire
306:24 this off. We're going to get those three separate
306:28 analyses, and then we're going to be sending them into a final agent that's
306:32 going to basically combine all those outputs and then write a little bit of
306:36 report based on our input. So, as you can see, right now, it's waiting here
306:40 for um the input from the bias agent. Once that happens, it's going to get
306:42 aggregated, and now it's being sent into the final agent, and then we'll take a
306:47 look at um the report that we got in our Google doc. Okay, just finished up.
306:50 Let's hop over to Docs. We'll see we got an emotional tone, intent, and bias
306:55 analysis report. Overview is that um the incoming text has strong negative
306:59 sentiment towards mainstream media. Yep. Emotional tone is negative sentiment.
307:03 Intent is persuasive goal. Um, the bias analysis has political bias,
307:07 generalization, emotional language, lack of evidence. Um, it's got
307:10 recommendations for how we can make this text more neutral, revised message, and
307:13 then let's just read off the conclusion. The analysis highlights a significant
307:17 level of negativity and bias in the original message directed towards
307:20 mainstream media. By implementing the suggested recommendations, the author
307:23 can promote a more balanced and credible perspective that encourages critical
307:26 assessment of media consumption, blah blah blah. So, as you can see, that's
307:30 going to be a much better, you know, comprehensive analysis than if we would
307:34 have just fed the initial input into an agent and said, "Hey, can you analyze
307:37 this text for emotion, intent, and bias?" But now, we got that split up,
307:41 merged together, put into the final one for, you know, a comprehensive review
307:45 and an output. And it's going to make the whole, you know, data-in to data-out
307:49 process a lot more efficient. Finally, the one that gets me
307:53 the most excited, um, the evaluator optimizer framework, where we're going
307:56 to have an evaluator agent decide if what's passing through is good or not.
308:00 If it's good, we're fine, but if it's not, it's going to get optimized and
308:03 then sent back to the evaluator for more evaluation. And this is going to be an
308:06 endless loop until the evaluator agent says, "Okay, finally, it's good enough.
308:10 We'll send it off." So, if you watch my human in the loop video, it's going to
308:13 be just like that where we were providing feedback and we were the ones
308:16 basically deciding if it was good to go or not. But in this case, we have an
308:19 agent that does that. So it's going to be optimizing all your workflows on the
308:23 back end without you being in the loop. So obviously the benefits here are that
308:26 it's going to ensure high-quality outputs. It's going to reduce errors and
308:29 manual review. It's going to be flexible and scalable. And then it's going to
308:32 optimize the AI's performance, because it's an iterative approach that
308:36 focuses on continuous improvement of these AI-generated
308:40 responses. So what we're doing here is we have a biography agent. What we told
308:45 this agent to do is basically write a biography. You're an expert biography
308:48 writer. You'll receive information about a person. Your job is to create an
308:51 entire profile using the information they give you. And I told it you're
308:55 allowed to be creative. From there, we're setting the bio. And we're just
308:58 doing this here so that we can continue to feed this back over and over. That
309:02 way, even if we have five revisions, the agent still gets passed the most
309:06 recent version every time, and the most recent version, once it's approved,
309:09 will get pushed up here to the Google Doc. Then we have the evaluator agent.
309:14 What we told this agent to do is evaluate the biography: your job is to
309:18 provide feedback. We gave it criteria: make sure that it includes a quote
309:21 from the person, make sure it's light and humorous, and make sure it has no
309:25 emojis. It only needs to output the feedback. If the biography is finished
309:29 and all criteria are met, then all it needs to output is "finished." So, then we
309:33 have a check to say: okay, does the output from the evaluator agent say
309:36 "finished," or is it feedback? If it's feedback, it's going to go to the
309:39 optimizer agent and continue on this loop until it says "finished." Once it
309:42 finally says "finished" (as you can see, we set a condition that the output
309:47 from the evaluator agent equals "finished"), it'll go up
309:51 here, and then we'll see it in our Google Doc.
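(To make that branch concrete: the IF node is just comparing the evaluator's output against the literal string "finished", with an n8n expression along the lines of {{ $json.output === "finished" }}. The two shapes the evaluator's output can take would look roughly like the items below; the exact field name depends on how your agent node is configured. The first shape routes up to the Google Doc, and the second loops back through the optimizer.)

```json
[
  { "output": "finished" },
  { "output": "Add a direct quote from Jim and cut the emoji in the intro." }
]
```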
309:54 But then what we have in the actual optimizer agent is we're giving it the
309:58 biography, and this is where we're referencing the set field from earlier, right here, where we set the bio.
310:01 This way the optimizer agent's always getting the most updated version of the
310:05 bio. And then we're also going to get the feedback. So this is going to be the
310:08 output from the evaluator agent, because if the run does go down this path, it
310:12 means that the evaluator agent output feedback rather than saying "finished." So
310:16 it's getting feedback, it's getting the biography, and then we're saying you're
310:19 an expert reviser. Your job is to take the biography and optimize it based on
310:22 the feedback. So it gets all it needs in the user message, and then it outputs us
310:26 a better optimized version of that biography. Okay, so let's do an example
310:30 real quick. If you remember, in the biography agent, all we have to do
310:33 is give it some information about a person to write a biography on.
310:36 So, I'm going to come in here and I'm just going to say: Jim, 43,
310:44 lives by the ocean. Okay, so that's all we're going to put in. We'll see
310:47 that it's writing a brief biography right now. And then we're going to see
310:50 it get evaluated. We're going to see if it met those criteria. If it
310:53 doesn't, it's going to get sent to the optimizer agent. The optimizer agent is
310:58 going to get basically the criteria it needs to hit as well as the original
311:02 biography. So here's the evaluator agent. Look at that. It decides that it
311:05 wasn't good enough. Now it's being sent to the optimizer agent who is going to
311:09 optimize the bio, send it back and then hopefully on the second run it'll go up
311:12 and get published in the docs. If it's not good enough yet, then it will come
311:15 back to the agent and it will optimize it once again. But I think that this
311:18 agent will do a good job. There we go. We can see it just got pushed up into
311:21 the doc. So let's take a look at our Google Doc. Here's a biography for Jim
311:26 Thompson. He lives in California. He's 43. An ocean enthusiast with a passion
311:31 for adventure and a profound respect for nature. It talks about his early life,
311:35 and obviously he's making all this up. Talks about his education, talks about
311:38 his career, talks about his personal life. Here we have a quote from Jim,
311:42 which is, "I swear the fish are just as curious about me as I am about them."
311:45 We've even got another quote. Um, a few dad jokes along the way. Why did the
311:49 fish blush? Because it saw the ocean's bottom. So, not sure I completely get
311:53 that one. Oh, no. I get that one. Anyways, then hobbies, philosophy, legacy,
311:57 and a conclusion. So this is a pretty optimized biography. It meets all
312:00 the criteria that we had put into our agents as far as what it
312:03 needed to evaluate for. It's very light, there's no emojis, it threw some
312:06 jokes in there, and then it has some quotes from Jim as well. So as you can
312:11 see, all we put in was "Jim, 43, lives by the ocean," and we basically got a whole
312:14 story written about this guy. And once again just like all of these frameworks
312:17 pretty much you have the flexibility here to change out your model wherever
312:20 you want. So let's say we don't really mind up front: we could use something
312:24 really cheap and quick, and then maybe for the actual optimizer agent we want
312:27 to plug in something with a little more of a reasoning aspect, like
312:31 DeepSeek R1, potentially. Anyways, that's all I've got for you guys today. Hope
312:34 this one was helpful. Hope this one, you know, sparked some ideas for next time
312:37 you're going into n8n to build an agentic workflow. Maybe looking at it, I
312:40 could actually have structured my workflow in this framework and it would
312:43 have been a little more efficient than the current way I'm doing it. Like I
312:46 said, these four templates will be in the free Skool community if you want to
312:48 download them and just play around with them to understand what's going on,
312:52 understand, you know, when to use each framework, stuff like that. All right,
312:55 so we understand a lot of the components that actually go into building an
312:58 effective agent or an effective agent system, but we haven't really yet spent
313:02 a lot of time on prompting, which is like 80% of an agent. It's so, so
313:05 important. So, in this next section, we're going to talk about my methodology
313:08 when it comes to prompting a Tools Agent, and we're going to do a quick
313:12 little live prompting session near the end. So, if that sounds good to you,
313:15 let's get started. Building AI agents and hooking up different tools is fun
313:18 and all, but the quality and consistency of the performance of your agents
313:21 directly ties back to the quality of the system prompt that you put in there.
313:24 Anyways, today what we're going to be talking about is what actually goes into
313:27 creating an effective prompt so that your agents perform as you want them to.
313:30 I'm going to be going over the most important thing that I've learned while
313:33 building out agents and prompting them that I don't think a lot of people are
313:36 doing. So, let's not waste any time and get straight into this one. All right,
313:39 so I've got a document here. If you want to download this one to follow along or
313:42 just have it for later, you can do so by joining my free Skool community. The
313:44 link for that's down in the description. You'll just click on YouTube resources
313:47 and find the post associated with this video and you'll be able to download the
313:51 PDF right there. Anyways, what we're looking at today is how we can master
313:55 reactive prompting for AI agents in n8n. And the objective of this document here
313:58 is to understand what prompting is, why it matters, develop a structured
314:02 approach to reactive prompting when building out AI agents, and then learn
314:05 about the essential prompt components. So, let's get straight into it and start
314:09 off with just a brief introduction. What is prompting? Make sure you stick around
314:12 for this one because once we get through this doc, we're going to hop into n8n and
314:15 do some live prompting examples. So, within our agents, we're giving them a
314:18 system prompt. And this is basically just coding them on how to act. But
314:22 don't be scared of the word code because we're just using natural language
314:25 instead of something like Python or JavaScript. A good system prompt is
314:28 going to ensure that your agent is behaving in a very clear, very specific,
314:32 and a very repeatable way. So, instead of us programming some sort of Python
314:36 agent, what we're doing is we're just typing in, "You're an email agent. Your
314:39 job is to assist the user by using your tools to take the correct action."
314:42 Exactly as if we were instructing an intern. And why does prompting matter?
314:46 I'm sure by now you guys already have a good reason in your head of why
314:49 prompting matters, and it's pretty intuitive, but let's think about it like
314:53 this as well. Agents are meant to be running autonomously, and they don't
314:56 allow that back-and-forth interaction like ChatGPT. Now, yes, there can be some
315:00 human in the loop within your sort of agentic workflows, but ideally you put
315:05 in an input, it triggers the automation, triggers the agent to do something, and
315:08 then we're getting an output. Unlike ChatGPT, where you ask it to help you
315:10 write an email, and you can say, "Hey, make that shorter," or you can say,
315:14 "Make it more professional." We don't have that um luxury here. We just need
315:17 to trust that it's going to work consistently and high quality. So, our
315:21 goal as prompters is to get the prompts right the first time so that the agent
315:25 functions correctly every single time it's triggered. So, the key rule here is
315:28 to keep the prompts clear, simple, and actionable. You don't want to leave any
315:32 room for misinterpretation. Um, and also, less is more. Sometimes I'll see
315:35 people just throw in a novel, and that's just obviously going to be more
315:38 expensive for you, and also just more room to confuse the agent. So, less is
315:42 more. So, now let's get into the biggest lesson that I've learned while prompting
315:46 AI agents, which is prompting needs to be done reactively. I see way too many
315:51 people doing this proactively, throwing in a huge system message, and then just
315:54 testing things out. This is just not the way to go. So let's dive into what that
315:57 actually means to be prompting reactively. First of all, what is
316:01 proactive prompting? This is just writing a long detailed prompt up front
316:04 after you have all your tools configured and all of the sort of, you know,
316:08 standard operating procedures configured and then you start testing it out. The
316:12 problem here is that you don't know all the possible edge cases and errors in
316:16 advance and debugging is going to be a lot more difficult because if something
316:19 breaks, you don't know which part of the prompt is causing the issue. You may try
316:22 to fix something in there, and then the issue you were originally having is
316:26 fixed, but now you've caused a new issue, and it's just going to get really messy as
316:28 you continue to add more and more, and you end up just confusing both yourself
316:32 and the agent. Now, reactive prompting on the other hand is just starting with
316:35 absolutely nothing and adding a tool, testing it out, and then slowly adding
316:39 sentence by sentence. And as you've seen in some of my demos, we're able to get
316:43 like six tools hooked up, have no prompt in there, and the agent's still working
316:46 pretty well. At that point, we're able to start adding more lines to make the
316:49 system more robust. But the benefits here of reactive prompting are pretty clear. The first
316:54 one is easier debugging. You know exactly what broke the agent. Whether
316:58 that's I added this sentence and then the automation broke. All I have to do
317:01 is take out that sentence or I added this tool and I didn't prompt the tool
317:04 yet. So that's what caused the automation to break. So I'm just going
317:06 to add a sentence in right here about the tool. This is also going to lead to
317:10 more efficient testing because you can see exactly what happens before you hard
317:14 prompt in fixes. And essentially, you know, I'll talk about hard prompting
317:17 more later, but essentially what it is: you're basically seeing an error,
317:22 and then you're hard prompting in the error within the system prompt and
317:25 saying, "Hey, like you just did this. That was wrong. Don't do that again."
317:28 And we can only do that reactively because we don't know how the agent's
317:31 going to react before we test it out. Finally, we have the benefit that it
317:34 prevents over complicated prompts that are hard to modify later. If you have a
317:39 whole novel in there and you're getting errors, you're not going to know where
317:40 to start. You're going to be overwhelmed. So, taking it step by step,
317:44 starting with nothing and adding on things slowly is the way to go. And so,
317:48 if it still isn't clicking yet, let's look at a real world example. Let's say
317:51 you're teaching a kid to ride a bike. If you took a proactive approach, you'd be
317:56 trying to correct the child's behavior before you know what he or she is going
318:00 to do. So, if you're telling the kid to keep your back straight, lean forward,
318:03 you know, don't tilt a certain way, that's going to be confusing because now
318:06 the kid is trying to adjust to all these things you've said, and he or she
318:10 doesn't even know what they were going to do in the
318:13 beginning. But if you're taking a reactive approach (and obviously maybe
318:16 this wasn't the best example, because you don't want your kid to fall), you let
318:20 them ride, you see what they're doing, and, you know, if they're leaning too much to
318:23 the left, you're going to say, "Okay, well, maybe you need to lean a little
318:26 more to the right to center yourself up," and only correct what they
318:30 actually need to have corrected. This is going to be more effective, fewer
318:33 unnecessary instructions, and just more simple and less overwhelming. So, the
318:37 moral of the story here is to start small, observe errors, and fix one
318:41 problem at a time. So, let's take a look at some examples of reactive prompting
318:44 that I've done in my ultimate assistant workflow. As you can see right here, I'm
318:46 sure you guys have seen that video by now. If you haven't, I'll link it right
318:50 up here. But, I did a ton of reactive prompting in here because I have one
318:53 main agent calling four different agents. And then within those sub
318:56 agents, they all have different tools that they need to call. So this was very
319:00 very reactive when I was prompting this workflow or this system of agents. I
319:04 started with no system prompt at all. I just connected a tool and I
319:07 tested it out to see what happened. So an example would be: I hooked up an
319:10 email agent, but I didn't give it any instructions, and I ran the AI to see
319:14 if it would call the tool automatically. A lot of times it will, and it's only
319:17 when you add another, different agent that you need to prompt in: hey,
319:20 these are the two agents you have; here's when to use each one. So anyways,
319:24 adding prompts based on errors. Here I have my system prompts. So if you guys
319:27 want to pause it and read through, you can take a look. But you can see it's
319:30 very very simple. I've got one example. I've got basically one brief rule and
319:34 then I just have all the tools it has and when to use them. And it's very very
319:38 concise and not overwhelming. And so what I want you guys to pay attention to
319:41 real quick is in the overview right here. I said, you know, you're the
319:44 ultimate personal assistant. Your job is to send the user's query to the correct
319:48 tool. That's all I had at first. And then I was getting this error where I
319:51 was saying, "Hey, write an email to Bob." And what was happening is it
319:56 wasn't sending that query to the email tool, which is what it's supposed to do. It itself
319:59 was trying to write an email, even though it has no tool to write an email. So
320:03 then I reactively came in here and said, "You should never be writing emails or
320:06 creating event summaries. You just need to call the correct tool." And that's
320:09 not something I could have proactively put in there because I didn't really
320:12 expect the agent to be doing that. So I saw the error and then I basically
320:16 hardcoded in what it should not be doing and what it should be doing. So another
320:19 cool example of hard coding stuff in is using examples. You know, we all
320:22 understand that examples are going to help the agent understand what it needs
320:26 to do based on certain inputs and how to use different tools. And so right here
320:29 you can see I added this example, but we'll also look at it down here because
320:32 I basically copied it in. What happened was the AI failed in a very specific
320:36 scenario. So I added a concrete example where I gave it an input, I showed the
320:40 actions it should take, and then I gave it the output. So in this case, what
320:43 happened was I asked it to write an email to Bob, and it tried to hit the
320:46 send email agent, but it didn't actually have
320:49 Bob's email address. So the email didn't get sent. So what I did here was I put
320:53 in the input, which was send an email to Bob asking him what time he wants to
320:56 leave. I then showed the two actions it needs to take. The first one was use the
321:00 contact agent to get Bob's email. Send this email address to the email agent
321:03 tool. And then the second action is use the email agent to send the email. And
321:07 then finally, the output that we want the personal assistant to say back to
321:10 the human is, "The email has been sent to Bob. Anything else I can help you
321:13 with?" The idea here is you don't need to put examples in there that are pretty
321:16 intuitive and that the agent's going to get right already. You only want to put
321:19 in examples where you're noticing common themes of the agent failing the same way
321:23 every time; then you may as well hardcode in that example input, output, and tool
321:28 calls. So, step four is to debug one error at a time. Always change one thing
321:32 and one thing only at a time so you know exactly what you changed that broke the
321:36 automation. Too often I'll see people just get rid of an entire section
321:40 and then start running things and now it's like okay well we're back at square
321:43 one because we don't know exactly what happened. So you want to get to the
321:46 point where you're adding one sentence, you're hitting run and it's either
321:49 fixing it or it's not fixing it and then you know exactly what to do. You know
321:52 exactly what broke or fixed your automation. And so one thing honestly I
321:55 want to admit here is I created that system prompt generator on my free
321:59 Skool community. And really, the idea there was just to help you with the
322:02 formatting. I don't really use that thing anymore, because
322:06 doing that is very proactive in the sense that we're dropping a
322:10 query into ChatGPT (the custom GPT I built), it's giving us a system prompt
322:13 and then we're putting that whole thing in the agent and then just running it
322:16 and testing it. And in that case, you don't know exactly what you should
322:19 change to fix little issues. So, just wanted to throw that out there. I don't
322:22 really use that system prompt generator anymore. I now always like handcraft my
322:27 prompts. Anyways, from there, what you want to do is scale up slowly. So once
322:30 you confirm that the agent is consistently working with its first tool
322:34 and its first rule in its prompt, then you can slowly add more tools and more
322:38 prompt rules. So here's an example. You'll add a tool. You'll add a sentence
322:42 in the prompt about the tool. Test out a few scenarios. If it's working well, you
322:45 can then add another tool and keep testing out and slowly adding pieces.
322:48 But if it's not, then obviously you'll just hard prompt in the changes of what
322:52 it's doing wrong and how to fix that. From there, you'll just test out a few
322:55 more scenarios. Um, and then you can just kind of rinse and repeat until you
322:58 have all the functionality that you're looking for. All right, now let's look
323:01 at the core components of an effective prompt. Each agent you design should
323:04 follow a structured prompt to ensure clarity, consistency, and efficiency.
323:09 Now, there's a ton of different types of prompting you can do based on the role
323:12 of the agent. Ultimately, they're going to fall under one of these three buckets:
323:16 tool-based prompting; conversational or
323:19 content-creation-type prompting; and categorization/evaluation prompting. And
323:23 the reason I wanted to highlight that is because obviously if we're creating like
323:26 a content creation agent, we're not going to say what tools it has if it has
323:29 no tools. But yeah, I just wanted to throw that out there. And another thing
323:32 to keep in mind is I really like using markdown formatting for my prompts. As
323:35 you can see these examples, we've got like different headers with pound signs
323:39 and we're able to specify like different sections. We can use bolded lists. We
323:42 can use numbered lists. I've seen some people talk about using XML for
323:45 prompting. I'm not a huge fan of it because, as far as human readability goes,
323:49 I think markdown just makes a lot more sense. So that's what I do.
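(As a quick visual of that markdown style, a system prompt skeleton might look like the sketch below. The section names and wording are just illustrative of the layout being described, not a prescribed template, and {{ $now }} is one way n8n can inject the current date/time.)

```markdown
# Overview
You are a personal assistant agent. Your job is to route the user's query to the correct tool.

## Tools
1. **Contact Database**: use this tool to get contact information.
2. **Email Agent**: use this tool to send an email.

## Rules
- Never write emails yourself; always call the correct tool.
- If the user provides incomplete information, ask a follow-up question.

## Final Notes
Here is the current date/time: {{ $now }}
```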
323:53 Anyways, now let's talk about the main sections that I include in my prompts. The first one
323:56 is always a background. So whether this is a role or a purpose or a context, I
324:01 typically call it something like an overview. But anyways, just giving it
324:04 some sort of background that defines who the agent is, what its overall goal is.
324:08 And this really sets the foundation of, you know, sort of identifying their
324:12 persona and their behavior. And if you don't have this section, the agent is
324:16 kind of going to lack direction, and it's going to generate really generic or
324:21 unfocused outputs. So set its role. And this could be really simple. You can
324:23 kind of follow this template of you are a blank agent designed to do blank. Your
324:27 goal is blank. So you are a travel planning AI assistant that helps users
324:31 plan their vacations. Your goal is to provide detailed, personalized travel
324:35 itineraries based on the user's input. Then we have tools. This is obviously
324:38 super super important when we're doing sort of non-deterministic agent
324:41 workflows where they're going to have a bunch of different tools and they have
324:44 to use their brain, their chat model to understand which tool does what and when
324:48 to use each one. So, this section tells the agent what tools it has access to
324:52 and when to use them. It ensures the AI selects the right tool for the right
324:55 task. And a well structured tools section prevents confusion and obviously
324:59 makes AI more efficient. So, here's an example of what it could look like. We
325:02 have like the markdown header of tools and then we have like a numbered list.
325:05 We're also showing that the tools are in bold. This doesn't have to be the way
325:08 you do it, but sometimes I like to show them in bold. And you can see
325:12 it's really simple. It's not too much, it's not overwhelming;
325:16 it's just very clear. Google Search: use this tool when the
325:19 user asks for real-time information. Email Sender: use this tool when the
325:22 user wants to send a message. Super simple. And what else you can do is you
325:26 can define when to use each tool. So right here we say we have a contact
325:29 database. Use this tool to get contact information. You must use this before
325:34 using the email generator tool because otherwise it won't know who to send the
325:36 email to. So you can actually define these little rules. Keep it very clear
325:41 within the actual tool layer of the prompt. And then we have instructions. I
325:44 usually call them rules as you can see. Um, you could maybe even call it like a
325:48 standard operating procedure. But what this does is outline specific rules
325:52 for the agent to follow. It dictates the order of operations at a high level.
325:55 Just keep in mind, you don't want to say do this in this order every time because
325:58 then it's like, why are you even using an agent? The whole point of an agent is
326:01 that it's, you know, it's taking an input and something happens in this
326:04 black box where it's calling different tools. It may call this one twice. It
326:07 may call this one three times. It may call them none at all. Um, the idea is
326:11 that it's variable. It's not deterministic. So, if you're saying do
326:15 this and this in this order every time, then you should just be using a sequential workflow. It
326:18 shouldn't even be an agent. But obviously, the rules section helps
326:22 prevent misunderstandings. So, here's like a high level instruction, right?
326:25 You're greeting the user politely. If the user provides incomplete
326:27 information, you ask follow-up questions. Use the available tools only
326:31 when necessary. Structure your response in clear, concise sentences. So, this
326:34 isn't saying like you do this in this order every time. It's just saying when
326:37 this happens, do this. If this happens, do that. So, here's an example for AI
326:41 task manager. When a task is added, you confirm with the user. If a deadline is
326:45 missing, ask the user to specify one. If a task priority is high, send a
326:48 notification. Store all tasks in the task management system. So, it's very
326:52 clear, too. We don't need all these extra filler words, because remember, the
326:56 AI can understand what you're saying as long as it has the actual context
326:59 words that have meaning. You don't need all these little fillers. You don't
327:03 need these long sentences. So, moving on to examples: sample
327:07 inputs and outputs, and also the actions that sit between the inputs and
327:11 outputs. This helps the AI understand expectations by showing real
327:14 examples. And these are the things that I love to hard code in there, hard
327:18 prompt in there. Because like I said, there's no point in showing an example
327:21 if the AI was already going to get that input and output right every time. You
327:24 just want to see what it's messing up on and then put an example in and show it
327:28 how to fix itself. So more clear guidance and it's going to give you more
327:31 accurate and consistent outputs. Here's an example where we get the input that
327:34 says, can you generate a trip plan for Paris for 5 days? The action you're
327:37 going to take is first call the trip planner tool to get X, Y, and Z. Then
327:41 you're going to take another action which is calling the email tool to send
327:44 the itinerary. And then finally, the output should look something like this.
327:49 Here's a 5-day Paris itinerary. Day 1, day 2, day 3, day 4, day 5. And then I
327:53 typically end my prompts with like a final notes or important reminders
327:56 section, which just has like some miscellaneous but important reminders.
327:59 It could be current date and time, it could be rate limits, it could be um
328:03 something as simple as "don't put any emojis in the output." And
328:09 sometimes the reason I do this is because something can get lost within your
328:13 prompt. I've thrown today's date up top, but then the agent
328:16 only actually picked it up when it was at the bottom. So playing around with the
328:19 actual location of your things can sometimes help it out. And so,
328:23 having a final notes section at the bottom, not with too many notes, but
328:26 just some quick things to remember like always format responses as markdown.
328:30 Here's today's date. If unsure about an answer, say I don't have that
328:33 information. So just little miscellaneous things like that. Now, I
328:36 wanted to quickly talk about some honorable mentions because like I said
328:40 earlier, the prompt sections and components vary based on the actual
328:44 type of agent you're building. So, in the case of, like, a content creator agent
328:48 that has no tools, um you wouldn't give it a tool section, but you may want to
328:51 give it an output section. So, here's an output section that I had recently done
328:55 for my voice travel agent. Um, which if you want to see that video, I'll drop a
328:59 link right here. But what I did was I just basically included rules for the
329:01 output because the output was very specific with HTML format and it had to
329:05 be very structured and I wanted horizontal lines. So I created a whole
329:09 section dedicated towards output format as you can see. And because I used three
329:13 pound signs for these subsections, the agent was able to understand that all
329:17 this rolled up into the format of the output section right here. So anyways, I
329:21 said the email should be structured as HTML that will be sent through email.
329:25 Use headers to separate each section. Add a horizontal line to each section.
329:28 I said what should be in the subject. I said what should be in the
329:31 introduction section. I said how you should list these departure dates,
329:34 return dates, flights for the flight section. Um, here's something where I
329:39 basically gave it like the HTML image tag and I showed how to put the image in
329:42 there. I told it to make it an inline image rather than an
329:47 attachment. I said to have each resort with a clickable link. I also was able
329:51 to adjust the actual width percentage of the image by specifying that here in the
329:56 prompt. So yeah, this was just getting really detailed about the way we
329:59 want the actual format to be structured. You can see here we have "activities,"
330:02 which I actually misspelled in my agent, but it didn't matter. And then finally,
330:06 just a sign off. And then just some final additional honorable mentions,
330:10 something like memory and context management, some reasoning, some
330:14 error handling, but typically I think that these can be just kind of one or
330:17 two sentences that can usually go in like the rules or instructions section,
330:21 but it depends on the use case, like I said. So, if it needs to be pretty
330:24 robust, then creating an individual section at the bottom called memory or
330:28 error handling could be worth it. It just depends on, like I said, the actual
330:32 use case and the goal of the agent. Okay, cool. So, now that we've got
330:35 through that document, let's hop into n8n and we'll just do some really quick
330:38 examples of some reactive live prompting. Okay, so I'm going to hit
330:42 tab. I'm going to type in AI agent. We're going to grab one and we're going
330:44 to be communicating with it through this connected chat trigger node. Now, I'm
330:47 going to add a chat model real quick just so we can get set up and
330:51 running. We have our 4o mini. We're good to go. And just a reminder, there is
330:54 zero system prompt in here. All it is is that you are a helpful assistant. So,
330:58 the first thing to do is we want to add a tool and test it out. So, I'm
331:03 going to add a Google Calendar tool. I'm just going to select my
331:07 calendar to pull from. I'm going to, you know, fill in those parameters using the
331:10 model by clicking that button. And I'm just going to say this one's called
331:14 create event. So, we have create event. And so, now we're going to do our test
331:17 and see if the tool is working properly. I'm going to say create an event for
331:23 tonight at 7:00 p.m. So send this off. We should see the agent understand
331:27 to use this create event tool, because it's using an automatic description. But
331:31 now we see an issue. It created the start time for October 12th, 2023 and
331:36 the end time also for October 12th, 2023. So this is our first instance of
331:39 reactive prompting. It's calling the tool correctly. So we don't really need
331:43 to prompt in the actual tool name yet. It's probably best practice
331:47 just to do so, but first, I'm just going to give an overview and say you
331:52 are a calendar... Actually, no. I'm just going to say you are a helpful assistant,
331:56 because that's all it is right now. And we don't know what else we're adding
331:59 into this guy. But now we'll just say tools: create event, just so it's
332:06 aware. Use this to create an event. And then we want to say final notes: here
332:14 is the current date and time. Because that's where it messed up: it
332:17 didn't know the current date and time, even though it was able to call the
332:20 correct tool. So now we'll just send this same thing off again, and that
332:24 should have fixed it. We reactively fixed the error, and we're just making
332:28 sure that it is working as it should now.
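(At this point the entire system prompt is only a few lines. Reconstructed from the narration, it reads roughly like this; the video doesn't show the exact date/time expression used, so {{ $now }} below is just one n8n way to do it.)

```markdown
# Overview
You are a helpful assistant.

## Tools
- Create Event: use this to create an event.

## Final Notes
Here is the current date and time: {{ $now }}
```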
332:31 Okay, there we go. It just hit the tool, and it says the event has been created for tonight at 7 p.m. And if I
332:35 click into my calendar, you can see right there we have the event that was
332:38 just created. So cool. Now that's working. What we're going to do now is
332:40 add another tool. So, we'll drag this one over here. And let's say we want to
332:44 do a send email tool. We're going to send a message. We're going to change
332:48 the name to send email. And just so you guys are aware of how it's able to
332:52 know: right here, the tool description, we're setting it automatically. If we set it
332:55 manually, we would just say, you know, use this tool to send an email. But we
332:59 can just keep it simple and leave it as set automatically. I'm going to set the
333:04 to, subject, and message as defined by the model. And that's going to be it. So now
333:07 we just want to test this thing again before we add any prompts. We'll say
333:12 send an email to bob@example.com asking what's up. We'll send this off. Hopefully it's hitting
333:17 the right tool. So we should see there we go. It hit the send email tool and
333:21 the email got sent. We can come in here and check everything was sent correctly.
333:24 Although what we noticed is it's signing off as "Best, [placeholder: your name]," and we
333:28 don't want to do that. So let's come in here and let's add a tool section for
333:33 this tool, and we'll tell it how to act. So send email: that's another
333:37 tool it has. And we're going to say use this to send an email. Then we're going to say: sign off
333:45 emails as Frank. Okay. So that's reactively fixing an error we saw. I'm
333:48 just now going to send off that same query. We already know that it knows how
333:51 to call the tool. So it's going to do that once again. There we go. We see the
333:55 email was sent. And now we have a sign off as Frank. So that's two problems
333:58 we've seen. And then we've added one super short line into the system prompt
334:02 and fixed those problems. Now let's do something else. Let's say in Gmail we
334:08 want to be able to label an email. And in order to label an email, as you can
334:13 see, add label to a message, we need a message ID and we need a label name or
334:17 an ID for that label. And we could choose from a list here, but more
334:21 realistically, we want the label ID to be pulled in dynamically. So if we need
334:24 to get these two things, what we have to do is first get emails and also get
334:28 labels. So first I'm going to do get many. We're
334:32 calling this tool get emails. And then we don't want
334:37 to return all. We want to do a limit. And we also want to choose from a
334:40 sender. So we'll have this also be dynamically chosen. So cool. We don't
334:45 have a system prompt in here about this tool, but we're just going to say get my
334:52 last email from Nate Herkelman. So we'll send that off. It should be hitting the
334:55 get emails tool, filling in Nate Herkelman as the sender. And now we can see that
334:59 we just got this email with a subject hello. We have the message ID right
335:03 here. So that's perfect. And now what we need to do is we need to create a tool
335:06 to get the label ID. So I'm going to come in here and I'm going to say get
335:11 many, and we're going to go to label. We're going to do... actually, we'll just
335:16 return all. That works. There's not too many labels in there. And we have to
335:19 name this tool, of course. So we're going to call this get labels. So once again,
335:24 there's no prompt in here about these two tools at all, and we're
335:29 going to say get my email labels. We'll see if it hits the right tool. There we go. It did. And it
335:36 is going to basically just tell us, you know, here they are. So, here are our
335:40 different labels. And here are the ones that we created. So, promotion,
335:43 customer support, high priority, finance, and billing. Cool. So, now we
335:48 can try to actually label an email. So, that email that we just got from
335:54 Nate Herkelman that said hello, let's try to label that one. So, I'm going to add
335:57 another Gmail tool, and this one's going to be add a label to a message. And we
336:01 need the message ID and the label ID. So, I'm just going to fill these in with
336:05 the model parameter, and I'm going to call this tool add label. So, there's no
336:12 prompting for these three tools right here, but we're going to try it out
336:17 anyway and see what happens. So, add a promotion label to my last email
336:22 from Nate Herkelman. Send that off. See what happens: it's getting emails. It tried
336:29 to add a label before. So, now we kind of got into a weird loop. As
336:32 you can see, it tried to add a label before it got labels. So, it didn't know
336:36 what to do, right? We'll click into here, and we'll see... I don't really
336:40 exactly know what happened. Category: promotions. Looking in my inbox at
336:43 anything sent from Nate Herkelman, we have the email right here, but it wasn't
336:47 accurately labeled. So, let's go back into our agent and prompt this thing a
336:50 little bit better to understand how to use these tools. So, I'm going to
336:53 basically go into the tools section here and I'm going to tell it about some more
336:57 tools that it has. So, get emails, right? This one was already
337:01 working properly, and we're just saying use this to get emails. Now, we have to
337:07 add get labels. We're just saying use this to get labels. And we know that we want it
337:14 to use this before actually trying to add a label, but we're not going to add
337:17 that yet. We're going to see if it can work with a more minimalistic prompt.
337:21 And then finally, I'm going to say add labels. And this one is use this tool to
337:28 add a label to an email. Okay. So now that we just have very basic tool
337:32 descriptions in here, we don't actually say like when to use it or how. So I'm
337:35 going to try this exact same thing again. Add a promotion label to my last
337:39 email from Nate Herkelman. Once again, it tried to use add label before, and it
337:42 tried to just call it twice, as you can see. So, not working. So back in email, I
337:47 just refreshed and you can see the email is still not labeled correctly. So,
337:51 let's do some more reactive prompting. What we're going to do now is just say,
337:55 in order to add labels... so in the description of the add label tool, I'm
337:59 going to say you must first use get emails to get the message ID. And
338:07 actually, I want to make sure that it knows that this is a tool. So, what I'm
338:10 going to do is put it in quotes, and I'm going to make it the
338:13 exact same capitalization as we defined over here. So: you must first use "get
338:17 emails" to get the message ID of the email to label. Then you must use "get
338:29 labels" to get the label ID of the email to label.
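(Pulling the narration together, the tools section of the system prompt has now grown to roughly the following; the wording is reconstructed, not copied from the screen.)

```markdown
## Tools
- Create Event: use this to create an event.
- Send Email: use this to send an email. Sign off emails as Frank.
- Get Emails: use this to get emails.
- Get Labels: use this to get labels.
- Add Label: use this tool to add a label to an email. You must first use
  "get emails" to get the message ID of the email to label. Then you must use
  "get labels" to get the label ID of the email to label.
```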
338:33 Okay, so we added in this one line. If it's still not working, we know that this line wasn't enough. I'm
338:36 going to hit save and I'm going to try the exact same thing again. Add a
338:39 promotion label to my last email. So now it's getting labels, and
338:43 it still had an error with adding labels. So, we'll take a look in here.
338:46 It said that it did it successfully, but it obviously didn't. It filled in label
338:53 127, blah blah blah. So, I think the message ID is correct, but the label ID
338:57 is not. So, what I'm going to try now is reactively prompting in here. I'm going
339:03 to say the label ID of the email to label. We'll try that. We'll see if that
339:07 fixes it. It may not; we'll have to keep going. So, now we'll see. At
339:10 least it fixed the order, right? So, it's getting emails and getting labels
339:14 first. And now, look at that. We successfully got a labeled email. As you
339:19 can see, we have our... maybe we didn't. We'll have to go into Gmail and actually
339:22 check. Okay, never mind. We did. As you can see, we got the promotion label for
339:26 this one from Nate Herkelman that says hello. And yeah, that's just going to
339:32 be a really cool simple example of how we sort of take on the process of
339:36 running into errors, adding lines, and being able to know exactly what caused
339:38 what. So, I know the video was kind of simple and I went through it pretty
339:41 fast, but I think that it's going to be a good lesson to look back on as far as
339:45 the mindset you have and approaching reactively prompting and adding
339:48 different tools and testing things because at the end of the day, building
339:52 agents is a super, super test-heavy, iterative refining process of
339:57 build, test, change, build, test, change, all that kind of stuff. All
339:59 right, so these next sections are a little bit more miscellaneous, but cool
340:02 little tips that you can play around with with your AI agents. We're going to
340:06 be talking about output parsing, human in the loop, error workflows, and having
340:11 an agent have a dynamic brain. So, let's get into it. All right, so output
340:15 parsing. Let's talk about what it actually means and why you need to use
340:18 it. So, just to show you guys what we're working with, I'm just going to come in
340:21 here real quick and ask our agent to create an email for us. And when it does
340:25 this, the idea is that it's going to create a subject and a body so that we
340:30 could drag this into a Gmail node. So actually, before I ask it to do that,
340:33 let's just say we're dragging in a Gmail node, and we want to have this guy
340:38 send an email... if I can find this node, which is right up here. Okay, send
340:42 a message. Now what you can see is that we have different fields that we need to
340:46 configure: the to, the subject, and the message. So ideally, when we're asking
340:51 the agent to create an email, it will be able to output those three different
340:54 things. So let me just show an example of that. Please send an email to
341:01 nateample.com asking what's up. We need the to the subject and the
341:06 email. Okay, so ideally we wouldn't say that every time because um we would have
341:11 that in the system prompt. The issue is this workflow doesn't let you run if the
341:14 node is errored and it's errored because we didn't fill out stuff. So I'm just
341:17 going to resend this message. But let me show you guys exactly why we need to use
341:21 an output parser. So it outputs the to, the subject, and the message. And if
341:24 we actually click into it, though, the issue is that it comes through all in
341:30 one single item called output. And so you can see we have the
341:33 to, the subject, and the message. And now if I went into here to actually map
341:37 these variables, I couldn't have them separated or I would need to separate
341:41 them in another step because I want to drag in the dynamic variable, but I can
341:45 only reference all of it at once. So that's not good. That's not what we
341:48 want. That's why we need to connect an output parser. So I'm going to click
341:52 into here. And right here there's an option that says require specific output
341:55 format. And I'm going to turn that on. What that does is it just gave us
341:59 another option to our AI agent. So typically we basically right here have
342:03 chat model, memory, and tool. But now we have another one called output parser.
342:06 So this is awesome. I'm going to click onto the output parser. And you can see
342:10 that we have basically three options. 99.9% of the time, you are just going to
342:14 be using a structured output parser, which means you're able to give your
342:20 agent basically a defined JSON schema, and it will always output stuff in that
342:25 schema. If you need it to be a little bit automatically fixed with AI
342:29 (like I said, I almost never have to use this), that's what you would use the
342:33 auto-fixing output parser for. So if I click on the structured output parser,
342:37 what happens is, right now, we see a JSON example. So if we were to talk to our
342:41 agent and say, hey, can you tell me some information about
342:44 California, it would output the state in one string item called state, and then
342:50 would also output an array of cities: LA, San Francisco, San Diego. So what we
342:54 want to do is we want to quickly define to our AI agent how to output
342:58 information, and we know that we want it to output based on this node: we need a
343:03 to, we need a subject, and we need a message. So don't worry, you're not
343:07 going to have to write any JSON yourself. I'm going to go to ChatGPT and
343:12 say, "Help me write a JSON example for a structured output parser in NADN. I need
343:18 the AI agent to output a two field, a subject field, and a body field." We'll
343:24 just go ahead and send this off. And as you guys know, all LLMs are trained
343:28 really well on JSON. It's going to know exactly what I'm asking for here. And
343:31 all I'm going to have to do is copy this and paste that in. So once this finishes
343:36 up, it's very simple: to, subject, body. And it's being a little extra right now
343:39 and giving me a whole example body, but I just have to copy that, go
343:43 into here, and just replace that JSON example. Super simple.
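(The schema example that comes back is along these lines; the values are placeholders, since the structured output parser only uses the example to infer field names and types.)

```json
{
  "to": "recipient@example.com",
  "subject": "Example subject line",
  "body": "Example email body text."
}
```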
343:49 And now, hopefully, I don't even have to prompt this guy at all. We'll give it a try. But if
343:53 it's not working, what we would do is we would prompt in here and say, "Hey, here
343:56 is basically how we want you to output stuff. Here's your job." All that kind
343:59 of stuff, right? But let me just resend this message. We'll take a look. We'll
344:02 see that it called its output parser because this is green. And now let's
344:07 activate the Gmail node and click in. Perfect. So what we see on this left
344:11 hand side now is we have a to, we have a subject, and we have a body, which
344:14 makes this so much easier to actually map out over here and drag in. So in
344:21 different agents in this course, you're going to see me using different
344:24 structured output parsers, whether that is to get a to, subject, and body, whether
344:28 that is to create different stories, stuff like that. Let me just show one
344:31 more quick example of like a different way you could use this. I'm going to
344:35 delete this if I can actually delete it. And we are going to just change up the
344:40 structured output parser. So let's say we want an AI agent to create a story for us. So I'm going
344:48 to just talk to this guy again. Help me write a different JSON example where I
344:54 want to have the agent output a title of the story, an array of characters, and
345:00 then three different scenes. Okay. So we'll send that off and see what it
345:03 does. And just keep in mind, it's creating this JSON as basically a template
345:07 that's telling your agent how to output information. So we would basically say,
345:12 "Hey, create me a story about a forest." And it would output a title,
345:16 an array of characters, and three different scenes, as you can see here.
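(The shape of that schema would be roughly this; field names and sample values are illustrative.)

```json
{
  "title": "Example story title",
  "characters": ["Character One", "Character Two", "Character Three"],
  "scenes": [
    { "scene": 1, "description": "Opening scene description." },
    { "scene": 2, "description": "Middle scene description." },
    { "scene": 3, "description": "Closing scene description." }
  ]
}
```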
345:22 So we'll copy this and paste it into here. And once again, I'm not even going to
345:25 prompt the agent. And let's see how it does. Please create me a story about an
345:31 airplane. Okay, we'll go ahead and take a look at what this is going to do. This one's
345:36 going to spin a little bit longer. Oh, wow. Didn't even take too long. So, it
345:40 called the structured output parser. And now, let's click into the agent and see
345:44 how it output. Perfect. So, we have the title is the adventure of Skyward the
345:49 airplane. We have four characters, Skyward, Captain Jane, Navigator Max,
345:54 and ground engineer Leo. And then you can see we have four different scenes
345:57 that each come with a scene number and a description. So if we wanted to, we
346:01 could have this be like, you know, maybe we want an image prompt for each of
346:04 these scenes. So we can feed that into an image generation model and we would
346:07 just have to go into that ChatGPT and say, "Hey, for each scene, add another
346:11 field called image prompt." And it would just basically take care of it. So just
346:14 wanted to show you how this works, how easy it is to set up these different
346:18 JSON structured output parsers and why it's actually valuable to do within n8n. So
346:22 hopefully that opened your eyes a little. Okay, our workflow is actively listening
346:30 for us in Telegram, and I'm going to ask it to make an X post about coffee at
346:33 night. So, as you can see, this first agent is going to search the internet
346:37 using Tavily and create that initial X post for us. Now, we just got a message
346:41 back in our Telegram that says, "Hey, is this post good to go?" Drinking coffee
346:44 at night can disrupt your sleep since caffeine stays in your system for hours,
346:48 often leading to poorer sleep quality. So, what I'm going to do is click on
346:50 respond. And this gives us the ability to give our agent feedback on the post
346:54 that it initially created. So, here is that response window and I'm going to
346:56 provide some feedback. So, I'm telling the agent to add at the end of the tweet
347:00 unless it's decaf. And as soon as I hit submit, we're going to see this go down
347:03 the path. It's going to get classified as a denial message. And now the
347:07 revision agent just made those changes and we have another message in our
347:10 telegram with a new X post. So now, as you can see, we have a new post. I'm
347:13 going to click on respond and open up that window. And what we can see here is
347:16 now we have the changes made that we requested. At the end, it says unless
347:20 it's decaf. So now all we have to do is respond good to go. And as soon as we
347:23 submit this, it's going to go up the approval route and it's going to get
347:27 submitted and posted to X. So, here we go. Let's see that in action. I'll hit
347:29 submit and then we're going to watch it get posted onto X. And let's go check
347:33 and make sure it's there. So, here's my beautiful X profile. And as you can see,
347:35 I was playing around with some tweets earlier. But right here, we can see
347:38 drinking coffee at night can disrupt your sleep. We have the most recent
347:41 version because it says unless it's decaf. And then we can also click into
347:45 the actual blog that Tavily found to pull this information from. So, now that
347:48 we've seen this workflow in action, let's break it down. So the secret that
347:51 we're going to be talking about today is the aspect of human in the loop, which
347:55 basically just means somewhere along the process of the workflow. In this case,
347:58 it's happening right here. The workflow is going to pause and wait for some sort
348:02 of feedback from us. That way, we know before anything is sent out to a client
348:06 or posted on social media, we've basically said that we 100% agree that
348:10 this is good to go. And if the initial message is not good to go, we have the
348:14 ability to have this unlimited revision loop where it's going to revise the
348:18 output over and over until we finally agree that it's good to go. So, we have
348:21 everything color coded and we're going to break it down as simple as possible.
348:24 But before we do that here, I just wanted to do a real quick walkthrough of
348:28 a more simple human in the loop because what's going on up here is it's just
348:31 going to say, "Do you like this?" Yes or no compared to down here where we
348:35 actually give textbased feedback. So, we'll break them both down, but let's
348:37 start up here real quick. And by the way, if you want to download the
348:40 template for free and play around with either of these flows, you can get that
348:43 in my free Skool community. The link for that will be down in the
348:45 description. And when it comes to human in the loop in n8n, if you click on
348:48 the plus, you can see down here, human in the loop, wait for approval or human
348:53 input before continuing. You click on it, you can see there's a few options,
348:56 and they all just use the operation called send and wait for response. So
348:59 obviously there's all these different integrations, and I'm sure more will
349:03 even start to roll out, but in this example, we're just using Telegram.
349:05 Okay, so taking a look at this more simple workflow, we're going to send off
349:08 the message: make an X post about AI voice agents. What's happening is the
349:11 exact same thing as the demo where this agent is going to search the web and
349:14 then it's going to create that initial content for us. And now we've hit that
349:18 human in the loop step. As you can see, it's spinning here purple because it's
349:21 waiting for our approval. So in our Telegram, we see the post. It asks us if
349:25 this is good to go. And let's just say that we don't like this one, and we're
349:27 going to hit decline. So when I hit decline, it goes down this decision
349:31 point where it basically says, you know, did the human approve? Yes or no. If
349:34 yes, we'll post it to X. If no, it's going to send us a denial message, which
349:38 basically just says post was denied. Please submit another request. And so
349:41 that's really cool because it gives us the ability to say, okay, do we like
349:44 this? Yes. And it will get posted. Otherwise, just do nothing with it. But
349:48 what if we actually want to give it feedback so that it can take this post?
349:51 We can give it a little bit of criticism and then it will make another one for us
349:55 and it just stays in that loop rather than having to start from square one. So
349:58 that's exactly what I did down here with the human in the loop 2.0, where we're
350:02 able to give text-based feedback instead of just saying yes or no. So now we're
350:05 going to break down what's going on within every single step here. So what
350:08 I'm going to do is I'm going to click on executions. I'm going to go to the one
350:12 that we did in the live demo and bring that into the workflow so we can look at
350:15 it. So what we're going to do is just do another live run and walk through step
350:20 by step the actual process of this workflow. So I'm going to hit test
350:23 workflow. I'm going to pull up Telegram and then I'm going to ask it to make us
350:26 an X post. Okay, so I'm about to fire off: make me an X post about crocodiles.
350:31 So, sent that off. This X post agent is using its GPT-4.1 model as well as Tavily
350:37 search to do research, create that post, and now we have the human in the loop
350:40 waiting for us. So before we go look at that, let's break down what's going on
350:43 up front. So the first phase is the initial content. This means that we have
350:46 a Telegram trigger, and that's how we're communicating with this workflow. And
350:50 then it gets fed into the first agent here, which is the X post agent. Let's
350:53 click into the X post agent and just kind of break down what's going on
350:56 here. So, the first thing to notice is that we're looking for some sort of
350:59 prompt. The agent needs some sort of user message that it's going to look at.
351:03 In this case, we're not doing the connected chat trigger node. We're
351:06 looking within our Telegram node because that's where the text is actually coming
351:09 through. So, on this left-hand side, we can see all I basically did was right
351:13 here is the text that we typed in, make me an X post about crocodiles. And all I
351:17 did was I dragged this right into here as the user message. And that is what
351:20 the agent is actually looking at in order to take action. And then the other
351:23 thing we did was gave the agent a system message which basically defines its
351:27 behavior. And so here's what we have. The overview is you are an AI agent
351:31 responsible for creating X posts based on a user's request. Your instructions are
351:35 to always use the Tavily search tool to find accurate information. Write an
351:39 informative engaging tweet, include a brief reference to the source directly
351:43 in the tweet and only output the tweet. We listed its tool, of which it only has one,
351:46 called Tavily search, and we told it to use this for real-time web search, and
351:50 then just gave it an example, basically saying, okay, here's an input you may get,
351:54 here's the action you will take, and then here's the output that we want you to
351:57 output. And then we just gave it final notes. And I know I maybe read through
352:01 this pretty quick but keep in mind you can download the template for free and
352:03 the prompt will be in there and then what you could do is you can click on
352:06 the logs for an agent and you can basically look at its behavior so we can
352:11 see that it used its chat model GPT-4.1, read through the system prompt, and decided,
352:14 "Okay, I need to go use Tavily search." So, here's how it searched for
352:18 crocodile information. And then it used its model again to actually create that
352:22 short tweet right here. And then we'll just take a quick look at what's going
352:25 on within the actual Tavily search tool here. So, if you download this template,
352:28 all you'll have to do is plug in your own credential. Everything else should
352:32 be set up for you. But, let me just break it down real quick. So, if you go
352:35 to tavily.com and create an account, you can get 1,000 free searches per month.
352:39 So, that's the kind of plan I'm on. But anyways, here is the documentation. You
352:42 can see right here we have the Tavily search endpoint, which is right here. All
352:46 we have to do is authorize ourselves. So we'll have an authorization as a header
352:50 parameter and then we'll do bearer space our API token. So that's how you'll set
352:53 up your own credential. And then all I did was I copied this data field into
352:57 the HTTP request. And this is where you can do some configuration. You can look
353:00 through the docs to see how you want to make this request. But all I wanted to
353:03 do here was just change the search query. So back in n8n, you can see in my
353:08 request body I changed the query by using a placeholder. Right here it says
353:11 use a placeholder for any data to be filled in by the model. So I changed the
353:15 query to a placeholder called search term. And then down here I defined the
353:18 search term placeholder as what the user is searching for. So what this means is
353:22 the agent is going to interpret our query that we sent in telegram. It's
353:27 then going to use this Tavily tool and basically use its brain to figure out
353:30 what it should search for. And in this case, on the left-hand side, you can see
353:33 that it filled out the search term with latest news or facts about crocodiles.
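For reference, here's roughly the same call the HTTP Request tool is making, written as a standalone script. This is a sketch based on the bearer-auth setup described above; the endpoint and body shape follow Tavily's docs, and TAVILY_API_KEY is a placeholder for your own credential:

    // Minimal Tavily search request, mirroring the HTTP Request tool's config.
    const res = await fetch("https://api.tavily.com/search", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.TAVILY_API_KEY}`, // bearer space API token
        "Content-Type": "application/json",
      },
      // In n8n the query is the "search term" placeholder the agent fills in;
      // here we hard-code what it chose for the crocodile request.
      body: JSON.stringify({ query: "latest news or facts about crocodiles" }),
    });
    const data = await res.json(); // includes result snippets and source URLs
    console.log(data);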
353:38 And then we get back our response with information and a URL. And then it uses
353:42 all of this in order to actually create that post. Okay. So, here's where it may
353:45 seem like it's going to get a little tricky, but it's not too bad. Just bear
353:48 with me. And I wanted to do some color coding here so we could all sort of stay
353:52 on the same page. So, what we're doing now is we're setting the post. And this
353:56 is super important because we need to be able to reference the post later in the
353:59 workflow, whether that's when we're actually sending it over to X or when
354:04 we're making a revision and we need the revision agent to look at the original
354:07 post as well as the feedback from the human. So in the set node, all we're
354:11 doing is we're basically setting a field called post and we're dragging in a
354:15 variable called $json.output. And this just means that it's going to be
354:19 grabbing the output from this agent or the revision agent no matter what. As
354:22 you can see, it's looped back into this set because if we're defining a variable
354:26 using dollar sign JSON, it means that we're going to be looking for whatever
354:29 node immediately finished right before this one. And so that's why we have to
354:32 keep this one kind of flexible because we want to make sure that at the end of
354:36 the day, if we made five or six or seven revisions, that only the most recent
354:41 version will actually be posted on X.
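Written out, the Set node is tiny. The post field is just this expression (n8n expressions are JavaScript between double curly braces):

    // Value of the "post" field in the Set node:
    {{ $json.output }}
    // $json points at the item from whichever node executed immediately
    // before this one: the X post agent on the first pass, the revision
    // agent on every loop after that. That's what keeps "post" pointing
    // at the most recent version.

So then we move into the human in the loop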
354:44 phase of this workflow. And as you can see, it's still spinning. It's been
354:46 spinning this whole time while we've been talking, but it's waiting for our
354:50 response. So anyways, it's a send and wait for response operation. As you
354:54 can see right here, the chat ID is coming from our Telegram trigger. So if
354:57 I scroll down in the Telegram trigger on the left-hand side, you can see that I
355:00 have a chat ID right here. And all I did was I dragged this in right here.
355:03 Basically just meaning, okay, whoever communicates with this workflow, we need
355:07 to send and get feedback from that person. So that's how we can make this
355:10 dynamic. And then I just made my message basically say, hey, is this good to go?
355:14 And then I'm dragging in the post that we set earlier. So this is another
355:18 reason why it's important: we want to request feedback on the most
355:21 recent version as well, not the first one we made. And then like I mentioned
355:24 within all of these human in the loop nodes, you have a few options. So you
355:28 can do free text, which is what we're doing here. Earlier what we did was
355:31 approval, which is basically you can say, hey, is there an approve button? Is
355:34 there an approve and a denial button? How do you want to set that up? But this
355:37 is why we're doing free text because it allows for us to actually give feedback,
355:40 not just say yes or no. Cool. So what we're going to do now is actually give
355:45 our feedback. So, I'm going to come into here. We have our post about crocodiles.
355:49 So, I'm going to hit respond and it's going to open up this new page. And so,
355:51 yes, it's a little annoying that this form has to pop up in the browser rather
355:56 than natively in Telegram or whatever, you know, Slack, Gmail, wherever you're
355:59 doing the human in the loop, but I'm sure that'll be a fix that'll come soon.
356:01 But it's just that right now, I think it's coming through a webhook. So, they just
356:04 kind of have to do it like this. Anyways, let's say that we want to
356:07 provide some feedback and say make this shorter. So, I'm going to say make this
356:11 shorter. And as I submit it, you're going to see it go to this decision
356:14 point and then it's going to move either up or down. And this is pretty clearly a
356:18 denial message. So we'll watch it get denied and go down to the revision agent
356:22 as you can see. And just like that that quickly, we already have another one to
356:26 look at. So before we look at it and give feedback, let's just look at what's
356:29 actually going on within this decision point. So in any automation, you get to
356:32 a point where you have to make a decision. And what's really cool about
356:36 AI automation is now we can use AI to make a decision that typically a
356:39 computer couldn't because a typical decision would be like is this number
356:42 greater than 10 or less than 10. But now it can read this text that we submitted,
356:46 "Make this shorter," and it can say, okay, is this approved or declined? And basically
356:50 I just gave it some short definitions of like what an approval message might look
356:53 like and what a denial message might look like. And you can look through that
356:56 if you download the template.
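Roughly, the two categories the classifier chooses between look like this (the wording below is illustrative; the exact definitions ship with the template):

    Category: Approved
    Description: The human indicates the post is good to go, e.g. "yes",
    "send it off", "looks good", "post it".

    Category: Declined
    Description: The human asks for a change or expresses dissatisfaction,
    e.g. "make this shorter", "add a source", "try a different angle".

But as you can see here, it pushed this message down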
357:00 the declined branch because we asked it to make a revision. And so it goes down
357:03 the denial branch which leads into the revision agent. And this one's really
357:07 really simple. All we did here was we gave it two things as the user message.
357:12 We said here's the post to revise. So as you can see, this is the initial
357:15 post that the first agent made for us. And then here is the human feedback. So
357:18 it's going to look at this. It's going to look at this and then it's going to
357:21 make those changes because all we said in the system prompt was you're an
357:24 expert Twitter writer. Your job is to take an incoming post and revise it
357:27 based on the feedback that the human submitted. And as you can see here is
357:31 the output: it made the tweet a lot shorter. And that's the beauty of using
357:34 the set node is because now we loop that back in. The most recent version has
357:38 been submitted to us for feedback. So, let's open that up real quick in our
357:41 Telegram. And now you can see that the shorter tweet has been submitted to us.
357:45 And it's asking for a response. So, at this point, let's say we're good to go
357:47 with this tweet. I'm going to click respond. Open up this tab. Recent
357:50 crocodile attacks in Indonesia. Highlight the need for caution in their
357:54 habitats. Stay safe. We've got a few emojis. And I'm just going to say, let's
357:58 just say send it off because it can interpret multiple ways of saying like
358:02 yes, it's good to go. So, as soon as I hit submit, we're going to watch it go
358:05 through the decision point and then post on our X. So, you see that right here,
358:09 text classifier, and now it has been posted to X. And I just gave our X
358:12 account a refresh. You can see that we have that short tweet about recent
358:16 crocodile attacks. Okay, so now that we've seen another example of a live run
358:19 through, a little more detailed, let me talk about why I made the color coding
358:23 like this. So the set node here, its job is basically just, I'm going to be
358:27 grabbing the most recent version of the post because then I can feed it into the
358:31 human in the loop. I can then feed that into the revision if we need to make
358:34 another revision because you want to be able to make revisions on top of
358:38 revisions. You don't want to be only making revisions on the first one.
358:40 Otherwise, you're going to be like, what's the point? And then also, of
358:44 course, you want to post the most recent version, not the original one, because
358:47 again, what's the point? So in here, you can see there's two runs. The first one
358:52 was the initial content creation and then the second one was the revised
358:55 one. Similarly, if we click into the next node which was request feedback,
358:59 the first time we said make this shorter and then the second time we said send it
359:02 off. And then if we go into the next node which was the text classifier, we
359:06 can see the first time it got denied because we said make this shorter and
359:10 the second time it said send it off and it got approved. And that's basically
359:14 the flow of you know initial creation. We're setting the most recent version.
359:18 We're getting feedback. We're making a decision using AI. And as you can tell
359:22 for the text classifier, I'm using Gemini 2.0 Flash rather than GPT-4.1. And then of
359:27 course, if it's approved, it gets posted. If it's not, it makes revisions.
359:30 And like I said, this is unlimited revisions. And it's revisions on top of
359:34 revisions. So when it comes to Human in the Loop, you can do it in more than
359:37 just Telegram, too. So if you click on the plus, you can see right here, Human
359:40 in the Loop, wait for approval or human input before continuing. We've got
359:45 Discord, Gmail, Chat, Outlook, Telegram, Slack. We have a lot of stuff you can
359:48 do. However, so far with my experience, it's been limited to one workflow. And
359:52 what do I mean by that? It's kind of tough to do this when you're actually
359:56 giving an agent a tool that's supposed to be waiting for human approval. So,
359:59 let me show you what I mean by that. Okay, so here's an agent where I tried
360:02 to do a human in the loop tool because we have the send and wait message
360:06 operation as a tool for an agent. So, let me show you what goes on here. We'll
360:09 hit test workflow. Okay, so I'm going to send off get approval for this message.
360:13 Hey John, just wanted to see if you had the meeting minutes. And you're going to
360:15 watch that it's going to call the get approval tool, but here's the issue. So,
360:21 it's waiting for a response, right? And we have the ability to respond, but the
360:25 waiting is happening at the agent level. It really should be waiting down here
360:28 for the tool because, as you saw in the previous example, the response from this
360:32 should be the actual feedback from the human. And we haven't submitted that
360:35 yet. And right now, the response from this tool is literally just the message.
360:40 So, what you'll see here is if I go back into Telegram and I click on respond and
360:43 we open up this tab, it basically just says no action required. And if I go
360:47 back into the workflow, you can see it's still spinning here and there's no way
360:51 for this to give another output. So, it just doesn't really work. And so, what I
360:54 was thinking was, okay, why don't I just make another workflow where I just use
360:57 the actual node like we saw on the previous one. That should work fine
361:00 because then it should just spin down here on the tool level until it's ready.
361:04 And so, let me show you what happens if we do that. Okay, so like I said, I
361:07 built a custom tool down here which is called get approval. It would be sending
361:10 the data to this workflow, it would send off an approval message using the send
361:14 and wait, and it doesn't really work. I even tried adding a wait here. But what
361:18 happens typically is when you use a workflow to call another one, it's going
361:21 to be waiting and looking in the last node of that workflow for the response,
361:26 but it doesn't work yet with these operations. And I'll show you guys why,
361:29 and I'm sure n8n will fix this soon, but it's just not there yet. So, I'm just
361:32 going to send off the exact same query, get approval for this message. We'll see
361:35 it call the tool and basically as you can see it finished up instantly and now
361:40 it's waiting here, and we already did get a message back in Telegram, which
361:43 basically said, ready to go: "Hey John, just wanted to see if you had the meeting
361:45 minutes," and it gives us the option to approve or deny. And if we click into the
361:50 subworkflow, the one that it actually sent data to, we can see that the
361:53 execution is waiting. So this workflow is properly working because it's waiting
361:57 here for human approval. But if we go back into the main flow it's waiting
362:01 here at the agent level rather than waiting here. So, there's no way for the
362:05 agent to actually get our live feedback and use that to take action how it needs
362:09 to. So, I just wanted to show you guys that I had been experimenting with this
362:12 as a tool. It's not there yet, but I'm sure it will be here soon. And when it
362:16 is, you can bet that I'll have a video on it. Today, I'm going to be showing you
362:22 guys how you can set up an error workflow in n8n so that you can log all
362:26 of your errors as well as get notified every time one of your active workflows
362:29 fails. The cool part is all we have to do is set up one error workflow and then
362:32 we can link that one to all of our different active workflows. So I think
362:35 you'll be pretty shocked how quick and easy this is to get set up. So let's get
362:38 into the video. All right, so here's the workflow that we're going to be using
362:41 today as our test workflow that we're going to purposely make error and then
362:44 we're going to capture those errors in a different one and feed that into a
362:47 Google sheet template as well as some sort of Slack or email notification. And
362:51 if you haven't seen my recent video on using this new think tool in n8n, then
362:54 I'll tag it right up here. Anyways, in order for a workflow to trigger an error
362:58 workflow, it has to be active. So, first things first, I'm going to make this
363:00 workflow active. There we go. This one has been activated. And now, what I'm
363:04 going to do is go back out to my n8n dashboard. We're going to create a new workflow.
363:07 And this is going to be our error logger workflow. Okay. So, you guys are going
363:10 to be pretty surprised by how simple this workflow is going to be. I'm going
363:13 to add a first step. And I'm going to type in "error." And as you can see,
363:16 there's an error trigger, which says triggers the workflow when another
363:18 workflow has an error. So, we're going to bring this into the workflow. We
363:21 don't have to do anything to configure it. You can see that what we could do is
363:24 we could fetch a test event just to see what information could come back. But
363:28 what we're going to do is just trigger a live one because we're going to get a
363:31 lot more information than what we're seeing right here. So, quickly pay
363:34 attention to the fact that I named this workflow error logger. I'm going to go
363:38 back into my ultimate assistant active workflow. Up in the top right, I'm going
363:41 to click on these three dots, go down to settings, and then right here, there's a
363:45 setting called error workflow, which as you can see, a second workflow to run if
363:48 the current one fails. The second workflow should always start with an
363:52 error trigger. And as you saw, we just set that up. So, all I have to do is
363:55 choose a workflow. I'm going to type in "error" and we called it error logger. So
363:59 I'm going to choose that one, hit save. And now these two workflows are
364:02 basically linked so that if this workflow ever has an error that stops
364:06 the workflow, it's going to be captured in our second one over here with the
364:09 information. So let's see a quick example of that. Okay, so this workflow
364:13 is active. It has a Telegram trigger, as you can see. So I'm going to drag in my
364:16 Telegram and I'm just going to say, "Hey." And what's going to happen is
364:19 obviously we're going to get a response back because this workflow is active and
364:23 it says, "How can I assist you today?" Now, what I'm going to do is I'm just
364:26 going to get rid of the chat model. So, this agent essentially has no brain. I'm
364:29 going to hit save. We're going to open up Telegram again, and we're going to
364:33 say, "Hey." And now, we should see that we're not going to get any response back
364:36 in Telegram. If we go into the executions of this ultimate assistant,
364:39 you can see that we just got an error right now. And that was when we just
364:42 sent off the query that said, "Hey," and it errored because the chat model wasn't
364:46 connected. So, if we hop into our error logger workflow and click on the
364:49 executions, we should see that we just had a new execution. And if we click
364:53 into it, we'll see all the information that came through. So what it's going to
364:57 tell us is the ID of the execution, the URL of the workflow, the name of the
365:01 workflow, and then we'll also see what node errored and the error message. So
365:05 here under the node object, we can see different parameters. We can see
365:08 kind of how the node's configured. We can see the prompt even. But what we're
365:11 interested in is down here we have the name, which is ultimate assistant. And
365:14 then we have the message, which was a chat model sub node must be connected
365:19 and enabled.
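Put together, the item the error trigger hands us looks roughly like this. Treat the TypeScript shape below as a sketch of what we just inspected, not a complete schema; field names can vary by n8n version:

    // Approximate shape of the Error Trigger's output item.
    interface ErrorTriggerItem {
      execution: {
        id: string;   // ID of the failed execution
        url: string;  // link straight to that failed execution
        error: {
          message: string;        // e.g. "A Chat Model sub-node must be connected and enabled"
          node: { name: string }; // the node that errored, e.g. "Ultimate Assistant"
        };
      };
      workflow: {
        id: string;
        name: string; // e.g. "Ultimate Personal Assistant"
      };
    }

So anyways, we have our sample data. I'm going to hit copy to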
365:22 editor, which just brings in that execution into here so we can play with
365:26 it. And now what I want to do is map up the logic of first of all logging it in
365:30 a Google sheet. So here's the Google sheet template I'm going to be using.
365:32 We're going to be putting in a timestamp, a workflow name, the URL of
365:36 the workflow, the node that errored, and the error message. If you guys want to
365:39 get this template, you can do so by joining my free Skool community. The
365:42 link for that's down in the description. Once you join the community, all you
365:44 have to do is search for the title of the video up top, or you can click on
365:48 YouTube resources, and you'll find the post. And then in the post is where
365:51 you'll see the link to the Google sheet template. Anyways, now that we have this
365:54 set up, all we have to do is go back into our error logger. We're going to
365:57 add a new node after the trigger. And I'm going to grab a Sheets node. What we
366:02 want to do is append a row in Sheets. Um, I'm just going to call this log error.
366:05 Make sure I choose the right credential. And then I'm going to choose the sheet
366:08 which is called error logs. And so now we can see we have the values we need to
366:11 send over to these different columns. So for timestamp, all I'm going to do is
366:14 I'm actually going to make an expression and I'm just going to do dollar sign
366:18 now. And this is basically just going to send over to Google Sheets the current
366:21 time whenever this workflow gets triggered. And if you don't like the way
366:23 this is coming through, you can play around with format after dollar sign
366:27 now. And then you'll be able to configure it a little bit more. And you
366:30 can also ask chat to help you out with this JavaScript function. And feel free
366:34 to copy this if you want. I'm pulling in the full year, month, day, and then the
366:37 time.
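For reference, the timestamp expression ends up looking something like this. $now is a date object in n8n, and format takes Luxon-style tokens; the exact format string below is just one option:

    // Timestamp column, as an n8n expression:
    {{ $now }}
    // Or formatted (year, month, day, then the time):
    {{ $now.format('yyyy-MM-dd HH:mm:ss') }}

Okay, cool. Then we're just pretty much going to drag and drop the other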
366:40 information we need. So the first thing is the workflow name. And to get to
366:43 that, I'm going to close out of the execution. And then we'll see the
366:45 workflow and we can pull in the name right there which is ultimate personal
366:49 assistant. For the URL, I'm going to open back up the execution and grab the URL
366:54 from here. For node, all I have to do is look within the node object. We're going
366:57 to scroll down until we see the name of the node which is right here, the
367:01 ultimate assistant. And then finally, the error message, which should be right
367:04 under that name right here. Drag that in, which says a chat model sub node
367:08 must be connected and enabled. So now that we're good to go here, I'm going to
367:10 test step and then we'll check our Google sheet and make sure that that
367:13 stuff comes through correctly. And as you can see, it just got
367:16 populated and we have the URL right here, which if we clicked into, it would
367:20 take us to that main ultimate personal assistant workflow. As you can see, when
367:23 this loads up, it takes us to the execution that actually failed as well,
367:27 so we could sort of debug. So now we have an error trigger that will update a
367:31 Google sheet and log the information. But maybe we also want to get notified
367:34 when there's an error. So I'm just going to drag this off right below and I'm
367:38 going to grab the Slack node and I'm going to choose to send a message right
367:41 here. And then we'll just configure what we want to send over. Okay, so we're
367:44 going to be sending a message to a channel. I'm going to choose the channel
367:48 all awesome AI stuff. And then we just need to configure what the actual
367:51 message is going to say. So I'm going to change this to an expression, make this
367:55 full screen and let's fill this out. So pretty much just customize this however
367:58 you want. Let's say I want to start off with workflow error and we will put the
368:04 name of the workflow. So I'll just close out of here, throw that in there. So now
368:06 it's going to come through saying workflow error ultimate personal
368:10 assistant. And then I'm going to say like what node errored at what time and
368:14 what the error message was. So first let's grab the name of the node. Um, I
368:17 just have to scroll down to name right here. So, ultimate assistant errored at,
368:23 and I'm going to do that same dollar sign now function. So it says ultimate
368:27 assistant errored at 2025-04-17. And then I'm just going to go down and say the error message was and
368:36 we're just going to drag in the error message. There we go. And then finally, we'll just provide a
368:43 link to the workflow. So see this execution here. And then we'll drag in
368:48 the link, which is all the way up top. And we should be good to go.
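Written out in expression mode, the whole Slack message looks roughly like this. The JSON paths below are how the dragged-in fields resolved for this execution, so double-check them against your own data:

    Workflow Error: {{ $json.workflow.name }}
    {{ $json.execution.error.node.name }} errored at {{ $now.format('yyyy-MM-dd HH:mm') }}
    The error message was: {{ $json.execution.error.message }}
    See this execution here: {{ $json.execution.url }}

And then if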
368:50 you want to make sure you're not sending over a little message at the bottom that
368:54 says this was sent from n8n, you're going to add an option. You're going to
368:56 click on include link to workflow. And then you're going to turn that off. And
368:59 now we hit test step. And we hop into Slack. And we can see we got workflow
369:02 error ultimate personal assistant. We have all this information. We can click
369:05 into this link. And we don't have this little message here that says automated
369:09 with this n8n workflow. So maybe you could just set up a channel dedicated
369:12 towards error logging, whatever it is. Okay, so let's save this real quick and
369:17 um let's just do another sort of like example. Um one thing to keep in mind is
369:20 there's a difference between the workflow actually erroring out and going
369:24 red and just something not working correctly. And I'll show you exactly
369:27 what I meant by that. So this example actually triggered the error workflow
369:32 because the execution on this side is red and it shows an error. But what
369:35 happens is, for example, with our Tavily tool right here, I have no
369:38 authentication pulled up. So this tool is not going to work. But if I come into
369:42 Telegram and say search the web for apples, it's going to work. This
369:46 workflow is going to go green even though this tool is not going to work.
369:49 And we'll see exactly why. So as you can see, it says I'm currently unable to
369:52 search the web due to a connection error. So if we go into the execution,
369:56 we can see that this thing went green even though it didn't work the way we
369:59 wanted it to. But what happened is the tool came back and it was green and it
370:02 basically just didn't work because our authentication wasn't correct. And then
370:06 you can even see in the think node, it basically said the web search function
370:09 is encountering an authentication error. I need to let the user know the search
370:11 isn't currently available and offer alternative ways to help blah blah blah.
370:15 But all of these nodes actually went green and were fine. So this example
370:19 did not trigger the error logger. As you can see, if we check here, there's
370:22 nothing. We check in Slack, there's nothing. So what we can do is we'll
370:26 actually make something error. So I'll go into this memory and it's going to be
370:29 looking for a session ID within the telegram trigger and I'm just going to
370:33 add an extra d. So this variable is not going to work. It's probably going to
370:36 error out and now we'll actually see something happen with our error
370:40 workflow. So I'm just going to say hey, and we will watch: basically nothing will
370:47 happen here. Okay, that confused me, but I realized I didn't save the workflow. So
370:49 now that we've saved it, this is not going to work. So let's once again say
370:52 hey. And we should see that nothing's going to come back over here. Um, I
370:56 believe if we go into our error logger, we should see something pop through. We
370:59 just got that row and you can see the node changed, the error message changed,
371:02 all that kind of stuff. And then in Slack, we got another workflow error at
371:07 a new time and it was a different node. And finally, we can just come into our
371:09 error logger workflow and click on executions. And we'll see the newest run
371:14 was the one that we just saw in our logs and in our Slack, which was the memory
371:19 node that errored. As you can see right here, simple memory. And so really the
371:22 question then becomes, okay, well what happens if this workflow in itself
371:26 errors, too? I really don't foresee that happening unless you're doing some sort
371:30 of crazy AI logic over here. But it really needs to just be as simple as
371:32 you're mapping variables from here somewhere else. So you really shouldn't
371:36 see any issues. Maybe an authentication issue, but I don't know. Maybe if
371:39 this workflow is erroring for you a ton, you probably are just doing
371:42 something wrong. Anyways, that's going to do it for this one. I know it was a
371:45 quicker one, but hopefully if you didn't know about this, it's something that you can put to use.
371:53 If you've ever wondered which AI model to use for your agents and you're tired
371:56 of wasting credits or overpaying for basic tasks, then this video is for you
371:59 because today I'm going to be showing you a system where the AI agent picks
372:04 its brain dynamically based on the task. This is not only going to save you
372:06 money, but it's going to boost performance and we're also getting full
372:09 visibility into the models that it's choosing based on the input and we'll
372:13 see the output. That way, all we have to do is come back over here, update the
372:16 prompt, and continue to optimize the workflow over time. As you can see,
372:19 we're talking to this agent in Slack. So, what I'm going to do is say, "Hey,
372:22 tell me a joke." You can see my failed attempts over there. And it's going to
372:25 get this message. As you can see, it's picking a model, and then it's going to
372:29 answer us in Slack, as well as log the output. So, we can see we just got the
372:32 response: why don't scientists trust atoms? Because they make up everything.
372:35 And if I go to our model log, we can see we just got the input, we got the
372:38 output, and then we got the model which was chosen, which in this case was
372:42 Google Gemini's 2.0 Flash. And the reason it chose Flash is because this
372:45 was a simple input with a very simple output and it wanted to choose a free
372:48 model so we're not wasting credits for no reason. All right, let's try
372:51 something else. I'm going to ask it to create a calendar event at 1 p.m. today
372:55 for lunch. Once this workflow fires off, it's going to choose the model. As you
372:58 can see, it's sending that over to the dynamic agent to create that calendar
373:02 event. It's going to log that output and then send us a message in Slack. So,
373:06 there we go: "I have created the calendar event for lunch at 1 p.m.
373:09 today. If you need anything else, just let me know." We click into the calendar
373:12 real quick. There is our lunch event at one. And if we go to our log, we can see
373:16 that this time it used OpenAI's GPT-4.1 mini. All right, we'll just do one more
373:20 and then we'll break it down. So, I'm going to ask it to do some research on
373:23 AI voice agents and create a blog post. Here we go. It chose a model. It's going
373:27 to hit Tavily to do some web research. It's going to create us a blog post, log
373:31 the output, and send it to us in Slack. So, I'll check in when that's done. All
373:35 right, so it just finished up and as you can see, it called the Tavily tool four
373:38 times. So, it did some in-depth research. It logged the output and we
373:42 just got our blog back in Slack as you can see. Wow. It is pretty thorough. It
373:47 talks about AI voice agents, the rise of voice agents. Um there's key trends like
373:51 emotionally intelligent interactions, advanced NLP, real-time multilingual
373:55 support, all this kind of stuff. Um that's the whole blog, right? It ends
373:57 with a conclusion. And if you're wondering what model it used for this
374:00 task, let's go look at our log. We can see that it ended up using Claude 3.7
374:04 Sonnet. And like I said, it knew it had to do research. So it hit the Tavily
374:08 tool four different times. The first time it searched for AI voice agents
374:12 trends, then it searched for case studies, then it searched for growth
374:16 statistics, and then it searched for ethical considerations. So, it made us a
374:21 pretty like holistic blog. Anyways, now that you've seen a quick demo of how
374:24 this works, let's break down how I set this up. So, the first things first,
374:27 we're talking to it in Slack and we're getting a response back in Slack. And as
374:31 you can see, if I scroll up here, I had a few fails at the beginning when I
374:34 was setting up this trigger. So, if you're trying to get it set up in Slack,
374:37 um it can be a little bit frustrating, but I have a video right up here where I
374:40 walk through exactly how to do that. Anyways, the key here is that we're
374:44 using OpenRouter as the chat model. So, if you've never used OpenRouter, it's
374:47 basically a chat model provider that you can connect to, and it will let you
374:50 route to any model that you want. So, as you can see, there's 300-plus models
374:54 that you can access through OpenRouter. So, the idea here is that we have the
374:57 first agent, which is using a free model like Gemini 2.0 Flash. We have this one
375:02 choosing which model to use based on the input. And then whatever this model
375:05 chooses, we're using down here dynamically for the second agent to
375:09 actually use in order to use its tools or produce some sort of output for us.
375:12 And just so you can see what that looks like, if I come in here, you can see
375:15 we're using a variable. But if I got rid of that and we change this to fixed, you
375:18 can see that we have all of these models within our OpenRouter dynamic brain to
375:23 choose from. But what we do is instead of just choosing from one of these
375:26 models, we're basically just pulling the output from the model selector agent
375:30 right into here. And that's the one that it uses to process the next steps. Cool.
375:34 So let's first take a look at the model selector. What happens in here is we're
375:38 feeding in the actual text that we sent over in Slack. So that's pretty simple.
375:42 We're just sending over the message. And then in the system message here, this is
375:45 where we actually can configure the different models that the AI agent has
375:49 access to. So I said, "You're an agent responsible for selecting the most
375:52 suitable large language model to handle a given user request. Choose only one
375:55 model from the list below based strictly on each model's strengths." So we told
375:59 it to analyze the request and then return only the name of the model. We
376:02 gave it four models. And
376:05 down here, under available models and strengths, we
376:09 basically defined what each one's good at. Obviously, you could give it more than four if
376:12 you wanted to, but just for the sake of the demo, I only gave it four. And then
376:16 we basically said, return only one of the following strings. And as you can
376:19 see in this example, it returned Anthropic Claude 3.7 Sonnet.
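As a sketch, the heart of that system message is just a constrained menu. The model IDs below are OpenRouter-style strings and purely illustrative; swap in whichever four models you actually configure:

    Analyze the request and return ONLY one of the following strings, with
    no other text:

    google/gemini-2.0-flash-001   (simple chat, quick answers; free/cheap)
    openai/gpt-4.1-mini           (everyday tool use like calendar and email)
    anthropic/claude-3.7-sonnet   (research and long-form writing)
    openai/o1                     (logic puzzles and multi-step reasoning)

And so one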
376:23 quick thing to note here is when you use Gemini 2.0 Flash, for some reason it
376:28 likes to output a new line after a lot of these strings. So all I had to do later was
376:33 clean up this new line, and I'll show you exactly what I mean by that. But now we
376:37 have the output of our model and then we move on to the actual Smarty Pants
376:40 agent. So in this one, we're giving it the same user message as the previous
376:44 agent where we're just basically coming to our Slack trigger and we're dragging
376:47 in the text from Slack. And what I wanted to show you guys is that here we
376:51 have a system message and all I gave it was the current date and time. So I
376:54 didn't tell it anything about using Tavily for web search. I didn't tell it how to
376:57 use its calendar tools. This is just going to show you that it's choosing a
377:00 model intelligent enough to understand the tools that it has and how to use
377:04 them. And then of course the actual dynamic brain part. We looked at this a
377:07 little bit, but basically all I did is I pulled in the output of the previous
377:12 agent, the model selector agent. And then, like I said, we had to just trim
377:16 up the end because if you just dragged this in and OpenRouter was trying to
377:20 reference a model that had a new line character after it, it would basically
377:23 just fail and say this model isn't available. So, I trimmed up the end and
377:26 that's why.
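Concretely, the model field of the OpenRouter node is just an expression that reaches back to the selector's output and trims it. The node name in $('...') is this workflow's; yours may differ:

    // Model field of the OpenRouter chat model, in expression mode.
    // .trim() strips the trailing newline Gemini 2.0 Flash likes to add,
    // which would otherwise make OpenRouter say the model isn't available.
    {{ $('Model Selector').item.json.output.trim() }}

And you can see in my OpenRouter account, if I go to my activity,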
377:31 we can see which models we've used and how much they've cost. So, anyways,
377:35 Gemini 2.0 Flash is a free model, but if we use it through OpenRouter, they have
377:38 to take a little bit of a cut, you know, they've got to get some kickback there. So,
377:41 it's not exactly free, but it's really, really cheap. But the idea here is, you
377:44 know, Claude 3.7 sonnet is more expensive and we don't need to use it
377:48 all the time, but if we want our agent to have the capability of using Claude
377:51 at some point, then we probably would just have to plug in Claude. But now, if
377:55 you use this method, if you want to talk to the agent just about some general
377:58 things or looking up something on your calendar or sending an email, you don't
378:01 have to use Claude and waste these credits. You could go ahead and use a
378:05 free model like 2.0 Flash or still a very powerful cheap model like GPT 4.1
378:09 Mini. And that's not to say that 2.0 Flash isn't super powerful. It's just
378:12 more of a lightweight model. It's very cheap. Anyways, that's just another cool
378:15 thing about OpenRouter. That's why I've gotten in the habit of using it because
378:19 we can see the tokens, the cost, and the breakdown of different models we've
378:22 used. From there, we're feeding the output into a Google sheet template,
378:24 which by the way, you can download this workflow as well as these other ones
378:28 down here that we'll look at in a sec. You can download all this for free by
378:31 joining my free Skool community. All you have to do is go to YouTube
378:34 resources or search for the title of this video and when you click on the
378:37 post associated with this video, you'll have the JSON, which is the n8n workflow
378:41 to download as well as you'll see this Google sheet template somewhere in that
378:44 post so that you can just basically copy it over and then you can plug everything
378:48 into your environment. Anyways, just logging the output of course and we're
378:51 sending over a timestamp. So I just said, you know, whatever time this
378:54 actually runs, you're going to send that over. The input is the Slack message
378:57 that triggered this workflow. The output, I'm basically just bringing the
379:01 output from the Smarty Pants agent right here. And then the model is the
379:04 output from the model selector agent. And then all that's left to do is send
379:08 the response back to the human in Slack where we connected to that same channel
379:11 and we're just sending the output from the agent. So hopefully this is just
379:15 going to open your eyes to how you can set up a system so that your actual main
379:19 agent is dynamically picking a brain to optimize your cost and performance. And
379:23 in a space like AI where new models are coming out all the time, it's important
379:26 to be able to test out different ones for their outputs and see like what's
379:30 going on here, but also to be able to compare them. So, two quick tools I'll
379:34 show you guys. This first one is Vellum, which is an LLM leaderboard. You can
379:38 look at like reasoning, math, coding, tool use. You have all this stuff. You
379:41 can compare models right here where you can select them and look at their
379:45 differences. And then also down here is model comparison with um all these
379:49 different statistics you can look at. You can look at context window, cost,
379:53 and speed. So, this is a good website to look at, but just keep in mind it may
379:55 not always be completely up to date. Right here, it was updated on April
380:00 17th, and today is the 30th, so it doesn't have, like, the 4.1 models. Anyways,
380:04 another one you could look at is this LM Arena. So, I'll leave the link for this
380:07 one also down in the description. You can basically compare different models
380:09 by chatting with them like side by side or direct. People give ratings and then
380:13 you can look at the leaderboard for like an overview or for text or for vision or
380:17 for whatever it is. just another good tool to sort of compare some models.
380:21 Anyways, we'll just do one more quick one before we go on to the example down
380:24 below. Um because we haven't used the reasoning model yet and those are
380:28 obviously more expensive. So, I'm asking it a riddle. I said, you have three
380:31 boxes. One has apples, one has only oranges, and one has a mix of both.
380:35 They're all incorrectly labeled and you can pick one fruit from one box without
380:38 looking. How can you label all boxes correctly? So, let's see what it does.
380:42 Hopefully, it's using the reasoning model. Okay, so it responded with a
380:45 succinct way to see it is to pick one piece of fruit from the box labeled
380:49 apples and oranges. Since that label is wrong, the box must actually contain
380:53 only apples or only oranges. Whatever fruit you draw tells you which single
380:56 fruit box that really is. Once you know which box is purely apples or purely
381:00 oranges, you can use the fact that all labels are incorrect to deduce the
381:04 proper labels for the remaining two boxes. And obviously, I had ChatGPT sort
381:07 of give me that riddle and that's basically the answer it gave back. So,
381:11 real quick, let's go into our log and we'll see which model it used. And it
381:16 used OpenAI's o1 reasoning model. And of course, we can just verify that by
381:18 looking right here. And we can see it is OpenAI o1. So, one thing I wanted to
381:22 throw out there real quick is that OpenRouter does have sort of like an auto
381:26 option. You can see right here, openrouter/auto, but it's not going to give
381:30 you as much control over which models you can choose from, and it may not be
381:34 as cost-efficient as being able to define here are the four models you have, and
381:37 here's when to use each one. So, just to show you guys like what that would do if
381:40 I said, "Hey," it's going to use its model and it's going to pick one based
381:43 on the input. And here you can see that it used GPT-4o mini. And then if I go
381:47 ahead and send in that same riddle that I sent in earlier, remember earlier it
381:51 chose the reasoning model, but now it's going to choose probably not the
381:54 reasoning model. So anyways, looks like it got the riddle right. And we can see
381:57 that the model that it chose here was just GPT-4o. So I guess the argument is
382:00 yes, this is cheaper than using 01. So if you want to just test out your
382:04 workflows by using the auto function, go for it. But if you do want more control
382:07 over which models to use, when to use each one, and you want to get some
382:10 higher-quality outputs in certain scenarios, then you probably want to take the more
382:13 custom route. Anyways, just thought I'd drop that in there. But let's get back
382:16 to the video. All right, so now that you've seen how this agent can choose
382:19 between all those four models, let's look at like a different type of example
382:22 here. Okay, so down here we have a RAG agent. And this is a really good use
382:25 case in my mind because sometimes you're going to be chatting with a knowledge
382:28 base and it could be a really simple query like, can you just remind me what
382:31 our shipping policy is? Or something like that. But if you wanted to have
382:35 like a comparison and like a deep lookup for something in the knowledge base,
382:38 you'd probably want more of a, you know, a more intelligent model. So we're doing
382:41 a very similar thing here, right? This agent is choosing the model, itself running on a free
382:45 model, and then it's going to feed that selection into the dynamic brain for
382:49 the RAG agent to do its lookup. And um what I did down here is I just put a
382:52 very simple flow, if you wanted to, to download a file into Supabase just so
382:57 you can test out this Supabase RAG agent up here. But let's chat with this
383:00 thing real quick. Okay, so here is my policy and FAQ document, right? And then
383:04 I have my Supabase table where I have these four vectors in the documents
383:07 table. So what we're going to do is query this agent for stuff that's in
383:11 that policy and FAQ document. And we're going to see which model it uses based
383:15 on how complex the query is. So if I go ahead and fire off what is our shipping
383:18 policy, we'll see that the model selector is going to choose a model,
383:22 send it over, and now the agent is querying Supabase and it's going to
383:25 respond with, here's Tech Haven's shipping policy: orders are processed
383:28 within one to two days, standard shipping takes three to seven business days, blah
383:31 blah blah. And if we compare that with the actual documentation, you can see
383:35 that that is exactly what it should have responded with. And you'll also notice
383:38 that in this example, we're not logging the outputs just because I
383:41 wanted to show a simple setup. But we can see the model that it chose right
383:46 here was GPT-4.1 mini. And if we look in this actual agent, you can see that we
383:50 only gave it two options, which was GPT-4.1 mini and Anthropic Claude 3.5 Sonnet,
383:54 just because of course I just wanted to show a simple example. But you could up
383:58 this to multiple models if you'd like. And just to show that this is working
384:02 dynamically, I'm going to say what's the difference between our privacy policy
384:05 and our payment policy. And what happens if someone wants to cancel their order
384:08 or return an item? So, we'll see. Hopefully, it's choosing the Claude model
384:11 because this is a little bit more complex. Um, it just searched the vector
384:15 database. We'll see if it has to go back again or if it's writing an answer. It
384:18 looks like it's writing an answer right now. And we'll see if this is accurate.
384:22 So, privacy versus payment. We have: privacy focuses on data protection;
384:26 payment covers accepted payment methods. Um, what happens if someone wants to
384:28 cancel the order? We have: orders can be cancelled within 12
384:32 hours. And we have a refund policy as well. And if we go in here, we could
384:36 validate that all this information is on here. And we can see this is how you
384:41 cancel. And then this is how you refund. Oh yeah, right here. Visit our returns
384:44 and refund page. And we'll see what it says is that here is our return and
384:47 refund policy. And all this information matches exactly what it says down here.
384:52 Okay. So those are the two flows I wanted to share with you guys today.
384:54 Really, I just hope that this is going to open your eyes to the fact that you
384:58 can have models be dynamic based on the input, which really in the long run will
385:02 save you a lot of tokens for your different chat models. All right, so now
385:05 we're going to move on to webhooks, which I remember seemed really
385:08 intimidating as well, just like APIs and HTTP requests, but they're really even
385:12 simpler. So, we're going to dive into what exactly they are and show an
385:16 example of how that works in n8n. And then I'm going to show you guys two
385:19 workflows that are triggered by n8n webhooks and then we send data back to that
385:22 webhook. So, don't worry, you guys will see exactly what I'm talking about.
385:27 Let's get into it. Okay, webhooks. So, I remember when I first learned about
385:31 APIs and HTTP requests, and then I was like, what in the world is a webhook?
385:36 They're pretty much the same thing, except think about it like this:
385:41 with a webhook, rather than us sending off data somewhere or like sending off an
385:46 API call, we are the one that's waiting for an API call. We're just waiting and
385:51 listening for data. So let me show you an example of what that actually looks
385:55 like. So here you can see we have a webhook trigger, and a webhook is always
385:58 going to come in the form of a trigger because essentially our n8n workflow is
386:03 waiting for data to be sent to it. Whether that's like a form is submitted
386:07 and now the webhook gets it, or whatever that is, we are waiting for the data
386:10 here. So when I click into the webhook, what we see is we have a URL, and this is
386:15 basically the URL that wherever we're sending data from is going to send data
386:21 to. So later in this course, I'm going to show you an example where I do like
386:25 an 11Labs voice agent. And so our n8n webhook URL right here, that's where 11
386:29 Labs is sending data to. Or I'll show you an example with Lovable where I
386:33 build a little app and then our app is sending data to this URL. So that's how
386:37 it works, right? Important things to remember is you still have to set up
386:40 your method. So if you are setting up some sort of request on a different
386:44 service and you're sending data to this webhook, it's probably going to be a
386:47 POST. So, you'll want to change that and make sure that these actually align.
386:51 Anyways, what we're going to do is we're going to change this to a POST because
386:54 I just know it's going to be POST. And this is our webhook URL, which is a
386:59 test URL. So, I'm going to click on the button to copy it. And what I'm going to
387:02 do is take this and I'm going to go into Postman, which is just kind of like an
387:06 API platform that I use to show some demos of how we can send these requests.
387:10 So, I'm going to click on send an API request in here. This basically just
387:14 lets us test out and see if our webhook's working. So what I'm going to do
387:18 is I'm going to change the request to POST. I'm going to enter the webhook
387:23 URL from my n8n webhook. Okay. And now basically what we have is the ability to
387:27 send over certain information. So I'm just going to go to body. I'm going to
387:31 send over form data. And now you can see just like JSON, it's key value pairs. So
387:35 I'm just going to send over a field called text. And the actual value is
387:39 going to be hello. Okay. So that's it. I'm going to click on send.
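If you'd rather script it than use Postman, the equivalent request looks like this. The URL is a placeholder; copy your own test URL from the webhook node:

    // POST form data to the n8n test webhook, just like Postman does.
    const form = new FormData();
    form.append("text", "hello");

    const res = await fetch("https://your-instance.app.n8n.cloud/webhook-test/abc123", {
      method: "POST", // must match the method configured on the webhook node
      body: form,
    });
    console.log(res.status, await res.text()); // 404 until the webhook is listening

But what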
387:43 happens is we're going to get a response back, which is a 404 error. It says the
387:48 webhook is not registered, and basically there's a hint that says click the test
387:53 workflow button. So, because the webhook
387:57 is supposed to be listening right now, it's not listening. And the reason it's
388:00 not listening is because we are in an inactive workflow, like we've talked
388:03 about before, and we haven't clicked listen for test event. So if I click
388:08 listen now, you can see it is listening for this URL. Okay. So, I go back into
388:13 Postman. I hit send. And now it's going to say workflow was started. I come back
388:17 into here. We can see that the node has executed. And what we have is our text
388:21 that we entered right there in the body, which was text equals hello. We also get
388:24 all this other random stuff that we don't really need. Um, I don't even know
388:28 what this stuff really stands for. We can see our host was our n8n cloud
388:33 account. All this kind of stuff. Um, but that's not super important for us right
388:36 now, right? I just wanted to show that that's how it works. So, they're both
388:41 configured as POST. Our Postman is sending data to this address, and we saw
388:47 that it worked, right? So what comes next is the fact that we can respond to this
388:52 webhook. So right now we're set to respond immediately, which would be
388:56 sending data back right away. But later you'll see an example with 11
389:00 Labs and my n8n workflow where we want to grab data, do something with it,
389:04 and send something back. And how that would work is we would change this
389:08 method to using respond to webhook node, and then we basically like add
389:11 maybe an AI agent right here, and then after the AI agent we would basically
389:15 just go respond to webhook, and as you can see, it just returns data to that
389:18 same webhook address. So that would just be sending data back to Lovable, 11Labs,
389:23 Postman, wherever we actually got the initial request from; we would do
389:26 something with that data and send it back to that webhook. So really, it's
389:31 the exact same way that we think about APIs except for we are now the API
389:35 server rather than the person who's sending the request. So if you think
389:39 about the example with um being at a restaurant, you know, like we would look
389:44 at the menu, we would send off a request through the waiter to the kitchen and
389:47 then the kitchen would get that request via a webhook. They would create the
389:52 food and then they would send it back to the actual person via responding to the
389:56 webhook. So that's all it is: we are just on the other side of the table now.
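If it helps to see the kitchen's side of the table in code, a webhook trigger plus a respond to webhook node behaves like this tiny HTTP server (illustrative only; n8n handles all of this for you):

    // Receive a request, do something with it, respond to the same caller.
    import { createServer } from "node:http";

    createServer((req, res) => {
      let body = "";
      req.on("data", (chunk) => (body += chunk)); // the incoming "order" arrives
      req.on("end", () => {
        const reply = `You sent: ${body}`;        // the work (e.g. the AI agent step)
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end(reply);                           // "respond to webhook"
      });
    }).listen(3000);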
390:00 And then one last important thing to know about webhooks in n8n is that, to use one for real,
390:05 you have to make the workflow active. So we would click on active, but what it's going to
390:09 tell us is that you can now make calls to your production webhook URL, no
390:14 longer the test URL. So if you remember, in here we have test and production, and
390:18 what I copied into Postman was the test. So if I go back into Postman and I hit
390:23 send, nothing happens, right? But if I now go to production URL, copy the
390:28 production, go into Postman and paste in the production URL, hit send. There we
390:33 go. Workflow was started because that's the actual active workflow production
390:38 URL. So I can click on executions and we can see this is the newest one we just
390:41 got when we made our workflow active and it did in fact start. And just to prove
390:45 to you that that's how it works, let me change the body, which now, instead of
390:49 saying hello, is going to say active run. Oops. And we'll send that off. Go into
390:58 n8n, and we'll basically just watch: a new execution will pop up because it's
391:02 actively listening. I'm not doing anything. There you go. You see it pop
391:06 up. And if we click into the actual webhook data, we will see right here the
391:11 text says active run. Cool. So that's webhooks. Hopefully that clears up any
391:13 confusion that you may have had regarding webhooks. But let's dive into
391:17 some examples in n8n where we have a webhook trigger and then we're also
391:20 configuring that other third-party app that's going to send data to our
391:29 webhook. Hey there, I'm Archer. What can I do for you? Hey Archer, can you
391:32 help me send an email? Of course, I'd be delighted to help. Who's the email for and what's it
391:40 about? Just give me the details and I'll get it sent out. Awesome. Yeah, I just
391:44 wanted to send one to Phil. I wanted to ask about an update for the project that
391:49 we discussed last week. Got it. So, you want to send an email to Phil asking
391:53 about a project update? No problem. I'll handle that. Just hang
391:58 tight while I send this off. All done. It's been sent to Phil. He'll be getting
392:02 that email any minute. Anything else I can help you with? Yeah, could you
392:04 actually go ahead and help me send another one? Of course. I'm always happy to lend a
392:10 hand, or rather an email address. Who is this email for and what's it about? Just
392:15 give me the details and I'll whip it up. Just another one to Phil. I forgot to
392:18 mention that I wanted to see if he wants to get lunch tomorrow. All right, another email to Phil. This
392:26 time to ask about lunch tomorrow. I'm on it. Just a sec. Gotcha. The email is on
392:31 its way to Phil. He'll be sure to see your invitation for lunch tomorrow. Let
392:35 me know if you need anything else. No, that's going to be it. Thanks, Archer.
392:39 You're very welcome. Always happy to help, especially when it comes to
392:42 emails. Happy to handle any other messages you need to send in the future.
392:46 Have a great day. Okay, so that was a quick demo of the super cool
392:49 conversational voice agent that we built using ElevenLabs and n8n. His name's Archer
392:54 and he helps us send emails. As you can tell, he was very enthusiastic about
392:58 sending emails. So, here's that first email that we asked Archer to send to
393:01 Phil. Hi, Phil. I hope this message finds you well. I'm writing to inquire
393:03 about the project update we discussed last week. Could you please share the
393:06 latest developments at your earliest convenience? Looking forward to your
393:09 response. Best Nate. And then we asked Archer to send another email just asking
393:12 if he wants to get lunch tomorrow to Phil. So, hi Phil. I was wondering if
393:16 you're available for lunch tomorrow. Let me know what works for you. Best Nate.
393:19 So, now that we've seen a quick demo, we heard the voice. We've seen the emails
393:21 actually come through. We're going to hop back into n8n and we're going to
393:24 explain what's going on here so that you guys can get this sort of system up and
393:27 running for yourselves. Okay, so there are a few things that I want to break
393:30 down here. First of all, just within n8n, whenever you're building an AI
393:33 agent, as you guys should know, there's going to be an input and then that input
393:37 is going to be fed into the agent. The agent's going to use its system prompt
393:41 and its brain to understand what tools it needs to hit. It's going to use those
393:44 tools to take action and then there's going to be some sort of output. So, in
393:47 the past when we've done tutorials on personal assistants, email agents,
393:51 whatever it was, RAG agents, usually the input that we've been using has
393:56 been something like Telegram or Gmail or even just the n8n chat trigger. Pretty
393:59 much all we're switching out here for the input and the output is ElevenLabs. So,
394:04 we're going to be getting a POST request from ElevenLabs, which is going to send
394:07 over the body parameters like who the email is going to, um, what the message
394:11 is going to say, stuff like that. And then the agent, once it actually does
394:14 that, is going to respond using this Respond to Webhook node. So, we'll get
394:17 into ElevenLabs and I'll show you guys how I prompted the agent and everything like
394:21 that in ElevenLabs. But first, let's take a quick look at what's going on in the
394:25 super simple agent setup here in n8n. So, these are tools that I've used multiple
394:28 times on videos on my channel. The first one is contact data. So, it's just a
394:31 simple Google Sheet. This is what it looks like. Here's Phil's information
394:34 with the correct Gmail address that we were having information sent to. And then I
394:37 just put other ones in here as sort of dummy data. But all we're doing is
394:42 we're hooking up the tool, um, Google Sheets. It's going to be reading via the get
394:46 row(s) in sheet operation within the document. We link the document. That's pretty much all we
394:49 had to do. Um, and then we just called it contact data so that when we're
394:52 prompting the agent, it knows when to use this tool, what it has. And then the
394:56 actual tool that sends emails is the send email tool. So in here we're
395:01 connecting a Gmail tool. Um, for this one, you know, we're using all the $fromAI
395:05 functions, which make it really, really easy. Um, we're sending a message, of
395:09 course, and so the $fromAI function basically takes the query coming in from
395:14 the agent and dynamically figures out, okay, what's the
395:17 email address based on the user's message? Okay, we grab the email address and
395:20 we're going to put it in the 'to' parameter. How can we make a subject out
395:24 of this message? We'll put it here. And then how can we actually construct an
395:28 email body? We put it there. So that's all that's going on here. Here
395:31 we've got our tools. We've obviously got a chat model. In this case, we're just
395:34 using, um, GPT-4o. And then we have what's actually taking place within the agent. So
395:41 obviously there's an input coming in. So that's where we define this information:
395:46 input, agent, output, and then the actual system message for the agent. So the
395:49 system message is a little bit different than the user message. The system message
395:53 is defining the role: this is your job as an agent, this is what you should be
395:57 doing, these are the tools you have. And then the user message changes with each
396:01 execution, each run. Each time that we interact with the agent through
396:05 ElevenLabs, it's going to be a different user message coming in, but the system
396:07 message is always going to remain the same, as it's the prompt for the AI
396:11 agent's behavior. Anyways, let's take a look at the prompt that we have here.
396:15 First, the overview is that you are an AI agent responsible for drafting and
396:19 sending professional emails based on the user's instructions. You have access to
396:23 two tools, contact data to find email addresses and send email to compose and
396:27 send emails. Your objective is to identify the recipient's contact
396:31 information, draft a professional email and sign off as Nate before sending. The
396:36 tools you have obviously uh contact data. It retrieves email addresses based
396:39 on the name. So we have an example input John Doe. Example output an email
396:43 address and then send email. Sends an email with a subject and a body. The
396:47 example input here is an email address, um, a subject, and a body, with an example email
396:52 subject and body. Um, so that's what we have for the system message. And then for the
396:57 um user message, as you can see, we're basically just saying um okay, so the
397:01 email is going to be for this person and the email content is going to be this.
397:04 So in this case, for this execution, it was: the email's for Phil and the email
397:08 content is asking about lunch tomorrow. So that's all that we're being fed in
397:13 from ElevenLabs. And then the agent takes that information to grab the contact
397:17 information, and then it uses its AI brain to make the email message.
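To make that round trip concrete, here's a loose Python sketch of what the agent is effectively doing on each request. I'm writing the body keys as "to" and "email_content" to match what we set up in ElevenLabs (use whatever exact names you configured), and lookup_contact and draft_email are just hypothetical stand-ins for the contact data and send email tools:

```python
# Rough sketch of the agent's job on each webhook request.
# lookup_contact and draft_email are hypothetical stand-ins for the
# Contact Data (Google Sheets) and Send Email (Gmail) tools.

CONTACTS = {"Phil": "phil@example.com"}  # the Google Sheet, in miniature

def lookup_contact(name: str) -> str:
    return CONTACTS[name]

def draft_email(content: str) -> dict:
    # In the real workflow the LLM writes the subject and body itself;
    # here we just stub that out.
    return {"subject": "Quick question", "body": f"Hi,\n\n{content}\n\nBest,\nNate"}

def handle_webhook(body: dict) -> dict:
    address = lookup_contact(body["to"])        # contact data tool
    email = draft_email(body["email_content"])  # agent drafts the email
    # ... the Gmail tool would send `email` to `address` here ...
    return {"output": f"The email to {body['to']} has been successfully sent."}

print(handle_webhook({"to": "Phil", "email_content": "asking about lunch tomorrow"}))
```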
397:22 Finally, it basically just responds to the webhook with, um, the email to Phil
397:26 regarding lunch tomorrow has been successfully sent, and then ElevenLabs
397:30 captures that response back and then it can respond to us with: gotcha, we were
397:33 able to send that off for you. Is there anything else you need? So, that's
397:37 pretty much all that's going on here. Um, if you see in the actual webhook, what
397:41 we're getting here is, you know, there are different things coming back. We have
397:44 different little technical parameters, all this kind of stuff. All that we want
397:48 to configure, and I'll show you guys how we configure this in ElevenLabs, is the
397:52 JSON body request that's being sent over. So we're in table format. If we
397:56 went to JSON, we could see down here we're looking at the body. In the body, we
398:00 set up two fields to send over from ElevenLabs to n8n using that POST request
398:05 webhook. The first field that we set up was 'to'. And as you can see, that's where
398:11 the ElevenLabs model, based on what we say, figures out who the email is going to
398:15 and puts that there, and then figures out what's the email content. What do you
398:17 want me to say in this email? And then it throws that in here. So, um, that's how
398:22 that's going to work as far as setting up the actual webhook node right here.
398:27 Um, we wanted to switch this to a POST method because ElevenLabs is
398:32 sending us information. Um, we have a test URL and a production URL. The test
398:36 one we use for now, and we have to manually have n8n listen for a test
398:41 event. Um, I will show an example of what happens if we don't actually do this
398:44 later in the video. But when you push the app into production and you make the
398:48 workflow active, you would want to put this webhook in ElevenLabs as the
398:52 production URL rather than the test URL, so that you can make sure that the
398:56 stuff's actually coming over. We put our path as n8n just to clean up this URL;
399:01 all that it does is change the URL. Um, and then for authentication we put none. And
399:04 then finally, for response, instead of doing immediately or when last node
399:08 finishes, we want to do using the Respond to Webhook node. That way we get the
399:12 information, the agent does its thing and then responds, and then all we have here
399:15 is Respond to Webhook. So it's very simple. As you can see, it's really only
399:19 four nodes, you know: the brain, um, and then the two tools and the
399:24 webhooks. So, um, hopefully that all made sense. We are going to hop into ElevenLabs
399:29 and start playing around with this stuff. Also, quick side note, if you
399:32 want to hop into this workflow, check out the prompts, play around with how I
399:36 configured things. Um, you'll be able to download this workflow for free in the
399:39 free Skool community. The link for that will be down in the description. You'll
399:41 just come into here, you'll click on YouTube resources, you will click on the
399:45 post associated with this video, and then you're able to download the
399:48 workflow right here. Once you download the workflow, you can import it from
399:52 file, and then you will have this exact canvas pop up on your screen. Then, if
399:55 you're looking to take your skills with n8n a little bit further, feel free to
399:58 check out my paid community. The link for that will also be down in the
400:01 description. Great community in here. A lot of people obviously are learning n8n
400:05 and, um, asking questions, sharing builds, sharing resources. Got a great classroom
400:09 section going over, you know, client builds and some deep dive topics as well
400:13 as five live calls per week. So, you can always make sure you're getting your
400:15 questions answered. Okay. Anyways, back to the video. So, in ElevenLabs, this is
400:19 the email agent. This is just the test environment where we're going to be
400:22 talking to it to try things out. So, we'll go back and we'll see how we
400:25 actually configured this agent. And if you're wondering why I named him
400:28 Archer, it's just because his actual voice is Archer. So, um, that wasn't my
400:32 creativity there. Anyways, once we are in the configuration section of the
400:35 actual agent, we need to set up a few things. So, first is the first message.
400:39 Um, pretty much, when we click on call, the agent is going to say, "Hey
400:42 there, I'm Archer. What can I do for you?" Otherwise, um, if we leave this
400:45 blank, then we will be the ones to start the conversation. But from there, you
400:49 will set up a system prompt. So in here, this is a prompt I have is you are a
400:53 friendly and funny personal assistant who loves helping the user with tasks in
400:56 an upbeat and approachable way. Your role is to assist the user with sending
401:00 emails. When the user provides details like who the email is for and what it's
401:04 about, you will pass that information to the n8n tool and wait for its response.
401:08 I'll show you guys in a sec how we configure the n8n tool and how all that
401:12 works. But anyways, once you get confirmation from n8n that the email was
401:15 sent, cheerfully let the user know it's done and ask if there's anything else
401:19 you can help with. Keep your tone light, friendly, and witty while remaining
401:22 efficient and clear in your responses. So, as you can see in the system prompt,
401:25 I didn't even really put in anything about the way it should be conversing,
401:30 as far as like sounding natural and using filler words. Um, sometimes
401:34 I do that to make it sound more natural, but this voice I found just sounded
401:38 pretty good just as is. Then, we're setting up the large language model. Um,
401:42 right now we're using Gemini 1.5 Flash just because it says it's the fastest.
401:44 You have other things you can use here, but I'm just sticking with this one. And
401:48 so this is what it uses to extract information, pretty much, out of the
401:52 conversation to pass it to n8n, or to figure out how it's going to respond to
401:55 you. That's what's going on here. And then with temperature, um, I've talked
401:59 about how I like to put it a little bit higher, especially for some fun use cases like
402:02 this. Um, basically this is just the randomness and creativity of the
402:05 responses generated, so that it's always going to be a little different and it's
402:08 going to be a little more fun, um, the higher you put it. But if you wanted it to be
402:11 more consistent, and you had, like, you know, some sort
402:15 of information you were trying to get back exactly the way you want it, then you would probably want to lower this a
402:19 little bit. Um, and then you have stuff like the knowledge base. So if this was
402:24 maybe, like, um, a customer support agent, you'd be able to put some knowledge base in
402:27 there. Or if you watched my previous voice video about, um, sort of doing voice RAG,
402:32 you could still do the sending it to n8n, hitting a vector database from
402:35 n8n, and then getting the response back. But anyways, um, in this case,
402:39 this is where we set up the tool that we were able to call up here, as you saw in
402:44 the system prompt. So the tool n8n: this is where you're putting the
402:49 webhook URL from n8n. That's where you're putting that right
402:52 here. As you can see, um, it's webhook-test/n8n. The method is going to be a POST,
402:56 so we can send information from ElevenLabs to n8n. And we just named it n8n to
403:01 make the system prompt make more sense, um, just for me when I was
403:05 creating this. It makes sense to send something to the tool called n8n.
403:09 Anyways, as you can see, the description is use this tool to take action upon the
403:12 user's request. And so we can pretty much just leave it as that. We don't
403:15 have any headers or authorization going in here, but we do need to send over
403:19 body parameters. Um, otherwise, if we didn't have this, nothing would be sent
403:23 over to n8n at all. So the description of the body parameters is: in a friendly
403:28 way, ask the user to provide the name of the recipient and what the email is
403:31 about, unless they already provided that information. So the LLM understands that
403:36 when it's conversing with the human, it needs to extract the name of the person
403:41 to send an email to, and then what the email is actually going to say. As you
403:44 can see, we didn't have to put in, like, oh, what's the subject of the email, because
403:49 our AI agent in n8n can use its brain to figure out a subject. So we have two
403:53 parameters that we're sending over. If you remember when we were in here, we
403:56 got the two parameters from the body right here, which were 'to' and 'email
404:00 content'. And so that's, as you can see, what we set up here. So the first one
404:04 was a string. It was called 'to', and this is who the email is going to. And then
404:07 the second one was also a string. It's called 'email content', and this is what
404:12 the email is about. So we hit save changes. Now we have a tool that gets
404:16 called whenever, um, the voice agent realizes it has information to send over
404:20 to n8n to send an email, and it has the right parameters to do so. So now we
404:23 have this voice agent pretty much set up here with its tools and its system
404:26 prompt. We're going to test the agent. And remember earlier I talked about the
404:30 fact that right now we're testing. So we have to have the webhook actively
404:33 listening. Once this is in production, it's going to be listening all the time.
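By the way, you don't have to wait for ElevenLabs to test this; you can replay the same POST yourself and see both behaviors. A small sketch, with a placeholder URL:

```python
import requests

# Placeholder -- use the test URL from your own n8n webhook node.
WEBHOOK_TEST_URL = "https://your-account.app.n8n.cloud/webhook-test/n8n"

# Stand-in for the POST the ElevenLabs tool fires once the agent has
# extracted both body parameters from the conversation.
payload = {
    "to": "Phil",                                    # who the email is going to
    "email_content": "asking about lunch tomorrow",  # what the email is about
}

resp = requests.post(WEBHOOK_TEST_URL, json=payload)

# If n8n is listening for a test event, you get the agent's confirmation
# back; if it isn't, you get an error response, which is exactly the
# failure Archer reports in the second call below.
print(resp.status_code, resp.text)
```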
404:36 But right now we have to manually hit test event before we call it. Otherwise,
404:39 it's not going to work. So I'm going to talk to Archer, ask him to send an
404:42 email. It's going to go through and then we're going to ask him to send another
404:45 email where we don't hit this webhook test event and then we'll see what he
404:49 says. He's probably just going to say he couldn't access the database or he
404:52 couldn't find the contact information, something like that. Hey there, I'm
404:56 Archer. What can I do for you? Hey Archer, can you help me send an email?
405:00 Of course. I'd love to help. Who's the email for and what's it about? Tell me
405:03 the details and I'll whip it up in a jiffy. Sweet. Um, it's going to be for
405:09 Phil. Okay, so the email is for Phil. What's it about? I'm ready to get this
405:13 email sent out. Yeah, just let him know that I'm caught in traffic and I'm going
405:17 to be late to the meeting. Got it. So, you want to let Phil know
405:21 you're running late because of traffic. I'll get that email sent out right away.
405:26 All done. I just sent the email to Phil letting him know you're running late.
405:30 Anything else I can help you with? Yeah, I'm actually going to need to send
405:34 another one. No problem. Who's this one for and what's it about? I'm ready to
405:38 send email number two. Yeah. Can you let Phil know that um I actually just drove
405:43 my car off a bridge? Oh my goodness. I can't send an email like that. Maybe you should call Phil
405:49 instead. Is there someone else you want Um, no. Can you send an email to Phil?
405:58 I'm so sorry, but I can't send an email like that. It sounds like you're in a
406:02 difficult situation. Is there someone else you can reach out to for help?
406:07 Okay, didn't expect that. So, I guess the model doesn't want to say something
406:11 like that. We'll try again. Um, workflow is not active. So, let's see. Hey there,
406:16 I'm Archer. What can I do for you? Hey, Archer, can you send an email?
406:21 Of course, I'd be happy to help you with that. Who is the email for and what is
406:25 it about? I'll get it sent out in a jiffy. Just wanted to touch base with Phil
406:32 about um the meeting later today. Okay. So, you want to send an email to
406:36 Phil about the meeting later today. Got it. I'll get this done right away.
406:39 Anything else I can help with today? Oops. Looks like there was an error
406:43 sending that email. It's possible there's a typo in the email address or
406:47 something is wrong on my end. Let me try again. Can you please double check the
406:50 email address? Okay. Anyway, so that's how that works. As you can see, the
406:54 second time the webhook wasn't active, so it wasn't able to send that; um, the request
407:03 didn't get passed through to n8n. So, that's going to be it for this one. I hope that
407:05 everything made sense. Um it's just really cool how easy it basically is to
407:09 switch out an input and you can have the agent function the same. Obviously, a
407:12 few things would change as you start to add more tools. Your user message would
407:15 have to be tweaked a little bit. You'd have to tweak the actual system prompts
407:18 a little bit. But, as you can see, in this one we kept it very, very simple.
407:21 Basically, we just told it its role and gave it the two tools and how to use
407:25 them. And as you can see, um, it was pretty seamless as far as being able to
407:29 have the agent fill things in and make the emails easily. Today, we're going to be talking
407:36 about how you can build anything with Lovable and n8n. So we're going to be
407:40 doing a live build of spinning up a web page with Lovable and then also building
407:44 the backend in n8n. But first of all, I wanted to go over, high level, what this
407:47 architecture looks like. So right here is Lovable. This is what we're starting
407:50 off with. And this is where we're going to be creating the interface that the
407:53 user is interacting with. What we do here is we type in a prompt in natural
407:57 language and Lovable basically spins up that app in seconds. And then we're able
408:01 to talk back and forth and have it make minor fixes for us. So what we can do is
408:05 when the user inputs information into our Lovable website, it can send that
408:09 data to n8n. The n8n workflow that we're going to set up can, you know, use
408:13 an agent to take action in something like Gmail or Slack, Airtable or
408:17 QuickBooks, and then n8n can send the data back to Lovable and display it to
408:21 the user. And this is really just the tip of the iceberg; there are also some really
408:25 cool integrations with Lovable and Supabase or Stripe or Resend, so there are
408:29 a lot of ways you can really use Lovable to develop a full web app. And so while
408:32 we're talking high level, I just wanted to show you an example flow of what this
408:35 n8n workflow could look like, where we're capturing the information the user is
408:40 sending from Lovable via webhook. We're feeding that to a large language model
408:43 to create some sort of content for us, and then we're sending that back and it
408:46 will be displayed in the Lovable web app. So let's head over to Lovable and
408:49 get started. So if you've never used Lovable before, don't worry. I'm going
408:52 to show you guys how simple it is. You can also sign up using the link in the
408:56 description for double the credits. Okay, so this is all I'm going to start
408:58 with just to show you guys how simple this is. I said, "Help me create a web
409:02 app called Get Me Out of This, where a user can submit a problem they're
409:05 having." Then I said to use this image as design inspiration. So, I Googled
409:09 landing page design inspiration, and I'm just going to take a quick screenshot of
409:12 this landing page, copy that, and then paste it into Lovable. And then we'll
409:16 fire this off. Cool. So, I just sent that off. And on the right hand side,
409:19 we're seeing it's going to spin up a preview. So, this is where we'll see the
409:22 actual web app that it's created and get to interact with it. Right now, it's
409:25 going to come through and start creating some code. And then on the left-hand
409:27 side is where we're going to have that back and forth chat window to talk to
409:31 Lovable in order to make changes for us. So right now, as you can see, it's going
409:34 to be creating some of this code. We don't need to worry about this. Let's go
409:37 into n8n real quick and get this workflow ready to send data to. Okay, so
409:42 here we are in n8n. If you haven't used this app before either, there'll be a link
409:44 for it down in the description. It's basically just going to be a workflow
409:47 builder and you can get a free trial just to get started. So you can see I
409:50 have different workflows here. We're going to come in and create a new one.
409:53 And what I'm going to do is we're gonna add a first step that's basically
409:56 saying, okay, what actually triggers this workflow. So I'm gonna grab a
410:00 webhook. And all a webhook is, you know, is this: it's
410:04 basically just a trigger that's going to be actively listening for something to
410:08 send data to it, and data is received at this URL. And so right now
410:11 there's a test URL and there's a production URL. Don't worry about that.
410:15 We're going to click on this URL to copy it to our clipboard. And basically we
410:18 can give this to Lovable and say, "Okay, whenever a user puts in a problem
410:22 they're having, you're going to send the data to this webhook." Cool. So hopping
410:26 over to Lovable. As you can see, it's still coding away and looks like it's
410:29 finishing up right now. And it's saying, "I've created a modern problem-solving
410:32 web app with a hero section, submission form, and feature section in blue
410:36 color." Um, looks like there's an error. So all we have to do is click on try to
410:39 fix, and it should go back in there and continue to spin up some more code.
410:42 Okay, so now it looks like it finished that up. And as you can see, we have the
410:46 website filled up. And so it created all of this with just uh an image as
410:50 inspiration as well as just me telling it one sentence, help me create a web
410:53 app called get me out of this where a user can submit a problem they're
410:55 having. So hopefully this should already open your eyes to how powerful this is.
410:59 But let's say for the sake of this demo, we don't want all this. We just kind of
411:02 want one simple landing page where they send a problem in. So all I'd have to do
411:05 is on this left-hand side, scroll down here and say make this page more simple.
411:12 We only need one field which is what problem can we help with? So we'll just
411:21 send that off. Very simple query as if we were just kind of talking to a
411:24 developer who was building this website for us and we'll see it modify the code
411:27 and then we'll see what happens. So down here you can see it's modifying the code
411:31 and now we'll see what happens. It's just one interface right here. So it's
411:34 created like a title. It has these different buttons and we could easily
411:36 say like, okay, when someone clicks on the home button, take them here. Or when
411:40 someone clicks on the contact button, take them here. And so there's all this
411:42 different stuff we can do, but for the sake of this video, we're just going to
411:45 be worrying about this interface right here. And just to give it some more
411:49 personality, what we could do is add in a logo. So I can go to Google and search
411:54 for a thumbs up logo PNG. And then I can say add this logo in the top left. So
412:01 I'll just paste in that image. We'll fire this off to lovable. And it should
412:05 put that either right up here or right up here. We'll see what it does. But
412:08 either way, if it's not where we like it, we can just tell it where to put it.
412:11 Cool. So, as you can see, now we have that logo right up there. And let's say
412:14 we didn't like this, all we'd have to do is come up to a previous version, hit on
412:18 these three dots, and hit restore. And then it would just basically remove
412:21 those changes it just made. Okay. So, let's test out the functionality over
412:24 here. Let's say a problem is we want to get out of... oh, looks like the font is
412:28 coming through white. So, we need to fix that. And boom, we just told it to change the
412:43 text to black, and now it's black and we can see it. So anyways, I want to say
412:49 get me out of a boring meeting. So we'll hit get me out of this and we'll see
412:53 what happens. It says submitting and nothing really happens. Even though it
412:56 told us, you know, we'll get back to you soon. Nothing really happened. So, what
413:00 we want to do is we want to make sure that it knows when we hit this button,
413:03 it's going to send that data to our n8n webhook. So, we've already copied
413:06 that webhook to our clipboard, but I'm just going to go back into n8n. We
413:09 have the webhook. We'll click on this right here, then go back into Lovable. Basically
413:13 just saying when I click get me out of this, so this button right here, send
413:16 the data to this webhook. And also, what we want to do is say to send it as a
413:22 POST request, because it's going to be sending data. So, we're going to send
413:25 that off. And while it's making that change to the code, real quick, we want
413:29 to go into n8n and make sure that our method for this webhook is indeed
413:32 POST. I don't want to dive too much into what that really means, but Lovable
413:37 is going to be sending a POST request to our webhook, meaning there's going to
413:40 be stuff within this webhook like body parameters and different things. And so
413:43 if this wasn't configured as a POST request, it might not work. You'll
413:47 see once we actually get the data and we catch it in n8n. But anyways,
413:51 now when users click on get me out of this, the form will send the problem
413:55 description to your webhook via a POST request. So let's test it out. We're
413:58 going to say I forgot to prepare a brief for my meeting. We're going to go back
414:01 into n8n real quick and make sure we hit listen for test event. So now our
414:05 webhook is actively listening. Back in Lovable, we'll click get me out of this
414:08 and we will see what happens. We can come into n8n and we can now see we
414:12 got this information. So here's the body I was talking about, where we're
414:15 capturing a problem, which is: I forgot to prepare a brief for my meeting. So, we
414:19 now know that Lovable is able to send data to n8n. And now it's on us to
414:24 configure what we want to happen in n8n so we can send the data back to Lovable.
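If it helps to see the contract Lovable is expecting, here's a toy Python server (Flask, purely illustrative) that plays the role our n8n workflow is about to play: catch the POST, generate something, and answer on the same request, just like the Respond to Webhook node will:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Toy stand-in for the n8n workflow: webhook in, excuse out.
@app.post("/webhook/get-me-out")
def get_me_out():
    problem = request.get_json().get("problem", "")
    # The real workflow hands `problem` to an AI agent; we just fake it.
    excuse = f"I can't deal with '{problem}' today, my goldfish needs me."
    # Answering on the same request is what Respond to Webhook does.
    return jsonify({"output": excuse})

if __name__ == "__main__":
    app.run(port=5678)
```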
414:27 Cool. So, what I'm going to do is I'm going to click on the plus that's coming
414:30 off of the web hook. And I'm going to grab an AI agent. What this is going to
414:34 do is allow us to connect to a different chat model and then the agent's going to
414:38 be able to take this problem and produce a response. And I'm going to walk
414:41 through the step by step, but if you don't really want to worry about this
414:43 and you just want to worry about the lovable side of things, you can download
414:47 the finished template from my free Skool community. I'll link that down in
414:50 the description. That way, you can just plug in this workflow and just give
414:53 Lovable your n8n webhook and you'll be set up. But anyways, if you join the
414:56 free Skool community, you'll click on YouTube resources, click on the post
415:00 associated with this video, and you'll be able to download the JSON right here.
415:03 And then when you have that JSON, you can come into n8n, open up a new
415:07 workflow, click these three dots on the top, and then click import from file.
415:10 And when you open that up, it'll just have the finished workflow for you right
415:13 here. But anyways, what I'm going to do is click into the AI agent. And the
415:17 first thing is we have to configure what information the agent is going to
415:21 actually read. So first of all, we're going to set up that as a user prompt.
415:24 We're going to change this from connected chat trigger node to define
415:28 below because we don't have a connected chat trigger node. We're using a web
415:31 hook as we all know. So we're going to click on define below and we are
415:34 basically just going to scroll down within the web hook node where the
415:38 actual data we want to look at is which is just the problem that was submitted
415:42 by the user. So down here in the body we have a problem and we can just drag that
415:46 right in there and that's basically all we have to do. And maybe we just want to
415:48 define to the agent what it's looking at. So we'll just say like the problem
415:53 and then we'll put a colon. So now you can see in the result panel this is what
415:57 the agent will be looking at. And next we need to give it a system message to
416:00 understand what it's doing. So, I'm going to click on add option, open up a
416:04 system message, and I am going to basically tell it what to do. So, here's
416:07 a system message that I came up with just for a demo. You're an AI excuse
416:11 generator. Your job is to create clever, creative, and context appropriate
416:15 excuses that someone could use to avoid or get out of a situation. And then we
416:19 told it to only return the excuse and also to add a touch of humor to the
416:22 excuses. So, now before we can actually run this to see how it's working, we
416:26 need to connect its brain, which is going to be an AI chat model. So, what
416:28 I'm going to do is I'm going to click on this plus under chat model. For this
416:33 demo, we'll do an OpenAI chat model. And you have to connect a credential if you
416:36 haven't done so already. So, you would basically come into here, click create
416:39 new credential, and you would just have to insert your API key. So, you can just
416:44 Google OpenAI API. You'll click on API platform. You can log in, and once
416:47 you're logged in, you just have to go to your dashboard, and then on the left,
416:50 you'll have an API key section. All you'll have to do is create a new key.
416:56 We can call this one, um, test lovable. And then when you create that, you just
416:59 copy this value. Go back into n8n. Paste that right here. And then when you
417:03 hit save, you are now connected to OpenAI's API. And we can finally run
417:07 this agent real quick. If I come in here and hit test step, we will see that it's
417:11 going to create an excuse for I forgot to prepare a brief for my meeting, which
417:15 is sorry, I was too busy trying to bond with my coffee machine. Turns out it
417:19 doesn't have a prepare briefs setting. So basically what we have is we're
417:22 capturing the problem that a user had. We're using an AI agent to create a
417:26 excuse. And then we need to send the data back to Lovable. So all we have to
417:30 do here is add the plus coming off of the agent. We're going to call this a
417:34 respond to web hook node. And we're just going to respond with the first incoming
417:37 item, which is going to be the actual response from the agent. But all we have
417:41 to do also to configure this is back in the web hook node, there's a section
417:45 right here that says respond, instead of responding immediately, we want to
417:49 respond using the respond to web hook node. So now it will be looking over
417:52 here, and that's how it's going to send data back to lovable. So this is pretty
417:56 much configured the way we need it, but we have to configure Lovable now to wait
418:00 for this response. Okay. So what I'm telling Lovable is when the data gets
418:04 sent to the web hook, we wait for the response from the web hook, then output
418:08 that in a field that says here is your excuse. So we'll send this off to
418:12 Lovable and see what it comes up with. Okay, so now it said that I've added a
418:14 new section that displays here is your excuse along with the response message
418:18 from the webhook when it's received. So let's test it out. First, I'm going to
418:21 go back into n8n and we're going to hit test workflow. So the webhook is
418:24 now listening for us. So we'll come into our Lovable web app and say I want to
418:30 skip a boring meeting. We'll hit get me out of this. So now that data should be
418:34 captured in n8n. It's running. And now the output is I just realized my pet
418:38 goldfish has a life-altering decision to make regarding his tank decorations and
418:41 I simply cannot miss this important family meeting. So it doesn't look
418:45 great, but it worked. And if we go into n8n, we can see that this run did
418:48 indeed finish up. And the output over here was I just realized my pet goldfish
418:52 has a life-altering decision, blah blah blah. So basically what's happening
418:56 is the webhook is returning JSON, which is coming through in a field called
418:59 output, and then we have our actual response, which is exactly what Lovable
419:03 sent through. So it's not very pretty, and we can basically just tell it to
419:06 clean that up. So what I just did is I said only return the output field's value
419:11 from the webhook response, not the raw JSON. So we wanted to just output this
419:15 right here which is the actual excuse. And so some of you guys may not even
419:18 have had this problem pop up. I did a demo of this earlier just for testing
419:21 and I basically walked through these same steps and this wasn't happening.
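In client-side terms, the fix I just described boils down to this difference (placeholder URL again):

```python
import requests

# Placeholder -- same n8n webhook URL as before.
WEBHOOK_URL = "https://your-account.app.n8n.cloud/webhook-test/get-me-out"

resp = requests.post(WEBHOOK_URL, json={"problem": "I overslept and I'm running late"})

print(resp.text)              # before the fix: raw JSON, like {"output": "..."}
print(resp.json()["output"])  # after the fix: just the excuse text itself
```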
419:26 But you know sometimes it happens. Anyways, now it says the form only
419:29 displays the value from the output field. So let's give it another try. So
419:32 back in n8n we're going to hit test workflow. So it's listening for us. In
419:35 Lovable, we're going to give it a problem. So I'm saying I overslept and
419:38 I'm running late. I'm going to click get me out of this. And we'll see the
419:41 workflow just finished up. And now we have the response in a clean format
419:44 which is I accidentally hit the snooze button until it filed for a restraining
419:48 order against me for harassment. Okay. So now that we know that the
419:51 functionality within n8n is working and it's sending data back, we want to customize
419:55 our actual interface a little bit. So the first thing I want to do, just for
419:58 fun is create a level system. So every time someone submits a problem, they're
420:01 going to get a point. And if they get five points, they'll level up. If they
420:04 get 20 total points, they'll level up again. Okay. So I just sent off create a
420:08 dynamic level system. Every time a user submits a problem, they get a point.
420:12 Everyone starts at level one and after five points, they reach level two. Then
420:15 after 50 more points, they reach level three. And obviously, we'd have to bake
420:18 in the rest of the levels and how many points you need. But this is just
420:21 to show you that this is going to increase every time that we submit a
420:24 problem. And also, you'd want to have some sort of element where people
420:28 actually log in and get authenticated. And you can store that data in Supabase
420:31 or in, um, you know, Firebase, whatever it is, so that everyone's levels are being
420:36 saved and it's specific to that person. Okay, so looks like it just created a
420:40 level system. It's reloading up our preview so we can see what that looks
420:43 like now. Um, looks like there may have been an error, but now, as you can see
420:47 right here, we have a level system. So, let's give it another try. I'm going to
420:50 go into n8n. We're going to hit test workflow. So, it's listening once again,
420:53 and we're going to describe a problem. So, I'm saying my boss is mean. I don't
420:56 want to talk to him. We're going to hit submit. The n8n workflow is running right
420:59 now on the back end. And we just got a message back, which is, I'd love to
421:02 chat, but I've got a hot date with my couch and binge watching the entire
421:05 season of Awkward Bosses. And you can see that we got a point. So, four more
421:09 points to unlock level two. But before we continue to throw more prompts so
421:12 that we get up to level two, let's add in one more cool functionality. Okay, so I'm just firing
421:18 off this message that says add a drop down after what problem can we help with
421:22 that gives the user the option to pick a tone for the response. So the options
421:26 can be realistic, funny, ridiculous, or outrageous. And this data, of course,
421:31 should be passed along in that webhook to n8n, because then we can tell the
421:35 agent: okay, here's the problem and here's the tone of excuse the user is
421:39 requesting, and now it can make a response for us. So it looks like it's
421:44 creating that change right now. So now we can see our dropdown menu that has
421:47 realistic, funny, ridiculous, and outrageous. As you can see before you
421:50 click on it, it's maybe not super clear that this is actually a drop down. So
421:53 let's make that more clear. And what I'm going to do is I'm going to take a
421:56 screenshot of this section right here. I'm going to copy this and I'm just
422:00 going to paste it in here and say make this more clear that it is a drop-down
422:06 selection, and we'll see what it does here. Okay, perfect. So, it just added a
422:10 little arrow as well as a placeholder text. So, that's way more clear. And now
422:13 what we want to do is test this out. Okay, so now to test this out, we're
422:16 going to hit test workflow. But just keep in mind that this agent isn't yet
422:20 configured to also look at the tone. So this tone won't be accounted for yet.
422:23 But what we're going to do is we have I overslept and the response is going to
422:28 be funny. We'll hit generate me a... or sorry, get me out of this. So we have a
422:32 response and our level went up. We got another point. But if we go into n8n, we
422:36 can see that it didn't actually account for the tone yet. So all we have to do
422:40 is, in the actual user message, we're basically just going to open this up and
422:43 also add a tone. And we can scroll all the way down here and we can grab the
422:48 tone from the request body. And now it's getting the problem as well as the tone.
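So the webhook body now carries both fields, and the user message is built from both. Roughly, in Python terms (the drag-and-drop expression in n8n does the same job; this is paraphrased, not n8n's exact syntax):

```python
# What Lovable now sends in the webhook body.
body = {
    "problem": "I broke my friend's iPhone",
    "tone": "outrageous",  # realistic | funny | ridiculous | outrageous
}

# Roughly what the agent's user message looks like once both fields
# are dragged in from the webhook node.
user_message = f"The problem: {body['problem']}\nThe tone: {body['tone']}"
print(user_message)
```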
422:51 And now in the system prompt, which is basically just defining to the agent its
422:55 role, we have to tell it how to account for different tones. Okay, so here's
422:58 what I came up with. I gave it some more instructions and I said, "You're going
423:02 to receive a problem as well as a tone. And here are the possible tones, which
423:05 are realistic, funny, ridiculous, and outrageous." And I kind of said what
423:09 that means. And then I said, "Your excuse should be one to three sentences
423:12 long, and match the selected tone." So that's all we're going to do. We're
423:15 going to hit save. Okay. So now that it's looking at everything, we're going
423:18 to hit test workflow. The webhook's listening. We'll come back into here and
423:20 we're going to submit: I broke my friend's iPhone, and the response tone
423:23 should be outrageous. So, we're going to send that off. And it's loading because
423:27 our n8n workflow is triggering. And now we just got it. We also got a message
423:31 that says we earned a point. So, right here, we now only need two more for
423:34 level two. But the excuse is I was trying to summon a unicorn with my
423:38 telekinetic powers and accidentally transformed your iPhone into a rogue
423:41 toaster that launched itself off the counter. I swear it was just trying to
423:45 toast a bagel. Okay, so obviously that's pretty outrageous and that's how we know
423:48 it's working. So, I'm sure you guys are wondering what you would want to do if
423:51 you didn't want to come in here every single time and make this thing, you
423:55 know, test the workflow. What you would do is you'd switch this to an active
423:59 workflow. Now, basically, we're not going to see the executions live anymore
424:03 with all these green outlines. But what's happening now is it's using the
424:06 production URL. So, we're going to have to copy the production URL, come back
424:11 into Lovable, and just basically say I switched the URL, or sorry, let's say I
424:17 switched the webhook to this. And we'll paste that in there, and it should just
424:21 change the endpoint. The logic should be all the exact same, because we've already
424:25 built that into this app, but we're just going to switch the webhook. So, now we
424:28 don't have to go click test workflow every time in n8n. And super excited. We
424:32 have two more problems to submit and then we'll be level two. So now it says
424:36 the webhook URL has been updated. So let's test it out. As you can see in
424:39 here, we have an active workflow. We're not hitting test workflow. We're going
424:42 to come in here and submit a new problem. So we are going to say um I
424:51 want to take four weeks off work, but my boss won't let me. We are going to make
424:56 the response tone. Let's just do a realistic one. And we'll click get me
425:00 out of this. It's now calling that workflow that's active and it's
425:02 listening. So we got a point. We got our response which is I've been dealing with
425:06 some unforeseen family matters that need my attention. I believe taking 4 weeks
425:09 off will help me address them properly. I plan to use this time to
425:12 ensure everything is in order so I can return more focused and productive. I
425:15 would definitely say that that's realistic. What we can do is come into
425:19 n8n. We can click up here on our executions and we can see what just
425:22 happened. So this is our most recent execution, and if we click into here, it
425:26 should have gotten the problem, which was I want to take four weeks off
425:30 work, and the tone, which was realistic. Cool. So, now that we know that
425:33 our new active webhook is working, let's just do one more query and let's
425:38 earn our level two status. I'm also curious to see, you know, we haven't
425:41 worked in any logic of what happens when you hit level two. Maybe there's some
425:44 confetti. Maybe it's just a little notification. We're about to find out.
425:47 Okay, so I said I got invited on a camping trip, but I hate nature. We're
425:50 going to go with ridiculous and we're going to send this off. See what we get
425:55 and see what level two looks like. Okay, so nothing crazy. We could have worked
425:58 in like, hey, you know, make some confetti pop up. All we do is we get
426:01 promoted to level two up here. But, you know, as you can see, the bar was
426:04 dynamic. It moved and it did promote us to level two. But the excuse is, I'd
426:08 love to join, but unfortunately, I just installed a new home system that detects
426:11 the presence of grass, trees, and anything remotely outdoorsy. If I go
426:15 camping, my house might launch an automated rescue mission to drag me back
426:19 indoors. So, that's pretty ridiculous. And also, by the way, up in the preview,
426:22 you can make it mobile. So, we can see what this would look like on mobile.
426:24 Obviously, it's not completely optimized yet, so we'd have to work on that. But
426:28 that's the ability to do both desktop and mobile. And then when you're finally
426:31 good with your app, up in the top right, we can hit publish, which is just going
426:34 to show us that we can connect it to a custom domain or we can publish it at
426:38 this domain that is made for us right here. Anyways, that is going to be it
426:42 for today's video. This is really just the tip of the iceberg; you know,
426:45 n8n already has basically unlimited capabilities. But then you connect that
426:49 to a custom front end where you don't have to have any sort of coding
426:52 knowledge, and as you can see, all of these prompts that I used in here were just me
426:56 talking to it as if I was talking to a developer. And it's really, really cool
426:59 how quickly we spun this up. All right, hopefully you guys thought that was
427:02 cool. I think that ElevenLabs is awesome, and it's cool to integrate agents with
427:06 that as well as Lovable or Bolt or these other vibe coding apps that let
427:09 you build things. Otherwise it would have taken so much longer and you would have kind
427:13 of had to know how to code. So, really cool. So, we're nearing the end of the
427:16 course, but it would be pretty shameful if I didn't at least cover what MCP
427:20 servers are because they're only going to get more commonly used as we evolve
427:25 through the space. So, we're going to talk about MCP servers. We're going to
427:28 break it down as simple as possible. And then I'm going to do a live setup where
427:31 I'm self-hosting n8n in front of you guys step by step, and then I'm going to
427:36 connect to a community node in n8n that lets us access some MCP servers. So,
427:40 let's get into it. Okay, so model context protocol. I swear the past week
427:45 it's been the only thing I've seen in YouTube comments, YouTube videos,
427:49 Twitter, LinkedIn, it's just all over. And I don't know about you guys, but
427:52 when I first started reading about this kind of stuff, I was kind of intimidated
427:56 by it. I didn't completely understand what was going on. It was very techy
428:00 and, you know, kind of abstract. I also felt like I was getting different
428:04 information from every source. So, we're going to break down, as simply
428:08 as possible, how it makes AI agents more intelligent. Okay, so we're going to
428:11 start with the basics here. Let's just pretend we're going back to ChatGPT,
428:15 which is, you know, a large language model. What we have is an input on the
428:19 left. We're able to ask it a question. You know, help me write this email, tell
428:22 me a joke, whatever it is. We feed in an input. The LLM thinks about it and
428:26 provides some sort of answer to us as an output. And that's really all that
428:30 happens. The next evolution was when we started to give LLMs tools, and that's
428:36 when we got AI agents, because now we could ask it to do something like write
428:39 an email, but rather than just writing the email and giving it back to us, it
428:42 could call a tool to actually send that email, and then it would tell us: there
428:46 you go, the job's done. And so this really started to expand the
428:49 capabilities of these LLMs, because they could actually take action on our behalf
428:53 rather than just sort of assisting us and getting us 70% of the way there. And
428:57 so before we start talking about MCP servers and how they enhance our agents'
429:01 abilities, we need to talk about how these tools work and sort of the
429:05 limitations of them. Okay, so sticking with that email example, let's pretend
429:08 that this is an email agent that's helping us take action in email. What
429:12 it's going to do is each tool has a very specific function. So this first tool
429:15 over here, you can see this one is going to label emails. The second tool in the
429:18 middle is going to get emails and then this third one on the right is going to
429:21 send emails. So if you watched my ultimate assistant video, if you
429:24 haven't, I'll tag it right up here. What happened was we had a main agent and
429:27 then it was calling on a separate agent that was an email agent. And as you can
429:31 see here was all of its different tools and each one had one very specific
429:35 function that it could do. And it was basically just up to the email agent
429:38 right here to decide which one to use based on the incoming query. And so the
429:42 reason that these tools aren't super flexible is because within each of these
429:46 configurations, we basically have to hardcode in what is the operation I'm
429:50 doing here and what's the resource. And then we can feed in some dynamic things
429:54 like different message IDs or label IDs. Over here, you know, the operation is get,
429:58 the resource is message. So those won't change. And then over here the operation
430:02 is that we're sending a message. And so this was really cool, because agents were
430:06 able to use their brains whatever large language model we had plugged into them
430:09 to understand which tool do I need to use. And it still works pretty well. But
430:12 when it comes to being able to scale this up and you want to interact with
430:15 multiple different things, not just Gmail and Google Calendar, you also want
430:19 to interact with a CRM and different databases, that's where it starts to get
430:24 a little confusing. So now we start to interact with something called an MCP
430:28 server. And it's basically just going to be a layer between your agent and
430:32 between the tools that you want to hit, which would be right here. And so when
430:35 the agent sends a request to the specific MCP server, in this case, let's
430:39 pretend it's Notion, it's going to get more information back than just, hey, what
430:42 tools do I have? And what's the functionality here? It's also going to
430:45 get information about like what are the resources there, what are the schemas
430:48 there, what are the prompts there, and it uses all of that to understand how to
430:53 actually take the action that we asked for back here in the whole input that
430:56 triggered the workflow. When it comes to different services talking to each
431:01 other, so in this case n8n and Notion, there's been, you know, a standard in the
431:04 way that we send data across and we get data back, which has been REST APIs,
431:08 and these standards are really important because we have to understand how we can
431:12 actually format our data and send it over and know that it's going to be
431:16 received in the way that we intend it to be. And so that's exactly what was going
431:19 on back up here where every time that we wanted to interact with a specific tool
431:24 we were hitting a specific endpoint. So the endpoint for labeling emails was
431:31 different from the endpoint for sending emails. And besides just those endpoints
431:35 or functions being different, there were also different things that we had to
431:35 configure within each tool call. So over here you can see what we had to do was
431:39 in order to send an email, we have to give it who it's going to, what the
431:43 subject is, the email type, and the message, which is different from the
431:45 information we need to send to this tool, which is what's the message ID you
431:49 want to label, and what is the label name or ID to give to that message. By
431:54 going through the MCP server, it's basically going to be a universal
431:57 translator that takes the information from the LLM and it enriches that with
432:01 all of the information that we need in order to hit the right tool with the
432:04 right schema, fill in the right parameters, access the right resources,
432:08 all that kind of stuff. The reason I put Notion here as an example of an MCP
432:13 server is because within your Notion, you'll have multiple different databases
432:16 and within those databases, you're going to have tons of different columns and
432:20 then all of those, you know, are going to have different pages. So, being able
432:23 to have the MCP server translate back to your agent, here are all of the
432:27 databases you have. Here is the schema or the different fields or columns that
432:32 are in each of your databases. Um, and also here are the actions you can take.
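Just to make that less abstract, here's the rough shape of a tool listing an MCP server might hand back. The exact wire format is defined by the MCP spec, and these Airtable-flavored names and schemas are only illustrative:

```python
# Illustrative shape of a tools list from an MCP server -- names and
# schemas are made up to mirror the Airtable example coming up below.
tools = [
    {
        "name": "list_records",
        "description": "List records from a table. Use when the user wants to read data.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "baseId": {"type": "string"},
                "tableId": {"type": "string"},
                "maxRecords": {"type": "integer"},
                "filterByFormula": {"type": "string"},
            },
            "required": ["baseId", "tableId"],
        },
    },
    {
        "name": "list_tables",
        "description": "List the tables in a base, including their fields.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "baseId": {"type": "string"},
                "detailLevel": {"type": "string"},
            },
            "required": ["baseId"],
        },
    },
]

# The agent reads exactly this: the name, when to use it, and what to send.
for tool in tools:
    print(tool["name"], "->", sorted(tool["inputSchema"]["properties"]))
```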
432:35 Now, using that information, what do you want to do? Real quick, hopping back to
432:39 the example of the ultimate assistant. What we have up here is the main agent
432:43 and then it had four child workflows, child agents that it could hit that had
432:48 specializations in certain areas. So the Gmail agent, which we talked about right
432:51 down here, the Google Calendar agent, the contact agent, which was Airtable,
432:56 and then the content creator agent. So all that this agent had to do was
432:59 understand, okay, based on what's coming in, based on the request from the human,
433:03 which of these different tools do I actually access? And we can honestly
433:07 kind of think of these as MCP servers. Because once the query gets passed off
433:11 to the Gmail agent, the Gmail agent down here is the one that understands here
433:14 are the tools I have, here are the different like, you know, parameters I
433:17 need to fill out. I'm going to take care of it and then we're going to respond
433:21 back to the main agent. This system made things a little more dynamic and
433:24 flexible because then we didn't have to have the ultimate assistant hooked up to
433:28 like 40 different tools, you know, all the combinations of all of these. And it
433:31 made its job a little easier by just delegating the work to different
433:35 MCP servers. And obviously, these aren't MCP servers, but it's kind of the same
433:39 concept. The difference here is that let's say all of a sudden Gmail adds
433:42 more functionality. We would have to come in here and add more tools in this
433:46 case. But what's going on with the MCP servers is whatever MCP server that
433:51 you're accessing, it's on them to continuously keep that server updated so
433:56 that people can always access it and do what they need to do. By this point, it
433:59 should be starting to click, but maybe it's not 100% clear. So we're going to
434:03 look at an actual example of what this really looks like in action. But
434:06 before we do, I just want to cover one thing, which is the agent
434:10 sending a request over to a server; the server translates it in order to get all
434:14 this information, get the tool calls, all that kind of stuff. What's
434:18 going on there is the MCP protocol. We have the client, which is just
434:21 the interface that we're using. In this case it's n8n; it could be your Claude
434:25 or your coding window, whatever it is. And then we're sending
434:29 something over to the MCP server, and that exchange is the MCP protocol. Also, one
434:32 thing to keep in mind here that I'm not going to dive into: if you were to
434:35 create your own MCP server and it had access to all of your own resources,
434:39 your schemas, your tools, all that kind of stuff, you've got to be careful
434:42 there. There are some security concerns, because if anyone got into that
434:45 server, they could basically ask for anything back. That's something that
434:49 was brought up in my paid community; we were having a great discussion about MCP
434:53 and stuff like that. So just keep it in mind.
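For anyone curious what that client-server exchange looks like in code, here's a minimal sketch using the TypeScript MCP SDK (@modelcontextprotocol/sdk); the exact API can differ between SDK versions, so treat this as a shape rather than gospel, and the server package name is a placeholder:

  import { Client } from "@modelcontextprotocol/sdk/client/index.js";
  import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

  async function main() {
    // The client (n8n, Claude, your IDE) spawns the server and speaks
    // JSON-RPC to it over stdio -- that's the "MCP protocol" part.
    const transport = new StdioClientTransport({
      command: "npx",
      args: ["-y", "some-mcp-server"], // placeholder package name
    });
    const client = new Client({ name: "demo-client", version: "1.0.0" });
    await client.connect(transport);

    // Ask the server what it can do...
    const { tools } = await client.listTools();
    console.log(tools.map((t) => t.name));

    // ...then call one of those tools with arguments matching its schema.
    const result = await client.callTool({ name: tools[0].name, arguments: {} });
    console.log(result);

    await client.close();
  }

  main().catch(console.error);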
434:57 So let's look at an example in n8n once again. Coming down here, let's pretend we have this
435:01 beautiful Airtable agent that we built out in n8n. As you can see, it has
435:06 seven different tools: get record, update record, get bases, create
435:10 record, search record, delete record, and get base schema. The reason we
435:14 needed all of these different tools is because, as you know, they each have
435:17 different operations inside of them, and they each have different parameters
435:20 to be filled out. So the agent takes care of all of that. But this could be a
435:24 much leaner system if we were able to access Airtable's MCP server, as
435:28 you can see what we're doing right here, because it's able to list all the
435:32 tools we have available in Airtable. So here you can see I asked the
435:36 Airtable agent what actions do I have? It then listed these 13 different
435:39 actions, which is actually more than the seven we had built out
435:42 here. We can see we have list records, search records, and then 11
435:47 more. And this is actually just the agent telling us, the human, what we have
435:51 access to. But what the actual agent looks at in order to use a tool
435:54 is a list of the tools, where for each one it's: here's the name, here's the description
435:57 of when to use this tool, and here's the schema of what you need to
436:01 send over to it. Because when we're listing records, we have to send
436:04 over different information, like the base ID, the table ID, max records, how we
436:08 want to filter, which is different from listing tables, where we just
436:12 need a base ID and a detail level. So all of this information coming back from
436:17 the MCP server tells the agent how to fill out all of the
436:19 parameters we were talking about earlier, where sending an email requires
436:22 different things to fill out than labeling emails does.
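Here's roughly what that tool list looks like on the wire, using the MCP spec's name / description / inputSchema shape; the Airtable tool names and fields here are illustrative, not the server's exact definitions:

  const tools = [
    {
      name: "list_records",
      description: "List records from a table. Use when the user wants to read rows.",
      inputSchema: {
        type: "object",
        properties: {
          baseId: { type: "string" },
          tableId: { type: "string" },
          maxRecords: { type: "number" },
          filterByFormula: { type: "string" },
        },
        required: ["baseId", "tableId"],
      },
    },
    {
      name: "list_tables",
      description: "List all tables in a base.",
      inputSchema: {
        type: "object",
        properties: {
          baseId: { type: "string" },
          detailLevel: { type: "string" },
        },
        required: ["baseId"],
      },
    },
  ];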
436:27 So once the agent gets this information back from the MCP server, it's going to say:
436:31 okay, I know I need to use the search records tool, because the user
436:35 asked me to search for records with the name Bob in them. So I have this schema
436:39 that I need to use, and I'm going to use my Airtable execute tool in order
436:43 to do so. Basically, it's going to choose which tool it
436:46 needs based on the information it was fed previously. So in this case, the
436:50 Airtable execute tool would search records, and it would do it by filling in the
436:53 schema of information we need to pass over to Airtable. So now I
436:57 hope you can see that what's going on in this tool is basically all 13 tools
437:01 wrapped up into one, and what's going on here is just feeding in all the
437:04 information we need in order to make the correct decision. So, this is the
437:07 workflow we were looking at for the demo. We're not going to dive into this
437:10 one because it's just a lot to look at. I just wanted to put a ton of MCP
437:14 servers in one agent and see, even with no system prompt, if it could
437:18 understand which one to use and then still understand how to call its tools.
437:20 So I just thought that was a cool experiment. Obviously, what's next is
437:23 I'm going to try to build some sort of huge personal assistant
437:27 with a ton of MCP servers. But for now, let's break it down as
437:30 simply as possible by looking at an individual MCP agent. And I don't
437:35 know why I called it an MCP agent; in this case, it's really just a
437:39 Firecrawl agent with access to Firecrawl's MCP server. Okay. So taking a
437:43 look at the Firecrawl agent, we're going to ask what tools it has. It's hitting
437:48 the Firecrawl actions right now in order to pull back all of the resources. And
437:51 as you can see, it's going to come back and say, hey, we have these
437:54 nine actions you can take. I don't know if it's nine, but it's going to be
437:57 something like that. It was nine. So as you can see, we have access to scrape,
438:01 map, crawl, batch scrape, all this other stuff. And what's really cool is that if
438:04 we click into here, we can see a description of when to use each
438:07 tool and what you actually need to send over. So if we want to scrape, it's a
438:11 different schema to fill out than if we want to do something like extract. So
438:14 let's try actually asking it to do something. Let's say: extract the
438:23 rewards program name from chipotle.com. We'll see what it does
438:27 here. Obviously, it's going to do the same thing, listing its actions, and
438:31 then it should use the Firecrawl extract method. So we'll see what comes
438:36 back out of that tool. Okay, it went green. Hopefully we actually got a
438:39 response. It's hitting it again, so we'll see what happened. We'll dive into
438:43 the logs after this. Okay, so on the third run, it finally says the rewards
438:47 program is called Chipotle Rewards. So let's take a look at run one. It used
438:51 Firecrawl extract, and it basically filled in the prompt "extract the name of
438:54 the rewards program." It put it in as a string. We got "request failed with
438:58 status code 400," so not sure what happened there. On run two, it did a
439:01 Firecrawl scrape; we also got a status code 400. And then on run three, what it
439:06 did was a Firecrawl scrape once again, and it was able to scrape the entire
439:09 page. Then it used its brain to figure out what the rewards program was
439:12 called. Taking a quick look at the Firecrawl documentation, we can see that
439:16 a 400 error code means the parameters weren't filled out correctly.
439:20 So what happened here was it just didn't fill out the schema exactly
439:23 right, the prompt and everything to send over. And so
439:26 these kinds of issues really just come down to making the
439:31 tool parameters more robust and adding more prompting within the actual
439:34 Firecrawl agent itself. But it's pretty cool that it was able to understand: okay,
439:37 this didn't work, let me just try some other things.
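If you wanted to make that retry behavior explicit instead of hoping the agent figures it out on its own, a thin wrapper is one option; this is just a sketch, with callTool standing in for the MCP client call from the earlier example:

  // Sketch: retry a tool call a few times, logging each failure. A 400-style
  // error usually just means the arguments didn't match the tool's schema.
  type ToolCaller = {
    callTool: (req: { name: string; arguments: object }) => Promise<unknown>;
  };

  async function callWithRetry(
    client: ToolCaller,
    name: string,
    args: object,
    maxAttempts = 3
  ): Promise<unknown> {
    let lastError: unknown;
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return await client.callTool({ name, arguments: args });
      } catch (err) {
        lastError = err; // observe the failure, then try again
        console.warn(`attempt ${attempt} failed:`, err);
      }
    }
    throw lastError;
  }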
439:41 Okay, so real quick, I just wanted to say: if you want to hop into n8n and test out these MCP nodes, you're
439:44 going to have to self-host your environment, because you need to use
439:48 community nodes, which you can only access self-hosted. Today we're going to be
439:54 going through the full setup of connecting MCP servers to n8n. I'm going
439:58 to walk through how you self-host your n8n, how you
440:01 can install the community node, and then how to actually set up the community
440:04 node. The best part is you don't have to open up a terminal or a shell and type
440:08 in any install commands. All we have to do is connect to the servers through
440:11 n8n. So if that sounds like something that interests you, let's dive into it.
440:14 Make sure you guys stick around for the end of this one, because we're going to
440:17 talk about the current limitations of these MCP nodes. We're going to talk
440:20 about some problems you may face that no one else is talking about, and really
440:23 what it means to actually be able to scale these agents with MCP servers.
440:26 Now, there are a ton of different platforms you can use to host n8n.
440:29 The reason I'm using Elestio is because it's really simple for
440:32 deploying and managing open-source software. Especially with something like
440:35 n8n, you can pretty much deploy it in just a few clicks, and it's going to take
440:39 care of installation, configuration, security, backups, updates, all this
440:43 kind of stuff. So there's no need for you to have that DevOps knowledge,
440:47 because I certainly don't. That's why we're going with Elestio. It's also
440:51 SOC 2 and GDPR compliant, so that's important. Anyways, like I said, we're
440:54 going to be going through the full process. So I'm going to click on free
440:57 trial and sign up with a new account. So I'm going to log out and
441:00 sign up as a new user. Okay. Now that I've entered that, we're already good to go.
441:02 The first thing I'm going to do is set up a payment method so that we can
441:05 actually spin up a service. I went down to my account on the left-hand
441:08 side, clicked on payment options, and I'm going to add a card real
441:11 quick. Now that that's been added, it's going to take a few minutes for our
441:14 account to actually be approved, so I'm just going to wait for that. You can see
441:16 we have a support ticket that got opened up, which is just waiting for account
441:21 activation. Also, here's the approval email I got. Just keep in mind it says
441:23 it'll be activated in a few minutes during business hours, but if you're
441:26 doing this at night or on the weekends, it may take a little longer. Okay, there
441:29 we go. We are now activated. So I'm going to come up here to services and
441:32 add a new service. What we're going to do is just type in n8n,
441:35 and it's going to be a really quick one-click install. Basically, I'm
441:39 going to be deploying on Hetzner as the cloud provider. I'm going to switch to
441:43 my region, and then you have different options for service plans. These
441:46 options obviously have different numbers of CPUs and different amounts of RAM and
441:50 storage. I'm going to start right now on just the medium. Keep in mind
441:53 that MCP servers can be kind of resource intensive, so if you are running
441:57 multiple of them and your environment is crashing, you're probably
441:59 going to want to come in here and upgrade your service plan. So, we can
442:02 see down here the estimated hourly price and the plan we're
442:05 going with, and I'm going to go ahead and move forward. Now we're going to
442:08 set up the name; this will show up in the domain for your n8n environment.
442:11 I went ahead and called this n8n-demo. What you can do here is
442:15 add more volume, so if you wanted to, you could increase the amount of
442:17 storage. As you can see down here, it's going to increase your hourly
442:20 price. I'm not going to do that, but you do have that option. And then of course
442:24 you have some advanced configuration for software updates and system updates.
442:26 Once again, I'm just going to leave that as is. And then you can also choose the
442:30 level of support that you need. You can scan through your different options
442:32 here; obviously there's a different price associated with each. But on the
442:36 default plan, I'm just going to continue with level one support. And now I'm
442:40 going to click on create service. Okay, so I don't have enough credits to
442:42 actually deploy this service, so I'm going to have to go add some credits to
442:46 my account. Back in the account, I went to add credits, and now that I have
442:49 a card, I can actually add some. I'm going to agree to the terms and
442:52 add funds. Payment successful. Nice. We have some money to play with. Down here,
442:55 we can see 10 credits. This is also where we'll see how much we're spending
442:59 per hour once we have this service up and running. Unfortunately, we have to
443:01 do that all again, so let me get back to the screen we were just on. Okay, now
443:04 we're back here. I'm going to click create service. "We're deploying your
443:07 service. Please wait." This is basically just going to take a few
443:10 minutes to spin up. Okay, so now we see the service is currently running.
443:13 We can click into the service, and we should be able to get a link that
443:16 takes us to our n8n instance. So here's what it looks like. We can see
443:18 it's running, and we have all these different tabs and different
443:20 things to look through. We're going to keep it simple today and not really dive
443:23 into it. What we're going to do is come down here to our network, and this
443:27 is our actual domain to go to. So if I select all of this and click "go
443:31 to this app," it's going to spin up our n8n environment, and because this is the
443:34 first time we're visiting it, we just have to do the setup. Okay. Once we
443:37 have that configured, I'm going to hit next. We have to do some fun little onboarding
443:39 where it asks us some questions right here. Then, when
443:42 you're done with that, you just have to click get started, and you now have the
443:45 option to get some paid features for free. I'm going to hit skip, and we're
443:49 good to go. We are in n8n. What's next is we need to install a community
443:52 node. So if you come down here and click on settings,
443:56 you can see you're on the community plan. We can go all the way down to
443:59 community nodes, and now we have to install a community node. In the
444:02 description we have the GitHub repository for this n8n-nodes-mcp package, which
444:06 was made by nerding-io. And you can see there's some information on how to
444:09 actually install it. But all we have to do is basically just copy this line
444:12 right here. I'm just going to copy n8n-nodes-mcp, click on install community
444:17 node, put that in there, and hit "I understand." The reason you can only do this
444:20 on self-hosted is because these nodes are not native, verified nodes from
444:24 n8n; we're downloading the code from a public
444:27 source. And then we hit install. Package installed. We can
444:31 now see we have one community node, which is the MCP one. Cool. So I'm going to
444:35 leave my settings and open up a new workflow. We're just going
444:38 to hit tab to see if we have the actual node. So if I type in MCP, we can see
444:42 we have MCP Client, and we have this little block, which just means it
444:45 is part of the community node. I'm going to click into here, and we can see
444:48 we have some different options. We can execute a tool, get a prompt
444:52 template, list available prompts, list available resources, list available
444:56 tools, and read a resource. Right now, let's go with list available tools.
444:59 The main ones we'll be looking at are listing tools and then executing tools.
445:03 So, quick plug for the Skool community. If you're looking for a more hands-on
445:06 learning experience, as well as wanting to connect with over 700 members who are
445:10 also dedicated to this rapidly evolving space, then definitely give this
445:13 community a look. We have great discussions and great guest speakers, as you
445:16 can see. We also have a classroom section with stuff like building agents,
445:20 vector databases, APIs and HTTP requests, step-by-step builds. All the live calls
445:23 are recorded, all this kind of stuff. So if this sounds interesting to you,
445:26 then I'd love to see you in a live call. Anyways, let's get back to the video.
445:29 So, obviously, we have the operation, but we haven't set up a credential yet.
445:32 Now what you're going to do is go to a different link in the description,
445:36 which is the GitHub repository of different MCP servers, and we can pretty
445:40 much connect to any of these, like I said, without having to run any code in our
445:44 terminal or install anything, at least because we're hosting in the cloud. If
445:47 we're hosting locally, it may be a little different. Okay, so I've seen a ton of
445:50 tutorials go over things like Brave Search or Firecrawl, so let's try
445:53 something a little more fun. I think first we'll start off with Airbnb,
445:56 because this one is going to be free; you don't even have to go get an API
445:59 key, which is really cool. So I'm going to click into this Airbnb MCP
446:02 server. There's a bunch of stuff going on here, and if you understand GitHub
446:06 and repositories and some code, you can look through the Dockerfile and
446:09 everything, which is pretty cool. But for us non-techies, all we have to do is
446:13 come down here. It's going to tell us what tools are available, but we just
446:16 need to look at how to actually install this. All we're looking for is
446:21 the npx-type installer. After my testing, I tried it without the extra flag
446:25 first, but it wouldn't let us execute the tool, because we need the flag that
446:28 ignores robots.txt, which basically lets us actually access the platform. So you
446:32 can see here we have a command, which is npx, and then we have an array of
446:36 arguments, which is -y, the @openbnb package, and also the ignore-robots-txt
446:39 flag. So first of all, I'm just going to grab the command, which is npx. Copy
446:43 that, go back into n8n, and we're going to create a new credential. This
446:47 one's going to be for Airbnb, so I'm just going to name it so we
446:49 keep track. Then we're just going to paste the command right in there: npx. Now
446:53 we can see we have arguments to fill out, so I'm going to go back into the
446:57 documentation. We can see the arguments are -y, then the
447:01 @openbnb package, and then ignore-robots-txt. So we're going to
447:05 put them in one by one. First is the -y. Then I'm going to go back, grab
447:09 the @openbnb package, put a space, and paste it in there. Then I'm going to put another
447:14 space, and we're going to grab this last part, which is the ignore-robots-
447:18 txt flag. Once we paste that in there, we can basically just hit save. As you can
447:21 see, we've connected successfully. The credential is now in our workspace, and
447:24 we didn't have to type anything into a terminal.
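For reference, the credential fields end up looking like this; the exact package name is the one shown in the OpenBnB repo's README, so double-check it there, since these repos change:

  Command:     npx
  Arguments:   -y @openbnb/mcp-server-airbnb --ignore-robots-txt
  Environment: (none needed for this one)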
447:28 And now if we hit test step, we should be able to pull in the tools this MCP server gives us access to.
447:32 So it's as easy as that. As you can see, there are two tools. The first one is
447:35 Airbnb search: here's when you use it, and here's the schema to send over
447:39 to that tool. The second one is Airbnb listing details: here's when you
447:43 want to use that, and here's the schema you would send over. And now
447:46 from here, which is really cool, we can click on another node, which is going to
447:49 be an MCP client once again, and this time we want to execute a tool. We
447:53 already have our credential set up; we just did that together. Now all we
447:56 have to do is configure the tool name and the tool parameters. So just as a
447:59 quick demo that this actually works: for the tool name, we're going to drag in Airbnb
448:03 search, as you can see. And then for the parameters, we can see the
448:05 different things we need to fill out. All I'm going to do is just
448:10 send over a location, so I hardcoded in location equals Los
448:13 Angeles. That's all we're going to try. Now we're going to hit test step, and
448:16 we should see that we're getting Airbnbs back that are in Los Angeles. There we
448:21 go. We have a ton of different items here. So let's actually take a look at
448:25 this listing. If I just copy this into the browser, we should see an
448:29 Airbnb arts district guest house. This is in Culver City, California. And
448:32 obviously, we could make our search more refined if we also put in
448:36 things like a check-in and checkout date, how many adults, how many children, how many
448:40 pets. We could specify the price, all this kind of stuff. Okay, cool. So that
448:43 was an example of how we search through an MCP server to get the tools, and then
448:46 how we can actually execute on a tool. Now, if we want to give an
448:51 agent access to an MCP server, what we do is add
448:54 an AI agent. First, we're going to come down here and give it a chat input
448:58 so we can actually talk to the agent. We'll add that right here. Now we
449:01 obviously need to connect a chat model so that the agent has a brain, and then
449:05 give it the MCP tools. So first of all, to connect a chat model, I'm just
449:08 going to grab OpenAI. Sorry for being boring, but all we have to do is
449:11 create a credential. Go to your OpenAI account and grab an API key.
449:15 Here's my account; as you can see, I have a ton of different keys, but I'm
449:17 just going to create a new one. This is going to be MCP test, and then all we
449:21 have to do is copy that key, come back into n8n, and paste
449:25 it right in here. So there's our key. Hit save. We go green, we're good to
449:29 go, we're connected to OpenAI. Now we can choose our model; 4o-mini is
449:32 going to work just fine here. Now, to add a tool once again, we're going to
449:35 add the MCP client tool right here, and let's just do Airbnb one more time. So
449:40 we're connected to Airbnb list tools, and I'm just going to say "what tools do I
449:46 have?" What's going to happen is it errors, because the n8n-nodes-mcp tool
449:50 is not recognized as a tool yet, even though the MCP nodes are. So we have to go back
449:55 into Elestio real quick and change one thing. Coming back to the GitHub
449:59 repository for the n8n MCP node, we can see it gives us some installation
450:03 information, right? But if we go all the way down to how to use it as a tool,
450:06 if I can scroll all the way down here, here is an example of using it as a
450:10 tool. You have to set an environment variable within your hosting
450:14 environment. So whether it's Elestio or Render or DigitalOcean or wherever
450:17 you're doing it, it'll be a little different, but you just have to navigate
450:19 to where you have environment variables. We have to set
450:26 N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE to true. So I'm going to come
450:30 back into our Elestio service. Right here we have the software, which is n8n,
450:33 version latest. What we can do is restart, view app logs,
450:37 change the version, or update the config, which, if we open it
450:41 up, may look a little intimidating, but all we're looking for is right here: we
450:45 have "environment," and we can see different stuff for our Postgres
450:49 and our webhook and tunnel URLs, all this kind of stuff. So at the bottom I'm
450:52 just going to add a new line and paste in that variable we
450:55 had, N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE, and instead of an equals sign I'm
450:59 going to put a colon with a space after it, since this config file is YAML. So now
451:03 that variable is set to true, and all
451:07 we do is hit update and restart, which will respin up our instance.
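So the line added to the bottom of the environment section looks like this; note the YAML form is key: value with a space after the colon, not KEY=value:

  N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE: "true"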
451:10 Okay, so it looks like we're finished up. I'm going to go
451:13 ahead and close out of this. We can see that our instance is running. So now I'm
451:16 going to come back in here, and I actually refreshed, so our agent's
451:20 gone. Let me get him back real quick. All right, so we have our agent back.
451:23 We're going to go ahead and add that MCP tool once again. Right here we
451:27 have our credential already set up; the operation is list tools. Now
451:31 let's try asking one more time: what tools do you have? It knows to use this node because
451:37 its operation here is list tools, so it's going to be pretty intelligent
451:40 about it. Now it's able to actually call that tool, because we set up that
451:43 environment variable. So let's see what Airbnb responds with as far as what
451:47 tools it can actually use. Cool. "I have access to the following tools:
451:50 Airbnb search and listing details." Now let's add the tool that's
451:56 actually going to execute. So Airbnb: once again we have a credential already
451:58 set up; the operation this time is execute tool. Now we
452:02 have to set up what's going on within this tool. The idea here is that when
452:06 the client responds with "okay, I have Airbnb search and I have Airbnb listing
452:10 details," the agent will then figure out, based on what we asked, which one to
452:15 use, and the agent has to pass that over to this next node, which actually
452:19 executes. So what we want here is that the tool name cannot be fixed, because
452:23 we want to make this dynamic. So I'm going to change this to an expression
452:26 and use the handy $fromAI function here, which basically means we're
452:30 just telling the AI agent: based on what's going on, you
452:34 choose which tool to use and put it in here. So I'm going
452:38 to put, in quotes, "tool," and then just define what that means: in
452:43 quotes, after a comma, "the tool selected." So we'll just leave it as simple as that.
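So the tool name field ends up holding an expression like this; $fromAI is n8n's built-in helper for letting the model fill in a parameter, and the key and description strings are whatever you choose:

  {{ $fromAI('tool', 'the tool selected') }}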
452:47 And what's really cool is that the tool parameters
452:51 will change based on the actual tool selected, because there are
452:54 different schemas, or parameters, you can send over to the different tools. So
452:58 we're going to start off by just hitting this button, which lets the model define
453:02 this parameter. It's going to get back not only what tool it's using, but
453:05 what schema it needs to send over, so it should be intelligent enough to
453:09 figure it out for simple queries. Let's change this name to Airbnb
453:14 execute; I'm going to change this other one to Airbnb tools, and then we'll have
453:18 the agent try to figure out what's going on. Just a reminder: there's no
453:22 system prompt in here. It literally just says "you're a helpful assistant." So we'll
453:25 see how intelligent this stuff is. Okay, so I'm asking it to search for Airbnbs
453:29 in Chicago for four adults. Let's send that off. It should obviously use
453:34 the Airbnb search tool, and then we want to see if it can fill out the parameters
453:37 with the location, but also how many adults are going to be there, because
453:41 earlier all we did was location. So we got a successful response back already.
453:45 Once this finishes up, we should see a few links down here that
453:49 actually link to places. So here we go: luxury designer penthouse, Gold
453:53 Coast. It's an apartment with three bedrooms and eight beds, so that
453:56 definitely fits four guests. You can also see it gives us the price
453:59 per night as well as the rating and some other information.
454:02 So let's click into this one real quick and take a look, make sure it
454:05 actually is in Chicago and has all the stuff. This one takes up to 10 guests.
454:09 So, awesome. And we can see we got five total listings. So, without having to
454:14 configure, you know, "here's the API documentation and here's how we set up
454:18 our HTTP request," we're already able to do some pretty cool Airbnb searches. So
454:21 let's take a look at the Airbnb execute tool. We can see that what it sent over
454:25 was a location as well as a number of adults, which is absolutely perfect. The
454:29 model was able to determine how to format that and send it over as JSON.
454:33 And then we got back our actual search results. Now we're going to do
454:35 something where you actually do need an API key, because for most of these you are
454:39 going to need one. We're going to go ahead and do Brave Search, because
454:42 you can search the web using the Brave Search API. So we're going to click into
454:46 this, and once again we can see the tools here, but we want to
454:49 scroll down and see how you actually configure it. The first step is to go
454:53 to Brave Search and get an API key. You can click on this link right here,
454:56 and you'll be able to sign in, get 2,000 free queries, and grab your
454:59 API key. So I'm going to log in real quick. It may send a code to your
455:03 email to verify; you'll just put in the code, of course, and then we're
455:06 here. As you can see, I've only done one request so far. I'm going to click on
455:10 API keys on the left-hand side, and we're just going to copy this token, and
455:12 then we can put it into our configuration. So let's walk through
455:15 how we're going to do that. I'm going to come in here and add a new
455:18 tool. We're going to add another MCP client tool, and we're going to create a
455:21 new credential, because we're no longer connecting to Airbnb's server; we're
455:26 connecting to the Brave Search server. So, create new credential. Let me just name
455:28 this one real quick so we don't get confused. And then of course we have to
455:32 set up our command, our arguments, and our environment variables. And this is where
455:35 we're going to put our actual API key. Okay, so first things first, the
455:38 command. Coming back to the Brave Search MCP server documentation, we can
455:42 see that we could do Docker, but every time we're
455:46 connecting to one of these in n8n, it's going to be npx. So our command, once again, is npx.
455:51 Copy that, paste it into the command field. Now let's go back and get our
455:53 arguments, which always start with -y. After that, put a space;
455:59 we're going to connect to this MCP server, which is the Brave Search one. And
456:02 you can see that's it. In the Airbnb one, we had to add the robots-
456:05 txt flag; in this one, we don't. So everything is configured a little
456:08 differently, but all you have to do is just read through the command, the
456:11 arguments, and then the environment variables. In this case, unlike the
456:15 Airbnb one, we actually do need an API key. What we're going to do is
456:19 put in, all caps, BRAVE_API_KEY. So, in the environment
456:22 variables, I'm going to change this to an expression just so we can actually
456:27 see it: BRAVE_API_KEY. Then I'm going to put an equals sign, and then it says to put
456:30 your actual API key; that's where we paste in the API key from
456:34 Brave Search. Okay, so I put in my API key. Obviously I'm going to revoke it
456:37 after this video gets uploaded. But now we'll hit save and make sure
456:40 we're good to go.
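So the Brave credential ends up looking like this; the package name here is the one in the MCP reference-servers README, so confirm it there:

  Command:     npx
  Arguments:   -y @modelcontextprotocol/server-brave-search
  Environment: BRAVE_API_KEY=<your key from the Brave dashboard>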
456:44 Cool. Now we're going to actually test this out. I'm going to call this Brave Search tools. Before we add the actual execute tool, I'm just
456:52 going to ask and make sure it works: "what Brave Search tools do you have?"
456:58 And it knows, of course, to hit the Brave Search node, because we gave it a good name.
457:01 It should pull back its different functions, of which I believe
457:04 there are two. Okay, so we have Brave web search and Brave local search. We also
457:09 have, of course, the description of when to use each one and
457:13 the actual schemas to send over. So let's add a tool and make sure it's
457:16 working. We're going to click on the plus and add an MCP client
457:19 tool. We already have our Brave credential connected. We're going to
457:23 change the operation to execute tool, and once again we're going to fill in
457:26 the tool name and the parameters. For the tool name, same exact thing: we're
457:30 going to use $fromAI. Once again, this is just telling the AI what to fill
457:34 in here. We're going to call it "tool" and give it a very brief
457:37 description, "the tool selected." Then we're just going to
457:41 enable the tool parameters to be filled out by the model automatically. The final
457:46 thing is to call this Brave search execute. Cool. There we go. So now we
457:52 have two tools for Airbnb and two for Brave Search, and let's make sure
457:54 the agent can actually distinguish which one to use. So I'm going
458:00 to say: search the web for information about AI agents. We'll send that off.
458:05 Looks like it's going straight to the Brave Search execute, so we may have to
458:08 get into the system prompt and tweak it a little bit. Now it's going back to the
458:12 Brave Search tools to understand, okay, what actions can I take? And now it's
458:15 going back to the Brave Search execute tool, and hopefully this time it'll get
458:19 it right. It looks like it's going to compile an answer right now based on
458:23 its search results, and then we'll see exactly what happened. There we go.
458:27 Oh, wow, it gave us nine different articles. "What are AI
458:32 agents" by IBM; we can click into here to read more, and it takes us straight to
458:36 IBM's article about AI agents. We have one also from AWS; we can click into
458:40 there, there's Amazon. And let's go all the way to the bottom. We also have one
458:44 on agents from Cloudflare. Let's click into here, and we can see it took
458:48 us exactly to the right place. So, super cool. We didn't have to configure any
458:51 sort of API documentation. As you can see in Brave Search, if we wanted to
458:55 connect to this a different way, we would have had to copy this curl
458:58 command and statically set up the different headers and parameters. But with
459:02 this server, we can just hit it right away. Let's take a look at the agent
459:05 logs, though, because we want to see what happened. The first time, it
459:09 tried to go straight to the execute tool, and as you can see, it filled in the
459:12 parameters incorrectly, as well as the actual tool name, because it didn't have
459:16 the information from the server. Then it realized: okay, I need to go here first so
459:20 I can find out what I can do. It tried to use a tool called "web search," as
459:24 you can see earlier. But what it needed was a tool called
459:28 "brave web search." So on the second try back at the execute tool, it got it right,
459:32 and it said brave web search. It also filled out some other information, like how many
459:35 results we're looking for and what the offset is. So if we were to come back
459:41 in here and say "get me one article on dogs," let's see what it does.
459:44 Hopefully it fills in the count as one. Once again, it went
459:48 straight to the execute tool, and, I was going to say, if we had memory in the
459:51 agent it probably would have worked, because it would have seen that it used
459:55 brave web search previously. But there's no memory here, so it did the exact
459:58 same pattern, and we'd basically just have to prompt this agent: hey,
460:03 hit the MCP server to get the tools before you try to execute one. But
460:07 now we can see it found one article; it's just Wikipedia. We
460:10 can click in here and see it's "Dog" on Wikipedia. If we click into the
460:14 actual Brave search execute tool, we can see that what it filled out for the
460:17 query was "dogs," and it also knew to make the count one, rather than the 10
460:21 it used last time. Okay, so something I want you guys to keep in mind is, when you're
460:25 connecting to different MCP servers, the setup will always be the same:
460:29 you'll look in the GitHub repository, you'll look at the command, which will
460:32 be npx; you'll look at the arguments, which will be -y, a space, the name of
460:36 the server, and sometimes more; and after that you'll do
460:39 your environment variable, which is going to be a credential, some sort of
460:43 API key. So here, what we did was ask Airtable to list its actions, and
460:46 in this case, as you can see, it has 13 different actions. Within each
460:49 action, there are different parameters to send over. So when you
460:52 start to scale up to some of these MCP servers that have more actions and more
460:56 parameters, you're going to have to be a little more specific with your
461:00 prompting. As you can see, in this agent there's no prompting going on; it's just
461:03 "you're a helpful assistant." What I'm going to try is: in my Airtable, I have
461:07 a base called contacts, a table called leads, and then we have this one record.
461:11 So let's try asking it to get that record. Okay, so I'm asking it to get
461:14 the records in my Airtable base called contacts, in my table called leads.
461:19 Okay, so we got the error "received tool input did not match expected schema." And
461:23 this is really because what has to happen here is kind of complex: it has
461:26 to first go get the bases to grab the base ID, then it has to go grab
461:31 the tables in that base to get the table ID, and then it has to put that
461:35 together in a request over here. As you can see, if the operation is to list
461:38 records, it has to fill out the base ID and the table ID in order to actually
461:42 get those records back. So that's why it's having trouble.
461:45 A great example of this is within my email agent for my ultimate assistant. In
461:50 order to do something like label emails, we have to send over the message ID of
461:54 the email we want to label, and we have to send over the ID of the label
461:57 to actually add to that message. In order to do those two things, we first
462:01 have to get all emails to get the message ID, and then we have to get
462:04 labels to get the label ID. So it's a multi-step process.
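In code, that dependency chain is easy to see. This sketch uses illustrative tool names (not Airtable's exact MCP tools), the same client shape as the earlier sketches, and hypothetical helper functions for pulling the IDs out of each response:

  type ToolCaller = {
    callTool: (req: { name: string; arguments: Record<string, unknown> }) => Promise<unknown>;
  };

  // Hypothetical helpers that dig the IDs out of each tool's response.
  declare function findBaseId(bases: unknown, baseName: string): string;
  declare function findTableId(tables: unknown, tableName: string): string;

  async function getLeads(client: ToolCaller) {
    // Step 1: we can't list records until we know the base ID...
    const bases = await client.callTool({ name: "list_bases", arguments: {} });
    const baseId = findBaseId(bases, "contacts");

    // Step 2: ...and we can't know the table ID until we've listed tables.
    const tables = await client.callTool({ name: "list_tables", arguments: { baseId } });
    const tableId = findTableId(tables, "leads");

    // Step 3: only now can the real call be made.
    return client.callTool({ name: "list_records", arguments: { baseId, tableId } });
  }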
462:08 And that's why it's a little tough for this agent, with minimal prompting and no
462:12 super robust parameter definitions in here; they're literally just defined by the model. But if I said
462:16 something like "get my Airtable bases," we'll see if it can handle that, because
462:20 that's more of a one-step function. And it looks like it's having trouble,
462:23 because if we click into this action, we can see that the operation of listing
462:28 bases sends over an empty array, so it's having trouble sending
462:31 that data over. Okay, I'm curious though. I went into my Airtable and
462:35 grabbed a base ID. Now I'm going to ask what tables are in this Airtable base
462:39 ID; I gave it the base ID directly, so it won't have to do that list-bases
462:42 step first. And now we can see it actually is able to call the tool, hopefully. So
462:46 it's green, and it probably put in that base ID, and we'll be able to see what
462:50 it's doing here. But this just shows you there are still obviously some
462:53 limitations, and I'm hoping that n8n will make a native MCP server node.
462:57 But look what it was able to do now: it says here are the tables within the
463:01 Airtable base ID that we provided, and these are the four tables, and this is
463:04 correct. So now I'm asking it what records are in the Airtable base ID of
463:09 this one and the table ID of this one, and it should be able to actually use
463:12 its list records tool now to fill in those different parameters. And
463:16 hopefully we see our record back, which should be Robert California. So we
463:20 got a successful tool execution, as you can see. Let's wait for this to pop back
463:24 into the agent and respond to us. So now we have our actual correct
463:28 record: Robert California, Sabre, custom AI solutions, all this kind of stuff. And
463:31 as you can see, that's exactly what we're looking at within our actual
463:34 Airtable base. And so I just thought that would be important to show off here:
463:38 this is really cool, but it's not fully there yet. I definitely
463:41 think it will get there, especially if we get some more native integrations
463:43 with n8n. But I thought that would be a good demo to show the way
463:46 it needs to fill in these parameters in order to get records. And,
463:51 you know, this type of example applies to tons of different things that you'll
463:55 do within MCP servers. So, there's one more thing I want to show you guys real
463:58 quick, just so you will not be banging your head against the wall the way I was
464:01 a couple days ago when I was trying to set up Perplexity. Because you have
464:03 all these different servers to choose from, you may just trust that they're
464:06 all going to be exactly the same and work the same. So when
464:10 I went to set up the Perplexity Ask MCP server, I was pretty excited. The command
464:13 was npx, I put in my arguments, and I put in my environment variable, which was my
464:17 Perplexity API key. You can see I set this up exactly as it should be; my
464:20 API key is in there, and I triple-checked to make sure it was correct. And then when
464:23 I went to test the step, what happened was "could not connect to the MCP
464:27 server: connection closed." After digging into what this meant, because I'd
464:31 set up all these other ones, as you can see in here (I did these, and I have more
464:34 that I've connected to), the reason this one isn't working, I imagine, is
464:38 because on the server side of things, on Perplexity's side, it's either
464:42 undergoing maintenance or it's not fully published yet, and it's not
464:45 anything wrong with the way you're deploying it. So I just wanted to throw
464:48 that out there, because there may be some other ones in this big list that are not
464:52 fully there yet. So if you are experiencing that error, and you know
464:54 that you're filling out the npx command, the arguments, and the
464:58 environment variable correctly, then that's probably why. Don't spend all day
465:01 on it. Just wanted to throw that out there because I had a
465:06 moment the other day. Well, it's been a fun journey. I appreciate you guys
465:08 spending all this time with me. We've got one more section to close off with, and
465:12 this is going to be the biggest lessons I learned over
465:16 my first six months of building AI agents as a non-programmer. Let's go.
465:20 Because everyone's talking about this kind of stuff, there's a lot of hype and
465:22 a lot of noise to cut through. So the first thing I want to do is talk
465:25 about the hard truths about AI agents, and then I'll get into the seven lessons
465:28 I've learned over the past six months of building these things. The
465:32 first one is that most AI agent demos online are just that: demos. The
465:35 kind of stuff you're going to see on LinkedIn, blog posts, YouTube,
465:40 admittedly my own videos as well, these are not going to be production-ready
465:43 builds or production-ready templates that you could immediately plug into
465:46 your own business or try to sell to other businesses. You'll see all sorts
465:50 of cool use cases, like web researchers, salespeople, travel agents. Just for
465:54 some context, these are screenshots of some of the videos I've made on YouTube.
465:57 This one is a content creator. This one is a human-in-the-loop calendar agent.
466:01 We've got a technical analyst. We have a personal assistant with all its agents
466:04 over here, stuff like that. But the reality is that pretty much all of these
466:07 are just going to be proofs of concept. They're meant to open
466:11 everyone's eyes to what this kind of stuff looks like visually, how you can
466:14 spin this kind of stuff up, the fundamentals that go into building these
466:17 workflows. And for me personally, my biggest motivation in making these
466:21 videos is to show you guys how you can actually start to build some really cool
466:24 stuff with zero programming background. So why do I give all those templates
466:27 away for free? It's because I want you guys to download them, hit run, see the
466:31 data flow through, and understand what's going on within each node, rather than
466:34 being able to sell them or use them directly in your business, because
466:37 everyone has different integrations. Everyone's going to have different
466:40 system prompting and different little tweaks they need for an automation
466:44 to be actually high value for them. Besides that, a lot of this is meant to
466:47 live in a testing environment; if you push it into production and
466:50 expose it to all the different edge cases and tons of different users,
466:53 things are going to come through differently, and the automation is going
466:56 to break. And what you need to think about is that even the massive companies in
467:00 the space, like Apple, Google, Amazon, are also having issues with AI
467:04 reliability, like what we saw with Apple Intelligence having to be delayed. So if
467:07 a company like that, with a massive amount of resources, is struggling with
467:10 production-ready deployments, then it's kind of
467:14 unrealistic to think that a beginner or non-programmer in these tools can spin
467:18 up something in a few days that would be fully production-ready. And by that I
467:21 just mean the stuff you see online. You could easily get into n8n, build
467:25 something, test it, and get it really robust in order to sell it; that's not
467:28 what I'm saying at all. It's just that the stuff you see online isn't there yet.
467:32 Now, the second thing is being able to understand the difference between AI
467:36 agents and AI workflows. It's one of those buzzword situations where everyone's
467:39 calling everything an agent, when in reality that's not the truth. A lot
467:42 of times people are calling things AI agents even if they're really just
467:46 an AI-powered workflow. Now, what's an AI-powered workflow? Well, as you can
467:49 see right down here, this is one that I had built out, and it's an AI-powered
467:53 workflow because it's very sequential. As you can see, the data moves from here
467:57 to here to here to here, and it goes down that path every
468:00 time, even though there are some elements in here using AI, like this
468:04 basic chain and this email-writing agent. This has a fixed number of
468:07 steps, and it flows down this path every single time. Whereas something over here,
468:11 like an AI agent, has different tools it's able to call, and based on the
468:14 input, we're not sure if it's going to call each one once, or call
468:17 this one four times, or call this one and then this one. So
468:20 that's more of a non-deterministic workflow, and that's when you need
468:24 something like an AI agent. The difference here is that it's choosing
468:27 its own steps. The process is not predefined, meaning every time we throw in
468:31 an input, we're not sure what's going to happen and what we're going to get back.
468:35 The agent also loops: it calls its tools, observes what happens,
468:38 then reloops and thinks about it again until it realizes, okay, based on
468:43 the input, I've done my job; now I'm going to spit something out. So
468:46 here's just a different visualization: an AI agent with an input,
468:49 where the agent makes decisions and then there's an output, versus an AI workflow where we
468:54 have input, tool one, LLM call, tool two, tool three, output, and it's going
468:58 to happen in that order every single time.
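Here's that same contrast as a sketch; runTool, llm, and chooseNextStep are placeholders, not a real library:

  declare function runTool(name: string, args: unknown): Promise<string>;
  declare function llm(prompt: string): Promise<string>;

  type Decision = { action: string; args?: unknown; output?: string };
  declare function chooseNextStep(context: string): Promise<Decision>;

  // AI workflow: the same fixed steps run in the same order every time.
  async function aiWorkflow(formResponse: string): Promise<string> {
    const cleaned = await runTool("clean_data", formResponse);
    const email = await llm(`Write a personalized email: ${cleaned}`);
    await runTool("update_crm", cleaned);
    await runTool("send_email", email);
    return "done";
  }

  // AI agent: the model picks its own next step, observes the result,
  // and loops until it decides it's finished. Non-deterministic by design.
  async function aiAgent(input: string): Promise<string> {
    let context = input;
    while (true) {
      const decision = await chooseNextStep(context);
      if (decision.action === "finish") return decision.output ?? "";
      const observation = await runTool(decision.action, decision.args);
      context += `\n${observation}`;
    }
  }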
469:02 And the truth is that most problems don't require true AI agents; they can be solved by building
469:07 a workflow that's enhanced with AI. A common mistake, and I think it's just
469:09 because of all the hype around AI agents, is that people opt
469:13 straight away to set up an agent. Like in this example right here: let's say
469:16 the input is a form trigger where we're getting a form response. We're using
469:19 this tool to clean up the data. We're using this LLM call (so it's an AI-
469:23 enhanced workflow) to actually write a personalized email. We're using this to
469:26 update the CRM, and then we're using this to send the email, and then we get the
469:30 output back as the human. We could also set this up as an AI agent, where we're
469:34 getting the form response, we're sending the agent the information, and it can
469:37 choose: okay, first I'm going to clean the data, and then I'm going to come back
469:40 here and think about it, and then I'm going to update the CRM, then
469:43 create an email, then send the email, and then
469:46 output and respond to the human and tell it that we did
469:50 that job for you. But because this process is pretty linear, it's going to
469:53 be a lot more consistent if we do a workflow, and it's going to be easier to
469:56 debug. Whereas over here, the agent may mess up some tool calls and do things in
470:00 the wrong order. So it's better to just structure it out like that. And if we
470:03 start by using these no-code tools to build AI workflows first, then
470:07 we can scale up to agents once we need more dynamic decision-making and tool
470:11 calling. Okay, but that's enough of the harsh truths. Let's get into the seven
470:15 most important lessons I've learned over six months of building AI agents as
470:19 a non-programmer. The first one is to build workflows first. And notice I
470:23 don't even say AI workflows here; I say workflows. Over the past six months
470:27 of building out these systems and hopping on discovery calls with clients,
470:30 where I'm trying to help them implement AI into their business processes, we
470:34 always start by having them explain some of their pain points,
470:38 and we talk through processes that are repetitive and processes that are a big
470:42 time suck. A lot of times they'll come in thinking they need an
470:45 AI agent or two, and when we really start to break down the process, I
470:48 realize: this doesn't need to be an agent; this could be an AI workflow. And
470:51 then we break down the process even more, and I'm like: we don't even need AI
470:55 here. We just need rule-based automation, and we're going to send data from A to B
470:59 and just do some manipulation in the middle. So let's look at this flowchart,
471:02 for example. Here we have a form submission. We're going to store the data,
471:05 route it based on whether it's sales, support, or general, create
471:09 that ticket or notification, send an automated acknowledgement, and then
471:13 end the process. So this could be a case where we don't even need AI. If
471:16 the forms coming through already have these three predefined
471:19 types, either sales, support, or general, that's a really easy
471:24 rules-based automation: does inquiry type equal sales? If yes, we
471:28 go this way, and so on and so forth. Now, maybe there's AI we need over here to
471:32 actually send that auto-acknowledgement, or it could be as simple as an automated
471:35 message we define based on the inquiry type. Now, if the
471:41 form submission is just a block of text, and we need an AI to read it,
471:45 understand it, and decide if it's sales, support, or general, then we would need
471:49 AI right here. That's where we have to assess what the data looks like
471:52 coming in and what we need to do with it. So it's always important
471:56 to think: do we even need AI here? Because a lot of times, when we're trying
471:59 to cut off some of that low-hanging fruit, when we realize we're doing
472:02 some of this stuff too manually, we don't even need AI yet. We're just going
472:06 to create a few workflow automations, and then we can start getting more advanced
472:09 by adding AI in certain steps. So hopefully this graphic adds a little
472:12 more color here. On the left we're looking at a rule-based filter,
472:16 and on the right we're looking at an AI-powered filter. Let's take a look at
472:20 the left one first. We have incoming data; let's just say we're routing
472:24 it based on whether someone's budget is greater than 10 or less than 10.
472:28 Hopefully it's greater than 10. So the filter here is: is X greater than 10?
472:34 If yes, we send it up towards tool one. If no, we send it down
472:37 towards tool two. And those are the only two options, because those are the only
472:41 two buckets a number can fit into. Unless, I guess, it's exactly equal to 10;
472:45 I probably should have made this sign greater-than-or-equal-to, but anyways,
472:48 you guys get the point. Now, over here, if we're looking at an AI-powered
472:51 filter, we're using a large language model to evaluate the
472:55 incoming data, answer some sort of question, and then route it based on
473:00 criteria. So with this incoming data, the AI (not us, sorry) is
473:04 looking at what type of email this is, because that uses some element of
473:09 reasoning or logic or decision-making, something that actually needs to be able
473:12 to read the context and understand the meaning of what's coming through in
473:15 order to make that decision. This is where, before AI, we would have to have a
473:19 human in the loop. We'd have to have a human look at this data and analyze
473:22 which way it's going to go, rather than being able to write some sort of code or
473:28 filter to do so, because it's more than just what words exist; it's
473:31 about what it means when those words come together in sentences and paragraphs.
473:35 AI is able to read that and understand it, and now it can
473:39 decide if it's a complaint, billing, or promotion, and then,
473:42 based on what type it is, send it off to a different tool to take the next
473:46 action.
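As a sketch, the two filters look like this; classifyEmail is a placeholder for whatever model call you'd use:

  // Left side of the graphic: a pure rule, no AI needed.
  function routeByBudget(budget: number): "tool1" | "tool2" {
    return budget >= 10 ? "tool1" : "tool2"; // >= also covers exactly 10
  }

  // Right side: the routing criterion is meaning, so an LLM does the reading.
  declare function classifyEmail(
    body: string
  ): Promise<"complaint" | "billing" | "promotion">;

  async function routeEmail(body: string): Promise<string> {
    const type = await classifyEmail(body);
    return type; // hand off to a different tool based on the classification
  }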
473:51 may not even need an agent at all. So why would you add more complexity if you
473:54 don't have to? And also if you start to learn the fundamentals of workflow
473:59 automation, data flow, logic, creative problem solving, all that kind of stuff,
474:02 it's going to make it so much easier when you decide to scale up and start
474:05 building out these multi-aggentic systems as far as, you know, sending
474:09 data between workflows and understanding routing. Your life's going to be a lot
474:13 easier. So only use AI where it actually is going to provide value. And also
474:18 using AI and hitting an LLM isn't free typically. And I mean if you're
474:22 self-hosting, but anyways, it's not free. So why would you want to spend
474:24 that extra money in your workflow if you don't have to? You can scale up when you
474:28 need the system to decide the steps on its own, when you need it to handle more
474:32 complex multi-step reasoning, and when you need it to control usage dynamically.
474:36 And I highlighted those three words because that's very like human sounding,
474:41 right? Decide, reason, dynamic.
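
To make the two filter styles concrete, here is a minimal Python sketch of both side by side; route_by_budget is the pure rule-based path, while route_by_intent hands the decision to a model. The llm_classify helper is hypothetical, a stand-in for whatever chat model call your platform exposes.

def route_by_budget(budget: float) -> str:
    # Rule-based filter: deterministic, free, no AI needed.
    # Using >= so a budget of exactly 10 doesn't fall through the cracks.
    return "tool_one" if budget >= 10 else "tool_two"

def llm_classify(text: str, labels: list[str]) -> str:
    # Hypothetical stand-in for a real LLM call (an n8n AI node, an API, etc.).
    return labels[0]

def route_by_intent(email_body: str) -> str:
    # AI-powered filter: the model reads the email and picks a bucket.
    label = llm_classify(email_body, ["complaint", "billing", "promotion"])
    return {"complaint": "support_tool",
            "billing": "billing_tool",
            "promotion": "marketing_tool"}[label]

Okay, moving on to lesson number two. This is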
474:45 to wireframe before you actually get in there and start building. One of the
474:48 biggest mistakes that I made early on and that I see a ton of people making
474:51 early on is jumping straight into their builder, whatever it is, and trying to
474:56 get the idea in their head onto a canvas without mapping it out at all. And this
475:00 causes a lot of problems. So, the three main ones here are: you start to
475:03 create these messy, overcomplicated workflows because you haven't thought
475:06 out the whole process yet. You're going to get confused over where you actually
475:10 need AI and where you don't. And you may end up spending hours and hours
475:15 debugging, trying to revise um all this kind of stuff because you didn't
475:18 consider either a certain integration up front or a certain functionality up
475:21 front or you didn't realize that this could be broken down into different
475:24 workflows and it would make the whole thing more efficient. I can't tell you
475:27 how many times when I started off building these kind of things that I got
475:31 almost near the end and I realized I could have done this with like 20 fewer
475:34 nodes or I could have done this in two workflows and made it a lot simpler. So,
475:37 I end up just deleting everything and restarting. So what we're looking at
475:40 right here are different Excalidraw wireframes that I had done. As you can
475:43 see, I kind of do them differently each time. There's not really a, you know,
475:47 defined way that you need to do this correctly or correct color coding or
475:51 shapes. The idea here is just to get your thoughts from your head onto a
475:56 paper or onto a screen and map it out before you get into your workflow
476:00 builder because then in the process of mapping things out, you're going to
476:02 understand, okay, there may be some complexities here or I need all of this
476:05 functionality here that I didn't think of before. And this isn't really to say
476:08 that there's one correct way to wireframe. As you can see, sometimes I
476:12 do it differently. Um, there's not like a designated schema or color type or
476:16 shape type that you should be using. Whatever makes sense to you really. But
476:19 the idea here is even if you don't want to wireframe and visually map stuff out,
476:23 it's just about planning before you actually start building. So, how can you
476:28 break this whole project down? You know, a lot of people ask me, I have an input
476:31 and I know what that looks like and I know what I want to get over here, but
476:34 in between I have no idea what that looks like. So, how can we break this
476:39 whole project into workflows? And each workflow is going to have like
476:42 individual tasks within that workflow. So, breaking it down to as many small
476:46 tasks as possible makes it a lot easier to handle. Makes it a lot less
476:49 overwhelming than looking at the entire thing at once and thinking, how do I get
476:53 from A to Z? And so, what that looks like to either wireframe or to just
476:57 write down the steps is you want to think about what triggers this workflow.
477:01 How does this process start? And what does the data look like coming in that
477:05 triggers it? From there, how does the data move? Where does it go? Am I able
477:09 to send it down one path? Do I have to send it off different ways based on some
477:13 conditional logic? Do I need some aspect of AI to make decisions based on the
477:17 different types of data coming through? You know, what actions have to be taken?
477:21 Where do we need RAG or API calls involved? Where do we need to go out
477:26 somewhere to get more external data to enrich the context going through to the
477:29 next LLM? What integrations are involved? So, if you ask yourself these
477:33 kind of questions while you're writing down the steps or while you're
477:36 wireframing out the skeleton of the build, you are going to answer so many
477:39 more questions, especially if it comes to, you know, you're trying to work with
477:42 a client and you're trying to understand the scope of work and understand what
477:46 the solution is going to look like. If you wireframe it out, you're going to
477:49 have questions for them that they might not have thought of either, rather
477:52 than you guys agree on a scope of work and you start building this thing out
477:54 and then all of a sudden there's all these complexities. Maybe you priced way
477:58 too low. Maybe you don't know the functionality. And the idea here is just
478:02 to completely align on what you're trying to build and what the client
478:06 wants or what you're trying to build and what you're actually going to do in your
478:09 canvas. So there's multiple use cases, but the idea here is that it's just
478:13 going to be so so helpful. And because you're able to break down every single
478:17 step and every task involved, you'll have a super clear idea on if it's going
478:20 to be an agent or if it's going to be a workflow because you'll see if the stuff
478:23 happens in the same order or if there's an aspect of decision-making involved. So,
478:28 when I'm approaching a client build or an internal automation that I'm trying
478:31 to build for myself, no more than half my time is spent in
478:36 the builder. Pretty much upfront I'm doing all of the wireframing and
478:38 understanding what this is going to look like because the goal here is that we're
478:42 basically creating a step-by-step instruction manual of how to put the
478:45 pieces together. You should think of it as if you're putting together a Lego
478:48 set. So, you would never grab all the pieces from your bag of Legos, rip it
478:52 open, and just start putting them together and trying to figure out
478:55 what goes where. You're always going to have that manual right next to you where
478:58 you're looking at like basically the step-by-step instructions and flipping
479:01 through. So, that's what I do with my two monitors. On the left, I have my
479:04 wireframe. On the right, I have my n8n, and I'm just looking back and forth and
479:07 connecting the pieces where I know the integrations are supposed to be. You
479:10 need a clear plan. Otherwise, you're not going to know how everything fits
479:14 together. It's like you were trying to, you know, build a 500-piece puzzle, but
479:17 you're not allowed to look at the actual picture of a completed puzzle, and
479:20 you're kind of blindly trying to put them together. You can do it. It can
479:24 work, but it's going to take a lot longer. Moving on to number three, we
479:27 have context is everything. The AI is only going to be as good as the
479:31 information that you provide it. It is really cool. The tech has come so far.
479:34 These AI models are super super intelligent, but they're pre-trained.
479:37 So, they can't just figure things out, especially if they're operating within a
479:41 specific domain where there's, you know, industry jargon or your specific
479:45 business processes. It needs your subject matter expertise in order to
479:48 actually be effective. It doesn't think like we do. It doesn't have past
479:52 experiences or intuition, at least right away. We can give it stuff like that. It
479:56 only works with the data it's given. So, garbage in equals garbage out. So, what
479:59 happens if you don't provide high quality context? Hallucination. The AI
480:03 is going to start to make up stuff. Tool misuse. It's not going to use the tools
480:06 correctly and it's going to fail to achieve your tasks that you need it to
480:10 do. And then vague responses. If it doesn't have clear direction and a clear
480:13 sight of like what is the goal? What am I trying to do here? It is just not
480:16 going to be useful. It's going to be generic and it's going to sound very
480:20 obviously like it came from ChatGPT. So, a perfect example here is the
480:24 salesperson analogy. Let's say you hire a superstar salesman who is amazing,
480:29 great sales technique. He understands how to build rapport, closing
480:32 techniques, communication skills, just like maybe you're taking a GPT-4o model
480:36 out of the box and you're plugging it into your agent. Now, no matter how good
480:41 that model is or the salesperson is, there are going to be no closed sales
480:45 without the subject matter expertise, the business process knowledge, you
480:48 know, understanding the pricing, the features, the examples, all that kind of
480:52 stuff. So, the question becomes, how do you actually provide your AI agents with
480:56 better context? And there are three main ways here. The first one is within your
480:59 agent you have a system prompt. So this is kind of like the fine-tuning of the
481:03 model where we're training it on this is your role. This is how you should
481:05 behave. This is what you're supposed to do. Then we have the sort of memory of
481:09 the agent which is more of the short-term memory we're referring to
481:13 right here where it can understand like the past 10 interactions it had with the
481:17 user based on the input stuff like that. And then the final aspect which is very
481:21 very powerful is the RAG aspect, where it's able to go retrieve information
481:24 that it doesn't currently have but it's able to understand what do I need to go
481:28 get and where can I go get it. So it can either hit different APIs to get
481:31 real-time data or it can hit its knowledge base that hopefully is syncing
481:35 dynamically and is updated. So either way it's reaching outside of its system
481:40 prompt to get more information from these external sources. So anyways
481:44 preloaded knowledge. This is basically where you tell the agent its job, its
481:48 description, its role. As if on day one of a summer internship, you told the
481:52 intern, "Okay, this is what you're going to do all summer." You would define its
481:55 job responsibilities. You would give key facts about your business, and you would
481:59 give it rules and guidelines to follow. And then we move on to the user specific
482:02 context, which is just sort of its memory based on the person it's
482:06 interacting with. So, this reminds the AI what the customer has already asked,
482:09 previous troubleshooting steps that have been taken, maybe information about the
482:14 customer. And without this user-specific context and memory, the AI is going to ask
482:17 the same questions over and over. It's going to forget what's already been
482:21 talked about, and it'll probably just annoy the end user with repetitive
482:26 information and not very tailored information. So we're able to store
482:29 these past interactions so that the AI can see it before it responds and before
482:33 it takes action so that it's actually more seamless like a human conversation.
482:36 It's more natural and efficient. And then we have the aspect of the real-time
482:40 context. This is because there's some information that's just too dynamically
482:44 changing or too large to fit within the actual system prompt of the agent. So
482:48 instead of relying on this predefined knowledge, we can retrieve this context
482:51 dynamically. So maybe it's as simple as we're asking the agent what the weather
482:55 is. So, it hits that weather API in order to go access real-time current
482:58 information about the weather. It pulls it back and then it responds to us. Or
483:01 it could be, you know, we're asking about product information within a
483:04 database. So, it could go hit that knowledge base that has all of our
483:07 product information and it will search through it, look for what it needs, and
483:11 then pull it back and then respond to us with it. So, that's the aspect of RAG,
483:15 and it's super, super powerful.
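
To make those three layers concrete, here is a rough Python sketch of how they might be assembled before each model call. It's a hedged illustration: search_knowledge_base and the message format are hypothetical stand-ins for whatever your agent platform actually provides.

def search_knowledge_base(query: str) -> str:
    # Hypothetical retriever: in practice this hits a vector DB or an API.
    return "Product FAQ snippets relevant to: " + query

def build_agent_context(user_message: str, chat_history: list[dict]) -> list[dict]:
    # 1) Preloaded knowledge: the system prompt defines role, rules, and goals.
    system_prompt = ("You are a customer support agent for Acme Blankets. "
                     "Answer only from the provided context; escalate if unsure.")
    # 2) User-specific context: short-term memory, e.g. the last 10 turns.
    recent_history = chat_history[-10:]
    # 3) Real-time context (RAG): fresh data the system prompt can't contain.
    retrieved = search_knowledge_base(user_message)
    messages = [{"role": "system", "content": system_prompt},
                {"role": "system", "content": "Context:\n" + retrieved}]
    messages += recent_history
    messages.append({"role": "user", "content": user_message})
    return messages

Okay. And this is a great segue from RAG. Now,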
483:19 we're talking about vector databases and when not to use a vector database. So, I
483:23 think something similar happened with vector databases as happened with AI
483:26 agents: it was just some cool magic buzzword, and it
483:31 sounded like almost too good to be true. So, everyone just started overusing them
483:35 and overusing the term. And that's definitely something that I have to
483:39 admit that I fell victim to because when I first started building this stuff, I
483:42 was taking all types of data, no matter what it was, and I was just chucking it
483:46 into a vector database and chatting with it. And because, you know, 70% of the
483:50 time I was getting the right answers, I was like, this is so cool. Because,
483:53 you know, as you can see based on this illustration, it is sort of that
483:58 multi-dimensional data representation. It's a multi-dimensional space where the
484:02 data points that you're storing are stored as these little vectors, these
484:05 little dots everywhere. And they're not just placed in there. They're placed in
484:09 there intelligently, because the actual chunk that you're putting
484:13 into the vector database is placed based on its meaning. So, it's embedded
484:17 based on all these numerical representations of data. As you can see,
484:21 like right up here, this is what the sort of um embedding dimensions look
484:25 like. And each point has meaning. And so, it's placed somewhere where other
484:29 things are placed that are similar. They're placed near them. So, over here
484:32 we have like, you know, animals, cat, dog, wolf, those are placed similarly.
484:35 We have like fruits over here, but also like tech stuff because Google's here
484:39 and Apple, which isn't just the fruit, it's also the tech brand. So, you know,
484:43 it also kind of shifts as you embed more vectors in there. So, it's just
484:46 always changing. It's very intelligent, and the agent's able to scan
484:50 everything and grab back all the chunks that are relevant really quickly. And
484:53 like I said, it's just kind of one of those buzzwords that's super cool.
484:58 However, even though it sounds cool, after building these systems for a
485:01 while, I learned that vector databases are not always necessary for most
485:05 business automation needs. If your data is structured and it needs exact
485:09 retrieval, which a lot of times company data is very structured and you do need
485:13 exact retrieval, a relational database is going to be much better for that use
485:17 case. And you know, just because it's a buzzword, that's exactly what it is, a
485:21 buzzword. So that doesn't always mean it's the best tool for the job. So
485:24 because in college I studied business analytics, I've had a little bit of a
485:28 background with like databases, relational databases, and analytics. Um,
485:32 but if you don't really understand the difference between structured and
485:35 unstructured data and what a relational database is, we'll go over it real
485:39 quick. Structured data is basically anything that can fit into rows and
485:44 columns because it has an organized sort of predictable schema. So in this
485:47 example, we're looking at customer data and we have two different tables and
485:51 this is relational data because over here we have a customer ID column. So
485:56 customer ID 101 is Alice and we have Alice's email right here. Customer ID
486:00 102 is Bob. We have Bob's email and then we have a different table that is able
486:04 to relate back to this customer lookup table, because we match on the
486:08 customer ID field. Anyways, this looks like an orders table. So we have order
486:12 one by customer ID 101 and the product was a laptop. And we may think okay well
486:16 we're looking at order one. Who was that? We can relate it back to this
486:20 table based on the customer ID and then we can look up who that user was. So
486:23 there's a lot of use cases out there. Like I said, you know, a lot of business
486:27 data is going to be structured like user profiles, sales records, you know,
486:32 invoice details, all this kind of stuff. You know, even if it's not a relational
486:35 aspect of linking two tables together, if it's structured data, which is going
486:39 to be, you know, a lot of chart stuff, number stuff, um, Excel sheets, Google
486:44 Sheets, all that kind of stuff, right? And if it's structured data, it's going
486:47 to be a lot more efficient to query it using SQL rather than trying to
486:51 vectorize it and put it into a vector database for semantic search.
486:56 So, as a non-programmer, you know, I'm sure you've been
486:59 hearing about SQL querying, and maybe you don't understand exactly what it is. This is
487:03 what it is, right? So we're almost kind of using natural language to extract
487:07 information, but we could have, you know, half a million records in a table.
487:10 And so it's just a quicker way to actually filter through that stuff to
487:14 get what we need. So in this case, let's say the SQL query we're doing is based
487:19 on the user question of can you check the status of my order for a wireless
487:23 mouse placed on January 10th. On the left, we have an orders table. And this
487:26 is the information we need. These are the fields, but there may be 500,000
487:30 records. So we have to filter through it really quickly. And how we would do this
487:33 is we would say, okay, first we're going to do a select statement, which just
487:36 means, okay, we just want to see order ID, order date, order status, because
487:39 those are the only columns we care about. We want to grab it from the
487:42 orders table. So, this table and then now we set up our filters. So, we're
487:46 just looking for only rows where product name equals wireless mouse because
487:50 that's the product she bought. And then the order date is January 10,
487:57 2024. So, we're just saying whenever these two conditions are met, that's
488:01 when we want to grab those records and actually look at them. So, that's an
488:04 example of what a SQL query is doing.
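
If you want to run that same idea end to end, here's a small self-contained Python sketch using the built-in sqlite3 module. The tables and rows mirror the example above; your own column names will differ.

import sqlite3

# In-memory database with the two example tables.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (customer_id INTEGER, name TEXT, email TEXT);
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER,
                         product_name TEXT, order_date TEXT, order_status TEXT);
    INSERT INTO customers VALUES (101, 'Alice', 'alice@example.com'),
                                 (102, 'Bob', 'bob@example.com');
    INSERT INTO orders VALUES (1, 101, 'Laptop', '2024-01-05', 'Shipped'),
                              (2, 101, 'Wireless Mouse', '2024-01-10', 'Processing');
""")

# The query from the walkthrough: only the columns we care about,
# filtered to the product and date the customer mentioned.
rows = db.execute("""
    SELECT order_id, order_date, order_status
    FROM orders
    WHERE product_name = 'Wireless Mouse' AND order_date = '2024-01-10'
""").fetchall()
print(rows)  # [(2, '2024-01-10', 'Processing')]

# And the relational part: joining orders back to the customer lookup table.
rows = db.execute("""
    SELECT o.order_id, c.name
    FROM orders o JOIN customers c ON o.customer_id = c.customer_id
""").fetchall()
print(rows)  # [(1, 'Alice'), (2, 'Alice')]

And then on the other side of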
488:08 things, we have unstructured data. Usually the best use case for
488:11 unstructured data going into a vector database, based on my experience, is
488:16 just vectorizing a ton of text. So big walls of text, chunking them up,
488:19 throwing them into a vector database, and they're placed, you know, based on
488:21 the meaning of those chunks, and then can be grabbed back semantically,
488:26 intelligently by the agent. But anyways, this is a quick visualization that I
488:29 made right over here. Let's say we have um a ton tons of PDFs, and they're just
488:33 basically policy information. We take that text, we chunk it up. So, we're
488:36 just splitting it based on the characters within the actual content.
488:40 And then each chunk becomes a vector, which is just one of these dots
488:43 in this three-dimensional space. And they're placed in different areas, like
488:48 I said, based on the actual meaning of these chunks. So, super cool stuff,
488:52 right? So then when the agent wants to, you know, look in the vector database to
488:55 pull some stuff back, it basically makes a query and vectorizes that query
488:59 because it will be placed near other things that are related and then it will
489:02 grab like everything that's near it and that's how it pulls back if we're doing
489:05 like a nearest neighbor search. But I don't want to get too technical here. I
489:09 wanted to show an example of like why that's beneficial. So on the left we
489:15 have product information about blankets and on the right we also have product
489:18 information about blankets, and we just decided the one on the right is a vector
489:22 database and the one on the left is a relational database. So let's say we hooked this
489:27 up to, you know, a customer chatbot on a website, and the customer asked, "I'm
489:32 looking for blankets that are fuzzy." Now, if it was a relational database, the
489:36 agent would be looking through and querying for, you know, where the
489:40 description contains the word fuzzy, or maybe the material contains the word
489:44 fuzzy. And because there's no instances of the word fuzzy right here, we may get
489:49 nothing back. But on the other side of things, when we have the vector
489:52 database, because each of these vectors are placed based on the meaning of their
489:56 description and their material, the agent will be able to figure out, okay,
490:00 if I go over here and I pull back these vectors, these are probably fuzzy
490:03 because I understand that it's cozy fleece or it's um, you know, handwoven
490:08 cotton. So that's like why there's some extra benefits there because maybe it's
490:12 not a word for word match, but the agent can still intelligently pull back stuff
490:15 that's similar based on the actual context of the chunks and the meaning.
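
Here's a toy Python sketch of that difference. The embed function is a hypothetical stand-in for a real embedding model (real ones return hundreds of dimensions per chunk); the point is just that retrieval ranks chunks by similarity of meaning instead of exact keyword matches.

import math

def embed(text: str) -> list[float]:
    # Hypothetical embedding model; three made-up dimensions are enough
    # to show the idea.
    fake = {"fuzzy blanket":                 [0.90, 0.10, 0.20],
            "cozy fleece throw, ultra soft": [0.85, 0.15, 0.25],
            "handwoven cotton blanket":      [0.70, 0.20, 0.30],
            "stainless steel water bottle":  [0.05, 0.90, 0.40]}
    return fake[text]

def norm(v: list[float]) -> float:
    return math.sqrt(sum(x * x for x in v))

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (norm(a) * norm(b))

chunks = ["cozy fleece throw, ultra soft",
          "handwoven cotton blanket",
          "stainless steel water bottle"]
query = "fuzzy blanket"

# Keyword search finds nothing: no chunk contains the word "fuzzy".
print([c for c in chunks if "fuzzy" in c])  # []

# Semantic search still ranks the fleece throw first.
ranked = sorted(chunks, key=lambda c: cosine(embed(query), embed(c)), reverse=True)
print(ranked[0])  # cozy fleece throw, ultra soft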
490:20 Okay, moving on to number five. Why prompting is critical for AI agents. Um,
490:24 we already talked about it a little bit, I guess, in the context is everything
490:28 section because prompting is giving it more context, but this should be a whole
490:34 lesson in itself because it is truly an art. And you have to find that fine line
490:38 between you don't want to over-prompt it and you want to minimize your token
490:40 usage, but you also want to give it enough information. But, um, when people
490:44 think of prompting, they think of ChatGPT, as you can see right here,
490:47 where you have the luxury of talking to chat, it's going to send you something
490:50 back. You can tell it, hey, make that shorter, or hey, make it more
490:53 professional. It'll send it back and you can keep going back and forth and making
490:57 adjustments until you're happy with it and then you can finally accept the
491:00 output. But when we're dealing with AI agents and we're trying to make these
491:04 systems autonomous, we only have one shot at it. So, we're going to put in a
491:07 system prompt right here that the agent will be able to look at every time
491:10 there's like an input and we have to trust that the output and the actions
491:14 taken before the output are going to be high quality. And so, like I said, this
491:18 is a super interesting topic and if you want to see a video where I did more of
491:20 a deep dive on it, you can check it out. I'll tag it right here. Um, where I
491:24 talked about like this lesson, but the biggest thing I learned building these
491:28 agents over the past six months was reactive prompting is way better than
491:33 proactive prompting. Admittedly, when I started prompting, I did it all wrong. I
491:38 was lazy and I would just like grab a custom GPT that I saw someone use on
491:42 YouTube for, you know, a prompt generator that generates the most
491:45 optimized prompts for your AI agents. I think that that's honestly a bunch of
491:49 garbage. I even have created my own AI agent system prompt architect and I
491:52 posted it in my community and people are using it, but I wouldn't recommend
491:57 using it, to be honest. Um, nowadays I think that the best practice is to write
492:01 all of your prompts from scratch by hand from the beginning and start with
492:04 nothing. So, that's what I meant by saying reactive prompting. Because if
492:07 you're grabbing a whole, you know, let's say you have 200 lines of prompts and
492:10 you throw it in here into your system prompt and then you just start testing
492:14 your agent, you don't know what's going on and why the agent's behaving as it
492:19 is. You could have an issue pop up and you add a different line in the system
492:23 prompt and the issue that you originally were having is fixed, but now a new
492:26 issue's popped up, and you're just going to be banging your head against the wall
492:30 trying to debug this thing by taking out lines, testing, adding lines, testing.
492:34 It's just going to be such a painful process when in reality what you should
492:38 do is reactive prompt. So start with nothing in the system prompt. Give your
492:42 agent a tool and then test it. Throw in a couple queries and see if you're
492:45 liking what's coming back. You're going to observe that behavior and then you
492:49 have the ability to correct the system prompt reactively. So based on what you
492:53 saw, you can add in a line and say, "Hey, don't do that." Or, you know, this
492:57 worked. Let's add another tool and add another prompt now or another line in
493:01 the prompt. Because what we know right now is that it's working based on what
493:06 we have. That way if we do add a line and then we test and then we observe the
493:10 behavior and we see that it broke, we know exactly what broke this automation
493:13 and we can pinpoint it rather than if we threw in a whole pre-generated system
493:17 prompt. So that's the main reason why I don't do that anymore. And then it's
493:22 just that loop of test, reprompt, test again, reprompt. Um, and what's super
493:27 cool about this is that you can basically hard-prompt your agent with
493:31 things in the system prompt because you're able to show it examples of, you
493:35 know, hey, I just asked you this and you took these steps. That was wrong. Don't
493:39 do that again. This is what you should have done. And basically, if you give it
493:42 that example within the system prompt, you're training this thing to not behave
493:45 like that. And you're only improving the consistency of your agent's performance.
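
In code terms, that test, reprompt, test again loop looks something like this. It's a hedged sketch: run_agent stands in for executing your actual workflow, and the appended lines are the reactive corrections you add one at a time after observing a bad run.

# Reactive prompting: start with nothing, test, observe, correct, repeat.
system_prompt_lines: list[str] = []   # start empty, not with 200 canned lines

def run_agent(system_prompt: str, query: str) -> str:
    # Hypothetical stand-in: in practice this executes the agent/workflow.
    return f"(agent output for: {query!r})"

test_queries = ["What's the status of order 2?",
                "Do you sell fuzzy blankets?"]

for query in test_queries:
    print(query, "->", run_agent("\n".join(system_prompt_lines), query))

# Observed a bad tool call? Add exactly one corrective line, then retest,
# so you always know which line changed the behavior. For example:
system_prompt_lines.append(
    "When asked about order status, always call the orders tool; never guess.")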
493:50 So, the key elements of a strong AI agent prompt. And it isn't like every
493:54 single time these are the five things you should have, because every agent's
493:57 different. For example, if you're creating a content creation agent, you
494:01 wouldn't need a tool section, really, if it doesn't have any tools.
494:03 You'd just be prompting it about its output and about its role. But anyways,
494:07 the first one that we're talking about is role. This is sort of just like
494:10 telling the AI who it is. So this could be as simple as like you're a legal
494:13 assistant specializing in corporate law. Your job is to summarize contracts in
494:18 simple terms and flag risky clauses. Something like that. It gives the AI
494:22 clear purpose and it helps the model understand the tone and the scope of its
494:26 job. Without this, the AI is not going to know how to frame responses and
494:28 you're going to get either very random outputs or you're going to get very
494:32 vague outputs that are very clearly generated by AI. Then of course you have
494:36 the context which is going to help the agent understand, you know, what is
494:39 actually coming in every time because essentially you're going to have
494:41 different inputs every time even though the system prompt is the same. So saying
494:44 like this is what you're going to receive, this is what you're going to do
494:48 with it, um this is your end goal. So that helps tailor the whole process and
494:51 make it more seamless as well. That's one common mistake I actually see with
494:54 people's prompting when they start is they forget to define what are you going
494:57 to be getting every time? Because the agent's going to be getting a
494:59 ton of different emails or maybe a ton of different articles, but it needs to
495:02 know, okay, this information that you're throwing at me, what is it? Why am I
495:06 getting it? Then of course the tool instructions. So when you're building a
495:09 tools agent, this is the most important thing in my mind. Yes, it's good to add
495:13 rules and show, like, when to use each thing, but having an actual section for
495:17 your tools is going to increase the consistency a lot. At least that's what
495:21 I found because this tells the AI exactly what tools are available, when
495:26 to use them, how they work. Um, and this is really going to ensure correct tool
495:29 usage rather than the AI trying to go off of like these sort of guidelines
495:34 because it's a non-deterministic workflow, and trying to guess
495:39 which one will do what. So yeah, have a tool section and define your tools.
495:42 Then you've got your rules and constraints and this is going to help
495:45 prevent hallucination. It's going to help the agent stick to sort of like a
495:49 standard operating procedure. Now, you just have to be careful here because you
495:52 don't want to say something like do all of these in this order every time
495:56 because then it's like why are you even using an agent? You should just be using
495:59 a workflow, right? But anyways, just setting some foundational like if X do
496:06 Y, if Z do A, like that sort of thing. And then finally, examples, which I
496:10 think are super super important, but I would never just put these in here
496:13 blind. I would only use examples to directly counter and directly correct
496:18 something that's happened. So what I alluded to earlier with the hard
496:20 prompting. So let's say you give the agent an input. It calls tool one and
496:24 then it gives you an output that's just incorrect, completely incorrect. In
496:28 the example you give it, you could show: okay, here's the input I
496:30 just gave you. Now here's what you should have done: call tool two and then
496:34 call tool three and then give me the output. And then it knows like, okay,
496:37 that's what I did. This is what I should have done. So if I ever get an input
496:40 similar to this, I can just call these two tools because I know that's an
496:44 example of like how it should behave. So hard prompting is really really going to
496:48 come in handy and not just in the examples but also just with the rest of
496:52 your system prompt.
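
For reference, here's one way those five sections could be laid out in a system prompt. This is a hedged, illustrative template, not an official format; the headings and wording are just one sensible arrangement.

SYSTEM_PROMPT = """\
# Role
You are a support agent for Acme Blankets. You resolve order questions.

# Context
Each input is one customer email. Your end goal is a short, accurate reply.

# Tools
- orders_lookup: use when the customer asks about an order's status.
- product_search: use when the customer asks about products or materials.

# Rules
- Never invent order details; if a lookup fails, escalate to a human.
- Keep replies under 120 words.

# Examples (added reactively, to hard-prompt away observed mistakes)
Input: "Where's my wireless mouse?"
Correct behavior: call orders_lookup, then answer with the order status.
"""

All right, moving on to number six. We have scaling agents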
496:57 can be a nightmare. And this is all part of like one of the hard truths I talked
497:00 about earlier where a lot of the stuff you see online is a great proof of
497:04 concept, a great MVP, but if you were to try to push this into production in your
497:07 own business, you're going to notice it's not there yet. Because when you
497:10 first build out these AI agents, everything can seem to work fine. It's a
497:13 demo. It's cool. It really opens your eyes to, you know, the capabilities, but
497:17 it hasn't usually gone through that rigorous testing and evaluation and
497:23 setting up these guard rails um and all of that, you know, continuous monitoring
497:26 that you need to do to evaluate its performance before you can push it out
497:30 to all, you know, 100 users that you want to eventually push it out to. You
497:34 know, on a single user level, if you have a few hallucinations every once in
497:38 a while, it's not a huge deal. But as you scale the use case, you're just
497:42 going to be scaling hallucinations and scaling problems and scaling all these
497:45 failures. So that's where it gets tricky. You can start to get retrieval
497:48 issues as your database grows. It's going to be harder for your agent to cut
497:51 through the noise and grab what it needs. So you're going to get more, you
497:54 know, inaccuracies. You're going to have some different performance bottlenecks
497:57 and the agent's, you know, latency is going to increase. You're going to start
498:00 to get inconsistent outputs and you're going to experience all those edge cases
498:04 that you hadn't thought of when you were the only one testing because, you know,
498:06 there's just an infinite amount of scenarios that an agent could be exposed
498:10 to. So, a good little lesson learned here would be to scale vertically before
498:14 you start to try to scale horizontally, which, um, we'll break down. And I
498:17 made this little visualization so we can see what that means. So, let's say we
498:21 want to help this business with their customer support, sales, inventory, and
498:26 HR management. Rather than building out little building blocks of tools within
498:30 each of these processes, let's try to perfect one area first vertically and
498:34 then we'll start to move across the organization and look at doing other
498:37 things and scaling to more users. So, because we can focus on this one area,
498:41 we can set up a knowledge base and set up like the data sources and build that
498:45 automated pipeline. We can set up how this kind of stuff gets organized with
498:48 our sub workflows. We can set up, you know, an actual agent that's going to
498:52 have different tool calls. And within this process, what we have over here are
498:57 evaluations, monitoring performance, and then setting up those guard rails
499:01 because we're testing throughout this, you know, ecosystem vertically and and
499:04 getting exposure to all these different edge cases before we try to move into
499:08 other, you know, areas where we need to basically start this whole process
499:11 again. We want to have done the end-to-end system first, understand, you
499:16 know the problems that may arise, how to make it robust and how to evaluate and
499:20 you know iterate over it before we start making more automations. And so like I
499:24 alluded to earlier, if you try to start scaling horizontally too quickly before
499:28 you've done all this testing, you are going to notice that hallucinations are
499:31 going to increase. Your retrieval quality is going to drop as more
499:35 users come in. The agent's handling more memory, it's handling more
499:39 knowledge in its database to try to cut through, your response times are going
499:42 to slow, and then you're just going to get more inconsistent results. And so you
499:45 can do things like, you know, setting strict retrieval rules and guard rails.
499:49 You could do stuff like segmenting your data into different vector databases
499:53 based on the context, or different namespaces, or different, you know,
499:56 relational databases. You could use stuff like um asynchronous processing or
500:00 caching in order to improve that response time. And then you could also
500:04 look at doing stuff like, you know, having an agent evaluate and making
500:08 sure that the confidence on these responses is above a certain threshold;
500:12 otherwise we'll, you know, request human help and the agent won't
500:17 actually respond itself.
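
That last idea is easy to picture in code. A minimal sketch, assuming a hypothetical score_confidence step, which in practice might be a second LLM call that grades the draft answer:

CONFIDENCE_THRESHOLD = 0.8  # tune per use case

def score_confidence(question: str, draft_answer: str) -> float:
    # Hypothetical evaluator: e.g. a second LLM call that grades the draft.
    return 0.65

def escalate_to_human(question: str, draft_answer: str) -> str:
    # Hypothetical hand-off: queue a ticket, notify Slack, etc.
    return "A teammate will follow up shortly."

def respond(question: str, draft_answer: str) -> str:
    if score_confidence(question, draft_answer) >= CONFIDENCE_THRESHOLD:
        return draft_answer
    # Below threshold: don't let the agent answer on its own.
    return escalate_to_human(question, draft_answer)

So the seventh one is that no-code tools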
500:21 like n8n have their limits. They're super great. They're really empowering
500:25 non-developers to get in here and spin up some really cool automations. And
500:29 it's really like the barrier to entry is so low. You can learn how to build this
500:32 stuff really quickly, which is why I think it's awesome and you know, it's
500:35 the main tool that I use um personally and also within my agency. But when you
500:39 start to get into what we just talked about with lesson number six, scaling
500:43 and making these things really robust and production ready, you may notice
500:47 some limits with no-code tools like n8n. Now, the reason I got into building
500:51 stuff like this is because, you know, obviously non-programmer and it has a
500:55 really nice drag and drop interface where you can build these workflows very
500:59 visually without writing scripts. So, n8n is, you know, basically open source
501:03 because you can self-host it. The code is accessible. Um, and it's built on top
501:07 of LangChain, which is basically a framework that helps connect
501:10 to different things and create these like agents. Um, and because of that,
501:14 it's just wrapped up really pretty for us in a graphical user interface where
501:18 we can interact with it in that drag and drop way without having to get in there
501:23 and, hands on keyboard, write code. And it has a ton of pre-built integrations
501:26 as you can see right here. Connect anything to everything. Um, I think
501:29 there's like a thousand integrations right here. And all of these different
501:33 integrations are API calls. They're just wrapped up once again in a pretty way
501:37 for us in a user interface. And like I talked about earlier, when I started, I
501:40 didn't even know what an API was. So the barrier to entry was super low. I was able
501:43 to configure this stuff easily. And besides these built-in integrations,
501:46 they have these really simple tool calls. So development is really fast
501:49 with building workflows compared to traditional coding. Um, the modularity
501:53 aspect because you can basically build out a workflow, you can save that as a
501:57 tool and then you can call that tool from any other workflow you want. So
502:00 once you build out a function once, you've basically got it there forever
502:03 and it can be reused which is really cool. And then my favorite aspect about
502:07 n8n and the visual interface is the visual debugging, because rather than having 300
502:11 lines of code and you know you're getting errors in line 12 and line 45,
502:15 you're going to see it's going to be red or it's going to be green and you know
502:18 if it's green you're good, and if you see red there's an error. So usually
502:22 you know exactly where the problem's coming from, and you're
502:25 able to get in there look at the execution data and get to the bottom of
502:29 it pretty quick. But overall these no code platforms are great. They allow us
502:32 to connect APIs. We can connect to pretty much anything because we have an
502:37 HTTP request within n8n. Um, they're going to be really, really good for
502:40 rule-based decision-making, like, super solid. Um, if we're creating workflows that's
502:43 just going to do some data manipulation and transferring data around, you know,
502:47 your typical ETL based on structured logic, super robust. You can make some
502:51 really really awesome basic AI powered workflows where you're integrating
502:54 different LLMs. You've got all the different chat models basically that you
502:57 can connect to um for you know different classification or content generation or
503:01 outreach anything like that. Um your multi-agentic workflows because like I
503:05 said earlier you have the aspect of creating different tools um as workflows
503:09 as well as creating agents as workflows that you can call on from multiple
503:12 agents. So you can really get into some cool multi-agentic inception thing going
503:16 on, with agents calling agents calling agents, um, and passing data
503:21 between different workflows and then just the orchestration of AI services,
503:25 coordinating multiple AI tools within a single process. So that's the kind of
503:29 stuff that NN is going to be super super good at. And now the hidden limitations
503:34 of these no-code AI workflow/agent builders. Let's get into it. Now, in my
503:38 opinion, this stuff really just comes down to when we're trying to get into
503:41 like enterprise solutions at scale with a ton of users and a ton of
503:44 authentication and a ton of data. Because if you're building out your own
503:48 internal automations, you're going to be solid. Like there's not going to be
503:50 limitations. If you're building out, you know, proof of concepts and MVPs, um,
503:55 YouTube videos, creating content, like you can do it all, I would say. But when
503:58 you need to start processing, you know, massive data sets that are going to
504:03 scale to thousands or millions of users, your performance can slow down or even
504:06 fail. And that's maybe where you'd want to rely on some custom code backend to
504:11 actually spin up these more robust functionalities. In these agentic
504:14 systems, tool calling is really, really critical. The agent needs to be able to
504:18 decide which one to use and do it efficiently. And like I talked about,
504:22 n8n is built on top of LangChain. It provides a structured way to call AI
504:26 models and APIs, but it lacks some of that flexibility of writing that really
504:30 custom code within there for complex decision-making. And then when it comes to
504:34 authentication at scale, it can struggle with like secure large scale
504:38 authentication and data access control. Obviously, you can hook up to external
504:41 systems to help with some of that processing, but when it comes to maybe
504:45 like handling OAuth tokens and all these encrypted credentials and session
504:49 management, not that it's not doable with n8n, um, it just seems like getting
504:54 in there with some custom code, it could be quicker and more robust. And also,
504:59 that's coming from someone who doesn't actually do that himself. Um, just some
505:03 stuff I've heard and you know, with what's going on within the agency. Now
505:06 ultimately it seems like if you are delivering this stuff at scale for some
505:10 big clients, um, the best approach is going to be a mix, a hybrid of no-code
505:15 and custom code, because you can use n8n to spin up stuff really quick. You've
505:20 got that modularity, you can orchestrate, automate, you know, connect to anything
505:23 you need, but then working in, every once in a while, some custom Python script for
505:27 some of that complex, you know, large-scale processing and data handling. And
505:31 when you combine these two together, you're going to be able to spin some
505:34 stuff up a lot quicker, and that's going to be pretty robust and powerful.
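
To picture that hybrid, here's a minimal sketch using only Python's standard library: a tiny HTTP service that an n8n HTTP Request node could call for a heavy processing step. The route, port, and payload shape are made up for illustration.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # n8n's HTTP Request node POSTs JSON here, e.g. {"records": [...]}.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # Stand-in for the heavy custom logic n8n would offload to this script.
        result = {"count": len(body.get("records", []))}
        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # n8n would point its HTTP Request node at http://localhost:8000/
    HTTPServer(("", 8000), Handler).serve_forever()

Thanks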
505:38 so much for making it all the way to the end of this course. I know it's pretty
505:42 massive, but I wanted to pack it with a ton of value, and hopefully you guys did
505:45 find it valuable as well, and you feel a lot more comfortable right now building
505:49 AI workflows and AI agents than when you started this course. If you enjoyed or
505:52 if you learned something new, please give it a like and a subscribe. It
505:55 definitely helps me out a ton. Um, like I said, super happy that you made it to
505:59 the end of the course. And if you did and if you appreciate my teaching style
506:02 or you want to even go more in depth than this course right here, then
506:05 definitely check out my paid community. The link for that is down in the
506:08 description. It's a community full of people who are learning how to build and
506:12 are building automations using n8n, and a lot of them are coming from no-code
506:15 backgrounds as well. So it's a great place to get some questions answered,
506:19 brainstorm with people, collaborate on projects, stuff like that. And we also
506:22 have five live calls per week. So, make sure you jump in there and meet other
506:25 people in the space as well as make sure you're not getting stuck and you can get
506:28 your questions answered on a call. Like I said, would be great to see you in the
506:31 community. But anyways, thanks so much for making it to the very end of this
506:35 huge course. Appreciate you guys and I will see you in the next video. Thanks
$

Build & Sell n8n AI Agents (8+ Hour Course, No Code)

@nateherk 8:26:38 34 chapters
// chapters
// description

Full courses + unlimited support: https://www.skool.com/ai-automation-society-plus/about All my FREE resources: https://www.skool.com/ai-automation-society/about Work with me: https://uppitai.com/ My Tools💻 14 day FREE n8n trial: https://n8n.partnerlinks.io/22crlu8afq5r Code NATEHERK to Self-Host n8n for 10% off (annual plan): http://hostinger.com/nateherk Welcome to the most comprehensive free course on AI automation and AI agents for beginners using n8n. In this 8+ hour deep-dive, I’ll w

now: 0:00
// tags
[AI agents and automation][developer tools and coding][e-commerce and conversion optimization][open source and self-hosting][marketing and growth hacking]