// transcript — 891 segments
0:00 Intro
0:01 So, you want to use Claude Code. You want to get the most out of it, but you
0:06 don't know exactly how. This is a crash course on how to master Claude Code, and we
0:11 explain it in the simplest way. There are thousands, literally thousands
0:15 [music] of other Claude Code tutorials on the internet, but there are none as
0:19 simple as this. I brought on Professor Ross Mike. He comes on and he shares it
0:24 in the simplest way so that anyone can create jaw-dropping startups and software using Claude Code.
0:29 We're going to give you the exact steps [music] for how you can set it up, thinking
0:34 about the beginner, how to think about [music] the terminal, how to think about
0:37 prompting. But if you stick around to the end of this episode, there's a tips
0:41 and tricks section, which I think is super valuable. And uh I can't wait to
0:46 see what you build. >> We got Ross Mike on the pod. By the end
0:58 of this episode, what are people going to learn? >> Hopefully, you're going to not feel
1:03 overwhelmed with Claude Code. I know the terminal is scary and it's a big
1:07 boogeyman, but I'm going to give you the blueprint for how to use it. I'm also going
1:11 to share what I consider the ultimate crash course on how to use Claude Code
1:16 or any agent effectively. Okay, let's get into it. >> So, I mean, the best way to start these
1:22 Claude Code Best Practices
1:24 episodes is with sharing our screen. So, when we think of building applications
1:29 using AI, using some sort of agent like Claude Code or OpenCode or Codex,
1:33 whatever it is, there's a couple of things that you always have to keep in
1:37 mind. You know, the principles never really change. One thing that's
1:41 important for us to understand is that how good your inputs are will
1:46 dictate how good your output is. Right? We're getting to a point where the
1:52 models are so freakishly good that if you are producing quote unquote slop,
1:56 it's because you've given it slop, right? Um there was a time where the
1:59 models weren't good enough. There was a time where, you know, we had serious
2:04 qualms and issues with the quality of code the models gave us. But now we're
2:07 starting to get to a point where even myself like I'm reviewing a lot more
2:12 code than I write. And I never thought I'd be able to say that in the early uh
2:18 months of 2026. So it's very important for us to understand that how good our
2:22 inputs are, how precise they are, how articulate they are,
2:27 will dictate just how good our outputs will be. And the way I
2:30 want people to think about this, Greg, is: imagine you were communicating
2:34 this to a human engineer, right? If you give them sparse
2:39 instructions, and if anyone has done client work, you realize that most
2:43 clients tell you one thing, but you have to extract
2:47 the deeper thoughts of what it is they want. Um, it's the same way when we work
2:52 with these agents. When we work with Claude Code, we need to be really, really
2:57 precise with how we build our inputs. Now, what do I mean by inputs? What I
3:04 mean is our PRDs or our to-do lists or our plans, right? Like, you
3:07 know, people give these different names. Um, it doesn't really matter.
3:11 It's all the same thing, right? And when we think of a PRD or when we think of a
3:15 to-do list or when we think of a plan, I want us to think in such a way as this.
3:20 Let's say I'm trying to build this product, right? Let's say, um, I don't
3:25 know. Greg, any product ideas? >> Me, have product ideas?
3:29 >> Yeah, you're actually the best person to ask, right? [laughter]
3:36 Um, let's say I go on ideabrowser.com and >> I was just going
3:41 to it. Yeah, pick the idea of the day from Idea Browser. It says it's a
3:45 diagnostic tool for appliance techs losing hundreds of repeat visits. See, I
3:49 have no idea what that means, but let's say I know what that means. Essentially,
3:54 when thinking of this idea and looking to build this into a full-fledged
3:58 product, generally the way you're going to think is: okay, if product X
4:05 does Y, Z, A, B, and C, the way I would reach that is by thinking of
4:08 features, right? So, let's say there's four core features to this application
4:14 that um Greg just mentioned. And if I have these four features built out, we
4:19 can safely assume that we have said product, right? The way we are to design
4:26 our PRDs, to-do lists, and plans is such that we want the agent, the model to
4:30 build out all these features, right? Because all these features put together
4:35 is our product. You see, a lot of times people will describe a product, um, not
4:40 describe features, and will be frustrated with AI, as if AI is supposed
4:43 to magically know what you're thinking about. Um, by the way, Greg, am I making
4:47 sense so far, or am I >> 100% I'm with you. >> Yeah. So, we really need to think in
4:53 features. But here's the cool part. When developing features, oftentimes the
4:58 issue with models is, let's say the model
5:02 develops a feature. We don't know if it works. We don't know if it did it the
5:05 right way. That's where, with all the cool Ralph stuff that's happening, we
5:10 can introduce tests, right? So let's say the agent builds feature
5:15 one. Before moving on to feature two, what I'm going to do is I'm
5:18 going to get the model to write a test. If that test passes, then we'll work on
5:23 the second feature. If that test passes, we work on the third feature. Right? So
5:28 we're finally entering an era where you can really build something serious with
5:33 these models. So, instead of just telling you about planning, why don't we do
5:38 actual planning together? So, I'm going to pop up my terminal. I know
5:42 everyone's afraid of the terminal, but in all honesty, if you don't know how to
5:46 use a terminal, ask AI. Like, it's the simplest thing. And if not, you can
5:51 even download the Claude Code app, go to the code section, give it a specific
5:55 folder you want to work on, and use the app. Like, there's literally no excuse
5:59 not to use Claude Code. If you're afraid, boo-hoo, just jump in and use AI. You have all
6:03 the tools. That being said, I'm just going to type in claude and we're going
6:08 to have Claude Code open. And usually how people plan is they'll hit Shift+
6:12 Tab, right? And then you have plan mode on and you can say, let's say: I want to
6:17 build a TikTok UGC-generating app for my marketing agency.
6:28 I see these UGC apps everywhere. Please help me create a plan. Write
6:39 this in a PRD.md file. So, this is how most people have
6:47 planning set up, right? You'll tell Claude Code or Cursor or whatever agent
6:52 to do the plan for you, and you ask it to put it in some file, and it says it'd
6:57 be happy to help you plan this out, and it'll ask you some questions, etc., etc.
7:02 But I found that there's a better way to get an even more concise plan. And this
7:08 way it actually gets you to think a lot more about trade-offs, concerns, UI/UX
7:13 decisions, because most of the time you're sort of allowing the AI to have
7:17 free rein over certain decisions, which I think will leave you with a finished
7:21 product that you're not excited about. And that's invoking a special tool. Um, I
7:26 was going to show you guys the tweet but unfortunately Twitter's down right now.
7:30 But Claude Code has a specific tool called the AskUserQuestion tool. And
7:34 essentially what this tool does is it starts to interview you about the
7:40 specifics of your plan. Right? So I'm going to drop this prompt where it says
7:44 read this plan file. Interview me in detail using the ask user question tool
7:48 about literally anything. Technical implementation, UI, UX concerns, and
7:52 trade-offs. I spelled implementation wrong. Do not judge me. Um, and what
7:56 this is going to do is it's going to go past the plan that we have and start to
8:01 ask us about minute details. So, let's finish off this plan first. I'm just
8:06 going to accept: um, this is internal use. We'll use React. I just want
8:13 core features. We'll submit answers. And then Claude Code, you'll see, might ask us
8:16 a few more questions, but this will generally be the plan. >> Right? So it's not
8:23 just the plan, it's the right plan, right? Like, to what you were saying,
8:27 go scroll back up here: the features. And yeah, the features and tests.
8:34 Like, the way I think about this, and I don't know if you agree, is: if you
8:38 ask Claude Code to build you a car, it doesn't really know what a car is. It
8:41 doesn't understand that you need a steering wheel and, you know, a radio,
8:47 and you need wheels. So the hard part is
8:50 basically explaining what those things are in a really succinct and clear way.
8:55 And that's what this interview is basically doing. It's explaining
8:59 each of them, and then we're going to test each of those features. >> Exactly.
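That build-one-feature, test-it, then-move-on gate can be sketched as a small shell loop. The agent call below is a stand-in function: in a real run it would be a headless invocation along the lines of `claude -p "Implement feature-1 from prd.md, then write a test for it"`, and `run_tests` would be your actual runner (npm test, pytest, whatever the stack uses). The `prd.md` name and the feature names are illustrative assumptions.

```shell
#!/usr/bin/env bash
# Gate each feature behind its test: build one, test it, only then move on.
build_feature() { echo "building $1"; }   # stand-in for the real agent call
run_tests()     { echo "testing $1"; }    # stand-in for your test runner

for feature in "feature-1" "feature-2" "feature-3"; do
  build_feature "$feature" || exit 1
  run_tests "$feature" || { echo "stopping: $feature failed its test"; exit 1; }
done
echo "all features passed"
```

The point is the control flow: a failing test stops the loop, so a broken feature never gets buried under three more features built on top of it.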
9:02 Think of it like this, a simple example. Let's say you ask the AI
9:09 agent to build you a specific feature, right? How is it going to present that
9:12 specific feature? Did you want it in a dashboard? Did you want it to be a
9:15 modal? Did it have to be a separate page? When you don't specify these
9:20 minute details, it will make the assumption for you. And with Ralph loops
9:24 and all these types of things, you might have a whole application built out
9:28 and it's not exactly to the liking or the expectations you had. Right? So, let
9:33 me continue. I'll just make some selections here just so we can move on.
9:39 Um, and then hit submit. And then I'm going to pause this planning here and
9:44 then I'm going to paste this. I'm going to say read this plan file and I'm going
9:48 to tag the plan file. It's called prd.md. We have that right here. Um, and I'm
9:54 going to say, interview me about the details of this. Or I don't even need
9:57 to tag it, because it has it in its context. But I just want to show you how
10:02 annoying this is going to get. Meaning, it's going to keep asking me
10:09 questions about said plan or said app idea. So notice how it says round one:
10:15 core workflow and technical foundation, right? And some of the questions it
10:18 might even ask you are things that you might not know about cuz you're not
10:21 technical. So what do I do when I don't know something, Greg? I'm going to copy
10:24 this and I'm going to go to the chatbot of my choice, whether it's Claude,
10:28 ChatGPT, whatever, and I'm going to ask it questions. So if you remember earlier,
10:32 it asked me generic questions about the app. Now it's saying, "What's your ideal
10:36 workflow for generating UGC video from start to finish?" Like notice how the questions are even
10:42 more specific now. So it says: linear step-by-step, template-based, batch
10:48 processing, iterative conversational. So let's say I select that, and it says: how
10:53 should the app handle agent API cost and usage? So now it's talking about cost.
10:57 Right? Again, most of the time, when you just have a basic plan, this is not
11:00 included in the plan. Right? Let's say we want to have a hard budget. Um,
11:05 what database and hosting approach do you want to use? Most of you probably
11:08 watching this have no idea. So I can copy this over, go to ChatGPT and ask
11:12 what's the best decision. This is my current situation. And then you keep
11:16 going. You keep going and you submit answers. So when you use this ask user
11:21 question tool, the questions become more granular. So it asks me about core
11:25 workflow and technical foundation. Now it's going to ask me about UI, UX, and
11:30 script generation. If you notice the first plan that it came up with, the
11:35 default plan for Claude Code, it was pretty basic. Now it's asking me, okay,
11:39 what AI do I want to use for the script generation? I'll use Claude. Uh, what UI
11:43 style aesthetic are you going for? Minimal clean, dashboard-heavy, creative
11:49 tool feel, chat-first. Right. So hopefully, Greg, I'm making sense with
11:53 like how many more questions I'm being asked when I'm invoking this ask user
11:59 question tool. >> Yeah, it makes complete sense. You're also going to use less
12:05 tokens in the end, right? >> Yeah. Because the thing is, the better
12:10 your plan, the better your input, the better the initial set of documents that
12:17 you give the model, the better the outcome. And the better the
12:19 outcome, the less back and forth, right? Most people will have a Ralph
12:23 loop running. It'll be a basic plan, and it'll do what you told it to do, but you
12:27 weren't specific. So now you're going back, and then maybe you're running
12:29 another loop, or you're going back and doing all these changes. But if you get
12:35 it done right, if you invest the time in the planning stage, I 100% believe
12:40 you'll save a lot more money. And this will help you clear up a lot of ideas.
12:44 So like, for example, this idea that we just had, this TikTok UGC farm: how
12:49 do we want it set up? Do we want it to be flat with search? Do we want it to be
12:53 client campaign assets? There's a lot of like these minute details that you're
12:58 not thinking about and because you're not thinking about it, you're allowing
13:01 Claude Code to make those assumptions for you, right? Which, at the end, after it's
13:06 burned through a ton of tokens, now you're going back to change things, right? We
13:11 can save so much headache if we do the proper planning from the beginning. And
13:18 hopefully people see value in this ask user question tool. Make sure you
13:22 specify it in your prompt. And hopefully, Greg, that made sense.
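For anyone who wants to try it, the interview prompt amounts to something like this. The tool's internal name is AskUserQuestion; the exact wording here is just one way to phrase it, not a fixed syntax:

```text
Read the plan file @prd.md. Interview me in detail using the
AskUserQuestion tool about literally anything that is still ambiguous:
technical implementation, UI/UX concerns, and trade-offs. Keep asking
rounds of questions until nothing important is left unresolved.
```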
13:25 >> It does. >> So, I would say step number one for this
13:30 Claude Code crash course is: get good at planning. I would get really,
13:34 really good at planning. I would get good at generating these, right? Like,
13:38 look, it keeps on asking me questions. If you notice, the very first
13:42 plan that we generated with Claude was two sets of questions and it was
13:46 ready to build. But with this, it's asking me: do I want basic avatars,
13:50 custom avatars, multi-scene videos? How do I want to handle storage? Do I want
13:54 to download the videos instantly? Cloud storage, external storage? Like, there's
14:00 so much to software engineering. And I think in our last video, someone
14:03 shared this on Twitter, I don't know if it was you or someone else:
14:07 building personal software is easy, but building software others
14:11 are going to use is very, very difficult. And if you don't have the
14:16 decency to set aside a little time, a little extra time, to
14:19 plan, then I guarantee whatever you generate is going to be AI slop. And you
14:23 might blame the model, but really the problem is you. So invest in your plans.
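To make "think in features" concrete, a feature-first PRD skeleton is the shape to aim for. The features and details below are invented for illustration; they are not the plan generated in the episode:

```markdown
# PRD: TikTok UGC Generator (internal tool)

## Feature 1: Script generation
- What: turn a product brief into a UGC-style script (using Claude)
- Where it lives: its own page, not a modal
- Done when: the script test passes for a sample brief

## Feature 2: Avatar video generation
- What: render the script with a basic avatar via a video API
- Constraint: hard monthly API budget
- Done when: a sample script produces a playable video

## Feature 3: Storage and review
- What: flat list with search; videos downloadable instantly
- Done when: generated videos show up in the list and can be fetched
```

Each feature says what it is, where it lives in the UI, and what "done" means, which is exactly the detail the interview rounds are dragging out of you.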
14:28 Spend time planning. Don't use the generic plan mode that Cursor or
14:34 Claude Code has. I would use Claude Code, and then I would specify the ask
14:39 user question tool. It's going to continue to annoy you with questions, like
14:43 it keeps asking, right? Because until it knows exactly what it is you want, it
14:47 won't start building. So I would say that's step number one to building with
14:52 Don’t start with Ralph automation (get reps first)
14:53 Claude Code. Step number two: everyone's talking about Ralph, and it's
14:59 exciting. Um but I wouldn't use it. I wouldn't use Ralph. And the reason I
15:03 wouldn't use Ralph if I was just starting out, Greg, is because of this:
15:09 imagine not knowing how to drive, but
15:15 then buying a Tesla for the self-driving stuff. Cool in theory,
15:20 but maybe it's a great idea to know how to drive, how to steer, how to hit the
15:24 corners, how to maybe yell at someone when they cut you off, before you get the
15:29 full automated version. I say this because when you get good at
15:35 developing plans and then working with the AI to build each feature and testing
15:40 each feature, you start to develop this sense for product building, for,
15:46 you know, what I heard someone call vibe QA testing. You get this sense
15:51 by going one-on-one yourself. And this is why a lot of people who were fighting
15:55 with Claude Code all these months are really, really good at using it now,
15:59 because they spent the time building without using these crazy automation
16:03 loops. So if you're using Claude Code for the first time or you're just getting
16:08 into it: a good plan is number one, and number two, get your reps in by not using Ralph.
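Getting those reps in looks roughly like this: one feature per prompt inside a plain interactive session, with a test between each step. The prompts are illustrative, not a fixed syntax:

```text
$ cd ugc-app
$ claude

> Read prd.md and build Feature 1 (script generation) only. Nothing else.
> Write a test for Feature 1 and run it. How can I run the app myself?
> The test passes and it looks right. Move on to Feature 2.
```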
16:13 So develop the features one by one. Now that you have your plan, you can
16:17 literally tell Claude Code, hey, okay, let's build the first feature. Um, you
16:21 know, go ahead and do it. And then once the feature is done, you can test it
16:24 out. Ask it, how can I test this? How can I run this app? I wouldn't jump into
16:31 using Ralph right away. Um, build without Ralph. But let's say you've
16:36 What are “Ralph loops” and why plans and documentation matter most
16:37 built these reps now and you're comfortable with Claude Code. Now you
16:42 hear about all these things: skills, MCP, prompt MD, agent MD, um, what else is
16:49 there, something MD. You hear all these conventions, plugins, you have
16:54 Ralph, all these things. So what do I need to perfectly build something using
17:01 Claude Code or any agent? I'll be honest with you, most of these things are all the same.
17:07 Prompt MD and agent MD are just markdown files. Plugins are skills with, you know, a little bit
17:14 extra. What you need to build successfully using these agents is first
17:20 of all a good plan, right, which is a document: the PRD we just
17:26 generated. And then you need to document the progress that's being
17:33 made. For anyone who's familiar with Ralph, you know what I'm talking
17:37 about. For those who aren't, what's cool about a Ralph loop is as follows. A
17:42 Ralph loop is basically: you have a list of things that need
17:47 to get done, the whatchamacallit, the PRD or the plan. You give it to the
17:52 AI model. The model works on the first task. It finishes it, then documents it
17:57 in another file, and then it goes again, and it doesn't stop until it's completed
18:04 the whole list. Now, this isn't anything special, but the reason why it's now
18:08 super powerful is because the models are getting so so good. But here is the
18:13 issue. If you have a terrible plan, if you have a terrible PRD, this doesn't
18:17 matter. You're just donating money to Anthropic, and I wish you the best of
18:21 luck if that's what you want to do. But if you want to make sure that your
18:25 tokens are not wasted, you're going to invest in a good PRD.md file or a good
18:31 plan file. Greg, am I making sense so far? >> 100%. >> Okay,
18:36 >> you're driving the point home. >> Yes. So, I'll talk a little bit about um
18:41 Ross’s Ralph setup: progress tracking + tests + linting
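The loop described above can be sketched in a few lines of shell. The inner step here is a stub that just appends to progress.md so the loop's shape is visible; in a real run it would be a headless agent call along the lines of `claude -p "Read prd.md and progress.md, do the next unfinished task, run its test, and log what you did to progress.md"`. The file names and the completion marker are assumptions, not a fixed convention.

```shell
#!/usr/bin/env bash
# Minimal Ralph-style loop: re-invoke the agent until the plan is done.
agent_step() {   # stand-in for the real headless agent call
  done_count=$(grep -c "done" progress.md || true)
  echo "task $((done_count + 1)) done" >> progress.md
  if [ "$((done_count + 1))" -ge 3 ]; then
    echo "ALL TASKS COMPLETE" >> progress.md
  fi
}

: > progress.md   # start with an empty progress file
until grep -q "ALL TASKS COMPLETE" progress.md; do
  agent_step      # one unit of work, documented in progress.md
done
echo "plan finished after $(grep -c "done" progress.md) tasks"
```

Notice the loop itself carries no intelligence; everything rides on prd.md being a good plan, which is the whole point above.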
1:22 Claude Code Best Practices
1:24 episodes is with sharing our screen. So, when we think of building applications
1:29 using AI, using some sort of agent like cloud code or open code or codec,
1:33 whatever it is, there's a couple of things that you always have to keep in
1:37 mind. You know, the principles never really change. One thing that it's
1:41 important for us to understand is however good your inputs are will
1:46 dictate how good your output is. Right? We're getting to a point where the
1:52 models are so freakishly good that if you are producing quote unquote slop,
1:56 it's because you've given it slop, right? Um there was a time where the
1:59 models weren't good enough. There was a time where, you know, we had serious
2:04 qualms and issues with the quality of code the models gave us. But now we're
2:07 starting to get to a point where even myself like I'm reviewing a lot more
2:12 code than I write. And I never thought I'd be able to say that in the early uh
2:18 months of 2026. So very important for us to understand our inputs, how good they
2:22 are, how precise they are, how articulate they are are just as good as
2:27 our outputs and will dictate just how good our outputs will be. And the way I
2:30 want people to think about this is Greg is like imagine you were communicating
2:34 this to a human to a human engineer, right? If you give them sparse
2:39 instructions and if anyone is in like client work, you realize that most
2:43 clients they they they tell you one thing but you have to sort of extract
2:47 the deeper thoughts of what it is they want. Um it's the same way when we work
2:52 with these agents. When we work with claude code, we need to be really really
2:57 precise with how we build our inputs. Now, what do I mean by inputs? What I
3:04 mean is our PRDS or our to-do list or our plans, right? Like there's, you
3:07 know, people are giving you different names. Um, it doesn't really matter.
3:11 It's all the same thing, right? And when we think of a PRD or when we think of a
3:15 to-do list or when we think of a plan, I want us to think in such a way as this.
3:20 Let's say I'm trying to build this product, right? Let's say um I don't
3:25 know, Greg, any product ideas um that >> me have product ideas?
3:29 >> Yeah, that's actually the best best person to ask, right? [laughter]
3:36 Um let's say I go on idealbrowser.com and >> I was just going to it. I was just going
3:41 to it. Yeah, pick pick the idea of the day from idea browser. Says it's a
3:45 diagnostic tool for appliance text losing hundreds of repeat visits. See, I
3:49 have no idea what that means, but let's say I know what that means. Essentially,
3:54 when thinking of this idea and looking to build this into a full-fledged
3:58 product, generally the way you're going to think is, okay, if the if product X
4:05 does Y, Z, A, B, and C, how I would reach that is I'm going to think of
4:08 features, right? So, let's say there's four core features to this application
4:14 that um Greg just mentioned. And if I have these four features built out, we
4:19 can safely assume that we have said product, right? The way we are to design
4:26 our PRDs, to-do lists, and plans is such that we want the agent, the model to
4:30 build out all these features, right? Because all these features put together
4:35 is our product. You see, a lot of times people will describe a product, um, not
4:40 describe features, and will be frustrated with AI. Like AI is supposed
4:43 to magically know what you're thinking about. Um, by the way, Greg, am I making
4:47 sense so far, or am I >> 100% I'm with you. >> Yeah. So, we really need to think in
4:53 features. But here's the cool part. When developing features, often times the
4:58 issue with models is like you'll develop a feature or like let's say the model
5:02 develops a feature. We don't know if it works. We don't know if it did it the
5:05 right way. That's where with all the cool Ralph stuff that's happening, we
5:10 can introduce tests, right? So let's say uh the model the agent bu builds feature
5:15 one. Before moving on moving on to feature two, what I'm going to do is I'm
5:18 going to get the model to write a test. If that test passes, then we'll work on
5:23 the second feature. If that test passes, we work on the third feature. Right? So
5:28 we're finally entering an era where you can really build something serious with
5:33 these models. So, instead of telling you about just uh planning, why don't we do
5:38 actual planning together? So, I'm going to pop up my terminal. So, I know
5:42 everyone's afraid of the terminal, but in all honesty, if you don't know how to
5:46 use a terminal, ask AI. Like, it's the like simplest thing. And if not, you can
5:51 even download the Cloud Code app and go on code section, give it a specific
5:55 folder you want to work on and use the app. Like, there's literally no excuse
5:59 to not use cloud code. If you're afraid, boohoo, just jump into use AI. have all
6:03 the tools. That being said, I'm just going to type in Claude and we're going
6:08 to have uh Claude code open. And usually how people plan is they'll click shift
6:12 tab, right? And then you have plan mode on and you can say, let's say I want to
6:17 build um Tik Tok UGC generating app for my marketing agency.
6:28 I see like these UGC apps everywhere. Um, please help me create a plan. Write
6:39 this in the in uh PRD.MD file. So, this is how most people have
6:47 planning set up, right? you'll tell Claude Code or Cursor or whatever agent
6:52 uh to do the plan for you and you ask it to be in some file and like it says it'd
6:57 be happy to help you plan this out and it'll ask you some questions etc etc.
7:02 But I found that there's a better way to get an even more concise plan. And this
7:08 way it actually gets you to think a lot more about tradeoffs, concerns, UIUIUX
7:13 decisions because most of the time you're sort of allowing the AI to have
7:17 free reign over certain decisions which I think uh will lead you with a finished
7:21 product that you're not excited about. And that's invoking a special tool. Um I
7:26 was going to show you guys the tweet but unfortunately Twitter's down right now.
7:30 But Claude Code has a specific tool called ask user question tool. And
7:34 essentially what this tool does, it starts to interview you about the
7:40 specifics of your plan. Right? So I'm going to drop this prompt where it says
7:44 read this plan file. Interview me in detail using the ask user question tool
7:48 about literally anything. Technical implementation, UI, UX concerns, and
7:52 trade-offs. I spelled implementation wrong. Do not judge me. Um, and what
7:56 this is going to do is it's going to go past the plan that we have and start to
8:01 ask us about minute details. So, let's finish off this plan first. I'm just
8:06 going to accept um this is internal use uh text. We'll use React. I just want
8:13 core features. We'll submit answers. And then cloud code, you'll see might ask us
8:16 a few more questions, but this will generally be the plan, >> right? So it's it's not just it's not
8:23 just the plan, it's the right plan, right? Like to what you were saying like
8:27 go back go scroll back up here the features and yeah the features and test
8:34 like the way I think about this and I don't know if you agree is like if you
8:38 ask claude code to build you a car it doesn't really know what a car is. It
8:41 doesn't understand like you need a steering wheel and a you know a radio
8:47 and you need wheels. So the the the hard part is trying to figure out is
8:50 basically explaining what those things are in a really succinct and clear way.
8:55 And that's what this interview is basically doing. It's it's explaining
8:59 each of them and then we're going to test each of those features. Exactly.
9:02 Like think of think of it this like a simple example. Let's say you ask the AI
9:09 agent to build you a specific feature, right? How is it going to present that
9:12 specific feature? Did you want it in a dashboard? Did you want it to be a
9:15 modal? Did did it have to be a separate page? Like when you don't specify these
9:20 minute details, it will make the assumption for you. And with Ralph loops
9:24 and all these type of things, like you might have a whole application built out
9:28 and it's not exactly to the liking or the expectations you had. Right? So, let
9:33 me continue. I'll just make some selections here just so we can move on.
9:39 Um, and then hit submit. And then I'm going to pause this planning here and
9:44 then I'm going to paste this. I'm going to say read this plan file and I'm going
9:48 to tag the plan file. It's called prd.md. We have that right here. Um, and I'm
9:54 going to say interview me the details about this question or I don't even need
9:57 to tag it because it has it in its context. But I just want to show you how
10:02 annoyingly uh annoying this is going to get. Meaning it's going to keep asking me
10:09 questions about said plan or said uh app idea. So notice how it says round one
10:15 core workflow and technical foundation, right? And some of the questions it
10:18 might even ask you are things that you might not know about cuz you're not
10:21 technical. So what do I do when I don't know something, Greg? I'm going to copy
10:24 this and I'm going to go to the chatbot of my choice, whether it's claude, chat,
10:28 GBT, whatever, and I'm going to ask it questions. So if you remember earlier,
10:32 it asked me generic questions about the app. Now it's saying, "What's your ideal
10:36 workflow for generating UGC video from start to finish?" Like notice how the questions are even
10:42 more specific now. So it says linear stepbystep template based batch
10:48 processing iterative conversational. So let's say I select that and it says how
10:53 should the app handle agent API cost and usage. So now it's talking about cost
10:57 right again most of the times when you just have a basic plan this is not
11:00 included in the plan. Right? Let's say we want to have a hard hard budget. Um
11:05 what database and hosting approach do you want to use? Most of you probably
11:08 watching this have no idea. So I can copy this over, go to Chad GBT and ask
11:12 what's the best decision. This is my current situation. And then you keep
11:16 going. You keep going and you submit answers. So when you use this ask user
11:21 question tool, the questions become more granular. So it asks me about core
11:25 workflow and technical foundation. Now it's going to ask me about UI, UX, and
11:30 script generation. If you notice the first plan that it came up with, the
11:35 default plan for claude code, it was pretty basic. Now it's asking me, okay,
11:39 what AI do I want to use for the script generation? I'll use Claude. Uh, what UI
11:43 style aesthetic are you going for? Minimal clean, dashboard heavy, creative
11:49 tool field, chat first. Right. So hopefully, Greg, I'm making sense with
11:53 like how much more questions I'm being asked when I'm invoking this ask user
11:59 question tool. Yeah, it makes complete sense. You're also you're also going to use less
12:05 tokens in the end, right? Because you're right. >> Yeah. Because the thing is the better
12:10 your plan, the better your input, the better the initial set of documents that
12:17 you give the model, um the the better the outcome. And if the better the
12:19 outcome, there's no back and forth, right? Most people will have a Ralph
12:23 loop running. It'll be a basic plan and it'll do what you told it to do, but you
12:27 weren't specific. So now you're going back and then maybe you're running
12:29 another loop or you're going back and doing all these changes. But if you get
12:35 it done right, if you invest the time in the planning stage, I 100% believe
12:40 you'll save a lot more money. And this will help you clear up a lot of ideas.
12:44 So like for example, this idea that we just had, this Tik Tok UGC farm, um, how
12:49 do we want it set up? Do we want it to be flat with search? Do we want it to be
12:53 client campaign assets? There's a lot of like these minute details that you're
12:58 not thinking about and because you're not thinking about it, you're allowing
13:01 cloud code to make those assumptions for you, right? Which at the end after it's
13:06 burned through a ton of tokens, now you're going back to change, right? We
13:11 can save so much headache if we do the proper planning from the beginning. And
13:18 hopefully um people see value in this um ask user question tool. Make sure you
13:22 specify it in your prompt. And hopefully, Greg, that that made sense.
13:25 >> It does. >> So, I would say step number one for this
13:30 Claude Code crash course is I would get good at planning. I would get really
13:34 really good at planning. I would get good at generating these, right? Like
13:38 look, it it keeps on asking me questions. If you notice the very first
13:42 plan that we generated with Claude, it was two sets of questions and it was
13:46 ready to build. But with this, it's asking me, do I want basic avatars,
13:50 custom avatars, multi-scene videos? How do I want to handle storage? Do I want
13:54 to download the videos instantly? Cloud storage, external storage, like there's
14:00 so much to software engineering. And I think in our last video, someone
14:03 shared this on Twitter. I don't know if it was you or someone else. Like,
14:07 building personal software is easy, but building software others
14:11 are going to use is very, very difficult. And if you don't have the
14:16 audacity or the decency to set aside a little time, a little extra time, to
14:19 plan, then I guarantee whatever you generate is going to be AI slop. And you
14:23 might blame the model, but really the problem is you. So invest in your plans.
14:28 Spend time planning. Don't just use the generic plan mode that Cursor or
14:34 Claude Code has. I would use Claude Code, and then I would specify the ask
14:39 user question tool. It's going to continue to annoy you with questions, like
14:43 it keeps asking, right? Cuz until it knows exactly what it is you want, it
14:47 won't start building. Um so I would say that's step number one to building with
14:53 Claude Code. Step number two, and everyone's talking about Ralph and it's
14:59 exciting. Um but I wouldn't use it. I wouldn't use Ralph. And the reason I
15:03 wouldn't use Ralph if I was just starting out, Greg, is this: imagine
15:09 not knowing how to drive, but
15:15 then buying a Tesla for the self-driving stuff. Cool in theory,
15:20 but maybe it's a great idea to know how to drive, how to steer, how to hit the
15:24 corners, how to maybe yell at someone when they cut you off before you get the
15:29 full automated version. I say this to say because when you get good at
15:35 developing plans and then working with the AI to build each feature and testing
15:40 each feature, you start to develop this sense of product building, of
15:46 what I heard someone call vibe QA testing. You get this sense
15:51 by going one-on-one yourself. And this is why a lot of people who were fighting
15:55 with Claude Code all these months are really really good at using it now
15:59 because they spent the time building without using these crazy automation
16:03 loops. So if you're using Claude Code for the first time or you're just getting
16:08 into it: a good plan, number one; and number two, get your reps in by not using Ralph.
16:13 So develop the features one by one. Now that you have your plan, you can
16:17 literally tell Claude Code, hey, okay, let's build the first feature. Um, you
16:21 know, go ahead and do it. And then once the feature is done, you can test it
16:24 out. Ask it, how can I test this? How can I run this app? I wouldn't jump into
16:31 using Ralph right away. Um, build without Ralph. But let's say you've
16:37 built these reps now and you're comfortable with Claude Code. Now you
16:42 hear about all these things: skills, MCP, prompt.md, agents.md, what else is
16:49 there, something-dot-md. You hear all these conventions, plugins, you have
16:54 Ralph, all these things. So what do I need to perfectly build something using
17:01 Claude Code or any agent? I'll be honest with you, most of these things are all the same:
17:07 prompt.md and agents.md are just markdown files, and plugins are skills with a little bit
17:14 extra. What you need to build successfully using these agents is first
17:20 of all a good plan, right, which is a document, the PRD we just
17:26 generated. And then you need to document the progress that's being
17:33 made. For anyone who's familiar with Ralph, you know what I'm talking
17:37 about. For those who aren't, what's cool about a Ralph loop is as follows. A
17:42 Ralph loop is basically this: you have a list of things that need
17:47 to get done, the PRD or the plan, and you give it to the
17:52 AI model. The model works on the first task, finishes it, then documents it
17:57 in another file, and then it goes again, and it doesn't stop until it's completed
18:04 the whole list. Now, this isn't anything special, but the reason why it's now
18:08 super powerful is because the models are getting so so good. But here is the
18:13 issue. If you have a terrible plan, if you have a terrible PRD, this doesn't
18:17 matter. You're just donating money to Anthropic and I wish you the best of
18:21 luck if that's what you want to do. But if you want to make sure that your
18:25 tokens are not wasted, you're going to invest in a good PRD.md file or a good
18:31 plan file. Greg, am I making sense so far? >> 100%. >> Okay,
18:36 >> you're driving the point home. >> Yes. So, I'll talk a little bit about
18:44 Ralph now. So, with Mr. Ralph Wiggum, how do we use this? Now, there's a
18:48 lot of different iterations; people are coming up with their own style.
18:51 I'm going to share with you my Ralph setup in a second. Greg, one thing
18:57 I will say is Claude Code has a plugin, a Ralph Wiggum plugin. I wouldn't use that.
19:01 And the reason I wouldn't use that is even the person who invented the whole
19:05 Ralph system um is against it. It's not the best use of Ralph. But I just want
19:10 to share this concept of how Ralph works. It's essentially going to go
19:15 through our plan and it's going to build out each feature step by step. And it's
19:21 not going to stop until it's done. This is cool when your plan rocks. If your
19:26 plan sucks, then it's terrible. It doesn't matter. Now, in terms of how to
19:33 set up Ralph Wiggum, I have my own setup, and I don't want anyone to think
19:37 I'm shilling my own setup for any reason, but the reason why I built my
19:42 own setup is there's a couple things my Ralph loop does. The first thing is it
19:48 makes sure that there's a plan, a prd file, and there's a progress.txt file.
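For a picture of what that pair of files can look like, here's a hypothetical starting point (the file names match what he describes; the tasks are made up for illustration):

```shell
# Hypothetical starting files for a test-gated Ralph loop (tasks illustrative).
cat > prd.md <<'EOF'
# PRD: basic API server
- [ ] 1. Scaffold the server with a /health endpoint
- [ ] 2. GET /videos returns a JSON list of generated videos
- [ ] 3. POST /videos validates input and queues a generation job
EOF

# progress.txt starts empty; the agent appends one line per finished task.
: > progress.txt
```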
19:54 But also, for every feature it builds, it then writes a test and then lints.
19:59 And basically what this does is it makes sure that every feature that's built
20:03 actually works, right? Cuz there's no point in working on feature two if
20:07 feature one doesn't work. If feature one doesn't work, if the test fails, guess
20:11 what the AI model is going to do? It's going to go back to working on feature
20:16 one. And once the test passes, we work on feature two. And then once feature
20:20 two test passes, we work on feature three. Right? All this is awesome, but
20:25 I'm going to go back to the same point. If your plan sucks, then the Ralph loop
20:31 won't matter. Now, in order to set up this loop, you can find the GitHub repo
20:35 here. How to set it up? Honestly, I'm not even going to explain it.
20:39 Greg, people can literally copy the link, pass it to Claude, and then be
20:43 like, I want to run this Ralph loop, and it will tell you exactly what to do.
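The mechanics he's describing, loop until the task list is done and test-gate each feature, can be sketched in a few lines of shell. This is a generic sketch of the pattern, not his actual script; the prompt wording, file names, and iteration cap are all assumptions, and `-p` is Claude Code's non-interactive print flag:

```shell
#!/bin/sh
# Sketch of a Ralph-style loop (illustrative, not the setup shown on screen).
# Each pass asks the agent for the next unfinished PRD task, test-gated,
# and logs progress; it exits once the agent reports the list is done.
AGENT="${AGENT:-claude}"   # swap in another CLI agent if you prefer
PROMPT='Read prd.md and progress.txt. Implement the next unfinished task.
Write or update its tests, run them and the linter, and fix any failures
before moving on. Mark the task done in prd.md and append a one-line
summary to progress.txt. If every task is done, reply with exactly DONE.'

if command -v "$AGENT" >/dev/null 2>&1; then
  i=0
  while [ "$i" -lt 25 ]; do              # hard cap so it can never run forever
    i=$((i + 1))
    "$AGENT" -p "$PROMPT" > last_run.txt || break
    grep -qx "DONE" last_run.txt && break   # agent says the list is finished
  done
fi
```

The hard iteration cap is the important design choice: a loop that can't terminate on its own is exactly the "donating money to Anthropic" failure mode he warns about.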
20:48 That's how good the models have become. But I'll show you an example of this
20:54 running. So I have a simple prd file. It's nothing crazy. It's just to show
20:58 you the point. But basically there are a couple tasks here. I want to build a
21:02 basic server that has some basic endpoints. And I just want to show you
21:07 how my Ralph loop works. So when I run this Ralph loop and again if you don't
21:12 know how to run this, you paste the GitHub URL into Claude Code, into your agent,
21:16 and ask it, and it will tell you how to do it. I have a few different
21:21 configurations. I can use OpenCode if I want. I can use Codex if I want. But
21:24 I'm just going to use Claude Code. And I'm just going to run this script. And
21:29 basically what it's going to start doing is it's going to start running through
21:34 each task as you can see. And it's going to update the PRD and it's just going to
21:40 continue to work. Now I can go and leave, right? I can go about my day,
21:46 hang with Greg, and this loop will continue to work, and I'm going
21:51 to see that at some point whether it's 5 minutes, 3 minutes, 10 minutes, however
21:55 long this is, this is going to finish all the tasks. I'm going to have a
21:59 working product built, and all this is cool, but, to go back to the original
22:06 point, it doesn't matter if the plan isn't good. Now, skills are
22:11 great, MCPs are great, all these different markdown files are great. You
22:16 would do yourself a serious service if your plan is good. So, the key to
22:24 successfully building with Claude Code is to have an absolutely great plan. And
22:29 if you use the ask user question tool, you will spend so much time on the plan
22:33 where it starts to get annoying. It stops being fun. But those of us who
22:37 focus on this will end up having better outputs. Um, let's continue. If you
22:43 notice here, my Ralph loop is continuing to go and it took care of the first
22:47 task. I can see some files already generated. If I go to the progress.txt
22:53 file, you can see Greg, it's started to make some progress. It's documenting
22:57 that. And this is just going to continue to work. This is just going to continue
23:00 to run. So, people have different iterations. I know the AMP code people
23:04 have their own iteration. Um, and different people have their own
23:06 iteration. It doesn't really matter, right? Someone's Ralph could be
23:10 better, someone's could be worse. All of that is cool,
23:15 but don't get stuck in the weeds. The main sauce is how you can articulately,
23:21 in a beautiful presentation, create the perfect input, because if you
23:25 create the perfect input, we have reached a point where the models will
23:30 give you perfect output. So that's my main crash-course tip for people. Use
23:36 the ask user question tool. Build without using Ralph. And if you are
23:40 going to use Ralph, understand if your plan sucks, you're just donating money
23:44 to Anthropic. And I think Anthropic has enough money that they don't need your
23:48 Tips & tricks: don’t obsess over MCP/skills/plugins
23:49 money being donated to them. >> Amen. >> Amen. Is there anything else people need
23:55 to know? Like little tips and tricks. I notice you know you're not using the Mac
24:00 terminal. You're using Ghostty. >> Yes. Yes. So, honestly, it's all
24:05 preference, right? So, like the terminal you use and all this stuff is all
24:09 preference. Here's what I would say. Like, let's have a tips and tricks list.
24:17 Tips and tricks. So, first I would say, my goodness, spelling today. First I
24:23 would say is use the, what was the specific tool? I just want to make sure
24:28 I don't forget: the AskUserQuestion tool. Slept on. I don't know why no one's
24:32 talking about it. I saw the tweet from the Anthropic team. 100% I
24:38 would use that when planning. Number two, don't over-obsess
24:48 on MCP, skills, etc., etc. I'm not saying don't get into these. I'm not
24:51 saying don't read about them. I'm not saying don't use them. But I I can
24:56 almost guarantee you these things are not the reason why your product isn't
25:00 working. Right? Most of the time, it's that your plan sucks. Right? That's number
25:08 two. Number three, I would use Ralph after I've built something without it. And
25:13 the reason being is again listen if you are a baller shot caller and you have
25:17 all the money to blow and you don't care and you want to donate money to
25:21 Anthropic, go ahead and use Ralph. But if we were to sit here eye to eye and
25:25 you haven't built anything, deployed anything, there isn't a URL that I
25:30 myself or Greg can click on that you've built, you have no business using Ralph.
25:34 You literally have no business using Ralph. I would first get good at
25:39 prompting and building something using a plan, whether it's whatever agent, Claude
25:43 Code, OpenCode, whatever. Once you have something deployed to Vercel, or like
25:48 there's a URL and we can use it, then you can use Ralph. Number four, um this
25:56 is a little in the weeds, but context is more important than ever. And a lot of
26:01 times Claude Code or even Cursor will tell you what percent of context has
26:06 been used. I generally wouldn't go over 50%. Meaning, the Anthropic
26:12 model Opus 4.5 has a 200,000-token context limit. The moment, in my opinion,
26:17 you've gone over 100,000 tokens in the same session, it starts
26:21 to sort of deteriorate. That's when you have people, Greg, who say, oh, I
26:25 started off good but it started going bad. That's because you've filled it
26:29 with so much context. And the best way to think about this is like yourself
26:33 right? Like, let's say we went to some English class or some, you know,
26:38 whatever class, and the professor just kept dumping information. At
26:42 some point we're going to feel overwhelmed and we're going to actually
26:46 start forgetting stuff. I'm not saying that's how the models work, but
26:49 that's how the models act, right? So context is very much important. The
26:55 moment you see 50% or even 40%, I would start a new session. And last but not
27:01 least, have audacity. And what I mean by that is software development is
27:05 starting to become easy, but software engineering is very, very hard. What
27:09 do I mean by that? To architect software, to make sure things are usable,
27:15 to create great UX and UI, to have great taste, to make something that people
27:19 actually use requires time, and in order to spend time, it requires audacity. I
27:23 know the models are good and you can clone a $6 billion piece of software, but if all
27:28 of us can do it now, what makes software different? I think thinking about those
27:32 things, and thinking about the art of building products and building something
27:36 that's tasteful, is very, very important. And I think anyone who uses these five
27:43 tips should kick cheeks in 2025, 2026 sorry. >> I agree on the audacity thing. I think
27:44 Scroll-stopping software wins
5:31 Claude Code Plan Mode
5:33 these models. So, instead of just telling you about planning, why don't we do
5:38 actual planning together? So, I'm going to pop up my terminal. So, I know
5:42 everyone's afraid of the terminal, but in all honesty, if you don't know how to
5:46 use a terminal, ask AI. It's the simplest thing. And if not, you can
5:51 even download the Claude Code app, go to the code section, give it a specific
5:55 folder you want to work on, and use the app. There's literally no excuse
5:59 not to use Claude Code. If you're afraid, boohoo, just jump in and use AI. You have all
6:03 the tools. That being said, I'm just going to type in claude and we're going
6:08 to have Claude Code open. And usually how people plan is they'll press Shift+
6:12 Tab, right? And then you have plan mode on and you can say, let's say, I want to
6:17 build a TikTok UGC-generating app for my marketing agency.
6:28 I see like these UGC apps everywhere. Um, please help me create a plan. Write
6:39 this in a PRD.md file. So, this is how most people have
6:47 planning set up, right? You'll tell Claude Code or Cursor or whatever agent
6:52 to do the plan for you, you ask it to put it in some file, and it says it'd
6:57 be happy to help you plan this out, and it'll ask you some questions, etc., etc.
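In terminal terms, that default flow looks something like this. If I recall the CLI correctly, `--permission-mode plan` starts Claude Code in the same plan mode you get by pressing Shift+Tab; the prompt wording is just an example, and the guard makes the sketch a no-op where the CLI isn't installed:

```shell
# Default planning flow: launch Claude Code in plan mode with a rough idea.
if command -v claude >/dev/null 2>&1; then
  claude --permission-mode plan \
    "I want to build a TikTok UGC generating app for my marketing agency.
Please help me create a plan and write it in a prd.md file." || true
fi
```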
7:02 But I found that there's a better way to get an even more concise plan. And this
7:08 way it actually gets you to think a lot more about tradeoffs, concerns, UI/UX
7:13 decisions because most of the time you're sort of allowing the AI to have
7:17 free rein over certain decisions, which I think will leave you with a finished
7:21 product that you're not excited about. And that's invoking a special tool. Um I
7:26 was going to show you guys the tweet but unfortunately Twitter's down right now.
7:30 But Claude Code has a specific tool called the AskUserQuestion tool. And
7:34 essentially what this tool does, it starts to interview you about the
7:40 specifics of your plan. Right? So I'm going to drop this prompt where it says
7:44 read this plan file. Interview me in detail using the ask user question tool
7:48 about literally anything. Technical implementation, UI, UX concerns, and
7:52 trade-offs. I spelled implementation wrong. Do not judge me. Um, and what
7:56 this is going to do is it's going to go past the plan that we have and start to
8:01 ask us about minute details. So, let's finish off this plan first. I'm just
8:06 going to accept um this is internal use uh text. We'll use React. I just want
8:13 core features. We'll submit answers. And then Claude Code, you'll see, might ask us
8:16 a few more questions, but this will generally be the plan, >> right? So it's not
8:23 just the plan, it's the right plan, right? Like to what you were saying, like,
8:27 scroll back up here, the features, and yeah, the features and tests.
8:34 The way I think about this, and I don't know if you agree, is like if you
8:38 ask Claude Code to build you a car, it doesn't really know what a car is. It
8:41 doesn't understand that you need a steering wheel and, you know, a radio,
8:47 and you need wheels. So the hard part is
8:50 basically explaining what those things are in a really succinct and clear way.
8:55 And that's what this interview is basically doing. It's it's explaining
8:59 each of them and then we're going to test each of those features. Exactly.
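Concretely, that interview step is just a prompt. A paraphrase of the one he pastes, runnable from a fresh terminal session (guarded so the sketch is a no-op on machines without the claude CLI; tune the wording to taste):

```shell
# Interview prompt for the planning phase (wording paraphrased from above).
if command -v claude >/dev/null 2>&1; then
  claude "Read the plan in prd.md. Interview me in detail using the
AskUserQuestion tool about literally anything: technical implementation,
UI/UX concerns, and trade-offs." || true
fi
```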
9:02 Think of it like this, a simple example. Let's say you ask the AI
9:09 agent to build you a specific feature, right? How is it going to present that
9:12 specific feature? Did you want it in a dashboard? Did you want it to be a
9:15 modal? Did it have to be a separate page? Like when you don't specify these
9:20 minute details, it will make the assumption for you. And with Ralph loops
9:24 and all these types of things, you might have a whole application built out
9:28 and it's not exactly to the liking or the expectations you had. Right? So, let
9:30 The Ask User Question Tool
9:33 me continue. I'll just make some selections here just so we can move on.
9:39 Um, and then hit submit. And then I'm going to pause this planning here and
9:44 then I'm going to paste this. I'm going to say read this plan file and I'm going
9:48 to tag the plan file. It's called prd.md. We have that right here. Um, and I'm
9:54 going to say, interview me about the details of this plan. Or I don't even need
9:57 to tag it because it has it in its context. But I just want to show you how
10:02 annoying this is going to get. Meaning, it's going to keep asking me
10:09 questions about said plan or said uh app idea. So notice how it says round one
10:15 core workflow and technical foundation, right? And some of the questions it
10:18 might even ask you are things that you might not know about cuz you're not
10:21 technical. So what do I do when I don't know something, Greg? I'm going to copy
10:24 this and I'm going to go to the chatbot of my choice, whether it's Claude,
10:28 ChatGPT, whatever, and I'm going to ask it questions. So if you remember earlier,
10:32 it asked me generic questions about the app. Now it's saying, "What's your ideal
10:36 workflow for generating UGC video from start to finish?" Like notice how the questions are even
10:42 more specific now. So it says linear step-by-step, template-based, batch
10:48 processing, iterative conversational. So let's say I select that, and it says how
10:53 should the app handle agent API cost and usage. So now it's talking about cost
10:57 right? Again, most of the time, when you just have a basic plan, this is not
11:00 included in the plan. Right? Let's say we want to have a hard budget.
11:05 what database and hosting approach do you want to use? Most of you probably
11:08 watching this have no idea. So I can copy this over, go to ChatGPT and ask
11:12 what's the best decision. This is my current situation. And then you keep
11:16 going. You keep going and you submit answers. So when you use this ask user
11:21 question tool, the questions become more granular. So it asks me about core
11:25 workflow and technical foundation. Now it's going to ask me about UI, UX, and
11:30 script generation. If you notice the first plan that it came up with, the
11:35 default plan for claude code, it was pretty basic. Now it's asking me, okay,
11:39 what AI do I want to use for the script generation? I'll use Claude. Uh, what UI
11:43 style aesthetic are you going for? Minimal clean, dashboard heavy, creative
11:49 tool field, chat first. Right. So hopefully, Greg, I'm making sense with
11:53 like how much more questions I'm being asked when I'm invoking this ask user
11:59 question tool. Yeah, it makes complete sense. You're also you're also going to use less
12:05 tokens in the end, right? Because you're right. >> Yeah. Because the thing is the better
12:10 your plan, the better your input, the better the initial set of documents that
12:17 you give the model, um the the better the outcome. And if the better the
12:19 outcome, there's no back and forth, right? Most people will have a Ralph
12:23 loop running. It'll be a basic plan and it'll do what you told it to do, but you
12:27 weren't specific. So now you're going back and then maybe you're running
12:29 another loop or you're going back and doing all these changes. But if you get
12:35 it done right, if you invest the time in the planning stage, I 100% believe
12:40 you'll save a lot more money. And this will help you clear up a lot of ideas.
12:44 So like for example, this idea that we just had, this Tik Tok UGC farm, um, how
12:49 do we want it set up? Do we want it to be flat with search? Do we want it to be
12:53 client campaign assets? There's a lot of like these minute details that you're
12:58 not thinking about and because you're not thinking about it, you're allowing
13:01 cloud code to make those assumptions for you, right? Which at the end after it's
13:06 burned through a ton of tokens, now you're going back to change, right? We
13:11 can save so much headache if we do the proper planning from the beginning. And
13:18 hopefully um people see value in this um ask user question tool. Make sure you
13:22 specify it in your prompt. And hopefully, Greg, that that made sense.
13:25 >> It does. >> So, I would say step number one for this
13:30 Claude C crash course is I would get good at planning. I would get really
13:34 really good at planning. I would get good at generating these, right? Like
13:38 look, it it keeps on asking me questions. If you notice the very first
13:42 plan that we generated with Claude, it was two sets of questions and it was
13:46 ready to build. But with this, it's asking me, do I want basic avatars,
13:50 custom avatars, multi-seene videos? How do I want to handle storage? Do I want
13:54 to download the videos instantly? Cloud storage, external storage, like there's
14:00 so much to software engineering. And I think in our last video, you um someone
14:03 shared this on Twitter. I don't know if it was you or someone else. Like
14:07 software um building personal software is easy, but building software others
14:11 are going to use is very, very difficult. And if you don't have the
14:16 audacity or the decency to to set up a little time, a little extra time to
14:19 plan, then I guarantee whatever you generate is going to be AI slop. And you
14:23 might blame the model, but really the problem is you. So invest in your plans.
14:28 Spend time using planning. Um don't use the generic plan uh mode that cursor or
14:34 claude code has. I would use claude code. And then I would specify the ask
14:39 user question tool. um it's going to continue to know you with questions like
14:43 it keeps asking, right? Cuz until it knows exactly what it is you want, it
14:47 won't start building. Um so I would say that's step number one to building with
14:53 cloud code. Step number two, and everyone's talking about Ralph and it's
14:59 exciting. Um but I wouldn't use it. I wouldn't use Ralph. And the reason I
15:03 wouldn't use Ralph if I was just starting out, Greg, is because um how
15:09 are you going to like imagine this, like imagine not knowing how to drive, but
15:15 then buying a Tesla for uh like the self-driving stuff. Like cool in theory,
15:20 but maybe it's a great idea to know how to drive, how to steer, how to hit the
15:24 corners, how to maybe yell at someone when they cut you off before you get the
15:29 full automated version. I say this to say because when you get good at
15:35 developing plans and then working with the AI to build each feature and testing
15:40 each feature, you you start to develop this sense on product building on on
15:46 like you know even uh I heard someone call vibe QA testing. You get this sense
15:51 by going one-on-one yourself. And this is why a lot of people who were fighting
15:55 with claude code all these months are really really good at using it now
15:59 because they spent the time building without using these crazy automation
16:03 loops. So if you're using cloud code for the first time or you're just getting
16:08 into it good plan number one and number two get your reps in by not using Ralph.
16:13 So develop the features one by one. Now that you have your plan, you can
16:17 literally tell Claude Code, hey, okay, let's build the first feature. Um, you
16:21 know, go ahead and do it. And then once the feature is done, you can test it
16:24 out. Ask it, how can I test this? How can I run this app? I wouldn't jump into
16:31 using Ralph right away. Um, build without Ralph. But let's say you've
16:37 built these reps now and you're you're comfortable with Cloud Code. Now you
16:42 hear about all these things. skills MCP uh prompt MD agent MD um what else is
16:49 there something MD you you hear all these conventions plugins um you have
16:54 Ralph all these things so what do I need to perfectly uh build something um using
17:01 cloudc any agent I'll be honest with you most of these things are all the same
17:07 prompt MD and agent MD are just markdown files um plugins are skills with you know a little bit
17:14 extra. What you need to build successfully using these agents is first
17:20 of all you need a good plan right which are documents which is the prd we just
17:26 generated and then you need um to document um the progress that's being
17:33 made. Um for anyone who's familiar with for with Ralph you know what I'm talking
17:37 about. For those who aren't, what's cool about a Ralph loop is as follows. A
17:42 Ralph loop is basically you have a list of things that need to be get that need
17:47 to get done. Uh the uh whatchamacallit the prd or the plan you give it to the
17:52 AI model. The model works on the first task. It finishes it then documents it
17:57 in another file and then it it goes again and it stops until it's completed
18:04 the whole list. Now, this isn't anything special, but the reason why it's now
18:08 super powerful is because the models are getting so so good. But here is the
18:13 issue. If you have a terrible plan, if you have a terrible PRD, this doesn't
18:17 matter. You're just donating money to Enthropic and I wish you the best of
18:21 luck if that's what you want to do. But if you want to make sure that your
18:25 tokens are not wasted, you're going to invest in a good PRD. MD file or a good
18:31 plan file. Greg, am I making sense so far? >> 100%. >> Okay,
18:36 >> you're driving the point home. >> Yes. So, I'll talk a little bit about um
18:44 Ralph uh now. So, with Mr. Ralph Wiggum, um how do we use this? Now, there's a
18:48 lot of different um iterations like people are coming with their own style.
18:51 I'm going to share with you my Ralph setup in a second. Um Greg, um one thing
18:57 I will say is Cloud Code has a plugin, a Ralph Wigum plugin. I wouldn't use that.
19:01 And the reason I wouldn't use that is even the person who invented the whole
19:05 Ralph system um is against it. It's not the best use of Ralph. But I just want
19:10 to share this concept of how Ralph works. It's essentially going to go
19:15 through our plan and it's going to build out each feature step by step. And it's
19:21 not going to stop until it's done. This is cool when your plan rocks. If your
19:26 plan sucks, then it's terrible. It doesn't matter. Now, in terms of how to
19:33 set up um Ralph Wigum, I have my own setup, and I don't want anyone to think
19:37 I'm shilling my own setup for any reason, but the reason why I built my
19:42 own setup is there's a couple things my Ralph loop does. The first thing is it
19:48 makes sure that there's a plan, a prd file, and there's a progress.txt file.
19:54 But it also every feature it builds, it then writes a test and it then lints.
19:59 And basically what this does is it makes sure that every feature that's built
20:03 actually works, right? Cuz there's no point on working on feature two if
20:07 feature one doesn't work. If feature one doesn't work, if the test fails, guess
20:11 what the AI model is going to do? It's going to go back to working on feature
20:16 one. And once the test passes, we work on feature two. And then once feature
20:20 two test passes, we work on feature three. Right? All this is awesome, but
20:25 I'm going to go back to the same point. If your plan sucks, then the Ralph loop
20:31 won't matter. Now, in order to set up this loop, um you can find the uh get up
20:35 here. How to set it up, you honestly, I'm not even going to explain it. Uh
20:39 Greg, people can literally copy the link, pass it to Claude, and then be
20:43 like, I want to run this Ralph loop, and it will tell you exactly what to do.
20:48 That's how good the models have become. But I'll show you an example of this
20:54 running. So I have a simple prd file. It's nothing crazy. It's just to show
20:58 you the point. But basically there are a couple tasks here. I want to build a
21:02 basic server that has some basic endpoints. And I just want to show you
21:07 how my Ralph loop works. So when I run this Ralph loop and again if you don't
21:12 know how to run this the you paste the GitHub URL in cloud code in your agent
21:16 and ask it and it will tell you how to do it. I have a few different
21:21 configurations. I can use open code if I want. I can use codeex if I want. But
21:24 I'm just going to use cloud code. And I'm just going to run this script. And
21:29 basically what it's going to start doing is it's going to start running through
21:34 each task as you can see. And it's going to update the PRD and it's just going to
21:40 continue to work. Now I can go and leave, right? I can go about my day,
21:46 hang with um hang with uh Greg and this loop will continue to work and I'm going
21:51 to see that at some point whether it's 5 minutes, 3 minutes, 10 minutes, however
21:55 long this is, this is going to finish all the tasks. I'm going to have a
21:59 working product built and all this is cool, but it doesn't matter if I'm going
22:06 to go back to the original document if the plan isn't good. Now, skills are
22:11 great, MCPs are great, all these different markdown files are great. You
22:16 would do yourself a serious service if your plan is good. So, the key to
22:24 successfully building with Claude Code is to have an absolutely great plan. And
22:29 if you use the ask user question tool, you will spend so much time on the plan
22:33 that it starts to get annoying. It doesn't get fun. But those of us who
22:37 focus on this will end up having better outputs. Um, let's continue. If you
22:43 notice here, my Ralph loop is continuing to go and it took care of the first
22:47 task. I can see some files already generated. If I go to the progress.txt
22:53 file, you can see Greg, it's started to make some progress. It's documenting
22:57 that. And this is just going to continue to work. This is just going to continue
23:00 to run. So, people have different iterations. I know the Amp people
23:04 have their own iteration, and different people have their own
23:06 iterations. It doesn't really matter, right? Someone's Ralph could be
23:10 better, someone's could be worse, and all of that is cool,
23:15 but don't get stuck in the weeds. The main sauce is how articulately,
23:21 perfectly, in a beautiful presentation, you create the perfect input, because if you
23:25 create the perfect input, we have reached a point where the models will
23:30 give you perfect output. So that's my main uh tip crash course for people. Use
23:36 the ask user question tool. Build without using Ralph. And if you are
23:40 going to use Ralph, understand if your plan sucks, you're just donating money
23:44 to Anthropic. And I think Anthropic has enough money that they don't need your
23:49 money being donated to them. >> Amen. >> Amen. Is there anything else people need
23:55 to know? Like little tips and tricks. I notice you know you're not using the Mac
24:00 terminal. You're using Ghostty. >> Yes. Yes. So, honestly, it's all
24:05 preference, right? So, like the terminal you use and all this stuff is all
24:09 preference. Here's what I would say. Like, let's have a tips and tricks list.
24:17 Tips and tricks. So, first I would say, my goodness, spelling today. First I
24:23 would say is use the, what was the specific tool? I just want to make sure
24:28 I don't forget, the ask user question tool. Slept on. I don't know why no one's
24:32 talking about it. It literally, I saw the tweet from the Anthropic team. 100% I
24:38 would use that when planning. Uh, number two, um, don't over-obsess
24:48 over MCPs, skills, etc., etc. I'm not saying don't get into these. I'm not
24:51 saying don't read about them. I'm not saying don't use them. But I can
24:56 almost guarantee you these things are not the reason why your product isn't
25:00 working. Right? Most of the time, it's that your plan sucks. Right? That's number
25:08 two. Um, number three, I would use Ralph after I've built something without it. And
25:13 the reason being is, again, listen, if you are a baller shot caller and you have
25:17 all the money to blow and you don't care and you want to donate money to
25:21 Anthropic, go ahead and use Ralph. But if we were to sit here eye to eye and
25:25 you haven't built anything, deployed anything, there isn't a URL that I
25:30 myself or Greg can click on that you've built, you have no business using Ralph.
25:34 You literally have no business using Ralph. I would first get good at
25:39 prompting and building something using a plan, whether it's AG1, Claude
25:43 Code, OpenCode, whatever. Once you have something deployed to Vercel, or like,
25:48 there's a URL and we can use it, then you can use Ralph. Number four, um, this
25:56 is a little in the weeds, but context is more important than ever. And a lot of
26:01 times Claude Code or even Cursor will tell you what percent of context has
26:06 been used. Um, I generally wouldn't go over 50%. Meaning, the Anthropic
26:12 model Opus 4.5 has a 200,000-token context limit. The moment, in my opinion,
26:17 you've gone over 100,000 tokens, meaning you're using the same session, it starts
26:21 to sort of deteriorate. That's when you have people, Greg, who say, oh, I
26:25 started off good but it started going bad. That's because you've filled it
26:29 with so much context. And the best way to think about this is like yourself,
26:33 right? Like, let's say we went to some English class, or some, you know,
26:38 whatever class, and the professor just kept dumping information on us. At some
26:42 point we're going to feel overwhelmed and we're going to actually
26:46 start forgetting stuff. Um, and I'm not saying that's how the models work, but
26:49 that's how the models act, right? So context is very much important. The
26:55 moment you see 50% or even 40%, I would start a new session. And last but not
27:01 least, um, have audacity. And what I mean by that is: software development is
27:05 starting to become easy, but software engineering is very, very hard. And what
27:09 do I mean by that? Um, to architect software, to make sure things are usable,
27:15 to create great UX and UI, to have great taste, to make something that people
27:19 actually use, requires time. And in order to spend time, it requires audacity. I
27:23 know the models are good and you can clone a $6 billion software, but if all
27:28 of us can do it now, what makes software different? I think thinking about those
27:32 things, and thinking about the art of building products and building something
27:36 that's tasteful, is very, very important. And I think anyone who uses these five
27:43 uh tips should kick cheeks in 2025. 2026, sorry. >> Um, I agree on the audacity thing. I think,
27:48 like, for me, it's about creating scroll-stopping software.
27:53 You know what I mean? Like, there's so many people and there's a lot of
27:56 tutorials about this, like cloning billion-dollar software. You know, I
28:00 cloned a $4 billion software. Look at me. But that's not the type of software
28:06 that's going to work in 2026, right? Um, I saw this, uh, let me just share it real
28:13 quick. I saw this guy who created a running app based on how you're feeling.
28:16 So it's like how are you feeling? Stressed, angry. Um, and it's an AI
28:22 assisted running app that interprets your current emotions to generate a
28:25 personalized route. And I just thought it was interesting, you know what I
28:29 mean? Like I had never seen an app like this. And I think that like as you know,
28:34 you call it Audacity. I think this is an audacious app, right? It's scroll
28:39 stopping. You haven't seen it before. So I think you want to push Claude
28:46 Code to, like, get you to this, basically. >> And this is why I'm, like, so pro
28:50 people not using Ralph if they haven't built anything fully. Cuz, like, now
28:54 people are getting to a point where they want the model to think for them,
28:58 right? Where like if you look at the app you just shared the animations and how
29:02 things were floating and like even the colors used for the different emotions
29:06 like that required thought, right? And that's what stops people now. Like if
29:11 building the AI chat interface is easy, what's going to make your app different?
29:14 I think a little bit of audacity, a little bit of thought and care, and a
29:17 little bit of taste goes a long way nowadays. Um, and that matters more than the models
29:22 getting better, cuz it's going to get easier, it's going to get better, it's
29:26 going to get faster. But unfortunately, if you don't change, then none of it
29:28 matters. >> Yeah. And don't be afraid to use pen and paper. Like, this person literally
29:35 just like started sketching out the features. >> Yeah.
29:38 >> Like how should this thing work? >> Yeah. How should it feel? Like And I
29:41 love it. I love it. Right. And this is why, with this app, I don't know the
29:45 metrics, but I'm willing to bet it's doing really, really well, because all
29:49 this stuff matters. Like, we could clone something like this feature-wise, but
29:53 I'm willing to bet like the feel, the animations, the colors, we would not be
29:57 able to get it exactly like this. >> 100%. All right, man. Thanks for coming on.
30:04 You got me fired up. I actually I didn't know about that uh interview tool, so
30:09 thanks for sharing that with me. Um, >> yeah, just a heads up, it will ask a lot
30:12 of questions. I shared it with a couple friends and a couple people got annoyed,
30:17 but it's worth it, right? Especially if you want to build something end to end,
30:22 or you're building a very minute, detailed feature, then it's
30:26 really, really worth it. I wouldn't use it for a general plan, personally. Um, so just
30:30 a heads up, but it's really really worth it and I would love to hear people's
30:34 feedback in the comments. >> Sounds good. We'll be in the comments.
30:38 uh you got to come back on in a few months or whenever people want you. Uh
30:42 it's always an absolute privilege to have you here. I'll include links where
30:48 you can follow, and you should follow, uh, Ross Mike. Uh, his YouTube channel
30:55 is X. I'll include the link to Ralph. Even though, if you're a beginner, don't
31:00 even click that link. I wouldn't. Like, I know there's maybe some degenerates
31:04 who do, but I highly suggest you don't, because if you haven't even built
31:07 without it, >> then [clears throat] no point. >> Have some willpower, folks. Come on. You
31:12 know, don't click the link. But I'm putting it in there cuz I want to see
31:17 who's tempted and uh thanks again for coming on. I'll see you uh I'm coming to
31:20 Toronto in April, so let's hang out. >> Well, we'll see. We'll see each other
31:23 then. And again, as always, it's a pleasure. Thank you so much, you know,
31:25 for bringing me on.