you2idea@video:~$ watch ttdWPDmBN_4 [14:56]
// transcript — 315 segments
0:02 there is a very good chance that you are leaving most of the potential of your AI
0:07 coding assistant on the table. So right now I want to get super practical with
0:11 you and show you some of the best techniques that all the top agentic
0:16 engineers use for AI coding, the people that have a real system in place for
0:20 working with their coding agent like Claude Code or Cursor. And so I am
0:25 going to assume here that you have at least a basic understanding of how to
0:29 use coding agents because now I want to get specific into the really powerful
0:33 unlocks for you. And I'm going to be very concise in my explanations here.
0:37 I'm not going to waste any of your time. And the best part is all of this, it
0:41 doesn't require any new tools. It really is just a better way of working. All
0:45 right, so let's get started. At the top of the list here, we have PRD-first
0:51 development. So PRD is short for product requirements document, and that can mean a
0:54 lot of different things, but in this context it is a markdown document, a
1:00 single place to define the entire scope of work for your project. And so for
1:03 greenfield development, when you're starting from scratch, this generally
1:08 contains everything that you have to build to complete your proof of concept
1:13 or your MVP. And the beauty of this is this single document becomes the north
1:18 star for your coding agent. And so from the
1:23 PRD, you get all of the individual features that you're going to build out
1:26 with your coding agent. It is important to not have your coding agent do too
1:31 much at once or it's going to completely fall on its face. So you use the PRD as
1:36 a way to split your project into more granular features like implementing the
1:40 API, implementing the user interface, building the authentication in. You split
1:45 it up like that. And then for brownfield development, if you are working on an
1:49 existing codebase, instead of creating your initial scope of work, it's more about
1:53 documenting what you already have in your project and then what you want to
1:56 build next. But either way, you're creating the north star for your
2:00 project. A lot of people miss this. They dive right into building that first
2:04 feature, but then they don't really have any connection between the different
2:07 iterations that they're doing with their coding agent. And in order to
2:11 demonstrate all of the techniques that I cover in this video, I have this GitHub
2:15 repository which I'll link to in the description. I have a very basic demo
2:20 project built out in this repo as well as all of the commands that I cover
2:24 here. All of my workflows that I use day-to-day for AI coding, I have
2:28 this all laid out for you. These are the core slash commands that I use every
2:33 day. And one of these commands is related to what we were just talking
2:41 about. I have a full workflow built out for you to help you create your PRDs. So
2:41 now going over to the repository locally, I have all of the commands in
2:45 the .claude/commands folder for you. And by the way, you can use these commands for any AI coding
2:50 assistant, not just Claude Code. They're really just prompts that define these
2:54 workflows. And so I have the create PRD command right here. And so the whole
2:58 point of this is you have a conversation with an AI coding assistant about what
3:02 you want to build, right? Like, "I want to build XYZ, help me plan it out." And once
3:08 you get to the point where you're on the same page with the coding agent for what
3:12 you want to create in your conversation, you just run /create-prd and it'll
3:16 output to a document that you specify here the entire scope of work. And so
3:21 for this simple habit tracker application, this is my PRD. And all of
3:25 these sections are a part of the template defined in the command. So
3:29 target users, your mission, what is in scope, what is out of scope. You also
3:33 have the whole architecture laid out here. This is now your north star. So for
3:37 all of the feature development that you do after this point, you're now going to
3:41 be referencing the PRD to figure out what to build with the help of your
3:45 coding agent. So another command that I always use in my AI development
3:49 workflows is a prime command. I run this at the start of any new conversation to
3:53 load all of the necessary context in from the project. And the PRD is one of
3:58 the core files that I always make sure my agent reads because then after it
4:02 primes itself on the codebase, I can just simply ask based on the PRD, what
4:08 should we build next? And this question, I ask every single day, because this is
4:13 the workflow that I use no matter what I'm building, and it doesn't
4:17 matter what codebase I'm working on. All right, so the next big concept to wrap
4:21 your head around is the idea of a modular rules architecture. Cuz here's
4:26 the thing, most people make their global rules way too long. Remember, these are
4:31 the constraints and conventions that are loaded into the context for your coding
4:35 agent at the start of every single conversation. So if it's not
4:39 lightweight, you're going to overwhelm the LLM with the number of rules that
4:43 you have for it. And so your AGENTS.md or your CLAUDE.md, whatever your global
4:47 rule file is, you want to keep this actually really short and focused on the
4:52 rules that apply no matter what you're working on. Things like the commands to
4:56 run, your testing strategy, your logging strategy, that kind of thing. But when
5:00 you're working on just the front end, maybe you're focusing on rules for your
5:03 components, or you're working on deployments, or you're working on building
5:07 out the API for your application, you want to actually take these different
5:11 rules for different task types, split them out into different markdown
5:15 documents, and have your primary global rules reference these. So you're only
5:19 loading these into the context of your LLM when you're working on something
5:23 that really applies to the rules that you have laid out here. All right. So
5:27 going back to our handy-dandy habit tracker application, I want to show you
5:31 an example of what these rules can look like. And so I'm using Claude Code. So
5:35 CLAUDE.md is my global rule file. And I'm keeping this very lightweight. You can
5:39 see that I don't even have 200 lines in my rules file here. But there's a lot
5:43 more context that I have in my .claude/reference folder. So, I'll show you this in just a
5:49 second, but what I have here is the things that I care about no matter what
5:52 I'm working on. My tech stack. I want my coding agent to know the project
5:56 structure so it can navigate it better. What commands we have to run the front
6:00 end and backend, for example, MCP servers, code conventions, my logging
6:04 standards. Remember, these are the things that like literally no matter
6:08 what kind of feature I'm working on, I want the LLM to know this information.
6:12 But here's the key thing. Right here, I have a reference section. This is where
6:18 I refer to the task type specific context that I want to load only for
6:23 certain types of features. And so because these paths are loaded into our
6:28 global rules, the coding agent is going to understand like, okay, when I'm
6:31 working on building API endpoints, that is when I should read this file. And so
6:36 I have these all just living in my codebase in this reference folder. And
6:40 so there's a lot more context here. Like this document alone is almost a thousand
6:45 lines long. And it's the same for a lot of these because this is where we get
6:49 very specific with our instructions. And we're allowed to make this longer
6:52 because we're only going to read this when we're actually working on our API.
6:56 And so having this reference section in your global rules is a very powerful way
7:01 to keep your rules concise while still having all the context that you need.
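As a sketch, a lightweight global rules file with a reference section could look something like this. The section names, paths, and stack are illustrative assumptions, not taken from the actual repo:

```markdown
# CLAUDE.md

## Tech Stack
React front end, FastAPI back end (illustrative)

## Commands
- `npm run dev`: start the front end
- `npm test`: run the test suite

## Logging
Always log through the shared logger module; no bare print statements.

## Reference
Read these only when the task calls for it:
- Building API endpoints: `.claude/reference/api.md`
- Front-end components: `.claude/reference/components.md`
- Deployments: `.claude/reference/deployment.md`
```

The point of the Reference section is that the long, task-specific documents stay out of the context window until the agent decides the current task matches one of them.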
7:06 The goal is to protect the context window of your coding agent. That is
7:10 something that a lot of people severely underestimate the importance of. And you
7:14 can also reference these documents in your commands as well. Like maybe you've
7:17 built up a workflow to build API endpoints. So you don't actually have to
7:21 reference it here. You just have the path given in one of the commands that
7:24 you have. So whatever it takes to make sure that you have access to all the
7:29 context you need, but not all up front. So the next technique that I have for
7:32 you is probably the most obvious out of all of them. But it is so important that
7:37 I need to make sure this is always on the forefront of your mind. You should
7:42 be commandifying everything, if that really is a word. Basically, anytime you
7:47 send in a prompt to your coding agent more than twice, that should scream to
7:51 you that it's an opportunity to turn it into a command or a reusable workflow.
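To make that concrete, a reusable command is nothing more than a markdown file describing a workflow. Here is a minimal sketch of a hypothetical commit command; the filename, steps, and wording are my own illustration, not the exact command from the repo:

```markdown
<!-- .claude/commands/commit.md (hypothetical) -->
Create a git commit for the current changes.

1. Run `git status` and `git diff` to review what changed.
2. Stage only the files related to the current piece of work.
3. Write a short, conventional commit message (e.g. `feat: add habit calendar`).
4. Run the test suite first; do not commit if anything fails.
```

Once saved, a prompt you used to retype every time becomes a single slash command invocation.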
7:55 These are just markdown documents that we load in as context to define a
8:01 process for our coding agent. And so when you make a commit to git, when
8:05 you're doing a code review, when you are loading context in from your codebase,
8:08 pretty much anything that you can possibly do as a part of your
8:11 development workflow can be turned into a command because it'll save you
8:16 thousands of keystrokes as you use it more and more and more. And you can
8:19 share these workflows with other people like I'm literally doing for you right
8:24 now. And going back to the habit tracker repository, like I said earlier, I have
8:28 all of my core commands that I use on a day-to-day basis documented here and
8:32 included in the repository. So you can feel free to take all of these for
8:37 yourself and customize them to your own use cases. So pretty much everything
8:41 that I found myself prompting more than twice, I've packaged up into a workflow
8:45 here that you can feel free to use. So making git commits, creating the PRD
8:49 like we saw earlier, everything in my core feature development cycle like
8:53 executing, planning, priming, all of my validation commands, even the ones for
8:57 system evolution that we'll talk about in a little bit. I've got it all for you
9:01 here. And just to show you really quickly everything that I cover in the
9:05 course, this is the complete system that we build together for both greenfield
9:09 and brownfield development. I go over how I build features in cycles, system
9:14 evolution to make our AI coding agent more powerful over time. I cover it all
9:20 here. It is very, very comprehensive. All right, the next technique that I
9:24 have for you is also related to context management. If you couldn't tell
9:28 already, that is a very critical component to working with coding agents.
9:34 And so, here we have the context reset. And what I mean by this is in between
9:39 your planning and your execution where you write the code, you should always
9:44 restart the conversation window with your coding agent. And the only reason
9:48 you're able to do that is because you always end your planning session by
9:52 outputting a document, typically a markdown doc. And this document has all
9:58 of the context that you need to go into the execution. So we're not going to do
10:02 any priming. We're not going to tell the coding agent what we want to build when
10:07 we go into building our solution. That next feature, we just feed it this
10:11 document. That is it. And the reason we want to do this is because we want to
10:16 keep our context as light as possible when we get into the actual coding to
10:20 leave as much room for the agent to reason about what it's doing, do all the
10:24 self-validation, all that good stuff. So, I'll show you right now what this
10:28 looks like using commands that I have for you in the repository. So we always
10:32 start our planning with the prime command. So we understand what's in our
10:35 codebase and then we have a conversation with our coding agent to figure out what
10:39 we want to build next. Again, using our PRD: based on our PRD, what's the next
10:44 feature that makes sense? And so I didn't go through this just for the
10:47 sake of demonstration here. I went right into the next command where we create
10:52 our structured plan. This is the markdown document that we're going to
10:57 output that we use as context going into execution. And so right here I will
11:03 literally do /clear, so completely wiping the context window. Or you can just
11:06 restart your coding agent. And then I call the execute command and the
11:11 parameter for this command is that plan that I want it to read. That is all of
11:15 the context that it needs. And so I'll show you that really quickly. Like for
11:19 this simple demo for the habit tracker, we are improving the visuals for the
11:24 calendar. And so this outlines the feature description, user story,
11:27 everything at a high level. All the context to reference, the individual
11:31 components we have to build out, a task-by-task breakdown. Like this is very
11:35 comprehensive because we're not loading in any other context into our agent when
11:41 we execute the plan right here. Now, believe it or not, I have actually saved
11:45 the most important technique for last because now we are getting into system
11:51 evolution. And this is the most powerful way to use coding agents when you treat
11:56 every bug as an opportunity to make your coding agent stronger. And so instead of
12:01 just encountering a bug, fixing it manually, and moving on, we actually look
12:06 into the system for our coding agent and ask: what should we fix to make it so that this
12:10 issue doesn't happen again? This is especially powerful when you see
12:14 patterns develop of an issue that your coding agent keeps hallucinating time
12:18 and time again. And so typically when you think about what you can fix in your
12:22 system, it's either your global rules, any other kind of reference context like
12:27 we covered earlier, or your commands, aka workflows. There's going to be an
12:31 opportunity to address something here because when the coding agent messes up
12:35 on something, it's probably a rule that it doesn't understand that you want to
12:38 specify or it's a part of your process for validation, for example, that could
12:42 be better. And so just for a couple of examples that I have here, if the coding
12:46 agent uses the wrong import style, then you add a new rule, right? Like you just
12:50 have a simple one-liner to explain what that looks like. And a lot of times it
12:54 can just be a one-liner. If the AI forgets to run tests, you just update the
12:58 template for your structured plan, what is fed into execution, to include new
13:03 sections for testing. If the coding agent doesn't understand the
13:06 authentication flow, well, that's when you can create a new reference document.
13:10 and you would update your global rules: when working on authentication, you
13:14 should reference this doc, just like we showed up here. So, in the end, there
13:18 are a million different ways that we can get into the system improvement mode
13:23 with our coding agent, but typically you're going to do it right after you
13:26 finished building a feature and you validated things yourself. You notice a
13:30 couple of things that are incorrect with how the application works or something
13:34 wrong in the code, and you just go in, you say, "Hey Claude, I noticed that XYZ
13:39 is not working in the application and so I had to make this fix. And so what I
13:44 want you to do is go into the rules, read all of the commands that we used
13:48 here, and I want you to figure out what we could improve in the process or the
13:52 rules so this issue doesn't happen again." Now, this is a bit of an
13:56 oversimplification. And you can see I used the speech-to-text tool to input
14:00 that here. But you get the general idea. You have it do more of a self-reflection,
14:04 thinking about how the execution compared to the plan. How did it compare
14:08 to our rules and the process we laid out? What are the discrepancies there?
14:12 Things that we can address so these bugs don't come up again. And so I am very
14:17 loosely defining the strategy here because if anything, this is just more
14:21 of a mindset that I want you to adopt. Don't just fix the bug. Fix the system
14:25 that allowed the bug. And that will take you so far because your coding agent
14:29 just gets more powerful and more reliable over time. So there you go.
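In keeping with the commandify-everything idea, that system-evolution prompt can itself be saved as a command. A hypothetical sketch, where the filename and wording are my own and `$ARGUMENTS` is Claude Code's placeholder for whatever you type after the command:

```markdown
<!-- .claude/commands/improve-system.md (hypothetical) -->
I just finished a feature and found this issue: $ARGUMENTS

1. Re-read the global rules and every command used in this cycle.
2. Compare the execution against the plan: where did they diverge?
3. Propose the smallest fix to the system (a new one-liner rule, an updated
   command, or a new reference doc) so this issue cannot recur.
```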
14:34 Those are all of my favorite techniques used by all of the top agentic
14:38 engineers. And like I said earlier, in the description, I will link to the repo
14:41 with all the commands, the Dynamis Agentic coding course, and this diagram
14:45 if you want to download it for yourself. And so if you appreciated this video and
14:49 you're looking forward to more things on Agentic Engineering, I would really
14:53 appreciate a like and a subscribe. And
$

The 5 Techniques Separating Top Agentic Engineers Right Now

@ColeMedin 14:56 6 chapters
[developer tools and coding][productivity and workflows][AI agents and automation][hardware setup and infrastructure][marketing and growth hacking]
// chapters
// description

You're probably leaving most of the potential of AI coding assistants on the table. Engineers who are actually shipping production code at insane speeds? They're playing a completely different game. After studying the workflows of developers who are genuinely 10xing their output, I've identified 5 meta-skills that separate the top 1% from everyone else. It has nothing to do with the tools, it's all about the process and workflows. In this video, I'll break down each skill: starting every proje

now: 0:00