0:01 Last week, I was looking for some software to solve a very specific
0:05 problem. Lo and behold, I found this great app for 10 bucks. But as a vibe
0:10 engineer, I didn't pay for it. Instead, I spent 3 days, $500 in Claude credits,
0:14 and missed my kids' baseball game over the weekend to build a crappier version
0:18 from scratch. Developers are living in weird times as some of us are becoming
0:22 less productive and ditching AI altogether. Like the streamer Coding
0:25 Garden, who just completely destroyed the grift in this video. >> And it's not fun. It's not fun at all.
0:30 But many others are going all in and seeing productivity gains like never
0:34 before. Every employee at multi-trillion-dollar company Nvidia is now AI-enabled.
0:40 >> Every one of our engineers 100% is now assisted by AI coders and our
0:44 productivity has gone up incredibly. >> The truth is that AI does suck. It's
0:48 like gambling. When you prompt it and the code actually works, you feel that
0:52 indescribable rush of dopamine. >> I'm coming day and night. I mean, it's
0:56 terrific. But eventually your prompt won't work and this leads to a vicious
1:00 cycle that I like to call the prompt treadmill of hell where you keep burning
1:03 through credits but never actually get what you need. Despite that drawback,
1:07 there are ways to make AI coding more reliable and quasi-deterministic thanks
1:12 to the power of Model Context Protocol servers. In today's video, we'll look at
1:17 seven MCP servers every dev needs to know about to make AI coding suck less.
1:22 It is October 14th, 2025, and you're watching the code report. I think it's
1:25 safe to say that the majority of programmers are now using AI in some
1:30 capacity, but very few are using it to its full potential. If you don't already
1:33 have a couple of MCP servers hooked up to Claude Code, Cursor, OpenCode, or
1:37 whatever you prefer, you're falling behind and you're not going to make it.
1:41 But what even is a Model Context Protocol server? Well, in one sentence,
1:45 it's a standardized way for your coding agent to talk to external systems. That
1:49 could be an app running on your local machine. It could be a remote server
1:53 that runs your code. Or it could be a third-party API. To understand its full
1:57 potential, let's look at some examples. On this channel, I've complained
2:00 endlessly about AI not being able to generate proper Svelte 5 code. Well,
2:04 luckily, just a few days ago, that problem was solved thanks to the
2:08 release of the Svelte MCP server. All you have to do is install this bad boy in
2:12 your favorite coding tool, like I'm using Claude Code here. And now, instead
2:15 of going in blind, you can start a prompt with /svelte. This will
2:19 automatically tell Claude how to get the right Svelte documentation. And more
2:23 importantly, use the Svelte autofixer. It'll perform a static analysis on the
2:28 code and de-slopify it when the LLM hallucinates random ReactJS code in your
2:32 project. But if you're a front-end developer, the most time-consuming task
2:36 is implementing your designer's Figma files into actual code. Well, not
2:40 anymore. The Figma MCP server allows you to connect your local Figma app on the
2:45 desktop or in the cloud. It'll pull a design file and automatically implement
2:49 it in HTML and CSS. You can even generate React components, use Tailwind,
2:54 or build iOS UI elements using Figma's more reliable tooling. Pretty cool. But
2:58 what if you're building something like a payment system with a third-party API?
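Before answering that, it helps to see what the "standardized way to talk to external systems" from earlier actually looks like on the wire. Every MCP exchange is a JSON-RPC 2.0 message sent over a transport like stdio or HTTP. A minimal sketch of a tool-call request, where the tool name and arguments are hypothetical, invented for illustration:

```python
import json

# Every MCP exchange is a JSON-RPC 2.0 message. This is the shape of a
# "tools/call" request an agent sends to a server. The tool name and
# arguments below are hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_docs",  # hypothetical tool exposed by some server
        "arguments": {"library": "svelte", "version": "5"},
    },
}

# Serialize for the transport (stdio or HTTP, depending on the server).
wire = json.dumps(request)
print(wire)
```

The server replies with a matching JSON-RPC response carrying the tool's result, which is why the same agent can drive a docs server, a Figma server, or a payments server without custom glue for each.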
3:02 If this is something you really can't afford to screw up, well, Stripe and
3:06 many other APIs now have an MCP server that can fetch documentation for the
3:11 exact API version you're using. Not only that, but it has a long list of tools
3:15 you can use to access live data in Stripe, opening the door to all sorts of
3:19 possibilities, like accidentally refunding 10,000 customers with a single
3:22 prompt. Now, even if you leverage these tools, your code is still probably going
3:26 to break in unexpected ways at runtime. If you use a monitoring tool like
3:30 Sentry, you can access all of the issues and errors your AI assistant missed
3:34 before deploying. Instead of trying to understand the slop you rolled out a few
3:37 minutes ago, you just tell it to query the Sentry issues and fix them on the
3:41 fly. That's helpful, but not all runtime issues can be caught. At other times, it
3:45 might be the QA guy who worked at Blizzard for 7 years who assigns you a
3:49 Jira ticket to fix some pointless edge case he found, in which case you'll want
3:53 to have the Atlassian or GitHub MCP server installed. You can use it to
3:57 automatically pull issues and tickets without ever needing to read them
4:00 yourself. Just run a prompt that tells the AI to fix the issue and close the
4:04 ticket while you sit on the train reading your favorite book. That's
4:07 awesome. And if you follow this guide, you'll eventually have a billion-dollar
4:10 app that requires more infrastructure to scale. There are now MCP servers for
4:15 AWS, Cloudflare, Vercel, and many others that can let AI provision the
4:19 actual resources you use in the cloud. In theory, the great thing about robots
4:23 is that they won't forget to shut down an EC2 instance that will destroy your
4:27 finances, but don't quote me on that. So far, all the examples we've looked at
4:30 require the trust of a third party developer. So, you have to trust us.
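That trust is easier to reason about once you realize an MCP server is, at bottom, just a small program answering those standardized JSON-RPC messages. A toy sketch of the core loop over stdio, with a single made-up "shout" tool; a real server would use an official MCP SDK rather than hand-rolling this:

```python
import json
import sys

# Toy MCP-style server: reads one JSON-RPC request per line on stdin and
# writes responses to stdout. Real servers should use an official MCP SDK;
# this hand-rolled loop only illustrates the idea. The "shout" tool is
# made up for this sketch.
TOOLS = {"shout": lambda args: args.get("text", "").upper()}

def handle(request: dict) -> dict:
    """Dispatch a single JSON-RPC request to a tool and wrap the result."""
    if request.get("method") == "tools/call":
        name = request["params"]["name"]
        args = request["params"].get("arguments", {})
        result = {"content": [{"type": "text", "text": TOOLS[name](args)}]}
        return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}
    return {"jsonrpc": "2.0", "id": request.get("id"),
            "error": {"code": -32601, "message": "method not found"}}

def serve(stdin=sys.stdin, stdout=sys.stdout):
    for line in stdin:  # one JSON-RPC message per line
        if line.strip():
            stdout.write(json.dumps(handle(json.loads(line))) + "\n")
            stdout.flush()

if __name__ == "__main__":
    serve()
```

Pipe a `tools/call` request line into this script and you get the JSON-RPC response back, which is all a coding agent needs to discover and invoke your tool.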
4:37 Just relax and fall. One, two, three. No. >> What's cool about this protocol, though,
4:42 is that it's essentially standardized now. And you can use it to build your
4:45 own highly specialized server. Like maybe you want a server that can look up
4:49 custom data sources, manage your smart home, or really anything you can
4:52 imagine. And they're easy to build because there's now an MCP framework for
4:56 every major programming language out there. But after you build one, you'll
4:59 need a place to deploy it. And an awesome platform you need to know about
5:03 is Sevalla, the sponsor of today's video. It's a modern successor to Heroku that
5:07 combines Google Kubernetes Engine with Cloudflare to give you a simple way to
5:11 deploy full stack applications, databases, and static sites without
5:15 drowning in YAML configs. We can easily ship our app by connecting a Git repo in
5:20 the Sevalla dashboard or by selecting one of their pre-built templates. Once it's
5:24 live, you get app analytics, environment variables, and everything else you need
5:28 to scale your app. You also get real environment pipelines for preview,
5:32 staging, and production, so your team can promote changes safely instead of
5:36 just praying to the prod gods. So try out Sevalla for free with $50 in free
5:40 credits using the link below. This has been the Code Report. Thanks for