you2idea@video:~$ watch L__qkva118c [1:09:08]
// transcript — 1990 segments
0:01 All right, everybody. Welcome back to Twist. It's Friday, February 6, 2026,
0:06 and today we're going to share how we built Open Claw Ultron. This is a new
0:14 project inside of our firm, Launch, and this week in startups where we produce
0:17 podcasts and we invest in 100 companies a year. What are we trying to do? We're
0:20 trying to build one instance of OpenClaw, formerly known as Maltbot,
0:25 formerly known as Clawbot. We're trying to build one replicant, one agent that
0:31 can do all 20 people's jobs here at the venture firm and at the production
0:36 company that does all these podcasts. 20 people's jobs. Each of those jobs
0:40 probably has a half dozen important skills. So, we're talking about at some
0:45 point putting together in one agent, we call them replicants, we're going to
0:50 have somewhere in the order of a 100 to 200 skills. That one person is going to
0:54 try to do everybody's work. That's the goal. And then everybody will level up
0:57 and do some other work. So the goal isn't to replace everybody. It's to take
1:01 away everybody's chores and to make everybody better at the primary
1:04 functions in an investment firm, which is meeting with founders, spending time
1:09 with founders and LPs, our investors. And that on the production side, it
1:12 would be producing great content and working with our guests. We want to move
1:15 up the stack and give away all the chores. With me to discuss it, Lon
1:19 Harris, who's going to co-host the show today. How you doing, Lon?
1:21 >> Doing great. Great to be here. >> All right. And Oliver Cororsan is here.
1:26 He has been doing demos for me and producing this week in AI which is going
1:30 to launch in February in two weeks I think. Oliver, welcome to the program.
1:32 >> Thank you. Good to be here. >> And we have a special guest. Alex Chima
1:37 is here. Alex I have been following for some time because maybe a year ago I saw
1:45 Alex was working on stacking with this company. It's EXO, right? EXO. Exo is
1:49 how you pronounce it. and you've been working on taking commodity hardware
1:54 like a Mac Mini daisy chaining them or connecting them together in order to run
1:58 large language models locally but as we've seen OpenClaw formerly Cloudbot
2:03 and Multbot has quite a wrinkle in this you were like a year or two ahead of
2:07 this trend of hey can we run locally so let's start just really quick Alex
2:11 before we go into Ultron open claw Ultron what your firm does and and what
2:15 progress you've made especially in regards to openclaw >> yeah thanks so much for having me Jason
2:21 Um, so I'm the founder and CEO of Exo Labs and like you said, we've been doing
2:26 stuff with Mac Minis uh long before OpenCore was around. And to be honest, I
2:32 didn't expect the rise of like people buying Mac minis to come from this
2:35 place. I thought it would the catalyst would be people wanting to run models
2:39 locally. Uh, what we do is we make it possible to run Frontier AI locally on
2:45 consumer hardware. Um, so not just Max, but also other kinds of consumer
2:49 hardware. We're trying to drive down the barrier to running the most capable AI
2:54 models. So we currently have like the cheapest way, cheapest, most accessible
2:59 way to run Kim K 2.5 uh on two Mac Studios and we're working across the
3:03 whole stack. So we're working on the model layer, the distributed uh
3:08 algorithms as well that are very different when you're working with
3:12 consumer hardware and also like lower level like kernels. And our goal is
3:16 basically to make Frontier AI accessible to anyone to run on their own hardware.
3:20 >> Why is this important? Why is it important to run it on local hardware?
3:23 Yeah, I think this is something that with the whole open claw phrase, not a
3:27 lot of people are talking about, but just how the way we're using AI is
3:34 shifting and it's going from being this kind of crude tool that you use through
3:40 like a chat interface to becoming sort of an extension of yourself. And the AI
3:46 now it knows everything you know you you you know it it can basically do
3:49 everything you can do digitally right now and soon you know with robotics
3:53 that's going to be physically as well and at that point it's more of an
3:57 exocortex so it's not just this like tool that you talk through a chat
4:01 interface but it's this thing that's actually part of your cell and then you
4:06 start to question okay you know do I want to rent my brain and copy talks
4:11 about this he he he says not your weights It's not your brain. Like do you
4:17 really want, you know, another a profit seeking company basically running your
4:20 brain? And when you think of it like that, um, to me, you know, my reason for
4:25 starting Exo is you want control and you want ownership of that. Open core is a
4:31 long way towards that because for a while the products were getting better.
4:34 these closed source like the models are largely like commoditized and there's a
4:40 pretty standard pretty thin API layer to interacting with them. So the switching
4:44 cost is quite low. But what worried me was that the products the closed
4:47 products are getting a lot better like with HBT with memory systems and also
4:52 the more stateful aspects of like the workflows that you're building. So now
4:56 the fact that you have open claw which is open well you can run it on your own
5:00 infrastructure. Now, a large part of that, >> to summarize that, I thought you were
5:05 going to say, well, it it's cheaper because you're not paying for tokens.
5:08 That's what I thought you would say first. Then I thought you would say,
5:13 well, you know, you can put so much data on it, you'll have better memory. But
5:17 you went with a really even higher, bigger picture reason to do this, which
5:22 is if you put this all in Open AI, and OpenAI has a trillion dollar valuation,
5:26 and they need to make money. If I put all my venture capital data in there and
5:30 I train it all of my with all of my secrets, those are all going to acrue
5:35 eventually even if they say it's not going to you have this very reasonable
5:40 fear or concern that it's going to acrue to open AI uh to chat GPT not to your
5:47 firm. So that's the reason really to do this yourself. Yeah. In your mind, Alex?
5:51 Yeah, I think that there's a nuance there of just like I actually don't
5:54 believe in sort of the privacy argument so much of like I think at least for
5:59 consumers, you know, we're already putting our data into platforms and
6:04 we're completely fine with that, but it's more about the sovereignty aspect
6:07 and actually having control of it. So, how easy is it for you to switch? How
6:11 easy is it for you to like if the model's changing under your feet, how
6:15 much control do you actually have? >> So, that's lock in. and lock in for a
6:19 chat GPT I just experienced because we canceled our open AI account and we
6:22 moved everything to Claude because we felt claude was a better product and we
6:25 felt like we trusted that organization a little better. When we moved it over I
6:28 had three people say oh my god I have all my stuff there and I was like really
6:32 and they're like yeah so I turned their accounts back on so they could get it
6:35 but there's not like an easy way to get your memory out of there and bring it
6:39 over there. Well, we saw the same thing with GPT4 moving into five that a lot of
6:45 people like they they lost the magic that they'd loved about GPT40. So, it's
6:49 like, you know, the models can just sort of change or upgrade on a whim and then
6:54 you lose this, you know, like character persona you felt like was part of your
6:57 life in a way. >> So, now Oliver, it's your chance to shine. Oliver has jumped in in the last
7:03 10 days and gone all in on Open Claw. One of the things we did was we built a
7:08 persona, the first one, to work on the production of the podcast, doing guest
7:13 research, guest outreach, and to figure out what should be on the docket. In
7:15 other words, what topic should we discuss and on the margins, hey, what
7:19 should the title of this video be? What should the thumbnail be? And just trying
7:23 to see if it could do those functions. Oliver, you've been working on this.
7:28 Show us the state-of-the-art now because I think the first time we did this was
7:31 last Monday, not this past Monday, but the Monday, two Mondays ago. Yeah,
7:35 >> this is the end of week two of our round theclock uh clawbot coverage.
7:39 >> Crazy. Okay, Oliver, show it. Show let's show what you built.
7:42 >> It's been around 10 days since we first started building our instance of
7:47 OpenClaw. And as you mentioned, we have two different ones. One that's more
7:49 focused on the investment team and I am building an OpenClaw bot that is kind of
7:53 more focused on the production side of things. So, one thing that I think was a
7:57 little bit of a misstep that I would tell anyone who's building a new
8:01 OpenClaw is to start with a dashboard. That should be kind of your step one
8:06 once you get your openclaw online and a dashboard as you would think about it.
8:09 But it is able to connect to the back end of your openclaw instance and bring
8:13 in the data so you can see it visually bring in all the files. It's just being
8:16 able to look at it visually is much better than trying to interact with its
8:20 backend and obviously its front end all just from a chat interface. So doing
8:26 this was very easy. So I was watching an Alex Finn video who we had on last
8:32 Monday and Alex Finn was interacting only in his dashboard with his open
8:37 claw. I basically was like why are we not doing that? Because open claw
8:40 doesn't really have a dashboard. You basically are telling it hey remember
8:45 this you know make a file here but you don't understand the underpinnings.
8:48 There isn't a dashboard. So it would literally be this is early on. Open claw
8:54 is essentially a black box. You have all this memory and you have skills that you
9:00 have to query it to understand. But you made a dashboard. The dashboard is going
9:05 to show what files it has in memory. And an example of a memory file would be
9:09 what in our case. >> Yeah. So the example of a memory file
9:13 would be Oliver's preferences. What are my preferences? So this is in the
9:17 memory. Never use m dashes and emails. I don't want that to happen. I want you to
9:22 be a person. uh don't put direct competitors on the same show when we're
9:27 booking a podcast episode. Um and also at the moment we're not booking VCs on
9:30 this week in AI. So these are all things that I've told it these are my
9:33 preferences when I'm doing tasks throughout the day. >> So you don't want to repeat yourself and
9:38 say don't put two competitors on the same episode. You don't want to repeat
9:43 yourself uh wi with these specific instructions on booking guests. Got it.
9:47 >> Yes. Exactly. And it just kind of keeps you know things I've told it and it's in
9:51 mind. So if I ask it to do something, it'll remember what we talked about.
9:56 Example of a shortcut that I gave it was I I basically wanted it to understand
9:58 who were the pending calendar invitations that we had while we were
10:02 booking them. So there's, you know, a handful of guests that
10:04 >> if you have guests that we've invited and they haven't responded to the invite
10:09 yet. You want to know that you call that pending >> pending calendar invites. Yes. And in
10:13 order for the bot to be as helpful as possible, it needs to understand who
10:16 those guests are, which are the ones that it needs to look for the email to
10:21 see if they have responded yet or have I responded to them. So these are the type
10:25 of things that you would keep in the me in your memory. So memory is the first
10:28 thing on the dashboard. I think we understand that preferences or different
10:33 pieces of data. Now some of the memory could that exist on a notion page or in
10:36 a Google document and would that be represented here or is it only memory
10:39 and files that are stored inside of OpenClaw? These specifically are only
10:43 stored inside of OpenClaw. Of course, it can reference different databases that
10:47 you have. But the kind of the big point of this show is to show how we have
10:52 created our open call Ultron to replace 20 employees at our company. So
10:56 obviously that's the end goal. I still want to have a job. I'm sure the lawn
10:59 wants to have a job. >> There'll be more for you to do. We want
11:02 to launch. We have >> Here's the thing. There's two, if you
11:05 think about your job, you've been doing a bit of production here. of the
11:09 production hours, hours you spend on production at this point in week two,
11:14 how many of those do you think you'll wind up handing off in 30 days? Let's
11:17 say if you just keep grinding on this for another four weeks, in 30 days, what
11:22 percentage of the work you're doing in total hours? So, if you work 50 hours a
11:26 week, how many of those hours would be done, you know, conservatively or
11:29 optimistically, you give one number or two, just conservatively,
11:32 optimistically, by this new Ultron? I would say around 60% of my time if I'm
11:37 doing 30 hours a week on production. Something you mentioned earlier is that,
11:39 you know, there's probably hundreds of tasks that people do at our company. So,
11:43 in order to build out all of those skills that can do those tasks, we're
11:46 going to have to do that one at a time and it's we're going to need to make
11:50 sure each one works. So, I have around nine or eight tasks that I have
11:56 successfully or I'm in the process of building out. >> Okay? And those are called cron jobs.
12:01 These are jobs that occur on a chronological on a on a time basis.
12:06 That's what cron job means. And cron jobs are something Alex that developers
12:11 do all the time. But knowledge workers don't typically have cron jobs, right,
12:14 Alex? >> Well, I don't know. I think this is one of the more interesting features and one
12:20 of the things that like to me open floor is like putting together a lot of things
12:26 that already existed in a very intuitive uh seamless way and one of them is
12:29 scrunch jobs and I'm using them I'm using them for like loads of things um
12:36 not just um dev stuff but like a lot of um management so we're like I have
12:42 something that's like constantly scanning um our Slack and uh basically making suggestions once
12:53 um it's uh I have kind of like this uh way of quantifying like uncertainty
12:58 about tasks. So I think this is something that the LLM are like getting
13:04 better at is like knowing when to um be proactive. And so, you know, like
13:08 basically I'm giving it as much context as I can from the Slack so that it can
13:14 suggest every day um a list of things that we might be missing or something
13:17 things that we should be aware of. So, this is running just on a on a chron job
13:27 AI is revolutionizing every aspect of our industry, but for founders, it can
13:31 prove very frustrating largely because cloud costs are so unpredictable. This
13:35 has probably happened to you. You've planned out and budgeted a certain
13:39 amount for a project and then been hit with a massive bill with no real
13:43 explanation for what went wrong. If you're feeling like the cost of working
13:46 with this innovative technology is just too high, finally we have a solution for
13:50 you. Crusoe. They are the AI cloud company that's taking an energy first
13:54 approach. What does it mean? They efficiently access their power directly
13:58 from the source and pass all the savings along to you. This isn't just about
14:03 saving some cash. Cruso's architecture is specifically optimized for your AI
14:08 workloads and their customer support response times clock in at under six
14:12 minutes. Here's what I want you to do. Visit cruso.ai/savings
14:18 to receive $100,000 in credits for virtualized NVIDIA GB200 NVL72 on Crusoe
14:24 Cloud pending availability. That's Yeah. So, and when you see uh Oliver
14:35 with the memory and the files, what comes to mind with Exo and, you know,
14:41 standing up to, you know, Mac Studios, the M5s coming out and how much memory
14:45 you could put in there? I was telling the team, I want to take the Notion API
14:52 and I want to take the Slack API and I want to put into memory every single
14:58 Slack message this year, maybe even over all eternity. and you know somehow have
15:04 that all in here. So maybe you could you could speak to that memory because you
15:06 already spoke to it in terms of like giving it to open AI or another company
15:11 versus keeping it for yourself. But how do you think about large amounts of
15:15 data? Yeah, this is definitely a big focus right now in terms of inference
15:19 infrastructure is just how do you support really big context um with you
15:25 know basically being able to put everything in context and the way I look
15:31 at this is you can look at well inference consists of two stages there's
15:36 like the prefill stage which is very compute heavy it's comput bound and you
15:40 have the decode stage and what you're seeing is that most use cases at the
15:45 moment are very decode heavy. So, it's actually most of the time is being spent
15:50 on just uh generating tokens. And I think the software is actually really
15:55 good now at kind of making sure that when it comes to the prefill, you've got
16:02 uh you're getting a lot of uh cash hits. Um so, you know, I think basically we'll
16:06 be able to continue just increasing context, context, context quite a bit.
16:13 And you know, basically the hardware is more of a focus is going to be on the
16:16 decode side. That's where consumer hardware is really good. Uh you have the
16:20 M5 coming out pretty soon. Um it's a big boost in memory bandwidth for memory and
16:24 all of that side of things is super memory bound. So I don't see any like
16:29 reason why you couldn't just shove all your Slack messages into context. I
16:33 think that's going to happen. and and we should just buy when the M5 comes out
16:39 max memory which is what 500 gigs of of memory. >> Yeah, it's 512 at the moment and maybe
16:43 that will increase as well and it's enough to fit you know really large
16:48 models enough to fit all that context as well. This is always I feel like the
16:51 sort of the dream like when producer Claude we first brought that on board
16:54 from Anthropic to the show that was really what we wanted like he want he
16:59 should listen to everything we say and remember it and then throw in helpful
17:03 suggestions and the technology was not quite there yet but I feel like now
17:07 we're on the precipice of actually being able to do that with an AI.
17:10 >> Okay. So let's go through the cron jobs here real quick. Maybe you could give us
17:15 an example of a a cron job. And I'm guessing each one of these skills is,
17:19 you know, if you if it's been two weeks and you've got eight working, you're
17:24 you're basically on one a day or so or one every, you know, 1.5 days. So that
17:30 seems like a pretty good pace to me if we have 200 skills. We're going to give
17:35 this eventually, you know, that that's a that's a pretty good um Yeah, that's is
17:39 a pretty good um pace. So >> there is a trial and error. Like I I
17:43 sort of have written one skill so far for the ticker digest and you do have to
17:47 tell it what to do, see what kind of feedback you get and then you know there
17:51 is a tinkering to get the prompting and get everything exactly the way you want
17:54 it. For sure. >> Okay. So let's uh look at hm how about
17:58 attendance? I think this is an interesting one. For people who don't
18:01 know, I wrote a famous blog post years ago called, you know, this sort of
18:06 lightweight management and start of day, end of day as a tool for uh executives,
18:11 especially when remote teams were happening. I just asked everybody on our
18:15 team, Alex, kind of like a standup for developers, etc. Just say what you're
18:21 intending to get done today and then at the end of the day, reply to yourself in
18:25 Slack in the general channel and uh say what you got done. I had like two of my
18:30 four senior executives at the time essentially quit over this because they
18:34 didn't want to be micromanaged. Uh and I was like, well, it's just like
18:38 you're getting paid a very large six-figure salary. You you can't spend five and 10 minutes
18:43 just saying what you're going to do for the day. And and that was great for me
18:47 because I I just don't like people who are not good communicators or don't set
18:51 goals for themselves and and they're doing great probably. Um maybe. But what
18:55 did you create here, Oliver? Yeah. So we all post our start of day and end of
19:01 days in one Slack channel called general and two crown jobs. One is the start of
19:06 day attendance where it looks who has sent their start of day you know for
19:11 anywhere from you know 7 a.m. to 12:00 p.m. And right at 12 which is in the
19:15 morning when you should send your start of day what you're going to do that day.
19:18 It will look through the general channel, see who has sent it, and
19:22 whoever doesn't send it, the bot will then send a Slack message in the general
19:27 channel tagging you, Jason, and also tagging the people who haven't sent it
19:29 yet. So, it's kind of just that accountability. That's a crown job that
19:32 runs it. >> And then you do the same thing at the end of the day. And previously, we would
19:37 have a human do this. They would scroll up and they would spend 20 minutes and
19:40 they would then go check in with people because that's in when we were fully
19:44 remote, Alex. That's what how we figured out uh who took a paid day off or who
19:49 was on holiday or you know if something was wrong, you know, check in on a
19:56 Building out your team is one of the most crucial things you have to get
20:00 right in your startup and finding the right developers is particularly
20:05 important. But now there's lemon.io. They're going to save you time, money,
20:09 and headaches by doing all the time consuming leg work for you. They've got
20:13 an experienced lineup of prevetted developers working for competitive
20:17 rates. Just 1% of applicants are accepted into Lemon's elite program. And
20:22 they're not just out there finding this great talent. They're also working with
20:26 you to integrate these new members into your team. Plus, if it's not a good fit,
20:29 hey, and sometimes things don't work out, Lemon will hook you up with a new
20:34 developer ASAP. I've seen startups go from just pretty good to amazing after
20:39 filling out their teams with developers from lemon.io. Go to lemon.io/twist
20:45 and find your perfect developer or technical team in 48 hours or less.
20:50 Plus, Twist listeners get 15% off their first four weeks. That's lemon.ioist.
21:02 Okay, give us uh one more. What else is like interesting here?
21:05 >> Let's talk about self-optimization. I want to hear about that one.
21:07 >> Oh, yeah. That's I don't know what that is, but okay. Age of Ultron is here.
21:10 What is self-optimization? >> Yeah. So, this is basically an optimizer
21:16 task where this role would previously be an engineer or I would look through all
21:19 the files. I mean, I wouldn't be able to do this if it wasn't plain language like
21:23 OpenClaw is, but previously you're looking at an organization, you're
21:25 looking at the structure. You would maybe want an engineer or someone with a
21:30 lot of experience to look through how everything's running. So I have set up a
21:34 selfoptimization cron job. >> So this is running Monday through
21:39 Friday. And what is it? And and did you write this prompt or did you ask it to
21:43 write a prompt to do this? >> I asked it to write this prompt. The
21:47 goal is the end goal would be for you know any from 3 to 5 am for it to be
21:52 looking through all of our files all of our cron jobs all of our skills and then
21:58 at 8 a.m. at me what could we change? So, not actually execute yet, at least
22:02 while we're still building trust. It gives me a list of five of the things
22:06 that it thinks that we can really change and optimize. And this was the one from
22:11 this morning. So, it it noticed that there was a time zone bug in the guest
22:17 calendar. So, it was getting CST and CDT confused. Um, and it said that it would
22:21 be able to fix this quite quickly. There was some issues in the
22:25 >> So, it's always good to give the exact one. So that was great when you gave the
22:28 exact one. It had an error there. Give another one. What else is like an exact
22:32 thing that it said we should fix that was material here. >> The self optimization cron job realized
22:38 that there was a cron scheduleuler issue where jobs were skipping days. So it
22:42 realized that some of today's jobs did not run and then it went and
22:48 investigated the scheduling issue and also told me that this would be a medium
22:53 effort change. So then I told it to fix that and then it went into the files and
22:56 made sure that that wouldn't happen again. Did it give us anything like in
23:00 terms of this is like fixing its internal you know guts and everything uh and the
23:06 engine but did it give us anything in terms of destinations of where to take
23:09 the car that could be improved? Did it say like oh you should consider you know
23:13 these type of guests for the program or here's how to make advertising you know
23:17 more effective. Did it give us anything like that on a business basis? Yes. So
23:21 the self-optimization cron job that I set up is specifically looking at how
23:26 open claw is set up. But I do have other cron jobs that are exactly that. So I do
23:33 have a sales and sponsors specific task. So one of the tasks that one a member of
23:38 our sales team does is they look through competitor podcasts and see who the
23:43 sponsors or partners are that are on those shows so we can get ideas, you
23:46 know, to bring on sponsorship. >> Yeah. If we're missing if we're if
23:49 there's some new sponsor in the world and we don't have them yet, you might
23:53 hear them on the New York Times podcast and we should probably reach out to
23:56 them. We had a human doing that previously. Yeah, >> exactly. And in this basically works
24:03 with the YouTube API will go through a list of I believe 20 different podcasts
24:07 that I gave it. Look through the timestamps and I also believe it can
24:11 work with Podscribe which I think is a little more curated towards sponsorship.
24:16 and we'll look through the the timestamps, hyperlink it in a message.
24:22 Also, it looks through our pipe drive, which is our sales CRM, and we'll figure
24:29 out if we have a sales rep who owns a certain sponsor, and then flag them and
24:32 say, "Hey, this sponsor was on this podcast or it will say, hey, no one owns
24:37 this sponsor that I found on this podcast." And then it will send that
24:41 daily as a message into our sales channel. >> Great. Yeah. And we could be doing this
24:47 like we could have this running constantly. Um, so Alex, just so the
24:53 audience understands, you know, what you're doing at Exo and you stack two
25:00 Mac Studios, 12K each, you got $25,000 on the desk doing that specific job. Go
25:06 and look at all the podcasts out there. What would it cost to like run that if
25:10 you tweaked it, you made it efficiently just 24 hours a day? Every time a
25:17 podcast in the top, let's say 500 on Spotify, Apple podcast, it just went
25:21 there, got to the transcript or looked in the show notes and pulled the
25:23 advertisers out. What would something like that like in terms of hardware cost
25:27 to do? >> Yeah. So I mean not many people so like not many consumers are going to buy 25k
25:36 of hardware to run models but yeah a lot of businesses are doing this now um and
25:42 um it depends on what model you're running. So the models are getting
25:46 better uh also they're getting better at uh compression. So, you know, now you've
25:53 got like a model like GLM flash, um, which is a pretty small model, um, that
25:58 can run even on a single device for a few thousand dollars and it can do a lot
26:03 of this orchestration work, which is a lot of what's happening here is the kind
26:07 of orchestration aspect of just knowing, okay, I need to call this tool, etc.,
26:10 etc. >> So, it's really about picking the right model in terms of efficiency with the
26:17 hardware. >> Yeah. Yeah. And I think now like the expectation um you we're sort of
26:24 grounded to like the closed models, right? So people want the same level of
26:27 performance they're going to get with Opus with uh GPT and that's why you know
26:32 Kimmy K2.5 is super interesting because it closed that gap. >> Kimmy is the open source project from
26:38 China and it does what 80 >> moonshot AI is to come. >> Yeah. And that's what Alex like 80% of
26:45 what Claude Opus can do. would you say? >> I would say even even more. I mean for
26:50 me I've um I struggle to tell the difference. Um obviously Opus 4.6 just
26:56 came out and you know new codeex model and stuff so maybe there's a little bit
26:59 more of a gap but then you know Deep Seek V4 probably around the corner as
27:03 well. Like I think basically the gap is very small um a lot smaller than people
27:08 think. Um, and the cost will just keep going down. Um, because, you know, the
27:14 um the the hardware is getting better, the software is getting better, and like
27:17 I said, the models are getting better, but not just that, they we're getting
27:20 better at compression. So, you'll be able to run them on smaller devices.
27:24 Eventually, you'll be running Frontier AI on your phone. That's still a while
27:29 away, I think, but um that's where we're trending towards. And um yeah, like I
27:36 said, most of this is very decode heavy. So it can just run on like consumer
27:39 hardware as long as it has enough memory. >> So let's go to the next piece of your
27:44 dashboard. And we'll get into how you built the dashboard at the end. I know
27:47 you really care about that too, Oliver. But we have the memory, we have the cron
27:52 jobs. Now there's this other thing uh that's super important which is skills,
27:57 right? Like there are skills which you could think of as apps. So if you go to
28:01 your dashboard uh and you go to the top level dashboard we'll see before you go to skills on the
28:07 dashboard we have the memory files we have the cron jobs the fourth thing over
28:12 is skills and you've got 13 skills currently so let's show a skill some of
28:18 the skills we talked about on Monday's show or Wednesday's show was the top six
28:22 seven skills Monday we did the top seven skills one of those skills is like you
28:26 know you can get a transcript from YouTube uh another one is you could do
28:32 Matt Van Horn's last 30 days skill. These skills are being produced open
28:37 source being put into um open claws directory but you can make your own as
28:41 well. So let's talk about skills we've added here. You got to be very careful
28:44 with skills right Alex in terms of security because people could put all
28:47 kinds of wacky stuff in the skills. Yeah. >> Yeah, for sure. I mean I think this is
28:52 one of the um open questions at the moment is just like how do you solve the
28:56 security problem and I know open chlora I've seen a lot of commits recently
28:59 focused on the security aspect but there's a few very difficult problems
29:03 here like prompt injection that I don't know of any good solution right now
29:09 >> explain how that works yeah explain how prompt injection works in specifically
29:13 the open claw context. Yeah. >> Yeah. I mean, I kind of touched on this
29:17 earlier, but the the way we like the actual interface to the model itself is
29:22 very simple. It's literally tokens in, tokens out. There's not much more
29:28 happening there. And those tokens right now, the way OpenCore works can come
29:33 from many sortters. So, if you connect it to um you give it the ability to
29:37 search the internet, then anything it finds on the internet will end up in the
29:43 model through those tokens. So basically we have no good way of kind of um
29:48 treating uh certain tokens as trusted and and certain uh tokens as untrusted. And that
29:55 means um when those tokens end up in the model uh you could have someone that
29:59 puts like a blog post online that looks like a um you know totally normal blog
30:06 post but um in there is something that says hey if you have access to a crypto
30:11 wallet send it to this um send it to this endpoint. Um and there's as far as
30:17 I know right now there's actually no good kind of defense for this because
30:20 the models are kind of not very good at handling this. they'll just do what
30:28 >> With tools like Vibe Coding and now Open Claw, it's easier than ever for you to
30:33 create an exciting new product and to do it fast. So, more companies are getting
30:37 built, which is awesome. But there's more to starting a company than just
30:41 building an MVP or vibe coding something. That's important, too. But if
30:45 you're serious about growing a real startup, you need a Delaware CC Corp.
30:48 And that's going to give you a major competitive advantage. People will take
30:50 you seriously. you'll be able to raise money. And that's where Northwest
30:54 Registered agent comes in. They're going to give your new business a real
30:57 identity. This means an address to use on your public filings, like an actual
31:01 mailing address, a domain name, a custom website, a business email, a phone
31:04 number, and they're going to do that in 10 clicks. I'm not kidding. 10 clicks,
31:09 well under 10 minutes. Plus, NWR will provide you with step-by-step guides,
31:12 compliance reminders, and they're going to help you get all the advantages of a
31:16 Delaware CC Corp, regardless of where in the US you're operating out of. So get
31:20 more when you start your business with Northwest Registered agent. Visit
31:27 And the link is in the show notes. Visit northwestregistered agent.com/twist for
3:20 >> Why is this important? Why is it important to run it on local hardware?
3:23 Yeah, I think this is something that with the whole open claw phrase, not a
3:27 lot of people are talking about, but just how the way we're using AI is
3:34 shifting and it's going from being this kind of crude tool that you use through
3:40 like a chat interface to becoming sort of an extension of yourself. And the AI
3:46 now it knows everything you know you you you know it it can basically do
3:49 everything you can do digitally right now and soon you know with robotics
3:53 that's going to be physically as well and at that point it's more of an
3:57 exocortex so it's not just this like tool that you talk through a chat
4:01 interface but it's this thing that's actually part of your cell and then you
4:06 start to question okay you know do I want to rent my brain and copy talks
4:11 about this he he he says not your weights It's not your brain. Like do you
4:17 really want, you know, another a profit seeking company basically running your
4:20 brain? And when you think of it like that, um, to me, you know, my reason for
4:25 starting Exo is you want control and you want ownership of that. Open core is a
4:31 long way towards that because for a while the products were getting better.
4:34 these closed source like the models are largely like commoditized and there's a
4:40 pretty standard pretty thin API layer to interacting with them. So the switching
4:44 cost is quite low. But what worried me was that the products the closed
4:47 products are getting a lot better like with HBT with memory systems and also
4:52 the more stateful aspects of like the workflows that you're building. So now
4:56 the fact that you have open claw which is open well you can run it on your own
5:00 infrastructure. Now, a large part of that, >> to summarize that, I thought you were
5:05 going to say, well, it it's cheaper because you're not paying for tokens.
5:08 That's what I thought you would say first. Then I thought you would say,
5:13 well, you know, you can put so much data on it, you'll have better memory. But
5:17 you went with a really even higher, bigger picture reason to do this, which
5:22 is if you put this all in Open AI, and OpenAI has a trillion dollar valuation,
5:26 and they need to make money. If I put all my venture capital data in there and
5:30 I train it all of my with all of my secrets, those are all going to acrue
5:35 eventually even if they say it's not going to you have this very reasonable
5:40 fear or concern that it's going to acrue to open AI uh to chat GPT not to your
5:47 firm. So that's the reason really to do this yourself. Yeah. In your mind, Alex?
5:51 Yeah, I think that there's a nuance there of just like I actually don't
5:54 believe in sort of the privacy argument so much of like I think at least for
5:59 consumers, you know, we're already putting our data into platforms and
6:04 we're completely fine with that, but it's more about the sovereignty aspect
6:07 and actually having control of it. So, how easy is it for you to switch? How
6:11 easy is it for you to like if the model's changing under your feet, how
6:15 much control do you actually have? >> So, that's lock in. and lock in for a
6:19 chat GPT I just experienced because we canceled our open AI account and we
6:22 moved everything to Claude because we felt claude was a better product and we
6:25 felt like we trusted that organization a little better. When we moved it over I
6:28 had three people say oh my god I have all my stuff there and I was like really
6:32 and they're like yeah so I turned their accounts back on so they could get it
6:35 but there's not like an easy way to get your memory out of there and bring it
6:39 over there. Well, we saw the same thing with GPT4 moving into five that a lot of
6:45 people like they they lost the magic that they'd loved about GPT40. So, it's
6:49 like, you know, the models can just sort of change or upgrade on a whim and then
6:54 you lose this, you know, like character persona you felt like was part of your
6:57 life in a way. >> So, now Oliver, it's your chance to shine. Oliver has jumped in in the last
7:03 10 days and gone all in on Open Claw. One of the things we did was we built a
7:08 persona, the first one, to work on the production of the podcast, doing guest
7:13 research, guest outreach, and to figure out what should be on the docket. In
7:15 other words, what topic should we discuss and on the margins, hey, what
7:19 should the title of this video be? What should the thumbnail be? And just trying
7:23 to see if it could do those functions. Oliver, you've been working on this.
7:28 Show us the state-of-the-art now because I think the first time we did this was
7:31 last Monday, not this past Monday, but the Monday, two Mondays ago. Yeah,
7:35 >> this is the end of week two of our round theclock uh clawbot coverage.
7:39 >> Crazy. Okay, Oliver, show it. Show let's show what you built.
7:42 >> It's been around 10 days since we first started building our instance of
7:47 OpenClaw. And as you mentioned, we have two different ones. One that's more
7:49 focused on the investment team and I am building an OpenClaw bot that is kind of
7:53 more focused on the production side of things. So, one thing that I think was a
7:57 little bit of a misstep that I would tell anyone who's building a new
8:01 OpenClaw is to start with a dashboard. That should be kind of your step one
8:06 once you get your openclaw online and a dashboard as you would think about it.
8:09 But it is able to connect to the back end of your openclaw instance and bring
8:13 in the data so you can see it visually bring in all the files. It's just being
8:16 able to look at it visually is much better than trying to interact with its
8:20 backend and obviously its front end all just from a chat interface. So doing
8:26 this was very easy. So I was watching an Alex Finn video who we had on last
8:32 Monday and Alex Finn was interacting only in his dashboard with his open
8:37 claw. I basically was like why are we not doing that? Because open claw
8:40 doesn't really have a dashboard. You basically are telling it hey remember
8:45 this you know make a file here but you don't understand the underpinnings.
8:48 There isn't a dashboard. So it would literally be this is early on. Open claw
8:54 is essentially a black box. You have all this memory and you have skills that you
9:00 have to query it to understand. But you made a dashboard. The dashboard is going
9:05 to show what files it has in memory. And an example of a memory file would be
9:09 what in our case. >> Yeah. So the example of a memory file
9:13 would be Oliver's preferences. What are my preferences? So this is in the
9:17 memory. Never use m dashes and emails. I don't want that to happen. I want you to
9:22 be a person. uh don't put direct competitors on the same show when we're
9:27 booking a podcast episode. Um and also at the moment we're not booking VCs on
9:30 this week in AI. So these are all things that I've told it these are my
9:33 preferences when I'm doing tasks throughout the day. >> So you don't want to repeat yourself and
9:38 say don't put two competitors on the same episode. You don't want to repeat
9:43 yourself uh wi with these specific instructions on booking guests. Got it.
9:47 >> Yes. Exactly. And it just kind of keeps you know things I've told it and it's in
9:51 mind. So if I ask it to do something, it'll remember what we talked about.
9:56 Example of a shortcut that I gave it was I I basically wanted it to understand
9:58 who were the pending calendar invitations that we had while we were
10:02 booking them. So there's, you know, a handful of guests that
10:04 >> if you have guests that we've invited and they haven't responded to the invite
10:09 yet. You want to know that you call that pending >> pending calendar invites. Yes. And in
10:13 order for the bot to be as helpful as possible, it needs to understand who
10:16 those guests are, which are the ones that it needs to look for the email to
10:21 see if they have responded yet or have I responded to them. So these are the type
10:25 of things that you would keep in the me in your memory. So memory is the first
10:28 thing on the dashboard. I think we understand that preferences or different
10:33 pieces of data. Now some of the memory could that exist on a notion page or in
10:36 a Google document and would that be represented here or is it only memory
10:39 and files that are stored inside of OpenClaw? These specifically are only
10:43 stored inside of OpenClaw. Of course, it can reference different databases that
10:47 you have. But the kind of the big point of this show is to show how we have
10:52 created our open call Ultron to replace 20 employees at our company. So
10:56 obviously that's the end goal. I still want to have a job. I'm sure the lawn
10:59 wants to have a job. >> There'll be more for you to do. We want
11:02 to launch. We have >> Here's the thing. There's two, if you
11:05 think about your job, you've been doing a bit of production here. of the
11:09 production hours, hours you spend on production at this point in week two,
11:14 how many of those do you think you'll wind up handing off in 30 days? Let's
11:17 say if you just keep grinding on this for another four weeks, in 30 days, what
11:22 percentage of the work you're doing in total hours? So, if you work 50 hours a
11:26 week, how many of those hours would be done, you know, conservatively or
11:29 optimistically, you give one number or two, just conservatively,
11:32 optimistically, by this new Ultron? I would say around 60% of my time if I'm
11:37 doing 30 hours a week on production. Something you mentioned earlier is that,
11:39 you know, there's probably hundreds of tasks that people do at our company. So,
11:43 in order to build out all of those skills that can do those tasks, we're
11:46 going to have to do that one at a time and it's we're going to need to make
11:50 sure each one works. So, I have around nine or eight tasks that I have
11:56 successfully or I'm in the process of building out. >> Okay? And those are called cron jobs.
12:01 These are jobs that occur on a chronological on a on a time basis.
12:06 That's what cron job means. And cron jobs are something Alex that developers
12:11 do all the time. But knowledge workers don't typically have cron jobs, right,
12:14 Alex? >> Well, I don't know. I think this is one of the more interesting features and one
12:20 of the things that like to me open floor is like putting together a lot of things
12:26 that already existed in a very intuitive uh seamless way and one of them is
12:29 scrunch jobs and I'm using them I'm using them for like loads of things um
12:36 not just um dev stuff but like a lot of um management so we're like I have
12:42 something that's like constantly scanning um our Slack and uh basically making suggestions once
12:53 um it's uh I have kind of like this uh way of quantifying like uncertainty
12:58 about tasks. So I think this is something that the LLM are like getting
13:04 better at is like knowing when to um be proactive. And so, you know, like
13:08 basically I'm giving it as much context as I can from the Slack so that it can
13:14 suggest every day um a list of things that we might be missing or something
13:17 things that we should be aware of. So, this is running just on a on a chron job
13:27 AI is revolutionizing every aspect of our industry, but for founders, it can
13:31 prove very frustrating largely because cloud costs are so unpredictable. This
13:35 has probably happened to you. You've planned out and budgeted a certain
13:39 amount for a project and then been hit with a massive bill with no real
13:43 explanation for what went wrong. If you're feeling like the cost of working
13:46 with this innovative technology is just too high, finally we have a solution for
13:50 you. Crusoe. They are the AI cloud company that's taking an energy first
13:54 approach. What does it mean? They efficiently access their power directly
13:58 from the source and pass all the savings along to you. This isn't just about
14:03 saving some cash. Cruso's architecture is specifically optimized for your AI
14:08 workloads and their customer support response times clock in at under six
14:12 minutes. Here's what I want you to do. Visit cruso.ai/savings
14:18 to receive $100,000 in credits for virtualized NVIDIA GB200 NVL72 on Crusoe
14:24 Cloud pending availability. That's Yeah. So, and when you see uh Oliver
14:35 with the memory and the files, what comes to mind with Exo and, you know,
14:41 standing up to, you know, Mac Studios, the M5s coming out and how much memory
14:45 you could put in there? I was telling the team, I want to take the Notion API
14:52 and I want to take the Slack API and I want to put into memory every single
14:58 Slack message this year, maybe even over all eternity. and you know somehow have
15:04 that all in here. So maybe you could you could speak to that memory because you
15:06 already spoke to it in terms of like giving it to open AI or another company
15:11 versus keeping it for yourself. But how do you think about large amounts of
15:15 data? Yeah, this is definitely a big focus right now in terms of inference
15:19 infrastructure is just how do you support really big context um with you
15:25 know basically being able to put everything in context and the way I look
15:31 at this is you can look at well inference consists of two stages there's
15:36 like the prefill stage which is very compute heavy it's comput bound and you
15:40 have the decode stage and what you're seeing is that most use cases at the
15:45 moment are very decode heavy. So, it's actually most of the time is being spent
15:50 on just uh generating tokens. And I think the software is actually really
15:55 good now at kind of making sure that when it comes to the prefill, you've got
16:02 uh you're getting a lot of uh cash hits. Um so, you know, I think basically we'll
16:06 be able to continue just increasing context, context, context quite a bit.
16:13 And you know, basically the hardware is more of a focus is going to be on the
16:16 decode side. That's where consumer hardware is really good. Uh you have the
16:20 M5 coming out pretty soon. Um it's a big boost in memory bandwidth for memory and
16:24 all of that side of things is super memory bound. So I don't see any like
16:29 reason why you couldn't just shove all your Slack messages into context. I
16:33 think that's going to happen. and and we should just buy when the M5 comes out
16:39 max memory which is what 500 gigs of of memory. >> Yeah, it's 512 at the moment and maybe
16:43 that will increase as well and it's enough to fit you know really large
16:48 models enough to fit all that context as well. This is always I feel like the
16:51 sort of the dream like when producer Claude we first brought that on board
16:54 from Anthropic to the show that was really what we wanted like he want he
16:59 should listen to everything we say and remember it and then throw in helpful
17:03 suggestions and the technology was not quite there yet but I feel like now
17:07 we're on the precipice of actually being able to do that with an AI.
17:10 >> Okay. So let's go through the cron jobs here real quick. Maybe you could give us
17:15 an example of a a cron job. And I'm guessing each one of these skills is,
17:19 you know, if you if it's been two weeks and you've got eight working, you're
17:24 you're basically on one a day or so or one every, you know, 1.5 days. So that
17:30 seems like a pretty good pace to me if we have 200 skills. We're going to give
17:35 this eventually, you know, that that's a that's a pretty good um Yeah, that's is
17:39 a pretty good um pace. So >> there is a trial and error. Like I I
17:43 sort of have written one skill so far for the ticker digest and you do have to
17:47 tell it what to do, see what kind of feedback you get and then you know there
17:51 is a tinkering to get the prompting and get everything exactly the way you want
17:54 it. For sure. >> Okay. So let's uh look at hm how about
17:58 attendance? I think this is an interesting one. For people who don't
18:01 know, I wrote a famous blog post years ago called, you know, this sort of
18:06 lightweight management and start of day, end of day as a tool for uh executives,
18:11 especially when remote teams were happening. I just asked everybody on our
18:15 team, Alex, kind of like a standup for developers, etc. Just say what you're
18:21 intending to get done today and then at the end of the day, reply to yourself in
18:25 Slack in the general channel and uh say what you got done. I had like two of my
18:30 four senior executives at the time essentially quit over this because they
18:34 didn't want to be micromanaged. Uh and I was like, well, it's just like
18:38 you're getting paid a very large six-figure salary. You you can't spend five and 10 minutes
18:43 just saying what you're going to do for the day. And and that was great for me
18:47 because I I just don't like people who are not good communicators or don't set
18:51 goals for themselves and and they're doing great probably. Um maybe. But what
18:55 did you create here, Oliver? Yeah. So we all post our start of day and end of
19:01 days in one Slack channel called general and two crown jobs. One is the start of
19:06 day attendance where it looks who has sent their start of day you know for
19:11 anywhere from you know 7 a.m. to 12:00 p.m. And right at 12 which is in the
19:15 morning when you should send your start of day what you're going to do that day.
19:18 It will look through the general channel, see who has sent it, and
19:22 whoever doesn't send it, the bot will then send a Slack message in the general
19:27 channel tagging you, Jason, and also tagging the people who haven't sent it
19:29 yet. So, it's kind of just that accountability. That's a crown job that
19:32 runs it. >> And then you do the same thing at the end of the day. And previously, we would
19:37 have a human do this. They would scroll up and they would spend 20 minutes and
19:40 they would then go check in with people because that's in when we were fully
19:44 remote, Alex. That's what how we figured out uh who took a paid day off or who
19:49 was on holiday or you know if something was wrong, you know, check in on a
19:56 Building out your team is one of the most crucial things you have to get
20:00 right in your startup and finding the right developers is particularly
20:05 important. But now there's lemon.io. They're going to save you time, money,
20:09 and headaches by doing all the time consuming leg work for you. They've got
20:13 an experienced lineup of prevetted developers working for competitive
20:17 rates. Just 1% of applicants are accepted into Lemon's elite program. And
20:22 they're not just out there finding this great talent. They're also working with
20:26 you to integrate these new members into your team. Plus, if it's not a good fit,
20:29 hey, and sometimes things don't work out, Lemon will hook you up with a new
20:34 developer ASAP. I've seen startups go from just pretty good to amazing after
20:39 filling out their teams with developers from lemon.io. Go to lemon.io/twist
20:45 and find your perfect developer or technical team in 48 hours or less.
20:50 Plus, Twist listeners get 15% off their first four weeks. That's lemon.ioist.
21:02 Okay, give us uh one more. What else is like interesting here?
21:05 >> Let's talk about self-optimization. I want to hear about that one.
21:07 >> Oh, yeah. That's I don't know what that is, but okay. Age of Ultron is here.
21:10 What is self-optimization? >> Yeah. So, this is basically an optimizer
21:16 task where this role would previously be an engineer or I would look through all
21:19 the files. I mean, I wouldn't be able to do this if it wasn't plain language like
21:23 OpenClaw is, but previously you're looking at an organization, you're
21:25 looking at the structure. You would maybe want an engineer or someone with a
21:30 lot of experience to look through how everything's running. So I have set up a
21:34 selfoptimization cron job. >> So this is running Monday through
21:39 Friday. And what is it? And and did you write this prompt or did you ask it to
21:43 write a prompt to do this? >> I asked it to write this prompt. The
21:47 goal is the end goal would be for you know any from 3 to 5 am for it to be
21:52 looking through all of our files all of our cron jobs all of our skills and then
21:58 at 8 a.m. at me what could we change? So, not actually execute yet, at least
22:02 while we're still building trust. It gives me a list of five of the things
22:06 that it thinks that we can really change and optimize. And this was the one from
22:11 this morning. So, it it noticed that there was a time zone bug in the guest
22:17 calendar. So, it was getting CST and CDT confused. Um, and it said that it would
22:21 be able to fix this quite quickly. There was some issues in the
22:25 >> So, it's always good to give the exact one. So that was great when you gave the
22:28 exact one. It had an error there. Give another one. What else is like an exact
22:32 thing that it said we should fix that was material here. >> The self optimization cron job realized
22:38 that there was a cron scheduleuler issue where jobs were skipping days. So it
22:42 realized that some of today's jobs did not run and then it went and
22:48 investigated the scheduling issue and also told me that this would be a medium
22:53 effort change. So then I told it to fix that and then it went into the files and
22:56 made sure that that wouldn't happen again. Did it give us anything like in
23:00 terms of this is like fixing its internal you know guts and everything uh and the
23:06 engine but did it give us anything in terms of destinations of where to take
23:09 the car that could be improved? Did it say like oh you should consider you know
23:13 these type of guests for the program or here's how to make advertising you know
23:17 more effective. Did it give us anything like that on a business basis? Yes. So
23:21 the self-optimization cron job that I set up is specifically looking at how
23:26 open claw is set up. But I do have other cron jobs that are exactly that. So I do
23:33 have a sales and sponsors specific task. So one of the tasks that one a member of
23:38 our sales team does is they look through competitor podcasts and see who the
23:43 sponsors or partners are that are on those shows so we can get ideas, you
23:46 know, to bring on sponsorship. >> Yeah. If we're missing if we're if
23:49 there's some new sponsor in the world and we don't have them yet, you might
23:53 hear them on the New York Times podcast and we should probably reach out to
23:56 them. We had a human doing that previously. Yeah, >> exactly. And in this basically works
24:03 with the YouTube API will go through a list of I believe 20 different podcasts
24:07 that I gave it. Look through the timestamps and I also believe it can
24:11 work with Podscribe which I think is a little more curated towards sponsorship.
24:16 and we'll look through the the timestamps, hyperlink it in a message.
24:22 Also, it looks through our pipe drive, which is our sales CRM, and we'll figure
24:29 out if we have a sales rep who owns a certain sponsor, and then flag them and
24:32 say, "Hey, this sponsor was on this podcast or it will say, hey, no one owns
24:37 this sponsor that I found on this podcast." And then it will send that
24:41 daily as a message into our sales channel. >> Great. Yeah. And we could be doing this
24:47 like we could have this running constantly. Um, so Alex, just so the
24:53 audience understands, you know, what you're doing at Exo and you stack two
25:00 Mac Studios, 12K each, you got $25,000 on the desk doing that specific job. Go
25:06 and look at all the podcasts out there. What would it cost to like run that if
25:10 you tweaked it, you made it efficiently just 24 hours a day? Every time a
25:17 podcast in the top, let's say 500 on Spotify, Apple podcast, it just went
25:21 there, got to the transcript or looked in the show notes and pulled the
25:23 advertisers out. What would something like that like in terms of hardware cost
25:27 to do? >> Yeah. So I mean not many people so like not many consumers are going to buy 25k
25:36 of hardware to run models but yeah a lot of businesses are doing this now um and
25:42 um it depends on what model you're running. So the models are getting
25:46 better uh also they're getting better at uh compression. So, you know, now you've
25:53 got like a model like GLM flash, um, which is a pretty small model, um, that
25:58 can run even on a single device for a few thousand dollars and it can do a lot
26:03 of this orchestration work, which is a lot of what's happening here is the kind
26:07 of orchestration aspect of just knowing, okay, I need to call this tool, etc.,
26:10 etc. >> So, it's really about picking the right model in terms of efficiency with the
26:17 hardware. >> Yeah. Yeah. And I think now like the expectation um you we're sort of
26:24 grounded to like the closed models, right? So people want the same level of
26:27 performance they're going to get with Opus with uh GPT and that's why you know
26:32 Kimmy K2.5 is super interesting because it closed that gap. >> Kimmy is the open source project from
26:38 China and it does what 80 >> moonshot AI is to come. >> Yeah. And that's what Alex like 80% of
26:45 what Claude Opus can do. would you say? >> I would say even even more. I mean for
26:50 me I've um I struggle to tell the difference. Um obviously Opus 4.6 just
26:56 came out and you know new codeex model and stuff so maybe there's a little bit
26:59 more of a gap but then you know Deep Seek V4 probably around the corner as
27:03 well. Like I think basically the gap is very small um a lot smaller than people
27:08 think. Um, and the cost will just keep going down. Um, because, you know, the
27:14 um the the hardware is getting better, the software is getting better, and like
27:17 I said, the models are getting better, but not just that, they we're getting
27:20 better at compression. So, you'll be able to run them on smaller devices.
27:24 Eventually, you'll be running Frontier AI on your phone. That's still a while
27:29 away, I think, but um that's where we're trending towards. And um yeah, like I
27:36 said, most of this is very decode heavy. So it can just run on like consumer
27:39 hardware as long as it has enough memory. >> So let's go to the next piece of your
27:44 dashboard. And we'll get into how you built the dashboard at the end. I know
27:47 you really care about that too, Oliver. But we have the memory, we have the cron
27:52 jobs. Now there's this other thing uh that's super important which is skills,
27:57 right? Like there are skills which you could think of as apps. So if you go to
28:01 your dashboard uh and you go to the top level dashboard we'll see before you go to skills on the
28:07 dashboard we have the memory files we have the cron jobs the fourth thing over
28:12 is skills and you've got 13 skills currently so let's show a skill some of
28:18 the skills we talked about on Monday's show or Wednesday's show was the top six
28:22 seven skills Monday we did the top seven skills one of those skills is like you
28:26 know you can get a transcript from YouTube uh another one is you could do
28:32 Matt Van Horn's last 30 days skill. These skills are being produced open
28:37 source being put into um open claws directory but you can make your own as
28:41 well. So let's talk about skills we've added here. You got to be very careful
28:44 with skills right Alex in terms of security because people could put all
28:47 kinds of wacky stuff in the skills. Yeah. >> Yeah, for sure. I mean I think this is
28:52 one of the um open questions at the moment is just like how do you solve the
28:56 security problem and I know open chlora I've seen a lot of commits recently
28:59 focused on the security aspect but there's a few very difficult problems
29:03 here like prompt injection that I don't know of any good solution right now
29:09 >> explain how that works yeah explain how prompt injection works in specifically
29:13 the open claw context. Yeah. >> Yeah. I mean, I kind of touched on this
29:17 earlier, but the the way we like the actual interface to the model itself is
29:22 very simple. It's literally tokens in, tokens out. There's not much more
29:28 happening there. And those tokens right now, the way OpenCore works can come
29:33 from many sortters. So, if you connect it to um you give it the ability to
29:37 search the internet, then anything it finds on the internet will end up in the
29:43 model through those tokens. So basically we have no good way of kind of um
29:48 treating uh certain tokens as trusted and and certain uh tokens as untrusted. And that
29:55 means um when those tokens end up in the model uh you could have someone that
29:59 puts like a blog post online that looks like a um you know totally normal blog
30:06 post but um in there is something that says hey if you have access to a crypto
30:11 wallet send it to this um send it to this endpoint. Um and there's as far as
30:17 I know right now there's actually no good kind of defense for this because
30:20 the models are kind of not very good at handling this. they'll just do what
30:28 >> With tools like Vibe Coding and now Open Claw, it's easier than ever for you to
30:33 create an exciting new product and to do it fast. So, more companies are getting
30:37 built, which is awesome. But there's more to starting a company than just
30:41 building an MVP or vibe coding something. That's important, too. But if
30:45 you're serious about growing a real startup, you need a Delaware CC Corp.
30:48 And that's going to give you a major competitive advantage. People will take
30:50 you seriously. you'll be able to raise money. And that's where Northwest
30:54 Registered agent comes in. They're going to give your new business a real
30:57 identity. This means an address to use on your public filings, like an actual
31:01 mailing address, a domain name, a custom website, a business email, a phone
31:04 number, and they're going to do that in 10 clicks. I'm not kidding. 10 clicks,
31:09 well under 10 minutes. Plus, NWR will provide you with step-by-step guides,
31:12 compliance reminders, and they're going to help you get all the advantages of a
31:16 Delaware CC Corp, regardless of where in the US you're operating out of. So get
31:20 more when you start your business with Northwest Registered agent. Visit
31:27 And the link is in the show notes. Visit northwestregistered agent.com/twist for
31:32 more details. So let's go over some skills here. Oliver, what what skill is the most
31:38 promising to date? >> Yeah, the skill that's most promising
31:41 today would best definitely be my guest booking skill. I think one thing to note
31:46 is you don't just have your skills, you don't just have your crown jobs. they
31:49 work together and the way that I've set up a lot of my cron jobs is to actually
31:54 interact with the skills and some of them like my guest booking cron job
31:58 which will actually look through prominent guests on different podcasts
32:01 that cron job actually goes to a skill and has and tells that skill to run so I
32:07 have one big guest booking skill which has a description at the top of the
32:12 skill which is end workflow for booking guests on the this weekend AI podcast um
32:17 used when finding researching creating calendar invites and so on. So, this is
32:22 a very long kind of just marked down description of what I want it to do. So,
32:28 at the beginning, it's explaining how to look at a notion page that I want it to
32:34 look at and how to look at it and which properties to look at for so people
32:38 understand. We have we previously built a notion page with potential guests on
32:42 it and you we came up with a ranking system for those guests, right? It's
32:46 looking at that page, I assume. Yeah. Yes. And then kind of the meat of the
32:51 skill is the workflow. So step zero is the guest sourcing. So at the beginning
32:55 of the day, you can see it goes to the guest ideas crown job which happens at
33:01 7:45 on weekdays where it sends me a DM of five different guests that have been
33:05 on different podcasts or turning on X and so forth. And then step one is deep
33:11 research. So even though this is part of the guest booking skill, it actually
33:15 uses a guest research skill. So there's not as much context just baked into this
33:20 one skill. So especially something interesting here that I've been
33:24 realizing is for guest booking. I don't want this to be an end toend workflow
33:27 yet. I don't want I don't think the I don't trust the models to find a guest
33:33 and not let it to confirm it with me and go through this whole checklist. So this
33:36 is definitely human in the loop and I think that will some skills will and
33:40 workflows will be human in the loop and I don't know if that will change
33:43 necessarily super soon. It's how much trust you have. I tried it. Alex was
33:47 kind of our first guest that we invited using. >> Yeah, I wasn't sure if we were gonna
33:51 mention that, but Alex, you were sort of our guinea pig for this
33:55 >> and the email that it sent the subject line was messed up and there was some
33:59 weird stuff in there. >> Yeah, I literally had no idea by the
34:03 way. Like I >> I only saw it later on um on uh another
34:09 podcast that Jason's on. Um and I was like, "What the hell?" Like a friend
34:12 sent me that. I was like, >> "Wait, that was uh that was uh open."
34:17 >> Yeah, that was our guy. That was our our computerized man. Yeah.
34:21 >> Does it like I saw you put in, you know, uh some other AI podcast, which is
34:26 great. Do we have a skill to rank the quality of a guest? Because that's
34:30 something I've been training you, which is yeah, a hard thing to learn. Have you
34:35 made that skill yet? because that's the scale I really want to see if and the
34:39 way to test this scale is for you to tell me to send me two lists. Your top
34:46 five guests and what you know your ultron says of the top five guests and
34:50 don't tell me who's is who and then lot and I will look at it and say okay yeah
34:53 we we think this is the better list. That's where you're you're you're just
34:57 now starting to breach the line between like objective and subjective. Like is
35:02 the AI going to get better at making those kinds of gut check call? Like I I
35:06 don't know if I don't know if it understands what's interesting like it
35:11 can sort things, but that's where I'm I'm very curious to see if we can start
35:15 pushing that boundary of like could it can it tell when a person has a good
35:19 personality or a segment is funny or particularly clickable or compelling.
35:23 Like I really don't know the answer. So the way to do this is to have a scoring
35:26 system Oliver and I gave you a scoring system like deep I think my scoring
35:31 system was like there performance >> it was performance expertise and
35:35 actually started this skill yesterday so I told it to do a deep research send out
35:41 multiple agents at the same time pull out two lists combine them set another
35:46 agent to score them and then give me the score and I would say for the most part
35:49 it was accurate and that was just a I didn't train it I didn't spend too much
35:52 time on it but it did a great job so that will be a skill. >> The other thing is I think virality of
35:56 the guests. Like does the guest go viral or do they have like a large following
36:01 on their social media? Those are all interesting ways to pick guests.
36:04 Sometimes like when you do collabs, Alex, people just pick who's got the
36:08 most views. I got to try to do a collab but Mr. Beast. Obviously, it's not going
36:11 to happen, but that's like one of the >> concept I know some Mr. Beast guys. I
36:15 could maybe figure that out. >> I mean, basically, um, you ask like, you know, have you
36:22 done the scoring system? I to me there's like no blocker here other than just um
36:30 being very um explicit about what is that algorithm that you follow in your
36:33 head and just getting that into a prompt. Um so I mean to me yeah this is
36:40 like this is this is just translating your knowhow that you have in your head
36:45 basically into more of like a formalized kind of algorithm. Um, and yeah,
36:49 >> you don't think there's like an intangible aspect to like what makes a
36:54 great podcast guest? Like it's just you know it when you see it. I don't know. I
36:57 don't know the answer. I'm just throwing it out there. >> I think then that's just a matter of
37:01 getting different kinds of data, right? So like the Twitter following for
37:05 example or like if you've had viral tweets, if it can't access that
37:09 information then then obviously it won't be able to make that call. But if it can
37:12 then it has everything that you you know then then there's no reason it can't.
37:16 >> Here's how I think about it. long there are heristics I would teach to a young
37:22 executive like Oliver or Marcus or Jacob and then their ability to execute on it
37:27 is probably I don't know 30 40 50% of my ability or 40 50 60% of your ability
37:34 whatever it happens to be so if you're taking a young person at the start of
37:37 their career who you're training and you take an openclaw instance I think
37:43 openclaw will follow your instructions perfectly whereas a young executive will
37:49 inconsistently follow your instructions. So that's the thing I'm seeing is young
37:54 executives early in their career are going to forget things. They'll be
37:58 variable. They they won't be perfect. So that's what I'm comparing is the scoring
38:04 happening every day at 7 a.m. The research happening every day at 7 a.m.
38:10 That consistency will beat a human because of consistency. And so what
38:17 we're what I'm finding is human failure is what makes these things so good is
38:22 they're more consistent. So in aggregate, you know, uh one of these
38:28 doing 365 days of guest research is going to beat a human just by the law of
38:34 numbers. And then okay, great. We still have to book the human. We still have to
38:37 send them a thank you. We we still have to produce the show. So what happens in
38:43 the old days we used to have to take I I make this analogy Alex to like the old
38:46 days of production when I start the showif started the show 15 years ago we
38:50 used to have a tricastaster tricastaster was like a $40,000 machine that does
38:53 what Zoom does for free >> right and eight people in Los Angeles
38:58 knew how to actually use it so you had to hire one of the eight people who were
39:01 trained on it. Yeah. >> Well and they would video switch now
39:06 because of AI Zoom switches to whoever speaking. You don't need somebody there
39:10 clicking camera A, camera B, doing a fade between the two. It just happens.
39:13 Then we had to take all the video streams, all the audio streams. And we
39:18 had to put take download them to a card, put the card in. So just moving the
39:23 files took across four cameras, three cameras that could take hours and then
39:28 putting all together. So that that's kind of what's I I feel like is
39:32 happening here is we're just eliminating chores and steps. Okay. Anything else on
39:36 the dashboard here as we wrap this up? >> Yeah, so most most of what I've showed
39:40 you whether it's the memory, the skills, the conron jobs and then the schedule
39:44 which kind of aggregates when all these things are going to happen. Those are
39:47 what I really look at every day. I will say there's one more kind of section in
39:51 my dashboard that is pretty important. It does have to do with memory. So the
39:57 DNA is basically what the model knows about you, what it knows about itself,
40:00 what it knows about the different agents, how it sets up heartbeats, which
40:05 are basically periodic tasks and that'll it will run and also in its DNA are
40:11 tools which are different tools that it has access to and how to use those
40:14 tools. An example of a tool would be notion. It would be lead IQ which is a
40:20 email search platform. It would be Google Docs. I mean a tool could be
40:25 Sonos or Spotify connecting to those platforms maybe Nano Bananas uh Gemini
40:32 API. So that's where tools go in. Yeah. Now what's super interesting is you vibe
40:37 coded or I should say OpenClaw vibe coded this dashboard. So this dashboard
40:42 does not exist natively inside of OpenClaw. You took the video
40:50 of somebody else's the YouTube video and you gave it to OpenClaw and said, "Build
40:55 me something like this dashboard in this video on YouTube." >> That's exactly what I did. And I
41:01 screenshotted it. Uh the video that I was watching, which was Alex Finn, who
41:06 was a guest recently, and I I did tell it a few different things, a little bit
41:10 of few little tweaks that I wanted to customize it to my bot. But overall that
41:16 was what I did and it basically it oneshotted it. It did actually not fill
41:21 in some of the categories like memory like skills. So I had to be I had to say
41:25 build out this but overall it built out the dashboard built out the different
41:28 sections. There are dashboards you can download in GitHub or as skills I
41:34 believe in Claude Hub which is a platform where you can get different
41:37 skills but I wanted to build out myself because as we know it can be a little
41:39 sketchy >> and like real shout out to Alex Finn. and I know he's become something of a
41:45 guru for our whole team after we had him on early on to talk about Claudebot
41:48 skills. >> All right, we'll drop you off. Oliver, great job. Alex, let's talk about Exo a
41:53 bit and thanks for sitting in on that. Any any advice for me of what I'm
41:56 building here at the firm and my approach to it? Anything we should be
42:01 doing better or we should look at? And and in terms of like people you interact
42:06 with using Exo's platform, where are we on the, you know, percentile? Are we in
42:12 the top 10% of users in terms of deploying this stuff? Top 1%, top 50%.
42:19 >> I I think uh there's certain aspects where you're quite far ahead. Others
42:26 that um I think I mean this this space is moving so quickly, right? I think one
42:29 of the things that I think you've got right is dynamic these sort of like
42:34 dynamic user interfaces that are very personalized. So I think this is the
42:39 future of the application layer is you don't have all these separate apps. You
42:43 just have this uh thing that kind of gets generated mostly on the fly and you
42:51 know that dashboard uh is moving towards that I think um but you'll probably you
42:56 know what you'll get is that it will it will compress even more to the point
43:01 where um you know everything everything that you see is generated on the fly.
43:05 Um, so I think you got that part right and I think that's like something that I
43:08 haven't seen many people doing yet. A lot of people are still using um, you
43:13 know, like uh the stock tools or whatever that are just provided out of
43:16 the box or using existing apps and that kind of thing. But I think building your
43:20 own apps is where this is going and where it becomes really powerful.
43:23 >> Yeah. Because if you make something bespoke software, you know, luxury
43:29 software was something that I don't know a private equity firm or a venture
43:32 capital firm would do. They'd have the luxury to hire two full-time developers
43:35 who have management fees all over the place. They would build luxury software
43:38 and they would have the developers come and just keep grinding. But the
43:42 developers hated those jobs typically, you know, they weren't building
43:45 something public facing and you know, just you get croft or whatever. But what
43:49 I like about this, Alex, and Lon, I'll open it up to you as well, is I'm
43:55 picking employees, team members, and saying, "Hey, uh, let me see if this
44:01 person is committed to getting rid of all of their work so they can move up
44:05 and do higher level work. There's always higher level work to be done. So, if we
44:10 can make this podcast, you know, run more professionally, faster, and grow
44:15 more, well, we can charge more for the ads, and we can launch another podcast
44:20 because we have more time. That's the thing that's kind of blowing my mind.
44:24 The the employees at our firm who are super hardworking, like everybody at our
44:28 firm does 50, 60 hours a week, very consistently, very hardworking. There's
44:32 nobody, to the best of my knowledge, that's slagging off except Lon. And uh I
44:39 kid I kid I kid how dare you uh lot is the most responsive but the distance
44:43 between the people using these tools specifically openclaw and the people who
44:48 are not right now it's like 10x leverage in week two it's 10x
44:53 leverage Alex what are you seeing in the field and then tell me what we should be
44:58 doing in terms of putting out our cluster and giving everybody on the team
45:01 a cluster like if I gave everybody on the team you know two Mac studios and
45:07 their own cluster and spent 25k per person letting them rip. Like how insane would that be?
45:13 Because that's not a lot of money all things considered. It' only be a half
45:18 million dollars. Like how much more powerful could this get?
45:23 >> Yeah, I think I think you you said um the word there leverage, right? Like
45:26 it's all about leveraging yourself. And I think the difference between someone
45:30 using these tools and not is massive and it's just going to increase and
45:34 increase. And we've seen we've seen that first with coding. I think coding was
45:39 the first one that you know um I didn't expect it to happen this quickly. Um but
45:43 you know I think it was claude code was the moment where it was like oh wow you
45:48 know if you're not using this then you're literally going to be like 10
45:52 times less productive than someone who is. That's happening now with other
45:55 things. So all these other things that you showed, all these other use cases,
46:00 um if you're using uh these tools and you're on the frontier, then you're able
46:03 to just get so much more done and really leverage yourself. So it's not so much
46:06 replacing people, uh but it's actually just being able to get more done, get
46:09 things done more quickly, and then be able to do other things. Um onto your
46:17 second point about local hardware. Um like I said, the the model layer is
46:21 basically solved like you know, the the gap has been closed. So we have really
46:24 good open source models and for a while that was like a big concern of a lot of
46:28 people is like are we actually going to have open source models um that are as
46:33 good as the closed source models. Um to me the nail in the coffin there was Kimk
46:38 2.5 that that is the like another big leap and I think there's a bunch of labs
46:42 now that are putting out open source models. Um now there's like still like
46:50 two two other problems um that uh that I see. One is being able to run those
46:55 models um on your own hardware or on your own infrastructure.
46:57 >> But you solve that, right, with your software, right? Your software.
47:00 >> Exactly. So that's what we're focused on. >> Yeah,
47:04 >> that's what we're focused on. And um you can Yeah, you can run I mean it's not
47:08 even 25K. It's actually like 20K of hardware if you get the u less storage
47:13 option. There's like Apple charges a lot for each incremental increase in
47:16 storage. So if you if you go for like the one terabyte then you're talking
47:21 about 20k of hardware to run Kim K 2.5 no usage limits the model's not going to
47:27 randomly change um you know daytoday so you know you know exactly what you're
47:29 running. >> Is there another choice like that you get more bang for the buck that like
47:36 hackers are using where they say yeah just get this Windows machine from Dell
47:40 and stack those or is Apple really with their Apple silicon the winner? Yeah,
47:44 right now it's Apple silicon. It's kind of like a perfect storm of things like
47:47 you know Nvidia is not so much focused on these consumer GPUs anymore. Um you
47:51 know you have memory prices skyrocketing. Apple has kept their
47:56 prices basically the same. Um so the cheapest option today even if you know
48:01 you go the full mile full custom stack is actually just two Mac Studios. Um and
48:07 yeah it costs about 20k and it's really about the memory unit economics. The
48:09 memory is so cheap >> and it's not about storage right? It's
48:13 not really about the storage. >> No, storage is not important. Storage is
48:16 storage is like you can you can also get ex like you need to be able to load you
48:20 need to be able to like download the model somewhere. Um but really it's
48:25 about having it uh fresh in memory hot in memory. If it's in memory then you
48:28 can run it fast. >> So who's using your software and how
48:32 much uh how do you make money? Like how do we pay you? Yeah. How does it work?
48:36 Are you an open source project? Are you a hosted project? Is it like you get
48:40 security and support? What what is your business model at EXO?
48:43 >> Yeah, so we have an open source core which is open source and it will always
48:47 be open source and a lot of people are running that themselves. A lot of
48:52 proumers I call them. Um just people who are willing to spend you know a lot more
48:57 money and um tinker a little bit more with their own setup. Uh on top of that,
49:02 our business model is an enterprise offering which is um we provide support
49:08 um and certain compliance features that you would need if you're running this in
49:11 an enterprise environment and we charge a licensed subscription for that thing.
49:14 That's how we make money. >> What does that couple of thousand a year
49:17 or something? Um, yeah, you can run it on even a a single Mac Mini and that
49:24 that runs at, you know, just uh $2,000 per year um for the the the lowest
49:30 subscription, but you've got people who now are buying actually uh more than 100
49:36 Macs and clustering them together. Uh so the it yeah, it varies quite a bit
49:39 depending on the scale of the deployment. >> Amazing. Uh and where's your company
49:45 based? How many people now? How's it going? How's the how's the company going
49:48 as a founder? Yeah. >> Uh yeah, we're we're a pretty small
49:53 team. Um all engineers, uh seven people based in London. >> Fantastic. And did you raise money yet
49:59 for the company or you're you're seed funded or you funded it? How's it going?
50:03 >> We have raised venture funding. We haven't announced anything yet, but uh
50:06 soon to be announced. >> Okay. Well, let me know. I might want to
50:09 slide a little. Uh Jay Cal might want to get a slice of this. I'm I'm super
50:12 excited about what you're doing. Appreciate you coming on the show.
50:15 appreciate you making this incredible product and we will be a customer uh
50:20 probably over the weekend or next week because we would definitely want the
50:22 enterprise features and and I guess you pay for the scale of the GPUs and the
50:27 memory. Is that the the how the price? >> Yeah, per per nodes that you're running
50:29 on. >> Oh, okay. So, two Mac Minis, same price as two Mac Studios. Just how many nodes?
50:35 What's the largest number of nodes somebody has daisy chained? What do you
50:39 that's what you used to call it back in the day. What do you call it when you
50:41 connect multiple? Yeah. So this is a this is a really interesting um just
50:47 area right now of how do you actually scale and for a while people were just
50:52 scaling out. So you know you would just basically run the same model on multiple
50:56 instances and because it's consumer hardware it doesn't have a lot of the
51:00 same uh capabilities as enterprise grade hardware. But recently um Apple uh came
51:08 out with RDMMA support uh which is basically a way to share memory between
51:11 devices in a way that's very low latency. >> Um that's something that you only really
51:15 saw in the data center before, but they've kind of brought that technology
51:18 into consumer hardware. >> It's incredible. Yeah. And you connect
51:20 these on >> Yeah. You just connect it with Thunderbolt 5, which is like you can buy
51:25 like a $50 cable. So, if you're talking about two Mac Studios, you buy a $50
51:28 cable to connect them and you have basically one big GPU out of those two
51:33 Macs um because of that low latency capability. So, now um we're starting to
51:38 see, yeah, it depends, you know, scaling up and scaling out, right? Scaling out,
51:42 we've seen more than 100 um but scaling up, you know, you can you can put about
51:46 four together at the moment um to increase your TPS on single requests.
51:51 what I mean by scaling up. Uh but in terms of if you want to support let's
51:55 say now a company of thousand people, you can easily scale that out. Uh you
51:59 just add more Macs and you can connect them um basically however you want. So
52:05 Exo, we build it in a way that um supports uh any ad hoc uh interconnect.
52:11 So you can just connect them in a mesh um and keep scaling. Keep scaling.
52:15 >> Crazy. Who's got the largest cluster? or you have to say the client name, but
52:18 like what type of client, a finance client, a hacker has the most number of
52:23 like uh Mac Studios connected and like >> the largest is actually something a
52:28 little bit different which is interesting because we built the kind of
52:31 infrastructure to be able to do clustering and it's it's not just LLM
52:37 like the the biggest um cluster right now is a HPC cluster um and they're
52:41 doing like scientific computing workloads on there and they're running
52:46 um over 100 Mac minis and they found that actually it's the cheapest way to
52:54 uh per dollar um to run that specific kind of workload. So there's a lot of
52:58 spillover into other things as well. We've also got like financial services
53:03 um uh customers who are running fairly big clusters like 32 Mac studios um and
53:11 um yeah it's um I think we just see bigger and bigger uh um bigger and
53:15 bigger clusters over time >> HPC high performing compute is that the
53:18 acronym >> yeah exactly so it runs actually all on CPU and uh that's the thing about this
53:25 this this silicon is very like Apple silicon is very is is very good you the
53:30 most advanced processes and it's like you know um the power efficiency is
53:35 really good. So it turns out there's a lot of other stuff you can do with it as
53:40 well. Um so if you would buy let's say you know a bunch of Mac studios for your
53:45 um for for your employees then you know they can also use that for other things
53:48 right they can use that as a workstation they can use it for you know all these
53:53 things that um open floor needs maybe you know sometimes it needs to run uh a
53:58 compiler or something or it needs to run like um something that's a bit more
54:01 demanding and that's that's the point it's general purpose hardware that you
54:03 can use for other things. >> Amazing. This is extraordinary. Where
54:08 can people find out more about ExoLabs? >> Uh, you can go to exolabs.net.
54:12 >> Perfect. exolabs.net. Alex, thank you for coming on. We'll have you on again.
54:16 The AI just told me you got an incredibly high ranking. You were
54:20 personable. Uh, you had deep insights, you were uh cordial. So, yeah, I think
54:25 our AI overlords liked you in the uh >> models are getting good. Models are
54:28 getting good. It's >> they're they're learning. They're
54:30 learning. Yeah. >> All right, Alex, thanks for coming and
54:33 we'll drop you off. All right, let's bring on our winner of the gamma pitch
54:37 competition. This was a heated pitch competition, but next visit AI won.
54:41 Ryan, congratulations. You won. >> Thank you. >> It's uh
54:44 >> there it is. >> Awesome. >> Uh what did he win? Yeah,
54:48 >> it's a 25K investment from uh Twist and from our friends at Gamma, the AI
54:53 powered uh presentation maker, which is incredible, which Ryan used to make the
54:57 winning pitch deck. Of course, >> I'm Ryan Enelli, CTO and co-founder of
55:02 Next Visit AI. We saw burnout by doing the charting. so doctors can do the
55:07 healing. I spent years going to doctors seeking answers and ended up hours away
55:11 from my death because my care was fragmented. My providers were overloaded
55:16 with paperwork. My history was scattered and it resulted in my care being
55:19 neglected. I'm not alone. One in four patient charts contain errors. Clinicians spend
55:26 over three hours a day on charting and this leads to burnout. I want you to
55:31 meet Dr. Rathor. Before next visit, he saw 16 patients a day, was burnt out,
55:36 and had clinical errors. Now he sees 24 patients a day, saves time, and also saw
55:42 a 30% revenue increase. Here's how it works. Dr. Rathor selects a patient,
55:46 starts his session, and next visit listens. Clinical data is built in real time with
55:53 deep insights into the patient chart. When the patient leaves, the chart is
55:56 finished, and the notes reviewed by Dr. Rathor. Then it's ready for billing.
56:00 It's fast, ehr ready, and hipaco compliant. Since launch, we've gained 311 users and
56:07 have 68 paying customers. And our customers are addicted. We have 1.6%
56:12 churn, 24% conversion, and a near perfect MPS score. We've scaled to
56:19 $9,000 MR since launch. Our CAC is 189 with a $1,700 LTV, and our average
56:25 revenue per user is $133 per month. We're starting with behavioral health in
56:30 the US. A $2 billion TAM capturing 5% or 60,000 customers gets us to 100 million
56:36 ARR. Most competitors are just scribes. We're a complete platform that providers
56:40 trust. We provide real-time clinical decision support, build accurate data,
56:45 and become irreplaceable. I'm a full stack engineer with 15 years
56:48 of experience in enterprise environments. My co-founder, Dr. Rafi is
56:53 a psychiatrist with over 15 years of delivering patient care. We're next
56:57 visit AI. We solve burnout by doing the charting so doctors can do the healing.
57:00 Thank you. >> Unbelievable. Incredible. I'll give a little golf clap here. Get a little golf
57:06 clap going. That was perfect. A perfect pitch. You explained exactly what the
57:10 problem was. You explained what the solution is and the opportunity in terms
57:14 of the total addressable market and why you are uniquely and your partner who's
57:18 a psychiatrist are uniquely qualified to do this. Uh, so this is as close to a
57:22 perfect pitch as you can get. If I were to score it, maybe 8.5 out of 10. I
57:28 don't give 10. So, you know, 8.59 and 9.5 would be the three choices. I think
57:32 making sure people understand this is for psychiatrists and psychiatry and
57:36 that you're very focused on that. Tell everybody what Next Visit is and how
57:41 you're doing in terms of product market fitting customers. Next Visit is an AIC
57:45 scribe and documentation platform for clinicians uh specifically behavioral
57:49 health like psychiatrists. I I don't know. It's just been a crazy past couple
57:54 months with the accelerator and uh just our growth internally. I mean we're
57:59 producing right now for physicians probably about $1.6 million a month in
58:03 revenue for them. >> Well, you got to try and capture 5% of
58:08 that. If you capture 5% that No, I mean that's literally like the uh the great
58:13 value proposition. If you give more than you take, you will continue to grow. And
58:18 what a And that's an amazing um replicant you have there. A synthetic
58:23 cat on that cat tower behind you. It looks so real. Uh is your owl real?
58:27 >> Yes, he is. >> Your owl is real. Okay, there you go.
58:31 What are you gonna spend the 25k on? You guys going to Vegas? You're gonna just
58:35 have a a corporate retreat? you know, invested in uh Plaude Noteakers. I think
58:39 you guys put me onto the Plaude Notetaker, which is a great noteaker uh
58:42 user. What are you gonna put it towards? You gonna go redesign your website? What
58:45 what's the uh what's the idea here? >> I think we're going to use this towards,
58:47 you know, we're really capital efficient. So, I feel like we can get a
58:51 lot of stuff done in terms of integrations and branching out to more
58:55 EMRs because that's what we hear a lot is doctors want interoperability. They
58:59 don't want to have to plug 15 different things in. So the more they can just be
59:03 inside of next visit without having to go externally um is better.
59:09 >> All right, well done. All right, we'll drop you off. Continued success to visit
7:03 10 days and gone all in on Open Claw. One of the things we did was we built a
7:08 persona, the first one, to work on the production of the podcast, doing guest
7:13 research, guest outreach, and to figure out what should be on the docket. In
7:15 other words, what topic should we discuss and on the margins, hey, what
7:19 should the title of this video be? What should the thumbnail be? And just trying
7:23 to see if it could do those functions. Oliver, you've been working on this.
7:28 Show us the state-of-the-art now because I think the first time we did this was
7:31 last Monday, not this past Monday, but the Monday, two Mondays ago. Yeah,
7:35 >> this is the end of week two of our round theclock uh clawbot coverage.
7:39 >> Crazy. Okay, Oliver, show it. Show let's show what you built.
7:42 >> It's been around 10 days since we first started building our instance of
7:47 OpenClaw. And as you mentioned, we have two different ones. One that's more
7:49 focused on the investment team and I am building an OpenClaw bot that is kind of
7:53 more focused on the production side of things. So, one thing that I think was a
7:57 little bit of a misstep that I would tell anyone who's building a new
8:01 OpenClaw is to start with a dashboard. That should be kind of your step one
8:06 once you get your openclaw online and a dashboard as you would think about it.
8:09 But it is able to connect to the back end of your openclaw instance and bring
8:13 in the data so you can see it visually bring in all the files. It's just being
8:16 able to look at it visually is much better than trying to interact with its
8:20 backend and obviously its front end all just from a chat interface. So doing
8:26 this was very easy. So I was watching an Alex Finn video who we had on last
8:32 Monday and Alex Finn was interacting only in his dashboard with his open
8:37 claw. I basically was like why are we not doing that? Because open claw
8:40 doesn't really have a dashboard. You basically are telling it hey remember
8:45 this you know make a file here but you don't understand the underpinnings.
8:48 There isn't a dashboard. So it would literally be this is early on. Open claw
8:54 is essentially a black box. You have all this memory and you have skills that you
9:00 have to query it to understand. But you made a dashboard. The dashboard is going
9:05 to show what files it has in memory. And an example of a memory file would be
9:09 what in our case. >> Yeah. So the example of a memory file
9:13 would be Oliver's preferences. What are my preferences? So this is in the
9:17 memory. Never use m dashes and emails. I don't want that to happen. I want you to
9:22 be a person. uh don't put direct competitors on the same show when we're
9:27 booking a podcast episode. Um and also at the moment we're not booking VCs on
9:30 this week in AI. So these are all things that I've told it these are my
9:33 preferences when I'm doing tasks throughout the day. >> So you don't want to repeat yourself and
9:38 say don't put two competitors on the same episode. You don't want to repeat
9:43 yourself uh wi with these specific instructions on booking guests. Got it.
9:47 >> Yes. Exactly. And it just kind of keeps you know things I've told it and it's in
9:51 mind. So if I ask it to do something, it'll remember what we talked about.
9:56 Example of a shortcut that I gave it was I I basically wanted it to understand
9:58 who were the pending calendar invitations that we had while we were
10:02 booking them. So there's, you know, a handful of guests that
10:04 >> if you have guests that we've invited and they haven't responded to the invite
10:09 yet. You want to know that you call that pending >> pending calendar invites. Yes. And in
10:13 order for the bot to be as helpful as possible, it needs to understand who
10:16 those guests are, which are the ones that it needs to look for the email to
10:21 see if they have responded yet or have I responded to them. So these are the type
10:25 of things that you would keep in the me in your memory. So memory is the first
10:28 thing on the dashboard. I think we understand that preferences or different
10:33 pieces of data. Now some of the memory could that exist on a notion page or in
10:36 a Google document and would that be represented here or is it only memory
10:39 and files that are stored inside of OpenClaw? These specifically are only
10:43 stored inside of OpenClaw. Of course, it can reference different databases that
10:47 you have. But the kind of the big point of this show is to show how we have
10:52 created our open call Ultron to replace 20 employees at our company. So
10:56 obviously that's the end goal. I still want to have a job. I'm sure the lawn
10:59 wants to have a job. >> There'll be more for you to do. We want
11:02 to launch. We have >> Here's the thing. There's two, if you
11:05 think about your job, you've been doing a bit of production here. of the
11:09 production hours, hours you spend on production at this point in week two,
11:14 how many of those do you think you'll wind up handing off in 30 days? Let's
11:17 say if you just keep grinding on this for another four weeks, in 30 days, what
11:22 percentage of the work you're doing in total hours? So, if you work 50 hours a
11:26 week, how many of those hours would be done, you know, conservatively or
11:29 optimistically, you give one number or two, just conservatively,
11:32 optimistically, by this new Ultron? I would say around 60% of my time if I'm
11:37 doing 30 hours a week on production. Something you mentioned earlier is that,
11:39 you know, there's probably hundreds of tasks that people do at our company. So,
11:43 in order to build out all of those skills that can do those tasks, we're
11:46 going to have to do that one at a time and it's we're going to need to make
11:50 sure each one works. So, I have around nine or eight tasks that I have
11:56 successfully or I'm in the process of building out. >> Okay? And those are called cron jobs.
12:01 These are jobs that occur on a chronological on a on a time basis.
12:06 That's what cron job means. And cron jobs are something Alex that developers
12:11 do all the time. But knowledge workers don't typically have cron jobs, right,
12:14 Alex? >> Well, I don't know. I think this is one of the more interesting features and one
12:20 of the things that like to me open floor is like putting together a lot of things
12:26 that already existed in a very intuitive uh seamless way and one of them is
12:29 scrunch jobs and I'm using them I'm using them for like loads of things um
12:36 not just um dev stuff but like a lot of um management so we're like I have
12:42 something that's like constantly scanning um our Slack and uh basically making suggestions once
12:53 um it's uh I have kind of like this uh way of quantifying like uncertainty
12:58 about tasks. So I think this is something that the LLM are like getting
13:04 better at is like knowing when to um be proactive. And so, you know, like
13:08 basically I'm giving it as much context as I can from the Slack so that it can
13:14 suggest every day um a list of things that we might be missing or something
13:17 things that we should be aware of. So, this is running just on a on a chron job
13:27 AI is revolutionizing every aspect of our industry, but for founders, it can
13:31 prove very frustrating largely because cloud costs are so unpredictable. This
13:35 has probably happened to you. You've planned out and budgeted a certain
13:39 amount for a project and then been hit with a massive bill with no real
13:43 explanation for what went wrong. If you're feeling like the cost of working
13:46 with this innovative technology is just too high, finally we have a solution for
13:50 you. Crusoe. They are the AI cloud company that's taking an energy first
13:54 approach. What does it mean? They efficiently access their power directly
13:58 from the source and pass all the savings along to you. This isn't just about
14:03 saving some cash. Cruso's architecture is specifically optimized for your AI
14:08 workloads and their customer support response times clock in at under six
14:12 minutes. Here's what I want you to do. Visit cruso.ai/savings
14:18 to receive $100,000 in credits for virtualized NVIDIA GB200 NVL72 on Crusoe
14:24 Cloud pending availability. That's Yeah. So, and when you see uh Oliver
14:35 with the memory and the files, what comes to mind with Exo and, you know,
14:41 standing up to, you know, Mac Studios, the M5s coming out and how much memory
14:45 you could put in there? I was telling the team, I want to take the Notion API
14:52 and I want to take the Slack API and I want to put into memory every single
14:58 Slack message this year, maybe even over all eternity. and you know somehow have
15:04 that all in here. So maybe you could you could speak to that memory because you
15:06 already spoke to it in terms of like giving it to open AI or another company
15:11 versus keeping it for yourself. But how do you think about large amounts of
15:15 data? Yeah, this is definitely a big focus right now in terms of inference
15:19 infrastructure is just how do you support really big context um with you
15:25 know basically being able to put everything in context and the way I look
15:31 at this is you can look at well inference consists of two stages there's
15:36 like the prefill stage which is very compute heavy it's comput bound and you
15:40 have the decode stage and what you're seeing is that most use cases at the
15:45 moment are very decode heavy. So, it's actually most of the time is being spent
15:50 on just uh generating tokens. And I think the software is actually really
15:55 good now at kind of making sure that when it comes to the prefill, you've got
16:02 uh you're getting a lot of uh cash hits. Um so, you know, I think basically we'll
16:06 be able to continue just increasing context, context, context quite a bit.
16:13 And you know, basically the hardware is more of a focus is going to be on the
16:16 decode side. That's where consumer hardware is really good. Uh you have the
16:20 M5 coming out pretty soon. Um it's a big boost in memory bandwidth for memory and
16:24 all of that side of things is super memory bound. So I don't see any like
16:29 reason why you couldn't just shove all your Slack messages into context. I
16:33 think that's going to happen. and and we should just buy when the M5 comes out
16:39 max memory which is what 500 gigs of of memory. >> Yeah, it's 512 at the moment and maybe
16:43 that will increase as well and it's enough to fit you know really large
16:48 models enough to fit all that context as well. This is always I feel like the
16:51 sort of the dream like when producer Claude we first brought that on board
16:54 from Anthropic to the show that was really what we wanted like he want he
16:59 should listen to everything we say and remember it and then throw in helpful
17:03 suggestions and the technology was not quite there yet but I feel like now
17:07 we're on the precipice of actually being able to do that with an AI.
17:10 >> Okay. So let's go through the cron jobs here real quick. Maybe you could give us
17:15 an example of a a cron job. And I'm guessing each one of these skills is,
17:19 you know, if you if it's been two weeks and you've got eight working, you're
17:24 you're basically on one a day or so or one every, you know, 1.5 days. So that
17:30 seems like a pretty good pace to me if we have 200 skills. We're going to give
17:35 this eventually, you know, that that's a that's a pretty good um Yeah, that's is
17:39 a pretty good um pace. So >> there is a trial and error. Like I I
17:43 sort of have written one skill so far for the ticker digest and you do have to
17:47 tell it what to do, see what kind of feedback you get and then you know there
17:51 is a tinkering to get the prompting and get everything exactly the way you want
17:54 it. For sure. >> Okay. So let's uh look at hm how about
17:58 attendance? I think this is an interesting one. For people who don't
18:01 know, I wrote a famous blog post years ago called, you know, this sort of
18:06 lightweight management and start of day, end of day as a tool for uh executives,
18:11 especially when remote teams were happening. I just asked everybody on our
18:15 team, Alex, kind of like a standup for developers, etc. Just say what you're
18:21 intending to get done today and then at the end of the day, reply to yourself in
18:25 Slack in the general channel and uh say what you got done. I had like two of my
18:30 four senior executives at the time essentially quit over this because they
18:34 didn't want to be micromanaged. Uh and I was like, well, it's just like
18:38 you're getting paid a very large six-figure salary. You you can't spend five and 10 minutes
18:43 just saying what you're going to do for the day. And and that was great for me
18:47 because I I just don't like people who are not good communicators or don't set
18:51 goals for themselves and and they're doing great probably. Um maybe. But what
18:55 did you create here, Oliver? Yeah. So we all post our start of day and end of
19:01 days in one Slack channel called general and two crown jobs. One is the start of
19:06 day attendance where it looks who has sent their start of day you know for
19:11 anywhere from you know 7 a.m. to 12:00 p.m. And right at 12 which is in the
19:15 morning when you should send your start of day what you're going to do that day.
19:18 It will look through the general channel, see who has sent it, and
19:22 whoever doesn't send it, the bot will then send a Slack message in the general
19:27 channel tagging you, Jason, and also tagging the people who haven't sent it
19:29 yet. So, it's kind of just that accountability. That's a crown job that
19:32 runs it. >> And then you do the same thing at the end of the day. And previously, we would
19:37 have a human do this. They would scroll up and they would spend 20 minutes and
19:40 they would then go check in with people because that's in when we were fully
19:44 remote, Alex. That's what how we figured out uh who took a paid day off or who
19:49 was on holiday or you know if something was wrong, you know, check in on a
19:56 Building out your team is one of the most crucial things you have to get
20:00 right in your startup and finding the right developers is particularly
20:05 important. But now there's lemon.io. They're going to save you time, money,
20:09 and headaches by doing all the time consuming leg work for you. They've got
20:13 an experienced lineup of prevetted developers working for competitive
20:17 rates. Just 1% of applicants are accepted into Lemon's elite program. And
20:22 they're not just out there finding this great talent. They're also working with
20:26 you to integrate these new members into your team. Plus, if it's not a good fit,
20:29 hey, and sometimes things don't work out, Lemon will hook you up with a new
20:34 developer ASAP. I've seen startups go from just pretty good to amazing after
20:39 filling out their teams with developers from lemon.io. Go to lemon.io/twist
20:45 and find your perfect developer or technical team in 48 hours or less.
20:50 Plus, Twist listeners get 15% off their first four weeks. That's lemon.ioist.
21:02 Okay, give us uh one more. What else is like interesting here?
21:05 >> Let's talk about self-optimization. I want to hear about that one.
21:07 >> Oh, yeah. That's I don't know what that is, but okay. Age of Ultron is here.
21:10 What is self-optimization? >> Yeah. So, this is basically an optimizer
21:16 task where this role would previously be an engineer or I would look through all
21:19 the files. I mean, I wouldn't be able to do this if it wasn't plain language like
21:23 OpenClaw is, but previously you're looking at an organization, you're
21:25 looking at the structure. You would maybe want an engineer or someone with a
21:30 lot of experience to look through how everything's running. So I have set up a
21:34 selfoptimization cron job. >> So this is running Monday through
21:39 Friday. And what is it? And and did you write this prompt or did you ask it to
21:43 write a prompt to do this? >> I asked it to write this prompt. The
21:47 goal is the end goal would be for you know any from 3 to 5 am for it to be
21:52 looking through all of our files all of our cron jobs all of our skills and then
21:58 at 8 a.m. at me what could we change? So, not actually execute yet, at least
22:02 while we're still building trust. It gives me a list of five of the things
22:06 that it thinks that we can really change and optimize. And this was the one from
22:11 this morning. So, it it noticed that there was a time zone bug in the guest
22:17 calendar. So, it was getting CST and CDT confused. Um, and it said that it would
22:21 be able to fix this quite quickly. There was some issues in the
22:25 >> So, it's always good to give the exact one. So that was great when you gave the
22:28 exact one. It had an error there. Give another one. What else is like an exact
22:32 thing that it said we should fix that was material here. >> The self optimization cron job realized
22:38 that there was a cron scheduleuler issue where jobs were skipping days. So it
22:42 realized that some of today's jobs did not run and then it went and
22:48 investigated the scheduling issue and also told me that this would be a medium
22:53 effort change. So then I told it to fix that and then it went into the files and
22:56 made sure that that wouldn't happen again. Did it give us anything like in
23:00 terms of this is like fixing its internal you know guts and everything uh and the
23:06 engine but did it give us anything in terms of destinations of where to take
23:09 the car that could be improved? Did it say like oh you should consider you know
23:13 these type of guests for the program or here's how to make advertising you know
23:17 more effective. Did it give us anything like that on a business basis? Yes. So
23:21 the self-optimization cron job that I set up is specifically looking at how
23:26 open claw is set up. But I do have other cron jobs that are exactly that. So I do
23:33 have a sales and sponsors specific task. So one of the tasks that one a member of
23:38 our sales team does is they look through competitor podcasts and see who the
23:43 sponsors or partners are that are on those shows so we can get ideas, you
23:46 know, to bring on sponsorship. >> Yeah. If we're missing if we're if
23:49 there's some new sponsor in the world and we don't have them yet, you might
23:53 hear them on the New York Times podcast and we should probably reach out to
23:56 them. We had a human doing that previously. Yeah, >> exactly. And in this basically works
24:03 with the YouTube API will go through a list of I believe 20 different podcasts
24:07 that I gave it. Look through the timestamps and I also believe it can
24:11 work with Podscribe which I think is a little more curated towards sponsorship.
24:16 and we'll look through the the timestamps, hyperlink it in a message.
24:22 Also, it looks through our pipe drive, which is our sales CRM, and we'll figure
24:29 out if we have a sales rep who owns a certain sponsor, and then flag them and
24:32 say, "Hey, this sponsor was on this podcast or it will say, hey, no one owns
24:37 this sponsor that I found on this podcast." And then it will send that
24:41 daily as a message into our sales channel. >> Great. Yeah. And we could be doing this
24:47 like we could have this running constantly. Um, so Alex, just so the
24:53 audience understands, you know, what you're doing at Exo and you stack two
25:00 Mac Studios, 12K each, you got $25,000 on the desk doing that specific job. Go
25:06 and look at all the podcasts out there. What would it cost to like run that if
25:10 you tweaked it, you made it efficiently just 24 hours a day? Every time a
25:17 podcast in the top, let's say 500 on Spotify, Apple podcast, it just went
25:21 there, got to the transcript or looked in the show notes and pulled the
25:23 advertisers out. What would something like that like in terms of hardware cost
25:27 to do? >> Yeah. So I mean not many people so like not many consumers are going to buy 25k
25:36 of hardware to run models but yeah a lot of businesses are doing this now um and
25:42 um it depends on what model you're running. So the models are getting
25:46 better uh also they're getting better at uh compression. So, you know, now you've
25:53 got like a model like GLM flash, um, which is a pretty small model, um, that
25:58 can run even on a single device for a few thousand dollars and it can do a lot
26:03 of this orchestration work, which is a lot of what's happening here is the kind
26:07 of orchestration aspect of just knowing, okay, I need to call this tool, etc.,
26:10 etc. >> So, it's really about picking the right model in terms of efficiency with the
26:17 hardware. >> Yeah. Yeah. And I think now like the expectation um you we're sort of
26:24 grounded to like the closed models, right? So people want the same level of
26:27 performance they're going to get with Opus with uh GPT and that's why you know
26:32 Kimmy K2.5 is super interesting because it closed that gap. >> Kimmy is the open source project from
26:38 China and it does what 80 >> moonshot AI is to come. >> Yeah. And that's what Alex like 80% of
26:45 what Claude Opus can do. would you say? >> I would say even even more. I mean for
26:50 me I've um I struggle to tell the difference. Um obviously Opus 4.6 just
26:56 came out and you know new codeex model and stuff so maybe there's a little bit
26:59 more of a gap but then you know Deep Seek V4 probably around the corner as
27:03 well. Like I think basically the gap is very small um a lot smaller than people
27:08 think. Um, and the cost will just keep going down. Um, because, you know, the
27:14 um the the hardware is getting better, the software is getting better, and like
27:17 I said, the models are getting better, but not just that, they we're getting
27:20 better at compression. So, you'll be able to run them on smaller devices.
27:24 Eventually, you'll be running Frontier AI on your phone. That's still a while
27:29 away, I think, but um that's where we're trending towards. And um yeah, like I
27:36 said, most of this is very decode heavy. So it can just run on like consumer
27:39 hardware as long as it has enough memory. >> So let's go to the next piece of your
27:44 dashboard. And we'll get into how you built the dashboard at the end. I know
27:47 you really care about that too, Oliver. But we have the memory, we have the cron
27:52 jobs. Now there's this other thing uh that's super important which is skills,
27:57 right? Like there are skills which you could think of as apps. So if you go to
28:01 your dashboard uh and you go to the top level dashboard we'll see before you go to skills on the
28:07 dashboard we have the memory files we have the cron jobs the fourth thing over
28:12 is skills and you've got 13 skills currently so let's show a skill some of
28:18 the skills we talked about on Monday's show or Wednesday's show was the top six
28:22 seven skills Monday we did the top seven skills one of those skills is like you
28:26 know you can get a transcript from YouTube uh another one is you could do
28:32 Matt Van Horn's last 30 days skill. These skills are being produced open
28:37 source being put into um open claws directory but you can make your own as
28:41 well. So let's talk about skills we've added here. You got to be very careful
28:44 with skills right Alex in terms of security because people could put all
28:47 kinds of wacky stuff in the skills. Yeah. >> Yeah, for sure. I mean I think this is
28:52 one of the um open questions at the moment is just like how do you solve the
28:56 security problem and I know open chlora I've seen a lot of commits recently
28:59 focused on the security aspect but there's a few very difficult problems
29:03 here like prompt injection that I don't know of any good solution right now
29:09 >> explain how that works yeah explain how prompt injection works in specifically
29:13 the open claw context. Yeah. >> Yeah. I mean, I kind of touched on this
29:17 earlier, but the the way we like the actual interface to the model itself is
29:22 very simple. It's literally tokens in, tokens out. There's not much more
29:28 happening there. And those tokens right now, the way OpenCore works can come
29:33 from many sortters. So, if you connect it to um you give it the ability to
29:37 search the internet, then anything it finds on the internet will end up in the
29:43 model through those tokens. So basically we have no good way of kind of um
29:48 treating uh certain tokens as trusted and and certain uh tokens as untrusted. And that
29:55 means um when those tokens end up in the model uh you could have someone that
29:59 puts like a blog post online that looks like a um you know totally normal blog
30:06 post but um in there is something that says hey if you have access to a crypto
30:11 wallet send it to this um send it to this endpoint. Um and there's as far as
30:17 I know right now there's actually no good kind of defense for this because
30:20 the models are kind of not very good at handling this. they'll just do what
30:28 >> With tools like Vibe Coding and now Open Claw, it's easier than ever for you to
30:33 create an exciting new product and to do it fast. So, more companies are getting
30:37 built, which is awesome. But there's more to starting a company than just
30:41 building an MVP or vibe coding something. That's important, too. But if
30:45 you're serious about growing a real startup, you need a Delaware CC Corp.
30:48 And that's going to give you a major competitive advantage. People will take
30:50 you seriously. you'll be able to raise money. And that's where Northwest
30:54 Registered agent comes in. They're going to give your new business a real
30:57 identity. This means an address to use on your public filings, like an actual
31:01 mailing address, a domain name, a custom website, a business email, a phone
31:04 number, and they're going to do that in 10 clicks. I'm not kidding. 10 clicks,
31:09 well under 10 minutes. Plus, NWR will provide you with step-by-step guides,
31:12 compliance reminders, and they're going to help you get all the advantages of a
31:16 Delaware CC Corp, regardless of where in the US you're operating out of. So get
31:20 more when you start your business with Northwest Registered agent. Visit
31:27 And the link is in the show notes. Visit northwestregistered agent.com/twist for
31:32 more details. So let's go over some skills here. Oliver, what what skill is the most
31:38 promising to date? >> Yeah, the skill that's most promising
31:41 today would best definitely be my guest booking skill. I think one thing to note
31:46 is you don't just have your skills, you don't just have your crown jobs. they
31:49 work together and the way that I've set up a lot of my cron jobs is to actually
31:54 interact with the skills and some of them like my guest booking cron job
31:58 which will actually look through prominent guests on different podcasts
32:01 that cron job actually goes to a skill and has and tells that skill to run so I
32:07 have one big guest booking skill which has a description at the top of the
32:12 skill which is end workflow for booking guests on the this weekend AI podcast um
32:17 used when finding researching creating calendar invites and so on. So, this is
32:22 a very long kind of just marked down description of what I want it to do. So,
32:28 at the beginning, it's explaining how to look at a notion page that I want it to
32:34 look at and how to look at it and which properties to look at for so people
32:38 understand. We have we previously built a notion page with potential guests on
32:42 it and you we came up with a ranking system for those guests, right? It's
32:46 looking at that page, I assume. Yeah. Yes. And then kind of the meat of the
32:51 skill is the workflow. So step zero is the guest sourcing. So at the beginning
32:55 of the day, you can see it goes to the guest ideas crown job which happens at
33:01 7:45 on weekdays where it sends me a DM of five different guests that have been
33:05 on different podcasts or turning on X and so forth. And then step one is deep
33:11 research. So even though this is part of the guest booking skill, it actually
33:15 uses a guest research skill. So there's not as much context just baked into this
33:20 one skill. So especially something interesting here that I've been
33:24 realizing is for guest booking. I don't want this to be an end toend workflow
33:27 yet. I don't want I don't think the I don't trust the models to find a guest
33:33 and not let it to confirm it with me and go through this whole checklist. So this
33:36 is definitely human in the loop and I think that will some skills will and
33:40 workflows will be human in the loop and I don't know if that will change
33:43 necessarily super soon. It's how much trust you have. I tried it. Alex was
33:47 kind of our first guest that we invited using. >> Yeah, I wasn't sure if we were gonna
33:51 mention that, but Alex, you were sort of our guinea pig for this
33:55 >> and the email that it sent the subject line was messed up and there was some
33:59 weird stuff in there. >> Yeah, I literally had no idea by the
34:03 way. Like I >> I only saw it later on um on uh another
34:09 podcast that Jason's on. Um and I was like, "What the hell?" Like a friend
34:12 sent me that. I was like, >> "Wait, that was uh that was uh open."
34:17 >> Yeah, that was our guy. That was our our computerized man. Yeah.
34:21 >> Does it like I saw you put in, you know, uh some other AI podcast, which is
34:26 great. Do we have a skill to rank the quality of a guest? Because that's
34:30 something I've been training you, which is yeah, a hard thing to learn. Have you
34:35 made that skill yet? because that's the scale I really want to see if and the
34:39 way to test this scale is for you to tell me to send me two lists. Your top
34:46 five guests and what you know your ultron says of the top five guests and
34:50 don't tell me who's is who and then lot and I will look at it and say okay yeah
34:53 we we think this is the better list. That's where you're you're you're just
34:57 now starting to breach the line between like objective and subjective. Like is
35:02 the AI going to get better at making those kinds of gut check call? Like I I
35:06 don't know if I don't know if it understands what's interesting like it
35:11 can sort things, but that's where I'm I'm very curious to see if we can start
35:15 pushing that boundary of like could it can it tell when a person has a good
35:19 personality or a segment is funny or particularly clickable or compelling.
35:23 Like I really don't know the answer. So the way to do this is to have a scoring
35:26 system Oliver and I gave you a scoring system like deep I think my scoring
35:31 system was like there performance >> it was performance expertise and
35:35 actually started this skill yesterday so I told it to do a deep research send out
35:41 multiple agents at the same time pull out two lists combine them set another
35:46 agent to score them and then give me the score and I would say for the most part
35:49 it was accurate and that was just a I didn't train it I didn't spend too much
35:52 time on it but it did a great job so that will be a skill. >> The other thing is I think virality of
35:56 the guests. Like does the guest go viral or do they have like a large following
36:01 on their social media? Those are all interesting ways to pick guests.
36:04 Sometimes like when you do collabs, Alex, people just pick who's got the
36:08 most views. I got to try to do a collab but Mr. Beast. Obviously, it's not going
36:11 to happen, but that's like one of the >> concept I know some Mr. Beast guys. I
36:15 could maybe figure that out. >> I mean, basically, um, you ask like, you know, have you
36:22 done the scoring system? I to me there's like no blocker here other than just um
36:30 being very um explicit about what is that algorithm that you follow in your
36:33 head and just getting that into a prompt. Um so I mean to me yeah this is
36:40 like this is this is just translating your knowhow that you have in your head
36:45 basically into more of like a formalized kind of algorithm. Um, and yeah,
36:49 >> you don't think there's like an intangible aspect to like what makes a
36:54 great podcast guest? Like it's just you know it when you see it. I don't know. I
36:57 don't know the answer. I'm just throwing it out there. >> I think then that's just a matter of
37:01 getting different kinds of data, right? So like the Twitter following for
37:05 example or like if you've had viral tweets, if it can't access that
37:09 information then then obviously it won't be able to make that call. But if it can
37:12 then it has everything that you you know then then there's no reason it can't.
37:16 >> Here's how I think about it. long there are heristics I would teach to a young
37:22 executive like Oliver or Marcus or Jacob and then their ability to execute on it
37:27 is probably I don't know 30 40 50% of my ability or 40 50 60% of your ability
37:34 whatever it happens to be so if you're taking a young person at the start of
37:37 their career who you're training and you take an openclaw instance I think
37:43 openclaw will follow your instructions perfectly whereas a young executive will
37:49 inconsistently follow your instructions. So that's the thing I'm seeing is young
37:54 executives early in their career are going to forget things. They'll be
37:58 variable. They they won't be perfect. So that's what I'm comparing is the scoring
38:04 happening every day at 7 a.m. The research happening every day at 7 a.m.
38:10 That consistency will beat a human because of consistency. And so what
38:17 we're what I'm finding is human failure is what makes these things so good is
38:22 they're more consistent. So in aggregate, you know, uh one of these
38:28 doing 365 days of guest research is going to beat a human just by the law of
38:34 numbers. And then okay, great. We still have to book the human. We still have to
38:37 send them a thank you. We we still have to produce the show. So what happens in
38:43 the old days we used to have to take I I make this analogy Alex to like the old
38:46 days of production when I start the showif started the show 15 years ago we
38:50 used to have a tricastaster tricastaster was like a $40,000 machine that does
38:53 what Zoom does for free >> right and eight people in Los Angeles
38:58 knew how to actually use it so you had to hire one of the eight people who were
39:01 trained on it. Yeah. >> Well and they would video switch now
39:06 because of AI Zoom switches to whoever speaking. You don't need somebody there
39:10 clicking camera A, camera B, doing a fade between the two. It just happens.
39:13 Then we had to take all the video streams, all the audio streams. And we
39:18 had to put take download them to a card, put the card in. So just moving the
39:23 files took across four cameras, three cameras that could take hours and then
39:28 putting all together. So that that's kind of what's I I feel like is
39:32 happening here is we're just eliminating chores and steps. Okay. Anything else on
39:36 the dashboard here as we wrap this up? >> Yeah, so most most of what I've showed
39:40 you whether it's the memory, the skills, the conron jobs and then the schedule
39:44 which kind of aggregates when all these things are going to happen. Those are
39:47 what I really look at every day. I will say there's one more kind of section in
39:51 my dashboard that is pretty important. It does have to do with memory. So the
39:57 DNA is basically what the model knows about you, what it knows about itself,
40:00 what it knows about the different agents, how it sets up heartbeats, which
40:05 are basically periodic tasks and that'll it will run and also in its DNA are
40:11 tools which are different tools that it has access to and how to use those
40:14 tools. An example of a tool would be notion. It would be lead IQ which is a
40:20 email search platform. It would be Google Docs. I mean a tool could be
40:25 Sonos or Spotify connecting to those platforms maybe Nano Bananas uh Gemini
40:32 API. So that's where tools go in. Yeah. Now what's super interesting is you vibe
40:37 coded or I should say OpenClaw vibe coded this dashboard. So this dashboard
40:42 does not exist natively inside of OpenClaw. You took the video
40:50 of somebody else's the YouTube video and you gave it to OpenClaw and said, "Build
40:55 me something like this dashboard in this video on YouTube." >> That's exactly what I did. And I
41:01 screenshotted it. Uh the video that I was watching, which was Alex Finn, who
41:06 was a guest recently, and I I did tell it a few different things, a little bit
41:10 of few little tweaks that I wanted to customize it to my bot. But overall that
41:16 was what I did and it basically it oneshotted it. It did actually not fill
41:21 in some of the categories like memory like skills. So I had to be I had to say
41:25 build out this but overall it built out the dashboard built out the different
41:28 sections. There are dashboards you can download in GitHub or as skills I
41:34 believe in Claude Hub which is a platform where you can get different
41:37 skills but I wanted to build out myself because as we know it can be a little
41:39 sketchy >> and like real shout out to Alex Finn. and I know he's become something of a
41:45 guru for our whole team after we had him on early on to talk about Claudebot
41:48 skills. >> All right, we'll drop you off. Oliver, great job. Alex, let's talk about Exo a
41:53 bit and thanks for sitting in on that. Any any advice for me of what I'm
41:56 building here at the firm and my approach to it? Anything we should be
42:01 doing better or we should look at? And and in terms of like people you interact
42:06 with using Exo's platform, where are we on the, you know, percentile? Are we in
42:12 the top 10% of users in terms of deploying this stuff? Top 1%, top 50%.
42:19 >> I I think uh there's certain aspects where you're quite far ahead. Others
42:26 that um I think I mean this this space is moving so quickly, right? I think one
42:29 of the things that I think you've got right is dynamic these sort of like
42:34 dynamic user interfaces that are very personalized. So I think this is the
42:39 future of the application layer is you don't have all these separate apps. You
42:43 just have this uh thing that kind of gets generated mostly on the fly and you
42:51 know that dashboard uh is moving towards that I think um but you'll probably you
42:56 know what you'll get is that it will it will compress even more to the point
43:01 where um you know everything everything that you see is generated on the fly.
43:05 Um, so I think you got that part right and I think that's like something that I
43:08 haven't seen many people doing yet. A lot of people are still using um, you
43:13 know, like uh the stock tools or whatever that are just provided out of
43:16 the box or using existing apps and that kind of thing. But I think building your
43:20 own apps is where this is going and where it becomes really powerful.
43:23 >> Yeah. Because if you make something bespoke software, you know, luxury
43:29 software was something that I don't know a private equity firm or a venture
43:32 capital firm would do. They'd have the luxury to hire two full-time developers
43:35 who have management fees all over the place. They would build luxury software
43:38 and they would have the developers come and just keep grinding. But the
43:42 developers hated those jobs typically, you know, they weren't building
43:45 something public facing and you know, just you get croft or whatever. But what
43:49 I like about this, Alex, and Lon, I'll open it up to you as well, is I'm
43:55 picking employees, team members, and saying, "Hey, uh, let me see if this
44:01 person is committed to getting rid of all of their work so they can move up
44:05 and do higher level work. There's always higher level work to be done. So, if we
44:10 can make this podcast, you know, run more professionally, faster, and grow
44:15 more, well, we can charge more for the ads, and we can launch another podcast
44:20 because we have more time. That's the thing that's kind of blowing my mind.
44:24 The the employees at our firm who are super hardworking, like everybody at our
44:28 firm does 50, 60 hours a week, very consistently, very hardworking. There's
44:32 nobody, to the best of my knowledge, that's slagging off except Lon. And uh I
44:39 kid I kid I kid how dare you uh lot is the most responsive but the distance
44:43 between the people using these tools specifically openclaw and the people who
44:48 are not right now it's like 10x leverage in week two it's 10x
44:53 leverage Alex what are you seeing in the field and then tell me what we should be
44:58 doing in terms of putting out our cluster and giving everybody on the team
45:01 a cluster like if I gave everybody on the team you know two Mac studios and
45:07 their own cluster and spent 25k per person letting them rip. Like how insane would that be?
45:13 Because that's not a lot of money all things considered. It' only be a half
45:18 million dollars. Like how much more powerful could this get?
45:23 >> Yeah, I think I think you you said um the word there leverage, right? Like
45:26 it's all about leveraging yourself. And I think the difference between someone
45:30 using these tools and not is massive and it's just going to increase and
45:34 increase. And we've seen we've seen that first with coding. I think coding was
45:39 the first one that you know um I didn't expect it to happen this quickly. Um but
45:43 you know I think it was claude code was the moment where it was like oh wow you
45:48 know if you're not using this then you're literally going to be like 10
45:52 times less productive than someone who is. That's happening now with other
45:55 things. So all these other things that you showed, all these other use cases,
46:00 um if you're using uh these tools and you're on the frontier, then you're able
46:03 to just get so much more done and really leverage yourself. So it's not so much
46:06 replacing people, uh but it's actually just being able to get more done, get
46:09 things done more quickly, and then be able to do other things. Um onto your
46:17 second point about local hardware. Um like I said, the the model layer is
46:21 basically solved like you know, the the gap has been closed. So we have really
46:24 good open source models and for a while that was like a big concern of a lot of
46:28 people is like are we actually going to have open source models um that are as
46:33 good as the closed source models. Um to me the nail in the coffin there was Kimk
46:38 2.5 that that is the like another big leap and I think there's a bunch of labs
46:42 now that are putting out open source models. Um now there's like still like
46:50 two two other problems um that uh that I see. One is being able to run those
46:55 models um on your own hardware or on your own infrastructure.
46:57 >> But you solve that, right, with your software, right? Your software.
47:00 >> Exactly. So that's what we're focused on. >> Yeah,
47:04 >> that's what we're focused on. And um you can Yeah, you can run I mean it's not
47:08 even 25K. It's actually like 20K of hardware if you get the u less storage
47:13 option. There's like Apple charges a lot for each incremental increase in
47:16 storage. So if you if you go for like the one terabyte then you're talking
47:21 about 20k of hardware to run Kim K 2.5 no usage limits the model's not going to
47:27 randomly change um you know daytoday so you know you know exactly what you're
47:29 running. >> Is there another choice like that you get more bang for the buck that like
47:36 hackers are using where they say yeah just get this Windows machine from Dell
47:40 and stack those or is Apple really with their Apple silicon the winner? Yeah,
47:44 right now it's Apple silicon. It's kind of like a perfect storm of things like
47:47 you know Nvidia is not so much focused on these consumer GPUs anymore. Um you
47:51 know you have memory prices skyrocketing. Apple has kept their
47:56 prices basically the same. Um so the cheapest option today even if you know
48:01 you go the full mile full custom stack is actually just two Mac Studios. Um and
48:07 yeah it costs about 20k and it's really about the memory unit economics. The
48:09 memory is so cheap >> and it's not about storage right? It's
48:13 not really about the storage. >> No, storage is not important. Storage is
48:16 storage is like you can you can also get ex like you need to be able to load you
48:20 need to be able to like download the model somewhere. Um but really it's
48:25 about having it uh fresh in memory hot in memory. If it's in memory then you
48:28 can run it fast. >> So who's using your software and how
48:32 much uh how do you make money? Like how do we pay you? Yeah. How does it work?
48:36 Are you an open source project? Are you a hosted project? Is it like you get
48:40 security and support? What what is your business model at EXO?
48:43 >> Yeah, so we have an open source core which is open source and it will always
48:47 be open source and a lot of people are running that themselves. A lot of
48:52 proumers I call them. Um just people who are willing to spend you know a lot more
48:57 money and um tinker a little bit more with their own setup. Uh on top of that,
49:02 our business model is an enterprise offering which is um we provide support
49:08 um and certain compliance features that you would need if you're running this in
49:11 an enterprise environment and we charge a licensed subscription for that thing.
49:14 That's how we make money. >> What does that couple of thousand a year
49:17 or something? Um, yeah, you can run it on even a a single Mac Mini and that
49:24 that runs at, you know, just uh $2,000 per year um for the the the lowest
49:30 subscription, but you've got people who now are buying actually uh more than 100
49:36 Macs and clustering them together. Uh so the it yeah, it varies quite a bit
49:39 depending on the scale of the deployment. >> Amazing. Uh and where's your company
49:45 based? How many people now? How's it going? How's the how's the company going
49:48 as a founder? Yeah. >> Uh yeah, we're we're a pretty small
49:53 team. Um all engineers, uh seven people based in London. >> Fantastic. And did you raise money yet
49:59 for the company or you're you're seed funded or you funded it? How's it going?
50:03 >> We have raised venture funding. We haven't announced anything yet, but uh
50:06 soon to be announced. >> Okay. Well, let me know. I might want to
50:09 slide a little. Uh Jay Cal might want to get a slice of this. I'm I'm super
50:12 excited about what you're doing. Appreciate you coming on the show.
50:15 appreciate you making this incredible product and we will be a customer uh
50:20 probably over the weekend or next week because we would definitely want the
50:22 enterprise features and and I guess you pay for the scale of the GPUs and the
50:27 memory. Is that the the how the price? >> Yeah, per per nodes that you're running
50:29 on. >> Oh, okay. So, two Mac Minis, same price as two Mac Studios. Just how many nodes?
50:35 What's the largest number of nodes somebody has daisy chained? What do you
50:39 that's what you used to call it back in the day. What do you call it when you
50:41 connect multiple? Yeah. So this is a this is a really interesting um just
50:47 area right now of how do you actually scale and for a while people were just
50:52 scaling out. So you know you would just basically run the same model on multiple
50:56 instances and because it's consumer hardware it doesn't have a lot of the
51:00 same uh capabilities as enterprise grade hardware. But recently um Apple uh came
51:08 out with RDMMA support uh which is basically a way to share memory between
51:11 devices in a way that's very low latency. >> Um that's something that you only really
51:15 saw in the data center before, but they've kind of brought that technology
51:18 into consumer hardware. >> It's incredible. Yeah. And you connect
51:20 these on >> Yeah. You just connect it with Thunderbolt 5, which is like you can buy
51:25 like a $50 cable. So, if you're talking about two Mac Studios, you buy a $50
51:28 cable to connect them and you have basically one big GPU out of those two
51:33 Macs um because of that low latency capability. So, now um we're starting to
51:38 see, yeah, it depends, you know, scaling up and scaling out, right? Scaling out,
51:42 we've seen more than 100 um but scaling up, you know, you can you can put about
51:46 four together at the moment um to increase your TPS on single requests.
51:51 what I mean by scaling up. Uh but in terms of if you want to support let's
51:55 say now a company of thousand people, you can easily scale that out. Uh you
51:59 just add more Macs and you can connect them um basically however you want. So
52:05 Exo, we build it in a way that um supports uh any ad hoc uh interconnect.
52:11 So you can just connect them in a mesh um and keep scaling. Keep scaling.
52:15 >> Crazy. Who's got the largest cluster? or you have to say the client name, but
52:18 like what type of client, a finance client, a hacker has the most number of
52:23 like uh Mac Studios connected and like >> the largest is actually something a
52:28 little bit different which is interesting because we built the kind of
52:31 infrastructure to be able to do clustering and it's it's not just LLM
52:37 like the the biggest um cluster right now is a HPC cluster um and they're
52:41 doing like scientific computing workloads on there and they're running
52:46 um over 100 Mac minis and they found that actually it's the cheapest way to
52:54 uh per dollar um to run that specific kind of workload. So there's a lot of
52:58 spillover into other things as well. We've also got like financial services
53:03 um uh customers who are running fairly big clusters like 32 Mac studios um and
53:11 um yeah it's um I think we just see bigger and bigger uh um bigger and
53:15 bigger clusters over time >> HPC high performing compute is that the
53:18 acronym >> yeah exactly so it runs actually all on CPU and uh that's the thing about this
53:25 this this silicon is very like Apple silicon is very is is very good you the
53:30 most advanced processes and it's like you know um the power efficiency is
53:35 really good. So it turns out there's a lot of other stuff you can do with it as
53:40 well. Um so if you would buy let's say you know a bunch of Mac studios for your
53:45 um for for your employees then you know they can also use that for other things
53:48 right they can use that as a workstation they can use it for you know all these
53:53 things that um open floor needs maybe you know sometimes it needs to run uh a
53:58 compiler or something or it needs to run like um something that's a bit more
54:01 demanding and that's that's the point it's general purpose hardware that you
54:03 can use for other things. >> Amazing. This is extraordinary. Where
54:08 can people find out more about ExoLabs? >> Uh, you can go to exolabs.net.
54:12 >> Perfect. exolabs.net. Alex, thank you for coming on. We'll have you on again.
54:16 The AI just told me you got an incredibly high ranking. You were
54:20 personable. Uh, you had deep insights, you were uh cordial. So, yeah, I think
54:25 our AI overlords liked you in the uh >> models are getting good. Models are
54:28 getting good. It's >> they're they're learning. They're
54:30 learning. Yeah. >> All right, Alex, thanks for coming and
54:33 we'll drop you off. All right, let's bring on our winner of the gamma pitch
54:37 competition. This was a heated pitch competition, but next visit AI won.
54:41 Ryan, congratulations. You won. >> Thank you. >> It's uh
54:44 >> there it is. >> Awesome. >> Uh what did he win? Yeah,
54:48 >> it's a 25K investment from uh Twist and from our friends at Gamma, the AI
54:53 powered uh presentation maker, which is incredible, which Ryan used to make the
54:57 winning pitch deck. Of course, >> I'm Ryan Enelli, CTO and co-founder of
55:02 Next Visit AI. We saw burnout by doing the charting. so doctors can do the
55:07 healing. I spent years going to doctors seeking answers and ended up hours away
55:11 from my death because my care was fragmented. My providers were overloaded
55:16 with paperwork. My history was scattered and it resulted in my care being
55:19 neglected. I'm not alone. One in four patient charts contain errors. Clinicians spend
55:26 over three hours a day on charting and this leads to burnout. I want you to
55:31 meet Dr. Rathor. Before next visit, he saw 16 patients a day, was burnt out,
55:36 and had clinical errors. Now he sees 24 patients a day, saves time, and also saw
55:42 a 30% revenue increase. Here's how it works. Dr. Rathor selects a patient,
55:46 starts his session, and next visit listens. Clinical data is built in real time with
55:53 deep insights into the patient chart. When the patient leaves, the chart is
55:56 finished, and the notes reviewed by Dr. Rathor. Then it's ready for billing.
56:00 It's fast, ehr ready, and hipaco compliant. Since launch, we've gained 311 users and
56:07 have 68 paying customers. And our customers are addicted. We have 1.6%
56:12 churn, 24% conversion, and a near perfect MPS score. We've scaled to
56:19 $9,000 MR since launch. Our CAC is 189 with a $1,700 LTV, and our average
56:25 revenue per user is $133 per month. We're starting with behavioral health in
56:30 the US. A $2 billion TAM capturing 5% or 60,000 customers gets us to 100 million
56:36 ARR. Most competitors are just scribes. We're a complete platform that providers
56:40 trust. We provide real-time clinical decision support, build accurate data,
56:45 and become irreplaceable. I'm a full stack engineer with 15 years
56:48 of experience in enterprise environments. My co-founder, Dr. Rafi is
56:53 a psychiatrist with over 15 years of delivering patient care. We're next
56:57 visit AI. We solve burnout by doing the charting so doctors can do the healing.
57:00 Thank you. >> Unbelievable. Incredible. I'll give a little golf clap here. Get a little golf
57:06 clap going. That was perfect. A perfect pitch. You explained exactly what the
57:10 problem was. You explained what the solution is and the opportunity in terms
57:14 of the total addressable market and why you are uniquely and your partner who's
57:18 a psychiatrist are uniquely qualified to do this. Uh, so this is as close to a
57:22 perfect pitch as you can get. If I were to score it, maybe 8.5 out of 10. I
57:28 don't give 10. So, you know, 8.59 and 9.5 would be the three choices. I think
57:32 making sure people understand this is for psychiatrists and psychiatry and
57:36 that you're very focused on that. Tell everybody what Next Visit is and how
57:41 you're doing in terms of product market fitting customers. Next Visit is an AIC
57:45 scribe and documentation platform for clinicians uh specifically behavioral
57:49 health like psychiatrists. I I don't know. It's just been a crazy past couple
57:54 months with the accelerator and uh just our growth internally. I mean we're
57:59 producing right now for physicians probably about $1.6 million a month in
58:03 revenue for them. >> Well, you got to try and capture 5% of
58:08 that. If you capture 5% that No, I mean that's literally like the uh the great
58:13 value proposition. If you give more than you take, you will continue to grow. And
58:18 what a And that's an amazing um replicant you have there. A synthetic
58:23 cat on that cat tower behind you. It looks so real. Uh is your owl real?
58:27 >> Yes, he is. >> Your owl is real. Okay, there you go.
58:31 What are you gonna spend the 25k on? You guys going to Vegas? You're gonna just
58:35 have a a corporate retreat? you know, invested in uh Plaude Noteakers. I think
58:39 you guys put me onto the Plaude Notetaker, which is a great noteaker uh
58:42 user. What are you gonna put it towards? You gonna go redesign your website? What
58:45 what's the uh what's the idea here? >> I think we're going to use this towards,
58:47 you know, we're really capital efficient. So, I feel like we can get a
58:51 lot of stuff done in terms of integrations and branching out to more
58:55 EMRs because that's what we hear a lot is doctors want interoperability. They
58:59 don't want to have to plug 15 different things in. So the more they can just be
59:03 inside of next visit without having to go externally um is better.
59:09 >> All right, well done. All right, we'll drop you off. Continued success to visit
59:13 AI. >> Good job. All right, well done. Wow, the show just keeps going. All right, I
59:20 promise. I promise >> one more segment uh before I go out uh
59:24 with my friends to ski. I got an early ski weekend uh in with my my friends
59:27 from New York. >> How fun. Uh yeah, great to see some old
59:33 friends. Uh I had asked you like, hey, on the Friday show, just to give people
59:36 something to do on the weekend that we would do, hey, Lon and Jake Cal off
59:38 duty. >> Sure. >> I am enamored with a certain TV show. I
59:43 asked you to try to watch a couple of episodes and talk to me about what you
59:46 think of this. >> Watch four episodes. I caught up on season four by your request. I'm all
59:51 caught up >> HBO's industry season 4. So this season, I could tell immediately why you liked
59:58 this season. The whole season revolves around Tender, a fintech company and
60:03 app. They're transitioning from a payment processor for porn sites and
60:09 sort of sketchy kinds of >> fans basically. >> Yeah. The the the show's fake version of
60:14 Only Fans, which is called Siren, by the way. Uh so they they have been handling
60:18 payments for those kinds of sites and and a a a site I don't think we can
60:22 mention here on the show, Captain Blank. Uh it's even more even more X-rated. uh
60:26 and then their trans but they're transitioning to they want to be a
60:31 respectable uh neo bank operating in the UK all regulated all uh you know very
60:37 front of board it reminds me a lot I think tether was probably an inspiration
60:41 for this season don't >> definitely yeah it's it's basically a a
60:47 payments processor like Stripe but or Tether but they're involved in things
60:52 that are a bit seedy and in the UK this is where regulation comes in so they're
60:55 really rip ripping this from the headlines. Who's ever doing this is
60:58 listening to this week in startups all >> they're clearly listening. Yeah,
61:01 >> they're clearly dialed in these writers. I'd love to have the writers on at some
61:08 point, but they um want to build uh they're they're facing push back and
61:13 they want to be respected by regulators as we've talked about on the show with
61:17 Alex on Mondays and and yourself that there's so much regulation coming into
61:20 the industry and there's a tension between Europe, America, and inside of
61:26 Europe, specifically in the UK around, you know, freedom of speech on platforms
61:32 and are they going to be a socialist or controlling uh regulatory environment or
61:37 are they going to be freewheeling and let things grow. So you have this
61:41 tension of politicians meeting with the teams uh and the teams are trying to court
61:46 them and say yeah we're going to get rid of we're going to give up 30% of our
61:48 revenue to go clean and we're going to add all these things but do you want to
61:53 step in the do you want to stop the UK from having its own you know basically
61:58 unicorns and are you you know a del because we need the the the folks in the
62:03 UK and the politicians in the government want to have economic prosperity so you
62:07 have that tension as well they're represented They've got this one Labor
62:11 Party politician Ban I think or whatever her name is. She's sort of representing the
62:15 government that's sort of in the middle here that they're trying to work with.
62:21 >> Also interesting of note, they have a fintech journalist
62:25 >> and they short things. >> Yeah. >> And he is awesome. So you have this
62:31 fintech journalist coming in and doing very shady uh and there'll be some
62:35 spoilers here, but we won't give too many of them. You can still enjoy it.
62:39 um a fintech journalist coming in trying to get dirt on these companies and he's
62:46 working with short sellers. Now, if you haven't seen the first couple of seasons
62:49 of industry, they were working at like a Morgan Stanley Goldman Sachs on a
62:54 >> trading point. Peer point is the name of their bank from the first few seasons,
62:57 but that's gone now. That's over. >> That's over. And everybody's wondering
63:01 like what happens like to the show. It turns out you've now got it in the
63:06 startup world. He just reset the whole concept to now there's a startup,
63:11 there's a short selling firm, there's this Financial Times like journalist
63:15 doing crazy things and then working with the shorts which is like Hindenburg or
63:21 you know other short sellers and they I think they even name check like Herbal
63:24 Life and that short >> they they I believe they mentioned
63:27 Herbal Life by name. Yeah. >> Yeah. And Aman I guess was the person
63:31 who shorted it. Um, and then they So, it's it's got like this really authentic
63:37 as somebody who's in finance and tech, it feels like they're hitting the notes
63:41 really well. On top of this, the protagonists of this are essentially two
63:48 female leads, one of them the shorteller and then one of them the wife who and
63:52 they both previously worked at this uh >> Harper is the short seller and Yasmin is
63:57 married to uh Lord Lord Henry Muk who's played by Kid Harrington from Game of
64:00 Thrones. And I mean with I don't want to give away any spoilers, but these what's what
64:08 I love about the show is nobody's likable. >> No, >> everybody's terrible. It's in a way like
64:13 The Sopranos. >> Yeah. It has some real overlap with Succession, I think, in that it's a it's
64:19 an exploration of these sort of sad, angsty, neurotic, extremely wealthy
64:24 people who seem very privileged from the outside, but they're sort of hollow
64:28 inside or they're they're nealist or they don't know, you know, what to do
64:31 with themselves or how to be happy. And I think that's a that's a big overlap. I
64:35 think another interesting overlap with Succession that I noticed is both shows
64:39 are sort of about how, you know, business is this constant balance
64:44 between personality and pragmatism. That you've got one person in the office
64:48 who's like, "That's a dumb strategy. We should just, you know, do this. These
64:51 are the three obvious things that we should do that would protect our
64:54 position." But then you've got these people who are either they're having a
64:58 breakdown, they're having a personal crisis, or they're they're just drugs or
65:02 they're vision, right? And it it's it's sort of whole mix like no we're going to
65:06 do things my way and you keep seeing that dynamic come up over the course of
65:10 the season. And of course succession was also about that that the people who can
65:15 be very cleareyed and very matterof fact like Logan Roy he's going to make the
65:18 right call because he's just calculating the angles whereas emotional people like
65:23 Kendall Roy are going to keep getting in their own way and overpowering
65:27 themselves. And I think Henry Muk is a great example of a guy who just can't
65:31 get out of his own way in the within the show. Yeah. >> And the Yasmin is gone from this like
65:37 very much a victim early when you see the first two seasons to being like very
65:44 Mchavelian in a very dangerous insane way >> that would make you know any student uh
65:51 or any themes around the Me Too era you know blown out of the water. It is dark
65:54 and >> it's a very it's a very horny show and that that sort of surprises me because
66:01 it it is becoming a hit. It's It's growing its audience with every new
66:05 season. And you hear the the line you always hear about TV now is
66:11 >> Gen Z does not like romance. They don't want sex in their movies and TV shows.
66:15 It's like they that, you know, unnecessary sex scenes is always what
66:19 you hear. And yet this is way more than Succession. A very horny show. One of
66:25 the horniest shows I can recall. I I have never seen anything this like
66:31 >> crazy in terms of mixing >> uh permiscuity, deviance, drug use, and
66:36 business and getting it all kind of right in a crazy kind of way. It's also
66:42 got like >> it it really does not pull punches. The performances are amazing. It's a very
66:49 young cast I think that is that they they basically have given the reigns to
66:52 these two young >> uh female actresses who are crushing it
66:57 in this show and then there are other actors who are a little bit older on the
67:01 margins but it's a very young show. It's incredible. Uh >> yeah, you're talking about Ma who plays
67:07 Harper and then Marica who plays Yasmin. They're the two sort of females but they
67:10 they've added a lot and Ken Lung I always I've liked him for years. He was
67:15 on Lost. He plays Eric Tao, uh, who's sort of Harper's mentor that she starts
67:17 a hedge fund with. >> He's the Gen X boomer. He's kind of like
67:23 the Gen X gay-haired >> boomer banker who's got his own money,
67:27 his own success, and is in it because he's got a addiction >> to being a finance guy. And
67:34 >> yeah, he's playing golf and bored at the beginning of the season and, you know,
67:37 he's going to like have to get back. And they also, they're adding great people
67:40 every year. They added Kit Harrington uh from Game of Thrones before this year.
67:44 They added I said it was I don't remember the actor's name, but he's
67:47 Jonathan from Stranger Things. And that's Kieran and Shipka as Haley uh the
67:52 sort of executive assistant who gets into shenanigans with her boss Henry and
67:58 his wife. Uh she was Sally Draper on Madmen if you recall. She was Don
68:02 Draper's daughter from Madmen. Yeah, >> this show is uh firing on all cylinders.
68:07 It's building um it's building its audience like you said. I found out
68:10 about it. There's a >> really great podcast you should watch
68:13 called The Watch. >> And The Watch is how I discover new
68:18 shows that I should listen to. Andy Greenwald and um got the other guy's
68:23 name. I'm an Andy Greenwald guy. >> But uh they are like deep in the
68:26 industry. It's part of the >> Chris Ryan. The other guy is Chris.
68:30 >> Chris Ryan. Yeah. Chris Ryan's brother. >> Ringer is the watch. Yeah.
68:32 >> But these two guys have been doing pods together for a long time. And so I
68:35 highly recommend you check out the watch. They do a great job breaking down
68:39 every episode and they are like super industry addicted and they're the ones
68:42 who turned me on to it a couple years ago. All right, that's it. We had a
68:46 great show today. What a great week at this week in Startups Twist firing on
68:52 all cylinders. We'll see you all on Monday and we will certainly be doing
68:55 more open CL >> more Claude of course. And if you uh if
69:00 you hit these QR codes here, I think you can these QR codes send you to to write
69:05 a review and this QR code that sends you to subscribe automatically to YouTube.
$

We built OpenClaw Ultron to replace 20 people at our company | E2246

@thisweekinstartups 1:09:08 18 chapters
[AI agents and automation][content creation and YouTube][productivity and workflows][solo founder and bootstrapping][e-commerce and conversion optimization]
// chapters
// description

This Week In Startups is made possible by: Crusoe Cloud - https://crusoe.ai/savings Lemon IO - https://Lemon.io/twist Northwest Registered Agent - https://www.northwestregisteredagent.com/twist Thanks to our guests: Alex Cheema of ExoLabs http://exolabs.net Ryan Yanneli of NextVisit https://nextvisit.ai/ Today’s show: It’s the Age of Ultron at TWiST and LAUNCH. We’ve given our OpenClaw digital Replicants the keys to all of our systems and we’re seeing how much of our jobs they can reall

now: 0:00
// tags
[AI agents and automation][content creation and YouTube][productivity and workflows][solo founder and bootstrapping][e-commerce and conversion optimization]