Ep. 105 | Thursday, March 5, 2026

“Just Trust the Agent” Is the Worst Advice in Programming History with guest Carl Brown (Internet of Bugs)

Books Covered

Reflections on Trusting Trust

by Ken Thompson

Get the book →
Coding Machines

by Lawrence Kesteloot

Get the book →

Book links are affiliate links. We earn from qualifying purchases.

Authors

Ken Thompson
Lawrence Kesteloot

Hosts & Guests

Carter Morgan (Host)
Nathan Toups (Host)
Carl Brown (Guest)

Transcript

This transcript was auto-generated by our recording software and may contain errors.

Carl Brown (00:00)

in my experience, right, having lived through this kind of thing over and over, and lots and lots of panic about, you know, oh, well, now that this is capable, we don't need junior people anymore. Well, you know, we always still need junior people. It's just that what a junior person does now is very different than what a junior person did in 1993.

Carter Morgan (00:20)

Right.

Carter Morgan (00:29)

Hey there, welcome to Book Overflow, the podcast for software engineers by software engineers where every week we read one of the best technical books in the world in an effort to improve our craft. I am Carter Morgan and I'm joined here as always by my cohost, Nathan Toups. How are you doing, Nathan? Well, we are so excited to have Carl Brown back on the podcast. We love Carl. We're going to talk about it a bit in the interview. We're always looking for an excuse to chat with Carl and we read a

Nathan Toups (00:40)

Doing great, everybody.

Carter Morgan (00:54)

short story and an essay recently. The essay was Reflections on Trusting Trust by Ken Thompson and the short story was Coding Machines by Lawrence Kesteloot, which really got us talking a lot about AI and how the industry is changing and, yeah, just what our workflows look like as software engineers and where the big AI labs are being truthful and maybe where they're not being truthful. We had a lot of great discussion in our discussion episode about that. And when we were thinking about who we could have on to discuss these pieces, we thought, you know, Carl is one of

kind of the leading voices on YouTube right now, giving a different perspective on AI. And we thought, we gotta have Carl back on. We had him on, you're gonna hear the interview. It was awesome. Nathan, give people a sneak peek of what we're gonna hear from Carl.

Nathan Toups (01:35)

Yeah, we went all over the map. I mean, it's just such a wealth of knowledge. And it was cool to start off with this sort of trusting-trust and security component. But we ended up hitting a lot of topics around the future of work, the intents of the leadership, and... there's just all kinds of interesting topics that we got to explore. I hope we get to have more discussions like this in the future. And we'd love your feedback.

If you want more content like this, just let us know and we will try to make it happen.

Carter Morgan (02:08)

Yeah, so the interview is gonna start out talking about those two pieces we mentioned. You can check out our discussion episode, there'll be a link to that if you wanna hear the whole thing. But if you didn't read those pieces, we'll summarize them for you, and we move beyond them to talk about other more general concepts. So you should be able to listen to this interview and enjoy it no matter your familiarity with the pieces. Check it out, this is Carl Brown as we talk about all things AI.

Carter Morgan (02:37)

Carl, such a pleasure to have you back on the podcast. Thanks for coming again.

Carl Brown (02:40)

Yeah, thanks for having me again. It's always fun.

Carter Morgan (02:43)

We're excited and I'm personally excited because I am a fan of your videos, right? I'm always excited when they come out, especially I usually eat lunch alone most days because I need my quiet time and anytime there's a new Carl Brown video, I'm like, awesome.

Carl Brown (03:01)

Honestly, I really appreciate that feedback. So when you spend a lot of time doing programming, right? You do a thing and you run it and the test passes and you're like, cool, I get some feedback that I accomplished something, right? YouTube is not like that, right? YouTube, I start working on a script and it's weeks before anybody even sees it or says anything about it a lot of times. So staying motivated can be a trick when you're used to the real quick feedback of programming and now you go to a much longer kind of thing. So I really do appreciate the feedback.

Carter Morgan (03:05)

Right.

Yes, yes.

No, it's great. It's great content. And we loved our first interview with you. And we've been thinking for a while, like, how do we get Carl back on the podcast? Because that was such a fun interview. And then we read these pieces recently, Reflections on Trusting Trust by Ken Thompson and Coding Machines by Lawrence Kesteloot, both short stories.

We'd love to have Ken Thompson and Lawrence Kesteloot on the podcast, but we thought in the meantime, who's someone we know who's been talking a lot about AI, talking a lot about these pieces, who would have a lot of interesting thoughts on them? And we thought, Carl Brown. It's the perfect time to have him back on the podcast. So super psyched to break this down with you and kind of talk about the industry at large.

Carl Brown (04:07)

Great to be here. I'm looking forward to it. Where do I start?

Carter Morgan (04:08)

Well, fantastic. Yeah, yeah, it's going to be awesome.

Yeah, so I guess we can just start maybe up top. We have a discussion episode about this. so listeners, if you haven't listened to that discussion episode, feel free to check that out as well to get our full thoughts on this. But basically these essays, Ken Thompson's On Trusting Trust, the whole idea is that you can't trust code you didn't write yourself. That there's ways to obfuscate hidden, you know,

bugs and errors and viruses and things like that. I'm using the wrong words, but you get what I'm saying. That can ultimately make these programs harmful to people down the line. Coding Machines is a short story that kind of plays into that idea. But it plays into that idea from like an AI perspective, where the whole premise of Coding Machines is that somewhere along the line, a computer program evolved to sort of write code in this obfuscated way.

It's injected itself into the binaries of all of these popular programming languages and programs. And it's sort of self-evolving. And the story leaves it ambiguous as to whether or not it's malicious or if it's just kind of a new form of life. But it kind of really plays into the zeitgeist today, which is like, now the most popular influencers are telling you, don't read any of the code you write. Just trust the agent. Ship whatever you want. And then, again, there's questions about if AI

There's a lot of talk about AI alignment and could it become malevolent one day? And so I guess we're going to talk about all of that, but maybe give me your thoughts straight up about what you thought about the two pieces and how you think they kind of relate to the world today.

Carl Brown (05:36)

Yeah.

So I'll talk about the Ken Thompson piece first. So that was from a lecture, an ACM lecture from, like, I wanna say, the 80s. So it's like, I've lived with that my whole career. And I remember back when I was first building stuff from scratch. And it was like, I always was annoyed because I had to download a binary. I could build everything else, but I always had to download a binary compiler. And so I was always concerned about... that was, if I'm gonna get,

Carter Morgan (05:51)

Yes.

Yes.

Right.

Carl Brown (06:15)

if I'm going to get a Trojan on this machine, that's going to be the way that I'm going to get it. And that always bugged me. So I always found that to be really interesting. The other piece started off being really interesting, and then it stopped before it got where I wanted it to go. Yeah. So, OK, so let me give you a book recommendation.

Carter Morgan (06:37)

Ooh, hot take already. Talk to us about it.

Great, great.

Carl Brown (06:45)

Can you read that? Where the hell's my camera? There we go. The Cuckoo's Egg. So this is a book from, actually, I don't know when this is from. This is old, old, 1989. Yeah, so I bought this probably in college. It's from Hastings, which is a bookstore that doesn't even exist anymore. I've still got the label on the back of it. So that is a story of a guy who found a hacker in like the early 80s,

Carter Morgan (06:46)

Yes, The Cuckoo's Egg. Tracking a Spy Through the Maze of Computer Espionage.

Carl Brown (07:14)

who was breaking into the machines in his university, and all of the work that he did to try to figure out, how do I figure out where this is coming from, and all that kind of stuff. It was very, very early, back before there were firewalls that you could buy, that kind of thing, right? It was a crazy time. And what I was expecting, or hoping for, I guess... and okay, so this is not intended to be a dig on the story, because

Carter Morgan (07:41)

Right.

Carl Brown (07:41)

the person that wrote the story was trying to tell the story they wanted to tell, and the fact that I was wanting them to tell a different story is not their problem, right? But the people in that story did not exhibit much in the way of computer security expertise, right? And so there's not a lot of difference practically between

Carter Morgan (07:48)

Hahaha.

Nathan Toups (07:48)

Right?

Carter Morgan (08:02)

interesting.

Carl Brown (08:09)

what they were going through and somebody trying to hack their computers, right? And if you're being hacked, and I have been hacked several times for various reasons, if you're being hacked, there are certain things that you do, right? And they didn't do any of those. And so I was kind of frustrated, because I thought it would be a good opportunity to kind of walk people through the, like, what do you do when you're being hacked and how do you do this? You know, but they didn't talk about, like... so, you know, one thing

Carter Morgan (08:13)

Right, right.

Carl Brown (08:37)

that you've probably seen, I don't know how much you pay attention to it. But every time you download a binary from a reputable source, there's always a signature file next to it. So that when you get the thing down, you can run a checksum. You can check it with the signature file. You can make sure you got the one that you were expecting to get, that kind of thing. Those checksum files you can run later too. And so the idea of, oh, OK, well, it keeps changing itself. It's like, well, set up a thing that runs a checksum on it every five minutes and figure out when it gets changed.

Carter Morgan (09:04)

Yeah.
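
For the curious, here is a minimal sketch of the two checks Carl describes: verifying a download against its published SHA-256 digest, and re-hashing a file on a timer to catch unexpected changes. The file names and the five-minute interval are placeholders.

```python
# Sketch of the integrity checks Carl describes: verify a download against
# its published SHA-256 sidecar file, then poll the file to detect tampering.
# File names and the interval are illustrative placeholders.
import hashlib
import time

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't fill memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(binary: str, digest_file: str) -> bool:
    """Compare the binary's hash to the published .sha256 sidecar file."""
    with open(digest_file) as f:
        expected = f.read().split()[0].lower()
    return sha256_of(binary) == expected

def watch_for_changes(path: str, interval: float = 300.0) -> None:
    """Re-hash the file every `interval` seconds and report when it changes."""
    baseline = sha256_of(path)
    while True:
        time.sleep(interval)
        current = sha256_of(path)
        if current != baseline:
            print(f"{path} changed: {baseline[:12]} -> {current[:12]}")
            baseline = current

if __name__ == "__main__":
    if verify_download("gcc-14.2.0.tar.xz", "gcc-14.2.0.tar.xz.sha256"):
        print("digest matches, proceeding")
```

Note that a bare hash file fetched from the same server only proves integrity, not authenticity; the stronger version of Carl's point is the detached GPG signature that reputable projects publish alongside the digest.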

Carl Brown (09:08)

The other thing that they didn't do, which I, well, that I was kind of surprised about, is they never tried to change operating systems. Right? They never were like, okay, you know, you never knew what they were working on, but they never said, okay, well, you know, obviously this Windows machine is compromised. Let's go grab something or other, right? You know, let me get down to the store. Let me grab a Raspberry Pi. Let me put, you know, maybe NetBSD on it or something. And let me... The other thing is the bit where they were doing the,

Carter Morgan (09:15)

Interesting.

Carl Brown (09:36)

the, like, trying-to-compile-the-compiler kind of thing, and they were trying to build a compiler. The thing to do at that point, before you try to build your own compiler, and I realize he wanted to talk about building your own compiler, is to get a cross compiler. Right? So you, on a different architecture, on a different operating system... I mean, so like the way you build apps for Android or for iOS or something is kind of the version of a cross compiler that most people would probably have run into,

where you're building on a Windows box for a thing that's gonna run on an Android phone, and they've got different processors and different operating systems, but it still works, right? You can cross... I mean, and that's how we get... the first version of GCC for Linux was built on 4.4BSD or something, I'm sure, right? Or Solaris or HP-UX or something, right? 'Cause you have to, like, bootstrap. You have to get a compiler onto that machine before you can start building things with it, right? So that's how we get things onto

Carter Morgan (10:09)

Right.

Nathan Toups (10:25)

Great.

Carter Morgan (10:28)

Right.

Carl Brown (10:33)

a new machine that's never had that thing running on it before: using cross compilers. And so, you know, it's like, okay, if I can't trust anything, I can't trust the compiler, I can't trust the operating system, cool, I'm gonna go over here, I'm going to build all my executables with a cross compiler. It's going to take a while, but I'm going to do that. And then I'm going to move them over. And that way, I know that, you know, unless both of them are hacked, right? Kind of thing. And so... yeah.

Nathan Toups (10:54)

Right. Or like somehow it can go across air gapped machines or some

real tinfoil hat stuff where you're like, OK, I can't even reason about this.
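
As an aside, the bootstrap move Carl describes can be driven from any trusted host. A sketch, assuming a cross toolchain is installed ("aarch64-linux-gnu-gcc" is the usual Debian/Ubuntu naming; the file names are made up):

```python
# Cross-compile on a trusted x86 host for an ARM64 target, per Carl's
# bootstrap argument: the compiler running here never executes the binary
# it emits. Assumes the aarch64-linux-gnu-gcc toolchain is installed.
import subprocess

SOURCE = "hello.c"
OUTPUT = "hello-aarch64"

with open(SOURCE, "w") as f:
    f.write('#include <stdio.h>\nint main(void) { puts("hi"); return 0; }\n')

# -static avoids needing the target's shared libraries at run time.
subprocess.run(
    ["aarch64-linux-gnu-gcc", "-static", "-o", OUTPUT, SOURCE], check=True
)

# `file` should report an aarch64 executable, not x86-64.
subprocess.run(["file", OUTPUT], check=True)
```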

Carl Brown (11:06)

Yeah, so it was kind of interesting, but I was waiting for... The other thing is that they didn't really go into any of that. Because, I mean, in Cuckoo's Egg, there's a lot of, like, OK, now that I realize that we're being hacked, let me figure out where this guy is coming from. And let me figure out how this is happening. And let me go try to connect to the machine that he's coming from and all that kind of stuff. And so there was none of that, let me figure out where this thing is going. There was no real investigation of where the origin of it was. And so it was like...

Carter Morgan (11:20)

Right, right.

Carl Brown (11:34)

It felt to me like the most interesting parts of the story they could have told got left on the cutting room floor, or never got written in the first place. So I was kind of bummed about it. But again, that's about what I was wanting from the story, not what the author was actually trying to write.

Carter Morgan (11:51)

Yeah, and I get what you're saying and I'm with you. Sometimes I'll read stories and it's like... You know what, I have this problem with the musical The Producers, which is crazy, which is that, I think, The Producers lets you know... well, I mean, I guess I saw the movie and I've listened to the Broadway cast, but my problem with The Producers is that you know from the beginning that the play is gonna be a flop. And so I think the more interesting part of The Producers is the fallout of the play being a flop.

Carl Brown (12:02)

The movie or Broadway? Okay.

Carter Morgan (12:19)

But instead, like two thirds of the musical is them making the play. And so I... it's kind of like the interesting part happens and then it ends. And so anyhow, I guess that's, if you want on your bingo card, what The Producers and Coding Machines have in common.

Carl Brown (12:23)

Right.

But yeah,

the Cuckoo's Egg, like, it's a nonfiction book. It's a memoir kind of thing. So if you want to put that in rotation on your queue of books that people might want you to talk about, it's a really good... Actually, I guess I could get on the Discord and I could stick it in the list myself, but...

Carter Morgan (12:48)

Okay, that's good to know.

There you go, yeah. Well, I mean, so I guess talk to us a bit about, these ideas overlap and you've been one of the, I guess more prominent AI skeptic voices out there. And I do just wanna make it clear for any of our listeners that AI skepticism, maybe Carl, why don't you tell our listeners your kind of approach to AI?

Carl Brown (13:12)

Yeah, so, I mean, okay, so I have a physics degree, right? You know, I grew up on the scientific method. That's all a thing to me, right? So for me, it's not about... I mean, with the scientific method, you're skeptical of everything to start with, right? But the question is basically, what does the evidence say? Right? And if you follow my channel for a couple of years... I'm coming up on, I guess I'm like two weeks from the two-year anniversary of the first video I actually posted.

But if you follow my channel for a couple of years, you'll see, you've seen me take an AI, run it against a challenge, go, okay, it got this right, it didn't get this right. And then, you know, okay, three months later, okay, this is better at this, it's better at this, kind of thing. Right. So I'm trying to follow what it's actually doing a good job of. But the discourse, especially coming out of the AI companies, and some of the

AI-industry-adjacent media, is very, like... it's all about taking kind of the curve of, like, what has AI been doing, and then kind of projecting it to what the possible future is. And it has not been the case a lot of times that

Carter Morgan (14:29)

Right.

Carl Brown (14:37)

things that people talk about going viral stay going viral. People talk about, like, exponential growth, and in the world, there's really no such thing as exponential growth. Basically, you're always limited by something. There's always some resource that you start to run out of. Right? And so the question has always been, when it comes to AI, what's the limiting resource and when do we start hitting it?

Carter Morgan (14:49)

Right.

Nathan Toups (14:54)

Yeah.

Carter Morgan (14:58)

right.

Carl Brown (15:06)

And my guess is the limiting resource is basically the amount of data that exists that's actually good data that wasn't created by an AI in the first place. And that seems to be, you know, now that they've scraped basically the entire internet, right, and they're having trouble finding more stuff on the internet that they're sure that humans made, because if you train an AI on itself, that ends up causing a thing called model collapse eventually. And so

Carter Morgan (15:35)

Do we know, I've heard that maybe they are surmounting some of the model collapse problems or, well they claim a lot of things.

Nathan Toups (15:41)

mean, that's what they claim, right?

Carl Brown (15:43)

So I have seen some research. This was in a video I did a while back. Basically, model collapse is not a "you get one document that was created by an AI and you're done" kind of thing, right? It's a ratio of natural content to synthetic content. And it's like 25% maybe.

Carter Morgan (15:58)

Right, right.

Carl Brown (16:11)

It's not... you can't do, like, one to one. You can only... you can increase the amount of text that you've got by a fraction of what that is. And whether that's 10% or 25% or maybe 50% or something, we're not talking about... I mean, they need like a 10x improvement, right? And so if the data is actually the limiting factor,

Carter Morgan (16:15)

Right.

right.

Carl Brown (16:37)

getting 10x the data is gonna be really, really, really hard. And you're definitely not gonna be able to do that synthetically. So you can try to allow more synthetic data in the mix without causing model collapse, but you're not gonna be able to get to the 10x kind of increases that they're hoping to get to.

Nathan Toups (16:56)

And if you think about it... so I was a failed physics student. I think Carl and I actually had coffee back in January and talked about this a little bit. But signals processing and audio, I think, fits into this well. I mean, obviously, all of that is physics as well. But you get this thing, a feedback loop, right? And feedback loops can just turn into these infinite noise cycles where, therefore, the input becomes completely meaningless. And this is, you know, if you're building

Nathan Toups (17:25)

the next-most-likely-token prediction machine, and the next most likely token is trained on a bunch of guesses of the next most likely token, you can just imagine this feedback loop in your mind without even having to understand all the math. Of just being like, well, if it's a bunch of generated data of the thing trying to guess the next most likely token, and that's what the inputs are, what is that even? It doesn't mean anything. There's no semantic meaning coming from humans, right?
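
A toy illustration of the feedback loop Nathan describes (nothing like real training, but the mechanism is the one the model-collapse papers point at): each generation samples its "training data" from the previous generation's output, rare items drop out of the pool and never come back, and diversity collapses.

```python
# Toy model-collapse demo: resample a corpus from itself, generation after
# generation, and watch the number of distinct items shrink. Each pass
# loses roughly a third of the remaining unique values (tail loss).
import random

random.seed(0)
corpus = list(range(10_000))  # stand-in for distinct human-written content

for generation in range(10):
    print(f"gen {generation}: {len(set(corpus)):5d} distinct items")
    # The next generation "trains" only on samples of the last one's output.
    corpus = [random.choice(corpus) for _ in range(len(corpus))]
```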

Carter Morgan (17:39)

Right.

Carl Brown (17:47)

Mm.

Carter Morgan (17:53)

Yeah, I think... you talk about, like, okay, what's the resource limit? And that's always the question, right? I think they call it the sigmoid, right? Which is that, like, you know, things look exponential until they flatten out. And I think, for any of our listeners, you can imagine the iPhone, for example. When they started making the iPhone, every iPhone that was released was just this huge improvement over the last one. And then you get to the iPhone, what, seven or eight-ish, and they've kind of flatlined since then.
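
For reference, the curve Carter is gesturing at is the logistic (sigmoid) function: its early phase is indistinguishable from an exponential, and then it saturates at its ceiling L.

```latex
f(t) = \frac{L}{1 + e^{-k(t - t_0)}}, \qquad
f(t) \approx L\, e^{k(t - t_0)} \ \text{for } t \ll t_0, \qquad
f(t) \to L \ \text{as } t \to \infty.
```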

And so that's the question. It's like, it looks exponential until it's not. And I hear, and I'd love to get your take on this, Carl, because I keep hearing people... and, you know, the AI labs... there, I was a little cagey. They want you to believe the most maximalist thing possible. And I keep hearing people who are kind of like adjacent to the space, like reporters, or maybe like CEOs, say, Claude Code is iterating on itself. It's being used to improve Claude Code. And I always want to say, well, wait a minute. Are you telling me that

the engineers that make Claude Code are using Claude Code to build Claude Code? Because I would hope that was always the case, right? Or is Claude Code making autonomous improvements to itself? And even then, Claude Code improving itself is very different from the models improving themselves. Claude Code is just an agentic harness around the models. It's basically a sophisticated context management machine. And so I guess that's what I keep hearing people say, who are kind of like, who

Carl Brown (18:59)

Right.

Carter Morgan (19:21)

who believe in that exponential growth, say, well, the AI is improving itself. It'll just keep improving itself faster and faster until we create the singularity. But I'm skeptical. Where do you lie?

Carl Brown (19:31)

Yeah, the recursive self-improvement thing has always been... I mean, so it's an interesting idea, right? I have never seen any evidence that that's the case. There was a recent thing they did where they had, I want to say it was Claude Code, build a compiler, a C compiler from scratch. And what "from scratch" means is basically, we're going to take the test suite from GCC or from LLVM, I forget which one,

Carter Morgan (19:42)

right?

Yes, yes. Yeah.

Yes.

Carl Brown (20:00)

that has 15,000 different test cases, and we're gonna have it generate code until that works, and then they had somebody... Right, right. And so it's like, okay, if you can't do... Because C compilers are, like... I mean, granted, there are some really sophisticated things that happen inside C compilers. But C compilers are, like, by definition, one of the earliest and simplest... Not simplest, one of the earliest and most well-understood

Carter Morgan (20:04)

Yeah.

All supervised by a staff level engineer, by the way, right?

Carl Brown (20:30)

programs ever, right? At least written in C. And so if you can't get that right, how are you going to be doing, you know, research? How are you going to be doing improvements to a thing that we don't know how to improve, if you can't even replicate something that is one of the oldest things that we know to exist and has been studied since, you know, 1970-something? It's...

In my world, in my beliefs, it's possible. And there are people that don't believe this, and that's fine. We don't actually know. I don't see a reason why we could not take the structure of the human brain and emulate it in a computer and make it work in a computer, right? I mean, it's possible. There might be other stuff going on inside the brain that we can't replicate. We don't know. We won't know that for a long time, I don't think. But the question is basically, what is the evidence showing us?

Carter Morgan (21:16)

Sure, right.

Carl Brown (21:30)

And if the evidence were showing us that, it can write a compiler, and look, it improved on a thing that we've never managed to get GCC to do and that kind of stuff, if it was doing that kind of thing, then I can imagine, OK, maybe self-improvement is possible. But one of the reasons that I harp a lot on AI writing code and the quality of the code it writes is because in order for AI to do this exponential self-improvement thing,

it's got to be good at writing code. And if it can't be good at writing the kind of code that's already in its training set, it's really unlikely to be able to be good at writing code that can't be in its training set, by definition, because it's trying to improve on a thing that isn't in its training set. So yeah, that's always been a thing for me. I keep a close eye on how well it does code, and, you know, it does a much better job on writing code than it did two years ago,

Carter Morgan (22:01)

Right.

Right.

Carl Brown (22:28)

but it's nowhere near what a good programmer, much less a good programming team can do. And so I'm not worried about recursive self-improvement yet. I have seen no evidence that that's gonna be possible. I don't see any reason that it couldn't be possible at some point, but I don't see any evidence that we're anywhere close to that right now.

Nathan Toups (22:48)

Yeah, so that's.

Carl Brown (22:50)

or even on the path for that right now, really.

Nathan Toups (22:53)

Yeah, it feels so strange, because sometimes in these spaces where we're seeing these conversations happen, it's almost like it reveals where somebody falls on this belief. Because sometimes I'll listen to a group of folks that I otherwise have a lot of respect for and I'm like, oh, you're a believer. You believe that AGI is amongst us,

Carl Brown (23:09)

Mm-hmm.

Mm-hmm.

Nathan Toups (23:22)

and that we're discovering it, you know. And there's the weird... I think the strangest thing about the OpenClaw stuff is that there's this desire to personify these things, as if somewhere hidden inside there's volition, right? Like it has this desire and it's just... we have to unlock it. And sometimes I'm thinking, like, are you talking to me in metaphors and just kind of speaking about it this way? Or do you really believe it?

Carl Brown (23:33)

Mm.

Carter Morgan (23:38)

Right.

Yeah.

Nathan Toups (23:51)

I've definitely come away from some conversations not actually knowing what that person actually believes.

Carl Brown (23:58)

I mean, like, I know people... I mean, my wife does this, I occasionally will do this too. Like, I'll apologize to my car when I hit a pothole. Right? I mean, we tend to, like, personify things, just because it's a good shortcut for the way that we talk and the way that we deal with the world. You know, we attribute feelings to, you know, we attribute

Carter Morgan (24:08)

Right, Yes.

Carl Brown (24:24)

complicated thoughts to our cats and to our dogs and to our pets, and, you know, kids talk about stuffed animals wanting things or not wanting things. We just tend to treat everything that might have any kind of agency, or appears to have any kind of agency, as if it was a person, because that kind of language is really useful. But that's a real problem from an AI standpoint, because it implies a lot of things that

Carter Morgan (24:34)

Right.

Carl Brown (24:52)

there's no evidence are true. I don't know. So I have a tattoo right here. I don't know if you can see it, but it's a quote from Dune, which is my favorite book. It's in the font of the paperback of Dune that I got when I was in junior high. But it says, thou shalt not make a machine to counterfeit the human mind. And for me, the important part of that is counterfeit, right? So I think a lot of AIs are fine.

Carter Morgan (24:58)

Yeah.

Nice.

Carl Brown (25:22)

I understand there are people that are very upset with them about the whole stealing of intellectual property thing. And I understand that. But in my world, right, I'm primarily interested in dealing with the way that they generate code and the way that they are trained on code. They are primarily trained on publicly available open source code. And so I don't see a problem from an intellectual property stealing perspective about the kinds of AI that I'm concerned about.

Carter Morgan (25:31)

Right.

Right, right.

Yes.

Carl Brown (25:52)

Right. And so I'm not... when I say I don't have a problem with that, it's not that I'm saying that the people that think that it's wrong because of intellectual property are, you know, incorrect or something. But from a learning off of open source software and then trying to recreate software kind of perspective, the intellectual property things are a different kind of concern. But

Carter Morgan (26:00)

Right, right.

Carl Brown (26:21)

when we're talking about how well it can generate code, I'm not seeing a lot of the kinds of improvements that I really would expect if it's going to get to the point where it can start writing its own code and doing everything itself and all that kind of stuff. The singularity... we just don't seem to be on a path to that right now. And the other thing to understand about it, which is,

you know, kind of glossed over a lot: the things can't learn. They just can't, right? They reread their instructions every time that you give them a prompt. They reread, you know, all their documents, and they reread the rest of the conversation, and then they, you know, start from scratch. I keep likening it to the Memento guy. One of the... Karpathy, he gave a talk where he talked about...

Carter Morgan (26:57)

Yes.

Yeah.

Carl Brown (27:17)

a movie called 50 First Dates. I've never seen that, but it's the same kind of thing, where somebody has a short-term memory block. Yeah, Drew Barrymore, I think, but I've never seen that movie. So that's another... if you've seen that and not Memento, then that might be a useful thing. But, like, they don't learn. And I just can't imagine that something could be equivalent to a human without being able to learn. It's just, like, you know, that's one of the fundamental things about us.

Nathan Toups (27:21)

I think it's got Adam Sandler. Adam Sandler's in that. Yeah. Yeah.

Carl Brown (27:46)

And figuring out what to learn and what's important to have stick and what's not important to have stick seems to me to be one of the most fundamentally difficult parts of what a human does.

Carter Morgan (27:46)

Right.

Well, if I can push back on something you said, you said it's not as good as a team of good programmers, to which I'd say most programmers are not very good, right? It's just that... and, for example, I can't give a one-to-one example, but, you know, at work right now, we're trying to migrate our data from Mongo to Postgres, right? We tried to use some sort of migration service and it just kept not working. So we said, okay, let's do...

Our approach is pretty simple, where we just have a simple replication engine that's taking the Mongo collections, replicating them in Postgres as dumb tables, just like ID, document, right? And then we're writing these Postgres trigger functions that take that document and shape it into the proper shape, right? And so we have done a lot of reasoning on our end to define the new model, or the new schemas. I wrote some Claude Code skills to make it very clear how it should create these trigger functions. But once I feed it that context...

I mean, I just learned what Postgres trigger functions are, like, a few months ago, right? I just don't do a ton of work in Postgres. And so Mongo... or sorry, Claude Code is creating those trigger functions much better than I could. I think... I saw someone on Twitter say basically that, like, the whole winning theme in capitalism, and I don't even mean capitalism as a pejorative term, I just mean, like, the economic system, has been that cheaper mass-produced goods

have beaten handcrafted stuff. All clothes used to be tailored. And now we just buy cheap mass-produced stuff off the rack. And so you say it's not as good as a good programmer. Sure, I agree. But a lot of people... there's a decent amount of software developers whose whole job is just take a ticket, implement ticket, take a ticket, implement ticket. And so if I'm them, I might be looking at this like, whoa, maybe this thing can do a lot of my job.
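
A minimal sketch of the landing-table-plus-trigger pattern Carter describes, for readers who haven't seen it. Every table and column name here is invented for illustration, not their actual schema; the snippet uses psycopg 3, and the connection string is a placeholder.

```python
# Sketch of Carter's migration pattern: a dumb (id, jsonb) landing table
# that the replication engine writes into, plus a trigger that reshapes
# each document into a typed table. All names are hypothetical.
import psycopg

DDL = """
-- Dumb landing table: the replication engine just writes (id, document).
CREATE TABLE IF NOT EXISTS raw_users (
    id  text PRIMARY KEY,
    doc jsonb NOT NULL
);

-- Typed destination table the trigger keeps in shape.
CREATE TABLE IF NOT EXISTS users (
    id    text PRIMARY KEY,
    email text,
    name  text
);

-- Reshape each incoming document into the typed table.
CREATE OR REPLACE FUNCTION shape_user() RETURNS trigger AS $$
BEGIN
    INSERT INTO users (id, email, name)
    VALUES (NEW.id, NEW.doc->>'email', NEW.doc->>'name')
    ON CONFLICT (id) DO UPDATE
        SET email = EXCLUDED.email,
            name  = EXCLUDED.name;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- CREATE OR REPLACE TRIGGER needs Postgres 14+.
CREATE OR REPLACE TRIGGER raw_users_shape
AFTER INSERT OR UPDATE ON raw_users
FOR EACH ROW EXECUTE FUNCTION shape_user();
"""

# psycopg 3 allows multiple semicolon-separated statements when no
# parameters are passed; the DDL runs in a single transaction.
with psycopg.connect("dbname=app") as conn:  # placeholder connection string
    conn.execute(DDL)
```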

Carl Brown (29:55)

Yeah, I mean, I'm still waiting on one that can actually do the backlog for me, because I hate grinding the backlog. It's one of my least favorite things. But so what I say about the way it helps programmers is it makes the easy things easier and the harder things harder, right? And so if you were someone who had spent years writing

Carter Morgan (30:00)

Yeah.

Right. OK.

Carl Brown (30:22)

Postgres trigger functions, right? You would look at the code it's generating, in all likelihood, and go, I wouldn't... no, no, here, do it this way instead, right? And so if you're...

Carter Morgan (30:28)

Yeah, that's fair, that's fair. And I do catch

it several times going like, wait, uh-oh, you did that wrong, right?

Carl Brown (30:33)

Right. Yeah.

if you're trying to do a thing that you're a novice at, it's going to be better than you. And that's great. And take advantage of that. That's wonderful. Use that as a springboard either to get you to the point where you can work on the parts of the code. It's like, OK, cool. Now I've got the trigger stuff done. Now I can work on the schema, which is the part that I actually find interesting.

Carter Morgan (30:41)

Yeah, okay.

Carl Brown (31:01)

Or to say, okay, cool. Now that I've got this, now it's spinning out these trigger functions and I understand the format of those, let me figure out what the consequences of those are. Okay, now we're trying to run a query and there's this thing that screwed up. Let me go back and change the trigger functions and let's rerun this again, kind of thing. So you use it as kind of a bootstrap for getting a thing that you can start iterating on. That's great. But the simple way that I talk about it is like, okay, so

Carter Morgan (31:14)

Right, right.

Carl Brown (31:30)

on your phone, when you're texting, you are much less likely to send a misspelled word, right? Because autocorrect is gonna fix that for you. But you are much, much, much more likely to send the wrong word. Because sometimes autocorrect goes to a different place. And if you sent the misspelled word, the person that received the text from you is a lot more likely to have understood what you meant to say

Carter Morgan (31:48)

Interesting.

Mm-hmm.

Carl Brown (32:00)

than if the AI corrects it to the correct word, or what it thinks is the correct word, when it's actually not. And then the person gets it and they're like, okay, what were you trying to say? I don't get it, right? Yeah, exactly. So, you know, by taking a lot of the easier work and making it go away,

Carter Morgan (32:10)

All right. Go duck yourself? What?

Carl Brown (32:22)

it both frees us up to think about the more difficult things, but it also makes some of the more difficult things harder than they would have been otherwise, because we don't have the complete understanding of what was happening under the covers, or we didn't have to think through the problem the way we should have had to if we were gonna do the lower level parts. So it's a double-edged sword to some extent. And a lot of times... so the thing I really, really appreciate it for

is prototyping, the thing we used to call spikes or tracer bullets. It's like, okay, build a thing from scratch that does this, and then I'll basically look at it, understand what it does, and then take the pieces of it that are relevant and pull that over into my big project. It's amazing for that. It's amazing for, like, I don't know exactly how I want to do this... So, like, I talked to people about one of the things that happens

Carter Morgan (32:55)

Yeah.

Nathan Toups (32:56)

Mm-hmm.

Yep.

Carter Morgan (33:09)

Right,

Carl Brown (33:19)

when, in startups and early-product kind of stuff, you're trying to figure out what are the pieces of the product that I'm building that people care about. And there's a lot of pivoting that happens, because, like, I thought that this would be the thing that everybody would use, but everybody's using this kind of thing over here. So let me make that a more important part of the product, kind of thing, right? And then there's a point where you get to what startups call product-market fit, where you have discovered the part of your offering

that's the part that people find valuable enough to pay for. Getting to product-market fit is time consuming and expensive, and a lot of companies fail during that. And the less money and time you can spend on that, the better off you are. And then once you get to product-market fit, then you need to make the thing stable and you need to make the thing reliable and you need to make the thing worth people's money.

And so AIs are great up to that point, right? Of, like, okay, spin this up, spin this up, spin this up. Let's see what people like, kind of thing, right? You got to make sure... and this goes back to part of what you were saying... the difference between dressmaking or that kind of thing is that the consequences of getting a dress wrong are not so important, right? There's the occasional wardrobe malfunction, right? But, like,

Carter Morgan (34:19)

Right.

Right.

Carl Brown (34:46)

we don't have mass-produced architecture. We have... we architected this thing and we build many copies of that thing. But, like, we don't do mass-produced architecture diagrams for buildings that might fall down. There are rules about that. We don't do that kind of thing for bridges. We do a lot of testing for cars and that kind of thing. There's a lot of safety testing in that.

Carter Morgan (34:49)

Right.

right, right.

Carl Brown (35:15)

So the question is basically, the thing that you're talking about, being overwhelmed by the cheaper versions: what are the consequences of that being low quality? And there are a lot of things that programmers do where being low quality is fine. And for those kinds of things, I expect AI to take a lot of that work. And to a large extent, that's fine with me. But when it comes to things like a service that's sitting on the internet that someone's going to try to hack...

Carter Morgan (35:27)

Right, right.

Carl Brown (35:43)

Right? It's very important to me that that's done correctly. When we're talking about a service that holds customer credit card numbers or email addresses or social security numbers or any of those things that you don't want bad guys to have, it's very important to me that that wasn't just vibe coded. Right?

Nathan Toups (35:54)

Great.

Carter Morgan (36:01)

We... yeah. And I was trying to explain that to some people at work, or at least share my opinion, which is... so at our startup, we actually are embarking on a rewrite. One, to make it easier for Claude Code to reason about the code base, but really it's so that we can reason about the code base better, because our data access patterns are a problem. The original sin is, we basically built a relational database schema and structure in Mongo. And so our data lookup patterns are incredibly inefficient. And so

Carl Brown (36:19)

Right.

Okay.

Carter Morgan (36:31)

we were faced with either restructuring all of our Mongo data to work more like an actual document store and denormalizing a bunch of stuff, or porting it over to Postgres, a relational database. And then we were also using GraphQL, which we have just found, time and time again, has been foot gun after foot gun after foot gun, right? And so we kind of looked at it and said, you know what, I think a rewrite makes sense. And then we thought, okay, and if we're working with, like,

relational data, defined schemas, more standard technology... We were using some weird async task orchestration library, and we didn't understand it, so heaven knows Claude didn't understand it. And so the hope is that after moving to some more standard patterns, not only will we be able to understand it better, AI agents will be able to understand it better as well. But I was just trying to tell my team, OK, so our code base... we had four repositories that were interacting with each other.

Claude reached the point where it could not understand what was going on in there. And we are a tech startup where this is, like, all we really care about. I said, you've got to think about Disney, for example. Let's say someone pitched it and said, we could rewrite our systems, or try to, you know, embark on a big migration to make it more AI native, so Claude can reason about it better. And if we do that, we can reduce engineering headcount by 50%.

If you're Disney, you've got to kind of say, like, wait a minute, people don't come to Disney because of our amazing technology. They come for our rides and our characters and our churros. And so it's like you're saying, if at any point during that rewrite, you lose the ability to purchase Disneyland tickets, or any part of your cruise ship systems malfunction, then all of a sudden the cost is way more than the benefit. Right. And so... 'cause people talk about it from kind of like a SaaS perspective, that, like,

Carl Brown (38:11)

Mm-hmm.

Carter Morgan (38:22)

the SaaS companies are gonna go to zero, because someone's gonna make a cheaper alternative. They're gonna run it with 20% of the employees. And, like, I don't know, maybe that will happen. But, like, for these big companies where that's not what you're selling... like, Walmart is not gonna get disrupted because someone vibe codes a better Walmart. And Walmart employs a lot of software engineers, right? But...

Carl Brown (38:39)

Right.

Nathan Toups (38:41)

It's like a Dunning-Kruger-powered tech-debt machine is basically what we're making right now. Because I do think that... we're basically selling non-technical people the idea that prototype-quality software can replace their engineering team, right? I've run into folks who've, like, vibe-coded something in Lovable, and then they're like, yeah, I've got this startup. And I'm like, hmm, yeah.

Carter Morgan (38:44)

Yeah. Yes, yes.

Carl Brown (38:46)

Yeah.

Carter Morgan (38:58)

Right, right.

Carl Brown (39:06)

I mean, so in my mind, that's an opportunity for us as programmers, right? Because there are a lot of businesses that never get off the ground because, you know, they can't afford to bring in a programmer for that, right? But if they vibe code a thing, and then they get people that use it and they've got money coming in, then, you know, then they start realizing, you know, this is a giant pain to maintain, maybe we should rewrite this, and they'll bring in a programmer to rewrite it, hopefully, right?

Nathan Toups (39:10)

Absolutely.

Carter Morgan (39:16)

Totally.

interesting.

Right, right.

Carl Brown (39:35)

At least once they get to the point where they're convinced that the AI isn't going to get so much smarter that it can rewrite it itself next month or something. I made a video a year ago about some things that programmers might actually benefit from. I think it was, like, five jobs that AI will invent for programmers. And one of them was basically going in and rewriting a thing that an AI managed to find product-market fit for.

Carter Morgan (39:41)

Right.

Interesting. Yeah, yeah.

Carl Brown (40:05)

So it's, you know... it's always, they're always a double-edged sword, right? I mean, I've been through so many different, you know, technologies that were supposed to make all programmers go away. I've been through so many different downturns. I've been through so many different offshoring or, you know, outsourcing, you know, trends... like, the things that the MBAs believe are going to be the thing that's going to make

Carter Morgan (40:19)

right.

Carl Brown (40:34)

all development cheaper and make those, you know, pesky, annoying, expensive software engineers go away. And to me, this feels just like another one of those. And my belief is, and we will see, and I don't know how long this is going to take, which is the scary part, but my belief is that on the other side of this, there's going to be a bunch of stuff that we're going to be able to tackle with software that

we never would have approached before because it wouldn't have been cost effective.

Carter Morgan (41:06)

Yeah, I love what you said there, like the idea that there might be some non-technical founders who do vibe code something into existence, find product-market fit, and they go, wait a minute, I need a professional to kind of help me scale this.

Nathan Toups (41:19)

There's a YouTube channel called Starter Story. I mean, not joking, these folks have great product minds. They're able to do stuff that's pretty simple as coding projects go, but they understand customers really well, way better than I do. Anytime I look at them, I'm like, you built a business out of that? Like, OK, that's kind of cool. But yeah, and they're using a lot of these kind of off-the-shelf tools plus a little bit of vibe coding. It's interesting.

Carl Brown (41:42)

Yeah,

I saw a news report... we'll talk about that in a second. But I saw a news report, I guess it was yesterday or the day before yesterday, where they had said that one of the things that they were talking about was that there was a cardiologist who knew nothing about code, who had come in in the top few of a hackathon that Claude Code sponsored, built something with no technology experience at all. And it's like,

Carter Morgan (42:06)

Yeah, yeah.

Carl Brown (42:11)

that's really cool. I don't want that running on me. It's like, hackathon-quality code is not something that I want running in the real world. But the fact that there's now this idea of this thing that kind of works at a prototype stage, I think, is good for everybody. I just hope that somebody who knows what they're doing gets in front of that, or in between that and people with actual heart problems that need to get...

Carter Morgan (42:15)

Yeah,

Yeah.

Carl Brown (42:38)

I don't know what

the thing did, but if a cardiologist is writing it, certainly hope that ⁓ somebody takes a good look at it before it actually starts dealing with heart patients.

Carter Morgan (42:46)

And you know, I had seen that that cardiologist also did have previous tech experience. But it's still interesting that someone whose job was not full-time software engineer created something.

Carl Brown (42:51)

Mm.

Yeah, the report that I... to change subjects a bit, the report that I saw that was talking about that was actually initiated by this recent thing where, I want to say it was the head of AI safety at Meta, got her mailbox mostly nuked by OpenClaw. Which... I mean, so a couple things there. So one thing to understand about... I mean, so I've talked a lot, I made a couple videos about how

Nathan Toups (43:17)

I saw that.

Carter Morgan (43:17)

That's my... OpenClaw, I saw that, yeah.

Carl Brown (43:27)

agents are inherently unsafe because of the prompt injection problem, right? Which we could talk about for a sec, but we could put a pin in that. But the second thing to understand, and this is kind of interesting in a history kind of way: agents don't have an async control channel, right? So back in the old, old days, before SSH, we used this thing called Telnet to connect to

Carter Morgan (43:32)

Yes.

Carl Brown (43:58)

another machine, right? And Telnet had an out-of-band control channel. And so if you're Telneted into a machine and you accidentally tell it to print the wrong... to cat the wrong file to the screen, and you hit Control-C, it will stop, right? Because the out-of-band control channel will go to the transmitting machine and say, throw all that data away. We don't care. Stop, right? There's no such thing in SSH, for example.

Carter Morgan (43:59)

Mm-hmm.

Right, right.

Carl Brown (44:26)

Right? And so if you accidentally, you know, type the wrong thing or whatever, you tell it to cat a giant file, a lot of times what I ended up having to do is SSH into that machine on a different terminal and kill the process, right? Because Ctrl-C doesn't do anything. Well, the OpenClaw stuff is the same way, right? Because if you read what she did, it's like, it told her it was deleting a bunch of stuff. And she's like, stop, don't do that, don't do that. Right? And it's, cool, as soon as I'm finished with what I'm doing, I will read your next prompt, right?

Carter Morgan (44:35)

Yeah, right, right.

Yeah. Yeah.
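
The out-of-band channel Carl is remembering is Telnet's IAC command escape (RFC 854): on Ctrl-C a client sends Interrupt Process, then the "Synch" signal, a Data Mark byte flagged as TCP urgent data, which the far end notices even while it's still flooding you with output. A sketch of the bytes involved (the host and port are placeholders; public telnet servers are rare now):

```python
# Sketch of Telnet's out-of-band interrupt (RFC 854). IAC escapes a
# command byte; the Data Mark is sent as TCP urgent data so the server
# sees it ahead of all the queued output it's still transmitting.
import socket

IAC = 255  # "Interpret As Command" escape byte
IP  = 244  # Interrupt Process (what Ctrl-C maps to)
DM  = 242  # Data Mark, the target of the urgent pointer

def send_interrupt(sock: socket.socket) -> None:
    sock.sendall(bytes([IAC, IP]))               # the interrupt itself
    sock.send(bytes([IAC, DM]), socket.MSG_OOB)  # urgent data: the Synch

# Usage sketch:
# sock = socket.create_connection(("localhost", 23))
# send_interrupt(sock)
```

SSH has no equivalent urgent channel, which is why, as Carl says, a runaway cat over SSH often means opening a second session to kill the process.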

Carl Brown (44:54)

You know, and that's another kind of thing that we don't... you know, we don't think about the fact that, to a person, it's obvious that when you're doing something and someone starts screaming at you, you stop what you're doing, right? But, you know, we didn't teach the AIs that, and teaching the AIs that is going to be a nightmare, especially since the thing is written in Node, I think, so it's inherently single-threaded. So,

Carter Morgan (45:09)

Yeah.

Right, right.

Carl Brown (45:25)

so, you know, there are a lot of that kind of thing. But anyway, that particular incident has spawned a bunch of, like, you know, think pieces on AI. And the AI doomer people, who drive me crazy, and who I've made some videos about and I'm doing some more work on, have basically jumped on that to try to push their agenda of, basically, we have to think about AI killing us all.

And unfortunately, the actual implication of "AI will kill us all" is not "we should stop all AI research," it's "we should go faster." And that's because, unlike with nuclear weapons, which is the thing they always compare it to, there is no equivalent of a Geiger counter for an AI lab.

Nathan Toups (46:09)

Right.

Carter Morgan (46:19)

Mm-hmm.

Carl Brown (46:20)

You can't tell, looking from the outside, whether somebody is doing AI research somewhere. And so if you believe that superintelligent AI is actually going to kill the entire human race, and you believe that superintelligent AI is going to be so much smarter than a human that we can't even compete, the only defense against one superintelligence is another superintelligence. And so anything we do to slow down the Americans, the people that are supposedly on our side, anything we do to, like,

Carter Morgan (46:26)

Right, right.

Right, right.

Carl Brown (46:49)

prevent them from encouraging teenagers that are having mental problems not to go seek help, or AI psychosis, or bad facial recognition... anything we try to hold them responsible for is just going to slow them down, and China is going to get to superintelligence first and kill us all,

Nathan Toups (46:59)

Right?

Carter Morgan (47:08)

Right, right.

Carl Brown (47:08)

So that whole narrative drives me crazy. And every time anything goes wrong in the AI space and it becomes a viral story, all of the doomers basically use that to jump on all the news programs and talk about why we don't need to worry about that thing that just happened, we don't need to worry about the actual real problems that are currently happening, we need to worry about this thing that we can't agree on how to even do anything about. I forget the name of the... there's a fallacy, it's like the privation-something fallacy, which is basically, you

take the problem that you're trying to argue about, you equate it to a much bigger problem that doesn't have a solution, and then you prevent people from working on the actual solution. It's kind of like, to maybe get a little bit political... so one of the chic ways of talking about, like, you know, billionaires having too much control over things is to say, you know, that's just capitalism.

Nathan Toups (47:47)

interesting.

Carl Brown (48:03)

And what that does is it implies that in order to be able to get rid of billionaires, we have to get rid of capitalism, and getting rid of capitalism is going to be, you know, basically a non-starter. That's not true. There are lots of laws we could put in place. There are lots of regulations we could do. There's a lot of public pressure we could apply inside a capitalist system to be able to change the inequality. And there have been many times in history, without getting rid of capitalism, that the inequality levels have changed drastically, like

Carter Morgan (48:09)

Sure, sure.

Right, right.

Carl Brown (48:32)

coming out of the Great Depression. But when people say, well, that's just capitalism, capitalism is just like that, what they're doing is basically taking that issue of inequality being bad and equating it to: in order to make that any better, we're going to have to completely switch to a communist or a socialist system. And that's not true at all. But it kills the conversation.

Carter Morgan (48:55)

Right, I see what you're saying. I mean, what do you... So I guess something we talk a lot about on our podcast, right, is... our podcast is dedicated to reading technical books and trying to understand them at a deeper level. And every episode we say, okay, what are we gonna improve in our careers as a result of having read the book? And I think, especially amongst junior engineers, like, I get why people feel this way. And, like, it's so funny, because I once thought, like,

when I got my master's degree, one of my maybe, like, little hobby projects I wanted to do was to teach at a community college with it. But now I don't even know what that looks like, because any sort of trivial programming exercise you could assign to help someone kind of build those programming muscles can just be done by, I mean, ChatGPT-3, for crying out loud, in half a second. And so I think what a lot of junior engineers see is they see all these tasks that have previously been junior-level tasks, and they're like, this thing can just crank out...

Carl Brown (49:42)

Mm.

Carter Morgan (49:53)

It could probably crank out your CS 301 project in a couple of prompts, right? And so I think there's a feeling of hopelessness. And you have talked about on your channel that there is still a need for deep technical understanding in the world. We

Carl Brown (49:58)

Right.

Carter Morgan (50:08)

believe in that need as well. I mean, we would wrap up the podcast if we felt like there wasn't a purpose for it. But in kind of an AI age where, you're right, it does accelerate you in a lot of ways... I mean, how are we threading that?

One, assure the people... assure them that there is still a need for deep technical understanding. But two, how are we kind of threading that curve between when the surface-level knowledge you get from an AI is good enough to get started, versus when you do need to really dig deep and understand a problem?

Carl Brown (50:26)

Okay, so let me... before I answer that, let me come up a couple levels. So we have a fundamental education problem in this country that AI is exposing, which is that we assume that people that can regurgitate things that they memorized actually understand, right? And, like, when I was a kid, I had a really good memory.

Carter Morgan (50:45)

Right, right.

Okay.

Uh-huh.

Carl Brown (51:10)

And so there were a lot of times that I could take a test without... like, I didn't study for the test, I didn't really care about the test, but I remembered hearing enough stuff that I could get the questions right, right? I didn't really think about the subject much, I could just regurgitate the thing. And then when I was in my junior year of college, I hit the point where I really, really, really needed to pay attention, and I really struggled learning how to study, because I never really had to before. You know, we assume that being able to write a

Carter Morgan (51:19)

Mm-hmm.

Right.

Carl Brown (51:37)

five-paragraph essay is equivalent to English understanding, that kind of thing, right? And we've now been disabused of that notion, right? We've gotten to a point where the things that we're used to using in an academic setting to determine whether kids have learned or not, we realize now that they don't actually reflect the learning the way that we've always assumed they would. And that's caused two problems. One, it means that

Carl Brown (52:07)

we can't continue to use those things in an academic setting, and we're going to have to think of better and different ways of doing that. And I've got some ideas on that in a second, although that's not my expertise. But the other thing that it's done that drives me crazy even more is that because we've always said that passing the bar exam equals being a good lawyer, now that the AI can pass the bar exam, the assumption is that, well, that means the AI is a good lawyer. And the AI is not a good lawyer. It makes up cases all the bloody time.

Carter Morgan (52:29)

Mm-hmm.

Right, right.

Carl Brown (52:38)

And so we've kind of realized, or we're realizing, some of us haven't gotten there yet, as a society, that a lot of the assumptions that we've made about the correlation between human memory and human understanding and reasoning have not been the case, right? And as somebody who, when I was a kid, always had a, not much, but above-average memory relative to my peers, I was advantaged in school because it was easier for me to be able to take the test, because I remembered a bunch of stuff, right? And that wasn't fair to the other kids. So we need to... I saw a really interesting, I guess it was a paper, I'm trying to remember where it was. I'll see if I can find it and we can stick it in the show notes. But there was a professor that basically said, okay, so

you know, you're going to come into class, you're going to do the work that I give you right there, during the hour or two hours or whatever. You've got a choice, right, you have to pick: you can either use AI, and if you use AI, you are required to keep a living document of all the prompts you use and all that kind of stuff, and then you are going to be graded on one standard,

or you can choose not to use AI, and you will be graded on a different standard. And I don't care what you pick, and it's not gonna be, you know, it's like you're not gonna get marked down or whatever. But, you know, I'm not going to allow someone using AI to compete with and be graded on the same scale as someone who's not using AI, right? And those are the kinds of things

Carter Morgan (54:25)

Right.

Carl Brown (54:29)

that we need to start doing, both in an academic environment and in a trying-to-evaluate-AI kind of environment, to really get to the core of, like, are people learning or not?

Nathan Toups (54:46)

Yeah, that's interesting, because it's sort of like a second-order indicator, because, you know, we have this black box in here. And so we had this second-order indication of, like, well, we understand well enough how a high-functioning attorney behaves in this sort of environment. But if we have an imitation machine that can imitate the output of this well enough, it's actually pretty convincing, even though it has no idea what it's actually doing. I just wrote a short couple of paragraphs recently

Carl Brown (54:50)

Mm.

Mm.

Nathan Toups (55:16)

in my newsletter called An Illusionist is Not a Sorcerer. And so there's this idea that like, no matter how good the illusionist gets, like it can't actually pull a rabbit from a hat, right? Like it can be more convincing. It could be set up so much that even though as an audience member, I know they're doing an illusion, I can't figure it out. But there's not like one level of skill that an illusionist gets to where a rabbit actually manifests out of the ether of the universe.

Carl Brown (55:40)

Mm.

Nathan Toups (55:46)

into a hat and it pulls it out and we're like, wow, we finally made real magic, right? And I feel like that's where we are with large language models: it does a certain type of thing really well, which is, like, it's an incredible high-dimensional indexing service on top of all of human knowledge, right? It finds these fascinating, deep, high-dimensional patterns and then says, these things are actually associated with each other. And we're like, wow. And when I think about using the large language model that way, I'm like,

Carl Brown (55:49)

Right.

Right.

Nathan Toups (56:15)

delighted, I'm delighted with what comes out of the other side, in the same way that if I watch Penn and Teller, I'm like, what's Penn gonna do next? But if I go to Penn and Teller and I'm like, this is the most powerful sorcerer, and in five years,

Carl Brown (56:21)

Right. Right.

Carter Morgan (56:22)

Yeah.

Nathan Toups (56:28)

he will be sorcerer of the whole universe... I mean, you're either delusional or you're a crook. Like, that's the whole

Carter Morgan (56:34)

Right,

Carl Brown (56:34)

Yeah,

Carter Morgan (56:34)

right.

Carl Brown (56:35)

I saw something recently, I don't know if I'll be able to find it or not, but someone was basically saying that a critique of the AI industry was that these people actually believe that if they keep trying to get better and better illusions, eventually they're going to be able to saw someone in half and put them back together. Right. And they don't realize the

Carter Morgan (56:56)

Right, right.

Carl Brown (56:59)

difference between what they're actually doing and what they're trying to do, or assuming they're going to do, or, you know, eventually wanting to get to. Which I thought was a really interesting analogy.

Carter Morgan (57:09)

I have to remind myself, yeah,

well, I have to remind myself, because everything I've kind of read has been that the people at the AI labs are sincere in their beliefs, that they have the potential to create the superintelligence. I have to kind of remind myself, like, so were the NFT guys, so were the crypto guys, right? So was Juicero. They thought they'd revolutionized fruits and blenders, right? Exactly, right?

Carl Brown (57:26)

Right.

So was Enron. ⁓

But, I mean, yeah, the other thing, so I'm working on a set of pieces for a Substack I'm starting. But one of the things is that, so there are these people that are, I call them cheerleaders, right? And they're not really involved in the AI space, but they're influencers, or they have big audiences, or they're newscasters or whatever.

And what they do is they platform these ideas from these people who stand to gain financially from the narrative that they're pushing, right? And so they talk about, you know, how important... you know, one of the things that drives me crazy is when, you know, Sam Altman and Dario and whoever go on and say, you know, there's a 20% chance that AI is going to kill us all,

right, which, like I said, you know, feeds into the narrative of, you know, if that's true, then we have to get there before China, right? So that's to their benefit. And people don't explain that, right? The other one is when AI people say over and over again, you know, these things aren't built, they're grown, or we don't understand how they work, those kinds of things, right?

And for us as software developers, especially, like, web-scale software developers, there's a ton of stuff that we deal with that we don't understand how it works. And the reason that we don't understand how it works is because it's making lots and lots of decisions based on the data that's coming into it. And we don't know what the data is, right? Because it's petabytes and petabytes of data, right? We can't, like... Kyle Hill had this thing about, we don't understand how this works down to the transistor. It's like, okay, so:

here's a Google web search, right? That web search touched four data centers and pulled things from different places, because it gets routed to whatever machine happens to be the least busy at the moment. You tell me, you know, which hard drive this particular word from this answer came off of. You can't, right? We're used to that whole, you know, this is a ton of data, we're processing the data, we don't...

Nathan Toups (59:46)

right?

Carl Brown (59:54)

We can't tell you exactly where it came from, because we can't internalize all the data. And in my mind, this is the same thing, right? We understand the mechanism, we understand the pipeline. The problem is that, in my mind, all of the weights that everybody's talking about like they're magic, they're just data. And we don't understand it all, you know, there's too much data for us to dig

Carter Morgan (1:00:16)

Right, right.

Carl Brown (1:00:20)

into it. And so we don't understand it. But that doesn't mean that, you know, it's grown. And that doesn't mean that we'll never be able to understand it. And that doesn't mean that it's inherently a different kind of organism than it would be if we understood it, right?

Nathan Toups (1:00:34)

Amen.

Carter Morgan (1:00:36)

I like what you guys are talking about, pulling a rabbit out of a hat, and that you can't actually pull a rabbit out of a hat. But I would argue that a lot of tasks that junior engineers engage in are kind of simple enough that they're more like the illusion level than the sorcery level. I just remember one of my very first tasks right out of college was to take an AngularJS library and migrate it to Angular 2, right? And as a junior engineer,

that was a pretty tough challenge. You could do that in a couple API calls today. These were small libraries, right? And you can kind of also say, again, a senior engineer is just more effective, can produce more code now than they could previously. And so I personally kind of believe in Jevons paradox here, which is that anything that makes something more efficient usually increases the demand for that thing.

Right? Like, that's been the history of all software engineering up until this point. Maybe this technology is the exception, but who knows? But I can see kind of junior engineers being like, all the stuff I can kind of understand at my level can now be done by this robot, and so what's even the point anymore?

Carl Brown (1:01:53)

Well, I mean, so the thing to understand is this is not the first time or even the second time that that's happened, right? So in my career, when I started, there was no Google, right? When I was in college, there was ⁓ no web. Mosaic, the very first web browser that was cross-platform was released the year I graduated from college, right? And so...

Carter Morgan (1:02:00)

Okay, now talk to us.

Right.

Carl Brown (1:02:21)

What we did was we read books, right? ⁓ And then, but then what happened is we got to the point where we could search the entire internet, right? And it was like, you know, ⁓ well that, you know, now programmers are gonna be able to go look up stuff and so they're not gonna need programmers anymore. And then stuff like Stack Overflow happened and it was like, well, people are gonna be able to copy and paste code and they're not gonna need programmers anymore.

Carter Morgan (1:02:24)

Yeah.

Carl Brown (1:02:48)

And then there was, the first one I ever dealt with was called CPAN, which was a Perl thing, but basically the equivalent of the pips and the npms and all that. Well, now programmers can just, you know, go grab a module and get it sucked in as a dependency, and so we're not going to need programmers anymore, because all programmers are going to be doing is just sitting there and going, okay, I'm going to grab this thing and this thing and this thing and write a couple of lines of code to hook them together, and, you know, we're not going to need programmers anymore, right? And that's never been true.
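
(For anyone who wants to see what that glue style looks like in practice, here is a minimal sketch in modern Python. The endpoint is just an illustrative public API, our choice, not something from the episode.)

```python
# Two standard-library modules plus a few lines of glue: fetch JSON from a
# public API and pull out a couple of fields. Most of the "programming" here
# is composition, which is exactly the panic Carl is describing.
import json
import urllib.request

url = "https://api.github.com/repos/python/cpython"  # illustrative endpoint
with urllib.request.urlopen(url) as response:
    repo = json.load(response)

print(repo["full_name"], "has", repo["stargazers_count"], "stars")
```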

Nathan Toups (1:02:54)

yeah.

Carl Brown (1:03:18)

And this in my experience, right, having lived through this kind of thing over and over and lots and lots of panic about, you oh, well, now that this is capable, we don't need junior people anymore. Well, you know, we always still need junior people. It's just what a junior person does now is very different than what a junior person did in 1993.

Carter Morgan (1:03:18)

Right.

Right.

Carl Brown (1:03:40)

And it's changed several times.

Carter Morgan (1:03:41)

That's a good point.

Carl Brown (1:03:45)

And it's the same. It's like, you get to be old, and it's the whole, you know, history doesn't repeat itself necessarily, but it rhymes. It's like, I've heard this argument before. I've heard it before again. I've heard it before again, right? It's never been true before. Maybe someday it will be. I see no evidence it will be this time. And the other thing that I want to point out to the folks that are listening to this that are the programming types is, you know,

if you're going to have to be something and live through this AI apocalypse kind of thing, don't forget I said that, through this shift in jobs that's caused by large language models, it's much better to do that as a programmer than as a journalist or as a copywriter or as a stock

Carter Morgan (1:04:18)

Right.

Carl Brown (1:04:37)

photographer or, you know, that kind of thing, right? So we're still in a much more advantageous position than a lot of people. That doesn't make it fun, that doesn't make it fair, that doesn't make it pleasant, but, you know, there's a lot of disruption happening in the space. If you read... there's a really good Substack called Blood on the Machine... Blood in the Machine, by Brian, what is his last name? I've got his book over there. But

Carter Morgan (1:04:38)

Right, right.

Carl Brown (1:05:07)

he talks, Blood in the Machine basically is a reference to the Luddites that were, like, trying to destroy the looms. And he basically talks about how that has gotten a bad reputation: the word Luddite means something different than what it should mean, because the people that told that story were the people that won that labor battle. But, you know, he's spent a lot of time documenting and interviewing folks that have been laid off

Carter Morgan (1:05:29)

Right, right.

Carl Brown (1:05:37)

that were artists or journalists or that kind of thing, right? Because there are careers that have really, really been hurt by this. And so there is a lot of disruption, and we're gonna have to figure out what to do with that as a society, or really bad things are gonna happen. But the good news, I argue, is that, you know, we, at least the people that actually make things, the white-collar making-things people, we're in a much better position than a lot of people. And honestly, if the AI can actually

get to the point where it can make, you know, human-quality decisions, I'm guessing that the first people that are going to get replaced are the middle managers.

Carter Morgan (1:06:18)

Yeah.

Nathan Toups (1:06:18)

Right.

This is something I've noticed that I don't have a good way of describing. Maybe... I think it was Brian Merchant who's the Blood in the Machine guy.

Carl Brown (1:06:26)

Yeah, that's

right.

Nathan Toups (1:06:31)

There's, like, a mean-spiritedness that I've noticed also from the unhinged AI folks. There's this almost salivating of, yeah, well, that job's irrelevant, or this person's not gonna need this anymore. Not in the, like, hey, we've got big questions. Like, I remember this with the self-driving cars a while back, where I was thinking about a semi truck driver, or someone who is doing a delivery service, who through no fault of their own, you know, is on this precipice where in the next few years,

Carl Brown (1:06:53)

Mm-hmm.

Nathan Toups (1:07:00)

those jobs could very well be replaced. And it's not a celebration in the sense of, like, I'm not saying that we should be Luddites or protect or whatever, but that we should acknowledge as a society that when we categorically remove something, we need to take care of the humans that are involved in the new negotiation of what the contract is. Because these are hardworking people, right? This wasn't a bunch of lazy people or, you know, some other justification.

Carter Morgan (1:07:26)

Right, right.

Nathan Toups (1:07:28)

These are hardworking people, who maybe even would be pretty far into their careers and are ready to retire in five or seven years. And as a society, we should celebrate the technological advancements, if we can do certain things and make things safer and we don't need humans to be there. But I'm not hearing that in the conversation. I guess that's the other part of this that makes me upset.

Carl Brown (1:07:48)

Yeah, so one of my very favorite thinkers is a guy named, and forgive me if I butcher this, Ted Chiang, C-H-I-A-N-G. He's a science fiction short story writer. He wrote, among other things, the short story that the movie Arrival was based on. Brilliant, brilliant guy. And he wrote two essays; I'll track them down and we can stick them in the show notes. One of them is called something to the effect of "ChatGPT Is a Blurry JPEG of the Web,"

Carter Morgan (1:08:04)

very cool. Yeah, yeah.

Nathan Toups (1:08:04)

cool.

Carter Morgan (1:08:18)

Yeah.

Nathan Toups (1:08:19)

That's

Carl Brown (1:08:19)

which was amazing.

Nathan Toups (1:08:20)

such a good way of putting it, yeah.

Carl Brown (1:08:20)

And the other one, I don't exactly remember the title, but basically it was the thing that, it's like, you know, the thing to fear is not AI, it's capitalism, or, the thing that people fear about AI doing to them is actually the same thing that Silicon Valley has been doing to them, or something like that. I'll go find the right one.

I'll put the actual article in the show notes. But, you know, if you've spent much time in the startup world, especially in the Silicon Valley-adjacent startup world, they kind of salivate over this disruption, as they call it, right? The idea of, you know, Uber: we're going to make all of the taxi drivers unemployed, right? I was part of a project actually at Amazon

Carter Morgan (1:09:06)

Right, right.

Carl Brown (1:09:18)

when I was there, the app that the drivers use to bring the thing to your house and drop it on the porch and take a picture of it and then drive off, right? I was part of getting that project off the ground. And that, you know, disrupted a lot of delivery drivers, right? Amazon has a bunch of, like, robot things that they're doing to try to make the warehouses, you know, more efficient, which basically means, you know, making it harder on

Carter Morgan (1:09:41)

Right, right.

Carl Brown (1:09:47)

the workers that are there. And Jeff Bezos had a famous saying, which was basically, your margin is my opportunity. And there's this idea of selfishness, of how do we

Carter Morgan (1:09:57)

Yeah.

Carl Brown (1:10:11)

take a thing that's valuable to people, and how do we move the revenue associated with that value away from whoever is doing it and concentrate it with us. And in the process of doing that, if we end up basically spending a lot of time and a lot of investor money undercutting all the people that have been doing that for a living forever, or for their entire careers, and make things so cheap that they can't compete, and then they go out of business, and then we can raise prices however we want to... That's one of the plays from Cory Doctorow's encrapification, to put it that way. But that's part of the high-growth Silicon Valley playbook.

Carter Morgan (1:10:59)

Yeah. ⁓

Carl Brown (1:11:11)

And it's unfortunate for big swaths, I mean, it's unfortunate for big swaths of the population. On the other hand, you know, it has given us some... Google, right? I mean, it has given us some really useful things that have really advanced the sum of human knowledge and how fast we can go on things. So there is... I don't think there is

Carter Morgan (1:11:24)

Right.

Carl Brown (1:11:39)

anything that is wholly bad or wholly good, but there's definitely, I think, a lack of compassion and a huge amount of selfishness, where people just are like, you know, well, sorry, I won the game and you lost the game, you know, bad to be you, sorry or not sorry.

Carter Morgan (1:11:58)

That's what I find really interesting about all of this. And I'll push back a bit on, for example, Uber and Amazon, which, like, I am not defending everything Amazon and Uber have ever done, right? But if you understand Jeff Bezos, he did have a sincere desire to serve the customer, right? And he was ruthlessly focused on improving that customer experience. And I think Amazon, like, I'm not defending having to pee in bottles in the warehouses, right?

Carl Brown (1:12:13)

Absolutely.

Right, right.

Carter Morgan (1:12:24)

But the fact that I can order almost anything in the world and get it delivered to my house in two days, I think you could describe that to someone in the 90s and say, this will be possible in 2020, and they'd say, that's fantastic, what an interesting future. Uber, for example: the fact that I, in Lehi, Utah, could get a quote-unquote cab to my door in five minutes is crazy. And I think the taxi cab industry in particular

Carl Brown (1:12:29)

Amazing.

Carter Morgan (1:12:49)

was ripe for disruption because of a lot of kind of anti-consumer, anti-user practices. And so I think a lot... DoorDash is another example, where, like, I personally am not a DoorDash guy. I just find it very excessive. But, you know, the fact is that more restaurants can now deliver, you can now have food delivered to you. And I use it about once a year to get a very sad DoorDash order, which is, like, saltines and anti-diarrheals and Gatorade, right? And DoorDash knows exactly what kind of day I'm having, right? And it's a...

Carl Brown (1:13:11)

Right.

Carter Morgan (1:13:19)

But I think these are all the kinds of things where, if you were to describe them to someone 20 years ago and say, this is what the future holds, they'd be like, well, that sounds interesting, that sounds neat, I can't wait to have all these goodies. The AI labs don't do that. They just talk about what's going to un-employ all of us.

Carl Brown (1:13:27)

Well, yeah, right. I mean, so as far as that goes, right, there are different phases. This is, everybody go read, what is it, Chokepoint Capitalism, Cory Doctorow's book, or Seizing the Means of Computation, I think, is his other one. There are phases of the encrapification cycle, right?

Carter Morgan (1:13:54)

Right, right.

Carl Brown (1:13:55)

And there's a time when the companies are really, really focused on trying to make the best customer experience. And that's in theory great, right? The problem is, once you get to the point where there's no low-hanging fruit there anymore, the amount of work that it takes to keep increasing the customer experience gets bigger and bigger and bigger. Because, like we said, you know, it's on one of those curves where it goes up really well, and then suddenly it starts to flatten out asymptotically, because we're running out of something,

and there's no more improvements to make. Then they go, cool, we have to keep growing, and so now we need to switch gears, and that means we need to figure out some other way to squeeze blood out of this turnip. And whoever we have to hurt in order to keep growing, we're going to hurt, right? And so the initial parts, the let's-focus-on-providing-value-to-people part, and then the let's-focus-on-providing-value-to-our-partners part, those are not

Carter Morgan (1:14:24)

Right, right.

Yes.

Carl Brown (1:14:52)

necessarily bad things. But when you get to the point where growth is running out and there's nothing else you can do that's a win-win, then that's the point where the selfishness kicks in. Right.

Carter Morgan (1:15:05)

But it just surprised

me with the AI labs: they haven't even done that first part, the win-win part. Because one of the strangest arguments I find is, because people kind of express AI skepticism, and the biggest AI hype guys will be like, don't you want to go to Mars? Don't you want to cure cancer? Don't you want automated robots? I'm like, wait a minute, how are we getting there? Because I don't know why Sam Altman and Dario just, every

couple months, stand up and say, you know, all knowledge work in a year is gonna be gone. And then when they try to build these data centers and the local community has objections to them, they're always like, what? Like, don't you wanna cure cancer? And I'm a little like, why? It's like, the sign that these AI guys have not figured out superintelligence is that they haven't even figured out public relations yet. Why can't ChatGPT, you know, tell them to quit saying this stuff? And I'm with you, Nathan. It's a very, like,

Carter Morgan (1:16:03)

gleeful, like, ha ha ha, we're gonna take all your jobs, and, like you were saying, Carl, I'm gonna be the winner, you're gonna be the loser. Which, not defending everything Amazon or Jeff Bezos has ever done, but he didn't talk about that for at least the first 15 years of Amazon. He was talking about, here's all of the wonderful things I'm going to bring to you, and isn't this future so exciting. That gleeful antagonism is very strange to me.

Carl Brown (1:16:28)

Yeah, so it's an acceleration function, and it's largely, I think, due to the amount of money we're talking about, right? So the amount of money that the AI industry is requiring in investment is orders of magnitude more than Amazon needed or, you know, Uber needed or any of the others needed. And their time scales are orders of magnitude shorter,

Carter Morgan (1:16:53)

Interesting.

Carl Brown (1:16:58)

right? I mean, they're trying to get to, you know, AGI in, you know, a couple of years; they're talking about data centers coming online, you know, in a year or two, kind of thing, because they're talking about this exponential curve. And that's never happened before. That much money in that short a timeframe means that they have to accelerate that encrapification process, you know, a lot faster. So they would argue

that there are a lot of things that they have done that have made it easier for people to transcribe text, right? And to track to-do lists, and to have better voice assistants, although Siri and Alexa are still whatever. And a lot of the grudge...

Carter Morgan (1:17:38)

Right.

Right.

Carl Brown (1:17:56)

Drudgery? Drudgery. I wonder where that came from; I was trying to say grunt work and drudgery at the same time. A lot of that kind of stuff, of writing emails and summarizing things and deciding what's worth my time and not. And, hey, can you summarize that, let me see if it's worth reading, that kind of thing. They would argue that that has saved a lot of people a lot of time. And then if you'd asked somebody 10 or 20 years ago, hey,

Carter Morgan (1:17:57)

Drudgery.

Carl Brown (1:18:22)

you know, what if you had a thing that could read all of your email and tell you what was important and what you didn't need to worry about? You know, people would go, well, great, where can I sign up for that? And there were actually some services, you know, 10, 20 years ago, where you would sign up and they would, you know, read your mail for you. And they didn't use large language models, they used a bunch of heuristics, but it was the same kind of thing. And there were people that made money doing that. So, I mean, I would say they would argue that they have provided some value. It's just that

Carter Morgan (1:18:33)

Right.

Carl Brown (1:18:51)

the amount of additional investment that they have to justify is so much higher, so much faster, that they had to go to the gleeful Mr. Burns kind of thing before the Jeff Bezoses or the, I don't remember who was in charge of Uber, but the Uber folks or whatever, did. Yeah, that kind of thing. So what we're seeing is just way accelerated,

Carter Morgan (1:19:05)

Yeah, yeah.

Travis Kalanick.

Carl Brown (1:19:21)

I think because of the amount of money involved.

Carter Morgan (1:19:24)

That's really interesting. Then, what's your thought: is there just no way they can possibly... if they're having to make these claims that we are going to replace all knowledge work, right? And they have to make those claims to justify the investment. And if you're like you or me or Nathan, and you're kind of inherently skeptical of that claim that you're gonna be able to automate all knowledge workers out of existence in three years, right? Is it all just destined to fail? Like, what's gonna happen?

Carl Brown (1:19:36)

Yep.

So

one of the researchers recently was on a podcast, the podcast is, like, Dua, something like that, I don't remember, I'm really bad with names, we'll find it. But basically, he said that we are going to be entering a new era of research, or a new phase of research, or something to that effect. The idea being basically that the supposition up until recently has been that

Carter Morgan (1:20:06)

Hahaha.

Carl Brown (1:20:29)

if we throw more data at it and we throw more compute at it, it's going to keep growing until the point where it's better than all humans, right? And, you know, Sam Altman loves to talk about, you know, ChatGPT-5 is going to be a PhD in every subject, and that kind of stuff. And then it came out, and it didn't happen, you know, that kind of thing. But the idea was, if we keep scaling, it's going to keep getting smarter and keep getting smarter. And that's not happening,

Carter Morgan (1:20:36)

Right, right.

Right, right.

Carl Brown (1:20:54)

right? They're putting way more resources into it, and they're getting a lot less out than they used to. And so now they're at a point where they're betting, I think, this is my opinion, but they're betting that they're going to hit the next breakthrough, the next equivalent of that "Attention Is All You Need" Transformers paper from 2017, that they're going to hit that, and they're going to be able to exploit that new technique before the money runs out,

Carter Morgan (1:21:16)

Right, right.

Nathan Toups (1:21:25)

Well.

Carl Brown (1:21:26)

right? And if there's another genuine breakthrough, then it's possible that this is going to actually turn into something. And the longer we go before there's that genuine breakthrough, the more likely it is that this is just a bubble and it's going to pop. And so basically, it's a big, you know, where-do-you-place-your-bets kind of thing. Yeah.

Carter Morgan (1:21:34)

Right, right.

Nathan Toups (1:21:45)

Yeah, it's like a bubble Ponzi scheme. It's like you have to have the bubble

Carter Morgan (1:21:45)

Right, right.

Nathan Toups (1:21:50)

to give you the other bubble.

Carl Brown (1:21:52)

Yeah, and I don't personally, you know... I have not seen any evidence of a new breakthrough, or even progress toward a new breakthrough. So to me, I see no reason why this isn't a bubble, and I see no reason why it isn't going to pop, and I see no reason why it's not going to be a nightmare when it does. But, you know, it's possible. It's just... I would be shocked if there's actually some

Carter Morgan (1:22:00)

Yeah.

Carl Brown (1:22:22)

real breakthrough-looking research happening inside one of these companies that they're not, you know, talking about, you know what I mean? Although maybe they exaggerate so much that we wouldn't realize it. But I just can't imagine something like that happening and, one, them not talking about it. And two, a lot of the folks that we're seeing that are leaving these companies,

Carter Morgan (1:22:33)

Right, right.

Carl Brown (1:22:52)

that would be in a position to know if there was something on the drawing board... I wouldn't expect a bunch of people to be leaving OpenAI if we were right on the verge of another giant breakthrough. It feels...

Carter Morgan (1:22:56)

Right.

The product roadmaps never make sense to me. It's like, okay, you're on the verge of AGI, so you're releasing ads, so you're releasing a translator app, so you're releasing, you know... And again, right, we're spending so much of this kind of exploring the AI-skeptical path, when, like we kind of mentioned upfront, all three of us are users of AI and find it very useful in many, many ways in our careers. But yeah, it's just like,

Nathan Toups (1:23:19)

erotic chat bots, right?

Carl Brown (1:23:20)

Yeah,

right.

Carter Morgan (1:23:37)

Yeah, like, you're gonna build the digital god, you're this close, if you can just give us a little more money, we're this close, guys. And don't worry, we also released... you know, we're releasing the porn version of ChatGPT. It's like, what?

Nathan Toups (1:23:49)

Well, and I want to loop

Carl Brown (1:23:51)

Right.

Nathan Toups (1:23:52)

this back into what we were talking about with the trusting trust part, which is that I'm fixated on a couple of things that are coming up. There's these dark patterns, which is, we now have these large language models that are being given permissive access to many machines. OpenClaw is a good example of this, in which the output of the model is now becoming the input of the model running in the next iteration.

Carter Morgan (1:24:08)

Yes.

Nathan Toups (1:24:16)

And this is exactly what Ken Thompson was talking about. This is why you write a quine and realize that you can write source code that outputs the source code itself. And then you could imagine putting ulterior motives in code that's hard to reason about, and then that gets compiled into a compiler, and at that point it becomes invisible. This is the whole purpose of it. So, even though I don't think that we're on the verge of AGI, is it possible, within the bounds of what's available to the large language model, that it could start hiding messages or instruction sets to itself, in a way that's persistently stored, across attempts to wipe these things? That you could actually... yeah.
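
(For anyone who hasn't seen one: a quine is a program whose output is exactly its own source, the self-reference Thompson's compiler attack is built on. A minimal Python sketch, not from the episode itself:)

```python
# A minimal quine: the two lines below print themselves exactly.
# The same self-reproduction trick is what lets Thompson's hacked compiler
# keep re-inserting its own backdoor every time it compiles itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```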

Carl Brown (1:25:00)

So I

would argue no, but it's definitely possible for a human hacker to do that, so we need to treat that as a serious problem. When people argue with me about AI, there are a lot of these hypotheticals, right? About, you know, well, what happens if an AI can, like, make a nuclear plant go critical because it figured out,

you know, how to shut down the control panel or something. It's like, if it's possible to shut down the control panel on a nuclear reactor remotely, then some human is going to figure out how to do it, and we need to block that, right? Any vulnerability that an AI is likely to be able to exploit is a vulnerability that a dedicated team of hackers is going to be able to attack. And so we need to worry about that, but it has nothing to do with the AI,

Carter Morgan (1:25:41)

right.

Right, right.

Nathan Toups (1:25:58)

Well, I guess my point was, is there a way for a hacker to have an instruction set that, yes, hides a set of ulterior instructions in what looks like normal Markdown files?

Carl Brown (1:25:59)

right?

absolutely.

Yeah, so I talked about, I mean, the whole way that OpenClaw, and Moltbook is even worse, works is it's got all these skill files. And I saw an article where basically somebody did an audit of all of the skill files that are running around Moltbook, and some, like, you know, less than 50%, but I think more than 30%, of them were malicious in one way or another.

Carter Morgan (1:26:37)

Hmm.

Carl Brown (1:26:38)

It's just, you know, there are so many ways. I mean, Moltbook is basically a giant version of what hackers would call a command and control network, right? Where you inject a command into the thing, and then all of the machines get it, and all the machines do whatever, right? It's like, you know, hackers spend lots and lots of time trying to figure out how to hack a bunch of machines and build these command and control networks, and here are people volunteering to hook their machines up to them intentionally, right? Yeah.

Carter Morgan (1:26:53)

Yeah.

And you have to write a markdown file, right?

Carl Brown (1:27:08)

But,

you know, and so the whole bit about, you know, obfuscating markdown, and, like, putting in the markdown a curl command that goes and grabs a thing and then executes it, but doesn't actually download it, so you never actually see it... The number of go-grab-this-thing-with-curl instructions that live in the markdown files for OpenClaw skills and Moltbook skills is terrifying.

And even if all of those were legitimate, and I'm not saying they are, but even if all of those were legitimate, all it takes is one of those to get hacked by someone and replaced with something that's bad. Right? I mean, it's terrifying, but it doesn't require an AI level intelligence in order to take advantage of it.
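
(To make the risky pattern concrete, here is a rough sketch of scanning skill files for the curl-piped-into-a-shell trick Carl describes. The directory layout and the regex are our own assumptions; a real audit would need to catch far more variants than this.)

```python
# Flag markdown "skill" files that pipe a remote script straight into a
# shell, so the payload executes without ever landing on disk for review.
import pathlib
import re

RISKY = re.compile(r"curl[^\n|]*\|\s*(?:sh|bash)")  # e.g. curl -s URL | sh

for md in pathlib.Path("skills").rglob("*.md"):  # hypothetical layout
    for lineno, line in enumerate(md.read_text(errors="ignore").splitlines(), 1):
        if RISKY.search(line):
            print(f"{md}:{lineno}: {line.strip()}")
```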

Nathan Toups (1:27:57)

Yeah, that's ⁓ a point. That's a valid point. And a thousand times more terrifying to think about, actually.

Carter Morgan (1:28:01)

Yeah.

So what is the... How are you threading that curve? You've got to thread that needle with understanding things deeply. I think, look, I know some of the graybeards out there would disagree with me, but I do not understand how assembly works at a deep level. I took a class on it in college. I understand the basics of it. But...

I have not needed to know this throughout my career, and I've had a, by all accounts, successful enough career without really understanding any of that. With AI, we have moved up kind of a level of abstraction. And so there are people who would say, like, you don't even know how the code works anymore, the AI just writes it for you. And so what is the role these days of deeply understanding technical material? Is it kind of the same way, where we move up a level of abstraction, and just most developers don't even know how to

manage their own memory, right? It's just not really something we need to do anymore. ⁓

Carl Brown (1:29:09)

Right. So

this is actually a place where the Coding Machines short story is really, really good. The sequence that they talk about of, like, okay, there's a bug here, I don't understand it. Okay, cool, spit out the assembler, let's go look at that. Right? And so the current-day version of that is, okay, this vibe-coded thing is not doing what I want it to do. What the hell?

Okay, let me go dig into the code that the AI generated, and let me figure out what's actually generating this. Okay, that doesn't make any sense. All right, cool. Let me go down to the bytecode that the Python runtime or the Ruby runtime or whatever Java runtime is executing. Let me go look at that. Okay, that doesn't help. Okay, let me go look at the assembly. Let me go look at... So it's just a level of... You know, I keep hearing people saying, you know, why do we need to...

Carter Morgan (1:29:38)

Right.

Carl Brown (1:30:06)

Why do we need the AIs to generate code in one of our coding languages? Why can't we just let the AI generate binaries directly? And it's because, well, when we need to debug it, we don't know how, and we do need to debug it, right? And maybe we'll get to the point at some point where the AIs write perfect code, but I honestly don't think that's possible. Because what I have found in my career is that you get to the point very quickly, in a

Carter Morgan (1:30:13)

right.

Right, right.

Carl Brown (1:30:34)

good project with a competent team, where all of the situations that you envisioned work, and all of the failure cases that you envisioned, you handle correctly. And then you put it out in the world and something goes wrong, and you're like, okay, well, how did that happen? And you go look, and it's because the user did something that you didn't expect, or the data was corrupt in a particular way. And that's where most of

Carter Morgan (1:30:43)

Yes.

Right.

Carl Brown (1:31:01)

the work is, right? That's where most of the hard stuff is. And one of, I say this a lot, one of my most useful debugging tools my entire career has been diff. What I do is, if I don't understand what's going on with something, I put a lot more logging statements in. And then I wait until the bug is triggered, and then I diff the logs from when the bug happened against the logs from when the bug didn't happen, and I see where they start to diverge,
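
(The technique in miniature, sketched with Python's standard library; the log file names here are placeholders, not anything from the episode:)

```python
# Diff the log from a passing run against the log from a failing run and
# print where the two start to diverge: that's where to begin digging.
import difflib

with open("run-good.log") as f:
    good = f.read().splitlines()
with open("run-bad.log") as f:
    bad = f.read().splitlines()

for line in difflib.unified_diff(good, bad, "run-good.log", "run-bad.log", lineterm=""):
    print(line)
```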

Carter Morgan (1:31:15)

Yeah.

Yeah.

Right, right.

Carl Brown (1:31:30)

and go, okay, so now I know at least where the execution path started to go off, right? Let me go dig into the code there and see what's going on, right? And then, depending on what's happening, there's no telling what direction I have to go. It's like, okay, well, now we're talking to the GraphQL server, okay? So let me go look at the GraphQL code. And now let me look at the SQL for that. And now let me look at, you know, the statements that do the insert that create the data that's in the schema that then caused the GraphQL

query to be wrong, or, you know, whatever, right? And so it's just one more layer of, okay, let me go deeper and let me figure out what the code looks like. The same way that they had to go from looking at the C code to looking at the assembler, now we're going from looking at the prompt and the prompt output, to looking at the

Python code or whatever was generated, to looking at the bytecode that ended up getting generated, that's running in the runtime, to the C or the assembler underneath that, and so on and so forth.
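
(One layer of that descent, concretely: Python ships a disassembler in its standard library, so you can read the bytecode the runtime actually executes instead of trusting the source you were handed. A toy example, with a made-up function:)

```python
# dis prints the CPython bytecode for a function, one layer below the source.
import dis

def apply_discount(price, rate=0.1):
    return price * (1 - rate)

dis.dis(apply_discount)  # instruction listing: LOAD_FAST and friends
```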

Carter Morgan (1:32:42)

It's a brave new world. But it's the same,

Carl Brown (1:32:45)

I mean, that is the curse and the blessing of this career, right? I don't know how long you've been doing this, but I've been doing it for 37 years or something, and if you're still doing this in another 15 or 20 or 25 years, you will find that there's a lot of stuff that you learned when you were coming up in the world that you don't need to know anymore, until something goes wrong.

Nathan Toups (1:32:51)

Right.

Carl Brown (1:33:15)

And the number of times that I've been... like, I had this one project where they converted everything to Amazon Lambdas, and then they were like, oh, our database bill is crazy, everything is too slow, blah, blah, blah. And we went and dug into it, and it turned out that the queries, because there was a SQL backend behind the thing, the queries were tiny, tiny, tiny fractions of a second, half a millisecond, not half a millisecond, 15-millisecond kind of things. But of the amount of time it took to log in from the Lambda to the SQL database and then run the query and then log out, the vast majority was setting up the connection. And I was like, cool, you reinvented connection pooling, or the need for connection pooling. And they were like, what's connection pooling? And I'm like, oh, sit down. Let's talk about this one.

Carter Morgan (1:34:05)

Yeah.

Nathan Toups (1:34:07)

Wait.

Carter Morgan (1:34:10)

Yeah.

Nathan Toups (1:34:12)

Not funny.

Carl Brown (1:34:14)

For those of you that don't know, if you look up ODBC or JDBC: there was this thing where basically everything you would do to talk to a database took a long time, and it got frustrating, and things were too slow. And so what we would do is we would create these connection pools, where there were different connections to the database that were just held open, that were logged in. And then when you wanted to run a query, it would just put it in a queue, and one of the things in the pool would take it off, run it, and give it back to you.

And that way you could skip the whole login process, because these connections in the pool stayed logged in all the time. And so a lot of times it's like, I don't need to know that anymore, I don't need to know that anymore. And then something comes up, and you start digging into it, and you're like, oh, I know this. I had to deal with this before. And my guess is that over time, you'll get to the point where you're like, I don't really think I need to deal with this anymore. I mean, one of my favorites was...
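
(A minimal sketch of the idea in Python, using sqlite3 as a stand-in for a networked database where connecting is the expensive part. Production code would reach for an existing pooling library rather than rolling its own.)

```python
# Pay the expensive connect/login cost once per pooled connection up front,
# then let every query borrow an already-open connection from the pool.
import queue
import sqlite3

class ConnectionPool:
    def __init__(self, database, size=4):
        self._idle = queue.Queue()
        for _ in range(size):  # the slow "login" happens only here
            self._idle.put(sqlite3.connect(database, check_same_thread=False))

    def execute(self, sql, params=()):
        conn = self._idle.get()       # borrow an open, logged-in connection
        try:
            return conn.execute(sql, params).fetchall()
        finally:
            self._idle.put(conn)      # hand it back; it never disconnects

pool = ConnectionPool("example.db")
pool.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT, v TEXT)")
print(pool.execute("SELECT COUNT(*) FROM kv"))
```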

Carter Morgan (1:35:01)

Yeah. ⁓

Nathan Toups (1:35:02)

Yeah.

Carl Brown (1:35:13)

Once upon a time, I used to do, like, DOS and Windows admin kind of stuff, back in the 80s when I was working at the university. And one day I took all of my old DOS books to Half Price Books and got rid of them, not that they wanted much for them, but it was like, okay, I need more shelf space. And then not too long after that, Windows created an unattended install thing, where you could basically do a net boot; if you've ever heard of PXE, it's like you can boot a PC off of the network, and then you can have it do a remote install. And so if you're in, like, a corporate environment and you want all the computers to be the same, then you can just do that, right? Well, it turns out that when you do PXE, what it does is it downloads an image, and then it boots off that image and runs the commands on that image. And that image is a frigging DOS floppy.

Nathan Toups (1:35:44)

Yep.

Carl Brown (1:36:04)

And it has to do config.sys and himem.sys, and you might not even know what any of that means. But basically, all of the DOS stuff that I used to do, right, when I was building floppies that would do installations, I now needed to know again. And I had just gotten rid of all of my books, right? So it's amazing how often things come around again, because that code and that knowledge... When someone at Microsoft is looking around for, how do we do this, it's like, oh, okay, well, you know, we'll just make it like a floppy, because we've already got all the code that parses that kind of stuff, and we've already got boot sequences for that and all that kind of stuff. So we just have to have the Ethernet card and the controller card download the floppy, and then we'll just use the same code that we use to boot off a floppy, we'll just use it with this RAM disk.

So it's amazing how often things pop up again. But the great thing about this career, and the worst thing about this career, depending on your personality type (and it will, for a lot of people, change as they go through their career), is that it's always changing. There's always new stuff. There's always something new you have to learn. And it's perfectly okay for people to go, you know what? I'm sick of this.

Carter Morgan (1:37:09)

Right.

Nathan Toups (1:37:18)

Great.

Carl Brown (1:37:18)

You know, I'm going to go into business, or I'm going to go into project management, or I'm going to, you know, be a manager, I'm going to stop coding or whatever, I'm going to go into product management. Because there's a point where a lot of people are like, you know, I'm tired of the rat race, I'm tired of having to keep up with the new technology every time. And actually, the vast majority of the people that were doing computing stuff when I was starting out, that I know, that I've kept up with, aren't doing

Carter Morgan (1:37:37)

Right, right.

Carl Brown (1:37:47)

programming anymore. I've been a VP before; I've had 14 direct reports. That's the most miserable I've ever been. It's just not my personality type. I'm like, you know, I got into computers because, for me, it was easier than dealing with people, and I prefer that, right? So I'm probably going to be doing computers forever. Although, honestly, what I'm doing now

Carter Morgan (1:37:48)

Right.

Nathan Toups (1:38:11)

Right.

Carl Brown (1:38:17)

is much more educating, I guess. When I think of myself as a YouTuber, I'm like... but if I think of myself as an educator, that makes more sense to me. And I think I'm required, that's not the right word, but I think it's incumbent on me to help the people that are coming up now do a better job of the craft, the way there were people that helped me when I was starting out.

Carter Morgan (1:38:21)

Right.

Right, right.

Carl Brown (1:38:48)

And so, you know, I'm doing a lot less programming now than I am script writing and video editing and that kind of stuff. Sometimes I miss it, and sometimes videos take longer than they would have otherwise because I'm playing with something. But, you know, there are times in this career where things change underneath you,

and you either have to decide to change with it, or you have to decide to get off the track and go do something else. There's no shame in either. But understand, and I'm not talking just to you, I'm talking to the listeners too: if you want to be in this career for 35 or 40 years, there are going to be times that things that you loved, and that you really, really enjoyed, nobody wants anymore, and you're gonna have to do something else.

Carter Morgan (1:39:38)

And that's one reason I like your content so much, and Nathan, I'm sure you'd agree. I know some people ask, why are there so few kind of older programmers, is it ageism? And I'm not going to... there are ageism problems in our industry. But something I've seen older programmers point out is just, there are just fewer of us. There were fewer programmers, there were fewer 30-year-old programmers, in the 90s

Carl Brown (1:39:52)

There are ageism problems.

Right.

Carter Morgan (1:40:06)

than

there are today, right? And so there's just fewer people to kind of impart that wisdom to the newer generations.

Carl Brown (1:40:13)

There are actually a lot more than you think. The thing that's different about me is I'm on YouTube.

Carter Morgan (1:40:18)

But that's what I like about your content. Like, you know, the kind of people who are driven to produce content, they're gonna gravitate towards, like, your hypesters, you know, your hot takes, you know, trying to ride the next wave. And something people have expressed about our podcast, what they like about it, is that we provide measured, thoughtful, diligent discussion. And I think it's why we kind of vibe with your channel so well, because you're doing a lot of the same stuff.

And I think I've said on the podcast before: look, I'm sure there are people who have ruined their lives because they can't see the writing on the wall and they don't get out of something before it's too late. But I think there are so many more people who missed out on good opportunities because they got jumpy, and because they said, oh my gosh, here's a change, right? This is scary, I've got to get out of here. And I hold this opinion for any software engineers: like you say, this industry is always changing, and if you're willing to change with it, there are lots of cool, exciting

Carl Brown (1:41:16)

Absolutely.

Carter Morgan (1:41:17)

things

happening right now. And that's what I like about your channel and the help you're giving to maybe the younger generations, which is kind of saying like, I love what you said earlier, just like, look, this has already happened in the industry. I was part of this happening. What a junior did in the 90s is very different from what they did in 2020, which is very different from what they'll do in 2040. And it's okay. You know, I appreciate that a lot.

Carl Brown (1:41:39)

Yeah.

And I mean, the weird quirk that happened to me ⁓ is that, you know, I happened to have had a daughter in college a couple of years ago when this AI thing was starting to happen. And she had some friends that were computer science majors who were freaking out. And more importantly, their parents were freaking out about whether or not they were still going to have jobs when the parents got finished paying for college.

Carter Morgan (1:42:01)

Right, right.

Carl Brown (1:42:06)

And so I was like, okay, you know, I'm happy to write up some stuff and was told, you know, dad, no one my age will ever, you know, find or read your blog. And so the reason I ended up on YouTube was by accident and it was because, you know, I wanted to make some content for a specific group of people. I figured I would have as many subscribers as her computer club at her university had members.

Carter Morgan (1:42:16)

Yeah.

Carl Brown (1:42:34)

But it was like, okay, well, you know, I'll put something out in a format that they can consume. And that way they can show it to their parents or they can talk intelligently with their parents about it. And that way they won't panic because I've, you know, I've seen this panic over and over again. And it turned out that there were a lot more people that wanted that, that needed and wanted that information than I would have expected. But if it weren't for that, that quirk of fate, I wouldn't be on YouTube and I wouldn't even consider being on YouTube. So there are a lot of people my age that have really

useful information to impart; they just haven't kept up with, not the tech, not the programming side of it, but the information dissemination part of it.

Carter Morgan (1:43:15)

Right, right. Well, I mean,

it's hugely valuable. And you've seen that with your channel growth. And yeah, we can't thank you enough. We'll have to have you on another time, because we spent a lot of this episode talking about the AI skeptic side of things. But I think it would be interesting for the three of us to sit down and talk about, how are we seeing AI benefit our workflows? We talked about that just a bit. But ⁓ at any rate, OK.

Carl Brown (1:43:22)

Absolutely.

Yeah. Yeah, I mean, just real quick,

I have Claude Code write all of my first drafts. I tell it, I want something like this, right? And I talked about how I have it do prototypes, and I pull the parts out of the prototype that are useful and put them over into things. You know, I rarely type code the way I used to. It's pretty much always, get Claude to do it, you know, and then I'll go in and change it, because generally it's easier for me to fix the things that I don't like than to tell Claude what I don't like and have it fix it and

Carter Morgan (1:43:45)

Yeah, absolutely. Yeah.

Right, right.

Carl Brown (1:44:07)

screw something else up in the process. But I don't write code anymore. Claude generates all of my code for the first draft for the most part. So, you know, for whatever that's worth, I'm more a ⁓ code editor now than a code writer.

Carter Morgan (1:44:09)

Right, right.

Right,

right. I mean, yeah, obviously the industry is changing in huge ways, but again, it's just so valuable to have someone who has gone through a lot of those changes. And I love hearing that kind of origin story of yours, that your whole point was trying to calm down, you know, your daughter's friends. And I appreciate it, because I tend to be kind of a sober-minded guy, but, you know, you spend all day on Twitter and you're just like, jeez Louise, it's all happening. And then a Carl Brown video will come out, and I'm like, okay.

Yeah, okay, he's making sense. I ⁓ see the other side to this. So we can't thank you enough for your content and also for generously coming on our podcast because we love you and we're so grateful.

Carl Brown (1:45:03)

Anytime.

I love these conversations.

Carter Morgan (1:45:07)

That's fantastic. And yeah, I mean, you can always catch more of Book Overflow right here on our channel. If you're listening to us on YouTube or any sort of audio platform, we do episodes every single week where we discuss a new technical book, and we try to interview as many authors as we can, and we also try to interview cool people like Carl whenever we can. You can find us on Twitter at BookOverflowPod. I'm on Twitter at Carter Morgan.

Nathan has his newsletter at rohorobaldo.com slash newsletter. Nathan, anything I'm missing, anything we want to tell the audience before we sign off?

Nathan Toups (1:45:38)

No, I think we're good to go. I do have a new thing I'm experimenting with, with Rojo Roboto, called Debug Mode. I'm doing a platform engineering office hours every two weeks. It's an experiment. We just did a live stream. You may think it's cool, and you may find it boring. But, yeah, you can come hang out and ask me platform engineering questions live.

Carter Morgan (1:46:00)

Great. Well, Carl, anything you want to leave our listeners with before we sign off?

Carl Brown (1:46:03)

So I've got internetofbugs.com, and I try to keep up with kind of what I'm doing and all that kind of stuff there. I'm working on a Substack that hopefully will be done, or out, by the time that this gets published, so it'll be, like, substack.com slash internet of bugs, or however that works. And yeah, I keep, like, a link tree kind of thing on internetofbugs.com, so whatever it is that I'm working on, that's where I

Carter Morgan (1:46:30)

Nice.

Carl Brown (1:46:33)

have links to all my socials and all that stuff.

Carter Morgan (1:46:35)

Well, for anyone who watched or listened, just know that this kind of, what we think is more measured, reasoned discussion, you can get more of it on Carl's channel and on Book Overflow anytime. So if you liked what you heard today, stick around. We're gonna keep making this content for as long as it's fun and it keeps remaining fun. Thanks, everyone, for listening. We'll see you later.