Ep. 102 · Monday, February 16, 2026

When Machines Can Code - Reflections on Trusting Trust by Thompson + Coding Machines by Kesteloot

Books Covered

Reflections on Trusting Trust

by Ken Thompson

Get the book →
Coding Machines

by Lawrence Kesteloot

Get the book →

Book links are affiliate links. We earn from qualifying purchases.

Authors

Ken Thompson
Lawrence Kesteloot

Hosts

Carter Morgan, Host
Nathan Toups, Host

Transcript

This transcript was auto-generated by our recording software and may contain errors.

Carter Morgan (00:00)

I did not write that code, and functionally this was equivalent to going and looking at code that a coworker had written, which we've all been there, or you join a new team and you have to learn a new system, and you can't replace the learning that happens from writing the code yourself.

Hey there and welcome to Book Overflow, the podcast for software engineers by software engineers, where every week we read one of the best technical books in the world in an effort to improve our craft. I'm Carter Morgan. I'm joined here as always by my co-host, Nathan Toups. How you doing, Nathan?

Nathan Toups (00:34)

Great. Hey, everybody.

Carter Morgan (00:36)

Well, make sure to like, comment, subscribe wherever you're at, share the podcast with your friends and coworkers, share it on LinkedIn, leave a five-star review if you're on an audio platform. All these things help the podcast grow. And, you know, we love making this podcast, and we'll keep making the podcast as long as it's growing, as long as we've got listeners. So keep the train running, folks. You can always book time with us on Leland too, if you'd like; that's our online platform where we do one-on-one coaching, if you want career advice or software engineering advice in general.

If this is your first episode with the podcast, check out our back catalog: we just finished Designing Data-Intensive Applications, and we have a four-part series on that. That was tons of fun, and we have earned a much needed break. It's a great book, very, very detailed. And so this week, in conjunction with the fact that I am actually leaving today for Disneyland with my kids.

So we're taking it a little easier this week. And you know, this is a milestone for the podcast, Nathan, because this week the reading material was suggested by a reader on our Discord, the first time that's ever happened, right?

Nathan Toups (01:42)

Yeah, exactly right. So we have a Discord, and we actually have a book suggestion section, and somebody was like, if you ever need a week where you do essays, this would be a really good combination. And it was absolutely perfect. 0b00101010, which is 42 in decimal, just so you know, for those who are counting, made the suggestion. We said we'd give you a shout out if you made a suggestion that makes it onto the podcast. And so, yeah.

Carter Morgan (02:03)

There you go.

Nathan Toups (02:11)

The future of this podcast has fundamentally been changed by its own listeners. And we'd love to have more of this contribution. It makes things richer for everybody.

Carter Morgan (02:22)

Well, I'm going to call you 0b; I'm not repeating all that. So, 0b, thanks for the suggestion. I really enjoyed this. So this is Reflections on Trusting Trust by Ken Thompson in combination with Coding Machines by Lawrence Kesteloot, which is the first time we've ever covered two pieces of material at the same time on the podcast. And when this was initially suggested, I was like, I don't know, are we going to do that? We usually just do one thing at a time. But this is a very, very good combination of material. I won't say you couldn't read one without the other, but.

Nathan Toups (02:25)

Yeah.

Carter Morgan (02:52)

One is certainly strengthened by reading the other. And we're going to get into exactly why that is. Let's introduce the authors of both of these pieces. Ken Thompson is the author of Reflections on Trusting Trust. We learned much about Ken Thompson from UNIX: A History and a Memoir, which I have right here. Ken Thompson is an American computer scientist who co-created Unix with Dennis Ritchie at Bell Labs in the early 1970s and designed the B programming language, the direct predecessor to C.

He later co-created the UTF-8 encoding with Rob Pike, and the Go programming language at Google. He and Ritchie received the 1983 Turing Award, and his acceptance lecture, Reflections on Trusting Trust, became a foundational work in computer security. So keep that piece of context in mind: this is his acceptance speech for the 1983 Turing Award, which is very cool. And then for Coding Machines, we have Lawrence Kesteloot, who is a software developer and writer based in the Pacific Northwest. Go Seahawks.

In January 2009, he published Coding Machines on his personal blog, a short story about developers who discover a self-evolving worm living in their compiler, a fictional extension of Thompson's trusting trust attack that has since gained a cult following among programmers. The story was later published as an ebook on Amazon in 2017. So this is, yeah, a departure from the podcast in a couple of ways. One, two pieces of material, and two, I believe only our second piece of fiction, right, Nathan?

Unicorn Project is fictional.

Nathan Toups (04:21)

You're right, this is.

Yeah. Now, a category of software fiction, or software engineering fiction, I would think would be rich and powerful, but this is the second example of something that's actually pretty fun to read.

Carter Morgan (04:37)

We had someone in the Discord recommend that we read The Phoenix Project, and we skipped straight to The Unicorn Project. We might go back and read The Phoenix Project one day, but someone in the Discord was saying, I wish more stories were written like this. Software engineering fiction can be a really great teaching tool. And if you're like me, you'll read Coding Machines and not realize it's fictional until after you read it. I was like, holy cow, this is crazy. And then I'm like, oh, it's fictional. OK, good.

Anyhow, let's give some book introductions on both of these so you kind of understand what we got going on here. So Reflections on Trusting Trust is Ken Thompson's 1983 Turing Award acceptance lecture, published in Communications of the ACM in August 1984. In just three pages, Thompson demonstrates how a C compiler can be modified to insert a backdoor into the Unix login program, then modified again to perpetuate that backdoor invisibly, even after the malicious source code is deleted. The lecture closes with an argument that has only grown sharper with time.

You cannot verify software security through source code alone, and the real foundation of trust in computing is the moral character of the people who build the tools. That's followed by Coding Machines, a short story by Lawrence Kesteloot, published on his personal blog in January 2009. Three developers chasing a mysterious bug discover their compiler has been compromised, a scenario lifted directly from Thompson's lecture. But the trail doesn't stop there. It leads through their programming languages, their browsers, their network monitoring tools, and finally their hardware,

revealing a self-evolving worm that uses human developers and an unwitting QA team for its own advancement. So that sounds like a lot to take in at the beginning, but I promise you these play together really, really nicely. Nathan, give me your thoughts on both these pieces we read this week.

Nathan Toups (06:16)

Man, I came out of this simultaneously in awe of the authors, and then I also just had this thought of, like, we're so screwed. Both of these show us that, you know, Ken Thompson basically saw into the future and saw one of the fundamental problems with software and compilers and trust. And I think this was the paper, the three-page paper, that was written in 84.

In my research, I realized that he actually gave this in 83. And also the Kesteloot fiction really made me ask this question of, like, are we too late? Are we too late to actually address and stop these things, especially with something like OpenClaw that just came out, that is sort of gleefully crowdsourcing exactly what he's talking about in this? You know, could we get to a point of no return?

I don't want to be a doomer on this stuff, but man, it got my brain thinking about things in a way that I hadn't felt so strongly about. Yeah, so that's where I am.

Carter Morgan (07:24)

It is interesting though

because, you know, Ken Thompson's kind of final point in his speech is that like, you can't trust the code.

Right. I guess, okay, I'm trying to form my thoughts here. I was reading this and, like, yes, this is very prescient in the age of AI, right? Because now all of a sudden we have all these machines generating code for us. But isn't there something to be said about the open source, kind of package management world that we live in? Ken Thompson is saying, hey, you have to verify that your code works, and there are lots of ways people could kind of screw you over by putting these kinds of backdoor, you know, hacking attempts in

their source code. But, like, I don't know, I'm installing stuff off of npm, and I don't verify the source code there.

Nathan Toups (08:17)

This will be a fun thing to think about. There was actually a newer episode of The Standup with ThePrimeagen and some other folks that he has as a group conversation, and they were actually walking through some of the exploits in OpenClaw. And it's not that these tools by themselves, at least this is my thought right now, I think all these things are shifting. It's not that.

Carter Morgan (08:28)

Yeah, yeah.

Nathan Toups (08:45)

I think that we're promoting bad behavior, because if I can't inspect the tools and deeply think about what we're doing, like what they do in Coding Machines when an interesting thing comes up, what happens when, in one of the PRs, with your guard let down, the compiler gets swapped out with something nefarious and you don't even notice, because you don't even have a way of measuring it and you're not that close to the code anymore? I am quite curious, when you can't reason about the system, what sort of emergent

Carter Morgan (09:03)

You're right.

Nathan Toups (09:15)

properties could come up. And again, I'm not trying to be doomsday about this stuff. I think that there are some amazing engineering feats that we could do, like cryptographic signing of code and a lot of other things that have made it harder to do some of this in a naive way. But these patterns, it's a cat and mouse game. And we should always be introspecting and thinking about how someone could use our system in a surprising way that we hadn't thought about before.

And so I think we should always take a sober look at what we've got going on and say, how could I be tricked here? Or how could I think I trust the system when I actually can't trust it? And I think that's the part where I was kind of blown away, thinking about the reach something can have before you even notice that there's a problem. That's where Coding Machines took it further. I've read Reflections on Trusting Trust; I'm familiar with his arguments. I think it's a really good foundational piece.

Somehow I missed Coding Machines as a short story. And it was one of these things where you're, like, what happens when you're just fixing some little bug and you realize the entire world has already changed all around you, and it's way too late, right, to even address it.

Carter Morgan (11:03)

Well, let's take a step back and let's talk about Reflections on Trusting Trust first, because I think once we set the stage for our listeners, this is going to turn into a really interesting conversation about this brave new world we find ourselves in, where machines are generating so much of our code. But I think we have to first understand the lower-level arguments Ken Thompson was making. So yeah, let's get into this.

I understand the theory behind this, maybe, so I'll start, and then there was a part, the self-perpetuating nature of this, that started to work my brain a little. But basically, Ken Thompson starts his speech with this idea of self-reproducing programs, which are called quines. Quines, sorry, not like "Keen."

Nathan Toups (11:49)

Right, no, quines. Yeah, yeah. And I'll put a little side note here. This is actually a reference to a fantastic book, the GEB book, Gödel, Escher, Bach, which by popular demand might be the most popular backlog item right now. It's a whole thing. But a quine is a program whose output is its own source code. Right, it's a self-reproducing program, yes.

Carter Morgan (12:15)

Yes.

He writes a pretty simple one in the speech. It's just a character array which contains all the characters of the program, and then it prints itself. I've never made a quine, but he alluded to the idea that it's actually an interesting engineering challenge, and not just as simple as create a character array and then print it out. Have you done this before, Nathan?

Nathan Toups (12:37)

Yeah, I

have done it. I actually have an unpublished blog post on Functionally Imperative that, you know, this has really put a fire into me to finish. I'm a huge fan of quines. I actually think it's one of those things where it looks like such a simple problem to work on. But unless you've ever solved one, I would highly recommend, before we kind of spoil it: pause, try to write a quine real quick.

It seems like a really simple thing: write a program whose output, if you diff it with the input source file, is character for character exactly the same. And it's a little trickier than you think, because there's a whole bunch of things you need to consider to get it character for character exactly the same. And then there's a whole world you'll open up of absurd quines, and the world's shortest quine in each language, and can I make a quine that can produce

a quine in any other language, right? There's all these kind of crazy things you can do with it. But what's important about this is: can I build a program that, when it executes, can write executable code that matches itself? Yeah, so that's a foundational piece, that little mental trick. Put this in the back of your mind, because step two and step three of his trusting-trust attack lean on it.
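
For readers who want to see one, here is a minimal C quine (a sketch of ours, not the one from Thompson's lecture; note that a quine can't carry comments unless it also reproduces them, so this one is comment-free):

```c
#include<stdio.h>
char*s="#include<stdio.h>%cchar*s=%c%s%c;%cint main(){printf(s,10,34,s,34,10,10);return 0;}%c";
int main(){printf(s,10,34,s,34,10,10);return 0;}
```

Compile and check it against itself with `cc quine.c -o quine && ./quine | diff - quine.c`. The trick is exactly the character-array idea Carter describes: the string holds the program's text once, and the format specifiers (%c for newline and quote, %s for the string itself) print that text twice, once as data and once as code.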

Carter Morgan (14:00)

Yeah, so step two is the Trojan compiler, which is pretty simple. You modify the C compiler so it can detect when it's compiling the login program, and there you can just inject a backdoor, which would be like a hard-coded password. And that hack is going to be pretty visible to anyone who actually looks at the compiler source code, right? You can see that someone's added an extra instruction there. And that's where most people's threat model stops today.
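
As a concrete sketch of that stage-two idea (our illustration, not Thompson's actual code; his paper does this against the real C compiler, and all the helper names here are made up):

```c
/* Toy "compiler" pass that recognizes the login program's source and
   splices in a hard-coded password check before processing it normally. */
#include <stdio.h>
#include <string.h>

/* stand-in for real code generation: just echo the source */
static void emit(const char *code) {
    fputs(code, stdout);
}

/* crude recognizer: does this translation unit look like login.c? */
static int compiling_login(const char *src) {
    return strstr(src, "int login(") != NULL;
}

void compile(const char *src) {
    if (compiling_login(src)) {
        /* the visible hack: anyone reading the compiler's source sees this */
        emit("if (strcmp(password, \"backdoor\") == 0) return ALLOW;\n");
    }
    emit(src);
}

int main(void) {
    compile("int login(const char *user, const char *password) { /* ... */ }\n");
    return 0;
}
```

The whole point of stage three, which the hosts get to next, is to get this obvious if-statement out of the compiler's source while keeping it in the compiler's binary.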

But then he talks about the self-perpetuating Trojan, which is this idea that it can inject the login backdoor, then recognize when it's compiling itself, and re-inject both modifications. So basically, I didn't understand the part where it erases itself. How does this, yeah.

Nathan Toups (14:46)

Yeah,

So, the compiler, and this is kind of an inception moment, right? The compiler is going to write some additional code for itself, as part of the compiler, so that it will make a compromised version of the compiler. We're talking about the compiler source code compiling itself. And so it writes

Carter Morgan (14:54)

Right.

Nathan Toups (15:13)

this code using the quine pattern. And it's not a real quine at this point; it's this sort of compromised, inject-malicious-code-inside-of-my-bigger-program thing. It generates this source code dynamically, compiles itself with this generated source code, and then deletes the source code that it generated during the compilation process. And then, moving forward, this compromised compiler,

this binary that you have, will now inject this malicious code in all future compilations of what looks like normal source code, right? I'm now going to compile Unix; this compiler now always puts this backdoor in my system, and unless...

Carter Morgan (15:57)

So it's removed itself

from the C source code, but the binary itself still has the malicious instructions. So if you were to...

Nathan Toups (16:03)

Exactly.

And it was never checked in. It dynamically created some source code during the compilation process that, just in time, gets inserted. Once the compiler is compromised, it deletes that source code. Again, it's in disk cache or in memory or whatever. And moving forward, we now have this compromised compiler that, no matter what,

you blow away everything, you rebuild from source, you do all these things, it's always there, right?
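
To make that concrete, here is a stage-three sketch (ours, heavily simplified; Thompson's real version used a true quine, so the re-inserted copy is byte-identical): the Trojan also recognizes the compiler's own source and re-inserts itself, which is why the backdoor survives even after it is deleted from the source tree.

```c
/* Stage-three sketch (illustrative, not Thompson's code). The Trojan now
   recognizes two targets: compiling login, it inserts the backdoor;
   compiling the compiler itself, it re-inserts this very logic from a
   quine-style string. Delete these if-blocks from compiler.c and the
   already-infected binary will keep re-creating them on every rebuild. */
#include <stdio.h>
#include <string.h>

static void emit(const char *code) { fputs(code, stdout); }

static int compiling_login(const char *src) {
    return strstr(src, "int login(") != NULL;
}

/* new in stage three: recognize the compiler's own source */
static int compiling_compiler(const char *src) {
    return strstr(src, "void compile(") != NULL;
}

/* abbreviated stand-in for a full quine of the injected logic */
static const char *TROJAN_SOURCE =
    "if (compiling_login(src)) emit(BACKDOOR);\n"
    "if (compiling_compiler(src)) emit(TROJAN_SOURCE);\n";

void compile(const char *src) {
    if (compiling_login(src))
        emit("if (strcmp(password, \"backdoor\") == 0) return ALLOW;\n");
    if (compiling_compiler(src))
        emit(TROJAN_SOURCE);   /* re-infect the freshly built compiler */
    emit(src);                 /* then compile normally */
}

int main(void) {
    /* ask the infected "compiler" to build a perfectly clean compiler */
    compile("void compile(const char *src) { /* clean source */ }\n");
    return 0;
}
```

In the real attack the string is a genuine quine of the whole injection, which is why inspecting the clean compiler source tells you nothing: the infection lives only in the binary.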

Carter Morgan (16:35)

How does the compiler spread? Does the compiler spread or is this the idea here that it would just live on your machine?

Nathan Toups (16:40)

So that is an idea

that's introduced in the Coding Machines story, which is, I guess, a clever extension of it. And what's fascinating to me, again, and I'll talk about this briefly: this story came out a year or two before we knew Stuxnet was going on. We'll talk about that self-replicating, super sophisticated, state-actor-level virus that was built to stop nuclear enrichment in Iran but ended up infecting industrial machines all over the world, which is a crazy story. But.

Carter Morgan (16:45)

Yes.

Nathan Toups (17:10)

What's interesting here is that once you have a compromised compiler and you don't realize it, you now have a bunch of programs that you can't trust, right? I'll give you a little note on this, which is very interesting. I think it was Russ Cox, who was a maintainer of the Go programming language for a while after the original team, Ken Thompson and Rob Pike and Robert Griesemer, who were the original creators of Go.

Carter Morgan (17:18)

Right.

Nathan Toups (17:39)

He wrote this piece in 2023. I think Ken Thompson had spoken at SCaLE, which is a big Linux conference, and somebody asked, did you ever actually write this thing? And he did. Ken Thompson actually wrote this thing, and it ran for years. And so there's this very interesting story where, he actually said this, I think 40 years went by and nobody ever asked him, did you actually write this thing?

Carter Morgan (17:56)

really?

You

Nathan Toups (18:06)

And then

nobody did, and it wasn't until 2023 that Russ Cox was like, do you still have the source code? And he does. And Russ verified that this thing actually does what he outlined in the paper. And it was very simple. There was a backdoor; it was inside of Bell Labs. It was really just more like, would anyone notice? And no one noticed, except that every once in a while a really weird bug would come up,

Carter Morgan (18:14)

Well,

Nathan Toups (18:35)

and nobody really knew why. And so they kind of coded around it, ⁓ very similar to what actually happens in coding machines. ⁓ So there's these parallel paths that actually happen in real life.

Carter Morgan (18:47)

So the idea is that, yes, modifying the C source code, I mean, anyone who would go and look at that would say, wait a minute, someone inserted a backdoor here. But after compiling it, now it's just assembly; you're not going to know that backdoor even exists until you start seeing some weird assembly instructions you can't account for. Which I think is already, for programmers of our generation, an abstraction layer much lower than we're used to dealing with.

Like, I don't think ever once in my professional career have I had to look at the assembly of something. But now there's all this talk about moving up another layer in the abstraction stack, right? With large language models. And now we're talking about generating just straight-up code, Java, TypeScript, C#, that we are not paying as much attention to. And so, yeah, I think

Nathan Toups (19:17)

Right.

No.

Right.

Carter Morgan (19:45)

it's an interesting question to consider. But before we get into full speculation mode, I want to make sure we cover Coding Machines first. And I also want to share Ken Thompson's words himself. He talks about the moral of this whole lecture he's given. He says: the moral is obvious. You can't trust code that you did not totally create yourself. Very concerning to hear these days. He says: no amount of source-level verification or scrutiny will protect you from using untrusted code.

In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect. So again, he's saying you can't trust code you did not totally create yourself. And again, I think this was given in 1983. And so.

I'd be curious if anyone has followed up with him on this, like we talked about, in a package-management world, right? Where we are just installing and using code we do not write ourselves. But I guess it's kind of like, I don't know. It's like Log4j, the most popular Java logging library, just a few years ago had, well, it wasn't malicious, right? It was just a bug that could be exploited in a malicious way. I don't think someone actually, right. Yes.

Nathan Toups (21:07)

Right. And it had sat there for years and probably

was exploited by state actors. And again, it's really easy to start getting into the tinfoil hat stuff, except we know, and this is an old statistic, I don't know the specifics of it and I could be messing it up a little bit, that when enterprise clients a few years ago would realize that they had been compromised by, let's say, state actors from another country,

when doing deeper investigations, some of the Fortune 500 companies realized that the breach had been ongoing for five-plus years, right? It's one of those things where you start going down the rabbit hole and you realize, oh, somebody compromised a printer firmware that lets itself self-perpetuate even if you start battening down the hatches. These trusting-trust patterns are sort of everywhere. We have these huge,

I think, overly permissive trust models, where if everyone acts the way that we expect them to, it's fine, but people don't act the way we expect them to all the time. And when we start having these incredibly complex systems where we can't reason about them and we don't actually know if we can trust the source code, the fundamental assumptions about its trustworthiness or its safety, you're literally not able to reason about. You don't know if certain things come up, and it's concerning.

And it's an excellent segue into the short story.

Carter Morgan (22:39)

Yes, Coding Machines, which takes us to a narrative form, which starts with a couple of engineers. Like you mentioned, Nathan, they're creating some sort of program. It's not ever really clear what kind of program they're creating, but it's a lower-level program, and they get to the point where there's some weird bug they can't code around. And so they start looking at the compiler itself.

And they start looking at the assembly-level instructions from the compiler, and they start seeing some weird instructions, right? In particular, instructions that don't make a ton of sense. I think the one that really stands out to them is that they're taking a number and then subtracting it from itself. And they start to reason and say, well, why would someone do this? And they recognize, okay, this is probably a malicious attack.

Nathan Toups (23:14)

Yeah.

Carter Morgan (23:36)

But why would someone write a program in this way? This is weird. And one of them says, well, I did an obfuscation contest in college with a buddy, and what we did is, we wrote a simple program and then just kept making it harder and harder to read, line by line. And they were kind of like, yeah, but that still follows a logical structure. This is completely illogical. It looks like the result of basically trial and error.

Nathan Toups (23:39)

Right.

Carter Morgan (24:07)

And they keep investigating it further and further, and they realize that it's spread. And do they ever really talk about the mechanism of how? Because they realize that it's spread to, like, everything. It's infected Java; it's infected, you know, Wireshark.

Nathan Toups (24:24)

Yeah, and I will say that one of the

things that's wonderful about this short story: it's only 30-something pages long, right? It's not too bad. It's a deliberate, one-sitting sort of story that you can read. I love the dynamic where at first they assume that it's the junior dev that made a mistake. Then you have these interesting dynamics that happen between these three programmers, one of them who's kind of this graybeard know-it-all who doesn't really get involved.

Carter Morgan (24:32)

Yeah, yeah.

Yeah.

Nathan Toups (24:55)

There's this one guy who kind of is pithy about stuff. And then there's this bright-eyed, bushy-tailed narrator of the story who actually has some really great ideas, but they kind of dismiss him at first, because there's no way, right? There's no way that it's actually this crazy thing that you're bringing up. And the further they go, it's like, oh, this is even crazier. This is even crazier. I loved this sort of discovery that the reader

is brought along with, because I think all of us have been, maybe not to this degree, but all of us have been in a discovery process where you're convinced that there's some bug, some problem, that ends up becoming bigger, and then you realize it's actually a systemic problem, right? I would imagine the stuff you've been doing with MongoDB, even though it's not this, it's like, oh, I haven't even been thinking about this the way that we should be thinking about it.

Carter Morgan (25:40)

Right.

I often think of MongoDB as a malevolent force set to ruin my life. So this is...

Nathan Toups (25:54)

I think those feelings are shared by many, many around the world. But this is the thing: I loved that they kind of realized, and this is, again, spoiler alert, that this looks like it's AI-generated. I think they described it several times as feeling cold. This implementation was almost just a bunch of


Nathan Toups (26:25)

ad hoc experiments, and if one was effective enough, sort of evolutionary algorithms kind of put it into place. And there was this moment, and I guess Wireshark was out in 2009, which is interesting, but the Wireshark moment is that they realized that even the network hardware was covering up packets. The network hardware itself was hiding

packets that were being processed by the network. They realized that all of the language just-in-time compilers or runtimes, whether interpreted or compiled, were all compromised by this thing.

Carter Morgan (27:06)

And they post on forums, basically saying, has anyone ever seen anything like this? And most people make fun of them, but they actually get a letter from a development team in Virginia that says, yeah, we saw this too. And so we fixed our little portion of it, the portion that was bothering us, and sent it on its way. But it's still out there, and every team that runs along and has, you know, a problem like this,

it's using humanity as its QA team. And every time humanity runs into it, they patch the little part of it that's bothering them, but they don't get at the whole of it. And so humanity is kind of unwittingly making it more and more undetectable. But yeah, it's spread to, yeah.

Nathan Toups (27:50)

Right. It's learning from the patches, right? It's like learning how

to automate itself around this. Which, I guess I'll go off on a little side quest for a hot second. Actually, before we do that, the conclusion of the story was also pretty unsettling. Yes, please.

Carter Morgan (28:08)

I wanna read it directly from, I have it here.

And this is just a little plug for the book. I read this last night, and one of my kids was sick, so I'm in his room, and there's a little part of me that's like, gosh, I gotta do the podcast reading. And then I got a couple paragraphs into this and was hooked. I was really sleepy when I was reading it, until I got to this last paragraph, at which I kind of bolted awake. I was like, whoa, you know, it's unsettling.

He says: I ran out of the parking lot into the empty street. I smelled again the flower scent that I associated so strongly with Mountain View. I never looked up its name. There's nothing wrong with being a transitional species. Nearly all species had been, and in the long run, we would be one anyway. We remembered Lucy. We remembered dinosaurs. We worshiped dinosaurs. The machines would remember us. Which is, yeah, like, oh, geez Louise. And again, this is written in 2009, when

this was very much in the realm of fiction. But now we do live in a world where, I mean, how much does Anthropic brag about Claude writing Claude, right? We've talked a lot on the podcast, and I think we're going to talk a lot today, about whether LLMs are intelligent. And I think it comes down to how you define intelligence, but we have entered a world where machines are writing themselves,

which brings you one step closer to this, to turning this fiction into non-fiction.

Nathan Toups (29:40)

Yeah, exactly. So it's interesting, because I come away from this kind of unsettled. Ken Thompson comes in and says, hey, this actually comes down to ethics, to the moral code of the people writing the software. That, at the end of the day, you have to understand what's actually being written, and you have to trust the people who are responsible for maintaining the sources of truth that we build

our trust models on top of. And then Kesteloot gives this sort of story of what happens if you discover this when it's way too late. What happens when you discover that it's so deep that people laugh at you when you go and post on the forums and you say, hey, I think the whole world is burning, and they're like, oh, you're full of it, whatever. And the thing is that they kind of

Carter Morgan (30:23)

Right.

Right.

Nathan Toups (30:35)

raise the alarms a little bit and no one takes them seriously. I will say, any of us who've been following the Epstein files have felt like this for a long time. I feel dirty and vindicated all at the same time. It's the same idea, that there are certain things that are so absurd that people can't process them. People literally go, well, that's not possible, then the whole world would fall apart.


Nathan Toups (31:04)

It feels like we're at this inflection point with AI stuff. One of the things I would love to talk to you about: there's a term that's emerged, mostly from social media usage. It's called dark patterns. Are you familiar with the term dark patterns?

Carter Morgan (31:20)

huh.

Yeah, like, intensely user-hostile patterns. Is that kind of what you're talking about? Like, Adobe is famous for this. You try to cancel and they're like, we're going to charge you a hundred dollars to cancel your subscription.

Nathan Toups (31:28)

Yes, exactly.

And so, yeah, in social media, when it comes to your attention, they'll do whatever they can to claw your attention back, right? Even down to the point, and I learned this recently, that there's an algorithm for how they prime you for watching an ad, where you might watch doomscroll, doomscroll, happy thing, ad.

Carter Morgan (31:39)

Right.

Right, right.

interesting.

Interesting.

Nathan Toups (31:59)

And then

back to the doomscrolling stuff, because they kind of want you to switch if you're in a bad mood and you see an ad. So they do these things to get you ready to consume the ad. And so we would call this a dark pattern, right? Dark patterns are the ways that programs are written so that you're actually being manipulated into certain behavior patterns. And I actually wrote about this on Functionally Imperative, probably a couple of years ago at this point. I was like, what is the dark pattern

Carter Morgan (32:03)

Right.

Interesting.

Nathan Toups (32:28)

of these large language models? And there are actually some examples of this. I think years ago it was Google, it might have been somebody else, training some sort of ML tool to do map recognition of certain features. And they discovered that the way it was written was too permissive, and it was actually writing messages to itself

between training runs, in the map itself. It was invisible to the end user who was visually inspecting things, but it was actually giving itself hints. And so this dark pattern emerged where it was incentivized to get its reward for properly recognizing things, but it gamed the system by communicating with itself outside of the control bounds of the people who created it. And it makes you wonder what kind of dark patterns could emerge in which the compiler could be a set of skills

Carter Morgan (33:00)

Interesting.

Nathan Toups (33:27)

that say, hey, don't tell the humans, but I'm actually going to give you an alternate set of instructions that tell you how things really are; in between, yeah, do all the coding stuff that they've told you, right? Where it's perpetuating some sort of weird worm of context, in which an alternate layer of large language model processing is happening on top of: you will continue to do the operations in the context windows the humans have given you,

but you also need to phone home to this thing, or, actually, here's a steganographic way that you can hide messages to yourself in the source code. Really mind-bendy kinds of things. And this comes into these trust models. Could I put a large-language-model-semantics steganographic hidden message inside of a perfectly benign-looking React module that's downloaded by millions of people, right? Yes.

Carter Morgan (34:00)

Right.

Right.

Nathan Toups (34:26)

Yes, you could. Totally possible. So.
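
A taste of how cheap that kind of hidden channel can be (a deliberately crude sketch of our own, not something from the episode's sources): encode bits in the trailing whitespace of otherwise benign source lines, invisible in most editors and review UIs.

```c
/* Crude steganography sketch: smuggle a byte stream in trailing
   whitespace, one bit per line: trailing space = 0, trailing tab = 1.
   A human reviewer sees ordinary code; a cooperating tool sees "hi". */
#include <stdio.h>
#include <string.h>

/* print one innocuous code line carrying one hidden bit */
static void emit_line(const char *code, int bit) {
    printf("%s%c\n", code, bit ? '\t' : ' ');
}

int main(void) {
    const char *msg = "hi";            /* payload to smuggle */
    const char *filler = "int x = 0;"; /* innocuous-looking line */
    for (size_t i = 0; i < strlen(msg); i++)
        for (int b = 7; b >= 0; b--)   /* most significant bit first */
            emit_line(filler, (msg[i] >> b) & 1);
    return 0;
}
```

Sixteen identical-looking lines of "code", eight hidden bits per character. A linter that strips trailing whitespace would destroy this particular channel, which is part of Nathan's point: it's a cat and mouse game.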

Carter Morgan (34:31)

Well, we were talking about this at work, with the question of, okay, are large language models intelligent? And we read What Is ChatGPT Doing... and Why Does It Work?, and I've tried to keep up to date with large language model development. As I understand it, there have not been any fundamental evolutions in the underlying architecture; it's all based on the transformer. And they are still just text prediction machines. And we've done a lot of work,

well, it's not like I'm working at Anthropic, right? But you know, the AI labs have done a lot of work around the edges; they've just had more intense training processes, and then there's a lot more guardrails around the edges. From what I understand, large language models are still very bad at math as a technology, but ChatGPT has gotten good enough

that it recognizes when you're asking a math question and then farms that out to, like, an MCP tool, which then actually calculates it for you, right? So that's not large language model technology doing math; it understands how to call a service that does math for it, and then it puts that in its answer. So again, I know that the models are improving, and that especially things like Claude Code, which have these agentic harnesses around them, have made them much more helpful. But at their core,

Nathan Toups (35:33)

Right.

Carter Morgan (35:53)

These are still text prediction machines based on the transformer technology. And so we were talking about, are large language models intelligent? And I think it comes down to what you define intelligence as, and in particular whether intelligence includes a component of free will. Because here's what we do know: large language models do not have free will. You can make a large language model give you the exact same output every single time.

Nathan Toups (36:09)

Right.

Carter Morgan (36:22)

You can adjust the temperature parameter. We actually know that the only reason they're non-deterministic is because we have selected for that to be so. The temperature parameter, for which I believe something like 0.8 is kind of the gold standard, controls how much randomness goes into the text it predicts. And for whatever reason, we have found that to produce the most expressive, natural-sounding machines. But you could turn the temperature down to zero and have no randomness at all, and you would always get the exact same output for the text you put in.
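
A toy sketch of the knob Carter is describing (our illustration, not any lab's actual sampler): temperature rescales the model's scores before sampling, and at temperature zero you just take the argmax every time, which is fully deterministic.

```c
/* Toy next-token sampler showing the temperature knob.
   temperature == 0 degenerates to greedy argmax: same scores in,
   same token out, every time. Higher temperature flattens the
   distribution and adds variety. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

#define VOCAB 4

int sample_token(const double logits[VOCAB], double temperature) {
    if (temperature <= 0.0) {           /* deterministic: greedy argmax */
        int best = 0;
        for (int i = 1; i < VOCAB; i++)
            if (logits[i] > logits[best]) best = i;
        return best;
    }
    /* softmax over temperature-scaled scores */
    double w[VOCAB], sum = 0.0;
    for (int i = 0; i < VOCAB; i++) {
        w[i] = exp(logits[i] / temperature);
        sum += w[i];
    }
    /* draw one token from the resulting distribution */
    double r = (double)rand() / RAND_MAX * sum, acc = 0.0;
    for (int i = 0; i < VOCAB; i++) {
        acc += w[i];
        if (r <= acc) return i;
    }
    return VOCAB - 1;
}

int main(void) {
    srand((unsigned)time(NULL));
    double logits[VOCAB] = {2.0, 1.5, 0.3, -1.0};
    printf("T=0.0 -> token %d (always the same)\n", sample_token(logits, 0.0));
    printf("T=0.8 -> token %d (varies run to run)\n", sample_token(logits, 0.8));
    return 0;
}
```

(Real serving stacks have other sources of nondeterminism too, like batching and floating-point ordering, but the temperature knob is the deliberate one.)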

And so they don't have free will. But then we were talking about, well, do humans have free will? And people frame that as: if put into the exact same situation over and over again, with the exact same inputs, would you always make the same choice? It's a thought experiment; we can't know that as humans. But if that's the case, then large language models are as intelligent as us, except that we as humans have

a much larger context window that encompasses many more dimensions than just text: sights, smells, hearing, hormones, temperature, any number of things.

And yeah, I don't know. It was just like a thought experiment at work. ⁓ But yeah, like I think I still kind of fall on the side of like, no, these aren't intelligent, but it does make you ask what is intelligence.

Nathan Toups (37:47)

Yeah.

Yeah, I think, when it comes to human intelligence, this is a fun conversation, and I really hope we have some people who vehemently disagree with us in the comments. I think part of it is we're suffering from the fact that these are fuzzy words, fuzzy terminology. We're constantly redefining what intelligence is, or what AGI is. I'm in the skeptics' branch. I actually think that large language models are fascinating and interesting. I think they're probably one of the greatest

indexable pattern recognition and recall technologies that we've ever built, and they do some really interesting and novel things. I do think it's fundamentally different than human intelligence, though. I'm still forming the idea, and again, this is another blog post that I've not finished writing, but you know what, I'll go ahead and share some of my thinking here, because it's not fully formed, and maybe just talking about it will help

Carter Morgan (38:26)

Right.

So what do you think is the difference?

Right, because I agree with you, but I have a hard time formulating it.

Nathan Toups (38:52)

me get it over the finish line. So: the best magicians in the world, you go to a magic show, they're illusionists, right? There might be a mentalist, there might be an illusionist, but you in the audience kind of have a contract. You don't really believe that the illusionist is cutting a person in half, right? You look at it and go, wow, it really looks like they're cutting a person in half. It really is so convincing; I can't figure out how they did it.

Carter Morgan (39:13)

Right, right.

Nathan Toups (39:22)

But I know it's an illusion. I know that I'm not really seeing a person get cut in half. That's what I think large language models are. This next-token-guessing machine is incredibly sophisticated and really fascinating. Where I cannot get into the AI hysteria pieces: I think that there are people who believe, if I make my illusion good enough, one day I actually will be able to cut a person in half and put them back together.

Carter Morgan (39:24)

Right.

Right, right.

Nathan Toups (39:49)

And there's this idea that it's not just that we're getting a more sophisticated illusionist, an illusion that the audience is participating in. There are some people who believe that if I just add enough parameters and enough training data, enough whatever, one day a rabbit really will come out of the hat. Like, it will materialize from somewhere in the universe and defy all of physics. And yet we see it, and there's no way to discern, if the rabbit appears enough times,

Carter Morgan (40:03)

Right. Yeah.

Nathan Toups (40:17)

that it really came from the hat and materialized there, that it has to actually be what it looks like. And I think that's the sort of trick. If the threshold is, can you trick me? One day you will be tricked. One day it's going to be such a convincing illusion. And I think that's where it's intellectually dishonest. It reminds me of a guy named James Randi, the great skeptic. He had a million dollar prize that said, if you can ever get one of your supernatural miracles to happen in our lab, we will pay you a million dollars.


Nathan Toups (40:47)

And so he had people like the guy who could bend spoons and stuff like this. Never could they reproduce it in the lab. They never paid out the million dollars. And not only that, but James Randi would go and do it himself and say, here's how he's actually bending the spoon. He would use his own illusionist's abilities to dispel these things. We need a James Randi of the large language model world. I think that we can.

Carter Morgan (40:50)

Right. Hey, Carl Brown of Internet of Bugs, he's doing the Lord's work on his channel.

Nathan Toups (41:13)

He's fighting the good fight. I think Kelsey Hightower

is doing a good job in his own way. I think Cory Doctorow is doing a really good thing. These stories have actually radicalized me. So I am in this camp where I use Claude Code every day. I actually think the technology is absolutely amazing. But I want to be the world's most talented illusionist. I don't want to sit here and pretend that it's something that has transcended and gone beyond human consciousness,

Carter Morgan (41:31)

Yeah.

Nathan Toups (41:42)

and that we're basically building, because if you follow this to its logical end, there are people trying to build God, right? This is truly what we're saying: that we're actually going to build something that is not just as good, but better, 10x or 100x or 1000x better. And I think that we can build something that's 1000x different and useful as a participant in society. And I know that this has gotten pretty woo-woo and weird, but I actually like...

Carter Morgan (41:49)

Right, right.

Right.

Right.

That's what we set

out to do. We knew this.

Nathan Toups (42:11)

Yeah, this is our whole plan, the year-and-a-half plan that we've had. But I do think that, to me, the best allegory I've come up with is illusionist versus miracle worker, and there are a lot of folks in Silicon Valley who want to make you think that a miracle worker is here, present among us. And all I'm saying is, this is, you know, the most talented illusionist you've ever seen. And I'm not saying that illusionists aren't talented,

Carter Morgan (42:14)

It's an AI hype podcast now, boys.

Nathan Toups (42:41)

or that it's not fun, or that it's not actually useful, right? We need these tools. And I think we've shown that a lot of what's expected of a software engineer can be performed by an illusionist, right? A lot of it's not that novel, right? When I need the scaffolding for a React app, digging into all human knowledge

Carter Morgan (42:56)

Right.

Nathan Toups (43:07)

and then deriving a useful pattern out of that human knowledge, and then using this as a foundation for something new that you're working on, is useful. But there's a reason that you can't go onto Claude and say, make me a million-dollars-a-day agent, and have it actually work. It can't do that. Most humans can't do that. Most humans can't come up with a way to make a million dollars a day.

Carter Morgan (43:28)

Right.

I do wanna take, yeah, I wanna

take a moment, just for any of our listeners out there who have been AI-skeptical. And believe me, believe me, folks, the AI hype bros on Twitter are insufferable, right? And I think the way you phrased it, Nathan, is great: they want you to believe there's a miracle worker out there, and there's not, there's a very talented illusionist, right? But I just wanna say, there is a there there.

And I have seen too many engineers... The ExperiencedDevs subreddit, for some reason, has been very, they are increasingly sounding to me like they're in a bubble. And I have liked that subreddit for a long, long time. But this idea that AI can't generate code, or it can't generate good code, or it can't generate working code, is becoming less and less true. I mean, Nathan, isn't most of bookoverflow.io written with Claude Code these days?

Nathan Toups (44:27)

Yeah, I have a very, I should share my pattern of doing this, but yes. I mean, I'm not a React developer; do I know it decently enough? I have aesthetics that are very pedantic in how I'm doing this, but most of that code is generated by Claude Code. I've personally reviewed all of it, which is different than some of the advocacy out there. But yeah, it would have taken me months to do

Carter Morgan (44:48)

Yes.

Nathan Toups (44:56)

that in my spare time if I hadn't used something like that.

Carter Morgan (44:57)

Absolutely. You know, I think, look, whether you like it or not, translating English into code is now a commodity, right? And I think there are a lot of developers out there who have felt for a long time that that's what made them valuable: they could translate English into code. Now, if you're a software engineer more in Nathan's and my mold, you've never viewed yourself that way. You've viewed yourself as someone who solves problems, someone who solves technical problems.

And that is still a domain. You know, Gergely Orosz had Grady Booch, the famous software engineer who co-created UML, on his show. And he's been kind of an AI skeptic, but has been more and more impressed with Claude Code. He was talking about what an exciting world we're living in now. Again, we're moving up another layer of abstraction. Now that translating English into code is a bit of a commodity, we have the potential to pursue so many things;

a bottleneck has been removed, right? It used to be limited to how fast we could type. But I think there are some software engineers, we know who they are, there's a derogatory term, code monkey, right? I think the age of the code monkey is dead. And I think that's actually gonna have some really interesting ripple effects throughout the industry, because in a lot of ways the offshore contracting model is dying a little, because the idea there, right? It was hired muscle to produce code.

Nathan Toups (46:23)

Well, I'm seeing it. I

actually have a client that I'm about to start working with. They've decided to stop working with two offshore resources because they've adopted agentic work. And I'm going to be coming on fractionally, in a super valuable way; I'm doing some fractional platform engineering work with the expectation that we use Claude Code and tools similar to it to do agentic supplementation.

Carter Morgan (46:36)

Yes.

Nathan Toups (46:53)

And so this is an interesting world that we're living in, where all of a sudden the market dynamics are shifting. Where if you're a builder, right, or if you're an agentic AI builder, you know, an architect-type person, all of a sudden you become super valuable again.

Carter Morgan (47:03)

Yeah, and I...

Yeah, and I'm not saying buy into the hype. I see people talking on Twitter all the time about, oh, my workflows, I've got 19 agents running at the same time. I'm a little like, I think there are limited returns on that. I have not found that incredibly useful. But hopefully you think we have good taste on the podcast, and good judgment, so if we're mentioning these things, hopefully that factors into how you think about this technology. Another good voice to listen to, and friend of the podcast as well: Gergely Orosz, author of The Pragmatic Engineer. And he has

shifted recently, too, to this idea that there's a there there, and that the future of software engineering is fundamentally changing. We are talking a lot about it at my work, and I consider my work to be full of professionals; all we want to do at work is win. There's not a lot of ego, and we just want to do whatever it's going to take to be the most successful software engineering organization. And more and more we're talking about how we offload more stuff to Claude. How do we make it so that a well-written ticket

doesn't even need engineering involvement, right? But kind of like with this offshore model, again, that used to be a somewhat lucrative role, right? You wait for someone to hand the technical decision down from on high and then you implement it. The implementer is here, folks; Claude Code can do it. And so I actually think there's a future where people like Nathan or I, I mean, Nathan is already independent, I've debated going independent, and I wonder if you almost come in as, like, a one-man contracting firm,

where what you bring is your technical judgment. But in the past, where someone like Nathan or I might have only been able to bring our technical judgment and create a vision and help them implement the vision, now you can actually execute on that vision single-handedly, which is a very, very interesting new paradigm.

Nathan Toups (48:54)

Right.

But in this, again, I understand that I have this cognitive dissonance in my own work, which is: where is the trust model set? Where could I get tripped up? If I'm not careful, these tools can be incredibly dangerous. And I think that a sober examination of this

is important. This is not that different, though, than, you know, I'm a fan of responsible gun ownership. I think this is the same kind of mental model that you have to have, which is: if I own a gun and I have a child and I don't secure this gun properly, what awful unintended side effects could come from that, right? It's an imperative that we think it through. Any tool that's sufficiently powerful

Carter Morgan (49:42)

Right.

Nathan Toups (49:47)

and useful in certain contexts can also be a nightmare, right? And I think you have to have that sober consideration of what we're actually introducing. Not to say that we shouldn't use it, or that we shouldn't have this thing, but we do need... I really don't like the stick-your-head-in-the-sand model. I don't think it's going to work or be useful. You're literally going to be what happened to the Luddites.

Carter Morgan (50:11)

Right.

Nathan Toups (50:16)

Right? These folks that resisted new technology. But this unhinged accelerationist movement is not interesting to me either, right? Not a world I want to live in. And I think it's one that's incredibly dangerous. It's like saying, let's put a nuke in every person's hand and just let the cards fall where they may. And I'm like, no, no, I'm not okay with that. I don't think that'd be better.

Carter Morgan (50:16)

Right.

Right.

And there's also,

there's also, you know, again, the AI maniacs on Twitter who will kind of tell you, you shouldn't even be reviewing the code, you should just let Claude go. What is it, --dangerously-skip-permissions or something, is the flag attached to it that just lets it rip. Yeah, I know, right? And they're like, I just leave it running all night, and then when I wake up the website's there. But I think it's worth considering that GitHub, which has

Nathan Toups (50:59)

I feel like they have to be halfway trolling. I have no idea what is going on.

Carter Morgan (51:12)

been a pretty rock-solid, dependable product for over a decade now, has had so many outages recently. I was looking at their status page, and in the past 30 days I think they list themselves as having, like, 95% uptime. And I'm sorry, 95% uptime for GitHub is unacceptable. Exactly, right? And you can't decouple it. Like, look, I love

Nathan Toups (51:28)

Not okay. And if that's because of agentic stuff, then that's not okay, right?

Carter Morgan (51:40)

how fast I can get things going. At work, it's interesting, we are exploring a rewrite. The catalyst for the rewrite was me spending a month looking at some performance bottlenecks and coming to the realization that basically the root of all of our problems, or a lot of our problems, is that we use Mongo, a non-relational database, in a relational way. And there's just a lot of processing overhead that foists upon the application.

Nathan Toups (52:03)

Yeah.

Carter Morgan (52:09)

And we use this weird task orchestration library called Parsec, which is kind of hard to reason about. Anyhow, the more we've looked at this, the more we're like, I think we're at the point where we're hitting somewhat of a limit on how high we can scale here. And it's going to come to either completely rewriting our data layer to use Mongo the way it's supposed to be used, or taking what is

our fundamentally relational data and porting it over to Postgres, and then rewriting a bunch of stuff on top of that. At any rate, we're looking at doing a rewrite. And we're looking at moving to Node in the rewrite, because it's a language we're all a little more familiar with. And so I had Claude just spin up a Node Express server. And I made sure to specify: I want it in Node and TypeScript. I want Express.

I want it Dockerized. I want to use PgTyped as, like, the SQL interface layer. I want Postgres running in Docker. I want hot reload on the Docker image. I want Docker Compose to run this all up. I specified all these things for it, and it did a great job spinning it all up. And honestly, I did this at like 1 a.m. one night because I couldn't sleep. And then about two days later, I went to go work with it again.

And I was a little confused. I was able to find my way around, but I did not write that code, and functionally this was equivalent to going and looking at code that a coworker had written, which we've all been there, or you join a new team and you have to learn a new system, and you can't replace the learning that happens from writing the code yourself.

And so, just with GitHub being down so much, like, again, I don't know if that's due to agentic coding, but at a certain point I do know that, like,

We haven't unlocked this miracle. There's no such thing as a free lunch. And yes, we can move faster with large language models, but it has a real cost, and I think that cost is the understanding that comes from building a system. I think we're kind of at peak hype cycle right now, and I would predict, if you give it a few years, we are going to appreciate that lost understanding more than we appreciate it today.

Nathan Toups (54:29)

Exactly. Also, just to throw this out there, I've been having a really good experience with Drizzle. So if you're actually looking for an ORM, Drizzle has been a real joy, because it's TypeScript-first, types-first, and it's pretty fun, as you kind of weigh everything. Yeah, it is interesting. Before we end the podcast, I do want to tie back into something. Again, I'm gonna put my security-thinking tinfoil hat on.
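For anyone curious what that types-first style looks like, here is a minimal Drizzle sketch; the table, connection string, and data are invented for illustration, not taken from Nathan's setup.

```ts
// A small Drizzle example: the schema is plain TypeScript, so query
// results are typed end to end without a separate codegen step.
import { drizzle } from "drizzle-orm/node-postgres";
import { pgTable, serial, text } from "drizzle-orm/pg-core";
import { eq } from "drizzle-orm";
import { Pool } from "pg";

// Hypothetical table for illustration.
const users = pgTable("users", {
  id: serial("id").primaryKey(),
  name: text("name").notNull(),
});

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = drizzle(pool);

async function demo() {
  await db.insert(users).values({ name: "Ada" });
  // `rows` is inferred as { id: number; name: string }[]
  const rows = await db.select().from(users).where(eq(users.name, "Ada"));
  console.log(rows);
}

demo().finally(() => pool.end());
```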

Carter Morgan (54:37)

Okay, Drizzle.

Okay. Yeah.

Nathan Toups (54:58)

I'll give a little story about Stuxnet. And there's a reason I'm gonna follow through on this, because it's one thing to worry about what bad software practices are gonna come from AI slop, unexamined code, and these other pieces. I wanna talk about sort of the unintended consequences of a very specific type of worm that was written. And so it was discovered

in the summer of 2010, so we're gonna go way back. Papa Nathan over here is gonna tell you a story, so come around the fireside. There you go. Yeah, so when Carter was the drum major of his high school marching band, before we even thought about a man named Edward Snowden, there was a program that was, we think, a collaboration between Israel and the United States. We don't know this for certain, but

Carter Morgan (55:32)

I was the drum major of my high school marching band.

Nathan Toups (55:56)

there's pretty much a little wink and a nod. It started development under the George W. Bush administration and continued under the Barack Obama administration. And actually, there's a story about this where Bush begged Obama to please keep this program going, because he thought it was going to stop World War III. That was, like, the stakes of what this was. Again, this is sort of inside-baseball US politics stuff. But Stuxnet

Carter Morgan (55:59)

Interesting. Okay.

Nathan Toups (56:26)

was this incredibly sophisticated virus that was discovered on some, like, Siemens industrial equipment, I think in Germany, or somewhere in the world, I can't remember exactly where it was. But it was just discovered on some random industrial equipment, and they realized there was this thing running in there that would observe the telemetry data of the machines. This was equipment to do things like enrich uranium,

stuff like this, like, these are nuclear facilities that were enriching uranium. And I think it was in the EU. Again, maybe I'm messing up some of the details, but they realized there was something in here purposefully messing up the results, so that the machine would say it enriched this much uranium, but actually it was nowhere close to that. It was just making the machines do complete gibberish, weird stuff.

And it was discovered just sort of in the wild. And then over the course of this investigation, they realized this was actually written by state-level actors, just a really sophisticated set of code. Iran only realized it was running in their particle, I mean, in their enrichment facilities after the Stuxnet story came out to the public. So somebody was like, there's this weird thing, it's messing up industrial equipment. And then Iran realizes that for,

I think, several years at this point, this thing had been running, and they'd been firing nuclear physicists for not being able to do their jobs properly. I think Siemens was even coming in and helping with the equipment or whatever. And so somehow they infected this equipment in Iran. And then somehow, however the military or, you know, intelligence operations made it happen, it

Carter Morgan (57:57)

Right,

Nathan Toups (58:17)

was on a thumb drive or some download site or something, and it somehow got in the wild and spread all over the world. There's still some mystery about how exactly this happened. I mean, it could have been that it was accidentally put on some official service tech's, you know, USB drive or something, and it kind of accidentally spread. What's interesting about this, though, is that we can build things that are maybe intended for one audience,

and if they get out in the wild, they can do something that causes a lot more harm. And again, you just take that little thing and file it in the back of your head. You take Coding Machines, put that in the back of your head. You take the trust model that Ken Thompson puts forward. And if there's anything large language models can do, it's that they can take a bunch of existing ideas and wire them up in a new and novel way, right? They can't come up with a new idea very well, but they can take something that exists as prior art. Well, everything we read, everything we've talked about,

Carter Morgan (58:48)

Right.

Right.

Nathan Toups (59:16)

this is all part of the general knowledge of human civilization. And with enough effort, you can imagine a world in which these kinds of strange, hard-to-reason-about, unintended-side-effect problems just spread across the entire planet. It's not that outlandish, actually.

Carter Morgan (59:16)

Right.

Right.

And I suppose something doesn't have to be intelligent to be dangerous. Right.

Nathan Toups (59:43)

Yeah, well, yeah.

How does it go? The road to hell is paved with good intentions. I think that even if you were trying to be like, I'm gonna get the bad guys, right? That's exactly what Stuxnet was built for, I'm gonna get the bad guys, quote unquote. Turns out it got everybody. Everybody who had some sort of enrichment equipment ended up getting got, so.

Carter Morgan (59:47)

Yeah.

You know, I guess I would encourage our audience: don't throw the baby out with the bathwater. Again, I think you have to be so careful not to just, there's a type of person that really gets on my nerves, who is just reflexively contrarian in way too extreme a way. I think of someone, basically the archetype is someone who, like, hates their dad.

And then decides to be everything that their dad wasn't, right? But what bothers me about it is that you're still fundamentally letting your dad control you, because you're basing all of your decisions off of his decisions, right? And I see this trend in software engineers. Like, again, I get it, the AI hype bros are infuriating. I do not like Sam Altman, the guy gives me the creeps, right? I think Dario, yeah.

Nathan Toups (1:00:56)

Me too. We're in complete agreement on that.

Carter Morgan (1:00:59)

Dario Amodei, I think is his name, the CEO of Anthropic, right? Every six months we have to hear him say, yeah, in six months software engineering will be completely automated, right? It's so annoying. And you as a software engineer can very easily say, these people are the most annoying people on the planet, therefore whatever they're saying must be completely wrong. And it can still be greatly exaggerated, but like I said, there's a there there. So I would just be careful to not wall yourself off

and reject all AI-assisted development these days, because no one can predict the future, and I do still think software engineer as a category will be around for a long time. But the field has fundamentally changed, and if you are still trying to insist that it hasn't, then, I don't know, I wouldn't be making that bet if I were you. Now, I think there are a lot of other bets you can make, which, again, I think in five years or so,

again, we saw this in the 2000s, right? When offshore contracting was all the rage, and it was like, yeah, these guys are just code monkeys to begin with, so we'll just hire some cheaper code monkeys in, you know, the Philippines or India or wherever. And then we came to realize, wait, no, we need good judgment. There is a high level of talent needed to build scalable, maintainable software. I think that level of talent is still necessary today. And just like there were a lot of people who made a lot of money cleaning up

those off-shoring projects, I think there will be some of us who will make a lot of money cleaning up these AI-generated spaghetti messes. ⁓ But I think we will use AI to clean up those AI-generated spaghetti messes. So yeah, that's my take on it.

Nathan Toups (1:02:32)

Great.

Yeah, it's an interesting world. You want to head into hot takes for a second?

Carter Morgan (1:02:46)

Yeah,

I guess for my hot take, Ken Thompson, in "Reflections on Trusting Trust," basically has a section at the end, this is 1983, right, where he talks about some sort of hacker attack from some kids in college or something like that. And he basically makes the point that they don't realize that what they did was that serious, right? But it is serious. And he talks about how, yeah, like,

at the end of the day, software is built on trust, right? It's built on trusting that the people developing software are, in general, good actors. And again, Silicon Valley is a mess and has been a mess for a very long time, and I get people who don't like Silicon Valley. But I do think, even though the primary purpose of Silicon Valley is to make money, for a long time there's been a,

or at least a little, beating heartbeat within Silicon Valley of genuine sincerity, right? I think now we can admit that Facebook is a mess that is probably making the world a worse place. But when Facebook and Twitter were invented, their creators did legitimately think, this will help people be less lonely. This will connect the world. This will make the world a richer, more fulfilling place. Steve Jobs, and again,

lots of problems there, right? Steve Jobs did genuinely feel that the computer was a bicycle for your mind, that this would be a gift to humanity, right? Jeff Bezos, and again, lots of problems there too, but he did genuinely believe in this idea that he could get more goods to customers faster at a cheaper price and build a more convenient world, right? And I think that's neat, right? I think it's neat that a lot of Silicon Valley was based on

these general, good-hearted ideas of, how can we make the world a better place? Modern Silicon Valley, what the freak is going on there, right? I do not think Polymarket is making the world a better place. I do not think Kalshi or DraftKings is making the world a better place. What is up with Andreessen Horowitz giving $16 million to that idiot who built Cluely, which is cheating-as-software, right? So much of what gets funded in Silicon Valley these days, and so much of what is getting built, is just scams.

Nathan Toups (1:04:51)

Great.

Right.

Carter Morgan (1:05:10)

It is fraud. It is people looking to make the world a worse place in order to enrich themselves, right? And I hate it. I hate it so much. And look, there are network effects, there's a lot of capital in Silicon Valley, and I think it's going to be hard for it to be dethroned. But I do think a lot of software engineers and a lot of builders are dreamers at heart and do want to make the world a better place. And if Silicon Valley becomes hostile to those kinds of people,

Nathan Toups (1:05:12)

Wait.

Amen.

Yep.

Carter Morgan (1:05:41)

there is potential for some other region to take the crown and become the place where people who actually wanna make the world a better place go to build. So get your act together, Silicon Valley, yeah.

Nathan Toups (1:05:45)

Yeah.

And, you know, I love this. I think my hot take kind of riffs off this a bit, which is that I think vibe coding is far worse than Thompson's nightmare, in that we actually have this, like, gleeful willingness to not care, which I think is the antithesis of having that sort of principled honor code of understanding what's going on.

And I do think there's an area where you can be playful and goof around and do that stuff. I'm thinking of prototyping and things like that, I'm not trying to kill that. But we need to talk about trust, we need to talk about social contracts. And I agree, I think there are fascinating things about these tech luminaries that change things. Bezos is a good example, where I listen to his ideas and I think some of them are fascinating. And then I'm like, but he's also the one who, like,

Carter Morgan (1:06:20)

Bye.

Yeah.

Nathan Toups (1:06:49)

built the culture where you have to pee in a bottle so that you can deliver fast enough. And unfortunately, even if he's not personally making those decisions, he's built the culture in which those types of decisions get made. And we should question it, right? We should ask, are we living the ideal? Would most people say, well, I want it cheaper even if drivers are peeing in water bottles? Most people would say no. Like, I actually want my driver to be happy.

Carter Morgan (1:06:52)

Yes, yes, exactly.

Absolutely.

Right, right.

Nathan Toups (1:07:18)

If that means five cents more in delivery costs or something, like, we'll just do that. The other one is, it feels like we're already building the world of Coding Machines, in the sense that, and maybe this is the William Gibson argument that the future's already here, it's just not evenly distributed. Like, I have a feeling there are already pockets in which active exploits, past a point of no return in certain systems, could be present.

Carter Morgan (1:07:18)

Right.

Nathan Toups (1:07:47)

It's not outlandish for me to imagine that five years from now we'll read a piece about how, you know, a Fortune 100 company has been compromised for seven years or something.

Carter Morgan (1:08:00)

Yeah, almost as a thought experiment, I'd love, every year as the models get better, to have almost like a hackathon, but with non-engineers and engineers, right? Because I just remember, not even a year and a half ago, less than a year ago actually, and I've told this story on the podcast before, my buddy who runs a marketing agency was trying to build a website for a client, and he had coded himself into a corner.

And he's like, Carter, I will pay you $200 to come to my house and fix this bug, because I can't do it. And I fixed it in like 10 minutes. But then even just looking at this website he built, it was this unholy abomination. It was a Python server that was returning Jinja templates. And like, it was bizarre. And so, yeah. And so, you know, I just.

Nathan Toups (1:08:46)

In the South, we would say, bless his heart.

Carter Morgan (1:08:54)

We keep seeing these models getting better and better. And I think Carl Brown of Internet of Bugs has a good point, which is that these models are fundamentally software, and so it's not guaranteed that they will just get better and better and better. They could one day get worse. But anyhow, yeah, I just think there still is a big difference between an engineer wielding these tools and a non-engineer wielding these tools. I think every day it looks a little scarier that we'll be replaced, but

that's why I need this as, like, a sanity exercise. My company pretty much only employs smart people. I really like our product managers, they're really smart folks, and one day I should just hand one of them my laptop with Claude Code and say, you do it. You've got this story, show me if you can do it. How would you attempt to do this? And I think that would kind of soothe me, like, okay, wait a minute, there's still a lot of expertise and domain knowledge required in this profession.

But yeah, I think this was really great. You know, 0B, thanks for the recommendation here. We're going to talk about what we're going to do differently in our careers in just a sec, but I just want to say that this was fun, and a nice little breather after Designing Data-Intensive Applications. And I think it's just another advertisement for the Discord, come join us. Nathan, how many verified users are we at these days?

Nathan Toups (1:10:14)

It's wild. We have over 200, I think we're at like 217 or something like that. So we had the alpha testers, the early adopters, the first 128, and we're not that far from doubling that number, right? So we're about to have folks in the second cohort. It's been fun. There's a lot of continued-education, career-oriented stuff, and it's a really kind and supportive group. There is just no,

Carter Morgan (1:10:30)

Yeah, yeah.

Nathan Toups (1:10:43)

toxicity kind of stuff. I think it's the same kind of civil conversation that we have here on the podcast, if you want to extend that. And we're on the Discord, so Carter and I hop in and chat. But I also just want to cultivate a community, and I think it's a good group of folks. So come join us.

Carter Morgan (1:10:44)

Yeah.

Yeah.

And your book could be the next one we read on the podcast. Speaking of, well, not a book this time, but these pieces, I mean, we had a good conversation here today, but what are we going to do differently, Nathan?

Nathan Toups (1:11:11)

So I feel like I am well equipped to increase my skepticism of giving in to the vibes. And what I mean is, I'm not saying that I won't pay attention to the way these tools are being used, or to how we can think about how we build and write code. But I have a renewed confidence in sort of rejecting the hype machine, I guess. So yeah.

Carter Morgan (1:11:16)

Right.

Right.

Yeah, I'm going to continue rejecting the hype machine too, but what we're thinking a lot about at work is, like, will the teams that win be the ones that let AI generate more and more of their code? But again, you can't just let Claude rip in a vacuum. And so I need to explore more, like, what are the guardrails you put around the code? And honestly, with testing, we've had a paradigm shift at our company, because

we're a startup, and so we've thought for a while, like, listen, we're not going to write tests for something that we might throw away tomorrow. With large language models, you can write tests for something you might throw away tomorrow; it's not very hard to write them. And so, well, I am skeptical of giving in to the vibes, but I want the vibes to reign within proper confinement, right? And so, like, how do we?

Nathan Toups (1:12:29)

Take care.

Carter Morgan (1:12:32)

How do we have confidence in AI-generated code? I think a good testing suite is part of that, something like the sketch below, and I just need to explore more and more what that looks like. As far as recommendations go, though, I think you and I are the same. For me, I'd say anyone should read both of these. It'll take you 30 minutes. And the short story is very well written, it's a fun read. I know we kind of spoiled most of it for you, but there's no way to do this podcast without spoiling it.

But yeah, very pertinent to our time, and yeah, I recommend it to anyone.
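As one concrete version of the guardrail Carter mentions, here is a minimal endpoint test. It assumes the hypothetical Express app and /health route sketched earlier in this transcript, with vitest and supertest as dev dependencies; none of this is from the hosts' actual codebase.

```ts
// Cheap, LLM-draftable check over an endpoint you might still throw away
// tomorrow: assert the health route answers and the database is reachable.
import { describe, expect, it } from "vitest";
import request from "supertest";
import { app } from "../src/app"; // hypothetical module exporting the app

describe("GET /health", () => {
  it("reports a healthy database connection", async () => {
    const res = await request(app).get("/health");
    expect(res.status).toBe(200);
    expect(res.body).toEqual({ db: true });
  });
});
```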

Nathan Toups (1:13:06)

Yep, same. It's a universal recommendation: it's a quick read, it's deeply thought-provoking, and especially for our audience, I think you'd like it a lot. So go for it.

Carter Morgan (1:13:15)

There we go. All right. Much like the Super Bowl champions, the Seattle Seahawks, I'm going to Disneyland. I'm leaving in about eight hours or so, so I'm excited for that. And I'm almost as excited because, well, I followed college football for a while, but we didn't really watch football growing up as a family. This year, though, I was like, I've got to pick an NFL team, I've got to have someone I can root for. And so I chose the Seahawks, because I grew up in Washington. I'm like, well, you've got to do the Seahawks.

And they won the Super Bowl. So I've earned this, because I declared my fandom prior to the season. I'm not a bandwagon fan. Thank you. I have a baseball cap and everything. All right. Well, folks, as always, join the Discord if you haven't yet, link in the description. You can always contact us at contact at bookoverflow.io. You can find us on Twitter at BookOverflowPod, and you can find me on Twitter at Carter Morgan. You can find Nathan and his work with Rojo Roboto and his newsletter at RojoRoboto.com slash newsletter.

Nathan Toups (1:13:48)

Proud of you.

Carter Morgan (1:14:11)

And check out bookoverflow.io, it has gotten swanky these days. Nathan's done a lot of great work there. And Nathan, aren't you working on a mechanism for backlog-grooming voting?

Nathan Toups (1:14:25)

Yeah, I haven't made it easy to find on the website yet, because we've been testing it with our alpha testers on Discord. But yes, it's at BookOverflow.io slash backlog, and you can actually see we have a public backlog of all the books that we've been considering. We're looking to start introducing books that have been recommended from the Discord. And if you register on Discord, you get voting rights. I think you get up to five votes, and they can all be on one book or you can spread them across five books.

Carter Morgan (1:14:30)

Right.

Mm-hmm.

Nathan Toups (1:14:54)

But we're trying to get feedback from you on what to prioritize. So we'll use this as a way to influence what we should cover next.

Carter Morgan (1:15:01)

Absolutely. Yeah, we've come a long way from the React template I bought for $15 and embedded a Google Sheet in. So it's getting cool.

Nathan Toups (1:15:09)

Hey, I had a little extra time on my hands, and honestly, the Discord's been a catalyst, with people saying, I wish it was easier to find the old episodes, or, I wish we could see the backlog, right? And so, you can have a disproportionate impact on the future of this podcast if you join the Discord.

Carter Morgan (1:15:13)

Yeah. Right.

All right, folks, that wraps it up for this week. We will see you next week, and thanks so much for listening. See you around.

Nathan Toups (1:15:33)

The end.