Software Architecture: The Hard Parts: Modern Trade-Off Analyses for Distributed Architectures
Part 1
Book Covered

Software Architecture: The Hard Parts: Modern Trade-Off Analyses for Distributed Architectures
by Neal Ford, Mark Richards, Pramod Sadalage, Zhamak Dehghani
Transcript
This transcript was auto-generated by our recording software and may contain errors.
Carter Morgan (00:00)
So is that a monolithic service? Is it a microservice? Like, I tend to lean more towards that this was probably a microservice, and that a monolithic service would be more like S3 and EC2 and VPC all living in the same thing. And so, you know, I don't mind a microservice from that perspective.
Hey there. Welcome to Book Overflow, the podcast for software engineers by software engineers, where every week we read one of the best technical books in the world in an effort to improve our craft. I'm Carter Morgan. I'm joined here as always by my co-host, Nathan Toups. How are you doing, Nathan?
Nathan Toups (00:36)
Doing great here, everybody.
Carter Morgan (00:38)
As always, like, comment, subscribe, share the podcast on LinkedIn with your friends and coworkers, join the Discord, all that good stuff. And we are excited. We got another good meaty book this week. Is meaty a good way to describe this, Nathan?
Nathan Toups (00:51)
You know, that's a way to describe it. I think, sure, I'm not gonna argue with that.
Carter Morgan (00:56)
Because we have the aptly named Software Architecture: The Hard Parts. And this is by four co-authors, but three of them are friends of the podcast. We've interviewed them: Neal Ford, Mark Richards, Pramod Sadalage. And new to us is Zhamak Dehghani. So yeah.
Nathan Toups (01:17)
Quick note on
Zhamak, she is actually the person who created the term data mesh. So, I was doing a little research on that one, yeah.
Carter Morgan (01:24)
Oh wow. Very cool.
Well, there's a lot, I'm going to read all the bios very quickly, and then maybe in subsequent episodes we don't read all the bios. But we got Neal Ford. He's a director and software architect at ThoughtWorks.
He is an internationally recognized expert in software development and delivery, particularly the intersection of agile engineering and software architecture. He's authored eight books and spoken at hundreds of conferences worldwide. His work covers software architecture, continuous delivery, and functional programming. And on Book Overflow, I believe this is the first author we ever interviewed.
Nathan Toups (01:58)
I think you're right. Yeah.
Carter Morgan (01:59)
Yeah, yeah, I remember I was at a conference in Canada and I had to book it from the conference center back to my hotel room to talk to him. Oh, sorry, sorry. That's Neal Ford. Mark Richards is the first author we've ever interviewed. Yes. Yes. I was thinking about Mark Richards while I said that, but we were talking about Neal Ford. Neal Ford, we've had him twice. Mark Richards, the first author we ever interviewed.
Nathan Toups (02:06)
That's right, man.
That's right. Yes, because they were the co-authors of Fundamentals of Software Architecture. And yeah, we had both of them back to back. Yep.
Carter Morgan (02:26)
As a hands-on software architect with deep experience in microservices, service-oriented architectures, and distributed systems, he's been in the software industry since 1983. He is the founder of developertoarchitect.com, a free resource for developers making the journey to software architect. He's also authored numerous technical books and spoken at hundreds of conferences worldwide. Pramod Sadalage is a principal consultant at ThoughtWorks as well, specializing in bridging the gap between database professionals and application developers. In the early 2000s, he pioneered techniques for evolutionary database design
using version-controlled schema migrations. He's the co-author of Refactoring Databases and NoSQL Distilled, and we had him on for Building Evolutionary Architectures; he tackled the database portions of that book. And then Zhamak Dehghani is a technologist focusing on distributed systems and data architecture in large and complex environments. She's a member of multiple technology advisory boards. Zhamak is an advocate for the decentralization of all things, including architecture, data, and ultimately power. She is the founder of data mesh.
So we read the book, and let's give the introduction first. Software Architecture: The Hard Parts tackles the difficult problems in distributed architecture, the ones with no clear best practices, where every choice is a trade-off. Neal Ford, Mark Richards, Pramod Sadalage, and Zhamak Dehghani walk through strategies for service granularity, managing workflows and orchestration, decoupling contracts between services, and handling data across distributed systems. They thread all of this through a fictional case study called the Sysops Squad, showing how these decisions play out in practice.
Okay. This is bizarre this week. There's an audiobook for this and for Fundamentals of Software Architecture, which was the first book we read by Neal Ford and Mark Richards. That audiobook is excellent. So we're like, okay, there's an audiobook for this, let's pick it up. And we're going to talk a bit more about that; it was very, very good, but there's also a lot of benefit to actually reading this. So when I was looking at it, I have the audiobook, and we go, okay, let's read through chapter nine, that's a third of the book. And then Nathan, you just informed me that that is actually, like, the midpoint of chapter six in the print version?
Nathan Toups (04:27)
Yeah, yeah. So if you're listening to the audiobook and you read the print book, the chapters do not match up. It was super confusing. I kind of realized maybe around chapter two or three, I was like, what is happening? Like, where am I? Yeah, we noted that we went through chapter nine of the audiobook, which gets us about midway through chapter six in the print or the Kindle edition. So that's where we are today.
Carter Morgan (04:42)
Yeah.
So yeah, it's a third of the book, but I don't think we've ever seen that play out on the podcast before, so we thought it'd be funny to note. Nathan, give me your thoughts on the first third of Software Architecture: The Hard Parts.
Nathan Toups (05:09)
This is truly the hard parts. I will tell you that Fundamentals of Software Architecture, I think you've coined it as the book that broke the podcast, right? Oh, that was Building Evolutionary Architectures. Gosh. Yeah, so these books do not hold back, right? So I would put it in the same category, for any of you YouTube viewers out there. You know how PBS Space Time is a phenomenal physics channel. Maybe many of you know, maybe some of you don't.
Carter Morgan (05:19)
No, that was Building Evolutionary Architectures, but also the one that Neal Ford joined.
Nathan Toups (05:39)
It does not hold back. Like it will throw in very dense physics concepts and mathematical equations, even though it tries to communicate to the public. It's like super, super dense for someone who doesn't have a PhD. This book is very similar in the sense that it's no holds barred, unapologetic, all the hard, weird things that you get tangled up into, that ball of mud for software architecture. How do you really get out of these edge cases? How do you really handle stuff that seems impossible to handle? You definitely should not read this first. This book is not for most people. I think that, if you really want to get into understanding how you do these distributed systems architectures, this book just gets into it.
Like I said, I think in the opening chapter, it's when there are no best practices, right? You can't Google it. You can't go and try to figure it out. You're in the middle of this thing, you know, these patterns that are ahead of you. How do I even make progress, move forward, detangle things? And yeah, I had to take a step back. I was listening to the audiobook. It was really hard for me to listen to the audiobook. The PDF material that comes with it is like 300 pages, I think, of the diagrams that they reference. They're like, hey, look at diagram 1.2 in the attached material. Like half of the audiobook is referencing that. And I'm like, OK, I'm either going to go back later and look at this (I'm not), or I'm going to take a break every few minutes and take a look at that, or I'm going to read the Kindle book along with the audiobook, which is what I ended up having to tap out and do.
Carter Morgan (07:34)
Right.
Nathan Toups (07:34)
I did
listen to the audiobook, and I was reading it on my Kindle at the same time. So they've gotten, like, I don't know, $75 out of my pocket for this. I'm really committed. But yeah, this book's in hard mode. Listening to this book is in like hard, hard mode. That's what I'll say.
Carter Morgan (07:42)
And that's where I'm operating. I just haven't, yeah, I haven't been able to sit down and do both at the same time. I am mostly listening to this while commuting. Regulars of the podcast know that we're in the middle of a huge re-architecture at work, which I am a technical lead on. Yeah, so my thoughts reading this have been colored by that. And I've just kind of been trying to get to this when I can,
Nathan Toups (07:55)
I love it.
Carter Morgan (08:24)
so I mean, agreed, the audiobook is not the ideal medium, but I will defend the audiobook. I think, I find this actually with a lot of audiobooks that we listened to, where they're really, really good at reinforcing concepts you already have a good or a surface-level understanding of; you can get that from an audiobook. And then when it introduces new concepts, that is where it kind of totally goes over my head. I had that with, like, Designing Data-Intensive Applications. There were just some new concepts that I just did not grok because I was listening to the audiobook. So because this book is more about just general software architecture, and because we've read two other books by some combination of Mark Richards and Neal Ford, a lot of the ideas are not the same, they're like echoes of the same idea almost. I've been able to grok this a lot better by doing the audiobook. I mean, this really
Nathan Toups (09:00)
Right.
Carter Morgan (09:22)
confirms that there's just a whole class of problems that are very like Fortune 500 software legacy problems, right?
Nathan Toups (09:29)
Yes! Oh my god.
That's exactly what I was thinking the whole time. We're reading this.
Carter Morgan (09:35)
Yeah. Right.
And like, I have not had that experience. I have worked, I'm at a startup now, I was at kind of big tech for a couple of years in a row, and then I did start off my career at a Fortune 500, but we were on a centralized component team. And so we were making a lot of building-block components, like npm packages and stuff that other teams were using. So I have never worked on a giant monolithic application that's doing, you know, facilitating tens of millions of dollars in revenue, and has kind of all of these problems. But because I haven't worked on something like that, I really like that it is interleaved throughout the book, this narrative, the Sysops Squad. It's like, it's basically a Geek Squad at Best Buy, right? And talking about their kind of giant monolithic application that has started to have serious performance and reliability issues, and the company is debating shutting down the whole product line because it's just not reliable enough. And so our star of the show, Addison, and she's got like a buddy, Evan, Mike, I don't remember his name. They are in charge of kind of decomposing this giant application. And so, you know, I've mentioned this before on other books we've read, like, I really liked the theory, I would have liked maybe some more practical applications to kind of grok it a little better. This book is kind of my personal ideal pattern, where it's theory and then it'll wrap up the chapter with the case study portion of it, or the case study portion will be interspersed through it. So a big plus one for me there for that format. Well.
Nathan Toups (11:20)
Yeah, it definitely
has kind of some DevOps Handbook slash Unicorn Project vibes. It's definitely still a Neal Ford ThoughtWorks-style book, but they do put these narratives in between, which ties it together, because it is. It's like the whole hypothesis here is either you've inherited this thing and you're like, oh, this is not sustainable, it's not serving us well, or
Carter Morgan (11:25)
Yes. Yeah.
Nathan Toups (11:46)
they kind of allude to the fact that maybe you're the one who actually architected the bad idea, and you're trying to architect your way out of it at some later date, right? You went to a conference, got a little too excited about service-oriented architecture and the distributed monolith or something. Yeah.
Carter Morgan (11:50)
Yeah.
Yep.
I'll also say that
this book, you could read it without reading Fundamentals of Software Architecture. But it is helpful to have read Fundamentals of Software Architecture first.
Nathan Toups (12:09)
Ooh,
yeah, you would be missing a lot, especially because you need to spend time with some of these concepts. Especially, there's a lot of vocabulary. I think early on in the book, they talk about, first they talk about, hey, look, this is really about when you don't have best practices anymore. They kind of talk about, hey, once you get the fundamentals under your belt, when do you apply these things or how do you do these things? And this book is all about those juicy edge cases.
I will say that there's some vocabulary in here that I'm like, it's cool, but at the same time, I'm like, it's just gonna confuse people. Afferent. Oh yeah, and there's also some equations where I'm like, I feel like a caveman scratching my head sometimes when I'm looking at these. I'm like, is this generalizable like this? I don't know. I don't know if I'm not smart enough to handle this. Yeah.
Carter Morgan (12:45)
They start throwing out some equations and I was like...
Yeah.
Yeah, yeah. Well, let's
take a quick break and we'll come back. We'll discuss the first third of this book.
All right. Let's talk about some of that vocabulary that the book sets up at the front. We can talk a bit about coupling. We were laughing about this. It is affer-, sorry, afferent coupling and efferent coupling, which, like, they should have chosen different words. You know what this actually reminds me of? When I was a missionary in Brazil, the Portuguese words for grandma and grandpa. They're the same. They're both spelled a-v-o, but there's a different accent mark over the o, and so it's like avó and avô. Like, I couldn't do it. I had a Brazilian missionary, we were, you know, partners, and I'd just be like, avó, avó, and he's like, no, like, avó, avô, and he's like, no. And so that is afferent and efferent coupling. But useful concepts, if poorly named.
Carter Morgan (13:54)
Afferent coupling is the count of incoming dependencies, and efferent coupling is the opposite: it's how many things the component depends on. Yeah, basically, so afferent says how many things will break if I change this, and efferent is, how many things do I rely on not breaking so that I can continue to function? Yeah, I mean, useful concepts.
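The two metrics as Carter defines them can be sketched in a few lines of code (a toy illustration with hypothetical component names, not anything from the book):

```python
# Toy dependency graph: component -> set of components it depends on.
deps = {
    "orders":    {"payments", "inventory"},
    "payments":  {"ledger"},
    "inventory": {"ledger"},
    "ledger":    set(),
}

def efferent(component):
    """Efferent coupling: how many things this component depends on."""
    return len(deps[component])

def afferent(component):
    """Afferent coupling: how many components depend on this one."""
    return sum(1 for _, ds in deps.items() if component in ds)

# "ledger" has high afferent coupling (things break if it changes);
# "orders" has high efferent coupling (it relies on others not breaking).
print(afferent("ledger"), efferent("ledger"))  # 2 0
print(afferent("orders"), efferent("orders"))  # 0 2
```
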
Nathan Toups (14:03)
Whew.
Yeah, what a thing to have: very useful concepts, but also super confusing, and I guarantee you I will use them incorrectly 50% of the time. It's like trying to plug in a USB cable. I'm like, which one? Well, old school, I'm dating myself, because USB-C does not have this problem, but you know what I'm talking about. You know what I'm talking about.
Carter Morgan (14:28)
Yeah. Yeah. ⁓
Yeah, yeah, you know, most of our listeners do.
I think our age range breaks down like 28 to 45, so welcome.
Nathan Toups (14:43)
Yeah, everyone's
had to have that old printer cable that you needed to find, the USB cable. It sticks around forever. But yeah.
Carter Morgan (14:47)
Yeah.
Yeah, I think this
is useful. The book talks about special tools you can run. I will say this book brought up Eclipse at one point. I'm like, Eclipse, really? You know?
Nathan Toups (15:02)
It is funny. This is definitely a Java shop. Like every one of these examples is Java, which ThoughtWorks, of course, is. So to kind of bring up the fact that, if you've read Building Evolutionary Architectures and you've read Fundamentals of Software Architecture, this book feels very familiar, right? I would say that you should probably read the books in this order: Fundamentals of Software Architecture, Building Evolutionary Architectures, and then
Carter Morgan (15:06)
Yeah, Yep.
Yes, yes.
Nathan Toups (15:32)
Software Architecture: The Hard Parts. And the reason is that, if you're familiar with those other two books, this is really just a deep-dive set of case studies into the stuff that can go wrong and how to work your way through it. And there's a ton of examples of things that they reference in the other books about these sort of ThoughtWorks-maintained fitness function frameworks, right? And so one of them is this architecture framework that measures these types of things in your code, like, hey, how many times are statements having a coupling to this other thing, and you just write some Java code that kind of analyzes this in your CI pipeline, right? That's a level of metaprogramming I've never seen outside of the ThoughtWorks ecosystem. I've never seen this type of thing elsewhere. So it actually intrigued me, because I was like, oh, that would be a really cool thing to always have anytime I'm writing code. Like, I want to enforce that we're not doing something completely insane on my watch, right? Like, I'm the software architect. I've intended that, you know, we have an acyclic graph of my libraries, with things that go outward. You could decree this, you can write an ADR, you can do all the stuff that you intend, but you might, you know, go on vacation for three weeks and come back and realize that, you know, the kids were misbehaving.
Carter Morgan (16:36)
Yeah.
Yes, yes.
Yeah. Yeah. That's like the whole next level of this. 'Cause this book has outlined a ton of good best practices, quote unquote, right? Or mostly concepts to help you understand, to guide yourself, like that's afferent and efferent coupling, right? But then creating some sort of forcing mechanism for that is a whole other challenge. But then it's like, how do you, how do you balance? Because at the same time it's like,
Nathan Toups (17:14)
Mm-hmm.
Carter Morgan (17:26)
you wouldn't want to be like, okay, here is the afferent-efferent forcing mechanism for our 3,000-person engineering team across the whole company, right? Like, good luck. But at the same time, you do want some sort of semblance of best practices. I've been thinking about this a lot in our re-architecture. We're moving from GraphQL to REST and from Java to TypeScript, and we have been using Claude a ton, basically because there's a good contract on the front end of what data it needs. It's pretty easy to kind of point Claude at the old Java, point it at the front end, let it know the patterns and rules we have, and then it can create a working version pretty quickly of the endpoints the new front end needs. And, you know, we're reviewing it, and I'd like to review it a little more, but kind of on the front end it's a little like, hey, if the front-end page keeps working and performing all the same actions that the old version of the page did, then that means that it's working, right? But this week we have declared the week of hard problems. And we were like, okay, we're gonna tackle everything in the system that is a little more critical or load-bearing. And so I drew the short straw and I'm tackling the order service, like actually processing all of our orders. And that is more. That's not just like, okay, have Claude create a copy of what exists in the current service. Like, the current service sucks. And so it's like, we need to completely re-architect it. And so it's been on my mind a lot: how do you force good patterns? How do we make sure that the new version of this doesn't wind up like the old version did? And so I'm feeling confident. We're doing almost like a strategy pattern, where it's one endpoint, but then depending on the type of order that comes in, it resolves to a different strategy, and those strategies implement the same interface. And we've kind of identified, like, okay, all orders need to do the same four actions in sequence, and so that's what the interface does, you know. And so I like that. Yeah, basically. Right. So I like that from a conceptual perspective. And I think it'll make it easy for future engineers to come in and kind of look at that pattern and be like, this is how we do things here. But it kind of begs the question of, when do you try to do something more
Nathan Toups (19:29)
Little state machines, yeah.
Carter Morgan (19:48)
strict, more dictated, like we're talking, with afferent and efferent coupling and some of these other measures we'll talk about, where you are having some sort of deterministic function, maybe as part of your CI pipeline, that enforces some threshold, you know?
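A rough sketch of the strategy-pattern shape Carter describes: one endpoint, one shared interface, and a strategy per order type. The order types, the four actions, and all names here are hypothetical, since his actual service isn't shown:

```python
from abc import ABC, abstractmethod

class OrderStrategy(ABC):
    """Every order type performs the same four actions, in the same sequence."""
    @abstractmethod
    def validate(self, order): ...
    @abstractmethod
    def price(self, order): ...
    @abstractmethod
    def reserve_inventory(self, order): ...
    @abstractmethod
    def submit(self, order): ...

    def process(self, order):
        # The shared sequence the interface guarantees for all order types.
        self.validate(order)
        self.price(order)
        self.reserve_inventory(order)
        return self.submit(order)

class StandardOrder(OrderStrategy):
    def validate(self, order): order["valid"] = True
    def price(self, order): order["total"] = sum(order["items"].values())
    def reserve_inventory(self, order): order["reserved"] = True
    def submit(self, order): return {"status": "submitted", **order}

# The single endpoint resolves a strategy based on the order type.
STRATEGIES = {"standard": StandardOrder()}

def handle_order(order):
    return STRATEGIES[order["type"]].process(order)

result = handle_order({"type": "standard", "items": {"widget": 10}})
print(result["status"], result["total"])  # submitted 10
```

New order types then drop in as new strategies without the endpoint changing, which is the "this is how we do things here" effect he mentions.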
Nathan Toups (20:04)
Yeah, and this gets again into the evolutionary architectures idea, right? Rebecca Parsons, Dr. Rebecca Parsons, talked about this as well. If you come from this idea that I'm going to change the system over time, but we also have this idea of, is this fit? Does this meet these fitness criteria? Then you kind of start gravitating towards these things of, okay, well, we write unit tests, we write end-to-end tests, maybe you're using Playwright. Those are all a subclass of fitness functions in this definition, in the way that they think of software architecture in the ThoughtWorks sort of world. And it is an interesting idea. Some of these concepts, I will say, I take pause on. You get a little bit into the Clean Code Uncle Bob stuff, where they'll start being like, well, you know, a function should have under this many statements, and if it deviates from this, then this is, you know, so many standard deviations from the mean. And I will say, I don't want to live that life. Like, I do think that I'm in the John Ousterhout view of this: I would rather have sort of deep interfaces or deep methods, right? And it's okay, certain things might just be deep. And while it might be interesting to note
Carter Morgan (21:12)
Right, right.
Right, right.
Nathan Toups (21:34)
that maybe I'm doing too much in one place, it could just be that things are inherently complex, and they don't need to be decomposed in certain ways. That being said, this book does not tell you you have to. It really just kind of says, hey, you've noticed this pattern, here's how to measure it, and here's how to make sure that you have enforced some standard that you've decided on, right? That's very abstract. And again, I like this. I like the fact that they have these little debates within the Sysops Squad. One of the things I really enjoy about this book is they'll have some idea like, ooh, we're going to decompose this and take the database apart and put it into a database per service or whatever, microservices. And then some person in the organization is like, oh, absolutely not, never will you ever do this. And they have to come up with a way of defending why this actually is a better thing. And then they equip you in the chapter with, OK, so you think that this is going to be better. Here's how you measure it. Here's how you address concerns. Here's how you do this stuff, which is all real-world things, right? You have to be defensible. You can't just go to a conference and come back and be like, hey, we're gonna kill all the foreign keys and decompose all of our tables into their own services, and now everything's gonna call each other over the network. You're like, okay, that sounds like a nightmare if you do it wrong. Yeah.
Carter Morgan (22:47)
Right, right.
Have you been following any of the drama around Garry Tan on Twitter?
Nathan Toups (23:02)
No.
Carter Morgan (23:03)
Garry Tan, he's the CEO of Y Combinator. And I'm thinking about this in, like, an AI large language model world, right? And in a world where code generation is much faster, how does that change our behavior as engineers? And Garry Tan, who I used to really like, but it's one of those things. Okay, so I guess to explain: Garry Tan, CEO of Y Combinator, I've often known to be a pretty good measured voice.
But lately he's been doing this thing where he's bragging about how many lines of code he's shipping, like, a week. And it's absurd amounts. It's like 78,000 lines of code. And he's been saying, this is just the future now, this is how all development is to be done. And it's really weird and it's really bizarre. And it kind of breaks down into: he's either sincere, which is really, really bad, or he's doing elaborate trolling performance art, which is also really bad, because why is the CEO of Y Combinator doing elaborate performance trolling art? And so someone finally looked at one of the projects he's doing, and that's what's weird about this. Like, his project is a blog. It's basically a blog, and he's bragging about how he's generating tens of thousands of lines of code for it. It's really, really strange. And so someone, like, legitimately, I don't think that is a poor way of describing it.
Nathan Toups (24:20)
I think this is a form of AI psychosis. Like I really, yeah.
Carter Morgan (24:29)
So someone went to the blog and just did, like, performance analysis on it. Like, the first contentful paint takes like six seconds. The homepage is serving you test files, not like hello world files, it does serve you that, like Ruby on Rails unit test files, right? It loads like six versions of the same image. It's bizarre. It's like total slop. And so we've been talking about it as a team, like,
it just, again, in a world where code generation is fast, how can we be more bold on, like, refactoring things, right? Like, how are we taking the concepts you learn from this book, or Working Effectively with Legacy Code, right, and applying all those kind of migration patterns and, you know, testing harnesses? Because if you understand those patterns really well, armed with AI, you could take a complicated chunk of the system, and instead of saying, you know what, I'm just going to make my change, get in, get out as soon as possible, it might be much more feasible these days to just re-architect that little part of the system using all of these good patterns that we've known for 20 years, but have been very time-intensive. And so, I don't know, as we talk about all of this, yeah, it's an interesting new world, but it's so dangerous in either direction.
Nathan Toups (25:45)
Right.
I'm
quoting myself again, but it's the Dunning-Kruger-powered tech debt machine, right? I'll keep saying it. I'm gonna shout it from the hilltops. These large language models, and I'm not pretending that I'm not guilty of some of this as well. I feel like this is sort of like, Will Larson talks about nibbling, right? It's so easy to nibble your way into the craziest PR,
Carter Morgan (26:03)
Yeah.
Nathan Toups (26:22)
because you're just like, you know what, I'll just go ahead and fix this problem too, and I'll fix this problem. And then all of a sudden you're like, why did I touch 64 files and hundreds of lines of code when all I was asked to do is change the padding offset for some CSS selector or something? And of course it'll pad your ego the whole way along: oh yeah, this is the right way to do it, and you did all these things. And it is weird, actually.
Carter Morgan (26:23)
Yeah.
Yay.
All right.
Nathan Toups (26:51)
I've already seen some falling out, or, I don't know, I'm starting to lose some respect for some folks in the industry who've become these AI maxis. Actually, this came up in our Discord. I'm gonna bring this up. Shout out to our Discord. We have some folks that are earlier in their career, and they're like, what role should large language models play in what we're doing? And we had a really good group discussion about: if you really like tinkering, if you really wanna get your mind around the concepts, you probably should be pretty hands-off.
Carter Morgan (27:10)
Right.
Nathan Toups (27:19)
Right. You should spend the time and kind of grunt your way through it. There are really good, well-reasoned arguments in there. I made some similar arguments, basically like, hey, I agree with this, because I have a 10-year-old daughter who's learning mathematics. Calculators exist. Nobody would go, oh yeah, calculating things by hand, that's the past, you should just use a calculator, and, you know, in the future no one's going to think at all, you just use a calculator. You could do that
Carter Morgan (27:20)
Right, right.
You're right.
Nathan Toups (27:48)
at your own disservice. You also don't really understand, like, she didn't understand how amazing the calculator was until she was having to do these things by hand, and then really appreciates, man, I can just get the square root of something that's not easily done in my head with the calculator or whatever. I think that it's important for us to appreciate why and when we should do things, and we have to think.
Right? This is really what this comes down to. And the reason I'm bringing this stuff up, and I think it's really important to talk about: this book was written, I think it came out in 2021, right? This is like right before. There's not a mention of large language models in this book. I think so. Let me actually, you know what, I'm gonna spot check this. Let's see, Software Architecture.
Carter Morgan (28:31)
This is, The Hard Parts came out in 2021. Okay.
Nathan Toups (28:43)
I'm pretty sure this came out, 2021, October 2021. So I mean, yes, the tools were emerging, but now it's like you have obligatory references to large language models in your programming books. And this book is probably one of the last books in our industry published right before you had to start writing about large language models,
Carter Morgan (28:49)
Okay, there we go.
Right, right.
Nathan Toups (29:12)
to the point that, even when I was doing a little research before this episode, Neal Ford and the gang, Mark Richards and folks, they start talking about this. Like, they have talks, you know, at GOTO and some other conferences, and they're talking about the hard parts, and then they're also talking about large language models and how they interact with each other. So it was right up to the threshold. 2022, 2023 comes along,
and you can't have a conversation about this stuff without saying, you know, what does ChatGPT say, or what are these other things? What a different world we're in, really.
Carter Morgan (29:45)
Right, right.
Yeah, and it's, yeah, you know, we try to keep this podcast, my North Star for this podcast has always been: I want this to sound like two coworkers talking over lunch, and we've gotten feedback that you guys like the tangents. So our apologies if we ever veer a little too far off the beaten path, but, you know, I think it's worth considering. We also try to have kind of evergreen content. And so, you know, listener who's listening to this in 2031, sorry, you know, who knows what large language models look like at that point.
Nathan Toups (30:02)
Yeah.
Well, and I will say, though, I think that probably things will get better and things will get worse. Any of these powerful technologies that come out, it's so easy... I was having this conversation with my wife recently. Actually, I think this is worth bringing up. We talk about AI psychosis. We talk about getting in these echo chambers. We already saw glimpses of this on social media. All of us at different points have had that family member where you're like, wow.
Carter Morgan (30:19)
But you
Right, man.
Yeah, yeah. Yeah.
Nathan Toups (30:47)
Facebook plus this family member, not a good combo.
I also think that, you know, I have a lot more appreciation for this thing I was talking to my wife about. I call it White House psychosis. I think that every president deals with this, where they come in bright-eyed and bushy-tailed with the best of intentions, and then all of a sudden you have this group of people that are like, that's a great idea, right? This is a great idea. No, it's not this, it's not that. There's all these agendas in place, and you can't discern reality. Like, I think,
Carter Morgan (31:01)
Yeah.
Yeah, right, right.
Nathan Toups (31:16)
this is not a political party problem. This is literally every president: they get in, they don't understand how anything works anymore because they can't trust anybody. Everything's presented to them as the greatest thing ever. And you start getting this big ego, right, this big head on your shoulders, or you want to go cower into a corner and all your hair turns gray. Right. And so I think that whatever breaks a president's brain is also now being democratized and is breaking all of our brains.
Carter Morgan (31:35)
Yeah.
Nathan Toups (31:46)
You know, like we now have the untrustworthy chief of staff and counsel. It's like, hey, I had this cool idea about, you know, encrypted payloads. And they're like, not only is this an amazing idea, it's a world-changing idea and you should work on it. And I'm like, okay, maybe. And then it's like, yeah, you should, this is revolutionary. And you're like, okay, hold on, I can't get swept up in this. It's a mildly interesting idea, not gonna change the world.
Carter Morgan (31:52)
Right.
Yeah.
You're right. There is something about that. I saw someone on Twitter say that basically, if you use large language models, you know, like Claude or whatever, the regular chat interface, it has this memory function where it remembers things about you, but it always kind of reads as a little creepy. Someone said it's like your aunt who remembers you liked dinosaurs at the family reunion 40 years ago, right?
Nathan Toups (32:36)
Yeah
Carter Morgan (32:44)
And then that's all they bring up. And someone was theorizing that forgetting might be a more powerful part of the human learning mechanism than we realize, that we are able to forget things that aren't important and we reinforce things that are important. And I think part of what, and this isn't just large language models, but this applies when creating software architecture, is that you have to have the ability
to reject the bad ideas. Rejecting the bad ideas is almost more important than coming up with the good ideas. I had this experience just yesterday where I'm looking at our order service and I'm saying, okay, the core problem with the existing order service is that there is a lot of business logic baked into the flow. So it's just all these if-else statements: okay, if this is a group order, then apply this discount, and
there are all these little if-else branches that make it really complicated to work with. So I'm like, okay, priority number one, I want to extract the rules from the application, right? I want those cordoned off in their own place. And so I was talking with Claude about it. I was like, okay, this is kind of what I want. And it's like, yeah, this is a beautiful idea, and what you should do is have it all be entirely config driven,
and that config can live in the database, and you can have all these rules about whether an order is greater than or less than or equal to something, and then you can create this whole neutral rules engine. And so I'm like, oh yeah, this sounded pretty neat. And I took it back to my team, like, what do you guys think of this? And they were like, well, we like the core concept, but what are you talking about? This is way more than anything we need. Anytime we need to add a new rule, take a trivial example,
Carter Morgan (34:37)
maybe we support greater than but we don't support less than: we're going to have to go add that and make sure it works. And they're like, we don't have the need for this extensibility. We don't need product and ops to be able to control pricing at that granular level. We're okay with pricing updates having to be a code change. And they're like, code is the best config. And all of their points were spot on, like, of course, yeah, that's exactly what we should do. And so we kept that concept ultimately of trying to
have a standard flow for all the different kinds of orders that come in, and then, using the strategy pattern, extracting some of that validation and computation logic to a separate layer. And so the core idea was there. It's just that Claude was like, oh, it's going to be so beautiful, and you're going to be so smart for building it. And then I was glad to have a good team that was like, this is not what we should be doing, Carter.
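The refactor Carter describes, a single standard order flow with the per-kind logic pulled out behind the strategy pattern, might look roughly like this sketch. The `Order` fields, the order kinds, and the discount rule are all invented for illustration, not taken from his actual service:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Order:
    kind: str  # hypothetical order types, e.g. "standard" or "group"
    subtotal: float


class PricingStrategy(ABC):
    """One strategy per order kind, instead of if-else branches in the flow."""

    @abstractmethod
    def total(self, order: Order) -> float: ...


class StandardPricing(PricingStrategy):
    def total(self, order: Order) -> float:
        return order.subtotal


class GroupPricing(PricingStrategy):
    def total(self, order: Order) -> float:
        return order.subtotal * 0.9  # illustrative 10% group discount


# The standard flow just looks up the strategy; adding a new order kind
# is a small code change (one new class), not a generic rules engine.
STRATEGIES: dict[str, PricingStrategy] = {
    "standard": StandardPricing(),
    "group": GroupPricing(),
}


def price_order(order: Order) -> float:
    return STRATEGIES[order.kind].total(order)
```

The team's "code is the best config" point shows up here: a new rule is a new strategy class behind the same interface, with nothing like a database-driven rules engine to maintain.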
Nathan Toups (35:33)
And again, it's crazy to me that not only have we redefined how work is done, like these things are part of our flow and the expectations of it, but that we have to be so vigilant about, do I trust you, right? Like never before was I like, okay, I'm gonna write a struct with some methods in Go, and then, do I trust that this method is even the right method?
Carter Morgan (35:50)
Yeah, exactly.
Yeah.
Nathan Toups (36:02)
I might be like, the taste level is bad or the abstraction is not that good, but I'm not sitting there being like, yeah. And that's the part where I've really had to come up with all these ways of reining stuff in. I'm gonna go way out in left field. It reminds me of Jim Carrey, where you look at him in Man on the Moon, you look at him in The Truman Show, amazing actor. And then you're like,
but if you leave him to his own devices, he's gonna make The Mask or he's gonna make Ace Ventura, right? Just let him do whatever he wants, and he's talking out of his butt cheeks, and you're like, I don't really want that. But then you see movies where he can be inspiring and you're like, okay, properly honed, there's something here. It's really amazing, actually. And so maybe Claude is our Jim Carrey. That's what my worry is.
Carter Morgan (36:35)
Yeah.
Yeah.
Yeah,
just speaking of Jim Carrey, sorry, this episode is gonna get off the rails. This is my favorite thing in a Wikipedia article ever. This is the Jim Carrey Wikipedia article. It says, in April 2022, Carrey announced that he was considering retirement, saying, "I have enough, I've done enough, I am enough." He said he would return to acting if "angels bring some sort of script that's written in gold ink that says to me that it's going to be really important for people to see." In February 2024, it was announced that Carrey would reprise his role as Dr. Ivo Robotnik in Sonic the Hedgehog 3.
Nathan Toups (37:03)
The angels spoke, you know, another Sonic movie needed to be made. I love it. Again.
Carter Morgan (37:37)
Okay, let's talk
about more parts of the book. We promise it's all related in some sense.
Nathan Toups (37:43)
We're all gonna... there's a great... I'm Charlie from Always Sunny with the, you know, putting all the lines together.
Carter Morgan (37:48)
Yeah, with the conspiracy theory board. We should
talk about architecture quantum, because I think this is really important, yes, like the thesis of the book and a really important concept to understand when developing systems, which is: an architecture quantum is an independently deployable artifact with high functional cohesion and synchronous connascence. Wait, connascence, this is another one. What does connascence mean? I forget which one is which.
Nathan Toups (38:18)
Yeah, connascence is the one about how things are connected. It's the measure of coupling. Connascence of name, that's the one that I always remember and will yap about, is the ideal, because you can just swap a name out and all the implementation details are still there.
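A rough illustration of connascence of name, the weakest form of coupling the book describes: two pieces of code agree only on a name, so a rename is a mechanical, tool-assisted change. The function here is hypothetical:

```python
# Connascence of name: the caller and the callee agree only on the
# name (and signature) of this function. Rename it and every call
# site must change too, but that's a change an IDE can do safely.
def apply_discount(subtotal: float, rate: float) -> float:
    return subtotal * (1.0 - rate)


# Stronger (worse) forms of connascence would be, say, connascence of
# meaning: callers "just knowing" that rate=0.0 secretly means
# "look the rate up elsewhere". Here the contract is fully explicit.
discounted = apply_discount(200.0, 0.25)
```

The point of ranking connascence types is exactly this: coupling by name alone is cheap to change, while coupling by shared hidden meaning, timing, or deployment order is not.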
Carter Morgan (38:24)
Yes, yes.
yes, yes.
Yes, yes. So basically, this is what separates you from doing true microservices. And again, how big is a microservice, right? Versus the distributed monolith, versus the big ball of mud. It's a spectrum between those three things. But one of the anti-patterns,
when you're designing theoretically microservices, is, again, you gotta think about how many quanta you have. If you have 10 microservices but they all share the same database, you don't have 10 quanta, you have one quantum, right? If you have 10 services but they're all tightly coupled based on some sort of event broker service, and you can do an event broker properly, but you can imagine a scenario where
there's a very tight coupling between this event processing system that requires, when a change is made there, you must make a change here, and they all must be deployed at the same time. Again, you don't have 10 quanta. You have one quantum. This is something... yeah, I'm debating titling this episode Revenge of the Microservices, just because this book is very much
all about microservices and breaking apart a giant monolithic application into different microservices. Again, the question is, what constitutes a microservice, right? When I was at one of the cloud providers, our specific component, if you looked at it, you could go to its webpage; it's advertised as an individual component for this cloud provider, right? And this whole component was governed by one
Nathan Toups (40:03)
Mm-hmm.
Carter Morgan (40:29)
service, right? So is that a monolithic service? Is it a microservice? I tend to lean more towards this probably being a microservice, and a monolithic service would be more like if S3 and EC2 and VPC and all that were all living in the same thing. And so, you know, I don't mind calling it a microservice from that perspective.
But what I can say about our service is, yeah, it was one quantum. It had its own database. It was independently deployable. We had a lot of
coupling, in the sense of... I don't know, coupling is probably the wrong word for it. You know, we relied on a lot of other APIs, right, and we had data exchange with other services, but we didn't care. We had no concern with what database they used. We didn't even know what programming language those services were written in, right? So long as that API contract was maintained, they could do their thing, we could do our thing, and it worked.
Nathan Toups (41:10)
All
Yeah. It's interesting too, because of course, inherently, if you start calling things quanta, you can also have a lot of fun with terms like quantum coupling, which is this idea that, hey, if you have quantum coupling, it really means that we're not as broken out as we would imagine. So they do borrow these ideas and concepts from physics and other areas.
Carter Morgan (41:40)
Yeah.
Nathan Toups (41:53)
And it is interesting to think about what this deployable unit is. There's a lot of talk about domain-driven design, which is actually an area where I have familiarity, but I haven't done a deep dive. We plan on doing that later this year, actually. But there's this idea of bounded contexts, these contexts in which you enclose your domain model, and it kind of gives you these separations of concerns.
The thing they don't talk about here, because I think their audience is the Fortune 500 breaking these services out, and we talked about this last week, actually, when we were finishing up the crafting engineering strategy book, is that microservices really are, in my mind, and again, this book doesn't mention this at all, really about teams and how they think about deploying stuff as a team.
Carter Morgan (42:49)
Yes.
Nathan Toups (42:50)
And I think the reason it's not mentioned in this book is that the audience already has a ton of teams, right? You're not talking about one team here. Some group of people is going to own the ordering service, and some group of people is going to own the, I don't know, authorization service or whatever. And it just really makes sense that that team can handle those things. You know, it breaks out a lot of concepts that you
Carter Morgan (42:57)
Mm-hmm.
Right, right.
Nathan Toups (43:20)
wouldn't have otherwise. And so one of the big things that they bring up here, and I think is worth talking about: now that you have this idea of quanta, you need to figure out how to do decomposition. And so a good chunk of the rest of the book, I think chapters four, five, and six, all talk about how you pull things apart. And I think that's about half of what we
Carter Morgan (43:46)
right.
Nathan Toups (43:49)
read this week: how do you do this in a way that doesn't introduce regressions? How do you do this in a way that you actually know this is the right way to decompose the application? And again, there's no one size fits all; this book is very careful about that. Is there anything in that section that spoke to you?
Carter Morgan (44:18)
Now,
I think I agree with you that microservices are a team problem. I've been thinking about it. Our company is gearing up to hopefully raise a Series B. And so I've been thinking, okay, how big does the engineering organization get? Even that's kind of an open question with large language models: well, how many engineers do you need? And so I'm like, okay, if we grew to 20 engineers,
Nathan Toups (44:36)
Mm-hmm.
Carter Morgan (44:43)
could we all work on kind of the same monolithic code? I was like, yeah, I think if we grew from 10 to 20 engineers, we'd be fine. What about 10 to 40? Right? At what point do you start feeling that overhead pain? And then I think about, okay, that's what this whole case study here is about: taking a monolithic application and dividing it into separate domains. And how do you do that? And I've just been trying to think, okay,
if we had to get to that point, where would I draw the lines in the current application? And would that even be helpful? You know, we could probably get a hundred engineers all working on this. It's one of those things where... with this rearchitecture, we finally got a decent amount of traffic flowing to it. We've got about half of our traffic now on the new service. And so I was able to do some load testing and some performance testing, and I was very pleased.
And our old service was not good. It wasn't good at all. And so we've gotten like 10x performance improvements in latency and in the load it can handle. Again, that's not because we're doing anything cool or crazy in this new one; it's more that we're just not doing anything crazy. The old one was doing crazy stuff. And so I was talking to my CEO about that. I was saying, yeah, we're easily going to be able to handle 10x the traffic of what we used to be able to do. And he was like,
Nathan Toups (45:55)
All right.
Carter Morgan (46:05)
what about 100x? What about 1000x? Could it do that? And I told him, I just don't know. You will discover those problems as you start encountering them. And I feel that way as a system grows: you're going to start feeling the pain points, and when you feel those pain points, that's when you need to start taking action. But that's where books like this come in, like Fundamentals of Software Architecture, like Team Topologies, like
Building Evolutionary Architectures, Working Effectively with Legacy Code. I mean, all these kind of meat-and-potatoes books. I guess Team Topologies is a little more organizational, but you know: how do you organize teams? How do you break apart code? How do you segment things? Look, we may get to a point, I don't think we will, but never say never, where the large language models just get so good that they can do our job start to finish. Again, I don't think that'll happen. I have a lot of reasons why I think that won't happen.
But you know, assuming we kind of remain in this world and see incremental improvements, where the models get progressively better at generating code, I've been so glad that we started this podcast and that we have read all of these books, because intuition and judgment and taste as a software engineer are at a premium these days. Right? There was a bit before where you're like, you know what, I'm just going to code and I'm going to find it along the way. Right?
Nathan Toups (47:20)
Mm-hmm.
All right.
Carter Morgan (47:35)
And now, having these kind of solid foundations about what a system should look like, where the seams should be, what patterns are out there that you can use... And you don't need to have an encyclopedic knowledge of all patterns. You just need to remember: I heard about this pattern once, let me go explore that a little further. And because it's fairly cheap to generate code, you can get a version that looks like this. I mean, John Ousterhout talks about designing things twice, and designing things twice has gotten a lot cheaper these days. Right? And so...
Nathan Toups (47:51)
Yep.
All
Carter Morgan (48:05)
But I think that's great because now you can iterate faster. I don't know. All that to say, if you're listening to this podcast, I just continue to believe that this deep technical understanding is so important to being a software engineer. So, you know, that's my little...
Nathan Toups (48:05)
Yep.
Yep.
No, and I
think that a lot of good architecture is figuring out what to eliminate from the system that's unnecessary, right? A perfect example: one of the things that made Apple innovative, the reason the Apple I was so interesting, was that Steve Wozniak was just a whiz at this. You know, he used to work at Atari, and Atari was all about making video games as cheaply as possible
Carter Morgan (48:33)
right.
Nathan Toups (48:52)
while staying logically correct. And Wozniak was really talented at reducing the number of chips on the hardware that went into an arcade cabinet. That was the economics. So he thought about elimination a lot. And so they figured out, with all these off-the-shelf parts, I can build a personal computer with a really basic version of BASIC, and we could do it for under a certain amount of money. That was the sort of innovation that Apple brought to the table. There was this home lab,
sort of Homebrew scene, trying to build these personal computers. I think that we're in the same place here, which is that I can get a large language model to get the right shape of an idea, especially if really what you're doing is gluing some existing concepts together, right? Like, a perfect example is your company
having a way of expressing orders, or an onboarding flow, or a matching market between coaches and coachees. None of that, from a web framework standpoint, is completely insane, right? Really what it is: the marketplace is the new idea. It's the way that you're doing the matching. It's the way that you cultivate coaches and help coachees get the outcomes they want. That's where the innovation is. But
Carter Morgan (50:01)
You're right, exactly.
Nathan Toups (50:14)
making a platform in which people sign up and fill out a profile and schedule things, these are things that a large language model has lots of good examples of in the world. And you kind of pick and choose: yeah, I like that, I hate that, put these things into place, which is a taste-level thing. And then really what you want to do is eliminate it down to its essence. And they even talk about this in this book, where they reach back to,
Carter Morgan (50:17)
Right, right.
Exactly. Right.
Nathan Toups (50:40)
who was it? Is it attributed to Michelangelo, or there's a few other sculptors, right? Where you basically say, you know, the work is actually in the block of stone, and what you're doing is you're freeing the, you know,
Carter Morgan (50:45)
yeah, the sculptors. Yeah.
Yeah,
you're removing the marble that doesn't belong, right?
Nathan Toups (50:57)
that doesn't belong there.
And I think that is actually our job more than ever: can I get the shape of this, and then say, okay, this is actually an unnecessary cog, or this is an unnecessary piece of the pipeline, until all of a sudden I've gotten it down to its essence? And that is not something that is just easily attainable. And it's the opposite of the AI psychosis thing, right? It's like, okay, you expressed it in 70,000 lines of code. I did it in
Carter Morgan (51:18)
And that's the... Yes, yes.
Nathan Toups (51:26)
1,000 lines of code, right? That's a true master: the person who can do in 1,000 lines what you do in 70,000.
Carter Morgan (51:34)
I was seeing ThePrimeagen talk about how he still codes for one to three hours a day, hands on keyboard, actually typing, because he wants to keep those skills up. And I've been thinking... we've just been having to move fast right now, so I haven't been able to do that, but when things slow down a bit, I might go back to doing that. And, you know, not even Cursor or anything. I actually got rid of Cursor. I just use VS Code and have the Claude Code extension. Because I think maintaining those muscles is important, because I mean, I
have not written a line of code, like handwritten a line of code aside from the occasional log statement, in a while, right? But yesterday, when I was trying to come up with this new order service using the strategy pattern, I was shocked at the number of times I had Claude generate it and said, no, wrong. And I have to highlight this and that and be like, I don't like this, I don't like that, let's try to make it look a little more like this. But that idea of the sculptor, right, who's just
removing the marble that doesn't belong... they mention that as tactical forking. Did you use that term? I don't want to reuse it if... because we were talking about the concept. Yes, tactical forking is in the book. But basically what they say is, instead of carefully... you're trying to decompose a monolith, right? Instead of carefully extracting services from the monolith, they say, you know what, fork the whole thing. Just clone the whole monolith and then just remove stuff,
Nathan Toups (52:41)
I don't think I know, but that is in the book. Yeah.
Yeah, I thought-
Carter Morgan (53:01)
just delete
stuff. Yeah, I'm like, that's not a crazy idea.
Nathan Toups (53:05)
And I've actually seen this. I've actually seen this. They didn't mention it exactly here, but when we were trying to do this, we had a big monolith, and before we even removed a single line of code, we realized that we could run the monolithic web application in what we would call modes. And so we literally had a feature flag for which mode the entire code base would run in.
Carter Morgan (53:29)
That's cool.
Nathan Toups (53:30)
And so then we'd basically say, okay, this one is in web server mode, and this one is in order management mode. And the question was: can we at least get them talking to each other over the wire? And then once we did that, we forked the code base and started reducing it down, so that order management was just order management. And so I didn't realize that that's what this pattern was called, but it worked beautifully, beautifully for this.
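A minimal sketch of the "modes" idea Nathan describes: one codebase, with a flag choosing which routes a given deployment serves. The mode names and routes here are invented for illustration:

```python
from typing import Callable, Dict


def create_app(mode: str) -> Dict[str, Callable[[], str]]:
    """Build the route table for one deployment of the shared codebase.

    The same artifact can run as the 'web' service or the 'orders'
    service depending on a flag, so the two halves can start talking
    over the wire before any code is physically forked out.
    """
    routes: Dict[str, Callable[[], str]] = {}
    if mode in ("all", "web"):
        routes["/home"] = lambda: "web: home page"
    if mode in ("all", "orders"):
        routes["/orders"] = lambda: "orders: order management"
    return routes


# Two deployments of the one codebase, behaving as two logical services.
web_app = create_app("web")
orders_app = create_app("orders")
```

The nice property is that the fork-then-delete step (tactical forking) becomes low-risk: the wire protocol between the "services" is already proven before any code is removed.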
Carter Morgan (53:56)
Yeah, yeah.
No,
I agree. And you can try to look at a system and tease it apart and be like, okay, well, we're going to need this, we're going to need that. And it's like, why bother? Just clone the whole thing and start removing stuff. Say, well, we definitely don't need this, okay, that's deleted. And any seasoned software engineer knows how good it feels to delete code. Right?
Nathan Toups (54:07)
Mm-hmm.
Yeah,
actually, I think it was Laura Tacho, who was the CTO over at DX and is now over at Amazon doing developer experience stuff. She talked about using large language models to remove code. She'll even do passes where she's not getting it to write any net new code. It's literally: can I analyze this code base and remove lines of code that are no longer needed?
Carter Morgan (54:31)
cool.
interesting.
Nathan Toups (54:51)
which I think is a really powerful practice to go through as well, because you need to scrutinize its conclusions. Do I break tests? Do these things still work? But yeah, and I've said this in other environments: every line of code is a liability, potentially. Every dependency is a liability. Look at Axios or these other crazy supply chain hacks that are happening right now.
Carter Morgan (54:57)
Right.
Yes, yes.
Nathan Toups (55:15)
I did this recently with, I think it might've been Book Overflow's website. I was looking through some stuff and saw a weird dependency that I was trying to update, and I realized it was some helper library that was like a hundred lines of code. And I was like, I don't want this. And then I was like, okay, which functions are we actually using out of this library? Let's just vendor it ourselves. We're just going to make our own implementation of it. And again, it's our little website, but it was just one
Carter Morgan (55:28)
Right, Yeah, yeah.
Right.
Nathan Toups (55:43)
less dependency on the dependency graph. Because all I could think was: this dude's going to get burned out, he's going to replace it with something, and the next thing you know, credential farming is going to happen on the Book Overflow website. And I was like, or I could just eliminate it and handle the little bit of logic that was useful for me. And we should always be scrutinizing ourselves this way.
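Vendoring in the sense Nathan means is just this: copy the one piece of logic you actually use into your own tree. A hypothetical example, replacing an imagined hundred-line slug-helper package with the few lines a site actually needs:

```python
import re


def slugify(title: str) -> str:
    """Turn a page title into a URL slug: lowercase alphanumeric
    runs joined by hyphens. The whole "dependency", vendored."""
    return "-".join(re.findall(r"[a-z0-9]+", title.lower()))
```

The trade-off is the usual one: you now own the maintenance of these lines, but you've removed an entire node (and its transitive graph) from your supply chain.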
Carter Morgan (55:53)
Yeah, I know, right.
Right.
Yeah.
Yeah, I blew some interns' minds with that, because the interns are all very excited. They have Claude Code and they're like, oh my gosh, we can generate... you know, for people who are non-technical, it does feel magical that it can generate a little JavaScript script. And they're talking about code, like, yeah, I'm still trying to understand code. And I was walking by and I'm like, here's the only thing you need to know about code: code is a liability. Our job is to generate as little of it as possible to get the job done. And this was a completely
foreign concept to them, right? Which I don't blame them for. That's what's been so weird about this kind of, I think, again, AI psychosis moment. I remember this story about one of the early principal engineers at Apple. You know, it's the eighties, and they're still trying to figure out, how do you track a programmer's productivity? And so they thought, okay, lines of code: every week you've got to fill out this form and write
the number of lines of code you generated, right? And he spent the whole week trying to solve this very critical, tricky performance bug. And it wound up being that he removed, I think, three lines of code and replaced them with one line of code, right? And so he gets this form: what do I do with this? And he writes negative two and submits it. And from then on, he wasn't asked to submit the form anymore.
But yeah, like I just.
You need good discernment, especially in this day and age. And I think there's a lot of people trying to convince us that the fundamentals of software development have changed, and the more I dig into it, the more I don't think they have. Which is not to say that our workflows haven't changed, right? I find these days I'm much more often working on two things at the same time, like I'll have two agents
in two separate Git worktrees and I'm bouncing back and forth between them. And that's a very different way of working than software engineers have had in the past, where we've really admired getting into the deep meditative flow, right? And I'm not saying that's how all engineering is. I'm just saying I'm noticing that in myself. We are taking our 10-person engineering team, which used to be two squads, and dividing it into four squads, because we think that, you know,
Nathan Toups (58:17)
Mm-hmm.
Carter Morgan (58:33)
smaller units can move a little faster. So a lot is changing. But this idea that now, all of a sudden, lines of code generated is a really good productivity metric? I don't know about that, right? This idea that you don't need to understand the line-by-line code anymore? I don't know about that either. Maybe LLMs will just get better at the fundamentals of the job, but I don't think the fundamentals of the job have changed.
Nathan Toups (58:59)
So I'm gonna make a call here: even though we read about half of chapter six, which is on pulling apart operational data, I think we should probably stop here and use that in our conversation next week, just because there's some deeper things that I wanna make sure we have time for. And I think it feels like we can get into hot takes and kind of round things out, right?
Carter Morgan (59:12)
Yeah.
Yes, yes.
This whole episode has been a hot take. But my hot take is: look, the monolith is very much back in style, but I like microservices. I think there's something really good about a team having full autonomy over what they ship. So long as the API contracts are maintained, it doesn't matter what the other service is doing. And think about it: any time you use any sort of third-party dependency,
Nathan Toups (59:26)
Yeah, you're right. You're right. You're right.
Same. Yeah.
Great.
Carter Morgan (59:53)
you call S3 with the AWS SDK, you don't know what it's using under the hood, what it's written in, what they're doing for their database, how it's running. So, you know, don't go do anything crazy, like one microservice per engineer, but think about the operational overhead of trying to have 40 engineers work on the same service. Then it becomes unclear: if the CI pipeline gets broken, who
fixes that, right? Who is in charge of how it's deployed? Like, I don't know. Microservices aren't so bad, folks.
Nathan Toups (1:00:27)
Right.
Yeah, I agree with that. And I also think that, you know, especially if your systems can handle things like partial failures... we didn't really get into it, but one of the nice things about breaking a database out into its service-oriented components is that your system might be resilient enough to survive a partial outage, right? Maybe you can't take orders at the moment, but the profile and communication parts of your system can continue to work.
Carter Morgan (1:00:52)
Yes, 100%.
Nathan Toups (1:00:54)
⁓ If you have a monolith and your database falls over, the whole thing goes down, right? So I think...
Carter Morgan (1:00:59)
I think a great example
of this is Facebook, for example. It's possible that the ads management platform has been broken for the past month. I wouldn't know, right? Because I don't ever use that. Facebook's a bad example, because I'm not really on Facebook these days. Maybe Twitter is a better example. If I'm being served static ads as the user, what do I care? Maybe that part has been broken for a month. I don't know. But because it's decoupled, because it's not, you know...
because it's probably governed in a microservice-like way, yeah, it tolerates that partial failure. So, you know.
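The graceful-degradation idea the hosts are describing can be sketched in a few lines. This is a minimal illustration, not the book's code; the service names and fetch functions are hypothetical, with the orders call simulated as failing:

```python
# Sketch of partial-failure tolerance: each dashboard section calls its own
# service, so one service's outage doesn't take down the whole page.
# All service names and fetch functions here are hypothetical.

def fetch_orders():
    # Simulate the orders service's database being down.
    raise ConnectionError("orders DB unreachable")

def fetch_profile():
    # The profile service is independent and stays up.
    return {"name": "Ada", "plan": "pro"}

def render_dashboard():
    """Degrade each section independently instead of failing the whole page."""
    page = {}
    try:
        page["orders"] = fetch_orders()
    except ConnectionError:
        page["orders"] = "Orders are temporarily unavailable."
    page["profile"] = fetch_profile()  # unaffected by the orders outage
    return page

print(render_dashboard())
```

In a monolith sharing one database, the same outage would likely fail both sections at once, which is the contrast Nathan draws above.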
Nathan Toups (1:01:35)
One funny thing
is, and actually it's fixed in the print book, so I think they obviously acknowledged this: in the audiobook, in chapter two, they attribute a quote to an "ancient Greek philosopher." The quote is, "All things are poison and nothing is without poison; the dosage alone makes it so a thing is not poison." And it's this Paracelsus
Carter Morgan (1:01:48)
yeah.
Nathan Toups (1:01:58)
quote. And I was talking to my wife and I was like, oh yeah, you know, there's this kind of interesting quote from an ancient Greek philosopher. I look it up, and he's not ancient Greek; he's actually Swiss German, from the 16th century. And I was like, what the heck? But he has this Greek-sounding name. So anyway, just a funny little thing. And then I looked in the print book, and it just says "the philosopher"; they dropped the "ancient Greek" part, which I thought was funny.
Carter Morgan (1:02:23)
Okay, let's talk about what we're gonna do differently in our careers. For me, I just wanna focus more on getting technical alignment before making a decision. Again, I think in a world where velocity is faster, direction is so much more important. And so I wanna be better about the team all coming together, even for medium-complexity decisions, and getting people's input. How about you?
Nathan Toups (1:02:40)
Yep. Yep. I've been doing this
recently: ADRs, architecture decision records. These ADR documents are something that sits next to the code and says why we made this decision. It actually makes a software implementation a little more permanent. And so I'm in the same boat. I've actually started using this already, and I love it, because it gives me a little more weight to being like, yeah, this is why we do this thing this way.
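An ADR is usually a short text or markdown file kept in the repository alongside the code. A minimal sketch of the common format (the title, decision, and file contents here are invented for illustration):

```markdown
# ADR 0007: Use PostgreSQL for the orders service

## Status
Accepted

## Context
Order creation and payment updates must commit together; our current
document store offers no multi-record transactions.

## Decision
The orders service will use PostgreSQL as its backing database.

## Consequences
We gain transactional guarantees; we take on schema migrations and a
second database engine to operate.
```

The Status/Context/Decision/Consequences sections are a widely used convention (popularized by Michael Nygard), which is what gives the "this is why we do this thing this way" weight Nathan mentions.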
Carter Morgan (1:03:03)
There we go. For book recommendations, like I said, I'd probably do Fundamentals of Software Architecture first, and then maybe you could skip Building Evolutionary Architectures, but definitely Fundamentals first. And then if you like both of those, or are wanting the next chunk, read this book. How about you?
Nathan Toups (1:03:15)
Yeah, this
is you've been going to the gym, you know, you may be been doing the bicep curls or whatever. And you're like, you know what? I want to get into technical, you know, clean and jerk Olympic lifting. Right. This is the this is the Olympic lifting barbell workout for software architecture. Advanced topics. This is really about the hard parts. If you're super serious, this is the book for you for sure.
Carter Morgan (1:03:27)
Yeah, yeah.
There we go. We'll be back the next two episodes to talk about the rest of this book. You can always find us on Twitter at BookOverflowPod. I'm on Twitter at Carter Morgan, and you can contact us at contact at bookoverflow.io. And you can find Nathan and his work with his consulting agency Rojo Roboto at rojoroboto.com. Thanks folks. See you around.