Steve Flanders Reflects on Mastering OpenTelemetry
Book Covered

Mastering OpenTelemetry and Observability: Enhancing Application and Infrastructure Performance and Avoiding Outages
by Steven Flanders
Transcript
This transcript was auto-generated by our recording software and may contain errors.
Steve Flanders (00:00)
We believed that the future was going to be kind of open source here, democratized from an instrumentation and data collection perspective. We had heard, based on our customer interviews, that's the pain point everyone had: switching vendors was hard, managing this instrumentation was hard. No one wanted to do it.
So we did place a bet there that it would be necessary.
Carter Morgan (00:25)
Hey there, welcome to Book Overflow, the podcast for software engineers by software engineers, where every week we read one of the best technical books in the world in an effort to improve our craft. I am Carter Morgan and I'm joined here as always by my co-host, Nathan Toups. How are you doing, Nathan?
Nathan Toups (00:37)
Doing great, everybody.
Carter Morgan (00:39)
Well, we're very excited. We've got a special episode for you today. This is our interview with Steve Flanders, the author of Mastering OpenTelemetry. Very cool. I mean, if you've been following the podcast, you know that Nathan has been working with OpenTelemetry for a while. I have been kind of just diving into it. We actually read this book at my suggestion because I just wanted to understand this better as I was implementing it on my team. So to be able to go from reading this book to actually
talking with one of the founding members of the whole OpenTelemetry project. So cool, and such a cool guy. I mean, Nathan, give them a sneak peek of what they're about to listen to.
Nathan Toups (01:15)
Yeah, we got to dive into a lot of the topics that we ended up bringing up over the two episodes in our podcast and got him to elaborate on a lot of the cool things. Because, you know, he wrote the book about a year ago, but things were already changing in the landscape: what he's excited about with AI and the future of what OTel can accomplish, also some of the challenges and anti-patterns. There's just, I don't know, I think it's slightly less than an hour that we talked to him,
but it's dense with a lot of great information. Yeah, all around. Fun time.
Carter Morgan (01:47)
Yeah, absolutely. I mean, add him to the list of fantastic authors we've had on who have every reason to have an ego the size of Mars and instead are just really humble, really cool guys. Steve, such a cool guy, such a cool interview. Excited for you to listen to it. So please enjoy this interview with Steve Flanders as he reflects on his book, Mastering OpenTelemetry.
Nathan Toups (02:01)
Mm-hmm.
Carter Morgan (02:14)
Well, Steve, thank you so much for joining us. It's a pleasure to have you.
Steve Flanders (02:17)
Thank you so much for having me.
Carter Morgan (02:18)
Well, we are really excited to have you on. We read OpenTelemetry... or sorry, Mastering OpenTelemetry. This was a book I insisted on reading because I had taken on this responsibility at work to get all of our OpenTelemetry set up. And I was like, I gotta understand this better. Obviously, you know, you are one of the... Maybe before we get started, help our audience understand what exactly is your role with OpenTelemetry? I believe you are one of the, I don't know if you want to say inventor.
Creator? I mean, tell us a bit about how you would describe yourself. Founding member, okay.
Steve Flanders (02:49)
I say founding member, right? So there are lots of people
that made OpenTelemetry possible. And I can't say that I am a founder myself, but I was part of the project since the very beginning. So I actually worked on one of the precursor projects. OpenTelemetry was formed out of OpenCensus and OpenTracing. Those two projects existed before they merged and formed OpenTelemetry. I was working on OpenCensus with Google and Microsoft initially. And then at...
Carter Morgan (03:14)
Okay.
Steve Flanders (03:18)
I was at a start-up called Omnition where I was a founding member as well. That was eventually acquired by Splunk and became the Splunk APM product line. But at Omnition, we wrote the OpenCensus service, which is now the OpenTelemetry collector. So I've been involved with the data collection aspect for a very long time.
Carter Morgan (03:33)
Okay.
Steve Flanders (03:37)
And then in my role at Splunk, I was actually leading a team that was responsible for all of Splunk's contributions to OpenTelemetry, which included instrumentation, the specification, the governance committee, like all different facets of OpenTelemetry overall. So I've been involved since the very beginning, and I'm kind of very passionate about this generally.
Carter Morgan (03:58)
Well, very cool. So maybe explain to us then, describe the computer science landscape at the time you wrote Mastering OpenTelemetry. And obviously you've been involved with the project from the very beginning, if you want to add any other context there. And just kind of why you felt that it was necessary to write the book on this.
Steve Flanders (04:16)
Yeah, yeah. So the book came out almost a year ago now, right? So the month of November last year. So I guess good timing in terms of talking about what happened there. And I did it for several reasons. One is I learned a lot from others coming up in my career, right? Other books, other conference talks, blogs, things like that.
And so I see this as a way of kind of giving back. Since I've kind of been involved in this space for a very long time, I know like a fair amount about it and I'm hoping that I can provide information that'll be useful for others.
And a year ago was a good time because OpenTelemetry had really matured, right? Traces, metrics, and logs, the three pillars of observability, those were mostly stable. And OpenTelemetry was actually advancing beyond that, including things like profiling or real user monitoring and other aspects that are also important from an observability perspective. Also, just felt like there wasn't a ton of material out on OpenTelemetry. There were a couple books.
but not a lot of material generally. Lots of great conference talks and kind of blogs, but not as much material that had like the breadth and depth that I was hoping for. And so I thought maybe I could put something together that might be useful for folks.
Carter Morgan (05:31)
It's a Wiley book. So talk to us about the process for actually getting the book started. Did Wiley approach you? Did you approach Wiley? And then, we've also read some fantastic books that were self-published. So maybe what's the logic for going with Wiley as opposed to self-publishing? Talk to us about how it all came to be.
Steve Flanders (05:49)
Yeah, yeah.
So, actually, a couple publishers reached out to me initially. And I was like, okay, there appears to be an interest in this. And I feel like I know enough on the topic, let me try to put together the material. So if you've ever written a book, you have to define the outline, you have to have descriptions for all the sections. It's actually pretty difficult, because you have to think through the entire structure of the book without actually writing the entire thing. And it has to be compelling, right? So I did that. And I submitted it to
Carter Morgan (05:54)
very nice.
Hahaha.
Steve Flanders (06:19)
the publishers that reached out to me, and they immediately were like, we want to work with you, we think that this book has a lot of potential. And that got me thinking, okay, well, if they're interested, maybe other publishers would be interested.
Carter Morgan (06:31)
Uh-huh.
Steve Flanders (06:31)
And
so I actually reached out to other publishers as well, Wiley being one of them, and kind of submitted the same exact material that I put together. And everyone that I reached out to was interested in getting the book written. So after a bunch of back and forth talking to different publishers, I really liked what Wiley was talking about and their approach of more educational-based books. So their Mastering series is really about helping people understand
concepts and see actual technical demos of how to get it working in practice. And they have this notion of an intro and a conclusion to each of their chapters, which you probably saw as you were reading the book. And it's interesting, because I've actually had people reach out to me after reading the book being like, hey, that's their favorite part, that intro and conclusion section. And I probably wouldn't have written it that way if I wasn't working with Wiley. So it was definitely a process.
Carter Morgan (07:20)
Yeah.
Steve Flanders (07:25)
But I enjoyed working with Wiley, and I think the final product came out well.
Nathan Toups (07:31)
Yeah, that's great. Yeah. I thought this book was really useful for our podcast because my background is DevOps and site reliability engineering and platform engineering. And so this is the vocabulary, and I've been seeing the evolution over the last 10-plus years of how we even think about metrics and observability data. And it was just a cool time because, first of all, OTel is... oops, do you pronounce it OTel? How do you pronounce the shorthand?
Steve Flanders (08:00)
Yeah, O-T-E-L.
Nathan Toups (08:02)
Okay, that's right, okay.
Carter Morgan (08:03)
I know
that because, for our audio listeners, you cannot see Steve's excellent shirt, which says, tell me more. But we had heard some people pronounce it "ottle." And I'm like, that's sacrilege. You can't call this ottle. Yeah, there we go.
Steve Flanders (08:11)
I don't think I've ever heard it said that way.
Nathan Toups (08:17)
Yeah. That's
so funny, yeah.
Steve Flanders (08:22)
Now, they went with OTel because OT would have been OpenTracing, and they didn't want to confuse OpenTracing and OpenTelemetry. So we went with OTel.
Nathan Toups (08:30)
But I've actually found it really useful to have conversations with software engineers through the framework of OpenTelemetry. And I think that was exactly true with this book. What I wanted to bring up, though, is that we kind of noticed this when we read this book in two parts: it's deeply technical, but there also was some sort of introductory information
sprinkled throughout. And so you kind of had to switch from super deeply technical mode to introduction-to-concepts mode. Was that a goal that you had from the beginning, or was that feedback that you got from Wiley? I'm always curious, how do you balance that?
Steve Flanders (09:09)
Yeah, so this topic, OpenTelemetry and observability in general, is quite complex. So if you're not in the field, it's not necessarily the easiest to understand. There's a lot of concepts to it. There's a lot of acronyms and abbreviations of things. And there's code, right? So a software development background sometimes is kind of necessary. I was trying to structure the book in a way that I would hope is approachable by the largest audience possible. So I didn't necessarily care where you were.
Nathan Toups (09:21)
All
Steve Flanders (09:39)
Like if you were a pure software developer, or if you had never used observability before, I was hoping you could get something out of the book. And so, especially when I started writing the initial chapters, I rewrote them five, six, seven times, trying to strike that balance of: how can I provide at least some introductory information that will be generally applicable if you know nothing about this concept, but if you do know the concepts, you can go deeply into the technical: here's the code, here's how you'd actually change it, here's the output of it.
So I'm trying to give a little bit of both worlds. There's pros and cons to that approach. Some people would prefer that you just keep it really high level and just touch on the concepts. Others would say, hey, maybe you should have gone all the way and just run purely technical and deep. But I tried to strike the balance. I'm actually curious what you both thought as you were reading it.
Carter Morgan (10:29)
Yeah, I think we always... we talk about this on Book Overflow, and we jokingly call it the end-to-end test, which is just, by nature of the podcast, we have to read books cover to cover, right? And some books are a really easy and enjoyable read that way. And then some books, like Mastering OpenTelemetry, are just very substantive and very deep. We had one book we loved, Building Evolutionary Architectures,
which broke the podcast because it was like 180 pages. And we're like, we think we can get this done in a week, but halfway through, we were texting each other like, we can't do this, man. There's too much in here. We've got to break this into two episodes. And so, selfishly, I think as someone coming to it... so Nathan and I have kind of overlapping skill sets, but a little different, where Nathan, you've been really deep in kind of the SRE stuff. I'm more of your general full-stack product senior engineer. And so,
I think selfishly, I might have liked the book to kind of come a little more from the perspective of like, we trust that you're already a senior engineer. Maybe this is your first kind of branch into open telemetry. But then also, Nathan, I think you might've enjoyed a book that was a little more targeted towards you. That was like, I have deep experience with SRE and I just want to get, you know, like even deeper.
Nathan Toups (11:49)
Yeah, so the way I look at this book is it felt like what I would call a skip-around-to-the-interesting-topic kind of thing, where I have some books, like Martin Fowler's Refactoring, that you're not supposed to read cover to cover. You realize that I have a certain type of problem I need to solve, and I go deep dive into maybe the philosophy and some of the characteristics. And that's how I felt about this, which, again, that's the kind of book I want to sit on my shelf. It's going to sit up there with the other reference books, and I'll hop through.
I did enjoy... Oh, did you? Oh, cool.
Carter Morgan (12:19)
I cracked it open again this week, because I was having, like, a collector...
Yeah, yeah, I was reviewing our config file. Like, the book talked about this. Let me go look that up, right?
Steve Flanders (12:29)
So this is great to hear, because this was definitely the intention. I don't expect everyone to read it cover to cover, but I'm hoping that you go to a chapter, you get some introductory information, you can go pretty deep, and hopefully you can get something out of it. So that's great to hear.
Carter Morgan (12:31)
Yeah.
Right,
Yeah.
Nathan Toups (12:41)
Yeah.
And I also love, you know, exploring things like anti-patterns or the traps that are in there. I feel like those are always super, super useful. One of the things I loved, and this is the way I was thinking about OpenTelemetry as well, is this idea of pipelines, not storage tanks, the sort of concept that's in there. You know, I think a lot of times when I'm having conversations about this, folks are using tools like
Carter Morgan (12:47)
Yeah.
Nathan Toups (13:10)
Datadog or New Relic end to end, right? They just kind of grab the SDK, follow the guides from this new vendor that they're onboarding. And this book does a really good job of advocating for the fact that this vendor-neutral approach, at least inside of your business logic, inside of the code itself, has a lot of advantages. I would love for you to elaborate a bit on that shift that's required in your thinking if you want to take OpenTelemetry and then decide where your storage sinks go at the end, kind of idea.
Steve Flanders (13:40)
Yeah, I mean, this is the biggest concept in OpenTelemetry. It's probably the reason why the project has been so successful to date: this notion of getting to a vendor-agnostic way. Like, I can kind of standardize on the way that I instrument and collect data in my environment, and I have control as the end user to do whatever I want with that data. If I want to send it to a commercial backend, I can. If I want to send it to two different commercial backends, I can. If I want to redact information, I can. If I want to enrich the information, I can. It's full control.
Carter Morgan (13:50)
Mm-hmm.
Steve Flanders (14:09)
But
the mindset shift, I think, was: before OpenTelemetry existed, if you really wanted the best, you had to use the vendor's SDKs. And if you wanted it quickly, again, you had to use the vendor SDKs, because they optimized it, right? Very simple to install, to start getting value out of it. You didn't have to understand all these deeply technical concepts.
Like, what is a trace ID? What's a span ID? What is context propagation? What's the benefit of having metadata on metrics? What is high cardinality? A lot of concepts you need to understand. The vendor took care of that for you. But the downside was you then had to rely on that vendor. If there was something wrong, you had to ask them to go fix it.
Carter Morgan (14:48)
Mm-hmm.
Steve Flanders (14:55)
Performance problems, security problems, a new feature. And then the harder part is, over time, those vendor costs keep going up, and they'll reach a point where maybe you're not getting as much value for how much you're paying, and you're going to want to switch. And the switching cost is so high: you have to basically rip out everything that vendor gave you
and install what the next vendor is giving you. And you don't get any value from that other than maybe saving some money in the future, right? So that was kind of the wave, I think, that led to OpenTelemetry being successful. Now if I install this (which, I mean, I still need to get off the vendor that I'm currently using, so there's a one-time lift to get on OTel), once you're on OTel, all the value that you get, all the flexibility that you get: it doesn't matter if it's open source or commercial, whether you're running it yourself or you want to send it out to a third party,
all of that's now kind of possible. So that's why I think we're seeing so much adoption and investment in this area. But it's still hard. A lot of people still haven't fully transitioned to OTel. It's easier for greenfield than brownfield, which I covered in the book as well.
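As a rough illustration of that "full control" idea, here is a toy span pipeline in Python that redacts, enriches, and fans the same data out to two backends. Everything in it (the `Span` shape, the function names) is invented for this sketch; it is not the OTel Collector's actual API, where the same steps are expressed declaratively as processors and exporters in a pipeline.

```python
from dataclasses import dataclass, field

# Toy pipeline sketch; names and shapes are invented for illustration.

@dataclass
class Span:
    name: str
    attributes: dict = field(default_factory=dict)

def redact(span: Span, keys: set) -> Span:
    # Drop sensitive attributes before anything leaves your environment.
    span.attributes = {k: v for k, v in span.attributes.items() if k not in keys}
    return span

def enrich(span: Span, extra: dict) -> Span:
    # Add environment metadata on the way out.
    span.attributes.update(extra)
    return span

def export(span: Span, backends: list) -> None:
    # Fan the same span out to any number of backends: open source,
    # commercial, or both at once.
    for backend in backends:
        backend.append(span)

vendor_a: list = []
vendor_b: list = []
span = Span("checkout", {"user.email": "a@example.com", "http.status_code": 200})
span = enrich(redact(span, {"user.email"}), {"deployment.environment": "prod"})
export(span, [vendor_a, vendor_b])
```

The point of the sketch is only who is in control: the redact/enrich/route decisions live in your pipeline, not inside any one vendor's SDK.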
Carter Morgan (15:58)
Right.
Okay. But here's what surprises me about kind of the evolution of OTel. And these... you've got your Datadogs, your New Relics, right? Which is, you would think that these vendors would hate OTel with a passion and would fight in every possible way to make sure, you know, to kind of strangle this thing in its infancy. And you guys, on the other hand... it's not, forgive me if I'm wrong, but I imagine when this started, there probably wasn't like a huge
financial incentive on your part. It's not like, if we can make OTel work, we'll get filthy rich, right? It's more of a labor of love. And yet you won, right? Like, the big guys are now striving to be OTel-compatible. I mean, how does that even happen?
Steve Flanders (16:44)
So I think there's
a couple of different ways that it happens. So first, the big brands that you're talking about, the big vendors, they had really large teams that were building exactly what OTel is: all the instrumentation, all the data collection. But every big vendor had to have a really large team. That doesn't necessarily scale. It's a very hard problem. It's a big investment. It also makes it hard for startups to come in and take advantage of it. So you need some standardization to occur.
Carter Morgan (16:55)
Mm-hmm.
Steve Flanders (17:12)
But I think the end users, the consumers of this data, they started feeling the pain as well. It wasn't as simple as installing it, right? Supply chain attacks, all the security things that you've seen coming up, the performance hit in your application. They started having more and more friction, even though the vendors were giving it to them, and they really wanted to just not worry about this problem anymore. They didn't want to invest time and effort into it. And...
Carter Morgan (17:17)
Mm-hmm.
Steve Flanders (17:37)
I mean, yeah, we made a bet at Omnition when we were kind of starting our company. We believed that the future was going to be kind of open source here, democratized from an instrumentation and data collection perspective. We had heard, based on our customer interviews, that's the pain point everyone had: switching vendors was hard, managing this instrumentation was hard. No one wanted to do it.
So we did place a bet there that it would be necessary.
And then what you saw in OTel is, not only did vendors and cloud providers get behind this, end users were contributing to OTel, right? That tells you that they were feeling the pain, and it just kind of proved this was a necessary problem to solve.
Carter Morgan (18:08)
Mm-hmm.
Steve Flanders (18:16)
And then luckily, other open source projects picked it up too, right? Like, Jaeger was very early on involved in OpenTelemetry. Now Prometheus, with their newer versions, has much better support for OpenTelemetry. And you kind of build the economies of scale there, especially when you're seeing other large projects adopting it as well.
Carter Morgan (18:33)
OK, I have a question about your personal journey, which is: I think a lot of engineers naturally gravitate towards one area. They find it more fascinating than the others. And I think for people outside of software engineering, and maybe even some people within software engineering, there are some areas that make sense. If you say, I'm really into AI, it's like, well, of course, that's the hot new thing. It's bleeding edge. If you say, I'm really into UX, it's like, well, that's very flashy. There's a lot there. You gravitated towards observability,
right? Which, like, if you're not in software, you don't even know that's a thing or a concern that you should have. What makes you so interested in this field?
Steve Flanders (19:12)
I think it's just the way that I came up in my career, right? So very early on, like the early days of SaaS at scale or even PaaS at scale, I was involved in that. And so I was running large operational environments, like infrastructure that we had in the company itself. This is before AWS was a big thing, right? We were running our own hardware, scaling those services, running those services, keeping them online, helping the developers understand where the problems were.
That's where I started early in my career.
And so I used to write custom tooling, which would basically help network engineers troubleshoot their switches or help system engineers understand why the infrastructure is behaving the way that it's behaving. And when I started to actually play with, I guess, monitoring tools (it wasn't even observability tools yet), I saw the value of: I don't have to write custom scripts for you and hand that over as static configs. As long as you send us data, you can start querying for the problems that you have.
I can create custom alerts for you, you can then enrich those alerts, I can give you more kind of control of that data. And I kind of saw the value of it. Now, I started early on with logs. Logs are great for last-mile root cause analysis, but I don't know if you've ever tried to stitch logs across disparate systems. Not a great experience. And so I kind of realized early on, well, I know what I'll do: I'll go to the software development teams and I'll have them update their logs.
Carter Morgan (20:33)
yeah.
Steve Flanders (20:44)
Yeah, that just doesn't work, right? It's just not a viable option, getting everyone to fix it. And then I got introduced to tracing, and I'm like, ah, context propagation. This is great. Now I can actually pass context between my calls, I can stitch this entire thing together, and I can still get down to logs when I need them. So I just found myself dealing with complex software and troubleshooting, monitoring, observing it all the time. And so I really got interested in this space.
Carter Morgan (20:47)
Hahaha.
Steve Flanders (21:11)
And then I had the opportunity to work on an observability product at VMware, which really got me into this; I got involved in the open source community, and it kind of snowballed from there. And I feel like I've been lucky, because I've had the opportunity now to build logging, metrics, and tracing platforms. So I've seen all different aspects of observability, the pros and cons, the problems that people experience. And now you mentioned AI, right? That's the next wave on top of all of this that I think is going to be really interesting. So I don't know, I've just grown
passionate about it, and I like providing value to end users so they can hopefully observe their systems better.
Nathan Toups (21:47)
So I'm gonna dig in on that last thing you just said. What are you excited about as far as observability and AI tooling? Like where do you see that going?
Steve Flanders (21:55)
Yeah, I see at least three different areas for AI and observability. So the first would be observing AI systems themselves, right? So again, if you go back to OpenTelemetry, it collects telemetry data. There is LLM telemetry data that needs to be collected, because you're going to want to monitor those models. How are they behaving, performing? When are there errors? You probably even want to get into token utilization,
or whether or not your queries are efficient or effective, whether you're leaking PII, all sorts of interesting things that you might want to monitor or observe from an LLM perspective. And there are a couple projects that support OpenTelemetry today that are helping do that. OpenLIT and OpenLLMetry are the two big ones. I'm sure there are others as well, but those are the two that I'm quite familiar with. But I think that's an interesting space. How can we get GPU data? How can we actually monitor this and pull it into our observability platform? So that's one category.
On the opposite side of the house, it's in the observability products themselves, what AI capabilities can you provide to the users that are consuming your product? So today, you get static dashboards. Great if you've had a problem before, or great if you want to monitor whether things are upticking in the right direction or the wrong direction. But if you want to dynamically understand what's happening in your environment, static dashboards are not really where it's at.
And I actually believe that in the future, static dashboards will start being replaced by dynamic dashboards that are AI-driven. I think that you'll be able to literally ask questions of the chatbots in your observability tool, and it will run the queries for you, build a dynamic view, help you get to problem isolation and root cause analysis very, very quickly. So that's both sides of the spectrum. In the middle, I think it's what you're doing with the data and how you can analyze it better.
So going back to pipelines, for example, I think there's a lot more that can happen either in the OTel Collector or maybe some sort of edge processor that a vendor or another open source project provides, where you can do some rich AI analysis of the data that you're collecting: what's valuable, what isn't, what metadata you need to add or remove, how you can stitch these things together, maybe how you route or filter or aggregate.
So I think there's a really interesting space in what I'll call edge computing. It could be the edge in your environment, it could be the edge of the observability platform you're ingesting from. And we've seen a little bit of this in OTel today. There was a PR in the OTel Collector to try to do some AI analysis from a processor perspective. But I think that's probably the earliest days. If anyone's doing it, it's probably a startup, and that probably means you have to go pay for it. So I don't see a ton in the open source space yet, but I expect that to change over time.
So at least those three areas from an AI perspective.
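The route/filter/aggregate idea Steve describes can be sketched very simply. This is a hedged illustration only: the record shapes are made up, and a real edge processor would of course operate on OTLP data rather than plain dicts.

```python
# Invented per-request telemetry records, for illustration only.
records = [
    {"route": "/healthz", "duration_ms": 2},
    {"route": "/checkout", "duration_ms": 120},
    {"route": "/checkout", "duration_ms": 80},
    {"route": "/healthz", "duration_ms": 3},
]

# Filter: drop low-value telemetry before it is shipped (and billed).
kept = [r for r in records if r["route"] != "/healthz"]

# Aggregate: collapse per-request records into per-route summaries,
# trading raw volume for a much smaller signal.
summary: dict = {}
for r in kept:
    s = summary.setdefault(r["route"], {"count": 0, "total_ms": 0})
    s["count"] += 1
    s["total_ms"] += r["duration_ms"]
```

The interesting open question in the interview is whether decisions like "which routes are low-value" can be made by AI analysis at the edge instead of by hand-written rules like the filter above.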
Nathan Toups (24:39)
That's really cool. A follow-up to that too is, I'm seeing OpenTelemetry pop up in more and more places, which is great. Teams that I've talked to, or, again, greenfield projects are fantastic. Even if they say they're using something like Datadog in-house, if we have a new Kubernetes cluster, we'll typically default to the OTel Collector and then integrate that way,
which again gives us a lot of optionality early on, and, you know, a lot of business decisions can be made later. I've still seen folks struggle with integrating things like APM tooling and how to get observability to go across those two. Is that a knowledge gap that folks have, or are there existing tools that have made that easier with OTel? How do I bridge something that's, like, you know, Datadog APM magic plus my OTel backend, and things like that?
Steve Flanders (25:32)
Yeah, so this is hard, right? Because the value of APM is that I can stitch and pass context. The downside is it has to be consistent end to end, or you have disconnected views and it doesn't work, right? So if you're using Datadog context propagation and you add OTel, they don't necessarily play well together, because they're different context propagation formats.
So you have to pick one or the other, basically. And then again, that switching cost: there's going to be a period of time where you're losing visibility, because you haven't switched all the way over to, say, W3C Trace Context, the open standard that OTel supports, as an example. So that's a hard problem. It's specific for the most part to APM, because with metrics you don't typically pass as much context. Maybe you're enriching it with infrastructure metadata, but it
doesn't matter as much. But even in the case of metrics, if your metric names change, then your dashboards may not work, your alerts may not work. So even that has a cost associated with it. This is why you need all the data to route through, say, the OTel Collector, and then you might need to do some pre-processing to normalize some of this. But it's not easy when you're talking about taking a non-standard, or a vendor-specific standard,
and then trying to map it to an open standard, like OpenSlimitries trying to provide. And arguably, APM is probably the hardest signal. It has the most promise and value, because if you can do it end to end, you get metrics. You get red metrics, request error duration. You can get log records. You can get root cause last mile. But stitching together and building APM into all your products, your options are automatic instrumentation, which
may over instrument, may add overhead to your environment, may or may not work. And manual instrumentation, which requires you to understand all the concepts, make code changes, test those code changes, and then have to support that code kind of going forward. Neither are great. OTEL is trying to help that aspect. There is a donation that was made about six months ago, I think, called the OTEL Injector.
The idea is that the Open Symmetry Collector can inject the auto instrumentation in your environment. It'll discover Java processes or .NET processes, and it will add the instrumentation for you. So you can get some visibility. And OTEL supports automatic and manual. So you can start with automatic and then enrich it or make changes from a manual perspective when you're comfortable with that. So there are options, but it's work. It's not.
free ⁓ and so there's a learning curve associated with it and a time component to it as well.
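To make the collector-based normalization Steve describes concrete, here is a hypothetical OpenTelemetry Collector configuration sketch. It assumes the contrib distribution's metricstransform processor; the metric names and backend endpoint are made-up examples, not anything from the conversation.

```yaml
# Hypothetical collector pipeline: receive OTLP metrics, rename a
# vendor-specific metric toward a standard name, then export.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  metricstransform:
    transforms:
      - include: vendor.request.latency     # made-up vendor-specific name
        action: update
        new_name: http.server.duration      # closer to OTel semantic conventions

exporters:
  otlp:
    endpoint: backend.example.com:4317      # placeholder backend

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [metricstransform]
      exporters: [otlp]
```

Note that renaming in the pipeline only centralizes the change; dashboards and alerts that query the old name still have to be updated, which is the switching cost being discussed.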
Nathan Toups (28:09)
Okay. Quick follow-up on one of the things that you mentioned in passing; it was also something that was interesting in the book: the W3C Trace Context. What's the goal of this, what's the sort of, like, intended outcome, and how do you hope to see it implemented in the real world?
Steve Flanders (28:28)
Yeah, so specifically for APM, right, you have to pass context between your calls. So if one microservice calls another, it has to pass some information to it so that you can actually stitch that together. For example, the trace ID. The originating service will generate a trace ID. Any downstream calls, they need to pass the trace ID so that that service knows that it's a child of the parent call.
So before W3C Trace Context, there was no open standard for this. So Datadog has its own format. New Relic had its own format. AWS X-Ray has its own format. And as we were just kind of describing, unless you're all-in on that vendor, if you use another vendor, you're going to break the context propagation because they're two different formats. Or if you want to switch vendors, again, you have to remove Datadog's version and then install X-Ray's version. And there's a cost associated with that.
W3C Trace Context is a standardized version, a completely open standard that's available that provides this context mechanism. And if you use it, the idea is it doesn't matter if you're using X-Ray or Datadog. You can stitch everything together in an open format, and you can switch vendors with a much lighter lift. That's kind of the promise of it. It also supports
Nathan Toups (29:35)
⁓ yeah, that's the dream.
Steve Flanders (29:38)
That's the dream, right?
And again, for greenfield environments, easy. For brownfield environments, more work. But if you're going to get off a vendor anyway, you might as well go invest that work right now, because then you'll move to that open standard and hopefully not have to worry about it again.
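As a concrete sketch of what Steve is describing: W3C Trace Context propagates a `traceparent` HTTP header with four dash-separated fields (version, trace ID, parent span ID, flags). The toy Python below is illustrative only, not a real OTel propagator; it just builds and parses that header shape.

```python
import re
import secrets

def make_traceparent(trace_id=None, parent_id=None, sampled=True):
    """Build a W3C `traceparent` header: version-traceid-parentid-flags."""
    trace_id = trace_id or secrets.token_hex(16)   # 16 bytes -> 32 hex chars
    parent_id = parent_id or secrets.token_hex(8)  # 8 bytes -> 16 hex chars
    flags = "01" if sampled else "00"              # low bit = "sampled"
    return f"00-{trace_id}-{parent_id}-{flags}"

def parse_traceparent(header):
    """Split a traceparent header back into its four fields."""
    m = re.fullmatch(r"([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})",
                     header)
    if not m:
        raise ValueError("malformed traceparent header")
    version, trace_id, parent_id, flags = m.groups()
    return {"version": version, "trace_id": trace_id,
            "parent_id": parent_id, "sampled": flags == "01"}
```

A downstream service parses the incoming header, keeps the `trace_id`, and puts its own new span ID in the `parent_id` position on outgoing calls; that shared trace ID is what lets the backend stitch the spans into one trace.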
Nathan Toups (29:53)
Thanks.
Carter Morgan (29:57)
When reading this book, you talk about... we compared it almost to, like, Taylor Swift, which is funny, where basically what I had said with Taylor Swift is that when Taylor Swift was doing her big Eras Tour, I was like, you know what? I don't really listen to a lot of Taylor Swift, but I should, because this is a big cultural moment, and maybe I should figure out if, you know, if this is something for me.
And then I listened to it and was like, you know what? I don't think I'm a Taylor Swift fan. I think she's writing great music for a lot of people, but it just doesn't make sense to me, and I can appreciate that, because she knows who she's writing for and she writes towards it. And for better or worse, Taylor Swift obviously has very broad appeal, but she's kind of chosen to alienate a certain amount of probably 31-year-old men like me, you know, with the four kids, right? Who don't quite get it. ⁓ So you're writing this book and you say that you want to kind of do the opposite. You want to go with the very
broad approach for kind of anyone. I mean, as an author, talk to me about the pros and cons of that. Why did you choose to do that broad approach, as opposed to thinking maybe about someone more like Nathan and being like, you know what, this is going to be the book that's going to level up an experienced SRE to an expert SRE? ⁓ And yeah, I guess maybe explain why you chose the approach you did.
Steve Flanders (31:15)
Yeah, I think it's just based on my experience dealing with customers and end users, right? Like, I don't, I'm sure you've all read technical documentation before, right?
Carter Morgan (31:25)
Yeah.
Steve Flanders (31:26)
it's
really hard to write good technical documentation. Do I give you like a step-by-step guide? Is it procedural? Do I tell you how every feature works? How do I handle release notes? What happens if things change between the versions? How do you make sure like the documentation you're writing today is applicable a year from now? So with that kind of mindset, I was like, hey, ⁓
Carter Morgan (31:29)
Yeah.
Steve Flanders (31:47)
I want to try to make this approachable, where it's not like you have to have all this deep expertise, you have to be an SRE, in order to get any value out of it, right? Because OpenTelemetry is kind of an open community. It's still growing, very inclusive, I think. And the idea is we want to get more thoughts and more people that can kind of ramp up on the topic, because bringing in that outside perspective, I think, is what's going to provide the value for the project long-term and actually evolve ⁓ observability as, like, a topic. So I approached it from that
mindset with the hope being that if it is really successful, I'm more than happy to do a deep dive just for SREs or just for experts or just for developers that are dealing with observability problems. I thought that that could be a follow-up, but I didn't want to start with that because I think it would have really narrowed the market of people that would be interested in the topic. And I figured I'd get more feedback on a breadth-first approach and then could do depth later if need be.
Carter Morgan (32:35)
Yeah.
Nathan Toups (32:43)
And Carter, I think this is a great example, actually, where you don't have any SREs at your company, right? It's all software engineers. So, I mean...
Carter Morgan (32:50)
No, no, we're a small
startup. We've got a team of about 10 engineers and it's kind of an everyone does everything setup.
Nathan Toups (32:57)
Yeah, so, I mean, other than we've had some private conversations, which, again, you know, were mostly just kind of, like, egging you on about having some confidence about making these decisions. But I mean, it's been cool to see software-engineering-led efforts to do these observability things, because it really shouldn't be gatekept by someone like me, right? ⁓ So that's really...
Carter Morgan (33:07)
Ha ha.
right?
Yeah,
this all started because I was explaining... I joined this startup about six months ago or so. It's a Series A, and so it's maturing and just barely getting past the point of, every single day our butt is on the line and we might not exist tomorrow, to, OK, we have some room to mature our systems. And so I was describing the current state of our infrastructure to Nathan.
And he was just roasting me over text. He's like, what are you doing? And I was like, shut up. And so I was like, you know what? I'm going to fix it. Yeah. Right. So I was like, OpenTelemetry, we're going to do that. ⁓ I know. Right. We know that Elastic Beanstalk is running. Thank you. Right. ⁓ And so, yeah, yeah, I agree.
Nathan Toups (33:51)
was like, can't measure anything, come on. ⁓
I was like, you don't know what the health of your system is. Come on.
Right?
Steve Flanders (34:07)
You have to start somewhere.
Nathan Toups (34:09)
⁓
Steve Flanders (34:10)
that's the other
part of the book. I try to approach it from if you're in a startup or you're in an enterprise company, there are certain things you can think about and you may choose not to prioritize them. You're probably not worried about scale today in a 10 person startup, but eventually you need to think about those concepts. Should you make changes today or should you just worry about that later? I tried to cover some of those topics as well because everyone's environment is different. Everyone's business use case is different. If you're rapidly growing,
Maybe you don't care as much. If you're always having observability issues and your system's down, you probably care a lot more, right? Like where are you on that journey as well?
Carter Morgan (34:45)
We've talked about this with some other authors. By the time this goes live, our interview with him will be live too: Dan Heath. Have you ever read the book Made to Stick? Not everyone has. ⁓ It's a great... it's not a computer science book, and it was written back in like 2007, but it's all about this idea of what makes ideas naturally sticky. ⁓ You know, ideas that you remember, ideas that kind of get passed on, the archetype of that being almost like urban legends. ⁓ Really interesting book. And so we had him on the podcast and we kind of asked him and said, like,
Steve Flanders (34:54)
No.
Carter Morgan (35:16)
what, why did you choose to write about this? And he said, like, 'cause it sounded fun. I'm like, is that how you do your whole career? Do you just choose what sounds fun? He's like, yeah. And he said, because he trusts the natural kind of flywheel of: if I work on what I'm passionate about, I'll work hard. And if I work hard, then people will recognize that effort and that will reward me with other opportunities. And out of those, I can just kind of pick what I seem the most passionate about. It sounds like you might've taken a similar approach, that you were just
passionate about this. And I guess have you noticed that in your life too, that just as you work on what you're most passionate about that you've seen those kind of same opportunities open up for you?
Steve Flanders (35:55)
I would say yes. Like what you just said resonates entirely with my experience, right? Like, ⁓
I'm very passionate about the work that I do generally. I like to put in a lot of effort. I want to be successful or provide value or something to that extent. And for me, I have to believe in what I'm doing and what I'm working on for that to happen. So yes, I have to like the area that I'm kind of working on. Even like the last startup I was at, right? I had this whole decision matrix of would I join that startup or not? At the time I was in a corporate job, do I want to go back to kind of startup life? And for me, I had to believe, do I think that
Carter Morgan (36:24)
Mm-hmm.
Steve Flanders (36:30)
the company could be successful? Did I think that the team was actually like really good and could actually go execute? Did I think the timing was right? And did I believe kind of in the strategy of where it was going, right? Certain check boxes had to be, had to be hit for me. The same applies, I think, to observability and like open telemetry, right? Like I like solving technical problems. I like providing customer value. Observability is a technical problem that needs to be solved. It touches on different aspects like scale, distributed systems, evolving architectures. ⁓
migrations or moving to, like, new tech stacks. I love those types of things. And so that allows me to put more energy and emphasis kind of behind it and hope that others share that passion or kind of see the value kind of out of it. And luckily, like, OpenTelemetry and observability remain successful. So yes, I totally resonate with what you were just saying.
Carter Morgan (37:21)
That's awesome. Sometimes in, like, the experienced-dev subreddits or the computer science subreddits in general, people will kind of be like, what keeps you passionate, or how do you keep working hard? And it's a common response, and I respect it when I get it from people, to say: I look at my bank account. That's what keeps me working hard. And, like, I get it. I think there's a bit of that, for better or for worse. I'm just not wired that way. I'm just learning that about myself. Like, you got to work on what's...
Nathan Toups (37:47)
That doesn't feel like an authentic
answer. That feels like a Reddit answer to me. I'm gonna call that one out.
Carter Morgan (37:50)
Hey, you know what? Maybe it is, right? Because I'm
learning that about myself. I've got to work on what I'm passionate about. So I find stories like yours very inspiring, because obviously, you know... I think it's easy for people to kind of look at someone like you and be like, ⁓ well, he must just be a genius. And obviously you're very smart. But ⁓ I think people underrate what we're talking about, which is that, more than that, you found something you're really passionate about.
And you've been able to leverage your natural abilities effectively because you've pursued what's interesting to you. And I know, I find that really inspiring. That's not really a question. I just want to say that I find that inspiring.
Steve Flanders (38:34)
I mean, for me, that's what works, right? Everyone's different. So you gotta choose your own kind of adventure, but.
For me, yes, you want to find out what you like versus not, what you're good at versus what you're not. It's not that you should shy away from what you're not good at. Like, I've grown a lot in the observability and OTel space. I had done open source before, but not at scale with huge governance committees, and, like, how do you get alignment and consensus, and, like, SIG meetings and all this other stuff. So I've learned a lot along the way, but I at least had an interest there. I had some experience there and I was able to build on top of it. And I know that, like, having an impact at a bigger
Carter Morgan (38:48)
Right.
Steve Flanders (39:10)
scale and including other people's opinions were like important to me. So like leveraging those I think really really matters. So yeah.
Nathan Toups (39:18)
In the book, you present a maturity framework, and I love these kinds of things; it's just helpful. Again, we were able to talk about different companies we've been in and what maturity level we kind of felt they were at. I think level one was something like basic monitoring, all the way up to, I think it was level five, which is like autonomous observability. I read that and I was like, wow, I wish I'd ever been in an environment like a level five. ⁓
Carter Morgan (39:40)
Hahaha!
we were talking about that, because I used to work at AWS, which obviously has a very mature observability system. And I was like, I don't think we even had that, right? And so very aspirational and very cool to think about.
Steve Flanders (39:57)
Yeah. Yeah.
Nathan Toups (39:58)
So my question is, what percentage of organizations actually ever get to, like, level three plus? Which I think is the proactive observability level in the book.
Steve Flanders (40:09)
I think it's quite low. I would say less than 10% in my experience, and I deal with a lot of companies, right? So ⁓ I think it's... and it's because, A, it's a hard problem. ⁓ But B, it's: what type of companies are going to do that? Either they're doing it from day one in a startup, like that is the mentality, that is the culture of the team, or they're a large enterprise and they realize they have to get there.
Nathan Toups (40:11)
No.
Carter Morgan (40:12)
Hahaha.
Nathan Toups (40:13)
Okay.
Carter Morgan (40:16)
Ha ha.
Right.
Steve Flanders (40:34)
But the latter is much harder. You probably have a bunch of acquisitions. You have talent coming in and out. You have multiple technology stacks. You're trying to merge together multiple observability solutions. How?
I've seen really large corporate companies that have, like, a center of excellence just for observability. And those teams struggle. They, like, really have a hard time getting everyone on board, instrumenting the same way, normalizing things, getting to, like, trusting the observability tool and what it's saying is the problem. They're like, no, my senior engineer knows better, so they're going to go investigate themselves, right? Like, three plus, I think, is actually really, really hard.
Nathan Toups (41:11)
Yeah.
Steve Flanders (41:11)
But
it is like the desired state. We want to get to four or five. That would be great, but it's not an easy problem.
Nathan Toups (41:18)
Yeah, I've noticed this as well. And I think one of the best observability stacks I'd ever... well, I helped build it, but we just had this high-trust environment. We were firing on all cylinders. We really never wanted to be called at three in the morning. And we all had the autonomy to sort of very proactively work on these things. And so we actually were an OTel environment, and ⁓
had a really kind of nice Istio service mesh, Kubernetes kind of stuff that had a lot of self-healing properties to it. I've seen this a lot, actually. People criticize that this is, you're adding all these layers of complexity. And I have a lot of empathy for that. I think that learning these topics is a pretty steep learning curve. But when you see it execute well, you're like, I understand why the complexity is there.
Right? It actually allows us to have, if you do it properly, these really tight feedback loops, where the complexity is just sort of this orchestration layer on top of everything that we're doing. Teams that seem to be struggling with this... I guess I'll tie into, I think it was chapter 10, talking about these sort of anti-patterns and other things that are coming up. You outlined a bunch of great ones in the book, but are there any that stick out in your mind, or things that you've observed
in the real world, of anti-patterns that people end up getting stuck in or trapped in, or getting a bad taste in their mouth about OpenTelemetry?
Steve Flanders (42:50)
Yeah, the two big ones that always come to mind are either you've under-instrumented or you've over-instrumented. Those are the two that come up again and again and again. If you under-instrument... it's exactly, exactly. If you under-instrument, you can't observe your systems fully. You have gaps, and then you're like, OTel doesn't work.
Nathan Toups (42:56)
Okay.
Carter Morgan (42:56)
Mmm.
Nathan Toups (42:58)
You need the Goldilocks.
Steve Flanders (43:09)
And then the flip side is, especially when you're using, like, zero-config automatic instrumentation, sometimes you end up on the over-instrumented side and you're like, I'm getting all this data. It's not valuable. It's adding application overhead. It costs me money. It's too hard to search over. And so again, OTel stinks, I'm not going to use it, right? And so both of those are pretty big anti-patterns. And you need to really find that sweet spot in the middle. And my definition of that sweet spot is probably different than your definition, right? Like, some people take APM. Some people are like,
Carter Morgan (43:14)
Mm-hmm.
Steve Flanders (43:39)
I will always use tail-based sampling. I will always use a 1 % sample rate. That always works for me. And I've seen the opposite. I do not want to sample. I need to see every single trace because there's one in particular that's going to cause a problem. And if I miss that, I can't fix my observability platform. There are these dogmas or things that people stick to that also lead to the anti-patterns of being on one side of the house or the other when it comes to instrumentation and telemetry generation.
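To illustrate the sampling trade-off being discussed: one common approach, similar in spirit to OTel's trace-ID-ratio-based sampler (though this is a simplified sketch, not its actual algorithm), derives the keep/drop decision deterministically from the trace ID, so every service in a call chain reaches the same verdict without coordinating.

```python
def should_sample(trace_id_hex: str, rate_percent: float) -> bool:
    """Consistent probabilistic sampling sketch: map the low bits of the
    trace ID into 10,000 buckets and keep only the lowest-numbered ones."""
    bucket = int(trace_id_hex[-8:], 16) % 10_000
    return bucket < rate_percent * 100  # a 1% rate keeps buckets 0-99

# Because the decision is a pure function of the trace ID, a service that
# receives a propagated trace ID makes the same choice as the caller did.
kept = sum(should_sample(f"{i:032x}", 1.0) for i in range(10_000))
```

Tail-based sampling, by contrast, buffers whole traces (for example in the collector) and decides after the fact, so it can always keep errors or slow traces; the cost is memory and routing all spans of a trace through the same collector instance.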
Nathan Toups (44:07)
Sampling was one of the big selling points. So we did an internal restructure where, before I had joined, they had recently moved over to Datadog and basically just annotated everything with Datadog, everything you could imagine. And then ⁓ a couple of us on the platform engineering team championed doing this OTel sort of overlay within our Kubernetes infrastructure, so that ⁓ Datadog was in there, but we just managed it, and
everybody who was writing code was just doing OTel-native stuff. And we actually, like, helped the teams sort of champion this over. It was very successful. Like, we were super happy with it. But one of the major drivers for it was actually from the FinOps team, and it was because of sampling. It's because we were just getting destroyed on the amount of data we were sending off to Datadog. And we were like, hey, you actually need more control, and actually there's a lot of other good reasons for this pipelining, but here's one thing that we can solve near-term. And I always felt that was
useful for me, to kind of come in and try to make a business case as to, like, why we should do something, and also know that this gives us optionality for things in the future. ⁓ So where do you see trends going with OTel, and what are you most excited about over the next two, three, five years?
Steve Flanders (45:26)
Yeah, so, I mean, I'm excited about moving beyond the three pillars of observability. Traces, metrics, logs: great. You need to have them. They're foundational, perfect. But there are two other sides of that. Things like profiling, going deeper into the application code; OTel's clearly working on that. A little bit slower than I'd like to see, but, like, progress is being made. Sometimes open source communities are not the fastest. And then the other side is kind of the end-user experience. So things like RUM, real user monitoring, or DEM, digital experience monitoring.
I want to see more like browser-based or mobile device-based kind of visibility because then you kind of know the impact to the end user and that's valuable from an observability perspective. So I like those. Beyond telemetry, I'm really excited for what else you can do in the OpenTelemetry project. like semantic conventions or like OpenTelemetry Weaver I think is really cool because it gets you thinking about what happens when you change the names of metrics or metadata or things like that.
How can you update how you handle your software development lifecycle when it comes to observing your systems? Really cool. Or entities are being discussed. How can I stitch these disparate systems together? How do I identify what these objects are? And how can I make an entity map based on the data that I'm generating? Kind of a cool concept, right? And especially with AI models becoming more prevalent, I think entity models are going to be pretty important to power the AI observability use cases, and I'm seeing work there.
And personally, I would love to see OTel even get into some of, like, the dashboarding and other aspects. Not that I want them to build the UI, but, like, what if there was a more standardized query language that was more generally applicable? Then I would have even more vendor-agnostic aspects. Telemetry is great, but, like, still my dashboards, my alerts, whatever, are Datadog-specific or New Relic-specific. What if I could standardize that in OTel so that we have a common query language?
Nathan Toups (47:08)
Interesting, yeah.
Steve Flanders (47:18)
Now I really have flexibility and choice across my back ends. That would be really cool. But OTel hasn't gone down that route just yet.
Carter Morgan (47:27)
This interview is funny for me, because all the questions I want to ask you are just, like, troubleshooting my OTel config, right? I'm just imagining us, like, interviewing Tim Cook and me being like, ⁓ so my iCloud reminders don't seem to be syncing, I wonder if you knew anything about that. I mean, I think something we talk a lot about on the podcast
Steve Flanders (47:35)
Happy to help.
Carter Morgan (47:57)
as engineers is, like, breadth versus depth. You know, like, how deep are you going into one subject versus just general breadth? I'm trying to figure out: would you characterize yourself as someone who's gone deep in one particular vertical? But at the same time, OTel is kind of so expansive that I guess there's a lot of breadth in that vertical. I mean, how do you kind of describe your learning and your career?
Steve Flanders (48:21)
Yeah, so that's an interesting question, because I think it's so specific. So in OpenTelemetry, I understand most of the concepts of things that are going on, but I'm clearly not deep in every single aspect of it, because the project is just so large.
Carter Morgan (48:32)
Right.
Steve Flanders (48:36)
I can go really deep in the OTel Collector. I've been involved in that since the very beginning, right? So, like, even in OTel, I have a combination of breadth and depth, depending on where my interests are or where I've invested kind of that time, generally. ⁓ I think in order to have conversations about this, you have to have at least a little bit of breadth knowledge. Otherwise it's really hard. ⁓ But you're right. Some people want to go right down to the nitty-gritty details, and you can't know everything. So for me, I see myself as more of a connection maker.
If I'm able to answer questions like that, great. If not, I probably know someone in the community that's working on that particular aspect and I can make an introduction or I can make a suggestion on like how you can open a GitHub issue or go to like a special interest group or a community meeting and kind of learn more. So I think I'm more hybrid. I do a little bit of both. There are certain areas that I love going very deep in, but I realized that you can't go deep in everything. It's just not possible. These projects are too big.
Carter Morgan (49:30)
Yeah. It's one thing I love about our industry, though, and our craft, that there are some fields where, if you work hard and learn a lot, there's a bit of diminishing returns. Like, you can imagine being almost, like, a cashier or, like, an Amazon delivery driver, which, hey, there's value in an honest day's work, right? But to a certain extent, if you get really, really excellent at delivering packages, it's like,
what's kind of the limit on returns there? You're going to bump into it pretty quickly. That's just not true in our field, right? The more you learn and the harder you push yourself... I mean, you have to kind of play your cards right and know which jobs to take, things like that. But in general, like, you're rewarded, which is really neat. ⁓ Have you always been someone who's kind of been driven to go deep on a subject? Have you been kind of a lifelong learner like that?
Or is it something... maybe, Nathan and I kind of joke that we ourselves in some ways are late bloomers when it comes to ⁓ software engineering. We both kind of found it later in life. ⁓ Yeah, is this something where you were always kind of driven and studious like that as a child, or did you kind of grow into it later in life?
Steve Flanders (50:41)
Yeah, I love figuring things out when I was younger, right? Like going deep in something, understanding how it works, understanding how to fix it generally. And I'm not even talking about just software, like anything could be hardware, plumbing, electrical, stuff like that. I've done all sorts of interesting projects, of understand how they work. So I think I've always been naturally curious and a bit of a perfectionist. I want to understand how that thing works well enough that I can provide value or fix it or understand how to change it if I need to change it.
Carter Morgan (50:50)
Right.
Steve Flanders (51:11)
that kind of transferred over into my experience and career from a software development perspective. Again, early on, I think my focus was, if I can learn one thing really, really well, that would be great. I remember early days back when VMware was still kind of popular, I loved virtualization. And so like I really dove into that, tried to fully understand like how it works, why it works that way, how I can optimize for it, and kind of brought that passion into my work.
But I would also describe myself as a lifelong learner. Like, I think I realized as I went to college, as I started traveling for work, as I started like meeting other people, that there was so much that I did not know or did not understand. And again, you can't know it all, but I would rather have some exposure to it and learn and figure out like where I need to understand better or maybe another topic that I want to go deep into. And so like that lifelong learning, I think is something that has been very important to me.
I got my MBA kind of as a result of that. I wrote a book kind of as a result of that. Like, I did different things just to kind of get more of that breadth, generally. And even now, right, instead of, like, focusing deep, I think I'm focusing on more, like, strategic, wide things, but I'll still own certain aspects of that strategy and try to drive them to completion. So I'm always trying to find that balance. But I think
Personally, my growth has been from more specific to more broad. I'm kind of moving in the opposite direction as the way that I see it.
Carter Morgan (52:39)
interesting.
Yeah, I like that. And I joke about how I remember, like, graduating college, and you have, like, a skills section in your resume and you list what your proficiency with each one is. And I think I put, like, Java: expert, or, like, prodigious, or whatever, right? And now it's, like, ten years later and I've worked in Java the whole time, and I think I'd put novice, right? Like, I know, right?
Steve Flanders (53:02)
There's so much to learn and know that you just... Yup.
Carter Morgan (53:09)
Well, it's been such a pleasure having you on Steve. And yeah, we always love when we get to connect with our authors, especially after spending so much time with their books. We'd like to ask you, we ask all of our authors this, are there any books you'd recommend to our audience? That can be technical, non-technical, fiction, non-fiction, whatever you think our listeners might enjoy.
Steve Flanders (53:27)
So I just recently read The Engineering Executive's Primer by Will Larson. I actually thought it was a great book, right? Because ⁓ on the management side of things, like, as you move up in your career, what does that mean? How do you deal with different stakeholders? If you're coming into a new company, what do your first 30, 60, 90 days look like? How do you negotiate your offers? I thought it covered really great topics.
Carter Morgan (53:32)
Okay.
Steve Flanders (53:50)
I think it's a more high level book. It doesn't go as deep into the specifics. And sometimes I kind of wish that there were kind of deep dives in some of the chapters, but I got some great value out of it and I definitely recommend it to folks.
Carter Morgan (53:50)
Okay.
Okay, great. We read Staff Engineer by Will Larson, which we were very excited about. ⁓ And yeah, have you read Gergely Orosz's book, The Software Engineer's Guidebook?
Steve Flanders (54:04)
Yes.
I have it queued up. Haven't read it yet, no.
Carter Morgan (54:15)
Yeah,
yeah, that's a great one. That's one where it might make a little more sense to just, like, skip to the back third, because, I mean, it's great. I joke with that book that, like, I feel like I had to kind of painstakingly assemble all of these career lessons by, like, trying and failing, and then I picked up this book and I was like, he just wrote it all out. I should have just read this, you know? Um, so that's, that's a great book. Um, well, awesome. So great to have you on, Steve. Anything you want to plug for our listeners before we sign off?
Steve Flanders (54:30)
Yeah.
Well, I mean, hopefully I'll see everyone that's interested in OpenTelemetry in the OpenTelemetry community, right? There are a lot of great community meetings, ⁓ weekly, bi-weekly. There's a whole calendar dedicated to this. The project's very open. So if you go to any of the OpenTelemetry GitHub repositories and go to issues, there's a label called good first issue. Great for people that want to kind of get their hands dirty with OpenTelemetry, whether that's code, documentation, examples; there's a demo environment. There are so many ways you can kind of contribute.
Carter Morgan (54:47)
Yeah.
great.
Steve Flanders (55:11)
And of course, if you attend things like KubeCon: North America is next week, Europe's going to be in a couple months here, right? Another great way to kind of plug into the OTel community. Hoping to see people out there.
Carter Morgan (55:21)
That's awesome. And I love when an open source project kind of labels, like, good first issues, because I have often found it intimidating to contribute. So that's great. So listeners, get out there. Come on, don't just mooch; contribute! Well, it's so great to have you on, Steve. ⁓ And listeners, you can always contact us at contact@bookoverflow.io. You can find us on Twitter at Book Overflow Pod. I'm on Twitter at Carter Morgan, and Nathan, with his newsletter, Functionally Imperative, at functionallyimperative.com.
Steve, can't thank you enough. Such a pleasure to meet you and to get to ask you all of our questions about the book. And again, thanks so much for coming on. All right, we'll see you around folks.
Steve Flanders (55:57)
Thanks so much for having me.