The One With Stephanie Hippo and Observability
In this episode, Steph Hippo, Platform Engineering Director at Honeycomb, joins the Prodcast to discuss AI and SRE. Steph explains how observability helps us understand complex systems from their outputs and provides the foundation for SREs to respond to system problems. The episode explores how AI and observability form a self-reinforcing loop. We also discuss how AI can detect and respond to certain classes of incidents, leading to self-healing systems and allowing SREs to focus on novel and interesting problems. Steph advises small businesses adopting AI to learn from others' mistakes (post-mortems) and to commit time and budget to experimentation.
[THEME MUSIC]
NARRATOR: Welcome to season five of the Prodcast, Google's podcast about site reliability engineering and production software. This season, we are continuing our theme of friends and trends. It's all about what's coming up in the SRE space, from new technology to modernizing processes. And of course, the most important part is the friends we made along the way. So happy listening, and may all your incidents be novel.
MATT SIEGLER: Hey, everyone, and welcome back to the Prodcast. I'm Matt Siegler, and joining me today is my co-host, Florian Rathgeber. Now I'd like to welcome our guest, Steph Hippo of Honeycomb. Steph, tell our listeners a little bit about yourself and what Honeycomb does.
STEPH HIPPO: Hi. Yeah. So I'm Steph Hippo. I'm the Platform Engineering Director at Honeycomb, which is an observability tool for understanding complex systems as you run them in production. And I was at Google for 7 and 1/2 years. And I'm enjoying life on the outside, but it's been really nice to get to come back here today and talk to you.
MATT SIEGLER: How long have you been outside of Google with Honeycomb?
STEPH HIPPO: I've been at Honeycomb for a year and a half now, but I left Google in May of '23.
MATT SIEGLER: Great. Tell us a little bit more about the Honeycomb stack and what you're doing there.
STEPH HIPPO: Yeah. So for Honeycomb, like I said, I'm on the platform side. I'm responsible for our SRE teams and a lot of our enablement teams, like our front-end enablement team. And that's really about helping the rest of the developer teams deliver better and faster.
And that's where we're seeing a lot of the AI usage come into play for us: how we can make sure that we're building to the quality that we want, empowering all of those teams to reuse the same components from the design system, and working better across design and data. And so that's been a lot of fun.
We run our own database that you can actually read about on the blog, if you want, called Retriever. And that is how Honeycomb is able to do a lot of the cool data observability, math, and features that you see. And yeah, does that answer the question?
MATT SIEGLER: Yeah, it does. In fact, I think I heard you say something about observability. This is a big word. It comes up in a lot of circles. But I'd like to hear you be a little bit more specific about what that word means, what Honeycomb offers, and, in more depth, what it should mean to us, especially in production.
STEPH HIPPO: Yeah. So at the heart of it, observability is about being able to observe and understand complex systems based on what you can see from the outputs. So you can instrument the kind of data that you want to be able to track, maybe different events as they occur throughout your system, and then, as you're running things in production, being able to watch some of that live and look at, where are we seeing problems in the system? And if we're getting a lot of user errors, can we alert on that? And so it's really at the foundation of SRE being able to understand what's happening in your systems and respond to it so that you can deliver the best user experience possible.
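To make that concrete, here's a minimal sketch of the kind of event instrumentation Steph describes, using the OpenTelemetry Python SDK, which backends like Honeycomb can ingest. The service, span, and attribute names are illustrative, not from the episode:

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # In production the exporter would point at an observability backend;
    # ConsoleSpanExporter just prints spans so the sketch stays self-contained.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("checkout-service")

    def handle_checkout(user_id: str, cart_size: int) -> None:
        # One span per unit of work; rich attributes are what make the
        # event queryable later ("where are we seeing problems?").
        with tracer.start_as_current_span("handle_checkout") as span:
            span.set_attribute("user.id", user_id)
            span.set_attribute("cart.size", cart_size)
            try:
                pass  # the real checkout work would go here
            except Exception as exc:
                # Recording failures on the span is what lets you alert
                # on user errors downstream.
                span.record_exception(exc)
                span.set_status(trace.Status(trace.StatusCode.ERROR))
                raise

    handle_checkout("user-123", cart_size=3)

Once events like these are flowing, "can we alert on user errors?" becomes a query over span attributes rather than a log-grepping exercise.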
FLORIAN RATHGEBER: That's a very good definition. Thanks, Steph, for setting the stage for us. So this season of the Prodcast, we are continuing our theme from Season 4, which is Friends and Trends. And, well, obviously, you're an observability friend, but I guess we can't avoid jumping into some of the buzzwords that are floating around everywhere. So in your view, how does AI influence observability and vice versa?
STEPH HIPPO: Yeah, they're buzzy for a reason, right? So when you're seeing more talk about AIOps, or how different companies are adopting AI tooling, at the heart of all of those approaches is really rich data context. That's what makes AI valuable and helps good AI stand out from bad AI. And by good versus bad in this context, let's say helpful versus unhelpful.
And so what we're seeing a lot of with AI and observability is that they actually feed into each other. If you're using AI, that is a complex system, because you are looking at things from a probabilistic perspective rather than a deterministic one. That's going to introduce some uncertainty. There are appropriate places to use that and inappropriate places. And I think, as an industry, everyone's kind of feeling out what AI is particularly good at solving at any given time.
And at the same time, AI is getting faster and better, able to hold more context. So as you're running those AI systems, you also need more observability, or I would say more refined observability, to understand what effect they're having on your system. So I see AI and observability as kind of a self-reinforcing loop. Observability is what gives you confidence in your AI, and your AI can also help you better navigate your observability and understand your system better.
MATT SIEGLER: As a point of contrast, could you explain to us where we were not too long ago, when these AI insights were not so available? What were we doing just prior, when we were using plain old math?
STEPH HIPPO: It's all math underneath the hood, right? But the plain old math. I think, when I look back on the SRE teams that I managed at Google, the thing that we always had drilled into us was, know your system, know your system, know your system. As time went on, as it does in the history of computing, we were able to pop up the stack a level of abstraction with each new technology.
I remember my operating systems professor in college telling us about how he used to artisanally install operating systems, and it would take 30 days, and they liked it that way. That's something we've advanced quite a bit past these days. And I think we're going to see the same thing with AI.
The best example that I use is, how do you raise a junior SRE? When a new SRE first came in, we might be looking at system diagrams, getting them familiar with specific dashboards that we had to craft ourselves. Hey, here's where you find the logs, which are separate from the traces, which are separate from these metrics that we have over here.
And they start building their picture based on the team knowledge, being walked through that system with quite a bit of hand-holding. I actually think that's valuable from a team perspective, and we can talk more about that later. But when you're training a new SRE today, there's a lot more available to them through AI tools to get their bearings.
So if you were to drop me into a new system today, I would want to know, where are the problem areas? What graphs get looked at the most? Are there any troubling trends you see over time? Show me your SLOs. And those are things that we're seeing AI can help bring to the forefront.
And so if you are a junior SRE that maybe doesn't know yet the right questions to ask, AI is helping to bring those more to the front. I think it's also been fun to compare. I remember even when I was a junior SRE, I was paired with a senior SRE who was new to our team but had been at Google for a long time. It was as we were starting to see the rise of more web-app-style tools for understanding systems. And he was mind-boggled that I was reaching for the web tools first, when the muscle memory of the commands he would run to get the same information was just baked into him.
MATT SIEGLER: Right.
STEPH HIPPO: And yeah, we just laughed. We were both getting the same answer in just very different ways, and neither one of us would have thought to reach for the other tool the person was using. And so I think AI is going to be another layer of that.
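Since SLOs come up here and again later in the conversation, a quick worked example of the arithmetic behind them may help. The 99.9% target and the numbers below are purely illustrative:

    # A 99.9% availability SLO over a 30-day window.
    slo_target = 0.999
    window_minutes = 30 * 24 * 60                      # 43,200 minutes
    error_budget = window_minutes * (1 - slo_target)   # ~43.2 "bad" minutes allowed

    # Suppose 10 bad minutes are spent when half the window has elapsed.
    spent, elapsed_fraction = 10, 0.5
    burn_rate = (spent / error_budget) / elapsed_fraction
    print(f"budget: {error_budget:.1f} min, burn rate: {burn_rate:.2f}")
    # A burn rate above 1.0 means the budget is being consumed faster than
    # the window elapses; here it's about 0.46, so the service is on track.

"Show me your SLOs" is exactly the kind of orienting question a junior SRE can now put to an AI assistant before asking a teammate.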
FLORIAN RATHGEBER: Yeah, totally. So thanks for sharing your experience. If you try to put yourself into the shoes of a new, aspiring SRE these days, either from your personal experience at Honeycomb or what you imagine it to be like, because you also have a reputation for caring a lot about the management side of engineering: what change has AI brought to the experience of a junior SRE?
STEPH HIPPO: I think there is value in asking questions to your team members directly. That is how you build rapport. It's how you feel out safety. Who's going to be somebody I can ask the maybe-dumb question in front of, the one I might be embarrassed about? Who are the experts on your team in a particular area? And so I don't think it'll ever be fully AI for onboarding or things like that. That human connection is still what makes top-performing teams and high levels of psych safety.
So where I do see more junior engineers getting a lot of benefit out of AI is in asking better questions. Maybe you ask the AI a few questions first so that you can show you've done your homework when you go to ask the human. Julia Evans has a great set of blog posts that I always include in my team onboarding documentation, no matter which team I'm running. It's about asking good questions, and then making sure that you're making good use of people's time.
And as an industry, we have not always been the most welcoming that way. There's Let Me Google That For You, or RTFM. And you actually don't want to shut down all of those questions, because answering them is what builds psych safety.
But if I can ask an AI, hey, where do I learn about this part of the system? Maybe it can give me some places to start. And then it can quiz me on it, or rubber duck, so that if I do get stuck and hit the edge of what the AI can do for me, I can go connect with a human and actually focus on where I want to have the conversation.
When I first became an SRE, I had a great TL that did an Ask Me Anything session with me three times a week. And the deal was, I wasn't getting out of it. I had to show up with questions. And if I didn't, he was going to pick a topic and just talk about it.
And so I think about how I would do that these days. And I would maybe tell my mentee, go ask the AI about this kind of thing, and then come to me, and we'll fill in the gaps that you weren't able to piece together. So I think that, as a junior engineer, can be really powerful.
On the career development side, you've got job ladders. I do a ton of work as a manager, especially with junior engineers, helping them understand what their job is and what growth looks like. I'm always pulling out whatever the job ladder is and teaching engineers how to talk about their work.
I'm a big fan of keeping a work journal and writing things down week to week, not just for being able to prove things at performance review time, but because that reflection helps you get better at your own job. So I still do it, and I can take a look at my journal at the end of the week and be like, OK, I spent a ton of time on this kind of work. I think I need to set aside some deep focus time for that, or I need to ask somebody that has more experience in this area for what I should do next.
Or, hmm, I see the next level on the job ladder talks about this kind of work instead. I don't think I'm getting opportunities to do that work yet, so I don't even know if I'd like it or if I want that next level job. Maybe I can talk to my manager about what those opportunities would look like.
And I think there's a lot of room for AI there, rubber ducking and seeing, hey, is the work that I'm currently doing matching that job ladder? And so I really see AI helping to get at the heart of engineering manager fundamentals, but doing it in a way that is more structured, helping people make better use of that time when they are face to face. I don't want to see AI take the human element out of running teams or management.
FLORIAN RATHGEBER: It sounds like you have really dug into career development, role profiles, and expectations. In your view, has that significantly changed in the last couple of years in light of AI?
STEPH HIPPO: Yes, and I think it's going to continue, too. I think it's a tough time to be a junior engineer, for folks trying to break into the field right now or coming out of boot camps, because there's a lot of talk in the industry right now on, hey, is AI replacing your junior engineer?
And I think that particular question is there for the shock value of the headlines. It's not going to replace junior engineers, because senior engineers don't grow on trees. So as an industry, we do still have to keep growing our engineers. That's just something everybody needs to do. And there are a lot of benefits to bringing fresh eyes onto your team.
I love to tell the story of when I was new to SRE. I had established that relationship with that great TL. He had handed me a project. And I went to him because I was so stuck. I did not know how to handle this one edge case. And I said, Dave, sorry, I need help here. Can we talk through it on the board? And how do I handle this edge case?
And Dave just kind of stopped and put the marker down. He was like, we have to cancel this. This will break so many things. Don't do this. And I was like, oh, OK. And he was like, I hadn't thought about this edge case, but hey, that's the value of you having an outside perspective. Yeah, we can't do this.
And so, I mean, the downside there was that my project got canceled, and I was bummed, but I didn't break the bigger system. So it is going to change the kind of work that we're giving to junior engineers when they first come on. There's the traditional advice, you have the starter bugs and things like that. I don't think that will go away.
But if we're seeing AI able to take on more of those kinds of background tasks with AI agents, well, first off, you actually don't have to have the AI do all of that. There is still some value in just leaving that work for more junior engineers. It's typically not business-critical work. It's context-building, rapport-building work, and there is value in that for an engineering team as well.
So you don't have to give everything to AI. But it can also change the type of work that's available. And I think you'll see more ride-alongs for junior engineers. So I want to see junior engineers pairing with an AI and a senior engineer. But I want you, as a junior engineer, thinking about, how am I seeing the senior engineer use AI? How is the senior engineer asking questions, and why are they reaching for those questions first?
One of the things that I really loved about my time at Google: when I first got to SRE, all the incident command was still done on IRC. And as a junior SRE, I could just follow along with outages that my team wasn't responsible for. I stayed out of the way, but I could listen and watch. No one minded that I was there.
And I was looking at, hey, what is this senior engineer looking at first? Oh, I would not have thought to check that. How do I learn how to go check that? I watched what they were doing and how they were talking, and I just thought that was so valuable. And so that's what I'm hoping we see with AI agents. One of the things that we talk about at Honeycomb a lot is that incident management in particular is a very social activity.
So when I, again, was an individual contributor doing incident management, you might have the incident doc going, you've got your incident comms going out, and you're talking back and forth, maybe with other teams that are affected, trying to understand what could be going on. I think you're going to see a lot of that move into shared AI agents now.
So if I can ask the AI, like, hey, what are we seeing here? It's really helpful if we can all just have the same view. Honeycomb is fully distributed, and so we don't have the shoulder-surfing advantage that you might get with in-office work. But the benefit of that is we're forced to build tools that have good collaboration.
And so, instead of just pasting around the same dashboard link for everybody to open up, can we all just be looking at the same AI agent that's saying, hey, look at this, look at this, look at this? And talking back and forth about what we're seeing, whether we think something is relevant to what's happening or might be a red herring. So, again, a lot of the social part of engineering, I think, is going to stay for a while. It's how we interact and how we talk about it that's going to start changing shape.
MATT SIEGLER: That was a really excellent walkthrough, by the way. I very much appreciate how you've taken us through the ecosystem of your technical staff, the bigger picture of the incoming talent pool, the maturity of your existing talent pool, and the fact that we're in a very disruptive time, with a very high rate of change in the influence of AI on both the engineering stack you're using and the skills of the people operating it and responding to it. So this is a strange world we're in right now, where both the toolkits and the people using the tools are influenced by this.
So that's pretty fascinating, and it sounds like you're handling it in a fairly progressive and inclusive way. I'd like you to talk a little bit about the business realities you're in, and whether you've been met with resistance or receptiveness. When you tell clients, hey, we're using some of these tools, it's great stuff, are they receptive? Or do they say, whoa, we definitely don't want you to do it that way? What are some of the opposing perspectives on this?
STEPH HIPPO: For sure. I actually think one thing that's kind of unique about my viewpoint right now is that I'm just coming back from four months of maternity leave. And so I did not do a ton of AI with my baby while on leave. And it turns out, four months is a long time for AI to make some jumps and advancements.
And so I was getting updates from the team. I saw some launches going out, but I didn't really get to start playing with what we had shipped until I got back a couple of weeks ago. And it was so cool to see the change. Back in May, before I went out, I had a lot of engineers that were kind of skeptical. They were like, oh, this still isn't useful to me. It's actually more work to try it out than the benefit I'm getting. I'm not seeing the ROI.
Jump to now, and some of the folks that were very skeptical have moved to either bargaining or acceptance, saying, OK, wow, I get this now. I'm seeing the value. And we are seeing that with some of our customers too. We recently hosted our O11y Day in San Francisco. There are some great videos online about that.
And our AI team did a LinkedIn takeover, if you want to check out some of the things that they were talking about. There are some great examples there where, I think, you can see-- and this isn't necessarily unique to Honeycomb-- engineers starting to hit a tipping point of, OK, I'm finding more places to do this. I think engineers really crave nuance in these conversations.
A lot of the places where I was seeing engineers get frustrated in some of these initial waves of AI were where they would see one thing promised in marketing, and then they didn't see that being useful for their set of problems yet. And part of it was like, OK, you're not actually the target audience for that marketing.
But for your AI tools and your engineering tools, here are the things you have to try. Getting more value out of them is a skill set. And I think we're seeing customers figure that out too. And so everyone's starting to have their light bulb moments.
As I mentioned earlier, with observability and AI feeding into each other, having that rich context about your own systems just lets you unlock so, so much. I'm not looking to go back to being an individual contributor, but man, does it seem like it would be a lot of fun to get to experience that sort of magic of understanding and just asking AI, hey, tell me about user adoption of these tools, or these features that we shipped.
And I can pull up and see how our rollout has been going, see how it's holding up in our error messaging or our performance and our SLOs. How do those correlate? And so for queries that I used to have to build by hand, AI is just like, cool, I got this, and is this what you want? And I can say--
MATT SIEGLER: Definitely.
STEPH HIPPO: --close, or close enough. And so that is making a big difference. And, yeah, I think we'll see more and more of that kind of loop. If I have good observability, I have a good understanding of my AI systems and how to improve them. And then, as I continue to launch more features, AI or not, if I'm doing the work of instrumenting that code, it's going to feed back into my observability.
And that's the kind of feedback loop that we're always looking for in engineering, to make things stronger. So I think that's what we're going to see, and that's what customers are already starting to see now, with the AI tools and observability tools from Honeycomb and elsewhere.
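For a sense of what building those queries by hand looks like, here's one sketched as a Python dict. The field names roughly follow the shape of Honeycomb's Query API, but treat them as approximate; this is an illustration, not a reference:

    # "Error counts and P99 latency for the new rollout, broken down by
    # route, over the last two hours" -- the kind of query you once
    # assembled field by field and can now describe to an AI in a sentence.
    query = {
        "time_range": 7200,  # seconds
        "calculations": [
            {"op": "COUNT"},
            {"op": "P99", "column": "duration_ms"},
        ],
        "filters": [
            {"column": "app.feature_flag", "op": "=", "value": "new-rollout"},
            {"column": "error", "op": "exists"},
        ],
        "breakdowns": ["http.route"],
    }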
FLORIAN RATHGEBER: Yeah. So you mentioned how much things evolved during the time you were out on parental leave. So internally at Honeycomb, or in your experience, how do you stay on top of that super rapidly evolving field and all the complexity that it brings?
STEPH HIPPO: Yeah.
FLORIAN RATHGEBER: And stay up to date and, yeah, basically managing all that.
STEPH HIPPO: So again, I think it's just such a big part of engineering that's still social. So we want folks to be taking that time to experiment and try new things, and then just be really honest on what they're finding valuable and not. You don't have to come in and pretend that AI is bringing all this value and solving all your problems if it's really frustrating. Let's have a nuanced conversation about where it could be better, and then how do we apply that to our product?
Honeycomb also dogfoods a ton. We use Honeycomb at Honeycomb to understand Honeycomb. And again, having that feedback loop is just super helpful. I ran a book club earlier this year on ethics and computing, and we talked a lot about AI there. Where do we feel the ethical line is for problems in our space? And how are those going to continue to change and evolve? And what are things that we want to keep an eye out for?
And so we love playing around with new tools. And especially as a platform org, I want to know what's going to give us the biggest bang for our buck. Do we want to try this new tool? Do we want to hire here to be able to explore more on building some of those features into our software development lifecycle? Where does it make sense?
And so one thing that's always been true, but that AI, I think, is still going to change how we do it: you need to leave room for that innovation and experimentation budget. There's always going to be a deadline coming, but you have to set aside that time for learning and exploring. And if you don't, you'll actually fall behind, and you'll spend so much time polishing something that's quickly falling out of date, because the rest of the industry keeps moving. So that's how we approach it. And try some things out. If you hate it, tell us. But be an informed hater and help make it better.
FLORIAN RATHGEBER: If only we had more of the informed haters rather than the destructive kind.
STEPH HIPPO: Yes.
MATT SIEGLER: Speaking of being informed, some of our listeners are at really small businesses that do want to adopt these tools. They find themselves struggling to make sense of the chaos around them. They want to innovate on their work. How do you suggest they take an approach that makes sense for their business, one that isn't just throwing things at the wall, and incorporate these tools into their practices in an intelligent way?
Clearly, you're doing it in a way that's really working for your industry practices. What do you say to someone who wants to get started in a sensible, safe way? Give them some advice.
STEPH HIPPO: Again, I'm an SRE at heart, and I really feel like, when I stumbled into SRE, I found my people. The cheapest way to learn is from other people's mistakes. In SRE, we call that post-mortem culture. And so there's being able to hear other people's success stories, but there's also so much value in, hey, this is what worked, this is what didn't, and here's what we would have done differently.
And you always have to filter whether the advice is right for who you are, where you are, and your stage of company. But go listen to what other people have done. Read what they've done. If they're publishing post-mortems or retros, whatever it is, ask the AI about them. You can see so much of what people wish they had thought of, or, oh, I would not have thought to approach that problem that way. Much like when I was learning to be a junior SRE.
And so one of the things that has been at the forefront of my mind lately: in college, I actually worked at a startup called Explorys that was purchased by IBM to become part of IBM Watson Health. My mentor there just published a book on this, The Rise and Fall of IBM Watson Health. That is a post-mortem, and it is fun.
OK, I'm a little biased, because I got to see it. I had front-row seats. But a lot of that book talks about where AI back then was overpromising and under-delivering. Where did the tech fall short? What could we have done better? That's all relevant now. And Explorys was sold to IBM in 2015, I think. So those are the kinds of things. Go find those.
And they don't all have to be books. There are articles, there are podcasts. But go listen and try things out. Set a budget for yourself, if you're worried about how much money or time you might sink into it. There's so much value in committing to a minimum. So often, when I'm doing career coaching or management, I'll say, OK, you said you wanted to get better at this thing. We've been talking about it for weeks. I haven't seen progress. What's going on?
They're like, oh, well, something else always comes up. OK, well, carve the time out. We're going to commit to it. And then figure out what you need to be able to stick to that commitment. I personally am somebody who will not go work out unless there's a social component to it. I need my teammates on my soccer team to expect me to be there and need me to be there, or I will find reasons not to go. The same thing applies to anything that you're learning or experimenting with.
So do a book club, or get a buddy, where it's like, OK, we're both going to sit on this video call for an hour every Friday and try this out and see how far we get. That's the kind of discipline you're still going to need to adopt these tools. There's nothing special about AI there. That is just how you build skills.
And so don't be afraid to set that minimum limit. Like, OK, I'm going to give myself $100 to go learn more about this, and commit this amount of time every week to doing that. And I'm using 100 just as a round number; do what works for you. That's the kind of thing that will actually get your team moving and experimenting, instead of just saying, man, wouldn't it be cool if we spent some time learning that? Because you don't get the value out of just wishing you did.
FLORIAN RATHGEBER: Yeah, there's so much actionable advice that you've given us. I just want to pick up on something you said earlier, that you're still an SRE at heart. You touched on a bunch of SRE traits and what you were, quote unquote, "brought up with." But now you're the director of platform engineering. So which SRE traits or characteristics have carried over into that role?
STEPH HIPPO: Yeah. SRE is all about systems thinking, and it's about curiosity. And there's so much that translates both into management and into platform engineering. Engineers love making systems work better, whether that's people systems or technology systems. They don't want to be woken up by a pager. They want to be moving toward something better, something that needs less of their time, so they can spend more time on the new, interesting problems that are popping up.
And so that's how you design a platform. Hey, this is a pain in the butt. What could we do to make this better? And then we know the tools and processes that help make things better. So getting customer feedback, that is a post-mortem. Hey, we got feedback. The user tried to do these three things, and it didn't work.
Well, we actually can do those things, but they could not find the path in our tool to do them. So is the user holding it wrong, or could we be doing a better job on our UX? That's the kind of mindset you have to bring to platform engineering, and it was always at the heart of SRE. Do something, fail, learn, repeat.
And that cycle is actually what leads to success. No one-- I shouldn't say no one remembers little failures, because I do. I'm talking about them. They help me learn. But what gets the headlines, right, is the success at the end from all of those iterative cycles.
FLORIAN RATHGEBER: Yeah, totally agree. Incidents are unplanned investments, as they say, right?
STEPH HIPPO: Yes, exactly.
MATT SIEGLER: Well, we're just about out of time. But before we wrap up, I'd love to ask you one more question, and I'd like you to stretch and think as big as you can. What's something coming around the bend that you anticipate or are really excited about, something big 5, 10, 15 years from now that could impact your work, or all of our work, that you'd want to share? Something you hope for the future? Anything?
STEPH HIPPO: Oh, yeah. I'm going to think big. I would love to get to a point where there are types of incidents that AI can detect and respond to itself, those self-healing or self-annealing systems. I think we're still a ways off, but I think you will start to see certain classes of incidents filter out first.
So, hey, if it's very easy to see that we did a rollout and something shot up, OK, roll that back. Where it will be harder for the AI to catch up is on some of those slow-burn incidents, or maybe you have a kind of time-bomb bug in the code, where you don't actually see problems until you hit a certain point of scale. It would be cool to see AI able to solve that on its own.
I think you'll probably see proactive recommendations from AI first. So it'll say, hey, we saw this rollout. Hmm. You should check these things first, human. Is this what you intended? Maybe not. But I think, over time, we'll get there, and more classes of incidents will move up.
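A naive version of the "rollout regression, roll it back" class Steph expects to be automated first might look like the sketch below. The thresholds and the recommend-first behavior are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class RolloutCheck:
        baseline_error_rate: float    # error rate observed before the deploy
        rollback_factor: float = 2.0  # "something shot up": 2x baseline

        def evaluate(self, current_error_rate: float) -> str:
            # The easy class of incident: a clear post-deploy spike.
            if current_error_rate > self.baseline_error_rate * self.rollback_factor:
                return "recommend rollback"
            # Slow burns and scale-triggered time bombs won't trip a check
            # like this, which is why they'll be automated last.
            return "proceed"

    check = RolloutCheck(baseline_error_rate=0.01)
    print(check.evaluate(current_error_rate=0.05))  # -> "recommend rollback"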
My team at Explorys, back in the day, was on quality automation and continuous delivery. And we were trying to tighten that feedback loop there too. And we always used to say, well, what are we going to do when we're done, when we've automated all of it and it's good? And the joke was, we're going to go bowling.
[LAUGHTER]
And so that's what we would say to each other, like, all right, let's do this, and then we can go bowling. But there's always something next. But I don't know. I think the AI, 10 years from now, I think we'll all be bowling.
MATT SIEGLER: Yeah. I love this. A future of all bowling.
FLORIAN RATHGEBER: Or at the very least, we're only left with the novel and interesting incidents, right?
STEPH HIPPO: Yes.
FLORIAN RATHGEBER: The rest, the AI takes care of for us.
MATT SIEGLER: Fair.
STEPH HIPPO: Yeah. More time for bowling. Maybe I'll say that.
MATT SIEGLER: More time for bowling.
FLORIAN RATHGEBER: 100.
MATT SIEGLER: Well, thank you very much, Steph. Florian, thank you for co-hosting. This has been the Prodcast. And farewell.
FLORIAN RATHGEBER: Thanks.
NARRATOR: You've been listening to the Prodcast, Google's podcast on site reliability engineering. Visit us on the web at sre.google, where you can find books, papers, workshops, videos, and more about SRE. This season is brought to you by our hosts, Jordan Greenberg, Steve McGhee, Florian Rathgeber, and Matt Siegler, with contributions from many SREs behind the scenes. The Prodcast is produced by Paul Guglielmino and Salim Virji. The podcast theme is "Telebot" by Javi Beltran and Jordan Greenberg.
[THEME MUSIC]