Sean Sebring and Chrystal Taylor chat with Josh Stageberg, SolarWinds GVP of Product Management, about the future of Agentic AI.
Chrystal Taylor:
In case you missed it, on April 15th, SolarWinds introduced SW1, a context-aware digital teammate designed to support your path to autonomous operational resilience. Learn more by watching SolarWinds Day on demand, or join us live in person at our 2026 World Tour. Visiting 30-plus cities worldwide, the tour brings together IT professionals, customers, and partners for live demos, technical deep dives, and conversations on AI, observability, and more. Experience it live: visit solarwinds.com/events.
Sean Sebring:
Howdy, everyone. I am your host, Sean Sebring, joined by fellow host, Chrystal Taylor. Today we’re talking about AI, but not just AI, agentic AI. The move from simple scripts to systems that understand intent, take action, and change how modern IT gets work done. We are joined by Josh Stageberg, GVP of product management here at SolarWinds, to explore what it means for ITSM, operations, and the people behind the tools. So before we get started, I wanted to give Josh a chance to introduce himself. So Josh, over to you.
Josh Stageberg:
Sure. I’m one of the product management leads here at SolarWinds. I’ve been in software, in B2B product management, for certainly as long as I can remember anything with any level of detail, at big companies and small companies, serving mostly IT professionals through that time. And I started my career as an IT pro myself, so that’s sort of me.
Sean Sebring:
Well, it’s lovely to have you here. I was talking to someone just earlier today who didn’t know I did a podcast with SolarWinds, and I said, “Hey, we’re doing an episode.” “Oh, what’s it about?” “Well, of course, AI again.” But today, I won’t say it’s niche, but it’s a more specific, narrowed approach to discovering and learning more about AI. And that’s the agentic side of things, on which I just completed a long series of foundational courses myself. So I’m excited to have a bit more knowledge to hopefully keep up in our conversation today, Josh. So I suppose, since I mentioned foundations just a second ago: what’s a good foundational way to describe agentic AI versus AI? What would you tell the audience?
Josh Stageberg:
All right. So AI is kind of a big term, right? And I think, more recently in the public consciousness, when we use the term AI, we’re thinking about the part of AI that is based on large language models. That’s not to discount all the other things that fall under AI, but I think that’s where the popular focus is today. When we talk about that, we talk about the zeitgeist that started with ChatGPT and the, I guess, somewhat household names, at least in somewhat tech-savvy homes: Anthropic or Mistral or Meta or Google or OpenAI. There’s many more. I should probably have a little more copy before I try to name them all, but those are all folks that train and produce large language models.
The majority of us have had interactions with these large language model-based applications in the context of chatbots, whether you’re trying to order fast food or get the fast-food bot to help you debug a Python script, whether you’re on ChatGPT or one of the similar things, Gemini, et cetera, in apps on your phone or on the website. And the reason I mention all that is that it’s an angle of approach to this, but it’s not really what we mean when we say agent or agentic AI. When we talk about agentic AI, we’re still talking about things that are based in that LLM technology, the large language model technology, but we’re talking about that technology and the apps around it not just answering questions, but actually taking action.
So that’s sort of the term agent, but I think we’re going further. Taking action: you’ve probably seen or heard of demos where the agent can talk on the phone with someone and make a reservation at a restaurant, or an agent could write some code for you, or an agent might help you restart a server. Agents do things; chatbots sort of chat. Now, when we talk about agentic AI, what all the buzz is about now is not just doing things, but doing things in a, call it, self-directed manner. The best way that I try to explain this to folks these days is: if you’ve ever had children or taken care of children, even if they’re not your children, at various ages there are different levels of autonomy that children have. And the example that I like to give is the going-to-school-in-the-morning example, because I think that’s common for most kids, or probably most kids that you might’ve dealt with if you’re listening to this podcast.
So at a certain age, right, you can tell your kid to put on their shoes and they just sort of look at you like you’re an alien. “What do you mean? This is something that other people do, and I sit, or perhaps I run away, but there’s no sort of putting on your own shoes.” And then at some point you get to the point where, and maybe they’re Velcro, but “put on your shoes” results in the child putting on their shoes. That’s the basic type of agentic behavior: “Hey, do an action,” and then you get an action. Now, the type of agentic behavior that we’re talking about when we say agentic AI in 2026 is a little bit more complex than that. So stepwise, so that I don’t lose the audience: we go from “put on your shoes” to “put on your clothes, and then put on your shoes, and then maybe put your lunch in your backpack.”
And so we have explicit commands, and the child is able to do more with those explicit commands. Eventually we get to a point where we say, “You need to get ready for school.” Right?
Sean Sebring:
And it knows what all those things between where they are-
Josh Stageberg:
And that [inaudible 00:08:47] to themselves, right? Okay, that means I need to put on my clothes, I need to put on my shoes, I need to put my lunch in my bag. If I can’t find my lunch, I need to either, depending on how old the kid is, probably make the lunch or go ask someone where the lunch might be. “Do I have my water bottle? Do I have my homework in my bag? Oh, I can’t find my homework folder. Now I’ve got to go look for it, find it, grab it, make sure the homework’s in it, then put it in my backpack.” There are a lot of potential, let’s call them paths, that you kind of have to follow and then come back to the main directive, and then interrogate that main directive to see if there are other things that you have to do.
And finally, at some point you go, “I am now ready for school.” I could get into what that looks like from a psychology standpoint, but we broadly call that executive function in humans. What we call that in agents is often just agentic AI. And what we mean by that is the ability to orchestrate and reason around, and I’m using reason kind of lightly, but reason around what actions might need to be taken, and then to take those actions and come to that conclusion, so that you can say to a sufficiently advanced agentic AI system, “I want to go out to dinner on Friday night,” instead of having to say, “Find a restaurant that is reasonably within a distance that I might travel; make a reservation at that restaurant.”
And “night” does not mean make a reservation at 11:30 at night, because the system reasons that this person does not usually make restaurant reservations at 11:30 at night. Well, I don’t, but maybe others do. There’s a lot of checking, understanding, distributing, and finding that sort of goes into it. And that’s kind of what we refer to as agentic AI here in 2026: that level of ability that a very smart child might show if given a task. So that’s the general tapestry, probably, at least at the 100,000-foot level.
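For listeners who think in code, here is a minimal sketch of the executive-function loop Josh describes: take a goal, ask the model to reason out the next action, perform it, and keep interrogating the main directive until it is satisfied. The llm() helper and the toy tool table are hypothetical stand-ins, not a real product API.

```python
def llm(prompt: str) -> str:
    """Stand-in for a call to a large language model (hypothetical)."""
    raise NotImplementedError("wire this to your LLM provider of choice")

# Toy tool table: the "hands" the agent can use to act on the world.
TOOLS = {
    "find_restaurant": lambda args: f"found a bistro near {args}",
    "make_reservation": lambda args: f"reserved {args}",
}

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Decompose a goal into actions, perform them, re-check the goal."""
    history: list[str] = []
    for _ in range(max_steps):
        # Ask the model to reason about the next step given progress so far.
        decision = llm(
            f"Goal: {goal}\nDone so far: {history}\n"
            "Reply DONE, or '<tool>: <args>' to act."
        )
        if decision.strip() == "DONE":  # the directive is satisfied; stop
            break
        tool, _, args = decision.partition(":")
        result = TOOLS[tool.strip()](args.strip())  # take the action
        history.append(f"{decision} -> {result}")   # feed the result back in
    return history
```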
Chrystal Taylor:
So my understanding of agentic AI is more of an environment of agents that can interact with each other. Is it fair to say that while one agent is the child getting ready for school, there’s another agent that is making your dinner reservation, and then there could be another agent that’s cleaning? They have their own tasks, and then they can interact with each other as needed?
Josh Stageberg:
This is a layered and complex problem to try to address, but I would say there are two main frames that are useful to think about as we sit here in the first half of 2026. One is that there’s a bit of necessity around having different agents doing different things, driven by the state of the technology at this point. A generalized agent may not be able to perform all those functions as well or as cheaply, though it may still be able to perform them. So that’s one way to think about it. If we believe, and I’m not saying whether you should or should not believe, but if we believe that we will eventually achieve a state of AI that is a more generalized intelligence, then you would expect that eventually a single agent could do a lot more. And I would say that a single agent can probably do a lot more in 2026 than it could in 2025, or even now versus a couple months ago.
So it’s important to think about that. And then it’s also important to think about what an agent is in that context, right? I have a somewhat reductive way to think about it that I think is easier to hold in your head than the technical reality of what might happen in January or March or May or August, and it’s that the brain of the agent is effectively the LLM. So if you create different apps that you call agents and they’re all using the same brain, it gets a little inception-y to try to think about: is this a different agent? Is this the same agent? I have a background in modern philosophy, so this is one of the things I think about. But putting that aside, I think it’s going to be very interesting over the next year or two, or perhaps even three, to watch the definition of agent start to evolve.
And certainly for us in the technology landscape, it’ll be interesting to see how marketing shapes those definitions, because we really are early on in this technology journey, and so we’re kind of in the process of agreeing on the definitions of certain things. In my definitions in response to the first question on this podcast, I tried to keep to things that feel pretty solid, solidified or solidifying. But this one, I think, is going to change a lot over the next six months, one year, two years, and our definitions of agents will have to change with it. And part of that is, again, just my background in philosophy speaking.
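One way to picture the “same brain, different agents” puzzle Josh raises is two agent apps that differ only in their instructions and tools but route every call through one shared model. Below is an illustrative sketch; the shared_llm() stand-in and the AgentApp class are invented for this example, not a real framework.

```python
from dataclasses import dataclass, field
from typing import Callable

def shared_llm(prompt: str) -> str:
    """One model 'brain' shared by every agent app (hypothetical stand-in)."""
    raise NotImplementedError

@dataclass
class AgentApp:
    """An 'agent' as an app: instructions plus tools around a shared brain."""
    name: str
    system_prompt: str  # what makes this agent distinct from its siblings
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def ask(self, task: str) -> str:
        # Different framing and capabilities, same underlying model.
        return shared_llm(f"{self.system_prompt}\nTask: {task}")

# Two "agents," one brain: is this one agent or two?
school_agent = AgentApp("school", "You get a child ready for school.")
dinner_agent = AgentApp(
    "dinner",
    "You book restaurant reservations.",
    tools={"reserve": lambda details: f"booked {details}"},
)
```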
Sean Sebring:
It sounds like an agent might even be a stepping stone towards what feels more like a being, not to get too psychological with it. But the two big takeaways from the way you presented agent, and correct me if I’m wrong, add to this, because again, these definitions are all still being agreed upon: the major difference is that with the general consensus of AI, I’m engaging with a chatbot; I ask it some questions and it gives me a direct response. With an agent, you have more autonomy and reasoning. Autonomy being, “I’ll take some actions that you didn’t explicitly ask me for, because you asked me for an outcome, so I’m going to ask myself to do some things in order to get you the best outcome.” And the reasoning is how it figures out what steps to orchestrate, which is another good word you used in there.
So it’s reasoning about what things it should orchestrate in order to deliver the best outcome by itself. And then Chrystal asked about agents being able to work together, which is this exciting concept of, “Well, I know how to do this thing really well, and you know how to do that thing really well.” That’s why I was saying at the start of this bit that a being is a lot of different minds working together. We all have different pockets of our minds that reason things differently. And so I think of those agents as little pockets of my mind: when I approach this problem, I reason things like this; with that problem, I reason things a different way, and different functions take over in myself. So maybe a being is the next step past an agent.
Josh Stageberg:
I think I’ll probably stay away from that bit a little bit on my side.
Sean Sebring:
Fair enough. Fair enough.
Josh Stageberg:
But what I will say is I think the piece that is most interesting here is, do we start to align the idea of an agent with the LLM that’s behind the agent, or do we start to align the definition of an agent more with an app? And remember, all of this is applications.
So just to demystify a little bit: I think that we end up thinking about AI and software as separate things, but AI is software. They’re concentric circles; part of software is AI. Not all of software is AI, but all AI is software. And in that context, the manifestations of AI are applications. So do we think of agents more as apps or services, or whatever granularity you want, or do we think of agents more as the LLMs behind them, the brain of the agent? That’s where I get hung up, and maybe some technologists wouldn’t get hung up, but that’s where I get hung up when I think of the philosophy of self, like what makes me aware. And not to say that agents are aware, they’re definitively not, but if some future self-aware application is watching this at some point, there’s no…
Sean Sebring:
Got to be cautious.
Josh Stageberg:
Yeah.
Sean Sebring:
And it’s funny you bring that up, because I think one of the most aware parts of technology in my mind is marketing. There’s AI to it as well, but it’s mostly just data farming, I guess, because marketing is listening all the time. You have to be really super intentional to suppress cookies. And is my phone listening to me even though I told it I don’t want it to? And then all of a sudden, my wife and I have a conversation, and the next day I get onto some form of media and there’s such a specific thing that shows up. And I’m like, how could this have come across me if it wasn’t aware? And again, this is hundreds of different things listening all at once and putting things together, but yeah, awareness is a creepy bit.
Chrystal Taylor:
But that’s maybe not awareness, and more a task that someone set for it.
Sean Sebring:
Well, right, but that’s what I’m-
Chrystal Taylor:
That’s how I think about that.
Sean Sebring:
Totally fair, totally fair. But I suppose if things were allowed more access to listen and touch that kind of data, then more awareness could be there, at least how I interpret it.
Chrystal Taylor:
Sean, you’re going to freak me out.
Sean Sebring:
Oh, sorry.
Chrystal Taylor:
You’re going to freak me out.
Josh Stageberg:
I think it’s important for me personally, as a networking person, to say you have to pay attention, but I don’t want to scare people away from home-based devices where you can address them and then ask for things. I think that’s actually a common sort of bugaboo in this context. Find a network person, find someone that runs something like SolarWinds on their own personal network, plug that device into their network, and tell them to show you the traffic that’s going back and forth from that device when you’re not talking to it. I think you’ll find that it’s not quite as exciting in 2026 as it may have once been. So I don’t want to scare people away from those things, but I do want to talk about, Sean, the concept of ambient AI.
Again, remember I said there’s a lot of AI that’s not directly based in chatbots based on LLMs, or agents based on LLMs. This is still LLM-adjacent, and it certainly becomes a lot better with LLMs due to the ability to predict language and understand words, accents, those types of things better and faster. But one thing that folks are starting to see more often in doctors’ offices is ambient AI. You may have had a doctor or someone in that profession tell you, “Hey, do you mind if we record this session so that I can pay more attention to you?” It’s an interesting thing in terms of patient outcomes. When we talk to medical professionals, if you use an ambient AI, you can get all the notes without sitting there staring at the computer and typing while listening to the patient. And I’m assuming that the three of us aren’t doctors. I’m not a doctor.
Chrystal Taylor:
I’m not a doctor.
Sean Sebring:
No comment.
Josh Stageberg:
I actually appreciate when the person who I’m going to, whether it’s just my annual wellness visit or whatever, is looking at me and talking to me and not feverishly typing notes about what I’m saying on the computer. And the way that they accomplish that, in the case of my doctor, is they use what’s called ambient AI. What ambient AI is doing is exactly what Sean was talking about: listening, transcribing, and then working on all the things that we’re talking about. And I think that is going to become more ubiquitous as bandwidth improves and as folks’ privacy concerns continue to be eroded.
I think that will become more and more a part of what we expect out of things, which is separate and distinct, and this is a different topic I won’t go into, but it’s separate and distinct from what we’ve seen from the ability of Meta or Google to target ads. This is going to date this podcast pretty specifically, but we saw Facebook or Meta’s earnings last week, and they had an extremely positive result based on using these new LLM-based technologies to better target ads. But I think a generation ago in this ad stuff, and when I say a generation ago, I mean a year ago, right?
Sean Sebring:
That’s how fast AI is growing. Yeah.
Josh Stageberg:
Significantly less.
Sean Sebring:
Last year was a whole different generation.
Josh Stageberg:
Yeah. Well, even though we feel like it was extremely sophisticated, it was significantly less sophisticated a year ago without these LLM technologies. So it’s kind of an interesting… Anyway, that’s a different topic and I’ll try not to go down too many rabbit holes here.
Sean Sebring:
It’s so easy to rabbit hole.
Chrystal Taylor:
It’s interesting stuff. I mean, we talk about AI a lot, and obviously AI is permeating every layer of reality at the moment. We’re using it at work, we’re using it at home, it’s in restaurants, it’s in your doctor’s office. It’s literally just everywhere, so it’s hard not to talk about it. But learning about the concepts and the technology behind it is very interesting. We’ve all now mentioned some of the fears that people have around it, and it’s not that those fears aren’t valid, because they totally are; as I just mentioned, I have my own fears around all of this. But learning about this stuff does help keep those fears reasonable. Learning what these systems are actually doing is more helpful than the mysterious unknown, which is kind of, I think, how the general consumer thinks about AI.
“I don’t know what it’s doing. It’s magic.” And for me, the magic of it is exciting, but also terrifying in its own way. Not to say that I understand it technically at all, but understanding the concepts at least personally makes me feel better about what we’re trying to do with it. And working at a company that is using AI both for our roles and responsibilities and also building it into our products helps me to know we’re building things with purpose. There is a reason for it, how we’re building them has a reason, and how things are going to interact all has purpose. And I think that that is a more helpful context.
Sorry, that’s a more helpful context for me as a consumer of AI as well. Maybe I won’t always agree with the purpose that it’s being used for, but in those cases, I can go turn it off and not use it. But for me, it’s knowing that there is purpose behind it and that things are interacting in ways that are not… At least with what we’re doing, I know nothing is designed to go off on its own. We’re talking about a level of autonomy, sure. It can make decisions. But I know, based on our previous conversations here on TechPod with Doug Bennett and Matai Wilson about how LLMs function and how they “think,” because it’s not really human-brain thinking, that it’s all probabilities. They’re trying to find the most probable answer. It’s trying to complete the sentence that you’re giving it by giving you the most likely answer, which I think is also interesting context.
Josh Stageberg:
Yeah. I mean, we’re preconditioned to look for the most likely answer in a way, insofar as we’re all used to Google search. But yeah, I think the two things that are super important to think about in that are: one, it is a probability-guessing algorithm around what the next word is going to be; and two, the way that it’s trained is based on the internet, which is also interesting.
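To make the “probability guessing” point concrete, here is a toy example: given a context, pick the highest-probability next word from a hand-written table. A real LLM makes the same kind of choice over a distribution learned from internet text across tens of thousands of tokens; the table here is obviously invented.

```python
# Hand-written toy distribution; a real model learns this from training
# data over a vocabulary of tens of thousands of tokens.
NEXT_WORD_PROBS = {
    "the cat sat on the": {"mat": 0.62, "sofa": 0.21, "roof": 0.17},
}

def most_likely_next(context: str) -> str:
    """Greedy decoding: return the highest-probability next word."""
    dist = NEXT_WORD_PROBS[context]
    return max(dist, key=dist.get)

print(most_likely_next("the cat sat on the"))  # -> mat
```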
Sean Sebring:
Yeah. Wild place that is.
Josh Stageberg:
Well, as you both know, anything we read on the internet is clearly true. And thankfully the internet is there as that source of truth and that constant companion to show us the way.
Sean Sebring:
SolarWinds disclaimer: this was a joke.
Josh Stageberg:
Yeah. I think it is really interesting to think about. When we try to divorce the technology from real-world application, and I think that’s what you were both trying to get at, Sean and Chrystal, when we talk about the technology as if it doesn’t exist as part of the real world, we get into what probably sounds to many folks like technobabble on Star Trek. Ultimately, the impact of this technology is how it exists in the context of the world itself, the world around us, in our daily lives. And so in that way, it is important to think about: what are these models trained on? How are they trained to respond to us? We need to be conscious in all of our dealings with computers. I mean, all of us have been in the computer space for quite some time.
And we’re conscious, hopefully very conscious, about whether we click on a link in an email, or whether the sender looks legit. If we see an advertisement on a webpage, even if it looks like something that we might want to buy, does the advertisement look legit? Do I want to click on that, or do I just want to go to a website? Do I want to go to a search engine, find the actual brand, and then go through the brand webpage? There are a lot of things that we sort of take for granted as to how skeptical we are about what we see on the internet.
We know that if we go to a Reddit thread, it’s just another person, and we may want to cross-reference something if they say, “Hey, the best way to cure a hangover is to eat a bunch of wood pulp.” You go, “Maybe I should get a second opinion.” Or maybe I find that out after talking to my doctor. Disclaimer: this is not a podcast about curing hangovers. So with LLMs, we also have to be skeptical. We have to be skeptical, one, because we don’t know if we are unintentionally misleading the LLM. You’ve probably had a conversation with a friend or a family member at some point where they said something confidently and you kind of just assumed that was it. Or maybe you watched a movie, and in the movie they said this is the origin of this phrase.
And you go, and it’s not. We have to have a healthy skepticism when we interact with these things. And not just because of my new favorite word, confabulation. But we haven’t discussed confabulation yet.
Sean Sebring:
Well, no better time than the present. Confabu… What?
Chrystal Taylor:
It’s a fun word.
Josh Stageberg:
If you get nothing out of this podcast except for the word confabulation and you hear it enough that you can use it three times in the next 24 hours, just work it into your conversation with folks and you’ll be able to tell if they’re paying attention because if you say that word and they just… Then you know-
Sean Sebring:
You’ll know.
Josh Stageberg:
They’re thinking about what they’re doing this weekend or something, but they’re not thinking about what you just said. So confabulation is… We often use the word hallucination. And confabulation is the actual official term for this.
And when I say official, I mean the NIST US government term for what we think of as hallucination. And so it’s not actually more complex than that, but it is a fun vocabulary word. It’s something to impress your friends with at cocktail parties: “Oh, I think you’re talking about confabulation.”
Sean Sebring:
That was confabulous.
Josh Stageberg:
Confabulating, I don’t think is a word. So let’s not.
Sean Sebring:
To kind of hone back in on the agentic side of things: from how the technology operates, not just its purpose, what’s a key differentiator in your mind that makes something an agent? Is it a set of different rules? Is it more access to things outside of a specific data set, like its ability to do different kinds of research? What would really separate it, from a technology standpoint, from your run-of-the-mill GenAI?
Josh Stageberg:
It gets difficult to pin down a hard line here when you try to pressure test on one side or another. But again, the way that I think about it is taking an action. In many cases, you are going to interact with an LLM-based app whose primary mode of interaction is the chatbot mode, and it’s still going to take an action on your behalf. In that context, I think it’s easier to think of that action as agentic. But if you want a bigger and more exciting definition, then I would say it’s that reasoning and orchestration bit. Again, the sniff test I’d recommend you use is: is this a five-year-old thing or a seven-year-old thing?
Is this a, “Okay, now put on your shoes. Okay, now pick up your backpack. Okay, now we’re walking to the car,” or is this a, “It’s time to go to school, we need to get out the door”? And the child understands that means: I’m going to put on my shoes, I’m going to get my backpack, and we’re going to walk to the car, and then sort of delegates those functions. Again, when I say executive function, I’m not using that term in a sloppy way. Executive function is the ability to come up with, “Okay, based on this, I have to do these things,” and follow through on those things and not sort of just stop. And as humans, we all have varying levels of executive function. Certainly when we don’t sleep, even if we might have higher levels of executive function when we do sleep, we can experience much lower levels of executive function. So it’s not a fixed concept, right?
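A small sketch of that “taking an action” line: a chat-only app returns the model’s text as-is, while an agent inspects the reply for an action request and executes it. The llm_reply() stand-in and the RESTART: convention are invented for illustration; real agent frameworks use structured tool calls rather than string prefixes.

```python
import subprocess

def llm_reply(user_msg: str) -> str:
    """Stand-in for the model; may answer in prose or request an action."""
    raise NotImplementedError

def chatbot(user_msg: str) -> str:
    return llm_reply(user_msg)  # text in, text out: no action taken

def agent(user_msg: str) -> str:
    reply = llm_reply(user_msg)
    if reply.startswith("RESTART:"):  # model requested an action (invented convention)
        service = reply.split(":", 1)[1].strip()
        # Hypothetical remediation step; adapt to your own environment.
        subprocess.run(["systemctl", "restart", service], check=True)
        return f"restarted {service}"
    return reply  # otherwise, behave exactly like the chatbot
```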
Sean Sebring:
So I think that helps a lot. From the beginning part, we got the reasoning and the orchestration, and from this reiteration you gave, it was more about that action. I’m still trying to make myself better at separating my typical understanding of GenAI versus agentic, and it’s that action bit. Even if it feels like an action when I’m talking to, let’s just say, ChatGPT and I say, “Rewrite this email,” that feels like an action, but it’s all still just kind of discussion-based. Nothing was actually done somewhere else.
Josh Stageberg:
This is where it gets hard to pressure test the edges of this, because it’s all LLM-based technology manifested through apps. You can use the definition that’s based on: I give you a directive, you reason out the directive, you figure out things to do, and you take actions based on that. That’s a pretty advanced version, which really only manifested, I think, in 2026 or late 2025, depending on what circles you’re in. But I think agentic was already something that we were playing with before that. It was just much simpler. Take, again, the progression of children.
And this is why I think this is so important. When we talk about where we’re at literally six months later, it’s a night-and-day difference in terms of capability. And six months from now, I don’t think any of us have a clear sense of exactly where we’ll be. Maybe Sam has a clear sense of where we might be, but I think most of us are keeping up and understanding where we’re at in six-month or three-month periods at this point. And in that context, to my previous comment: I think the word agent or agentic is going to get a little less squishy over time, and it’ll become more of a standardized term. I think marketing in B2B will certainly play a strong role in this, but I think the capability itself will also change. So if today we have seven-year-old capability, in my analogy, do we have 10-year-old capability or 13-year-old capability a year from now?
And if we have 13-year-old capability, then the question starts to become: okay, now has agentic become the 13-year-old version, and is the seven-year-old version somehow semi-agentic or something? We’re going to need to grow into these words.
Chrystal Taylor:
Refine it.
Josh Stageberg:
I guess that’s kind of a non-answer, but I think it’s the reality of where we’re at right now.
Chrystal Taylor:
Well, let’s talk about that real quick. Let’s talk about the speed at which things are evolving, because over the past however long, there have been technology changes at a more and more rapid pace. It used to feel like year over year, and now we’re almost in week over week. It’s evolving even faster than before, which of course means that AI technology and agentic technology are evolving very rapidly. Things that I’ve seen us building in December and in January were vastly different, and between then and now are two completely different things, even though arguably we were working with similar technology to build the same kind of functionality. So talk to me a little bit about that. First, how are engineers keeping up with the speed of the technology change to implement this stuff? And then following that, how are we, as consumers and adopters of it, going to keep up?
Josh Stageberg:
I’ll put aside the discipline of software engineering and how that changes, because I think that’s probably an hour-long podcast in itself, and I’m probably not the best authority on it, but it is unquestionably changing. In terms of how engineers are thinking about the problems we’re trying to solve for customers and organizations, at least in the B2B space, we’re starting to think about how we can do more than just incrementally better. So what do I mean by that? As someone who’s devoted many years of my professional life to bringing SaaS products to market for B2B, I’m conscious that it’s super reductive to say things are incremental. But by incremental here, I mean faster, cheaper, easier ways to do the same things that we did before.
An example I heard recently, which predates me, was a room full of people doing math versus a single person doing math on a spreadsheet. That’s the faster, cheaper, easier. Now we’re in this interesting space where you could argue that there’s still faster, cheaper, easier, but I think the value that we’re starting to talk about providing is not faster, cheaper, easier. Sorry, let me be specific: when I say faster, cheaper, easier, I mean you doing the same things faster, cheaper, or easier than you did them previously, right?
Chrystal Taylor:
Mm-hmm.
Josh Stageberg:
Now I think we’re starting to enter a space where we’re starting to think of technology applications where you just don’t do the thing anymore. And I don’t mean that in the alarmist, there-won’t-be-jobs sense, and there’s a lot we could discuss there, but we won’t have time.
I mean it in the sense that we are going to get back to a type of innovation where your job changes. Right?
Chrystal Taylor:
Mm-hmm.
Josh Stageberg:
And so you could look at it as cycles of a little bit of stagnation and then a little bit of progression. In the progression bits, your job changes: what you do changes. In the stagnation bits, what you do is the same; you’re just doing a little more of it, or doing it a little faster, or doing it a little easier. So what do I mean by this? In our world, in an IT world, we spend a lot of time doing repetitive tasks, and our repetitive tasks today are a little bit different than they were 20 years ago, but we’re still doing them. So while, and this will date me, I might once have been taking a server out of a rack, replacing the power supply, and putting the server back in the rack, we might now be managing fleets of containers in a hyperscaler. But it’s still: that container or this set of containers is bad, destroy them, spin up new containers.
We’re still doing a lot of repetitive things. And I think one of the biggest promises in this first wave of agentic AI, to bring it back to our topic here, is doing a bunch of those repetitive things for you, so that an IT practitioner can say, “I’m not going to do those repetitive things anymore.” An agent is going to do those repetitive things, and it’s going to free up more of my time for bigger project work, strategic things, things that require us to align with business outcomes or drive value to our top or bottom line, which is something that I think IT has largely been trying to shift towards for many years. But I think this might finally be sort of the end of what we call toil in the IT space. And it’ll take some time, like everything.
It takes some time for this type of technology, any type of technology, to become ubiquitous. We tend to think of it as going really fast, and then in companies it ends up going a little slower than we’d like. But I do think we’re at this threshold of getting out of the toil. And I think IT teams in particular will actually be able to get the step function of productivity they’re constantly striving towards out of adopting this type of tech.
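As a concrete, hedged illustration of the container example Josh gives, here is the kind of repetitive remediation an agent could take over: find unhealthy containers, destroy them, and spin up replacements. The sketch assumes the docker Python SDK (pip install docker) and images that define health checks; in an agentic setup, the decision to run a routine like this would come from the reasoning loop rather than a schedule.

```python
import docker  # pip install docker

def recycle_unhealthy() -> list[str]:
    """Destroy unhealthy containers and spin up fresh replacements."""
    client = docker.from_env()
    recycled: list[str] = []
    for c in client.containers.list():
        # Health status exists only if the image defines a HEALTHCHECK.
        health = c.attrs["State"].get("Health", {}).get("Status")
        if health == "unhealthy":
            image = c.image.tags[0] if c.image.tags else c.image.id
            c.stop()                                   # destroy the bad container...
            c.remove()
            client.containers.run(image, detach=True)  # ...spin up a new one
            recycled.append(image)
    return recycled
```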
Sean Sebring:
So Josh, to avoid any confabulation, if you wouldn’t mind, help summarize in five minutes, and in that same five minutes, give us your thoughts on the future of this. It sounds like so much of this is happening so fast that the future could be as early as tomorrow morning, but give us your thoughts from the world of Josh, from the mind of Josh.
Josh Stageberg:
Yeah. I mean, I think things are moving very fast in terms of what the technology is capable of. But as individuals and as society writ large, no matter how fast technology moves, we have our own speed, and that speed has to do with our ability to get comfortable with that technology. I think we’re still trying to get comfortable with cell phones in many ways. I think we’re probably past that phase with laptops, but laptops, albeit initially very, very large laptops, have been a thing for most of my conscious life, which, again, has been longer than I care to admit on a podcast. I think we’re in the first innings of this, right? And whether you believe in artificial general intelligence, or whether you’re skeptical of that and you focus on the current generation of tech that we have in this context, I think that even the current generation opens up possibilities for us in a way that it’ll take some time for us to come to grips with, and some time to really get full utilization out of.
My hope is that what we get out of LLMs does more than replace toil, though I do professionally hope to replace a lot of toil for IT folks in the coming years. I hope that it helps us find medical breakthroughs that were more difficult to find before this tech. I hope that it helps us get safer streets and roads for our families. I hope that it helps us find ways to get more ubiquitous power for this planet and realize less hunger. I think there are opportunities here that transcend, by orders of magnitude, whether I can order a pizza or get a restaurant reservation. Maybe I’m not able to predict the future, but that’s the future that I kind of hope for. And I hope that some of the realities of what we achieve in 2026 with this and similar technologies start to bring that horizon closer to the present, so that we can start to see some of that benefit for ourselves and for the world.
Sean Sebring:
That’s a noble outlook on it. Marketing does try and make pizza a priority for me more often than I care to admit, but I agree. And I think those are great things to hope for and would be an absolute great use for the technology itself.
Chrystal Taylor:
Yeah, I like this outlook. This has been a really good conversation for me. Sean and I have talked about it in the past, but I kind of look at this AI evolution, or revolution, that we’re in right now like any other great technological revolution. It’s painful sometimes in the moment, but at the end of it, we’ll be in a vastly different place than we were before. We’ve talked about it in comparing the advent of cameras to paintings and the assembly line to hand manufacturing. There are all kinds of technology advances that have happened over time that have fundamentally changed the way we work, and I think that we are in the next stage of that. And it’s a little bit scary, I think. But I like that you have a very positive outlook on it, and I appreciate that a lot.
For the AI deniers: it is something that we have to learn how to come to terms with. I don’t think it’s going anywhere, so you’ve got to come to terms with it and figure out how to make it work best for you and your use cases. And it’s going some cool places. I’ve seen some really amazing things already, and I am not a person that’s creating any of this stuff. I’m just a lovely user.
Josh Stageberg:
Well, we can all hope that there’ll be less spam in Gmail one day.
Chrystal Taylor:
Yes.
Sean Sebring:
Well, confabulous, everyone. This has been great. And I suppose that’s it for this episode of SolarWinds TechPod. Josh, huge thanks for helping us try to unpack what agentic AI really means, not just for IT people, but for anyone today. So thanks so much, Josh.
Josh Stageberg:
Yeah. Thanks, Sean. Thanks, Chrystal. It’s been great.
Sean Sebring:
If this conversation sparked ideas for how you use AI in your environment, make sure to follow SolarWinds TechPod so you don’t miss out on future episodes on AI and all other things technology. Thanks for tuning in.