Matai Wilson, Senior Director for Product Management at SolarWinds, joins TechPod to explore the evolving landscape of AI technology. They discuss common misconceptions about AI, its evolution from machine learning to generative AI, and the concept of agentic AI. The conversation delves into the challenges of building AI technology, the exciting future of AI in IT operations, and insights into the upcoming features and tools SolarWinds is developing to enhance user experience and operational efficiency.
Chrystal Taylor:
Welcome to SolarWinds TechPod. I’m your host, Chrystal Taylor, and with me as always is my co-host Sean Sebring. Today we’re excited to dive even further into AI than usual. Our expert guest, Matai Wilson, Senior Director for Product Management at SolarWinds, will help us explore common misunderstandings, how we, at SolarWinds, think about AI, what it’s like working on the technology, and more. Matai, welcome to the show. Can you tell us a bit about yourself?
Matai Wilson:
Yeah, that was really good, by the way. So I’m not sure that I’ll be able to do that just as well.
Yeah, my name’s Matai Wilson. I am the Head of Product for Platform, AI, E-commerce, and Data here at SolarWinds. I’ve actually been here for less than a year, so I’m still learning our product portfolio, but I was highly involved in AI at other companies prior to SolarWinds. And I’m excited to be here.
Chrystal Taylor:
Awesome. Well, let’s start at the top of things, as it were. Let’s talk about AI, the term and the technology, because I think there are a lot of misunderstandings and misconceptions out there about what AI actually is, and the fact that it is an encompassing subject. There is more than one type of AI, and as it has become more popular in mainstream media and in everyday use, people use ChatGPT to plan their vacations and things now. People may have misconceptions about what we might be talking about.
So let’s start there. Tell us about AI the term, and how that might be misconstrued or what the technology is like.
Matai Wilson:
AI is a very hot topic today, and as with any hot topic, people get confused about it. There’s a bit of misunderstanding around what AI actually is and what it isn’t. So I like to think about it in two specific ways.
There’s the product of AI, which is what you and I use every day, ChatGPT, Claude, Gemini, etc. And then there’s the practice of AI, which is more about what the companies and engineers and the scientists are actually going and building. And it may be difficult to understand what the practice of AI is, but the very simple way to think about it is that it is the practice of trying to make machines behave intelligently.
It is extremely broad and it is literally as broad as that statement. How do I make a machine behave intelligently? That is the umbrella practice of AI, and the way the two relate is that there are many, many layers of topology between the practice of AI and the actual product of AI that you and I know today.
You start with the practice of AI. You narrow it down into machine learning, and you narrow it down again into deep learning, and then you narrow down again into transformers and large language models. And those are the core technologies that actually deliver the product of generative AI, which is the ChatGPTs and the Claudes and the Geminis.
And so the way people should think about it is that AI is all of these technologies that are singularly focused on making machines behave intelligently. And then there is this one very specific thing called generative AI, where a model is trained on a huge corpus of data and then statistically infers on that data to generate the most likely output from your prompt.
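A rough sketch of the nesting Matai describes, written as a plain Python structure (illustrative only; the layer names come from the conversation, the shape is ours):

```python
# Illustrative only: each level is a narrower subfield of the one above it.
ai_landscape = {
    "artificial intelligence": {           # the umbrella practice
        "machine learning": {              # learn patterns from data
            "deep learning": {             # multi-layer neural networks
                "transformers / LLMs": [   # the tech behind generative AI
                    "ChatGPT", "Claude", "Gemini",  # the "product of AI"
                ]
            }
        }
    }
}
```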
And the key point here is generative. So if you think about other things within the practice of AI, you can go all the way back to, let’s say, Deep Blue, which was one of the world’s first looks at what we thought might be AI. Deep Blue was used to beat the world chess champion.
And what Deep Blue actually was, was a bunch of people sitting down and writing discrete If-Then statements to teach a computer the strategy to beat somebody at chess. And then the next big thing was like, okay, chess is a pretty complicated game, and it takes a really smart person to beat a grandmaster, but it’s even harder to beat the champion of the game Go.
And so, to create a machine that could intelligently beat somebody at Go, there were too many variables to consider. You could never sit down and manually code all of those If-Then statements. So they used the next advancement in the technology, machine learning and deep learning, and trained it on all of these different game strategies so it could develop its own strategy in pursuit of that single goal: I must win at this game called Go.
And eventually, as we all know, AlphaGo did end up beating the greatest Go champion in the world. And then you take that step further into our transformer-model LLMs, and now the problem is so much broader. We don’t have this singular focus of, I need to win at this specific game, chess, or I need to win at this specific game, Go. Now the possibilities are endless. When you think about generating or creating, there’s literally an infinite number of things you can create. And so you have to train on this massive, massive corpus of data, much larger than any of the training that AlphaGo encountered.
And then you do all of these statistical calculations that basically allow you to output the most probable outcome based on the prompt that you give the actual technology, in this case the LLM, or the generative model that’s creating your image. And that is where AI from a consumer perspective lives today. And that’s why people are so crazy about it, and why ChatGPT hit 100 million users in just a couple of months.
It’s because it was totally groundbreaking and revolutionary to have something that could learn that well how to interact as a human being. And that was kind of the original test of “Have we built AI?” It’s called the Turing test. You know, can a computer converse with a human and fool the human into thinking that it’s another human?
And ChatGPT kind of blew that out of the water. And so there’s this really interesting dynamic of this evolution of AI, and we’re getting further and further and further. We keep changing our definition of what AI is. But the way to think about it is that AI as a product is ChatGPT, it is this generative AI thing that we have today, and AI as the practice and the technology that everybody likes to think about — and now put on their resume — is this broader umbrella of different technologies and practices that can all make machines behave intelligently.
Maybe that was a little too in the weeds. I’m not sure.
Chrystal Taylor:
No, I was enjoying it.
Sean Sebring:
No, I think it was fantastic. Just to tangent, to break away for a second: what’s the new term going to be? I feel like we’ve been using AI for too long. I just think of Terminator and I’m like, “Come on, there’s got to be a new term.” Machine learning came up, and you said machine learning is a subset of AI. So I’m like, “What’s the new term for AI going to be? Are we going to have a new term?” It’s getting used and abused.
Matai Wilson:
I think the term AI is going to stick around. Maybe it will just evolve to become this broad, encompassing thing.
Sean Sebring:
Cognitive Computing.
Matai Wilson:
But from a research perspective, and from how you think about it as a technology, everybody’s like, “Hey, if you don’t have AI on your resume, you’re losing. You’ve got to put AI on your resume.” And I’m like, “Well, 10, 15 years ago when I worked on predictive analytics, now I’m supposed to call that AI, as if I was working on AI 10 or 15….” I wasn’t. I was doing predictive analytics. That’s a very different thing than generative AI. So we should be a little more purposeful about our use of, “Hey, is it Gen AI? Is it machine learning? Is it predictive analytics?” By the way, creating an actual autonomous, intelligent IT operations platform involves all of these things. You can’t just say, “I’m going to throw a bunch of data at my Gen AI and it’ll just figure it out.” One, because that’s super expensive. And two, because it’s probably going to get it wrong. It’s the combination of discrete functions, predictive analytics, machine learning, and generative AI together that will give you the best solution.
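A loose sketch of that combination, with every name and threshold hypothetical: a cheap discrete rule runs first, a simple statistical check stands in for predictive analytics, and the expensive generative step is reserved for when it’s actually needed:

```python
# Hypothetical sketch of combining discrete logic, predictive analytics,
# and generative AI in one pipeline. All names here are invented.
from statistics import mean, stdev

def discrete_check(cpu_percent: float) -> bool:
    """Cheap, deterministic rule: no AI needed for a hard threshold."""
    return cpu_percent > 95.0

def is_anomaly(history: list[float], latest: float) -> bool:
    """Simple predictive-analytics stand-in: flag values far from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > 3 * sigma

def summarize_with_llm(context: str) -> str:
    """Placeholder for the costly generative step; only called when needed."""
    return f"[LLM summary of: {context[:40]}...]"

def handle_sample(history: list[float], latest: float) -> str:
    if discrete_check(latest):
        return "page on-call: hard CPU threshold breached"
    if is_anomaly(history, latest):
        # Reserve the generative model for explaining the anomaly.
        return summarize_with_llm(f"CPU anomaly: {latest} vs history {history}")
    return "all quiet"

print(handle_sample([40, 42, 41, 43, 39, 41], 88.0))
```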
Chrystal Taylor:
Thank you very much, by the way. That’s actually super interesting. We talked about this in our pre-show call, for the listeners: I always thought of it the other way around, that AI was the umbrella. And I think this is partially because, like with other famous brands, people are treating the word AI almost like a brand name. Think of Band-Aid or Kleenex: Kleenex is a brand of soft tissues, but soft tissues are the actual thing. The same with Band-Aid: the brand has become the word for adhesive bandages, but the thing is adhesive bandages. I think the same thing is happening because marketing has just gone bonkers with it.
So the fact that it’s become so mainstream is actually detracting from defining the technology. You almost have to think about them as separate things. When you read about things that are AI, you see, like you mentioned earlier, a lot of things that claim to have AI but are not AI. And I think something similar is happening there: marketing has just taken over. And I work in marketing.
Matai Wilson:
It’s that, and it’s also that people are driven by incentives. You can’t be a company today that doesn’t talk about AI and survive. You won’t get investors, you won’t get funding, et cetera. Google is a great example of this. Google has been a leader in actual AI for decades, and they never talked about it, because they used pieces of it, such as their machine learning algorithms, in advertising, to fine-tune their advertising and make their business more effective. AI wasn’t a hot topic because it wasn’t a consumer good, so they never talked about it. But when OpenAI launched ChatGPT, all of a sudden Sundar, the CEO, was under immense pressure. And at the Google I/O conference following that launch, he said AI 80 times within 30 minutes or something, because he was now incentivized to talk about it. So it’s not like the company pivoted to AI all of a sudden; they just needed to market it because the incentive was there to do so.
Sean Sebring:
I will say, I’m grateful Google’s back to being useful as far as a search engine goes, because when ChatGPT came around, I stopped using Google. Sponsored results had taken over, and again, profitability, yada yada. But now they have an AI engine doing the search-engine work for me: I type it into Google and actually get a ChatGPT-like response. And again, it’s just meeting the demand. That was my demand: when I went to a search engine, I expected it to help me find an answer. And ChatGPT became better at that. So Google had to step up and say, “Fine, we’ll still be best at giving you the answer.”
So real quick, before we leave the topic, because we’re talking about names and labels for different types of AI: the newest one to me, and I hope I’m not moving too far ahead or jumping ahead in our topic list, is Agentic. Agentic is the one that, from my perspective, has come up in the last year or so. It’s the newer, more risqué of the AI words: “Oh, Agentic, that’s the new hot topic for AI stuff.” So can you tell us about that, Matai?
Matai Wilson:
So Agentic is another really loose term, but it’s all about, in my opinion, how do we create things that do something for you? You talked about how for a while ChatGPT replaced Google because it got you answers. And that was something the LLM, as a Generative AI machine, was really proficient at: giving you answers to your questions in a natural language format that’s very easy to understand and digest, instead of you having to click through a bunch of links on a search page and read it yourself. Agents take that to the next level. They’re still predominantly this Generative AI, natural-language-interface kind of thing, but there’s an entire underlying framework beneath that. And again, this is a very loose term; it’s not industry-defined.
But in general, think of how it worked previously: I have a button, and when I click on it, it goes and does something for me, performs an action. It will probably make a few API calls to different services, those services will perform their functions and return results, and then you get the output. We’re essentially doing the same thing, except now you’re orchestrating through what is essentially the API of these LLMs, for instance an MCP server. And through that MCP server, you call out to different things, including agents. An agent is a very specialized version of, let’s say, that LLM, trained or fine-tuned to do one task, or a few tasks, really, really well. And it’s constrained to a narrower space.
So when people talk about Agentic, I think it’s a little bit too loose, because it’s hard to understand what’s going on. Agents are more specific. It’s a tool that’s meant to go do a specific action for you, book a flight for instance. And that agent is going to have access to all of that flight data, and the booking data, and be able to go figure out what’s the best flight for you to take. And then it’s going to come back and say, “I can book this flight for this amount of money.”
It’s not going to go search the entire World Wide Web for all of the different flights or something like that. It’s worth noting, though, and this is why I say it’s a loose term, that an Agentic framework can consist of agents that may not be AI or Generative AI at all. They may be discrete functions, like searching something. It may call out to an actual search engine and just give you back the top result. That’s a very discrete function that doesn’t need Generative AI. Or, “Hey, what’s the weather outside?” You don’t need AI for that. It’s going to call the weather API and return that into your actual prompt.
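A toy dispatcher in that spirit, where some “agents” are plain discrete functions and others would be LLM-backed; this is a hypothetical sketch, not a real MCP implementation, and every name here is invented:

```python
# Toy "agentic" dispatcher: some agents are discrete functions, others
# would be fine-tuned models in practice. All names are hypothetical.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def register(name: str):
    def wrap(fn: Callable[[str], str]):
        TOOLS[name] = fn
        return fn
    return wrap

@register("weather")
def weather(query: str) -> str:
    # Discrete function: would call a weather API, no generative AI needed.
    return f"72F and sunny (stub for '{query}')"

@register("search")
def search(query: str) -> str:
    # Discrete function: would return a search engine's top result.
    return f"top result for '{query}' (stub)"

@register("book_flight")
def book_flight(query: str) -> str:
    # Specialized agent: in practice an LLM fine-tuned on flight/booking data.
    return f"found a bookable flight matching '{query}' (stub)"

def dispatch(tool: str, query: str) -> str:
    """The orchestration layer routes each request to one narrow agent."""
    return TOOLS[tool](query)

print(dispatch("weather", "Austin tomorrow"))
print(dispatch("book_flight", "AUS -> SFO next Friday"))
```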
Sean Sebring:
What I think is really neat about that is the potential explosion of these agents talking to each other, each with its specific design and function. Is that where this is headed? Almost like I have a specific role at SolarWinds and Chrystal has a specific role, but we’re able to interact and work together to accomplish a greater goal. Is that how agents will start to exist, working together to accomplish a mission?
Matai Wilson:
I think that’s how we’re thinking about it. It’s a very nascent thing. The way we think about it at SolarWinds, and the way I think about it, is that you’re still going to need your general contractor, who is going to find the contractors who are good at each specific job, orchestrate them all in tandem, do things at this time, get that data from here, give it to somebody else, and so on and so forth. And you actually see some of this already occurring in things like deep research modes, where the model will self-prompt.
So what’s happening in these deep research, or thinking, or reasoning models is you’re giving it a short prompt: tell me more about nuclear fusion. And it’s going to say, “Okay, well, what does this person want to know about nuclear fusion?” It’s going to self-prompt and ask itself a bunch of different questions, then go out and answer those questions individually, and then compile and summarize them. That’s a very general-contractor role: first I need to understand the problem, then break the problem down into an actual scope, then deliver on each of those scopes individually, and then bring them all back together into a finished product.
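A rough sketch of that general-contractor pattern, assuming invented agent names and using Python’s standard topological sort to keep specialists in dependency order; the sub-agents never talk to each other, only to the orchestrator:

```python
# Sketch of the "general contractor" pattern: sub-agents never talk
# peer-to-peer; one orchestrator sequences them by dependency.
from graphlib import TopologicalSorter

def frame(job: str) -> str:    return "framing done"
def plumb(job: str) -> str:    return "plumbing done"
def wire(job: str) -> str:     return "electrical done"

AGENTS = {"framer": frame, "plumber": plumb, "electrician": wire}

# The contractor knows the electrician must wait on the plumber, etc.
DEPENDENCIES = {
    "framer": set(),
    "plumber": {"framer"},
    "electrician": {"plumber"},
}

def general_contractor(job: str) -> list[str]:
    results = []
    for worker in TopologicalSorter(DEPENDENCIES).static_order():
        # Each specialist gets only its own instruction, in the right order.
        results.append(f"{worker}: {AGENTS[worker](job)}")
    return results

print("\n".join(general_contractor("build the house")))
```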
So the general contractor goes and finds the contractors: the guy who’s going to frame the house, the guy who’s going to do the plumbing, the guy who does the electrical. He’s going to schedule when and what they’re going to deliver. And at the end, the general contractor delivers you a fully built home. That’s the same way we think about the Agentic evolution. The electrician is not necessarily going to go talk to the plumber and figure out exactly what to do. They’re going to have an intermediary, the general contractor, who is going to see, “Oh, the electrician is doing this, but they need to wait until the plumber finishes over here. So hang on, plumber, don’t do that yet.”
The electrician tells the general contractor, “Hey, I’m done with this job.” Then the general contractor goes and says, “Okay, plumber, go do this.” That’s a more efficient way to think about it, because if you have all of these LLMs talking to each other individually all the time, it’s the same as all these contractors talking to each other individually all the time: it’s chaos and it’s very inefficient. So you want a governing orchestrator who understands the intent of your prompt, and then figures out exactly what it wants each of these contractors or agents to do to deliver the entire thing. And by the way, that’s a very human thing. This is where we started really getting…
Sean Sebring:
That’s exactly where I was about to go with this. This is my closing remark on this, unless, Chrystal, you want to add: the idea of self-prompting. I’m like, “Okay, we’re getting a little close to consciousness here. You’re questioning yourself.”
Matai Wilson:
Yeah, but that’s the goal. The whole point of the transformer model, and the way we’ve used generative AI and LLMs as statistical analyses, is that it’s actually modeled directly after how we think the human brain thinks. We take in a bunch of information, and somewhere in the back of this neuron-filled brain of mine, I’m doing a bunch of statistical things to predict what I should say next, or do next, or what is going to happen with that car. That’s actually why human drivers, for instance, are surprisingly good at driving: we are good at predicting what a car is going to do, even though we don’t actually know. What we’re doing is a statistical calculation back there somewhere: “Oh yeah, they’re going to do this or that. And oh, they’re braking, so now I need to brake.”
Chrystal Taylor:
So first of all, that’s crazy, what you just said. My brain doing statistics in the background, in the subconscious? I don’t even know about statistics in my conscious brain, but the idea that my subconscious brain is doing statistical analysis is wild to me.
Matai Wilson:
It’s not doing a discrete calculation. It’s a probabilistic thing. Think about catching a ball that’s flying through the air, and the amount of actual calculations that you would have to do…
Chrystal Taylor:
To catch it.
Matai Wilson:
To know exactly where that ball’s going to land and put your hand there to catch it. It’s an immense amount of calculation. Instead, you don’t do the calculation, you just infer, you’re like, “Hey, based on the direction it’s heading, and how I feel the wind, I’m pretty sure it’s going to end up right about here.”
Sean Sebring:
Probably…
Matai Wilson:
And that’s what AI is doing.
Chrystal Taylor:
I just want to say, that’s an incredible analogy, and my mind is blown over here right now. But I want to take it back a little bit, because you were describing Agentic AI and what it is, which is all very instructive for me, because I don’t know much about the technology itself yet. And I wanted to ask you, and I don’t know how much of this you can answer: does this make it more challenging to meet those AI By Design principles that we have for security and data privacy and that kind of stuff?
Because one of the big problems with these big LLMs is that they’re basically completely open to the internet, and all of the data therein. And there are always these big concerns about data privacy and data security and all of that. But the way you were describing the contractors and subcontractors, they don’t necessarily have to talk to each other, which would indicate, in my brain at least, that you could keep parts of the data isolated from each other, so there’s less crossover. From a technology perspective, is the Agentic AI system more challenging to secure than other types of machine learning or AI? Or is it easier, because you have these isolated segments?
Matai Wilson:
So I should caveat this by saying that I am not on the GDPR council and I am not an expert in data privacy. But how I would think about it is, this is really no different from how companies handle your data already. There are pretty strict rules and regulations around PII masking, what data we expose, how we encrypt it or don’t, how we tokenize it, et cetera. And none of that has changed. So really, the question around data privacy is what people think of as, “Is my data going to get absorbed, essentially trained on, by the model, and then is someone else accessing that same model going to be able to prompt it to get answers about information I’ve given it?” It’s a different issue. It’s not about how, as we transport data from one entity to another, one endpoint to another, we encrypt, or tokenize, or mask that data to make sure there’s absolute privacy.
All of that has remained completely unchanged. The data question is: is the model learning based on what you’re telling it? And is somebody else then going to be able to extract that same data on the other side? There’s a really good way to think about this, and the answer is simply no. First and foremost, we’re not training these models on any customer data, and there wouldn’t be a compelling reason to do that in the first place. But the main way to think about this is: every model that exists today is trained on a massive set of data, 90 billion parameters or something like that. That is its pre-trained data. That’s why it’s called GPT: Generative Pre-trained Transformer.
On top of that, for it to do anything else, like infer on the context that you’ve given it, for instance the information in your private company database, it adds context and then does that statistical calculation, combining the pre-trained data with that context to get an output. It is not taking that context and adding it to the pre-trained data set. It actually can’t do that. So from a pure technology perspective, there’s no way for the model to, say, update itself with the data you’re giving it and then hand that data to somebody else who prompts it, “Hey, I think my competitor is using you. What do you know about them?” That can’t happen, because that context window where you drop your context in, especially at an enterprise level, essentially just disappears once you’re done with it.
And yeah, there are these models that say, “Oh, we’ll remember this stuff.” But again, that’s just pure data storage. So when I say, “Hey, I’ve got this AI, and it’s going to remember what’s going on in your configuration environment, it’s going to remember the SQL database that you configured,” what I’m essentially doing is taking a snapshot of your database and remembering what that database looks like. I’m not training the model on it. I’m just holding that snapshot as context for later use in my inference. And when you think about it that way, that snapshot of data is still subject to the same data masking, data privacy, et cetera, that everything else would already have been subject to if we were just transporting data back and forth.
So there is no world where we’re training a model, or updating the pre-trained data, on your data, so you never have to worry about that. And any data that we do use, let’s say as context to infer upon to get you better answers, is still just your data, and it’s still subject to the same privacy and everything else that keeps your data secure.
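A minimal sketch of that flow, with a stand-in chat() function rather than any real API: the snapshot rides along as per-request context, and the model’s weights are never touched:

```python
# Illustrative sketch: customer data rides along as per-request context
# and then disappears; the frozen model's weights are never updated.
# The chat() function is a stand-in, not a real API.
def chat(messages: list[dict]) -> str:
    """Stand-in for a call to a frozen, pre-trained model."""
    return f"[answer inferred from {len(messages)} messages; weights unchanged]"

def answer_with_context(question: str, db_snapshot: str) -> str:
    # The "memory" is just stored data re-injected each time, subject to
    # the same masking/encryption rules as any other data in transit.
    messages = [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{db_snapshot}\n\nQuestion: {question}"},
    ]
    return chat(messages)  # the context lives only for this one call

snapshot = "sql-prod-01: 500GB, last backup 02:00 UTC"
print(answer_with_context("When was the last backup?", snapshot))
```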
Sean Sebring:
That actually helps me a lot.
Matai Wilson:
Does that make sense? Should I explain it more simply?
Sean Sebring:
No. I suppose, just for Sean’s layman’s terms: if I had the ability to actually train ChatGPT with how ridiculously I engage with it, that would be a problem. So of course, what I’m doing with ChatGPT isn’t training it. It’s already trained, and I’m giving it context about me, my questions, what I’m trying to do. And that’s just context for it to say, “Well, this guy’s a lunatic, but here’s what he wants to know, so let me tell him.” Its model isn’t changing.
Matai Wilson:
With a minor caveat, ChatGPT from a consumer perspective, not an enterprise perspective.
Sean Sebring:
Oh, sure.
Matai Wilson:
ChatGPT is going to train on your data.
Sean Sebring:
Oh, it is?
Matai Wilson:
It’s not constantly training on it. It’s not like you’re dropping in conversations and it’s updating the base GPT model over and over. What they’re doing is storing that data, and they’re going to compile it all into the next 100-billion-parameter model that they build, ChatGPT 6 or something. And all that conversation you’ve had will then be part of the corpus it’s trained on, and will become part of the larger GPT model. But at an enterprise level, even OpenAI knows better, and by the way, when you use it at an enterprise or company level, it’ll say, “We don’t train on your data.” That means, again, they’re not taking the conversations you have and creating a new model on top of them. But as a consumer? They’re absolutely doing that.
Sean Sebring:
Yeah, well take advantage of me. It’s free, whatever.
Chrystal Taylor:
Yeah, that’s not a new thought either, right? That’s just public versus private anything, data-wise. When you use a publicly open model, it’s public. Google’s been training their advertising on us for years and years, so it shouldn’t really be a surprise to anyone that the public version of any of these large LLMs is trained on your data.
Matai Wilson:
I would also add, by the way, for consumers who are concerned about their data privacy: number one, if you don’t want somebody to know about it, don’t put it on the internet. But also, I like to think about this with a weird analogy. Sure, if you were the only user of ChatGPT, and they trained their model on just your conversations, there would probably be a data privacy issue. It would uniquely know your data and tell the next people who use it, “Oh yeah, this is exactly what they talked about,” more or less. But you’re one of hundreds of millions of people. Your ten-minute conversation that consisted of maybe 1,000 words, out of the literal tens of billions of parameters they’re training on, is just so insignificant. You really don’t have to worry about it exposing your secrets from its GPT base model. Now, if they store your secrets separately and then get subpoenaed for them, that’s different.
Sean Sebring:
The hubris to think I’m that important though.
Matai Wilson:
Exactly, yeah. That’s what you have to remember. It could be an issue, but realistically, is your data so important that they’re going to individually tell the GPT, “Hey, remember Chrystal’s conversation with Sean about something at SolarWinds? We should remember that and let everyone else query that exact data”? That’s not going to happen.
Chrystal Taylor:
Interesting. Well, so you clearly enjoy the technology and know a lot about it. So what is it like for you building the technology? Because I think that we had a conversation with Derek Daly, I want to say a little over a year ago, talking about functionality that exists inside our products already, and what the conversations were around building those pieces of technology and where do you host things and all of that stuff. So what’s it like for you building the technology?
Matai Wilson:
It’s extremely exciting, but it’s also so difficult, because SolarWinds is trying to be on this bleeding edge of Generative AI and operational AI. With that, we end up in this situation where we’re ahead of where customers are. And especially as a product manager, when the customer doesn’t know what they want, it’s really hard to know what to give them. So that’s the situation we find ourselves in: it’s difficult to validate our hypotheses without just going and building it and seeing what the reception is like. But that’s an expensive way to do business, especially with respect to AI, because AI is very expensive to build and to run. And so there’s this constant trade-off and balance I’m trying to hit of, “Let’s go chase the next big thing, like agents, but let’s also be reasonable and actually deliver something of value that our customers will want and that will then build shareholder value.”
There’s a very, very difficult balance to be had there between chasing the new shiny object and delivering business value. I’ve been in technology and software for a while, and I’ve never been in a situation as lopsided as this one, because AI is, as we said before, such a hot topic. If you’re not talking about AI, you are a dinosaur. That is pulling business value one way. People want it more. But then for a split second, they’ll look over and be like, “Wait, wait, wait. Money still matters.” Profits still matter. Revenue still matters. Margins still matter.
And so there is a very tangible tug of war going on. If you don’t deliver on this AI thing that everybody else says they’re delivering on, the company’s dead. But it’s also really expensive to do, and you don’t know exactly whether customers are going to want it, or even use it, or whether it will work, and that weighs on it. So we have to balance: how do we still deliver features of tangible value, things we are sure about because we have done proper research to understand that our customers really want them, and then how do we go experiment? How do we try to do something special that other people aren’t doing or can’t do, et cetera?
Chrystal Taylor:
I love this. And also, just to give you some context, the reason I’m so excited about this is that historically, SolarWinds has not tried too hard to be on the bleeding edge of technology in any capacity. We’re usually several years behind. And there are lots of reasons for that. There wasn’t really an appetite from our customer base for us to be on the bleeding edge. There are always people who want it, but the larger part would rather have stayed in a comfortable zone, making sure they have the support they need and all of that. And so, having been around SolarWinds products for entirely too long, about 15 years, I can tell you that it is a new, exciting prospect that we are trying so hard to incorporate this technology. I’ve had loads of conversations with the THWACK MVPs where we talk about these things, and they are very excited. Even if they don’t really know what they want, it is very exciting, because SolarWinds has previously not really done much of that.
There have been some things that have come out over the years that are newer technologies, for sure. We do get there, but usually we’re two or three years behind the curve, I would say, waiting for it to be established, knowing our customers want it, and then moving into the space of how we support it. So hearing about us trying to incorporate technology that is so new, where, as you said earlier, the framework is still being defined for a lot of this stuff, and where, partly because technology is advancing at such a rapid rate, there’s no possible way to have frameworks for everything anyway, I just find really exciting. It’s interesting to hear how we’re thinking about these things and how we’re working on incorporating them to better enable our customers: all of the things we’ve already been working on, already talked about, and already announced we’re designing, to help them save time and ultimately save money themselves.
It’s just really exciting. So thank you for sharing your experience with us, because I think that that has been really interesting.
Matai Wilson:
Sure. I would also say… I would disagree a little bit that SolarWinds has been too far behind. I talked about Sundar and Google, and I’m not trying to compare SolarWinds to Google, but in a very similar vein, SolarWinds has historically done a lot of what people are now calling AI. They’ve done a lot of machine learning, a lot of predictive analytics, and we just didn’t talk about it as AI. We were just like, “Here’s a new feature called anomaly-based alerting, and it’s going to help you reduce noise.” Done. But now it’s, “Oh, here’s AI anomaly-based alerting.” So we’ve been doing these things; we just haven’t done a good enough job of talking about them and marketing them under that AI umbrella. And I think now we’re getting more into the tune of, “Let’s talk about AI specifically as that umbrella term, and all the goodness we’ve already delivered over the past three, four, five years.”
Chrystal Taylor:
Yes. Yeah, behind was probably the wrong word. I didn’t really mean that. We have had some transformative technologies put into our products; people still, to this day, can’t explain how NetPath works, for instance. And that was a huge, transformative piece of technology. Even when I go to events now, people are like, “What’s that? How does that work? How do I use that?” We’ve certainly been doing it for a long time, and I talk about it a lot. So I don’t mean to demean us in any way; I just don’t quite know how to describe what I’m thinking, I guess. It’s more that, like you say, we weren’t vocal about what we were doing. This is one of those times where we’re talking about transformative technology as it’s happening, and that is unusual for the SolarWinds of the past.
Sean Sebring:
I am fortunate in the SolarWinds portfolio that I’m very close to our ITSM product, SolarWinds Service Desk. And just seeing Generative AI in there, and being able to engage with it in the product and see, oh my gosh, the potential. And the hardest thing was getting it in there. Now that it’s in there, it’s like, “What do we want to do with it next?”
You talked about something, Matai: it’s hard to develop and design sometimes because the customers don’t even know what they want. I have a great example of that, and it has to do with analytics and reporting. A lot of the time I get with customers, they want to talk about reporting. I’m like, “Okay, well, what story do you want to tell? What are you trying to unveil with the data that’s in there?” And a lot of the time they don’t know. They just know they want to know. And so this is where I’m like, “Man, if AI can help open doors for things like that, it can literally be an assistant and an advisor to customers for a lot of different things.” Analytics is a big one, because being able to work with data to produce the stories is a specialty skill.
And if AI can do that, now I’m more powerful as just a Service Desk Manager, because I had a tool that helped me dive into the data, to tell a meaningful story that I can then bring up to leadership. Which without, I’d need a consultant, someone to come in and review, maybe even a BI person to take the data and transform it into a story. And I always think of reporting as storytelling. It’s all storytelling. If you want a chart, you want a graph, that’s a supplemental piece of content to your story you’re trying to tell.
So I’m super pumped for what more AI is going to bring. Especially, and this is why I brought up the Agentic piece, SolarWinds has loads of products, if you weren’t aware, listeners, customers, potential customers, and the ability for things to talk to each other is huge to me, because not everything is going to be an expert at everything. We need that ability: I’m an expert in one field, someone else is an expert in another, and we come together and have a solution. That’s how the operations world works. So I’m excited for the technology to have these capabilities as well.
Matai Wilson:
Yeah, it’s definitely a really exciting time to be working in technology. And for me, one of the things I actually think Generative AI is most useful at, and it’s extremely relevant to our industry, is the whole age-old adage of you don’t know what you don’t know. Generative AI is incredibly good at telling you what you don’t know. And when it comes to IT observability, what you don’t know is the thing that will bring down your system. So when you couple that with the idea that Generative AI is good at telling you what that is, it creates more resiliency, more stability, more availability.
We in the industry talk about 3, 4, 5, 6 9’s of availability as gold standards. And I think that there’s not an unrealistic possibility that we could get to a world where availability is 100%, because AI is not going to just tell you what’s going wrong. It’s just going to know what’s going to go wrong before it happens, and it’s going to solve the problem before it ever becomes a problem, and you will never go down. There’s a whole really interesting world of possibilities when you think about it that way.
Chrystal Taylor:
Well, loads of people will be happy if we ever get to that world where we’re at 100% availability. Speaking of excitement, and it doesn’t have to be related to your work or SolarWinds products, but what, just as a fun end and wrap-up here, what are you the most excited about the possibility of AI being able to do?
Matai Wilson:
So I’ll split it into two sides, a personal perspective and a work perspective. From a work perspective, it’s really the thing I just hit on, and I’ll go a little deeper into it. The way we think about AI at SolarWinds is that we’re trying to create this IT observability system, this platform for observing your environment. There are really two major components to that. The first is the software, which exposes itself via the UI in your browser, on your monitor. The second is the IT admin, the human involved, who works with that software and makes it do whatever you need it to do: identify and resolve issues, observe, et cetera. As we tiptoe further into the world of AI, the eventuality is that we get to a fully autonomous version of that IT observability.
The goal is that the software takes over more and more of the compute, and the human cognitive load becomes less and less. What we want is to allow the IT admin, or whoever is sitting in front of the computer working with our software, to eventually take their hands off the wheel and let AI drive. That gets us to autonomous operations. And to do that, we have to replace the human cognitive part, which has three big pieces. The first is the digital subconscious part of that human cognitive load: how do I reduce noise and amplify signal? How do I reduce mean time to detect an event or an incident?
The second part is the digital consciousness: how do I take that, action it, and resolve it, reducing mean time to remediate or resolve an incident? And the last part is the one I’m super excited about, which I’ve referred to as the digital intuition. Digital intuition says: based on everything I’m seeing, I anticipate that this node over here is going to fail at this time, so I’m going to proactively remediate instead of retroactively remediate. And that’s how we get, again, from four, five, six nines of availability to 100% availability. That is super, super exciting.
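A toy sketch of that digital-intuition idea, using a naive linear extrapolation as a stand-in for the real predictive model; everything here, names and thresholds alike, is hypothetical:

```python
# Hypothetical sketch of "digital intuition": extrapolate a telemetry
# trend and remediate *before* the predicted failure, not after.
def predict_minutes_to_exhaustion(samples: list[float], limit: float) -> float | None:
    """Naive linear extrapolation of resource usage (one sample per minute)."""
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)
    if slope <= 0:
        return None  # not trending toward the limit
    return (limit - samples[-1]) / slope

def proactive_remediation(samples: list[float], limit: float = 100.0) -> str:
    eta = predict_minutes_to_exhaustion(samples, limit)
    if eta is not None and eta < 30:
        # Act before the incident exists: scale out, restart, shed load, etc.
        return f"predicted exhaustion in ~{eta:.0f} min; remediating now"
    return "no action needed"

# Disk usage climbing ~2% per minute and already at 90%:
print(proactive_remediation([80, 82, 84, 86, 88, 90]))
```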
We’re already doing a ton of stuff on the digital subconscious, and a ton on the digital conscious. But that missing piece, the digital intuition, is made possible by things like Generative AI, because Generative AI is so good at predicting. If you think about what it’s really doing, you’re giving it a set of information, and it’s predicting the answer you want based on that information. So you can apply that same idea to a subset of data: “Okay, predict what’s going to go wrong here, and now go remediate.” Digital intuition enables proactive remediation, which enables full availability all the time, and that is really, really cool. And from a personal perspective, I’m excited for AI to graduate from being the really eager but kind of dumb intern to an actual useful helper. But maybe I’m also a little afraid of that, because it signifies that my job is probably at risk.
Chrystal Taylor:
Well, you just got to learn to cope with it.
Sean Sebring:
Going back to the business side of things: I work in sales, so I spend a lot of time trying to help customers understand value. And the talk of intuition is great on paper, but you’re also investing in a potential that hasn’t become a reality. Do you fear, or see, or anticipate resistance to accepting that? How do we approach convincing people that this will be worth the investment, that it’s preventative? I just see pushback, trust issues: “You’re selling me snake oil, saying it’s going to prevent problems, but how will I actually know, since I never feel or see the cost of something that didn’t happen?” Do you see where I’m going with this?
Matai Wilson:
That’s a great point, and I should say there should rightfully be a certain amount of skepticism toward AI. AI is not the end-all be-all today. It can’t do everything, and it does get things wrong, and in your enterprise IT environment especially, you don’t want it to get things wrong. Or at least you want it to get things wrong less often than a human would. On the flip side, I would say a lot of these technologies already exist, and it’s a matter of how we combine them to create something really valuable. And that value is actually not invisible. It’s very easy to measure.
So for instance, let’s take the digital intuition and proactive remediation. There are things you can measure, like MTBI, Mean Time Between Incidents. If MTBI is at some baseline, and all of a sudden you turn on proactive remediation and incidents become further apart, you are literally going to see an immediate impact: “Hey, normally we spend this amount of money on this many incidents over this amount of time, and now we’re spending half that amount,” hypothetically.
So these things are not difficult to measure. It’s just a matter of communicating them effectively. Again, the technology is here, and it’s only going to get better. Now, will it plateau, or will it continue to accelerate at this exponential rate? I think GPT-5 might suggest the former rather than the latter, though a lot of decisions with GPT-5 were probably more product decisions than technological decisions. Either way, the technology will get better. There’s no way around that; it’s not going to get worse. And if these technological paradigms are already shifting toward AI can do this, AI can do that, then while you should have a healthy sense of skepticism about what it can do today, the idea that it can proactively remediate six months from now, or in however much time, is not outside the bounds of reality. This is going to happen. It’s just a matter of which software vendor you want to do it for you.
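A small sketch of that measurement, with invented timestamps: compute MTBI from incident times before and after a change, and compare the two:

```python
# Sketch of the measurement described above: Mean Time Between Incidents
# (MTBI) from incident timestamps. Timestamps are invented hours-since-start.
def mtbi(incident_times: list[float]) -> float:
    """Average gap between consecutive incidents, in the same units."""
    gaps = [b - a for a, b in zip(incident_times, incident_times[1:])]
    return sum(gaps) / len(gaps)

baseline = [0, 20, 45, 60, 88]      # incidents roughly every 22 hours
after    = [100, 160, 230, 290]     # after enabling proactive remediation

print(f"baseline MTBI: {mtbi(baseline):.1f} h")  # ~22.0 h
print(f"after MTBI:    {mtbi(after):.1f} h")     # ~63.3 h, i.e. clear impact
```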
Sean Sebring:
Yeah, no, the MTBI point really helped with that, because it’s about understanding, “Since we won’t have this problem, what metric should we track to see how much better we’re progressing?” And MTBI, mean time between incidents, isn’t a common one nowadays in a lot of what I interact with, I would say. And so instead of…
Matai Wilson:
Yeah, everyone’s focused on MTTD and MTTR, because we’re solving things retroactively.
Sean Sebring:
But what if we were in a state where, instead of tracking how long it took me to fix something, we tracked how long it’s been since anything needed fixing at all?
Matai Wilson:
Exactly. Back to the autonomous driving analogy: the human, as we talked about, is able to predict an accident before it happens, and then mitigate and avoid it. Hit the brakes before they hit somebody, or change lanes before somebody else moves in and hits them. This is the same kind of thing. We are going to hit the brakes before resources are overloaded, so you don’t end up with a critical shutdown or something like that. It is absolutely going to happen. People are trying to figure it out in autonomous vehicles, people are going to figure it out in IT observability, and they’re going to figure it out all over the place, because there’s so much value in proactively remediating incidents versus retroactively. Retroactive is expensive. Every time an incident occurs, it’s expensive. So there’s a ton of incentive to do this proactively.
Sean Sebring:
Oh, I have so many other questions.
Chrystal Taylor:
I know. I was just going to say.
Sean Sebring:
We don’t have infinite time.
Chrystal Taylor:
I was just going to say, I think we could have a whole other episode just talking about the business metrics you should be measuring to determine how effective your monitoring or observability solution (or both) is, including your incident response, because people don’t do it enough. I’ve talked to a lot of people, and I’ve worked in a lot of our customers’ environments and seen how they set things up. Just like any other major IT project, before you implement AI you should have a measurement of the before and the after. So if you’re not measuring mean time between incidents, if you’re not measuring how much it costs you every time you have an outage, and whether the changes you made to those alerts reduced that time and that cost, you should be. A, it’s really good for your job security: you should be telling people that the changes you make are improving those things.
It’s good for the IT business as a whole, because historically the IT department has been seen as a cost center. So being able to show where you’re saving the business money should be really important to you as an IT department. Measure stuff now. It’s good for you now, and it’s good to present to management and your upper levels. If you’re not already measuring those things, measure them now. When you make changes, you’ll be able to see how those changes affect those metrics, which in reality goes straight to the business’s bottom line.
Thank you so much, Matai, for joining us today on TechPod, and for sharing your knowledge and experience with us. It’s been enlightening for me, so I appreciate it.
Matai Wilson:
Thanks for having me.
Chrystal Taylor:
If you’re interested in learning more about how SolarWinds thinks about AI, check out the blog series, AI By Design, linked in the show notes. We also have the next SolarWinds Day coming up on October 8th, where we’re going to be talking about the future of IT arriving, and there are going to be some interesting things shown there. So if you’re interested in what we at SolarWinds are doing and thinking about with AI, please join us on October 8th. We’d love to have you. Thank you to our listeners for joining us on another episode of SolarWinds TechPod. I’m your host, Chrystal Taylor, joined by fellow host Sean Sebring. If you haven’t yet, make sure to subscribe and follow for more TechPod content. Thanks for tuning in.