IT Trends and Predictions for 2026 — SolarWinds TechPod 105

SolarWinds TechPod returns with its annual IT trends and predictions episode — and 2026 is all about Agentic AI.

In this episode of SolarWinds TechPod, hosts Sean Sebring and Chrystal Taylor are joined by Sascha Giese and Lauren Okruch to break down how AI, ITSM, automation, governance, and resilience will shape IT operations in 2026. As a leader in IT management, observability, and IT service management, SolarWinds offers a unique perspective on how Agentic AI is moving IT from automation to autonomous action, and what that means for governance, security, and the evolving role of IT teams.

Topics covered in this SolarWinds TechPod episode:

  • What Agentic AI means for modern IT organizations
  • How SolarWinds sees AI evolving beyond traditional automation
  • The rise of shadow AI and shadow IT in enterprise environments
  • Why IT governance and trust are critical in 2026
  • How ITSM is changing with AI-driven workflows
  • Energy, sustainability, and cost considerations of AI at scale
  • Resilience, multi-cloud strategies, and right-compute decision making
  • Why IT is no longer just a cost center, but an innovation engine

This episode is essential listening for SolarWinds users, IT leaders, sysadmins, service desk teams, and technology decision-makers preparing for the next era of AI-powered IT operations.

RELATED LINKS:

Lauren Okruch

Guest | Lauren Okruch

Lauren is a passionate Product Marketing Manager at SolarWinds, specializing in IT Service Management (ITSM). With a deep appreciation for the balance between structure and…
Sascha Giese

Guest | Tech Evangelist

Sascha Giese started his IT career in the early 2000s as a network and system administrator in a public school, where he learned that teaching…
Sean Sebring

Host

Some people call him Mr. ITIL (actually, nobody calls him that), but everyone who works with Sean knows how crazy he is about…
Chrystal Taylor

Host | Tech Evangelist

Chrystal Taylor is a dedicated technologist with nearly a decade of experience and has built her career by leveraging curiosity to solve problems, no matter…

Episode Transcript

Sean Sebring:

Hello and welcome to another episode of SolarWinds TechPod. My name is Sean Sebring, joined as always by my fellow host, Chrystal Taylor. And today, we’re going to be visiting something we do every year: IT trends and predictions, this year obviously for 2026.

Today, we’re joined by two guests, which we are very happy to have. A regular guest of ours, you see him often. He’s all about SolarWinds, our very own Sascha Giese. And joining us for the first time on TechPod is Lauren Okruch. So, Lauren, since this is your first time joining us on TechPod, would you give us a quick introduction about your role and what you’ll be bringing to the table as far as your experience and thoughts go?

Lauren Okruch:

Yeah, of course. First of all, thanks so much for having me. It’s super exciting to be on the infamous TechPod. My name is Lauren, as Sean said. I’ve been at SolarWinds for over a year and a half, not quite two years. I work in product marketing. So, I hope today to bring a perspective, not just about the technology, but maybe about perception and how everything matters along the fronts of the trends as well.

So, at SolarWinds, I tend to focus on our ITSM products and newly acquired incident response products. So, I think we’ll have a lot of great things to talk about. Definitely, talk about the tech trends. But also, maybe the broader picture of what that tech means for businesses across the globe.

Sean Sebring:

Absolutely. No, great stuff. And selfishly, super happy to have another ITSM advocate on the call. I’m usually tooting my horn by myself over here, even though these guys will unwaveringly support all of my… Anyway. So, we’re here to talk again about trends and predictions, something we do every year at the start of the year. A not surprising theme will show up in here: our friend Al. I mean, AI. Yes, AI. It was just, visually on the paper, that’s what it looked like: Al. Forgive me.

So, AI, we’re not surprised that’s going to be showing up, but there’s something that’s a little bit different this year as opposed to last year. It’s not that it wasn’t here, but it’s definitely hitting hard in the trends piece. You almost can’t not hear it, and that is the shift to Agentic. And so, Agentic is taking on… It’s almost like when ChatGPT first hit, right? It was this huge rollercoaster of, “Oh, my gosh, it’s revolutionizing everything.” And now, Agentic itself is changing what we thought of as AI, at least as far as how it’s leveraged in the business and tech world.

So, to get us started, because a lot of the trends and talk about Agentic is going to revolve around this. I’d like to get us started with, maybe Lauren, you can take a whack at it first. What do you think Agentic means to the public as far as how it is used in the industry?

Lauren Okruch:

Yeah, a great question. I think if we look at the solution that Agentic might bring, especially maybe in the world of ITSM, a lot of those, we’ll call them papercut issues. Those things that are really repetitive, they seem really small, but like a papercut, when you get so many of them, the pain is very real.

So, I think Agentic is going to really start to… We were in generative AI or in regular AI, let’s call it traditional AI. We were solving level one issues. And I think Agentic is going to take us to that level zero or even that LN, like the… It’s before it even becomes an issue, and it’s going to take all of those papercuts away. And I think that’s really going to… It could really do things like boost team morale. I think people might be a lot happier in their jobs. I think this was a topic last year. It’s going to shift what people do for their work. They might not be doing the traditional things they were doing before because now the Agentic might solve that.

So, they’re going to shift into maybe more strategic roles, things like that. But yeah, I think other than resolving those papercut issues, which I think are really undervalued in their impact, I think it might really transform the way we approach our work, not just the resolutions.

Sean Sebring:

Yeah, I was seeing that in the trends as well. Human jobs aren’t going away; they’re changing. So, we’re more in a supervisory role to AI: auditor, compliance, accepting things, risk included.

So, again, it’s very similar, especially from the frontline approach: we want to fix small issues with AI. But Agentic is taking a little bit of a different approach to it than we were previously, like you said, with traditional AI. So, Sascha, anything you want to add on how you think Agentic differs from generative AI?

Sascha Giese:

Well, many years ago, I left a job because the repetition killed me. I always did the same, the same, the same stuff over. And I think we all go through these phases where you master a job. And when the repetition continues, it gets terribly boring.

So, I have high hopes for Agentic AI to remove that element. On the other hand, it will still keep us busy because as you just mentioned, we have to supervise the AI. We have to continue to teach the AI new things and we have to control it. That’s very important. And I guess this will be a challenge at some point when an Agentic AI gets so complex that it does do things that we no longer understand, but that is maybe a prediction for next year, not for 2026.

Chrystal Taylor:

I agree. I think that one of the things that’s interesting about Agentic is that it really opens the doors for connectivity. So, agents are talking to other agents, which really gives us a lot more flexibility in what we can do. So, these tasks that you guys are talking about taking over, the reason why we’re going to be able to address those things is because we’re now going to have agents that can talk to each other in different platforms and things like that. And that’s been really interesting to think about how we’re… Like you said, Lauren, think about how we’re doing work and it’s going to change the way that we do work.

I do also think, Sascha, that I agree with you that in some ways it does mean that we are probably going to approach a state of not knowing how to do the task anymore, because there is… I went to Ignite two weeks ago and I attended a talk while I was there. It was about vibe coding, but I think it’s very important to also think about it in terms of Agentic: in order for us to continue to build and manipulate AI, we still need to understand the fundamentals and the basics.

And while right now, it’s very exciting. Agentic’s exciting. All the things that we’re doing with AI are exciting. There’s also fear, because some of the predictions were about cybersecurity concerns and things like that: while there’s additional connectivity, that increases risk in a lot of ways as well. Now, there are other entry points where they can be used. They can use Agentic AI to connect to other things.

So, the landscape is changing, as it always is, because this is IT. But it is going to be really interesting to see how Agentic AI transforms every role, not just tech roles, because it is not going to be just tech roles. I mean, you can see virtual agents in ITSM platforms now on websites, and those are going to continue to expand as they hook into Agentic AI. And now, you can ask questions about other things. And so, all of these things are going to expand, and that is a little bit terrifying because there isn’t enough governance around that. And I don’t think that we’re going to see enough governance around it come up in 2026.

So, I’m with Sascha on maybe following in the next five years, we’ll start to see that. But it feels like governance is behind. And so, industry, you’re going to be responsible for your own governance. And that can be scary in a way as well, because it means that people are maybe more willing to take risks for the bottom dollar than they would be if whatever entity they have to adhere to, a government entity or a compliance body, were to say, “No, there are major fines.” Because as we know, it comes down to the bottom dollar. So, I think it’s going to be a really interesting year, with Agentic really taking off, because it’s already begun. We were already seeing it places, and I think it’s really going to explode next year.

Sean Sebring:

To tie a bow on the what-is-Agentic question, all great stuff. The big theme that stood out to me in looking at the trends and all the ways it was phrased is that it’s autonomous versus just automation, which means it’s taking action versus just making recommendations.

So, the fact that it’s taking action, I think the word that Sascha used, and I can’t remember if I said it as well, was supervising, right? If I have an agent, which Agentic is in the word for sure, right? If I have someone on my team, I’m delegating that to them now with a level and degree of trust. And so, my job is to — fingers-crossed — hope that they’re doing the best and accurate things for the most part, and every now and then I’ll step in and monitor and audit and supervise the work that they’re doing. But yeah, it is a huge risk as well and the governance that you would need because we’re actually saying, “Okay, AI, I trust you enough to say, I’ll supervise what you did, not supervise what you are doing.” So, that level of risk is definitely there, but the power that it holds is crazy.

So, I think you’re right, big time. Over the next few years, it’s going to become easy enough to implement, but the gap we’ll have is around the governance to make sure what it’s doing is ethical and correct and accurate, I suppose would be a good way to put it.

Lauren Okruch:

The way you said it there though, to me, it doesn’t sound too different from a traditional org structure, right?

Sean Sebring:

No, not at all.

Lauren Okruch:

A supervisor and an employee, there is a high degree of trust. I think the thing that you really bring to mind for me is like, at what point do the supervising actions take place?

Sean Sebring:

Yeah.

Lauren Okruch:

And I think that’s an important part of the puzzle.

Sean Sebring:

Absolutely.

Lauren Okruch:

How much is autonomous? And where does the human stay in the loop before?

Sean Sebring:

Well, that’s where that governance piece will come in: there will be more policies around review checkpoints, and at what point during development of an Agentic role do we say it’s ready for release? What types of UAT do you run, right? It’s its own new type of app development within a process, right? It’s like you’re developing apps for processes now as you go, instead of apps for apps and what apps do, right?

So, as confusing as that could be. But no, these are exactly the kinds of things that I think are what we’re trying to predict is what’s going to be important next year as this technology is on the horizon. Well, it’s already here. Let’s be honest.

Chrystal Taylor:

I think though, based on what you were just saying, Lauren, the place to start is the same place we start with all automation, which is things that are already repeatable actions. We don’t have to just give it the keys to the kingdom right away and say, “Hey, go solve all of my problems.” While that is a pie in the sky dream for our bosses, I’m sure, of just being able to say, “Hey, go solve all the problems.” We don’t have to have any input. We’re not there. The technology isn’t there.

But also, I think that you can have a measure of starting to build trust within your organization if you start at the same level you would already start doing automation now, right? Things that you’re already handing off to an intern or a junior engineer or something like that, that you’re already like, “We can hand this off to someone else. We can hand this off to a script. We can hand this off to an automation process.” That’s a great place to start building trust in the work. And then you can start exploring what other things we can do, because that will, A, build excitement, because all of those tasks take time that you don’t want to be spending on those tedious tasks.

As Sascha mentioned earlier, we get stuck in the routine. We get doing these tedious tasks all the time, and then our jobs get so boring that we start looking for something else. So, I think that it’s a good place to start. And my hope for 2026 is that we don’t put the cart before the horse, as it were. Be a little more conservative with how you’re implementing your Agentic stuff so that you can build that trust. Because one of the things that I think we’ve seen with ChatGPT blowing up and all of that stuff over the years with AI is that, initially, there’s huge skepticism, and rightfully so, because it’s not quite ready for prime time, but everyone’s really excited about it.

And so, then there’s all the jokes, like it can’t tell you how many Rs are in the word strawberry, or whatever. There are clear and present flaws that you can find very easily, and that dismisses the trust that you have. And so, then people won’t want to use it for the actual task that they need to use it for. So, consider that as you’re moving into your Agentic era: work on building trust, because I think that that is important. Not only building trust, but reinforcing that the trust is warranted. Otherwise, it’s going to be a real interesting time.

Sean Sebring:

One thing I want to take from my notes that I really hope persists is a description of what Agentic AI means for the human roles coming up: stewards of governance. And I’m like, “Gosh, can I please get that in my title?” I want to be a steward. Let’s bring titles like this back. I want to be a steward of governance. Or a Duke, I’d take a Duke, that’d be cool too. But no, good stuff. Part of transitioning out of just talking about what Agentic AI means, I stumbled over saying we’re creating processes and automating the processes with other processes. And that brings me to something else that showed up in the trends, which is shadow vibe apps, which I hadn’t really heard of until I started doing research on what the rest of the world thinks is going to go on in 2026.

But shadow vibe apps are exactly that: I have such access to tools where I can automate my own processes that I’m basically making apps to solve my own problems without going through any of the governance. It’s so accessible to me. I can just make my own automations. I can create my own, potentially Agentic, solutions to my own problems, so that I can have more early glasses of wine because I’ve taken care of everything I need to in the day. I wanted to spark Sascha’s interest and pass this one over to him first.

Sascha Giese:

That was my keyword. I suppose when you work from home, we all are in the situation… When the mouse doesn’t move often enough, we go into like a hibernation mode. Maybe with the vibe code stuff, I can create a small tool that will move my mouse and IT won’t notice. So, it’s a shadow app, isn’t it?

Sean Sebring:

We’ll call them Sascha Giese vibe apps now.

Sascha Giese:

Yeah. Well, I mean, it is a real problem. We tend to use these tools. We used any kind of tool in the past, and IT had to put a lock in front of it for security reasons. And now, giving employees such a powerful tool so they can build their own, following recipes on the internet basically, that opens a can of worms that can be very, very dangerous, particularly when those are used by people who have no idea of IT. So, they don’t really know the impact of a possible outcome, and that is far worse than having a glass of wine too much, I suppose.

Chrystal Taylor:

I think that the shadow vibe apps are really just the next evolution of shadow IT. So, what we should see in the next year or so is software coming out that detects these shadow vibe apps. When you’re doing monitoring and things now, you can run discovery scans to find IT things: switches, machines that are coming in, whatever. BYOD has been a problem for a long time. You can find virtual machines that have been left behind. You can find all kinds of stuff right now with the stuff that we’re doing, but the next level of that is going to be searching out those shadow apps, because that’s where we need to be going.

We need to be solving that problem because Sascha is right, this is going to become a real problem, not just for individuals, but for the companies, because they’re going to have stuff running that they don’t know about. And then, we already have situations where people are spinning up things in the cloud and then that has unexpected costs and this is just going to escalate all of that. It’s going to get way worse.

Sean Sebring:

So, what we need is an Agentic process to identify other unauthorized Agentic processes. Okay. So, Lauren, do you have any thoughts on this before we change topics a little bit?

Lauren Okruch:

I do. And I’ll just preface this with, I’m an eternal optimist, right? I’m a glass always half full kind of girl. So, while I think there’s huge risk and everyone has pointed that out, isn’t it an amazing time of innovation? I feel like if organizations don’t just dismiss these shadow vibe apps, these shadow IT items, if they actually govern them, say, “You can’t use this, but explain to me what you were doing. Let’s talk about what you were building and see, can we replicate it in a way that’s compliant with what we’re working in?” I think this is a really exciting time.

People have so much more ability and freedom to solve their own problems. Isn’t it Steve Jobs who said, “I’ll hire lazy people all day long because they find the shortest route to the solution”? While shadow IT has a lot of risk and it’s very serious and I don’t condone it at all, I do think the level of innovation that’s available right now is really exciting. And I think if people just harness that, and by people, I mean organizations: if they discover these things, get across the severity of it, but also take the time to say, “Now, show me what you were doing and let’s see if we can make it something that works across the org.” I think it’s something to be concerned about, but also a really nice area of opportunity for innovation.

Chrystal Taylor:

Oh, I completely agree. I think that this is a huge time for innovation, but I also think it’s the shadow part that we have an issue with. So, really what would be optimal is if businesses provided a sandbox that people could go play around in that wasn’t going to jeopardize everything else if something goes wrong.

Lauren Okruch:

Smart. That is the answer, Chrystal.

Sean Sebring:

So, I couldn’t agree more. And this is a great segue into what I think would be a great next topic. And we’ll focus on IT’s perspective, but know that it could be applicable to any part of the business anywhere. It’s just, I see it starting in IT again because IT is just more comfortable and familiar with playing with technology. And that’s the, let’s start with IT isn’t just a cost center anymore.

And so, one of the things that comes to mind, and this goes to exactly what both of you were saying, is: what if the metrics for me are now changing as an IT person? It’s not about the number of tickets I close. Maybe it is, but that’s not the biggest one. What if it’s making sure that the tickets closed are in good shape? But then, another key metric that my management is now looking for is the number of innovations that I did, right? How many processes did I automate? And with that comes, again, the kind of governance aspect that we brought up earlier: what’s our process for approaching the creation of an automation and saying, “Here, I want to automate this. Do I have a safe place to test it out? Can I build it and then hand it over to a different team that’s responsible for vetting my automation?”

But again, encouraging companies, more than they already do, to encourage the innovation. Because I’ve been at several companies who have said, “Oh, we’ve got innovation prizes, we’ll have campaigns where we say, pitch your innovation idea.” But if we really truly are much more capable of leveraging Agentic AI to do a lot more work, I’m now free to be more innovative. I’m more free to now say, “Okay, I need to focus on optimization,” and the employee should be given the focus and the space to do that. And I think of Google when I think of that, because there’s a percentage of time that Google intended to give, at least in the past, this is how it was, to focus on this kind of innovation, something that’s not dedicated to your normal specific role of just working on tickets.

And so, I think, again, I don’t know that that will happen in ’26, I would love it. But I think it’s something that we should start considering or developing the governance around. And this is maybe for recruiters and managers out there: when I’m looking at a role, do I want to create space in the job description for this? And do I want to set it as an expectation, something that I want to see from an employee, rather than just what the current job description looks like today? Chrystal, maybe you should go first. I see the teeth gritting, yeah.

Chrystal Taylor:

Okay. For one of the times that I’m in disagreement with you, Sean. I think that we are going to see new metrics, yes. But harnessing a metric around innovation, a KPI, a requirement, for non-creative roles, even for creative roles: it is extremely challenging to create metrics around creativity, and it can also be extremely damaging to people who are not interested in doing that.

There is a large subset of people that work in tech roles that just want to put their head down, do their tasks, finish all their tasks for the day, and leave for the day. And injecting that requirement could be damaging not only to your employee morale, but also to the employees themselves. They may start looking elsewhere because they don’t want those requirements. You look like you’re ready to do battle. We’re about to have a rap battle here.

Sean Sebring:

Oh, this is a great juicy debate for what we can predict though, because roles should be evolving too. And so, feeling like we want to stay locked into accepting that you can keep doing that same old role, that we don’t need to evolve the role, I think that’s the dangerous part, where all these predictions and trends say, “If you don’t evolve, you’re going to get left behind.” So, maybe the roles have to evolve. And if you want to…

Chrystal Taylor:

I think they’re two different things though.

Sean Sebring:

Okay, fair.

Chrystal Taylor:

I think that the roles do have to evolve. Their responsibilities do have to evolve, but requiring creativity is different than requiring them to use a new technology. You have to go learn a new technology and learn how to use that and even learning how to automate. You could argue that that’s creativity, but as soon as you put a KPI around that, a lot of people will immediately balk like, “Oh, I don’t know how to…” I get overwhelmed with too much when someone tries to tell me, “You can go do anything you want.” It’s why I can’t play Minecraft. “You can do anything you want in this space. I need you to do 10 cool things in this space.” And you know what I’m going to do? I’m going to do something really boring because I’m completely overwhelmed and I’m just going to make a thing I know I can make that’s already been made before.

So, I think that the challenge here is that some roles could probably benefit from something like that, but how can you require it? If you have five people in one role and two of them are innovators, it’s still extremely great for your company. It’s extremely great for your role. The other three people might benefit from those innovations. But being the person that creates those innovations might not be in their wheelhouse. And I don’t know how you can make it a requirement. Making creativity as a requirement is difficult.

Sean Sebring:

Hey, we’re predicting. We’re not aligning. Okay?

Chrystal Taylor:

That’s true.

Sean Sebring:

We’re both just predicting individually, which is a great… Let’s pass it over to our guests. Lauren, what are your thoughts?

Lauren Okruch:

I think you guys both make great points, and I think the point you’re both making is that the job landscape after AI and Agentic is like Minecraft. It’s going to be constructed in a way that I’m not sure that the four of us can sit here and say, “This is how it’s going to go,” right?

Chrystal Taylor:

Yeah.

Lauren Okruch:

So, I think what will happen as it always does when there’s new technology, people will show different skills. Those skills will be valuable to an organization. A job will be created in which a job market will evolve. And probably, it’s going to be a group of people who are innovators and creative and a group of people who are really good process-oriented doers.

And I think you’re both making that point, which is the funny thing, because that’s how I see it: it’s going to evolve. And I think we’re seeing that what’s going to be required with Agentic is going to bring forward maybe really unique roles in tech that haven’t existed before, especially in IT. People might show skills that they didn’t show in the past because the application is quite different, or people might move into the IT space who have lived in other types of roles that aren’t as, I’m not going to say necessary, but that have been automated. But maybe now some creative people might move into… It’s hard to say. But I think everyone has proved the point that it’s quite wide open.

Sean Sebring:

It’s funny you say that. At my previous organization, I did have a role created for me, and just bear with me and use your imagination. My title was up in the air. My VP actually asked me to write my own job description, and my title didn’t exist yet. What we almost landed on, until we realized the implications, was process improvement managing professional.

And so, that was almost the title I went with, but we ended up just sticking with process improvement manager. It was exactly that kind of role though. I was basically a reconnaissance guy who just went out into any part of the business, and we were a smaller insurance company, and said, “Show me how you’re doing your processes today. Let’s see if we can automate them more.” And it’s with the tool that I’m now helping support and sell, which is the SolarWinds Service Desk. But just a funny little piece of history there that, yes, creating the role was definitely something I actually went through, and it almost had a really interesting title. But Sascha, what are your thoughts on how this transforms the role landscape, this idea that ITSM, or IT, sorry, isn’t just a cost center anymore? We’ve been saying things like this for a while, but what’s your take on how it’s going to change things?

Sascha Giese:

I think we always need to evolve a little bit. And there’s probably a phase for every one of us when we start in a job: we are impressed by the way things work and we try to align ourselves with the processes. But after a while, I guess we look for shortcuts. We look for ways we can improve our own job. And ideally, those improvements have a bigger impact on the organization.

And we might call this innovation now, which is a fancy word, but the process itself is nothing new. Companies have done this for decades. They just called it optimization of processes, efficiency, cost reduction. And that is innovation too. If I change a process in a way that saves my company so and so many first world currencies, that is a great benefit. And that will obviously have an impact on me again, because it will get me a raise or whatever it is.

I think what Chrystal mentioned is that some of us are just happy to do the job the way it is. Let me throw a third persona into the room. So, we have the innovator, the person who keeps their head down, and the sine wave, the human sine wave. There’s probably a time where we just want to go with the flow, and there’s a time where it’s like, “Hey, yes, I have this idea. I just do this now.” And I think we all find ourselves somewhere there. And as long as we do something useful, at least for us, we do something useful for the company we work for.

Sean Sebring:

I love our little debates, guys. This was great. It was a great way to actually individually have some predictions, and I’m mostly talking to Chrystal; we don’t battle as often as we could in our podcasts here. But Sascha, since you were the last one talking, I think the next topic, I want to go back to you on something that we’ve had dedicated episodes on. And I just want to see what your thoughts are for 2026, because the energy implications of all this AI are just starting to get even more real.

I think it was only visible to IT for a little while. And now, governments are much more heavily involved in, “Uh-oh, what’s going on?” And infrastructure concerns and the heat, all sorts of different things. So, Sascha, it’s been another year; just quickly, as part of this episode, what are your thoughts on that?

Sascha Giese:

Well, new tech, more requirements, more data centers, more energy consumption, bad for the environment, yada, yada, yada. That’s something that happened in the past and continues to move on. And I think yesterday, I read something about Microsoft building a new data center somewhere in the US and they’re reopening a power plant in the neighborhood which was already retired. I just read it somewhere.

I think this goes under a slightly bigger umbrella: the overall cost of artificial intelligence. And that is something that organizations might not see in the madness to innovate. And everyone is like, “Yes, we can use artificial intelligence to improve the efficiency of our individuals.” And yes, organizations rarely talk about it, but one of the ideas is to make certain people redundant. No one talks about it until the next layoff happens, whatever.

So, the idea is cost reduction, fine, and that is perfectly valid, but they don’t see the long-term cost of artificial intelligence. The energy consumption, the governance, we’ve covered it a few times. That is a lot of money. And I don’t know why it is so difficult to put everything together in a calculation and see what the real costs are, not just for the environment, but for the business too.

Chrystal Taylor:

Do you think that as businesses start to realize the real cost of AI, we’re going to start to see a clawback similar to the move to the cloud, when everyone was so excited and moved everything to the cloud, and then realized that it’s quite expensive to have everything there and maybe it’s not the best answer for everything?

Sascha Giese:

I think we saw that already, right? There are a couple of companies that laid off some people, and then they discovered that maybe that wasn’t the best idea and rehired many of them, or most of them. I have one particular example in mind, but I don’t want to name the company. The reason for rehiring them was based on poor feedback from the customers: “I’m talking to an AI.” So, it’s not really cost-related, but I think mistakes will be made. Maybe it’s a good idea not to alienate your former employees and your customer base in such a process.

Lauren Okruch:

On the topic of the environment, I just have a question to see what everyone thinks. Do you think it’s the responsibility of the data center, or of the company using the data center, to find additional energy options that are maybe better for the environment? For example, it’s a huge topic here in Ireland that we have a lot of data centers. Ireland is quite small, and its energy infrastructure needs growth to support these data centers.

So, is it the responsibility of the companies utilizing the data centers or involved in the data centers to help with those initiatives in a way that is best for the environment? Or is it like, “Nope, they can just use them?”

Chrystal Taylor:

I feel like morally, yes, it should be. But practically, they offload that by just buying the energy from somewhere. So, then that responsibility is offloaded to whoever they buy energy from. So, morally, in a utopian world, if you are going to be destroying the environment with what you’re building, even if it’s really cool, then you should be responsible for finding a more efficient or less…

Lauren Okruch:

Sustainable.

Chrystal Taylor:

Yeah, a more sustainable way of doing that thing. But at least in America, I don’t foresee that happening anytime soon. And we already see it. We’ve talked about this in the past. We did an episode with Sascha on sustainability and environmentalism last year or the year before, and we talked about that. Part of the perceived responsibility is, can they offload it somewhere else? If you are outsourcing by hosting your entities in the cloud, where does that energy consumption responsibility fall?

So, it’s very complicated, I think. Maybe in a utopian world, if you know you’re going to be using that new Microsoft data center, the one reopening a power plant, right? If you know you’re going to be using that amount of energy and it’s going to be harmful to the local area and all of those things, it would be lovely if you were responsible for increasing the green output in that area to balance things out. But I don’t think that the world we live in supports that.

Lauren Okruch:

Fair. I had a really big aha moment when I read… Okay, because I’m from the Midwest originally, right? And I’m a long-term resident of Ireland. So, there are a lot of pleases and thank-yous in my life. We are polite, two very polite groups of people. And I used to often interact with the AI in a similar way, right? I would say, “Oh, thank you for that,” or whatever. And it would send me another response: “You’re welcome.” Somebody alerted me to the fact that that was a huge energy waste, and I had never considered it. But actually, yeah, every time you require additional processing that’s not completely relevant to your query, that’s additional energy usage. So, now I’ve become, I fear, not very kind to the AI, but in hopes of considering…

Sascha Giese:

You are efficient.

Lauren Okruch:

Yes, efficient.

Chrystal Taylor:

Well, I mean, take that a step further, right? Look at kids today. Kids are using AI for everything and they’re asking all kinds of questions. People who don’t understand technology or the price or the cost or the energy consumption or any of that stuff use it to answer questions all day.

So, even if they’re not being polite, I mean, it’s now the default in search engines, you get an AI response. So, whether or not you’re prompting the AI, it is being prompted. So, I think that there’s also just a dearth of knowledge in the general populace. We’re not talking about IT now. A dearth of knowledge in the general populace about the cost of these things, and all they see is how it saves them time. And so, that’s all they care about. So, there is… I don’t know how we maintain that balance, but…

Sean Sebring:

I think Lauren’s being wise; she’s just preparing for the inevitable takeover, and you may want to go ahead and be kind to your overlords. So…

Lauren Okruch:

That’s where I was going with it.

Sean Sebring:

… a quick thank you might mean everything in the future, Lauren. So, I don’t know. We’ll see.

Lauren Okruch:

Yeah, I just loop in the kindness with my original query now. That’s all. And no additional questions, just thank you so much in advance.

Sean Sebring:

My politeness says, “I have to figure out a way to incorporate it efficiently.” So, very good. We already hit this topic that I wanted to get to next, but maybe we can do a little bit more on it, or maybe this will just be me attempting to wrap it up. But you mentioned the post-cloud era. It’s not really a post-cloud era, though it kind of is, but there’s a thought, and I’m curious if you have follow-ups on this, that edge computing is becoming powerful enough to run some real AI models locally.

And so, there are lots of questions about whether we could bring it local. Does AI have to be part of that? And a lot of it is being brought more local, maybe because the hardware is becoming more powerful, smaller, and more capable at a local level. But anyway, it’s bringing up other questions. There’s this new term I’ve also seen, which is right compute first, right? Is that how I’m going to start making my decisions? Is it edge? Is it local? Is it cloud? Am I looking for post-cloud freedom? And obviously, we know the cloud is not going anywhere. If anything, what I’ve understood is there was this attitude of, if you’re not cloud, you’re not doing it right. So, there was the blind leap.

And so, now that’s where that term of right compute first is coming into play, which is, where does it make more sense? Companies are being a little bit more strategic about where to run things. And do you think it’s going to be possible with AI in ’26, or is it still a good ways out from the compute power? That’s where I wanted to go with this. But go ahead, Sascha.

Sascha Giese:

Tell me you were surfing gartner.com without telling me you were surfing gartner.com. So, right computing. Wow. No, I mean, it’s fine. It’s fine. Edge computing was a hot topic maybe eight years ago or something, a little bit before COVID, right? But the use cases we had were things like managing traffic, et cetera, putting something closer to the streets.

Using AI for this is another idea that could be good. I see advantages when it comes to data sovereignty and running a model closer to the business, actually. But obviously, the power is one thing, availability is another thing. Oh, we should talk about resilience later, actually. So, it won’t be that easy. It’s an interesting use case. I don’t know. We could probably try it. I’m pretty sure some will try it, but does the infrastructure support it? Do we need 6G, 7G before we can actually use it? I’m not sure if I see this happening in 2026 or… Well, no, let me rephrase it. We might see it in 2026, but will it be useful in 2026? That I’m not sure about.

Sean Sebring:

Well, you brought up a great segue then. Resilience, Sascha, take us away.

Sascha Giese:

So, we are now in December, and a couple of weeks ago, half of the internet was gone, and a week later, the other half of the internet was gone. And it turned out it was one hyperscaler, and then the other hyperscaler. And then, two weeks later, another company starting with C had a bit of an impact. So, our businesses choose these platforms to get a few more nines of uptime, resilience, et cetera, et cetera. And those organizations have fancy marketing statements. And usually it worked, but we’ve now seen that there are instances where this does not work.

So, these are probably individual problems somewhere within these companies, but there should be a rethinking in 2026. And this is more on the architectural level. How will I, as an organization, lay out my own infrastructure? Will I trust a single provider? Even if they say, “Hey, we have 20 different locations in your country alone.” No, that’s not good enough anymore. So, organizations need to rethink how to deal with big-scope incidents in 2026. They probably should have in 2025, but that’s gone now.

Sean Sebring:

Well, we had the spark in ’25, so now we’ll figure out how to put out the fire. So…

Chrystal Taylor:

Yeah. Multi-cloud is already becoming more common because of what Sascha just pointed out. That incident is not the first time in the last year that there’s been an outage across half the internet, which is a real problem. There are a lot of organizations that cannot support that level of risk when it comes to their systems. They need the resilience, and they already purposefully build HA and disaster recovery into their platforms; there’s a lot of that. And so, maybe they had previously moved into the cloud and were trying to centralize in one, but I don’t think centralized cloud is going to be a thing anymore. It’s not supportable. I’m with Sascha on this one.

Lauren Okruch:

I agree completely. And I think as the dependency on technology increases, this is a critical, critical topic to understand. A personal experience: last summer, I was flying back from visiting my family in the US on the day that there was, I believe, a Microsoft outage. But it took down a lot of the airline software.

Luckily, the airlines had a bunch of, I guess, processes in place, and we still flew, but there was no online check-in, there were no updates, there was no baggage check-in. My flight was on time, but I think it was probably in the very, very small percentage of those that were. And that had a massive impact; think how much money was lost. But also the safety protocols; all these things are really dependent on technology.

And I think if you think about those real-world examples of resiliency, as we become more and more dependent on technology for things like this, okay, travel is generally a privilege, but there are safety protocols involved and there are things that are really dependent on the health and wellbeing of people. So, we have to consider outages as a matter of urgency with those things in mind. So, I completely agree, but that’s my two cents to dramatize everything.

Chrystal Taylor:

You sparked another question for me, Lauren. Do you think that this will prompt an education campaign? The reason why I’m asking is because we are all of a similar enough age that we grew up both before the internet and with the internet, but there are about to be, or already are, professionals and adults that have grown up in a time where they always had the internet and the apps, and they don’t know what to do.

So, in the situation like you described, they may not have the tools that they need. We can figure it out, call people, whatever. But there is a generation coming and future generations are coming up in the world and maybe have no clue what to do when the technology isn’t working. So, do you think that these internet outages are going to prompt more of an education campaign? Because people do have backup systems, but you might not know what they are if you’re just a user of those things.

Sean Sebring:

I’m thinking of technology bug-out bags. I have this bag in the corner of the room I never use, but it’s there in case the apocalypse happens and I need to take my family into the woods to survive. We need one of those as an organization, right? What we’re getting at is, if the internet goes down, what do we do? What are our processes?

And my opinion on it is, I don’t think that it will happen until we see more outages like the two that happened this fall/winter, right? Until we start to really experience it, because there’s just too much, I suppose, blind faith in the persistence of the convenience and availability of this kind of technology, there’s not a market for it.

So, if there’s not a market for it, there’s going to be your small cult of folks that have their little community. They might have some of their own guidebooks, processes, websites, and maybe there will be a Kickstarter, and one org will make some okay money for a little bit; it will trend. But until technology starts failing on a trend, I personally don’t see it becoming something that shows up in the near future at all.

Lauren Okruch:

Or is it that until those of us who have lived analog and digital lives leave the workforce, is it even a requirement? Are we always there to know the processes, to know the backups? I actually don’t know how to answer this because I have the brain that has lived both, right? So, I feel quite confident that I could solve things without digital, and I don’t have that fear of the unknown.

So, it’s an interesting question. On my flight day, when I went to drop my bag off, they gave me a handwritten tag. I asked, “Am I ever going to see this again?” He said, “I’m not sure.” And we went on our way. But there was obviously a process, and I did receive my bag. So yeah, I’m with Sean in a different way. Maybe until the workforce is a majority digital-only people, they might not feel the need to create this education or these campaigns until it’s, hopefully not, an emergency.

Chrystal Taylor:

Okay. But is it not then our responsibility to leave them with the tools? Because this is actually a very similar question to the “we’re losing fundamental skills because of AI” question, right? It’s very similar along the lines of, “Are we just going to leave them with no tools to support that?” Because they don’t know how to deal with those situations. So, I mean, are we going to go through the process of building and learning those things again? Is everybody in every industry going to have to go through that again in 20 or 30 years or whatever, when we’re all retired and not working anymore?

Sascha Giese:

So, Chrystal, did you teach your son how to use a pen and a paper?

Chrystal Taylor:

Yes.

Sascha Giese:

Okay. So, he’s equipped for the worst failover scenarios.

Chrystal Taylor:

In theory. In theory. But their first port of call is using the internet to solve all their problems. So, the way that we teach even is internet-first. And obviously, this is, I guess, partially a question about parenting, but I’m really talking about people that are already in the workforce, because as a parent, I am responsible for teaching him how to live in the world if technology fails, sure. But if we are taking it for granted, like you have all said, we have reached a state of, this is the world that we live in now, it’s always going to be there. We feel assured that the internet is always going to be there. So, will you even think about doing it? Sean, you have kids; will you even think about doing it? I like this question.

Sean Sebring:

Yeah, it’s a great question. And as far as how individuals operate, absolutely.

Chrystal Taylor:

Interesting.

Sean Sebring:

I think we’ll all keep our own idea of what should go in that bug-out bag in the corner, right? But as far as how industries go, and bringing it back to 2026 predictions, I don’t see it being something of anyone’s concern. Someone else on the call, I think it was you, Lauren, who phrased it this way: there’s just so much excitement about innovation right now that nothing else is on anyone’s mind, right? It’s just, what else can we do, not what will happen if these other things don’t work anymore.

So, do I see more outages like what happened this fall/winter in ’26? Absolutely. And now that we’ve become so much more dependent, I think it’s going to, and this is another topic, even though I don’t know that we’ll have a chance to get to it, give threat actors more incentive to see what happens if they take away our toys, right? See what happens. See how the world flails without these things. And that’s something that AI is going to play into as well: how those threat actors take action.

Again, I’m not sure that we’ll have time for this. But it’s a fun, philosophical, scary topic. But it’s definitely something that I think we shouldn’t stop asking is like, when do we say, “Okay, I can only be so dependent on technology without realizing I need another plan.” Lauren, go ahead.

Lauren Okruch:

I do think it brings up a really crucial, like the essence of what we’re talking about is the massive change of decision making, right? Our shift is in how we make decisions. So, now I don’t feel the need to critically think about a decision I need to make. I can just go ask AI. And I think that covers what we talked about earlier in the job scope.

So, let’s take a service desk agent, for example. Once I have the AI, is my prior knowledge null and void because now my decision-making is based on the AI? Or do we view it as it’s leveling the playing field that people can participate in higher levels of a job? But I do think in this example of having a fail-safe plan, we need to understand that our decision-making has changed and are we still going to continue to equip people with the ability to make a decision or research things or find their own solution?

I think that’s the essence of what we’re talking about is, are we going to completely go all-in on letting the AI make our decisions? Or are we still going to equip people with the ability to problem-solve and critically think based on what they have available for the problem at hand?

Sean Sebring:

I guess to tie this one up, I like the way you put that because it made me think of it a little bit differently. Even if I’m supervising and approving all these things, I still have the majority of the knowledge that I would have if I was doing them manually. Instead, it’s just, let’s call it, how in shape are my muscles to perform a specific activity? Can I do all these things? Sure, but I’m pretty out of shape when it comes to quickly responding to somebody. I would have to go back to a different approach to do the research or respond.

So, I think it’s more of like a muscle memory or being in shape for what the necessary activity is. But since humans are so adaptable, we’re the ones who got the processes this far anyway, I think we would be able to adjust as necessary.

Lauren Okruch:

I agree.

Chrystal Taylor:

I think as well, you get better responses out of all of the AI stuff if you have that fundamental knowledge, if you know the right questions to ask, if you know the right prompts to give it. If you don’t have the fundamental knowledge of whatever you’re trying to get it to do, you won’t necessarily be able to get to the end result that you want. So, I agree. I think you still have to work those muscles, maybe just not as hard as you used to.

Lauren Okruch:

Fair.

Sean Sebring:

I’m getting old. I’m just kidding. So, I think we’ve done a pretty great job of exhaustively talking about AI especially, but some good topics and trends for 2026. Guests, do you have any other topics that you want to approach as we prepare to end our episode today?

Sascha Giese:

Well, I like to talk about games, like the majority of us do. Now, I read something the other day, so we should probably postpone this discussion because GTA is not making it next year, right?

Chrystal Taylor:

It’s been delayed again.

Sean Sebring:

Well, we’ll definitely have to do an AI-dedicated topic when it comes to games and maybe entertainment altogether.

Chrystal Taylor:

Oh, that’s a contentious topic, my friend.

Sean Sebring:

Well, then it’ll be some good content.

Lauren Okruch:

I’m curious. I want to know everybody’s favorite at-home automation. What things are people automating in their own lives?

Sean Sebring:

Google Photos.

Sascha Giese:

My mailbox. I live in an apartment building, and I don’t see when anyone drops a letter into my mailbox. So, I put an environmental sensor in my mailbox, using Z-Wave as the protocol. And my watch goes off whenever I receive mail. Unfortunately, this will only get me a little bit closer to the bills I receive all the time, but the idea was there. It works. It works.
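[Editor’s note: for readers curious about the trigger side of a setup like Sascha’s, here is a minimal, purely illustrative sketch in Python. The lux threshold, debounce window, and function are hypothetical, not details of his actual Z-Wave setup, where the hub itself would handle the sensor event and push the watch notification.]

```python
# Hypothetical values -- a real setup would tune these to the sensor and mailbox.
LIGHT_JUMP_LUX = 50.0     # brightness jump expected when the mailbox flap opens
DEBOUNCE_SECONDS = 300.0  # suppress repeat alerts (flap closing, re-checks)

_last_alert_at = float("-inf")  # timestamp of the most recent alert

def mail_detected(prev_lux: float, curr_lux: float, now: float) -> bool:
    """Return True when a sudden brightness jump inside the normally dark
    mailbox suggests the flap was opened, rate-limited to one alert per
    debounce window. The notification (e.g., to a watch) would hang off this."""
    global _last_alert_at
    if curr_lux - prev_lux < LIGHT_JUMP_LUX:
        return False  # no significant light change, mailbox still closed
    if now - _last_alert_at < DEBOUNCE_SECONDS:
        return False  # still inside the debounce window, skip repeat alert
    _last_alert_at = now
    return True
```

The debounce matters because a single delivery can produce several sensor readings in quick succession; without it, one letter would buzz the watch multiple times.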

Sean Sebring:

That’s great.

Lauren Okruch:

Oh, that was a good idea.

Sean Sebring:

That’s great. I had said Google Photos, and it’s actually so automated, I wouldn’t even do it if it didn’t already do it for me as a process. But I love it because I’m really good at being in the moment, to the point where I don’t capture any moments. So, it’s such a blessing to me, because I grew up with photo albums and I’m not on social media. So, when I take pictures, I almost never touch them or look at them again. I’ve got all these pictures on my phone that I’ll never use, never look at. I don’t really post to put them somewhere, to go back and be like, “Oh, I got a like. Let me go. Oh, yeah, I remember that picture.”

So, what Google does is bring up memories if you use Google Photos, right? You’ll see it say, “Oh, this is you last year, you seven years ago.” But it also has a book option. So, I’ve created a tradition where I will do a book based on the year at Christmas every year. So, every Christmas we get a “The Sebrings,” and it’s just me and my family, what happened in that year. And so now, speaking of getting off the internet, I have my hard copy of memories that I wouldn’t have otherwise.

And Google does so much of the work for me, including just reminding me like if I started the book and didn’t finish the book, it’s like, “Hey, you were working on this. You want to finish buying it?” And I love that because I would just forget to make the time to do something like that without it.

Lauren Okruch:

That’s really nice.

Chrystal Taylor:

That’s really nice. Yeah, mine is boring. It’s lights. I just got new RGB exterior lighting installed in time for Christmas, and all of my lights are automated. So many RGB lights in my house right now.

Sean Sebring:

The internet of things. Lauren, you brought it up. So, what’s yours?

Lauren Okruch:

Okay, I have an existing automation and a 2026 automation. My existing automation: I have ADHD. So, on your iPhone, in Reminders, you can set it to remind you of something when you leave a location, which was a huge problem for me growing up, that I would always leave the house without the thing.

So, now I can tell Siri to set a reminder and have it remind me when I leave, since it knows my physical location, or when my phone connects to the Bluetooth in my car. This has changed my life. So, as long as I remember, in the moment, the thing I need to take with me and I set that reminder, I can’t tell you, things are so much better for me. But my 2026 automation is, I think I want an automated cat litter box. I want that bad boy to just detect and clean.

Sean Sebring:

My brother…

Chrystal Taylor:

My cousin has one. She loves it.

Sean Sebring:

Yeah, it gives you some pretty interesting metrics that you wouldn’t think are necessary and you start paying attention to.

Lauren Okruch:

Really?

Sean Sebring:

Like how much your cat weighs before and after, and that kind of thing. So, it’s some unique metrics, but very handy when it comes to keeping it clean. Really, really helpful.

Lauren Okruch:

That’s what I’m looking for. So, that’s my 2026 automation goal at home.

Chrystal Taylor:

You know what? That’s my last-minute 2026 trend addition is that we’re going to have even more unnecessary metrics added to literally everything.

Sean Sebring:

Litter-ally everything?

Chrystal Taylor:

Yeah.

Lauren Okruch:

Litter-ally.

Sean Sebring:

Dad joke.

Sascha Giese:

So, when we sit here next year, Lauren, you can tell us about the poop summary of your kitty, right?

Lauren Okruch:

I can’t wait, yeah. I think I might pair it with the litter that is color-coded to tell you about certain things. I think I might cross-reference all of my metrics.

Chrystal Taylor:

You’re going to have so much data.

Lauren Okruch:

I’m going to come to 2027 IT trends with a spreadsheet. Don’t worry guys.

Sean Sebring:

Well, it’s not so bad being trendy. That’s a line from a song; go look it up if you want. Great ska band, an older one. Anyway, I want to say thank you to our guests. Again, it’s fun when we get extra guests, a bigger conversation. So, Sascha, thank you so much for joining us.

Sascha Giese:

Happy New Year.

Sean Sebring:

Lauren, thanks for joining us and hopefully we’ll get to have you back on again soon.

Lauren Okruch:

I would love that. Thanks so much for having me. It was fun.

Sean Sebring:

It’s been an exciting episode. I’m your host, Sean Sebring, joined by, of course, fellow-host, Chrystal Taylor. If you haven’t yet, make sure to subscribe and follow for more TechPod content. Thanks for tuning in.