Hello, everyone. I'm Dave Rubinstein, editor-in-chief of ITOps Times, and I'd like to welcome you to today's presentation, "AI Meets Network: Now What?" As we've been covering, there's been a lot of buzz around AI and networking, but the question is: is it real? In today's program, we'll explore the potential and the limitations of AI, and you'll come away better able to understand where AI can add value and what is merely hype. Joining me today to lead the presentation is Leon Adato, principal technical evangelist at network observability platform provider Kentik. Thanks for having me. Hey, how are you, Leon? Doing good, doing good. Also with us is Charlcye Mitchell, a technical marketing engineer at Kentik. Hey, Charlcye, how are you doing? Great, how are you? Good, thanks. Before we get started, I have a couple of quick announcements. First, there will be an opportunity to ask questions at any time during the presentation, just by entering them into the chat, and Leon and Charlcye will get to them as they can. Second, this session is being recorded and will be available on demand shortly afterward. And now, to take it away, here are Leon and Charlcye. Thank you again, Dave. Okay, I'm going to dive right into the slides, because they help frame our conversation. Here we go. Once again, welcome, everybody. The title is "AI for Networking: Real or Hype?" To give you an idea of where we're going with this, our goal today is to explain what AI is overall (there are some layers to this); more specifically, what it can do today for infrastructure and operations (I&O) and networking; what's on the horizon; and finally, what's easy and what's hard. Certain things come up to speed very quickly, and certain things take a lot longer to implement in any particular platform, and Charlcye and I are going to try to contextualize some of that so you understand why. And I personally want to steer clear of school lessons. We want to inform, but not in a boring, sleepy way. Charlcye, anything to add to that? I am going to do my best also not to delve into school lessons. Great, we'll both try to steer clear. We're going to make this a fun, really informal conversation. I've got my tea (honest to god, this is tea) in hand. So, what we're covering today: first, the AI tsunami, really just a brief history of AI time. Then, once we've contextualized that, AI for infrastructure and operations and networking, what's present and what's in the future, and finally, separating the wheat from the hype. A pretty straightforward structure. We've already done the introductions: again, my name is Leon Adato, and my partner in crime today is Charlcye Mitchell. So, diving right in: how did we get here? This is that brief history of AI time. Charlcye, I'm going to keep this brief, but you may or may not be surprised to hear that this is not our first AI hype cycle. We have been here before. There's a movie called Desk Set, starring Katharine Hepburn, from 1957, about what we'd now call the LLM, the large language model, that has been installed in the office, and about whether or not it is going to take everyone's jobs. This has happened multiple times in the past.
That fifties-and-sixties AI scene was followed by the first of what we call AI winters, where the hype and the funding all died down, because at that time compute was really expensive, and we were using rule-based methods, which are not great at dealing with real-world variability. Right. And we've seen runs at AI-like things before. I started in tech thirty-five years ago, and one of my first encounters was with a little tool called Q&A. It was a database, but the idea was that as you built the database, you answered questions about each field, so that you could then ask plain-English questions about the data. We'll talk more about that later in terms of labeled or tagged data. But that was back in 1989. So we've kept wanting this. Look, so much of what we deal with today is modeled after Star Trek, from cell phones, which had the flippy-uppy thing because that's what we saw with Kirk's communicator, to everything else. We really do try to model things on sci-fi. And I want to point out that a lot of times, even now, the thing we label AI ends up being nothing more than three if statements in a trench coat. And yes, I had to work that in, because I have this great graphic and it's a lot of fun. But part of today is making you aware of when something is just really spicy autocorrect, just predictive text, and when something really has the elements of machine learning and artificial intelligence. So, back on track: how did we get here? We were talking about the seventies and that first winter, and that takes us into the late eighties. Charlcye, what happened then? We had another algorithmic innovation that thawed that winter: the introduction of backpropagation into these AI algorithms. That allowed models to learn from their mistakes by adjusting the weights in the model, and it really unlocked new use cases. We saw successes with AI in medical diagnosis and chemical analysis. But we still had challenges in that cycle, like the vanishing gradient problem. These models had a very short attention span, and the more you gave them, the worse they got. They were only fit for certain purposes, and so once again the hype died down. Right. And just to contextualize that vanishing gradient: when we say attention span, we don't mean it in a human sense. What we mean is that previously added data disappears; it falls off the end, and the model can't continue to take it into account. Compare that to what we have now, where you can ask multiple questions over time to derive more and more specificity. You can say, I want to know all of Taylor Swift's greatest songs. Okay, now narrow it down to the early 2000s. Early systems, because of that gradient problem, would effectively say: narrow what down? I don't remember; you only asked me a minute ago. Now we have a much wider context to work with. So that's some history, and it may explain why you hear so many people saying AI isn't new, it's just a hype cycle. Those things are true, but that framing lacks context. In reality, there is a lot that is new now, and that's why we're talking today.
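To make that weight-adjustment point concrete, here is a minimal sketch of one backpropagation-style update for a single linear "neuron" with a squared-error loss. Every name and number in it is illustrative, not taken from any particular framework.

```python
# Minimal sketch of the backpropagation idea: nudge the weights in the
# direction that reduces the error. One linear "neuron", squared-error
# loss; all names and numbers here are illustrative.
def train_step(w, b, x, target, lr=0.01):
    y = w * x + b                 # forward pass: the model's guess
    error = y - target            # how wrong was it?
    grad_w = 2 * error * x        # d(loss)/dw for loss = error**2
    grad_b = 2 * error            # d(loss)/db
    return w - lr * grad_w, b - lr * grad_b   # "learn from the mistake"

w, b = 0.0, 0.0
for _ in range(200):
    w, b = train_step(w, b, x=2.0, target=10.0)
print(round(w * 2.0 + b, 3))  # ~10.0: the model has adjusted its weights
```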
This brings us up to speed into the mid-2000s and then, let's say, 2015, with TensorFlow, PyTorch, and some other things. So what was it about 2015 that was notable? These frameworks make it really easy for people to build, train, and deploy models. You don't have to be a machine learning researcher to do this anymore. Private companies can do it, academic researchers can do it, you can do it at home, which opens up a huge ecosystem of community contribution. Right. And that was the thaw. That's when we started to see the technology leap forward. And I'm going to bounce right ahead to 2017, when we had another measurable leap: the transformer model. Charlcye, I'm going to lean on you heavily here. What was notable about 2017? This is when the "Attention Is All You Need" paper comes out, which a lot of people have seen at this point. The transformer architecture takes us past the vanishing gradient problem, so you no longer have to make an inference from only the most recent input. With natural language processing, for instance, instead of predicting the next word based just on the last word in a sentence, I can now predict the next word based on all of the previous words, via what's called attention (sketched in code below). And so we start to see really powerful, robust potential for these language models. Right. This is where we start to get into a term some people may have seen: stochastic parrot, or stochasticity, which is not an easy word to say. For those who haven't encountered it before, the idea is of a parrot, a nonhuman intellect that has no context and no grounding in human experience but knows all the words. It can string together meaningful sentences and even respond in ways that appear, to the outside observer, to be human-like, and yet it doesn't have the actual experience. If you say to the parrot who knows all the words, "Help, a bear is chasing me," it may respond in a completely odd way, because it has no concept of a bear, no concept of chasing; it doesn't know why that would bother you. So stochasticity is still part of this stage of development. But that takes us to 2018, and in 2018 there's another incremental leap with generative adversarial networks, or GANs. Charlcye, help us understand what that is. I just remember during this time seeing NVIDIA publishing these GAN models that would generate graphics on their own. You could tell it, draw grass, and it would draw grass. Suddenly this is getting used in video games, and it's just amazing that computers can generate their own novel graphics. GANs have since largely been replaced by diffusion models, but we're starting to see the power of generative AI in this era. Right. And again, to contextualize this: we do this too. All the decisions we make today are based on the experiences, the data, we've accumulated up to this point. That's why people who have never encountered a hot stove put their hand on the stove: they don't know yet, and then they never do it again. Literally, putting your hand on a hot stove once and never doing it again is the idea behind that kind of learning from experience.
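Here is that attention sketch: a minimal NumPy version of the scaled dot-product attention from the "Attention Is All You Need" paper. The toy inputs and sizes are ours, purely for illustration.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every output row is a weighted mix of
    ALL value rows, so each prediction can draw on every previous token,
    not just the most recent one."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V

rng = np.random.default_rng(0)                       # toy inputs: 4 tokens, dim 8
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)                      # (4, 8)
```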
Then, with a greater amount of experience, of data, you can begin to infer what might happen next, make some predictions about outcomes, and therefore make some decisions along the way. And that is what we mean when we talk about training: what we're basically doing is throwing a lot of hot stoves at the computer model and letting it burn its hand in a bunch of different ways, so that it can begin to make more correct decisions as it goes forward. Did I catch that right? That's great. Don't throw any hot stoves at your model, but yes. Gotcha. Okay. And this brings us into the current decade. Oh my gosh, 2020. That is not the beginning of a young-adult post-apocalyptic novel. In 2020, OpenAI comes on the scene and does some cool stuff. Charlcye, I'm going to let you just run through it. Okay. They actually trained GPT-1 back in 2018, and GPT-2 in 2019. Then in 2020 they come out with GPT-3, which is trained on hundreds of billions of words. I think this is when we start to realize that the more data we throw at these models, the better they get. It was something like ten times larger than any other LLM on the market at the time. I read that it cost about five million dollars for every training run, and all in all easily over a hundred million dollars to train GPT-3, which, as you know, is less sophisticated than the GPT models today. Right. And this is also where networking starts to come in. Not in the way we're going to talk about later, but the training required a huge amount of parallelism, and the data had to arrive on time. This isn't something you could do across multiple cloud vendors; it had to be on-site, because even milliseconds of delay on a packet of training data mattered enormously. This is something my colleague Phil Gervasi talks about a lot: getting the timing right. The mechanics of the network, the timing of the data transfers, was critical, and it really contributed to the expense. You had to have the best hardware, the fastest networks and the fastest processors, able to take in all this training data and process it quickly and in the right order. Because, again, your stochastic parrot doesn't know what a bad packet is. It doesn't know that something is wrong, so it just says okay, and then your whole model is thrown off. So that's really where the networking side comes in. And the amount of money that's already gone into training also explains why it makes sense for companies to use an off-the-shelf proprietary or open source model rather than trying to roll your own. You can adjust an existing model to the purpose and the techniques you need, and then either fine-tune it or (we'll talk about exactly how this works) use what's called retrieval-augmented generation, or RAG. Training your own thing from scratch is, for most people in most circumstances, neither necessary nor economically feasible. So that takes us to 2021, where we get the foundation models, including BERT and, again, GPT-3, as a base layer. So what was happening here? Yeah, you start to see the Hugging Face model repository and dataset repository pop up.
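Since Hugging Face just came up, and the "run it on your laptop" point is about to, this is roughly all it takes to run a small open model locally with the Hugging Face transformers library. The model name here is just one small example; swap in whatever fits your machine.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# distilgpt2 is just a small example model that runs comfortably on a laptop.
generator = pipeline("text-generation", model="distilgpt2")
out = generator("The network is slow today because", max_new_tokens=20)
print(out[0]["generated_text"])
```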
I think there are sixty-thousand-some models out there now, and each of these models is really good at something. So when you're trying to do something with AI, you have to find the right model for your purpose, and then, as we said, Leon, you can fine-tune it, or you can do things like RAG, to make it better fit your purpose. We're starting to see just a ton of momentum in this space. And not just momentum, but also, for lack of a better term, miniaturization. Because at this point in time, 2021, you can run a Hugging Face model on your laptop, or your grandpa box, the desktop machine, or whatever it is, and even on a mobile device. A lot of what we see with Siri and Bing and the other assistants is running on a mobile device. And at this point it's really starting to take off in the tech community; there's a lot more buzz about it. But here I want to make a clear differentiation: running an LLM, a large language model, is computationally cheap. The barrier to entry is the training, the raw training. Once again, that emphasizes why running a preexisting model and tweaking it is far more economically feasible, for most people in most cases, than trying to build your entire monolithic model from the ground up. And that takes us to 2022, where we start to see real adoption by major organizations. Governments are now starting to look at this for use in different areas. What was happening there that was notable? Well, this is the year that OpenAI releases ChatGPT, so I feel like that's one of those where-were-you-when moments. ChatGPT is the first time that everyone in the general public, even outside the tech community, is suddenly aware of the power of these generative models. And that includes regulators. I think there were a lot of concerns about whether or not we were ready, whether we had proper policy, proper governance, and rules for these types of models. Suddenly we are all talking about it, focused on how we can get value out of AI and also on how we can make sure the disruption is not out of bounds. Right. I mean, to paraphrase the famous movie line: this was when we were so busy figuring out if we could that we didn't take enough time to figure out if we should. We could make ChatGPT do a bunch of stuff, and we probably ran over some boundaries that, if we had been less interested in velocity, we might have thought twice about. I will say, luckily, I don't think we as humanity made any major unsolvable, unfixable mistakes. We made mistakes, but nothing we can't go back and rethink later on. This is also the time when ChatGPT becomes synonymous with AI, the same way you say Google instead of search. And it's great that we all have an awareness now of generative models, and ChatGPT was released as a large language model and is now a large multimodal model: it can generate sound, it can generate images. But because AI now means ChatGPT to a lot of people, that conflates the question of whether it's just hype.
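Fine-tuning and RAG have now come up a couple of times as the economical alternative to training from scratch, so here is a minimal sketch of the RAG idea: retrieve relevant text from your own data and prepend it to the prompt, so the model answers from information it was never trained on. The retrieve() ranker below is a toy, and llm_complete() is a hypothetical stand-in for whatever model API you actually use.

```python
# Minimal RAG sketch. retrieve() is a toy word-overlap ranker;
# llm_complete() is a hypothetical stand-in for a real model API.
def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    q = set(question.lower().split())
    return sorted(docs, key=lambda d: -len(q & set(d.lower().split())))[:k]

def llm_complete(prompt: str) -> str:
    return "[model answer here]"   # stub: swap in your chosen LLM's API call

def answer(question: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(question, docs))
    return llm_complete(f"Answer using only this context:\n{context}\n\nQ: {question}")

docs = ["Router r1 peers with ISP-A over BGP.", "Site B uses a 10G link to the core."]
print(answer("Who does r1 peer with?", docs))
```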
Part of that conflation is that there are a lot of things AI and machine learning can do that ChatGPT, for instance, is not fit for. Right, and we're going to get to those layers in a minute. But I want to finish the history with 2023 and where we are, more or less, today. So what was happening here, a year ago, that's worth noting or keeping in mind as we move forward and talk about actual implementation? Yeah, I think today there are a ton of amazing providers: model providers, infrastructure providers, LLM security; all of these solutions have gained a lot of traction. We're all working as hard and as fast as we can to solve the new challenges that AI brings us. But I think we're at a point now where, if there's something you would like to solve with AI, and you have the data that would be needed, and the money for the compute and infrastructure and talent, we really have unlocked a lot of new capabilities that were not accessible to us before. Which brings up a good point: talent and compute are the bottlenecks. Those really are the problem today. There is not enough talent, not enough people who really understand machine learning and AI in the deep technical sense. And there's still a dearth of hardware. Chips are still hard to come by, and I don't mean the chip in your laptop; I mean the hardcore chips we need to do really heavy-duty computational work for AI. That is true. We are seeing so much incredibly, mind-bendingly rapid progress in this space, but I don't think that's going to be a challenge for much longer. So let's talk about what we can do. Okay. This takes us to the onion model, which, honestly, Charlcye turned me on to just last week. We were talking about this, and it's one of the best ways of describing, again, the nuances. When somebody says AI, you have to identify which layer of this onion they're actually talking about. So walk me through this like you did the other day. Yeah. In general, when we say AI, that means basically anything that makes computers act the way humans would. Some people might immediately grasp onto ChatGPT, but there are a lot of things ChatGPT is not particularly useful for, robotics for instance. Machine learning is a subset of AI where models learn from data; they can learn to do things without you manually writing three if statements in a trench coat to tell them what to do. Deep learning learns from more complex data and can do more complex inferencing. Generative AI produces novel things: the images, the voice, the responses that you see from large language models. Large multimodal models: normally there would be a little bubble in here labeled LLMs, because that's what ChatGPT used to be, but now it's multimodal, it can do multiple things. And then ChatGPT itself is one tiny bubble among millions of bubbles inside that bubble. So when we talk about what AI can do, we're not just talking about LLMs. Right. And that takes us back again to the stochastic parrot and all of that.
It's also, I think, the point we're at today, and one of the things we need to be clear about: when ChatGPT came out a couple of years ago, one of the things everyone emphasized was that it only knew, and could only know, about the data it was trained on. So even though people became aware of it in 2021, 2022, the data it was trained on stopped in 2020. If you asked it about anything after 2020, it didn't know. It couldn't know, because ChatGPT wasn't Google. It wasn't search. It was, again, a linguistically aware model that would answer you in English-like phrases, or rather language-like phrases, since it would do the same in French and Spanish and the rest. It would respond in a way that was linguistically solid but not factually solid. But now it's actually able to take new data in, which is why you see things like Copilot, where it can look at not only what Python is, what a language is, and what the common libraries are, but at your actual code base, and then make inferences from that and begin to build on it: oh, I see you're trying to write a function for this (I know it sounds like spicy Clippy), let me help you with that so it's consistent with the rest of the code base. The same for languages, the same for other things. It can use existing information in a way it couldn't just a year or two ago. Do I have that right? That is correct. It's still important to understand that these models, even modern models, are making predictions based on the data they have access to. You're providing current, real-time data to those models, and they're making better predictions based on it. They don't reason; they make predictions from the data they've had access to. Right. Which is a key point we're going to get to later: you have to give it good data, and you have to give it the data that you want it to use. Okay. Which brings us to the idea of AI, artificial intelligence, as it relates specifically to infrastructure, operations, and networking. And now I think we have the background to talk meaningfully and specifically about that. So, big slide. Here we go. There's a lot going on here, so why don't you start us off with where these nodes and things are going? Well, just think about all of the connected data sources that you can now provide to your AI models for them to make inferences on, for whatever task you'd like to accomplish. All of this data is here. It's accessible, it's well structured, it's connected. We can suddenly do extremely powerful things with AI. Right. And in terms of network management, it really allows automation to be proactive rather than reactive. Using those inferences, everything changes, starting with something as seemingly minor as event correlation. We have the data coming in, and we've always had event correlation, but the correlation rules have largely been human-built. A human has to say: well, when you see this many users on the system, and the query speed is really slow, and this or that is happening, then it's a problem. You needed a human brain. Now you really can have a machine look at that and say: no, this is a pattern of bad.
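For contrast, here is roughly what one of those human-built correlation rules looks like in code: a minimal sketch with invented metric names, where every threshold is a number a person had to guess.

```python
# A classic human-built correlation rule. Every number below is a guess
# somebody had to make, and the metric names are invented for the example.
def is_problem(m: dict) -> bool:
    return (
        m["active_users"] > 500        # "this many users on the system..."
        and m["query_ms_p95"] > 2000   # "...and query speed is really slow..."
        and m["error_rate"] > 0.05     # "...and this or that is happening"
    )

print(is_problem({"active_users": 620, "query_ms_p95": 2400, "error_rate": 0.07}))  # True
# Change the traffic profile and the hand-picked thresholds quietly stop fitting.
```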
That's a pattern of bad, as opposed to what I'll call a pattern of unique. Today, a lot of observability solutions talk about high cardinality, cardinality meaning uniqueness. This is a really unique thing: I have not seen this in the last fifteen minutes, or hour, or day, or year, whatever it is. Either a unique event, or a combination of elements I have not seen together over a period of time. That's cardinality. But it doesn't indicate good or bad; it's just unique. And for a long time, observability solutions had to lean on cardinality as a stand-in for something a human might actually want to know. But now, with the ability to continually train a real AI, not just a stochastic LLM but a real AI, it can begin to make inferences off of that. That's where we're going with this, right? That's where we're going. Yeah. We're still in a phase of having a human in the loop. I don't think we'll be replacing network engineers anytime soon. We can quickly identify patterns of bad, but we're not quite in the phase of automatic healing, where we're going to push new configs to your routers. That's where we're at. And this is where I launch into a little tirade of mine. I call it the calculator rant. I am a person of a certain age; I was in early elementary school when pocket calculators became affordable. And I am here to tell you that the schools, elementary schools and high schools, were losing their ever-loving minds. They were convinced that if they allowed pocket calculators within a mile radius of the school, all the children would stop learning how to do math. That was the reaction: oh my gosh, if we let them use calculators, they're never going to learn anything about math, never their multiplication tables, never how to do addition and subtraction. And I'm here to tell you that calculators made it into the school, and we're still mathing pretty mathy as we go. The reason is that they were afraid of the tool without recognizing that it wasn't the tool that was the critical piece; it was the knowledge of how to use the tool. AI is the same thing. To continue my calculator rant for a minute: if I'm trying to balance my checkbook and I decide the thing I need is the square root function, something is very, very wrong. It could be my finances, but it's probably that I don't really know how to do math particularly well and just think that button looks cool. Using a tool the wrong way is still going to yield the wrong answers. The same goes for AI, whether we're talking about an LLM or something higher level. If you don't know what you're trying to accomplish, having a computer help you accomplish nothing faster isn't going to do it. You still need a human brain behind the wheel: giving the right information, asking the right questions, ensuring the data is correct, validating the decisions or the assumptions being derived from it. But what it is going to do, like a calculator, is help create a more consistent set of responses, and a faster set of responses, than a human could have produced in the past. Again, that sounds like where we're going; tell me if I'm wrong. That's absolutely correct. The efficiency gains are incredible. Okay, great. So what we're looking forward to is automating routine, known tasks that are nevertheless complicated.
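Going back to cardinality for a moment, here is a minimal sketch of flagging never-before-seen combinations of flow fields. The field names are invented, and note that the code deliberately says nothing about good or bad; that judgment is exactly the gap the speakers say AI is starting to fill.

```python
# Minimal sketch: flag never-before-seen combinations of flow fields.
# High cardinality means new/unique, which is NOT the same as bad; a real
# system would also age old entries out. Field names are invented.
class CardinalityWatcher:
    def __init__(self):
        self.seen: set[tuple] = set()

    def is_novel(self, event: dict) -> bool:
        combo = (event["src"], event["dst"], event["proto"], event["port"])
        novel = combo not in self.seen
        self.seen.add(combo)
        return novel

w = CardinalityWatcher()
flow = {"src": "10.0.0.5", "dst": "8.8.8.8", "proto": "udp", "port": 53}
print(w.is_novel(flow))  # True: unique, but that alone says nothing about good or bad
print(w.is_novel(flow))  # False: we have seen this combination before
```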
On automating those routine-but-complicated tasks: we're not necessarily talking about an AI provisioning a router or a switch or a server, because that's an easy, known thing. You have a configuration file, and off it goes. We're talking about provisioning based on traffic patterns that could be deeply unique, and yet the provisioning process still matches, I almost want to say a template, but it's a behavioral template. As an example, a couple of years ago, or maybe it was last year, Verizon implemented AI for dynamic bandwidth allocation. During peak usage times, it used network slicing, sending traffic over virtual end-to-end networks tailored to specific usage patterns, which optimized network performance and also cost. They slashed costs using it. Now, when you think about "create a personalized end-to-end virtual network and make sure it's optimized for the traffic happening at this moment," that is something a human could do, but not that fast. It was something an AI was trained to do and could do in milliseconds, and that was the cost savings along the way. Right? Anything else you want to add, Charlcye, before we move on? No, I think that highlights the power of AI in networking today. Cool. Alright. So here, this is really almost an amuse-bouche, an appetizer tray of things that Charlcye and I, and Kentik, want you, the audience, to consider as far as where AI can fit into I&O and networking, or at this point already does. We talked about traffic pattern analysis in the upper-left quadrant, but there are some other elements. Before I dive in, Charlcye, what are some of the ones that jump out at you? The ones that catch your eye as "oh, that's really cool, they're doing that today," or "oh, I can't wait until we get to this part"? All of these are phenomenal, but network security jumps out. There are just so many things that, as a network administrator, you have to be on top of all the time. Having an assistant that can help you with any one of these tasks, let alone all of them, is amazing. Right? Very exciting. Okay. Some of the stuff that jumped out at me: not just DDoS detection, because that, again, is looking at traffic patterns, at particular kinds of data as they flow across. It's a massive amount of data, because you can check for DDoS on your edge device, but that doesn't tell you much. Looking across entire service providers, multiple service providers, for patterns, correlating that together, and saying: oh, there's a DDoS traversing this geographic area or this set of networks, and it's touching your network in these ways. Detection we do today. But protection requires responding to that, going in and making some very specific routing changes that right now tend to be on the manual side. You have to call your provider and say, I'm seeing this, and then they do it; or they notice it and they do it. It's not automated. So DDoS protection jumped out at me as the next step. Then there's IoT. I have been talking and blogging about IoT for a really long time, a decade.
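Before moving on to IoT: as a toy illustration of the detection half of that DDoS story, here is a minimal per-destination check over flow records. The thresholds and record shape are invented, and as Leon notes, real detection correlates flows across providers rather than one edge box, while protection (the rerouting) remains a separate, mostly manual step today.

```python
from collections import Counter

# Toy DDoS *detection* sketch: flag destinations suddenly hit by many
# distinct sources at high packet volume. Thresholds and the flow-record
# shape are invented; real systems correlate flows across many networks.
def suspect_ddos(flows, min_sources=1000, min_packets=100_000):
    srcs, pkts = {}, Counter()
    for f in flows:
        srcs.setdefault(f["dst"], set()).add(f["src"])
        pkts[f["dst"]] += f["packets"]
    return {d for d in pkts if len(srcs[d]) >= min_sources and pkts[d] >= min_packets}

flows = [{"src": f"10.0.{i // 256}.{i % 256}", "dst": "203.0.113.7", "packets": 120}
         for i in range(1500)]
print(suspect_ddos(flows))  # {'203.0.113.7'}
```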
And IoT, to continue: it's really cool, but it also introduces a lot of stress and challenge on the networking side, the network management side, the awareness side. Do I really know what all of my light switches and thermostats and baby monitors are doing at any given moment? Can I tell when they've been hacked? Can I tell when their traffic is being exfiltrated to other places? It's hard. It's really an individual job. That's why we like to say security is everybody's job, which basically means it's nobody's job, because nobody wants to do it. But IoT protection is another perfect example of something AI could be leveraged for, because it is complex and it is vast, but it also has the same patterns every time. Right? Any other ones before we roll forward? No, I could go on a rant about each one. Okay. So, next up: taxonomy. A lot of people have a reaction to that word; they think it's a whole thing, but it's really just categorization. So when we say the taxonomy of AI for infrastructure, operations, and networking, what are we talking about? Well, I think everybody right now is trying to figure out their strategy: how to implement AI, where to implement it, how to prioritize it, what to focus on. For network professionals, this is just a useful framework for thinking about where you want to improve using the power of AI. Right. And when you look at this slide and ask where AI, I'll say real AI, and I'm not throwing shade at ChatGPT, but, again, the higher layers of the onion, where AI fits: it can have an impact on all of these things. But for the folks watching, what you need to do is look at this and ask, where is AI going to have the most impact for me today, rather than, can I have AI injected into every single aspect. Take the second column from the right, AI-assisted. That means: can I ask English-language, or really natural-language, questions of my network platform and have it respond with answers that, a, make sense, and b, use my data? That's there today. We'll talk about it again in a minute, but that's already there. Then, when you're thinking about AI, which is the bigger priority: cost and capacity, or hybrid cloud observability? Both are important, but now you want to think strategically about which of these things will make the biggest difference for you. Again, though, AI is part, or soon will be part, of everything you see on this slide. Alright, real or hype? Here we go. I want to point out that you can see Kentik's implementation of AI in the first two of these bullets today. We have Kentik Query Assistant, there now, using natural language. I default to saying plain English, but it's not just English: I have done queries in Spanish and in French, and they work, and those are the two languages I speak, so you're welcome to try any other language and see how it goes. And Journeys is the natural extension of that. Query Assistant is: can you ask a question?
Journeys is: can you ask multiple questions that continue to build on each other, drive deeper into the data, and create more meaningful insights along the way? That's what we have today. The other two bullets, numbers three and four on this slide, are there under the hood in some ways, and they will become more visible as time goes on. And I do not mean in the coming years; I mean in the coming months, maybe a few months, but months, at least in Kentik. You're seeing it in other places too, but obviously Charlcye and I are from Kentik, so that's what we know best. Charlcye, I don't know if there's anything you want to add as far as your experiences or thoughts. No, I just think it's amazing, the sudden speed with which we're making progress in these areas. Automatic remediation is something we've chased in our industry for a very long time, and now you can see the path there, the light at the end of the tunnel. It's a super exciting time. Right. And if somebody wanted to summarize this slide in one phrase, the one I like to use is: we're going to make your weekend longer. You're going to be interrupted less often, and when you are interrupted, you'll know what it's about, you'll know it's meaningful, and you'll know what to do about it. So you'll spend less of your weekend, or off time, getting pulled away for things that ultimately don't matter. Okay. We have been talking for over forty-five minutes now. I want to point out that we've got a series of micro-webinars coming up. These are fifteen-minute mini webinars that are more demonstrative, with more actual hands-on demos, and you'll find more information on them soon. In the meanwhile, as Dave promised, we are ready for your questions. We've been answering questions along the way in the chat, but if you have more, ask away, and we will do our best. Great presentation, by the way. Thank you. Very informative, and I did not feel like I was going to school. Great, you have achieved your goal. Awesome, that's really good. So, just one thing I'd like to ask of both of you. There was a lot of information there, a lot to unpack. Could each of you come up with a key takeaway that our audience can leave with, so they have a solid understanding of how they may want to begin or what they need to do? Charlcye, you first; I'm going to start with you. For me, generally, I just feel that while it makes sense that a lot of people believe they have seen this before and that it is just hype, we have had a lot of innovations that make it not just hype. That's a good one. I think mine is a little more basic: if your mental model for AI is what you see with LLMs, and I'm not going to name any because I don't want to cast aspersions, if you're thinking only about how something interacts with you linguistically, there is so much more that can be done, and that will be done. I would encourage the folks watching to expand their idea of what AI is, so they can also expand their idea of what the capabilities and the potential are. Fantastic. Good stuff. Alright.
So, Leon Adato and Charlcye Mitchell, thank you both for your time today. Really appreciate it. Thank you. I would like to thank Kentik for sponsoring this presentation, and a special thank-you, of course, to our audience for hanging in there and joining us today. Just a reminder that this session has been recorded and will be available on the ITOpsTimes.com website, so you can watch it on demand to your heart's content. Until next time, I'm Dave Rubinstein, editor-in-chief of ITOps Times. So long for now.
Join Dave Rubinstein, editor-in-chief of ITOps Times, as he hosts an enlightening discussion with Kentik’s Leon Adato and Charlcye Mitchell on the convergence of AI and network monitoring. This webinar explores both the potential and limitations of AI in networking, providing a deep dive into how AI can add value and where it might be overhyped.
Key takeaways from this webinar replay include:
- A brief history of AI's hype cycles, and the innovations (backpropagation, transformers, GANs, foundation models) that make the current cycle different
- The "onion model" of AI, and why AI means far more than LLMs like ChatGPT
- Where AI fits into infrastructure, operations, and networking today, from natural-language querying of network data to traffic pattern analysis
- What's on the horizon, including DDoS protection, IoT protection, and automatic remediation, and why some capabilities arrive faster than others