Telemetry Now  |  Season 1 - Episode 22  |  August 22, 2023

Demystifying the role of AI and large language models in networking



Ryan Booth
Software and Generative AI/MLOps Engineer

Software and Generative AI/MLOps Engineer. Passionate about building solutions that users can easily consume. Network/Infrastructure Engineer at my core.

Ryan's LinkedIn Profile

Transcript

Phil Gervasi: There's probably no bigger buzzword right now than artificial intelligence. All right, well that's two words, but still it's such a hot topic in the tech world and also even in popular media right now. So what I want to do today in this episode is demystify what artificial intelligence is really all about, but specifically what it means for networking. With me today is a personal friend of mine, Ryan Booth, who has years of experience as a more traditional network engineer, a pretty high-level network engineer holding a CCIE among other certifications, but who for the last six or seven years has been laser-focused on the DevOps and the software side of the house. And more recently, Ryan's work and experience has brought him squarely in the center of this conversation about using AI technology in networking. So we'll be defining some terms, dispelling some myths, and hopefully shedding some of the marketing fluff around AI and getting to the heart of the technology itself. My name is Philip Gervasi and this is Telemetry Now. Hey Ryan, it's really good to have you, man. You and I have talked many times and we've known each other for years. I really appreciate your background in networking, and I've been following what you're doing now in this space as far as AI, and really your transition, career-wise, into the more DevOpsy world, you could say. So thanks for joining the podcast today, and as we get going and before we dive deep into our topic, can you give us a little bit of background on yourself? I spilled the beans a little bit and told everybody that yes, you have a background in networking and now you're working in more of an AI space, but from you, what are you up to these days?

Ryan Booth: Yeah, no, I appreciate it. I'm always excited to have this conversation right now as it's a very hot topic and one I'm building a lot of passion for. So like you said, my core background's been in networking and infrastructure. I've been doing that for the better part of 15-plus years, worked my way up to CCIE, worked for various companies, various organizations, saw all sorts of different types of networking. Went into the vendor space mostly with a deep interest in network DevOps, doing network automation. And working through that, I built more of a passion around development and working through the development skillset as a software engineer. I ran that for quite a while up until right now, and that's currently where I earn a paycheck: managing a software development team doing web applications for our systems. Recently over the past year, year and a half, somewhere in there, I really started picking up on AI and ML. Mostly it started from a personal standpoint with a lot of the AI art stuff out there. That's where I got my hooks into it and really got excited about it. And then when ChatGPT landed and the whole AI craze hit, it just accelerated it. And as I started digging more and more, I found avenues inside of my current employer to pursue those. And I've just been working through that now. And so that's where my career has progressed and where it's going. So yeah.

Phil Gervasi: So then I'd really like to know why though you went from the networking space in a DevOps direction. What was the impetus to do that? Because I know I remember when everybody was getting into network automation for the first time and that conversation was happening out there in the community. What was the reason that you chose to shift from traditional networking, configuring routers and switches and turning a wrench into automation and the DevOps world?

Ryan Booth: Yeah, that's a good question. It was twofold. I would blame my CCIE and the studies for my CCIE on a lot of it.

Phil Gervasi: Okay, I did not expect to hear that.

Ryan Booth: The repetitive typing of RIP commands into a command line and building router config, BGP config over and over and over and over really wore me out. And then also that was around the time when SDN was picking up a lot of traction, or at least a lot of market buzz, which is the better way to say it. And through a number of vendors, through a number of social events, things like that, I recognized that there should be a better way to do all this. And that's where I was like, " Okay, we need to add programmatic interfaces to the network. We need to be able to automate instead of manually typing these commands." And that's where it started. As things progressed over the next few years, and this was probably about 2014, 2015, somewhere around in there, as things progressed, configuration was getting more and more complex every single day. You have MPLS, which has always been out there with its complexities. But then EVPN hit for the data center and the complexity with that just went up as well. And it's like, " Okay, we can't keep doing this manually. We have to introduce some sort of programmatic way to handle this at scale." And so that's where a lot of my passion shifted. I don't want to configure a router through the CLI anymore.

Phil Gervasi: And do you think that the more recent shift from a network automation DevOps into a focus on AI is the logical progression of that change?

Ryan Booth: I think so. I think it'll be a decent jump, but it's hard to say, but I do. AI is just not going to sit down at your chair and take over your job and do things manually for you.

Phil Gervasi: Oh, thank goodness.

Ryan Booth: It's got to be built into a workflow. We got to figure out how to handle it. And all of this stuff is still way up in the air in how it's done, but it's going to be done programmatically, that's for sure. And so to be able to work on infrastructure and to work on network kit, you've got to be able to do it programmatically.

Phil Gervasi: So that's the spirit, the underlying premise here of everything is that the real benefit of the shift from traditional networking into the DevOps mindset and network automation and now into the application of the AI concepts and workflows and networking is to add a programmatic element to actual network operations, to the mundane tasks of running a network, troubleshooting, fixing, learning what's going on if you're looking at it from a visibility perspective. It's really operations focused. I don't want to downplay the importance here. That's a very important thing. We're talking about application delivery and the applications that run my hospital and run the United States military and run mundane things like my productivity tools like Word and PowerPoint that I have online these days. So as much as we look at these shifts as we just want to make configuring devices easier, it's really solving a network operations problem. It's not like magic. I don't like it when people say, " Look at what we do, it's magic." And that's silly. This is technology. It's code and specific database choices. It's specific technology choices in order to solve a specific problem. And in this case, I really feel like it's an operations problem more than anything else. Do you agree or am I wrong?

Ryan Booth: Yeah, no, I absolutely agree there. It's totally operational. If you look at anything out there, the design and deployment is a smaller percentage of the lifecycle of an infrastructure. So there are places there to be able to improve and to automate, but operations is where we need that focus and we need that effort. So yeah, totally agree there.

Phil Gervasi: Okay. So I was talking to one of our data scientists months ago actually at Kentik, so internally, and we were chatting about his background and he made the comment that he's not in networking, or rather he is now, but he wasn't in networking before. So he's got a PhD in computer science with a focus on machine learning among other things. And he was working in some kind of an aircraft manufacturing industry. I don't know if it was manufacturing or not, but in any case, what they were doing was collecting an incredible amount of telemetry data from the various systems of a particular aircraft. And then they were analyzing that and applying advanced analysis workflows, I'm going to call it that and not use AI and ML just yet. They were using these data analysis workflows to figure out what was wrong, what elements in their various visibility tools were correlated to each other, ultimately for the purpose of sending out a tech, a human being, to go fix the problem. Now we know what the root cause is, let's go fix it rather than let's go mess around with wires and widgets for hours and weeks. We need the aircraft back online right now, so let's use this tool to send out a human. And I really feel like that's what we are doing now. We're applying these more modern data analysis workflows. When I say modern by the way, sometimes I wonder if they really are that modern because we've been using them in other industries for a long time, but only more recently in networking, which I want to touch on later. But we're applying these new workflows, new to our industry at least, in order to send out an engineer, in order to get a human being to fix the problem faster. But in that conversation, he didn't use the term AI even once. He didn't say artificial intelligence even once. So for me, this begs the question, and I'm asking you, Ryan: what is artificial intelligence? Is it different than machine learning? Is it different than just college-level statistical analysis? Is it just a buzzword, or is it something unique that we can point to and say, " This thing over here, yes, this is AI"?

Ryan Booth: Yeah, no, I think this is a very important topic for us to discuss, especially in the modern day right now, because there's a lot of buzz, there's a lot of stuff going on and there's a lot of misconception here. And I like to throw out an analogy for something we've seen in the industry a number of times. We get these buzzwords, we get these cool new technologies, and if you relate it to cars, the different styles of cars and the different problems they solve or the features they provide, everybody wants to instantly jump to a Ferrari or a Lamborghini. That is what's going to solve all of our problems. And we don't realize that what we have and the problem we're trying to solve can possibly be solved by a Toyota Corolla. And that's how I see it here. You have these large language models, you have ChatGPT, you have even deep learning stuff that goes way more advanced in it as well. But a lot of problems you don't necessarily need to go that far. You can use just basic machine learning with basic algorithms. So I break everything down into three sections right now. And the first one being the basic machine learning, the stuff that's been around for a very long time, we're talking the eighties or further back. And it's a model that has very few layers, one, maybe two, it's simple algorithms. Well, relative to AI and ML, it's simple algorithms like linear regression, and then just pushing that through a few layers doing basic what's called forward propagation and backward propagation for your training and your fitting of the model. And so these types of models are usually very, very focused on a given task. They're not generalized like what you're seeing with ChatGPT. They have one specific job and they do it well. And so that's where ML comes in pretty good. And that's what's been around for a while there. After that, you basically don't change up a lot of what you're doing, you just introduce new layers and you go a little bit deeper with the same type of stuff. And how you stack those together is where deep learning comes into play. So if you have multiple layers inside of an ML model and you replicate that however many times you need to, that's when you start getting into deep learning.
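
To make the "basic ML" Ryan describes a little more concrete, here's a minimal sketch of a single-layer linear model fit with a forward pass and a backward pass (gradient descent). It's purely illustrative: the data is synthetic, and nothing here comes from the episode or from any real network telemetry.

```python
# A toy illustration of "basic ML": a single-layer linear model trained with
# forward and backward passes. Synthetic data only; feature/target names are
# made up for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Fake feature (say, interface utilization) and target (say, latency)
# with a roughly linear relationship plus noise.
x = rng.uniform(0, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=200)

w, b = 0.0, 0.0   # model parameters
lr = 0.1          # learning rate

for epoch in range(500):
    y_hat = w * x + b                  # forward pass
    error = y_hat - y
    loss = np.mean(error ** 2)         # mean squared error
    grad_w = 2 * np.mean(error * x)    # backward pass: gradients of the loss
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w                   # parameter update
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")
```

Stacking more of these layers, with nonlinearities between them, is essentially the jump to deep learning that Ryan describes next.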

Phil Gervasi: So then I see artificial intelligence is a broad category. It is the idea of saying, " Let's teach these systems or rather create these systems to think like a human." And it's that broad. It doesn't really mean that much more than that. It's programmatic computers, or rather it is computers that we program to think like a human being. And machine learning is a technical component of how we do that. So we're talking about training the model with data, however. And there's different ways to do that. I get it. And then the application of models to data sets to then derive some sort of insight. So machine learning in that sense is a technical function of the broader category of artificial intelligence, right?

Ryan Booth: Yep. And I would even argue as far as it's the foundation of it all. And myself personally as I've been ramping up here, it's where I've focused on my energy is really understanding ML, the various models going on and then digging into deep learning because I think that's the core of all of this. Well, you look at these models and how they're built and how the LLMs handle stuff. It basically is that at its foundation, but just done at a very large scale.

Phil Gervasi: Okay. So then machine learning, which has been around for a long time, it's been around for decades and decades. Why do you think that we're only starting to apply these types of data analysis workflows to the networking industry today or at least in recent years? Maybe not today as in literally 2023, it's been a few years, but recently.

Ryan Booth: Yeah, that's a good question. I think the networking industry and the infrastructure industry has always been a little slower to adopt newer technologies, especially networking. Security seems to be the same way in my opinion. Server infrastructure, they went through their virtualization and then docker containerization, stuff like that. But to really take this type of stuff and start applying it, I think companies have tried, but the ML buzzword for machine learning I think really didn't take off until 2015 or so when a couple of things progressed. And then most recently with generative AI and LLMs, that's been even later 2016, 2017. And so I think you have various companies out there and I think most large vendors and most companies that are working in the space have dabbled in it or tried with various products to get there with mixed results. So I think it's been there, but the marketing hype really just hit just over the past few years.

Phil Gervasi: Yeah, yeah. I remember the marketing hype around other buzzwords over the years. Remember around the time when you and I first met, everybody was talking about SDN, and even then it meant very little. What is software defined networking? And it's funny because here we are, I don't know when that was, that was probably eight years ago when we met, nobody uses that term anymore. It's gone from the zeitgeist and from the narrative in our community. We don't even talk about it. Maybe because it's become literally integrated, the concept of SDN has been so integrated in so many of the tools that we use that maybe it's just ubiquitous and therefore we don't talk about it. But I think we don't talk about it because it was mostly just a buzzword. And that's why I have that bad taste in my mouth when I hear the term AI thrown around so flippantly and loosely, because I know from experience, I know for example, why my company applies certain ML models and why we don't apply certain ML models. It's to solve a problem. So for example, we are looking to find seasonality in network data and we apply a model and it's way off and it makes no sense. So we don't use it. And maybe instead of using a more complex algorithm, we can do something much simpler, akin to what you would learn as a junior in college. But lo and behold, it gives us the answer that we want. So I really see the technical components in modern data analysis workflows as tools in our tool belt. We use something when it makes sense, like we would any tool in our tool belt, and then we don't when it doesn't. So we're trying to forecast or detect anomalies. And here's something I know we struggle with in the industry: identifying short-term dependencies versus long-term dependencies. So we have this causal relationship in the data where we can say, " Hey, look, this thing over here is causing this manifestation in the network over here." But it's a short-term dependency. Those things, like a CNI on a container, that thing doesn't even exist for very long. So it's not actually going to cause that for this long-term time period, whereas a configuration might be a long-term dependency because that's something that's more static. So how do you encapsulate that into algorithms? How do you encapsulate that into literal math that lives in Python, that lives in Jupyter notebooks, that lives in whatever database you're using? That's tough. And so I really look at all of these components as tools to help a human engineer solve a problem. I found that just choosing the right database so you can query all that data fast, so you can actually use it when you're trying to troubleshoot why your application stinks, that in and of itself is a great step forward. And yes, that's part of the overall picture here because it's part and parcel of how do we ingest data? How do we query data as part of a data analysis workflow? I get it. But think about it: are we using a relational database or a columnar database, and what are the benefits and drawbacks of each? So I think there's so much more to it than just "we use machine learning." I remember seeing, I don't remember who it was, some company, they had an ML button in their UI, in their menu, you know what I mean? It was a screenshot I saw, I think it was on LinkedIn or maybe a YouTube video.
And I just remember thinking, I paused it and I'm looking at it and I'm like, " What does that do?" So I clicked that button, the machine learning. Really? That makes no sense. To me, it's an underpinning function that produces a result. And it's like, all right, you got an alert that there's this problem, or you get this message from the system if you're using some sort of chat ops maybe, and it says, " Hey, you have this increase in cost in your AWS egress over here, and we believe that the likely cause is because you've shifted from data center A to data center B on this part of the world." That's like insight derived from data. To me, that's machine learning happening. It's not a button that I press. You know what I mean?
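
As an aside, Phil's point that a simple, college-level approach sometimes beats a fancy model can be sketched in a few lines. This is a hypothetical illustration, not anything any vendor ships: an hour-of-day seasonal baseline plus a z-score threshold, applied to a synthetic traffic series with one injected spike. Column names and thresholds are assumptions.

```python
# A minimal "simple beats fancy" sketch: hour-of-day baselining with a
# 3-sigma threshold for anomaly detection. All data here is synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2023-08-01", periods=24 * 14, freq="H")  # two weeks, hourly

# Daily seasonality (busy days, quiet nights) plus noise, plus one injected spike.
base = 100 + 50 * np.sin(2 * np.pi * np.asarray(idx.hour) / 24)
traffic = base + rng.normal(0, 5, len(idx))
traffic[200] += 80  # the anomaly we want to catch

df = pd.DataFrame({"mbps": traffic}, index=idx)
df["hour"] = df.index.hour

# Seasonal baseline: mean and standard deviation per hour of day.
baseline = df.groupby("hour")["mbps"].agg(["mean", "std"])
df = df.join(baseline, on="hour")

# Flag anything more than 3 standard deviations from its hourly baseline.
df["anomaly"] = (df["mbps"] - df["mean"]).abs() > 3 * df["std"]
print(df[df["anomaly"]][["mbps", "mean", "std"]])
```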

Ryan Booth: Yeah. And I think it starts there. Those are where a lot of problems can be solved and that's where a lot of problems should be solved. There's definitely been talk around the industry, exactly like you said, that let's get to a point where we can have a single application do this root cause analysis for us, or we can have this central dashboard that'll show us absolutely everything we need to know and we just click buttons to solve it. And I think where that's always fallen short is being able to tie the people together who know how to make those elements correlate with the people that know how to build the system and build the algorithms and then build the ML system to put it together. And I think that's where, as an industry, we've struggled for quite a while, but it's also massively complex. You take one given instance, say an interface flapping, for example. Knowing that an interface is actually flapping between two nodes. Is it an optics issue? Is the CPU on the router overloaded? Is the cable bad? What is it? And just a simple thing like that to actually break down and have a system be able to correlate and figure that out. That gets complex and that's one of the simpler issues. And so putting all that together, it becomes a massive ordeal. And where you can take something like ML or even deep learning models is being able to pump that data into it and let the model actually recognize, with normal traffic flowing in and out at constant times, where it's seeing issues and where things crop up. I think where that gets complicated, I obviously have not worked on this hands-on directly, so I'm just going from a theoretical standpoint, but where that gets complicated is you have to be able to identify your various channels and identify the data that actually matters, and at the various layers of the model, what gets inputted into the beginning stages of the model and where it learns from there. And then it's a matter of training and retraining to go through it. That's where it could really help out. We don't have to manually correlate all this stuff together for every single possible issue that could be out there. Let the models do their job and find stuff for us.

Phil Gervasi: Yeah, and actually, in talking to some data scientists both at Kentik and other places, that's actually easier said than done, or rather it's hard. Easier said than done, yeah, that's the phrase, because it's actually easy to find correlation. It's not hard. And the difficulty comes in when you find things that are correlated, but who cares? It's how do you add the subjective component, the human part of engineering. The idea is, okay, these things are related, but that doesn't affect the end user experience in my New York office. It's like, " Okay, well then do I care?" And if they are correlated, is it a spurious correlation or is it a causal relationship? Is there a third variable at play here that we're not identifying? So I've seen that correlation is not difficult to identify, it's just assigning a correlation coefficient based on math. It's not hard. But having meaningful correlation, or finding meaningful correlation, is different. Because again, it's the idea that we have very dynamic, not static, networks. We have ephemeral information if we start getting into containers, and end users possibly, if you want to collect information from there. We're talking about very, very divergent data. So if we're looking at analyzing data, it's not just like in the healthcare industry, just analyzing information from MRIs where it's all similar data and then you're trying to figure out and forecast some sort of... I've seen that and that makes sense. But when you're looking at network data, you're looking at very, very diverse types of telemetry that are at incredibly different scales and formats and types and represent very different things. And a lot of it's subjective, or rather, a lot of it's not quantitative in the sense that it's a tag, it's a label. It doesn't represent any particular activity. It's like a security tag or an application tag or a user ID, process ID on a container. So how do you fit that into your algorithm? So I've always thought that that's one of the reasons uptake has been a little bit slower in the networking industry: because we have that difficulty to overcome. But you mentioned ChatGPT a few times already. You mentioned LLMs a couple of times already. What does that have to do with anything? First of all, what is an LLM and what can I solve with that in the networking space?
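
To illustrate Phil's distinction between computing correlation and finding meaningful correlation, here's a small hypothetical sketch: pairwise Pearson coefficients across a few made-up telemetry series. The math happily flags candidate relationships; deciding whether they're causal, spurious, or simply irrelevant to the user experience is still the engineer's job. The series names and the 0.6 threshold are arbitrary assumptions.

```python
# Computing correlation is the easy part: pairwise Pearson coefficients over
# synthetic telemetry series. Interpreting them is where the human comes in.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500

cpu = rng.normal(50, 10, n)
latency = 0.8 * cpu + rng.normal(0, 5, n)    # genuinely driven by CPU
retransmits = rng.normal(20, 4, n)           # unrelated noise
disk_io = 0.5 * cpu + rng.normal(0, 20, n)   # weakly related

df = pd.DataFrame(
    {"cpu": cpu, "latency": latency, "retransmits": retransmits, "disk_io": disk_io}
)

corr = df.corr()   # pairwise Pearson correlation matrix
print(corr.round(2))

# Flag pairs above an arbitrary threshold as "candidate relationships".
threshold = 0.6
pairs = [
    (a, b, round(corr.loc[a, b], 2))
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if abs(corr.loc[a, b]) > threshold
]
print("candidate relationships:", pairs)
```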

Ryan Booth: Yeah, so LLMs came from basically the invention of what's called transformers, if anybody's heard that buzzword out there. It's not really a buzzword, it's the technology actually used. You take a deep learning model that has a large number of layers, and then you make it much, much larger with a much larger corpus of data, a collection of data that is very generalized, and you push it through a transformer model. So basically, if we're talking LLMs and we're talking ChatGPT, I'll go down that route. This is usually text generation and text prediction: what's the next part of the sentence, or respond to this question that I have? With deep learning, when you're looking at some of the examples out there, like the older models, RNNs and what's called LSTMs, which are a kind of RNN, you pump in a handful of words and it guesses the best next word after that. And so the best way to visualize that is if you think about going to Google and you type in a search, it has a dropdown that suggests the next words to use. Same thing on your phone when you're texting, it gives you text predictions, but it only predicts the next word. It only predicts the next two or three words that you might be using. It doesn't take the entire sentence into context and respond in full. And that's where LLMs really took a step forward: the transformer model allowed you to input a larger set of text, one or two sentences or even more, and get a full response back, not just the next word. And so that opened up a lot more context for you, because that's the key here: if you're just predicting the next word, you don't know the context beyond the past few words. And with the transformer and with GPT models like GPT-3, Bard, Llama, Anthropic's model, the ones that are open, all of those are where you can actually start interacting with it. And that's what we're seeing now. And so that's when the LLMs really started coming into play. And that's where we are right now with the generative side of it.
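
A toy example may help make Ryan's contrast concrete. The bigram predictor below only ever looks at the previous word, which is roughly the "phone autocomplete" behavior he describes. It is not an RNN, an LSTM, or a transformer, just an illustration of why attending to the whole prompt was such a step forward; the tiny corpus is invented.

```python
# A toy bigram "next word" predictor: it sees only the previous word, so it
# has no sentence-level context -- the limitation transformer-based LLMs address.
from collections import Counter, defaultdict

corpus = (
    "the interface is down the interface is flapping "
    "the router is up the bgp session is down"
).split()

# Count which word tends to follow each word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the tiny corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# The prediction depends only on the last word, not the rest of the sentence.
print(predict_next("is"))         # "down"
print(predict_next("interface"))  # "is"
```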

Phil Gervasi: And I have a subscription to ChatGPT. I use it pretty frequently. It's very interesting. I have experimented with it to see how I can break it. I say break it in air quotes, meaning what kind of responses I can elicit. To me, it's not hyper useful because of what I do for a living, so I don't necessarily need it to succeed at my job. But with ChatGPT, or at least similar types of technologies, what role do they play in networking, or what role will they play, do you predict?

Ryan Booth: Yeah, it goes a number of different ways from what I've explored right now. And I'll say the absolute most important response to that question right now is we don't know. We have guesses. Some of us and a lot of us have started playing with it and started exploring and figuring out where we can and can't use it. But for the most part, where it's going to land is really hard to tell right now. It's going to be a very integral part of our jobs, but we just don't know how yet. So you mentioned earlier you have a ChatGPT subscription. I do as well. I highly recommend anybody out there, highly recommend if you can get that subscription, beg your boss, get them to pay for it or any of the others that are out there and really start using them, just figuring out and exploring how to use them. Because that's how we're going to figure this out is just doing, just by trying, experimenting, playing with, seeing what works, seeing what doesn't. But for right now, the roles that I see it play is the first one right out of the gate, especially with the LLMs and GPT models, is it's going to improve our interaction with programs, with applications, with devices by introducing natural language communication. And so you'll see it out in the industry termed as NLP, or Natural Language Processing, which is a whole group in and of itself. But being able to actually use your natural language in whatever language you speak as well and interact with those devices, that's going to be the critical part and that's going to be what comes out of it. Now, how it does that, that still needs to be figured out. So instead of having to know a specific CLI, instead of having to know a specific programming language or what operating system you're working with, that stuff should get smoothed over and you just talk with a natural language to what you'll want.

Phil Gervasi: And I remember that was one of the first things that I did experimenting with ChatGPT when it came out a while back was telling it to configure a thing. I would give it parameters and then it would spit out a configuration in Cisco CLI or Juniper or whatever I was asking it to do. And it was okay. So you think that that's where we're going as a first major step is using that technology as a means to make managing, configuring, interacting with our individual devices and then therefore networks as a whole more just easier?

Ryan Booth: Yeah. And I think it goes across the board. So instead of having to sit down and spend a week or two or however long it takes to build the config for a new router or a new firewall or a new switch or even a new server, you can very quickly just on a chat interface say, " Hey, I need this, this, this and this," build it out in a bullet point list, send it into it and get configuration back. And if it's not perfect, especially right now, a lot of what you see out there isn't perfect. You get probably 75% to 90% accuracy if you're lucky, but that cuts down a lot of work that you have to do. And so now you can do that last 10% to 15% of tweaking and you've saved yourself X number of hours of time. The other area where I feel this is going to be pretty universal, and I think it'll have the same impact the Internet had on everybody, is that you will now have an expert in your pocket at all times for basically almost anything. And we're seeing this all over the place. So we've always joked in the industry, especially with automation, network automation, that at any given time an end user is going to be able to click a button and perform the same operations as a CCIE would, or get the same level of configuration or exposure to the infrastructure as a CCIE would give you, but you have it right in front of you. And that's also what I think we have here. And so through my experimentations and through the stuff I played with, I can go into these tools, I can go into ChatGPT or go into any of the other models that are more fine-tuned for something and I can ask, " How do you solve this specific problem in Python?" And then I get their response and it's like, " Okay, you do this, you do this and do this." I wasn't thinking about how you could do it that way. Okay, well now how do I do that in C? Or how do I do it in Go? And then it'll turn around and give it to you in Go. So instead of having to ramp up and learn all that information or know how to get that information off the Internet or off your team, it's just right there at your fingertips. And that will just continue to improve. And I think it's opening up the doors for a lot more people.
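
Here's a rough sketch of the "bullet list in, draft config out" workflow Ryan describes, using the openai Python package (v1.x-style client). The model name, prompt wording, and requirements list are all assumptions for illustration, and, as he says, the output still needs a human to review and finish the last 10% to 15%.

```python
# A hedged sketch of generating a draft device config from a bullet list.
# Assumes the `openai` package (>=1.0) and an OPENAI_API_KEY in the environment;
# the model name and requirements are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

requirements = """
- Cisco IOS router
- Hostname: edge-rtr-01
- Loopback0: 10.0.0.1/32
- eBGP to 192.0.2.1 in AS 65001, local AS 65010
- Advertise 10.10.0.0/16
"""

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[
        {"role": "system", "content": "You are a network engineer. Return only configuration."},
        {"role": "user", "content": f"Build a router configuration for:\n{requirements}"},
    ],
)

draft_config = response.choices[0].message.content
print(draft_config)  # review and tweak by hand before it goes near a device
```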

Phil Gervasi: I think one of the things that I'd like to see happen, beyond just the interface with our devices, it being able to spit back information about our devices, or at least give us configuration and help us, like you said, solve problems in that way, is the integration with the actual data of our networks. So that way, hey, whatever we name it, like Lieutenant Commander Geordi La Forge talking to the Enterprise, " Computer, why is my Chicago office operating very slowly?" And then it looks at the data and then it's using artificial intelligence and machine learning and applying models it probably already has. Hopefully there's an alerting system where it tells you before you ask it, but in any case that it says, " Ah, we identify that there is a slowness on this particular interface and it's likely caused by this thing that's happening over here. DNS resolution times in Route 53 are really slow with this particular file," whatever. Something that yeah, we could have figured out as engineers, because it's not like the computers are necessarily quote, unquote "smarter" than us. They're just doing everything at scale much faster than us. Maybe you disagree with this, but this is why I say if I could afford a team of a few hundred PhDs from MIT or something like that, I could probably just say, " Yeah, you are my human ChatGPT. You just analyze the data, do all your stuff and be in that room doing it for me." But instead I have computers that can do it dramatically faster and at larger scales, ultimately, hopefully, being able to derive insight that I wouldn't have otherwise been able to. So I would love to see that integration, along with the ability to have that human language component that we've been talking about just now, and the ability for it to spit back answers and spit back configuration, and maybe one day then we follow up with, okay, that sounds good, push that configuration to my North America offices or something like that. I don't know. I have to say we started off talking about how SDN was buzzword stuff, but I have to admit in my mind's eye, that's what SDN was always supposed to be. You know what I mean?

Ryan Booth: Yeah, I think so. I think everybody's talked SDN to death, but I think we'll go through similar steps and motions with AI that we did there, in that SDN was supposed to make everything smooth for the end user. It was supposed to just be easy. And while stuff did come out of it, you got these automation tools out there that build infrastructure for you automatically. You have stuff like MPLS at the WAN edge, you got technologies like SD-WAN, and they were basically the right step for SDN, just not how the industry looked at it. And SD-WAN didn't come in and say, " Hey, everybody's got to learn to do this complex MPLS configuration for your WAN." No, you plug in this box and you click these five buttons and you're done. And you have this really complex config, but it's smoothed out and simplified for you. I think we get the same thing here. We've made attempts in the past with multiple vendors where we build these big telemetry boxes or we go through this really complex data aggregation where we pull all the data in there and it's all at your fingertips. But the problem is you have to know the syntax to query it, and you have to know how to do that effectively. And there's only a handful of people out there that know how to do that effectively and know how to interpret the data. So we are seeing stuff come and surface over the past year or so where companies are utilizing the newer LLMs that are utilizing the newer technologies like NLP to be able to simplify and allow anybody to come in and ask that question and get reasonable data back. And then that's the other part of it with NLP. So you can interpret the question I'm asking into the complex query, so that's natural language understanding. The reverse side of that is you get this massive amount of data that comes back at you and you have to also understand how to interpret that. Well, part of NLP is NLG, or natural language generation. So your LLMs, your solutions, they actually shrink all that down into a nice summary for you. So instead of getting this large complex set of data back that you got to interpret, or logs and events, you get a simple response like, " Oh no, it looks like interface XC0001 is having issues with packet drops." You get it boiled down to a simple response. I think that is where it's crucial and where it's going to simplify stuff for a lot of people.
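
The question-in, summary-out loop Ryan outlines can be sketched as a simple pipeline. Every function below is a hypothetical placeholder rather than any real product API; in practice, the two translation steps (question to query, raw rows to summary) are where an LLM would sit, and the query backend would be a real telemetry store.

```python
# A skeleton of the NLU -> query -> results -> NLG flow. All functions are
# illustrative placeholders with canned data; none of this is a real API.

def question_to_query(question: str) -> str:
    """NLU step: translate the engineer's question into a telemetry query.
    Placeholder: a real system would prompt an LLM or use a semantic parser."""
    return "SELECT device, interface, drops FROM interface_stats WHERE drops > 0"

def run_query(query: str) -> list[dict]:
    """Placeholder for executing the query against a telemetry backend."""
    return [{"device": "edge-rtr-01", "interface": "xe-0/0/1", "drops": 4812}]

def summarize(rows: list[dict]) -> str:
    """NLG step: boil raw rows down to a short, readable answer.
    Placeholder: a real system would hand the rows back to an LLM."""
    r = rows[0]
    return f"{r['device']} {r['interface']} is dropping packets ({r['drops']} drops)."

question = "Why is my Chicago office slow?"
print(summarize(run_query(question_to_query(question))))
```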

Phil Gervasi: Oh yeah. If I had that tool when I was troubleshooting networks and trying to figure out problems where I could literally just ask the network, that would be amazing. And if that's where we're headed, that's great. So what are your experiences then working with ChatGPT in your experimentation? You don't have to get into what you're doing at work necessarily, but I know that you've built some applications with ChatGPT. What are your experiences there with that?

Ryan Booth: First and foremost, the one I love the absolute most is the repetitive task. So if I'm going in and right now as a software engineer, I need to build out a feature in my application, and that feature has all these different components that have to be built out, and there's this whole list of stuff that needs to be done. It's pretty cookie cutter. And in the past it was basically just manually type all this stuff out and just grind through it. And that was part of a software engineer's job. And we see that a lot in network automation with the various config files we got to build or the various modules that have to be built to handle our infrastructure, yada, yada. I can just quickly go to ChatGPT and say, " Hey, build this, do this for me," and give it the specifics I need and it spits it out. And so it's then just copy and paste into the code base, update it as I need to, and then move forward and do the complex work myself. So that's a massive time savings there. And then it also gets into what I mentioned earlier with the stuff that it's like, " Well, how would I solve this problem if I got to do this, this, and this? How do I build that? Or what's the most robust way to do it?" I can turn around and very quickly ask ChatGPT, " Hey, how would you do this?" And take what they say and be like, " Oh, okay, okay, they're doing this, this, this and this," and then you can take it from there. And so you get that expert opinion or at least the starting point to move forward. So to me, those are the two biggest ones. I love to use it to type emails, especially general emails that happen all the time, or there's a lot of emails that I just don't enjoy typing because I struggle with how to word it. And so it's like, " All right, ChatGPT, how are you going to word this email? Okay, that's way better than what I would say. Thank you," or tweak it. And so that helps me stay focused on my day to day. Another one that was really cool that my team lead actually did, I won't take credit for this one, but I thought this one was awesome. We were in the middle of a planning session for our next sprint for the next couple weeks of work that our team was going to be doing, and we were trying to scope out a feature that we were going to be building, and it was one that we wanted to add in, but we didn't know how much time it was really going to take or how complex it was going to be. So really we couldn't guess the number of hours or estimate the number of hours it was going to take, so we couldn't tell if we were going to fit it in the sprint or not. And so what my team lead did, he shares his screen while he's doing all of this. He goes up, he asks ChatGPT, " How do you build this feature? How do you do this specific thing?" It spit it out. He had to tweak it a little. And as he's seeing the code get built on the screen, the whole team's watching as well. And so we can see, okay, if we're building it for ourselves, we can visualize everything that needs to happen. And it's like, " Oh, we didn't think about this part, but ChatGPT caught it." It's like, " Hey, you got to build this out." And we're like, " Okay, then we can better scope this." And then we were able to say, " Hey, it's probably going to take a week." And so that was another cool feature that we were able to just use it on the side. Now those are more admin feature type things, so those are more boring.

Phil Gervasi: Boring, but needful. I really feel like what we've been talking about, because the broader topic of this conversation today is AI, which then ended up being about ML, but the large language models, specifically in this case ChatGPT, are really an interface between a human being and all that AI that's happening. And so now we have an easy way to get access to the data, not just the data, but meaning in the data, the insight from the data, and be able to manipulate it on the fly and then get answers very fast, get information very fast. And then we're talking about developing config and all that stuff as well, and I get it. But what are some of the problems then with this interface? Did everything go perfectly according to plan in your development of these applications and then in your team lead developing that application?

Ryan Booth: Yeah, no, and it doesn't. I think we're a decent ways away from getting total 100% accuracy on anything, but also well... So that's a lot of the misconception and let me call that out right now. One of the core understandings of data science and ML and AI is that there's a certain point that you can't get any better than a human could. So if a human is able to do a job at 95% accuracy, it's very hard to say in a lot of situations that a machine model, an ML model, an AI model can do any better than that.

Phil Gervasi: Really?

Ryan Booth: Now, there are applications where it can, but there's a very small gap between how much better it can do it than a human. Now, there are exceptions to that rule. But in general, we shouldn't be expecting it to do much better than a human in a lot of scenarios. So if you are building config and a human can only do it at 98% accuracy, probably not even that good, if you get that close, you're in a win scenario. In my experience, I found that knowledge of what you're trying to build and knowledge of what you're asking the AI or ML system to provide you is critical. So you get much better results if you are able to be more detailed in the prompts that you give. So people have started talking about prompt engineering as a career. I don't know if I'd go as far as that, but it will play a very critical role in all of this, being able to provide very good prompts. With that in mind, I did this project that I'm just playing around with going forward, and I was like, " I'm just going to have ChatGPT build me a full containerized web application that is built on Flask and Python and yada yada, and have it deployed into a cloud environment or Kubernetes or whatever." And I'm not going to touch the config at all. I'm going to have AI do everything, specifically ChatGPT-4. And then, so what I did is I started walking through the project and it's like, " Okay, how would I build this out myself?" And so I started asking ChatGPT to build all the configs for me. The first thing I needed was exact knowledge of how the application is built so I could get the configs out of it. Then what I found out later on is that order of operations was pretty critical. So when I would start going through it, things would go smoothly, configs would get updated as I typed requests for new configs in. But later on when I had to jump back to a specific config file or to a specific module in Python and say, " Hey, I need you to update this to add this, this and this," it had a harder time with that. And so the workflow and the order of operations as you go through it, I feel, are more critical because the AI still gets lost in some of that information. So to me, those are the areas that you have to pay attention to. And as we get deeper into using it and we build these into our workflows, I think those are the areas we've got to focus on for accuracy.
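
One way to picture the "order of operations" problem Ryan ran into is to keep the full conversation history when calling the model, so a later request to update an earlier config still has that config in context. This is a hedged sketch, not a description of how Ryan's project actually works: it assumes the openai v1.x client, and the model name and prompts are illustrative.

```python
# Keeping the whole message history so later "update that file" requests still
# see the earlier output. Assumes `openai` >=1.0 and OPENAI_API_KEY; model name
# is an assumption.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system",
            "content": "You are helping build a containerized Flask application."}]

def ask(prompt: str) -> str:
    """Append the prompt, call the model with the full history, remember the reply."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

ask("Write a Dockerfile for a Flask app served by gunicorn.")
# Because the Dockerfile is still in `history`, a later tweak can refer back to it.
print(ask("Update that Dockerfile to add a HEALTHCHECK."))
```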

Phil Gervasi: And ultimately, like anything else in technology or really anything else that exists, everything is iterative, right?

Ryan Booth: Yeah.

Phil Gervasi: I think it's amazing where we are and to suggest that it's no good because it's not magic yet is silly. It's a huge step forward and I appreciate it for what it is and look forward to what's coming down the road, which ultimately begs the question, it doesn't beg the question. To me, this implies that you really don't need a PhD in data science or advanced understanding of machine learning or even a cursory understanding of data analysis to be able to interact with the technology now that we have this human language model that can sit in between us as a human being at a keyboard and the actual artificial intelligence workflow that happens under the hood, right?

Ryan Booth: Yeah, I agree with that. I learned when I did my transition from network engineer into software engineer that it's not one of those things where I was going to become a software engineer overnight. And you got to understand and appreciate that these data scientists and these engineers that build these models and build these systems, they got PhDs in this stuff. They've been working in these industries for 20-plus years to do this. And for us to just jump in there last minute with a handful of years of experience, we're not getting the same quality as they are, and that has to be appreciated. But as an end user, that's what we are, that's our focus, is we use the products they build. And so we have to learn how to leverage them in those ways, to your point. And it's become easier and easier, and every single year it gets better and better and easier and easier for us to actually leverage these tools. I think that was one of the bigger explosions. Well, the explosion and the hype around ChatGPT and LLMs didn't necessarily come from OpenAI's awesome GPT models, okay? They were pretty standard and they're pretty good. They hold the industry standard for performance now, but it wasn't necessarily that that model exists and it was much better. It was how it was delivered to the users. It gave everybody a very simple interface to play with it. You didn't have to understand all the bells and whistles, and that is where I think the explosion in the industry happened: because of the simplicity there.

Phil Gervasi: Okay, yeah, that makes sense because frankly, I am not that smart and I can log into my ChatGPT subscription and start doing productive things with it right away. And I am no data scientist, that's for sure. But I do look forward to how we apply this technology, not just the large language models and the natural language... What does the P stand for in natural language NLP?

Ryan Booth: Processing.

Phil Gervasi: Processing, thank you. Not just that, but the underlying artificial intelligence workflow, the machine learning workflow that's adding that insight to the entire process that we may not have otherwise had, or that would be just insurmountable for us to attain on our own in a reasonable amount of time, considering the amount of data that we have to look at now and the complexity of networking today. So anyway, Ryan, this has been really excellent. I'm looking forward to seeing more of what you are personally working on, and of course, just tracking your career as well because I know that's just really interesting. If I had it to do over again, networking is great and I love it as an industry and as a career, but I am so interested in this stuff. To me, it's not just like the SDN fad, it's very real. And not just because it's being used in networking; you can see how this technology is used across the world in a variety of industries and has real value. It's very attractive to me. I would definitely look at that as a career option if I was 18 years old graduating high school.

Ryan Booth: Yeah. I would add to that too, and it's career advice and take it as it is. At some point in time, we all have to pick where we're going to focus on and we pick what we're going to do when we grow up. And a lot of us, I think we get stuck in the rut that that's where we got to stay. And I've never really liked that myself. You can see that I've bounced around a lot. I was given the advice a while back that we all get to choose our own adventure, and so I really take that to heart and I would hope that everybody does that as well. There's a lot of movement that can happen in this industry, and you can follow those interests as much as you want. It's just how much effort do you want to put into it? And you just got to be very open- minded about it and go for it.

Phil Gervasi: Yeah, I agree. We reserve the right to change our minds. And I add the caveat, especially when I talk to my own kids, I have one that's starting to look at colleges, I say to her, "You have the right to change your mind about majors and career options, but whatever you do, you give it maximum effort. Don't play around. Dive into it and give it your all. And if you want to change your mind five years later, that's fine. I've done it and it's worked out okay." Anyway, Ryan, it's really been great to have you today. I really appreciate talking to you. I'd love to talk to you again and get into the weeds even more about maybe specific ML models and how we apply them and why. So if anyone has a question or comment, how can they find you online?

Ryan Booth: So Twitter and LinkedIn are the best ways to go. Twitter's whatever it is right now, and maybe it'll be gone by the time this is published. Who knows? But I'm @thatoneguy_15 most anywhere, like Reddit, LinkedIn, Twitter, all those various places; thatoneguy_15 is my handle, but LinkedIn's probably the smartest way to get ahold of me. Reach out. Anybody's welcome, DM me, reach out and chat. I love to talk about any of this. My GitHub's out there, the project that I talked about, about building the web app with ChatGPT-4, that's actually out on my GitHub page. I have the full context of the whole thing, just as an experiment, and I continue to play around with it and push stuff out there. So if you're interested, check that out as well, or contribute if you want. I don't care. Those are probably the best ways to get ahold of me.

Phil Gervasi: Great, thanks. And you can find me on Twitter still, network_phil. You can search my name on LinkedIn, and if you have an idea for an episode or if you'd like to be a guest on the podcast, I'd love to hear from you. Just send me an email at telemetrynow@kentik.com. So until next time, thanks for listening. Bye-bye.

About Telemetry Now

Do you dread forgetting to use the “add” command on a trunk port? Do you grit your teeth when the coffee maker isn't working, and everyone says, “It’s the network’s fault?” Do you like to blame DNS for everything because you know deep down, in the bottom of your heart, it probably is DNS?

Well, you're in the right place! Telemetry Now is the podcast for you!

Tune in and let the packets wash over you as host Phil Gervasi and his expert guests talk networking, network engineering and related careers, emerging technologies, and more.
