Telemetry News Now.
Welcome to another episode of Telemetry News Now. We are recording on, not July, June eleventh, two thousand twenty five. And, Justin, I believe you are in the midst, or maybe just wrapping up, NANOG in Denver, Colorado right now. Is that right?
I am. Yep. It's the final day of NANOG here in Colorado, so it's where I am at and what I am doing.
Yeah. And thank you so much for taking a little bit of time out of the day. I know it's very busy when you're at these events, talking to folks, manning the booth if you're even doing that. So appreciate it. So let's jump in the headlines for today.
Starting off, we have from the Cisco blog from yesterday, June tenth. Well, not only is NANOG happening this week, but Cisco Live is going on this week all the way on the other coast from me in San Diego. And so far, they have unveiled kind of a sweeping set of innovations, networking innovations as well, focused on enabling the shift to agentic AI. Surprise, surprise, we're leading with an AI article today.
So, of course, that kind of emphasizes the need to rethink the way that we do infrastructure for this new AI era. So some key announcements. They include an AI ready data center with enhanced security, integrated NVIDIA support. We can't talk about AI without bringing up NVIDIA, of course.
The launch of something called AgenticOps.
I like that a lot. AgenticOps. It's very marketing. And I work in technical marketing, so it appeals to me.
Interesting.
Powered by Cisco's new deep network model and tools like AI Canvas for dynamic collaboration. It also sounds a little bit marketing to me, but who knows? We'll see as these things develop. Cisco also introduced its largest refresh of core networking devices in over a decade, and that's the stuff I love as a previous network engineer.
We love hearing about actual, you know, new network gear on this show. And part of that was highlighting its deep integration with Splunk. So I'd say that all of this really reflects Cisco's urgency to lead in the AI space, especially on the infrastructure side Mhmm. And make sure they're they're right at the forefront, especially when we're looking at other vendors as well and, you know, at the forefront with their customers and especially those larger enterprise customers who are sort of ramping up to do their own AI thing now.
Right?
Yeah. You know, I mean, there's clearly a lot of marketing hype in the article, but I actually felt like Cisco, for once, and maybe that's a little too strong, actually has a nice amount of detail in here about what these announcements actually mean. They link into some of their product announcements.
The NVIDIA one that you mentioned sounds like they're actually going to be integrating with NVIDIA's smart NICs, I presume the NICs that connect the fabrics for the NVIDIA GPU systems. So, at least according to Cisco, they're the first non NVIDIA switch that's approved for and certified in these AI environments. So that could be really exciting for Cisco, right, to try and unlock some of the spend in these big fabrics that people are building for AI workload training. And then just a lot more on what they're doing with some of the LLMs to try and make finding your data easier.
So, yeah, a decent amount of technical detail actually in this, linking out to some more details about the various different products they've announced this week.
And when you say this week, it is the middle of Cisco Live, so we're going to see stuff roll out over the next few weeks. You know, summertime is a little bit slower, especially for product launches and for marketing initiatives and things like that, but on the heels of Cisco Live is prime time. So we're gonna definitely see more. But to me, it also speaks to this idea of the AI data center in a box.
So it's a SKU. And so now, as, like, a solutions engineer, sales architect, whatever they're called, right, your different titles for presales folks that are designing these things for their customers.
If you work for a VAR, if you work for Cisco, whoever, you can look at, like, a grouping of SKUs and buy what you need for your own AI initiative, because you are talking about enterprises that are not all in on AI necessarily, where that's their product. They're a bank or they're a financial institution of some sort, or they're, like, you know, a medical facility, a hospital. So the AI initiative isn't, like, the core of who they are. So if Cisco can hand them a SKU of gear, of orchestration platforms, of observability tools, all that kind of stuff, you know, that's very compelling to enterprises that wanna go in that direction but need some help to get there.
Yeah. Hundred percent.
So from TechCrunch on June eleventh, that's today, Uptime Industries, that's new to me. It's a startup. They have introduced Lemony AI. Lemony is l e m o n y, so lemon with a y.
They introduced a compact, low power, quote, unquote, AI in a box device capable of running large language models, AI agents, and of course all the related workflows locally. When I say locally, I don't mean that you're buying a gigantic data center and running it in your data center. Locally as in a box that you can pick up, a server. So this is pretty cool. It's clearly designed for on prem use cases, with each node supporting AI models of up to seventy five billion parameters.
That sounds like a lot, but keep in mind that, you know, the largest foundational models are many hundreds of billions into the trillions of parameters.
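As a quick aside on scale, here's a rough back-of-envelope sketch of what just holding a seventy five billion parameter model's weights in memory looks like. The bytes-per-parameter figures are standard precision assumptions on my part, not anything Uptime Industries has published:

```python
# Rough weight-memory estimate for a 75B-parameter model.
# The precision choices below are illustrative assumptions, not
# anything Lemony has documented about its hardware.
PARAMS = 75e9

def weight_gb(bytes_per_param):
    """Gigabytes needed just for the model weights at a given precision."""
    return PARAMS * bytes_per_param / 1e9

fp16_gb = weight_gb(2)    # 16-bit floats: 150 GB
int4_gb = weight_gb(0.5)  # 4-bit quantized: 37.5 GB

print(fp16_gb, int4_gb)
```

Even aggressively quantized, a model that size wants tens of gigabytes of memory before you count activations or KV cache, which is part of why single-box on prem hardware for this class of model is notable.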
And then, of course, you can cluster these for scalability. So on prem, to me, means a focus on privacy, security, ease of deployment, being able to deploy very quickly in strange and interesting places. So it definitely appeals to industries like finance and health care, I would say. Now, the startup has raised two million dollars. Not a huge amount, but they have raised two million dollars in seed funding.
They plan to expand their Lemony OS, the operating system, to other hardware platforms and multi user environments. Now, if this is something that you are interested in, their pricing is at four ninety nine per month for up to five users to operate this system. Now, of course, like I said, this is a much smaller model than the huge general purpose models. But those are general purpose models.
Mhmm. And folks have been looking at ways, how can I do something with a much smaller model that is more efficient, easier to deploy, much cheaper, that's been either, like, fine tuned or custom or whatever, and is good for my specific use case, my environment? So, no, it's not general purpose.
So, you know, this is kind of akin to that in some ways, I think. Yeah.
And, you know, I think that's the trend that we've started seeing here over, I don't know, the last few weeks, maybe a few months. Like, for a long time, the announcements on all of these new AI models were, like, look, we've made it even bigger. You can do trillions of parameters. As long as you have ten thousand GPUs clustered together, you can run this. Right?
More than that too. Right? Right. So I hear you.
Which, you know, is great if you're a hyperscaler. Right? But as this starts to trickle out to the, you know, large enterprise, even SMBs, who are probably going to have some use cases for AI and for workload training and inference and so forth, the models have to get smaller. Right? And you have to be able to answer, like, how do I start small and scale it over time, as opposed to just going all in and building your own data center and, you know, sucking down ten megawatts of power and buying all these GPUs and so forth. So I think it's interesting to see these companies that are coming out with much more efficient models, you know, for more targeted use cases. So I think this is something that we'll continue to see more of as a trend as we go forward.
I mean, I agree and disagree here. So I agree on a couple points. Yeah. It's gonna be a trend moving forward.
And, yeah, it opens up the door for more use cases for enterprises that don't wanna build a huge data center or can't or won't. Right? Or shouldn't. Mhmm.
So I agree on those points, but I disagree in the sense that we don't know if this is efficient or not. Mhmm. Seventy five billion parameters is just smaller scale. So this may be just an OS that does what it does on a smaller scale.
And the other thing is, it's not a replacement thing for me, because the foundational models, and I use the word foundational to refer to those huge, huge hundreds of billions and then multiple trillion parameter models, are really just offering their AI as a service. Mhmm.
That's still a great use case. And I would say that often it is the right thing to do, as opposed to running a small model or building a model or training a model. Just hook into the, you know, the API of whatever and then connect it to your data in whatever RAG systems or agent workflows you're using, whatever. I think that's still a very, very compelling model, just like we use SaaS for a whole bunch of other applications.
Right? So I agree and disagree. But at the same time, the points that you made, I agree very much on those use cases where folks are like, you know, we can't use this publicly available model, or we can't do this over here, whatever it happens to be, or we need more control because of our use case. We're going to see more and more of that as folks are finding ways to apply AI into situations and not just, like, run POCs all day, but actual, like, real life things that it's helping their organization with.
So from Reuters on June eleventh, more NVIDIA partnerships. We're hearing quite a bit about NVIDIA, and partnerships especially. So, NVIDIA and Perplexity. If you're not familiar with Perplexity, it's a platform that's a little less known than GPT and Claude.
I use it sometimes. I like it a lot. For a while, it was the model that you went to that had, like, web search and things like that. Now they all do.
But at the time, that was kind of their thing, looking at doing browser stuff. In any case, they announced a partnership with over a dozen AI firms across Europe and the Middle East to help develop advanced AI models in local languages and then distribute them to regional businesses and organizations like that.
Now, to me, I don't know about you, Justin, this whole language discussion with LLMs Mhmm.
I think it's kind of picking up lately.
Mhmm.
It wasn't mentioned in this article, but this idea, you know, it's something that I'm personally noticing lately in various headlines and news and podcasts that I listen to. The idea here is NVIDIA is gonna help with generating the synthetic data to support low resource languages and then train models capable of more complex reasoning tasks. And then once they're developed, Perplexity will help deploy those models locally in various countries for business use cases, I assume more than just business use cases.
And according to the partners, NVIDIA and Perplexity, the goal is to create culturally relevant AI solutions. It's very interesting. Oh, and as a side note, we don't know anything about the money part of this whole thing.
Yeah. You know, I saw another announcement. In addition to Cisco Live and NANOG going on this week, there's also Apple's WWDC, the worldwide developer conference, that's going on. And I read an article, I think it was yesterday, coming out of one of the announcements from that, that they're investing in their translation engine, right, from one language to another.
In fact, I think if I read it correctly, they're now able to do it, like, live and on a voice call. Right? Like, I don't know how well it works. I haven't actually played with it myself, but that's what they're trying to accomplish: being able to have someone, for example, who speaks English talk to someone who speaks German, neither speaking the other's language, and have the LLMs do the translations live on the fly, which is just amazing to think of.
I don't know how well it actually will work in the near to long term, but yeah.
Universal translator from Star Trek. Yeah. All you Star Trek fans out there, which always boggled my mind because, you know, you're watching the show and, like, you hear somebody speak in a language, the universal translator picks it up. But, you know, it's just strange how the plot holes exist in Star Trek.
In any case, this idea of, like, language translation, though, has been around for decades and decades. So, you know, certainly, you don't even need the most sophisticated LLMs to do that for you. We have pattern recognition and some translation things that more traditional and smaller scale NLP can do. But, of course, the idea of accommodating more languages than just English, and maybe English and Chinese, right, which is basically what everything is developed in right now, is interesting because it's democratizing access to being able to use various models.
And, you know, we're gonna see, I think, a lot of innovation happen as more people I mean, it's just math. Right? More people get access to this technology. We're gonna see more innovation for sure.
Yeah. I mean, I was in Japan, I think it was, like, two years ago, and I obviously don't speak Japanese. And it was amazing how well both the Apple and the Google Translate apps worked for being able to, like, scan a menu that's in Japanese and figure out, at least within a pretty good amount of error, like, what that actual item on the menu is, and be able to then point to it and say, this is what I want for lunch. Right? Like, being able to order in a restaurant that doesn't even have an English menu. It's amazing just how, you know, the technology is now impacting our lifestyle.
Oh, for sure. Absolutely. And I would say even beyond just translation, you know, just the cultural, figurative language, all that stuff, the context that's part of language, the nuance of language. If you're in another country and you're not an English native speaker, you're not an American or Canadian or Western European or Chinese, and you're interacting and interfacing with these models that are trained on a language that has that cultural component and that context and that nuance, there's going to be a gap. So imagine having models that were trained so that, through their semantic similarity and through transformer models and all that stuff that they use, they are able to understand, synthesize, and then even reproduce language in your culture and in the context and nuance of your language. I think that's gonna be very, very impactful.
Mhmm.
Alright. Well, moving on. Next article is from Reuters titled Salesforce blocks AI rivals from using Slack data. So this one's interesting. You know, I have mixed emotions on this one. On the one hand, I can kind of understand Salesforce's position on this with Slack data.
We've talked a lot about privacy of data and, you know, allowing models to train on it. Does that allow data to leak? And if you're an organization, like Kentik, where we work here, Philip, of course, a Slack organization. There's a lot of really proprietary information that we all chat about throughout the, you know, workday, and talk about new product releases and marketing campaigns that we're working on and so forth. So if that data were to leak out, just using one example, it can be really, you know, really bad for our company and for our strategy. So I could sort of understand why Salesforce wants to be a little careful on what they allow as far as training on their model.
On the flip side, I'm a big proponent of a customer's data is the customer's data. Right? So, like, they're kind of locking this away to where, even with the APIs, Glean was the company in the article that they were talking about, that they're not allowing to access their data through their APIs for model training on that data. If you're an organization and you want to use Glean or you wanna use ChatGPT or you wanna use some other AI engine to make yourself more efficient, I think you should be able to have access to that.
You should be able to, you know, opt in, opt out. But to just basically make a blanket statement saying, nope, sorry, just can't do it with, you know, Slack's product? That seems a little heavy handed to me.
Yeah. I mean, I see both points. Like you said, opt in, opt out as an option, but also the security concern around personal data and, in this case, customer data. So it's third party data that you're responsible for. I understand both of those sides, but I actually expect to see more and more of this continue as we enter, I don't know what to call it, a data drought. I mean, when folks need more data, there are organizations looking at synthetic data to train models because there isn't enough data out there anymore.
So if there's these pockets of data that are relevant to the organization, via Slack or anything else, it's always going to be a target.
And so I think that this security concern, especially around PII and customer related data and things like that, it's only going to increase in scope. We're going to hear about it more for sure because I think that's where we're at right now, where folks are like, I need more data.
Of course.
Like Johnny Five in Short Circuit, more input, you know. More data is how we can continue to progress. And without it, we can't.
Hundred percent. Yep. The next article is from Network World, titled Netgear's enterprise ambitions grow with SASE acquisition. So, looks like Netgear has acquired a company called Exium.
That's spelled E X I U M. So I wasn't familiar with this company, but apparently they were founded back in twenty nineteen, and they're, of course, a SASE company, and that's the product that they offer. I find this one kind of an interesting acquisition by Netgear, because I really think of them, and maybe I'm too narrow in my thinking of what Netgear does, but I really think of them as being more like consumer grade type stuff, maybe the low end of, like, the SMB market. I don't typically think of them being in the enterprise, and I think of the target market for SASE being more large to medium sized enterprise. But maybe I'm thinking about this wrong.
Well, I don't think you're thinking about it wrong. I mean, certainly, I agree with it. Netgear has traditionally been residential grade, so home stuff, and SMB for sure. And I've seen Netgear even in enterprise, where folks will have these, like, throwaway unmanaged switches and stuff like that. So you do see that.
Under my desk right now, I have a five port Netgear switch. Of course, I have my big switches over there, but, so I agree. And, you know, this idea that SASE is not necessarily SMB, that's interesting, because I don't know if I agree. I think that there's a huge market for things like SD WAN and other, you know, SASE services for the SMB market that doesn't wanna build infrastructure.
I mean, even enterprises that don't want to. So I think this is an interesting acquisition for sure. Definitely agree with you that we are talking about things like SMB, probably a lot of retail, small office, home office, that kind of thing. Huge market.
You know? Mhmm. It's a different market.
Yeah.
But, you know, even the big vendors have been in it. We have seen Cisco dip their toe into the SMB space, maybe even into the home office space. We've seen, not necessarily Arista and Juniper, but we've seen those kinds of things happen, you know, with Meraki a little bit into the SMB space as well. Fortinet used to be thoroughly SMB, and they've kind of gone upmarket.
So I think these organizations definitely realize that there's money to be made there. It is a different total addressable market than large enterprise, of course. But, you know, this is the ubiquity of, well, Ubiquiti, which is a company, but just how ubiquitous this technology is becoming. I remember when SD WAN came out and we were talking about it in the context of enterprise and all that stuff.
And little by little, I started to see the local pet store with eight branches, just in your area. They were like, I don't want to run site to site VPNs. And they just ran, you know, some Silver Peak boxes. Done.
You know? So, yep. I actually think this makes a lot of sense. I agree. I've never heard of Exium.
It's a brand new company anyway. So Mhmm. You know, we'll see how this goes for the SMB world.
Yeah. And, I mean, the article did kind of focus on that, Phil. There was a quote from the CEO of Exium saying, what I see as an opportunity uniquely for Netgear, given our roots, is to address the needs of small and medium enterprise customers. Right? So that is the target market. Like you said, it may not be exciting to talk about compared to a Fortune five hundred company, but there's a long tail, a lot of companies out there that fit into that small and medium sized enterprise bucket, like you said, pet stores or, you know, local retail. There's a lot of different companies that can be addressed there at the low end of the market, and you can, you know, make good money in volume there.
So Yeah.
I guess that that angle definitely makes sense.
Or a huge enterprise that's super cheap and designing their network poorly. Come on, Justin.
How many times have you seen that in your days?
Yeah. I have been. When I was a VAR engineer, I'd be on-site somewhere doing my thing, and I'm like, oh, look at that, a bunch of daisy chained, you know, Netgear switches over there.
That's that's not good.
But without redundant power supplies. So as soon as the power supply dies, there goes the entire thing. Yep. No, that is, I guess, you know, as a network engineer, that would be one of my concerns with this, that folks who don't understand all of the technical details and proper design would be like, oh, great, this is perfect. We'll just buy this, and it'll work great, and it'll be enterprise grade.
Yeah. But, you know, there's going to be an easy button, which, honestly, we see in enterprise stuff as well anyway. But certainly, the easy button is there. That's a big part of SMB for sure.
Yeah. All right. The final article for today, comes from the Broadcom newsroom. Broadcom ships Tomahawk six, the world's first one hundred and two point four terabit switch.
So I've been out of the hardware game for about eight years now. I've been over here at Kentik, and it is amazing to me how much things have changed since I left Juniper in twenty seventeen, just the amount of capacity that we can now put on a single chip and a single switch. I think Broadcom was on, like, maybe Tomahawk two when I left, and here we're at Tomahawk six. So, as Moore's Law dictates, right, there's always the next generation coming out with bigger capacity and higher density and all kinds of stuff.
So as you would expect, the article does say that this has got AI optimized features, whatever that means, built into the ASIC. Right? So you have to put some AI on there. Right?
You can't have a press release these days without the words AI in there. But, no, I think it's interesting to see just how much of an explosion of bandwidth there is on these chips.
Well, the bandwidth alone is related to the AI conversation, so I'll give them that. Right? You know, the more bandwidth you can squeeze out, and clean, you know, lossless connectivity, which is the key here with running AI workloads. And this is double anything else that's out there right now. So we're probably gonna start seeing deployments. I don't know, how long is it until we start seeing these in the wild?
You know what? Normally a while, but probably not that long considering how fast people are spinning up new AI data centers.
Well, and, you know, one of the things they did say in the article is that it's compliant with a lot of the newer stuff that's come out from the Ultra Ethernet Consortium, the UEC. Right? So some of their newer specs, this is compliant with. I mean, these chips are obviously designed to go into switches, and they're being installed in these GPU cluster environments. In fact, they said that it could scale up to a cluster size of five hundred and twelve XPUs, and then a hundred thousand plus XPUs if you do a two tier scale out network with two hundred gig links. So, you know, clearly it's designed and optimized for those types of environments.
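Those cluster numbers fall out of simple port arithmetic. Here's a back-of-envelope sketch, my own math rather than anything from Broadcom's press release, assuming a non-oversubscribed two-tier leaf and spine design:

```python
# 102.4 Tbps of switch capacity carved into 200G links.
SWITCH_GBPS = 102_400
LINK_GBPS = 200

ports = SWITCH_GBPS // LINK_GBPS          # 512 ports per switch

# One flat switch: every port faces an XPU.
single_tier_xpus = ports                  # 512

# Two-tier, non-oversubscribed: each leaf splits its ports half down
# (to XPUs) and half up (to spines); a spine's radix caps the leaf count.
leaves = ports                            # up to 512 leaves
xpus_per_leaf = ports // 2                # 256 XPU-facing ports per leaf
two_tier_xpus = leaves * xpus_per_leaf    # 131,072

print(single_tier_xpus, two_tier_xpus)
```

That lands a bit above the hundred thousand plus figure they quote, which is about what you'd expect from this kind of idealized count.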
Yeah. Yeah. And, you know, I joked about, like, you gotta talk about AI. I mean, there are actual technical things here.
We talked about the bandwidth, but there's advancements in how they deliver telemetry to you, which is important. Like, you know, where am I dropping packets? Congestion control, avoidance, that kind of thing. Failure detection, that's obviously very important.
We're talking about typically one to one subscription fabrics. You know, we're not oversubscribing. There's no, like, five to one, three to one, no idle links. So that kind of stuff.
So there's other things, like load balancing, that we have to understand, how that's going, and just having that as a feature in the first place.
Well, a lot of the UEC stuff that I've been reading about, they're actually not even doing load balancing in the way you and I think of it. They're actually just spraying the packets across all the paths available and then having to reorder and reassemble things on the other end, which is gonna be fascinating to see, you know, how well that works. I'm presuming they'll figure it out, but I don't know if you ever had experience with that, like, in the old frame relay days, Phil, but, you know, in the early days, there were things where you would do spraying of packets, and then you need big buffers on the other end to be able to accept all those, put them back in the right order, put the packets back together, and send it up the OSI layers, right, to the applications and so forth. And if you don't do that correctly, you have a bunch of drops and out of order packets, and that affects the performance too.
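To illustrate the receive-side idea Justin is describing, here's a toy reorder buffer in Python. It's a sketch of the general mechanism, spray packets across unequal-latency paths and release them strictly in sequence order, not how any actual UEC implementation works:

```python
class ReorderBuffer:
    """Hold out-of-order arrivals and release packets strictly in
    sequence-number order, the way the receiving end of a
    packet-sprayed fabric has to."""
    def __init__(self):
        self.expected = 0      # next sequence number to deliver
        self.held = set()      # arrived but not yet deliverable
        self.delivered = []

    def receive(self, seq):
        self.held.add(seq)
        # Drain every consecutive packet we now have.
        while self.expected in self.held:
            self.held.remove(self.expected)
            self.delivered.append(self.expected)
            self.expected += 1

packets = list(range(16))
# Spray packets round-robin across 4 paths with different latencies,
# so the arrival order differs from the send order.
arrival_time = {seq: (seq % 4) * 3 + seq * 0.1 for seq in packets}
arrivals = sorted(packets, key=lambda s: arrival_time[s])

buf = ReorderBuffer()
for seq in arrivals:
    buf.receive(seq)

print(arrivals)        # scrambled arrival order
print(buf.delivered)   # in-order delivery despite the spray
```

The `held` set is the stand-in for those big receive buffers: the more the paths diverge in latency, the more packets sit in it waiting for the next expected sequence number, which is exactly the memory and performance cost being discussed.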
So, yep, exactly. I didn't do that working with frame relay, but I did working for some financial customers back in the day, like New York City financial firms, where it was, you know, sub second transactions kind of stuff. And so we were looking at lossless data center connectivity beyond what you could do with just, like, Nexus switches and stuff like that.
Alright. Let's talk about upcoming events. We mentioned there's a few going on this week. Moving to next week, I'm going to be heading over to Dublin to participate in a panel at an event called CCT Global Cloud Content and Telecom Executive Summit.
The topic of the panel that I'll be on is trends in network as a service. So talking a little bit about some of the SASE stuff we were talking about earlier in the article, just kind of what we see as trends in the industry. So if you're planning to attend that one, come by. I'd be honored to have you listen to my panel, going on Wednesday next week, June eighteenth.
So the conference runs June seventeenth to the nineteenth, but my panel is on the eighteenth.
Next up is Kentucky NUG in Lexington, Kentucky on June twenty sixth, Ohio NUG in Cleveland on July tenth.
And then I think there's a little bit of a lull in some of the events. The next one that I've got on our radar, Phil, is the AWS Summit in New York on July seventeenth. Are you planning to go check that one out? I know that's kind of in your area there.
Yeah. It's local to me, so I'm just gonna hop on the train and go for the day.
Cool. Yep. And I'll just put a plug in here as we wrap up for the regional NUGs. If you go out to the USNUA website, you can sign up and get notified when an event gets scheduled in your local area. They're free to attend. Come and talk shop with some other folks who are networking professionals in your area. So I encourage everyone to go and sign up and participate in their local NUGs.
Thanks for listening, everybody. Those are the headlines. Bye bye.