Kentik - Network Observability
Telemetry Now  |  Season 1 - Episode 17  |  June 27, 2023

Demystifying CDNs: Behind the Scenes of Rich Content Delivery



The way we consume content today has meant content creators and service providers have had to come up with ways to deliver high-quality and real-time content over the public internet down to the device in our hands. 

CDNs, or content delivery networks, were developed to solve this problem. And today, as we focus on more livestreamed content, they're at it again, solving the technical challenges of delivering high-quality live content over the public internet to wherever you are in the world.

In this episode, Nina Bargisen, a subject matter expert in the service provider and CDN spaces, joins us to talk about why CDNs exist, how they work, and what the future holds for content delivery.


What's your favorite streaming service? Was it Prime, Netflix, Disney, or maybe you prefer streaming YouTube videos more than anything else? Or maybe you're a gamer and you play live online multiplayer games.

And, of course, today, even the biggest sporting events in the entire world are streamed live.

The way that we consume content today has meant that organizations, that's content creators and service providers, have had to come up with ways to deliver high quality and real time content over the public internet down to the device that's in my hand.

CDNs, or content delivery networks, have largely solved this problem, as have some of the largest content creator organizations themselves.

And today, as we focus more on live stream content, they're at it again to solve the technical challenges of delivering high quality live content on the public internet to wherever you are in the world.

So with me today is Nina Bargisen, a subject matter expert in the service provider and CDN spaces, a prolific writer and speaker, and an experienced engineer and network planner who has designed the very solutions that we use to get our content.

So we'll be talking about CDNs, why they exist, how they work, and what the future holds for content delivery. My name is Philip Gervasi, and this is Telemetry Now.

Hey, Nina. Thank you for coming on today. Really appreciate it. I know that you have a very extensive background in this particular topic, so I'm really interested to chat with you about content delivery networks, CDNs, especially from a technical perspective. But before we get into it, I do wanna hear a little bit from you about your background in the space and in networking and technology in general. Can you give us a little of that?

Yeah. Sure, Phil. And thank you for having me.

It will be a pleasure, I'm sure, to chat. So, obviously, I'm all self-learned, what do you call it, I lack the English word for that.

But basically I started out working in telco and technology a couple of decades ago, and I started out as a project manager. And then I did the opposite of what most people did at that time, where they went from some technical role, and they needed to grow and do something else, and they turned into project managers.

I did project management, and then I was like, this fucking sucks.

I am not good at keeping track of people. I mean, I can talk to people and make them work, but then writing all the summaries and... yeah.

No. Not for me. So I wanted to be more one of the tech people who did cool stuff, you know, because the project manager is a facilitator, and that's a great role. It's really good.

But I wanted to get my fingers in. So I started asking people, hey, how does this work? This IP thing, what's that about? And eventually, I ended up working as a network planner, managing all of the inbound traffic on an ISP backbone, doing traffic engineering with BGP, handling all the peering relationships, debugging: why is one of these customers' offices in India sending some of their traffic one direction around the world and some of their traffic the other direction around the world? This doesn't make sense, what the fuck is happening?

And then realizing it was due to how Tata was doing traffic engineering and load balancing over two different transits, where the prefix their WAN connection came from was routed one way and their LAN was routed another way, and shit just got messed up.


And then, as a network planner, and this is in particular why I know a bit about how CDNs work: at some point when you run an access network, in the early to mid two thousands, the CDNs show up. Right?

And then first you have Akamai, then you have Google, YouTube shows up. We remember how YouTube grew so very, very fast.

And then the video on demand services, Netflix, start launching all over the world. And we figured out that we needed to do this right. So basically, I spent a year or so researching how the CDNs that we were putting into our networks work, how we should deploy them inside our network, and what the longer-term plan should be.

And when I did that, I realized there were a lot of limitations, and a lot of assumptions that you make when you just look at it from the material they give you and your thoughts about how this works. Oh, you should go to the nearest server. Like, that sounds great. I'll just put servers all over the world, and then it will work. And then you realize, oh, there are methods, and there are reasons, and there are definitions of what the nearest server is.

And it's not always doing what you wanted it to do unless you really know. So realizing that you need a really good understanding of, a, how each CDN works, and b, how your network is designed and what the architecture is.

Not just the connectivity or the topology, which had been my focus, but also your name servers, your IP address plan, your aggregation, all of that shit. And then you run into things like security as well. It's sort of like, oh, you can't do that, because, uh-uh, you know, we can't announce more specifics. But we need it! But no, no, no, we have put all this security in place to make sure that we're not announcing more specifics. But then this shit doesn't work.

Oh, and then you had to debate for three months to get permission to write specific policies to announce to the servers inside your network, announcing more specifics there. And then you realize, oh, you can't do this, because all the prefixes are tagged with no-export and shit. So yeah. It was a long journey, but eventually we got there, and I learned a lot.

Yep. Yep. That's the self-taught part. Right? You learn a lot by doing. I think that's probably what you meant, right? That you learn on the job, but also being self-taught and looking for the materials that you need to be successful at the project that you're working on that day or that month or that year. And then you add that to your body of knowledge and move on. So I kind of feel like a lot of my career was haphazard like that.

Not not that to say that yours was haphazard, but sometimes I feel like mine.

Oh, yes.

Yeah. Okay. Good. We're on the same page. Well, yeah. I got, like, a VXLAN project and a data center deployment.

I'm, like, I never configured VXLAN before. I guess I'm learning that now. And then so went my career for many years. And then by the end of a decade, you have this body of knowledge, and, of course, hopefully you're using that knowledge over and over again.

So it's valuable. Sometimes, no. Like, I figured out, you know, OTV in a data center once, and that was it. That's the Cisco proprietary overlay.

Never used it again. So there are those. But, yeah, the security conversation that you mentioned, yeah, I've had those. That sounded very reminiscent, where security is like, you can't do this.

Well, nothing works if we don't do that. Well, you know, and then you have this tension, and you go back and forth forever. Especially when, I think, you know, there's that balance with the policy of least privilege, I get it, and, you know, trying to only permit the traffic that you want to permit and all of that. So there's that balance. But then sometimes you're working with security folks that don't have a super strong technical understanding.

So actually, that specific part I did not relate to, because we did have some very, very good and strong engineers who were also handling the security.

So what I had to do with them, to win this discussion, or to get to do what I wanted us to do:

I had to do the money talk, because we were basically just looking at how we were building our network. And we were using big iron, as we called it back then. Right? You know, big, expensive routers, lots of features, because we were running a multi-service network. We had important traffic. We had less important traffic. We were, you know, running voice.

We were moving voice. We were moving TV. Like, the cable TV was being moved to run on the IP backbone, still going out in its closed circuit to the cable TV customers, not running as IPTV yet, but the transport layer was moved away from the dedicated lines onto the IP backbone. So we had a lot of traffic on the backbone that we needed to make sure had high uptime and all that shit.

And then we had all the over-the-top video, and we're like, well, we're building an expensive backbone, so we don't want this traffic on it. But we know our customers want their traffic, and we also knew that we couldn't sell it, we couldn't get paid. We were too small to ever win that debate, and honestly, we really didn't want to either, but, you know, our managers wanted to. But no, not gonna happen when your network is that size.

So you're coming from the service provider space, very focused on the service provider space as far as content delivery, which makes sense, because that's what CDNs run on, you know. But ultimately, it's the idea of traffic engineering and making sure that the services are being delivered the way they're supposed to be delivered, which is different than probably some in our audience who come from an enterprise engineering background.

Like, traffic engineering, you do that to an extent, but you're not necessarily engineering a backbone at even a large enterprise.

And you mentioned the term access network. You know, that term is used differently. So what does an access network mean in the service provider and CDN space?

So actually, I think it's one of those terms that can mean so many different things. If you're working at the service provider, right, the access network, that's basically after IP. So that's where IP stops. That is the last mile, I think people would call it as well. So that's the line from either the GPON or the BRAS or the MSAN.

Or it's the radio network from a mobile provider. That would be the access network.

All of the DSL lines, back when we were using DSL, and now it's the fiber and the GPON and the whole fiber access network out to the consumers. And then that network is connected to the IP layer. And then you have an aggregation layer or a metro layer, and then you have your core backbone.

That's one way of using it. Yep. As soon as you start working in the CDN space, which is where I also went after working at the service provider, at the access network I was working at, then a network that connects consumers, or connects end users, because it can be businesses as well, is an access network, because it provides access to the internet.

How about enterprises? What does an access network mean there?

Well, it's the same general idea.

But, like the access layer in a three-tier design. Right? The access layer is where end users actually plug into the campus. So it really is the same thing logically, except that instead of these autonomous networks, and when I say autonomous, not like an ASN, but these large organizations, like an entire campus, accessing the provider network, it's a literal individual person accessing by plugging into a jack, or connecting over the air, or whatever it happens to be, or a server connecting into a switch. So it's the same idea.

And there are technologies and methodologies that go along with that. So it's not as simple as plugging something in. And I'm sure, you know, the service provider side is the same. It's, you know, preventing layer two loops and broadcast storms and thinking about spanning tree and things like that, which you still have to think about in a campus, with devices that are spitting out bad traffic sometimes, you know.

You know, because when you built, like, a metro network, that was, at least back when I was working, typically an ethernet network.

It was, you know, fiber running ethernet natively on top of that. And so you would be thinking about all that kind of layer two shit that I actually don't know a lot about. I know, you know, you have spanning tree, I remember that one. You know, is there a broadcast storm going on in this area? And then somebody had to run and do something about it, because it was knocking off customers.

Yeah. And as I progressed in my career, I focused on layer two less and less. I do think that a lot of people think maybe it's less sophisticated networking, and I get that, because you're not doing the cool, like, traffic engineering and stuff like that. But that is the point at which people connect to the network and utilize it and consume those services. So it is mission critical.

I think you just do yourself a disservice if you think that ethernet is simple. Yeah.


No. There are so many things. You know, the idea seems simple, but if you look at the internet, if you look at IP, you know, it's just, you go from a to b. Every individual router has its own table, it looks it up, and it goes. It sounds like a simple thing, but we have a lot of people making a living doing research on what the fuck is going on on the internet.

Because we built a really, really complex animal that nobody can understand all in their head, because of how distributed all the intelligence in the network is.

Oh, yeah. Yep.

So I find that fascinating, and it's a little bit the same with ethernet when you dive into it. It's sort of like all these communications and signals that go on all the time, and it's like, yeah. Yeah.

Yeah. We don't really deal with that day to day as engineers. You know, serialization, how, you know, a computer actually turns information into plus and minus voltages on an actual wire, or how some of that gets converted to radio frequency and sent over the air. There's an entire realm of physics that we don't really deal with very much as network engineers, but it's absolutely critical for the proper functioning of networking.

I guess there is a little bit, like, well, if I have to decide on what kind of fiber optic cable to use, you know, maybe having an understanding of how the light travels over the fiber might help. But otherwise, yeah, it's really intriguing stuff to me, very, very much so.

I don't think anybody in our business can be successful if they've never heard of the OSI model and don't understand it.


Right. Because unless you're, like, really very intuitive and lucky, you'll never be able to debug anything if you can't do it systematically from layer to layer. Because, you know, if the lowest layer doesn't work, it doesn't matter, all the debugging you do three layers up is wasted. Right?

Well, I mean, I when I was troubleshooting issues, I always started at the DNS layer.

That's my joke. That's my joke for the podcast.

Is the DNS working? And then the second one is: is the power plugged in?

Did you plug your computer in? So why did CDNs develop in the early two thousands, like you were saying? Is it because, you know, IPTV became a thing? Is it because the technology allowed it? Or is it because these companies, like you mentioned Akamai, and of course I know you were with Netflix for a long time, developed something, and there was a demand for better quality? You know, which came first, and why did that happen all of a sudden? Because it really did, in the span of just a few years, become a thing.

Yeah. So, I think it happened because there was a demand for faster connectivity. I mean, I remember, you know, going on the internet back then. Right? First, you would have to wait for the modem to dial up, with all the sounds going beep beep beep. And then if you were looking at a photo, you were waiting for it to load from the top. I mean, it's always been fascinating how it was from the top.

You know, and after some time, that's just not fun anymore. Right?

So people realized that, you know, you need faster delivery, and we were all wrong about fifty-six k being enough for everyone. You know, who was it? That was Bill Gates, right? Like, I don't see how anybody could need more than that. And I don't remember if it was fifty-six k or some other bandwidth he was mentioning, but it was hilarious, and we've been laughing about it since. Right?

No. So it was the demand for richer content. The idea, or realizing, that if you put richer content out there, more people would watch, and then realizing, okay, we need to do better with rich content.

And we can't really do that if we are waiting for that picture to be sent from a server over in San Jose when the user is sitting in France. Right? That's not gonna work. So we had to figure out a way of doing that better. And it was those brilliant folks, whose names I always forget, at Akamai who came up with the algorithms, right, who came up with the two important things to think about when you wanna try and replicate rich content and put it closer: well, how do we figure out which files should go where?

And how do we figure out a way of making sure the consumer of the content gets to the closest one?

And that's basically the idea of a CDN because that is what a CDN is.

You wanna put files somewhere but you need to figure out how to put them there and you need to figure out how to get the consumers of the files to the right file.

Now, is it the geographically closest location? Is that always the case? Or could it be a location that might not be as close, but has better latency?

See, that's the whole thing, because I'm being vague in this definition. As CDNs have evolved over time, the different CDNs have come up with different ways of defining the closest, and you always have to take into account: well, what is the method for the end user to get to the server? And how do you identify the requester of some content, and how can you tell the requester where to go?

And what is the metric you have to look at? So we went from, you know, what Akamai did, and still does: well, the end user is using this DNS server, and everybody using this DNS server belongs to this CDN server.

As a method of defining closest and directing where it should go.

To today, with the very advanced CDNs, which are constantly measuring latency and the method of reaching the end user.

Like, is it transit, peering, embedded into the end user's ISP?

Along with a whole set of quality KPIs they might be looking at, and then deciding based on that: hey, end user, you're gonna go to that server to get your content. So the idea is the same, but closest is defined by many more parameters today than it was back then.
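To make that "many more parameters" idea concrete, here is a minimal sketch in Python of how an edge server might be ranked. Everything in it is invented for illustration: the metric names, the weights, and the path-type penalty are assumptions, not any real CDN's mapping logic.

```python
# Hypothetical sketch: rank candidate edge servers by latency, by how
# the edge reaches the user's ISP (embedded, peering, or transit), and
# by current load. All weights are made up for illustration.

from dataclasses import dataclass

@dataclass
class Edge:
    name: str
    rtt_ms: float   # measured latency toward the end user
    path: str       # "embedded", "peering", or "transit"
    load: float     # 0.0 (idle) .. 1.0 (saturated)

# Prefer shorter paths into the user's ISP: embedded beats peering beats transit.
PATH_PENALTY_MS = {"embedded": 0.0, "peering": 5.0, "transit": 20.0}

def score(edge: Edge) -> float:
    """Lower is better: latency plus a path penalty plus a load penalty."""
    return edge.rtt_ms + PATH_PENALTY_MS[edge.path] + 50.0 * edge.load

def pick_closest(edges: list[Edge]) -> Edge:
    return min(edges, key=score)

candidates = [
    Edge("fra-embedded-1", rtt_ms=8.0,  path="embedded", load=0.9),
    Edge("fra-ix-2",       rtt_ms=11.0, path="peering",  load=0.2),
    Edge("ams-transit-1",  rtt_ms=14.0, path="transit",  load=0.1),
]

# The saturated embedded node loses to a lightly loaded peering node,
# even though its raw latency is lower.
print(pick_closest(candidates).name)  # prints "fra-ix-2"
```

The point of the sketch is only that "closest" is a scoring function over several signals, not a distance on a map.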

And so are CDNs monitoring the quality of the connection, the quality of the service delivery, to the actual end user? To, like, me in my house? Or to, like, the local CO, or to a region, or things like that? What are they monitoring specifically?

That depends on the service they're serving.


And also on the CDN. Different CDNs do it differently. Right? Mhmm.

So if we take a service like Facebook, and also some of the more general CDNs that have been around for a long time and have a lot of different services that they're serving.

Typically, they would be monitoring the quality to the resolver that you're using.

Right? Because they're still using resolvers for mapping you. So they would sort of be equating connectivity and quality of network to the resolver with the end users that are using this resolver.

Right. Right.

But if you look at a CDN like Netflix, which is serving videos but where they have end-to-end control, they have a client that knows stuff. That client still works pretty simply, but it is monitoring: well, how fast am I playing?

That is the bit rate of the file it is playing. And how fast am I downloading the next segment of that stream?

And if it's playing faster than the download, the next time it asks, it will ask for a lower bit rate.

And it will go up again. It has a recipe of which bit rates it should ask for when going up and down, depending on the device that the client is playing on.

So there you have some intelligence in the client, defining and measuring what is going on on the network. But here very simply and very, very specialized, just looking at: what am I playing, and how fast am I downloading the next segment?

Other streaming technologies could be Sye, which is what Prime Video is using for their live events.


Where with the Sye technology, the Sye server is constantly monitoring, evaluating or estimating the bandwidth to the client.

And again, here we're talking to the client. Again, it's client based, they have end-to-end control, so they can do measurements from the closest edge server to the client. And then based on that, decide which bit rate the stream they're pushing out to that client will have. So again, very specialized, but it depends on the application and the content that you're consuming.
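The server-push variant can be sketched the same way: the edge keeps a smoothed per-client bandwidth estimate and picks the highest stream it can safely push. The EWMA smoothing, the 80% headroom, and the rendition ladder are all assumptions for illustration, not Sye's actual algorithm.

```python
# Hypothetical server-side sketch: the edge server smooths throughput
# samples toward one client with an exponentially weighted moving
# average (EWMA) and pushes the highest rendition that fits.

RENDITIONS_KBPS = [800, 1800, 3500, 6500]  # invented live ladder

class ClientSession:
    def __init__(self) -> None:
        self.est_kbps = 0.0

    def observe(self, sample_kbps: float, alpha: float = 0.3) -> None:
        # EWMA of measured throughput toward this client.
        if self.est_kbps == 0.0:
            self.est_kbps = sample_kbps
        else:
            self.est_kbps = alpha * sample_kbps + (1 - alpha) * self.est_kbps

    def rendition(self, headroom: float = 0.8) -> int:
        # Push the highest rendition within 80% of the estimate.
        fitting = [r for r in RENDITIONS_KBPS if r <= headroom * self.est_kbps]
        return fitting[-1] if fitting else RENDITIONS_KBPS[0]

s = ClientSession()
for sample in (5000.0, 4200.0, 4600.0):
    s.observe(sample)
print(s.rendition())  # prints 3500
```

The contrast with the previous sketch is who decides: here the server holds the estimate and chooses what to push, which is why end-to-end control matters for live delivery.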

But there's no need to be that advanced when you're just downloading, say, pictures for your website.


Or on demand video or ads or whatever it is. Right?

Yeah. There's no real time component to that.

It's Exactly.



Yeah. So I remember it used to be that the content creator wasn't necessarily also the CDN.

Whereas today, you've mentioned some names of CDNs that are also creating the content themselves.

So have they taken ownership of that because they just wanted to have that control, or maybe because they're just so big they can do it themselves and it's cheaper? I mean, why has that shifted as well?

So, it's a good question. As I see it, some content and some content providers became big enough that it made sense for them to build their own, and they were specialized enough to build their own. Remember that Google and Google Cloud started out just with YouTube servers that were placed around in ISPs' networks?

But they have managed to really build on that, and now have a big business, you know, with Google Cloud running on some of the same embedded servers. Right? Mhmm. Yeah. And Netflix as well is the poster child, I guess, of building a very specialized CDN just for the video delivery. We must remember that everything else that Netflix is doing, all the compute, all the customer interactions, the validations, all the security, all of that lives in the cloud. But for this one specialized part of their business, they built a global CDN.

That's one of the biggest CDNs in the world today.

But they still have to partner with providers. So when you say build a CDN, I mean, they're still co-locating resources embedded in various ISPs around the world to actually deliver it. Right?

Yes. Oh, yeah, that's a funny way of looking at it, because the way I've always looked at it is that those are still part of the CDN.

And we must remember the CDN is just the servers, and a method to distribute content, and a method to direct users to the servers.



But then Netflix built their own network. So they have routers. They have a backbone now.

Where they have a lot of their servers placed at their own locations.


And then they have the embedded solutions, where they give servers to ISPs and say, hey, you can put them wherever you want. They actually give them a lot of control over those servers.


And that is also what Akamai came up with: having their own locations, but also offering ISPs, hey, you can put these where you wanna put them, because you know your network best. But then we will require you to do the physical operation of these servers, and we will do the virtual operation, or the logical operation, of them.

So the decision to do one or the other can depend on just cost and operational expense. Right?

Yes. And all of these pieces also need to be thought about: when will it make sense to do one thing or the other? Right? Because if you have an access network, but you have five pops, and in all of those five pops there are internet exchanges and the ability to connect to content through peering,

It doesn't really make sense to do embedded, because it will save you nothing, but it will give you the work and the extra cost of having those servers sitting inside your network, where you could connect to that content for free, or more free than getting the servers.

But if you were, like, an access network in the UK five, seven, ten years ago, you would only be able to connect to content in London. And then it would make a lot of sense to say, yes, please, I will take some of your servers and put them in Scotland or in Manchester and in other big locations, where your end users are but there's no ability to connect to any content.

Right. So it all depends on the topology, the ability to peer, how widely built out the peering ecosystem is where your particular network is.

And ultimately, all of this is then gonna be from a technical perspective. That's all gonna be unicast traffic down to an end user.

I remember learning CCNA almost two decades ago and learning about, oh, yeah.

We use multicast where, well, I think I've configured multicast for voice over IP applications in my career. I've done that a bunch of times and built out trees that way.

There were, like, one or two other applications. One was at a pharmaceutical company that required multicast. Other than that, I mean, it really isn't used at all.

Also, the same thing with, you know, networks that implemented quality of service classes in the network and everything. Oh, yeah, I remember that. So when I was working at Netflix and we were, you know, implementing servers into their networks or setting up peering, they would go, yeah, we'll put the video traffic into the assured forwarding class, because they read in their CCNA book that that's where video traffic belongs. And we would go, no, no, no, don't do that. Don't do that. You need to put it in the best effort class.

But no, no, no, because modern streaming protocols are built for the internet. Mhmm.

They're built to run on the internet. They work very badly if you put them in an assured forwarding class because of the drop profiles because the short forwarding is kind of assuming that you're running it at UDP.


And even though UDP is coming back into streaming, for live streaming in particular, all of the video on demand is running on HTTP, a TCP-based protocol. Right?

Yeah. Absolutely. I mean, how many Zoom calls do we have where the call is fine? The video is, I mean, obviously it's not super four k, but it's fine. And the audio is just fine.

And there's no delay. Right? So that was super fun: but we wanna do it so well for you. Yeah, but please don't.

So one of the main reasons that we moved away from QoS, as far as campus networking, and class of service and quality of service, was that bandwidth was growing so fast and so cheap. And, you know, today you can deploy one hundred gig in an enterprise, QSFPs have come down in price, so that's not even a big deal. When I was still actually turning a wrench, hundred gig was, like, really fancy. But you have this incredible amount of bandwidth, and QoS classes, those kinds of things, don't actually activate until there's contention.

And so if you have an incredible amount of bandwidth, who cares? So do you think that a lot of this has been enabled on the provider side, the content delivery network side, because of the, not advancements, but the increase in bandwidth, both in provider cores and also in the access network? I have gig fiber to my house, which a decade ago was unheard of, to your house at least.

Yeah. Yeah. I have one gig fiber into my house as well, and it's kind of slow?

I mean, to be fair, I never even come close to using it. I have it because I want it. That's the only reason.

But I gotta have it because I can.

No. I think, yes. I mean, obviously, it's been all about the prices of high bandwidth going down, down, down, and the technology letting providers keep increasing the bandwidth, just technology-wise, and then you develop services that take advantage of that.


And I think in particular, the QoS going away is because bandwidth is this cheap, so we might as well. But every single engineer in the world will always choose more bandwidth over QoS configuration.

I mean, I remember when we were implementing it. We would sort of go, it would be nicer if we could just over-provision. But then us in the planning department would go, uh-uh, money, money, money. We can't over-provision. We're gonna run the network this hot because money, and we've calculated that it's okay.

You know, so that's why they were making these configurations, to sort of protect the important services.

But if they could have just thrown bandwidth at it, they would have done that.

Yeah. There are real-life business constraints: the cost of upgrading infrastructure, engineering, you know, all the way down to power requirements. You know, you wanna upgrade devices to accommodate hundred gig, four hundred gig, and there are different power requirements for the devices themselves. So I've done that, where I've had customers that were upgrading bandwidth in their core and then out to their branch offices and stuff like that.

And in their data center especially. And, you know, the project came to a standstill, a multimillion-dollar project came to a standstill, because we had to wait for the electricians to come in and then the local utility to deliver more power to the healthcare facility. So there are a lot of considerations just to increasing bandwidth. But it is interesting, you know, that as we increase bandwidth, we enable more services, like you said. I mean, increasing bandwidth doesn't solve all our problems.

We can still have latency on a high-bandwidth link. There can still be delays in a server responding to you that make an application feel slow, even if you have a gig to your house. So there are certainly other considerations, some of them network, some of them not.
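To put a rough illustration behind that point, here is a back-of-the-envelope sketch of why a faster pipe barely helps a small fetch, while moving the content closer helps a lot. All the numbers and the three-round-trip handshake assumption are illustrative, not measurements:

```python
# Hypothetical model: fetch time is roughly connection-setup RTTs
# plus serialization time (size / bandwidth). Illustrative only.

def fetch_time_ms(size_bytes: float, bandwidth_bps: float,
                  rtt_ms: float, handshakes: int = 3) -> float:
    """Estimate fetch time: handshake RTTs plus time on the wire."""
    transfer_ms = size_bytes * 8 / bandwidth_bps * 1000
    return handshakes * rtt_ms + transfer_ms

small_object = 50_000  # a 50 KB web asset

# Upgrading 100 Mb/s -> 1 Gb/s barely helps when the RTT is 80 ms...
slow_link = fetch_time_ms(small_object, 100e6, rtt_ms=80)   # ~244 ms
fast_link = fetch_time_ms(small_object, 1e9, rtt_ms=80)     # ~240 ms

# ...but serving from a nearby CDN edge (RTT 10 ms) helps far more:
nearby = fetch_time_ms(small_object, 100e6, rtt_ms=10)      # ~34 ms
```

The arithmetic is the whole argument for CDNs on small objects: the round trips dominate, and only proximity shrinks them.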

Yeah. Yeah. I was just at a conference this week where I heard something, and I'm gonna be sad now.

I heard the internet in the US being described as how it was in Africa ten years ago with regards to...

Oh, really?

Tromboning and routing.

I mean, I know my regional area, the northeast US. I know where my traffic goes. I'm in upstate New York, and it's gonna go to either New York City or Boston.

And other cities in the Northeast, Washington, DC, Philadelphia, I know how all the traffic in those metro areas goes. But as far as the entire US, I'm not really sure.

No. I think it was somebody from Austin who was describing how his traffic was routed to Atlanta and then back to San Jose when he was getting some of his content, and he was very sad about that. And then, you know, one of the Africa folks goes, yeah, that was how we were sending traffic to Frankfurt between our networks ten years ago.
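To put a hypothetical number on that kind of tromboning, here is a rough propagation-delay estimate. The city coordinates and the roughly 200,000 km/s in-fiber speed are assumptions, and real paths add router and queuing delay on top:

```python
import math

# Rough estimate of the propagation cost of a "tromboned" route
# versus a direct one. Illustrative assumptions throughout.

def great_circle_km(a, b):
    """Haversine distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))

def one_way_ms(*hops):
    """One-way propagation delay along a chain of (lat, lon) hops,
    assuming light travels ~200,000 km/s in fiber (about 2/3 c)."""
    km = sum(great_circle_km(hops[i], hops[i + 1]) for i in range(len(hops) - 1))
    return km / 200_000 * 1000

austin = (30.27, -97.74)
atlanta = (33.75, -84.39)
san_jose = (37.34, -121.89)

direct = one_way_ms(austin, san_jose)             # roughly 12 ms one-way
trombone = one_way_ms(austin, atlanta, san_jose)  # roughly double that
```

Even before any queuing, the detour roughly doubles the propagation delay, which is exactly what the Austin engineer was complaining about.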

But ultimately, it is this desire to consume more services that's driving all this. Right? You know, back in the day, it was pictures, text. Right? And before that, it was just text.

Well, you know, obviously there was... I mean, I remember twelve years ago when I was at dinner parties and I didn't wanna reply to the "so where do you work" question.

Because I was working at the local telco, and they would start talking about their bills. So I would say I work in porn, because I was building the internet, and the biggest servers on the internet at that time were porn. It's driven by what people are consuming.

So we're building a network that can support what people wanna consume.

So the very beginnings of content delivery networks, years ago, really stemmed from the need to deliver high-quality audio and video. Right? Streaming movies over the public internet, with whatever technologies and methods were used to do that. And that's fine. It almost feels like a solved problem, because today what I see the new focus being is livestreamed content. So for example, every time I watch any kind of a large sporting event, it's livestreamed over the internet and I'm watching it on Prime or whatever.

And never on regular TV. And also, I don't play online video games, but I know that those big multiplayer online video games are huge. Not just huge in popularity, but huge business. A game like, I don't know, Counter-Strike, right? It's a first-person shooter game where you have to see every single bullet coming out of those guns as people are playing and having their battles. And so that seems to be the new driving force for delivering content over the public internet. Am I way off?

It is. It is. I mean, live streaming is big. And I think there's an interesting thing if we circle back to what we talked about, you know, some video providers being so big that it made sense for them to build a specialized CDN for that.

If you look at a live streaming provider like Prime Video, they've grown out of that model.


Yeah. They're too big to run on their own dedicated infrastructure.

So they're using all of the CDNs.

And to make their events work well, they are not only using all of the CDNs, they are working with the individual CDNs and the individual ISPs. So they're not only checking, you know, that the CDNs will work well and will do what they want and will support their technology, but they're even out looking at and talking to both the ISP and the CDN: how's the bandwidth between the two of you?


To make sure that they're not running into issues with connectivity, or at least not issues that are due to them miscalculating just how popular the event is.

But how does that work with live streaming in particular, then? Because I can't colocate files per se. I'm not colocating files into an ISP or into my own infrastructure. It's live from some source somewhere, like some kind of a sporting event or, you know, a political speech. So isn't it getting it direct from the source and not really going over the CDN infrastructure?

No. It's going over the CDN. So basically, what happens is that they go from the source, and they send the feed up, with, like, maybe four different redundant connections to where they encode, which is typically in one of the AWS regions. Right? The closest one or the best-connected one.

And then from the encoding, it goes down a tree through the CDNs. So they hand off from encoding to the CDNs, both to CloudFront, their own AWS CDN, but also to Limelight, now Edgio, to Fastly, to Akamai, to everybody that they're using.

And for the live streaming, they're making sure that everybody is watching the same frame at the same time, which is what you wanna do when you do sports.

So you're not having your neighbor cheering before you see the goal.

That is so annoying. It has happened.

So they're using this technology they acquired, in twenty twenty, called Sye.

And what they've done is that they're working with their CDN partners. So they've provided some part of the Sye stack so the partners can spin up a Sye instance on the CDNs they're using. And that way they are maintaining the end-to-end control of what's going on.

Okay. Very interesting. So everybody's watching the same frame at the same time.

It is being distributed, so there is a layer of abstraction there from the source. But yeah, to me, that's kind of where we're going: the live streaming component. You know, with bandwidth the way it is, downloading a video isn't the most difficult thing in the world, and colocating video in regional locations is not the most difficult thing in the world. So, yeah.

I mean, that's a problem that's solved. Right? What's out there now, the challenge and the fun stuff, is the live streaming. Absolutely.

So now the challenge and the fun stuff is, yeah, the live streaming and getting it to better quality. Because, I mean, I notice that live streaming quality is a little lower than downloading a video in ten eighty or four K.

And then having that audio spot on, and all that. So, yeah, I'm looking forward to it getting better and better, and seeing one day when I have holograms in my living room of Star Wars or whatever. Yeah.

We get into the whole augmented reality type of thing, and then we need to get compute really close to where you are, and that's a whole different discussion as well. But I think we will go deeper and deeper and deeper into ISP networks, but now with compute, and not just video files or distribution.

Nina, this has been a really great conversation. I think we could easily have a part two or a part three and just talk about really anything you wanna talk about at this point. But it was a pleasure. So thank you for joining today. If folks want to reach out and ask a question about CDNs, or really anything technology related, how can they do that?

Oh, they can, so I'm on LinkedIn.

I'm also on Twitter, but I don't really use it. So I think LinkedIn would be the best place, or the best social network, to reach me on. And they can also write an email, old-fashioned email: nina at kentik dot com is an easy thing to remember.

And I am actually still on IRC, so people can find me there.

Okay. Very good.

And you can still find me on Twitter. I am still pretty active there: network underscore phil. Also search my name on LinkedIn. And if you have an idea for an episode, or if you'd like to be a guest on the podcast, please reach out to us at telemetry now at kentik dot com. So for now, thanks for listening, and bye-bye.

About Telemetry Now

Do you dread forgetting to use the “add” command on a trunk port? Do you grit your teeth when the coffee maker isn't working, and everyone says, “It’s the network’s fault?” Do you like to blame DNS for everything because you know deep down, in the bottom of your heart, it probably is DNS? Well, you're in the right place! Telemetry Now is the podcast for you! Tune in and let the packets wash over you as host Phil Gervasi and his expert guests talk networking, network engineering and related careers, emerging technologies, and more.