Telemetry Now  |  Season 1 - Episode 17  |  June 27, 2023

Demystifying CDNs: Behind the Scenes of Rich Content Delivery



Nina Bargisen
Director of Technical Evangelism, Service Providers

Nina Bargisen is a subject matter expert on everything peering and interconnection related. She is also a prolific writer, speaker, and an experienced engineer and architect designing provider and content delivery networks for some of the largest streaming services in the world.

Transcript

Philip Gervasi: What's your favorite streaming service? Is it Prime, Netflix, Disney? Or maybe you prefer streaming YouTube videos more than anything else, or maybe you're a gamer and you play live online multiplayer games. And of course, today, even the biggest sporting events in the entire world are streamed live. The way that we consume content today has meant that organizations, that's content creators and service providers, have had to come up with ways to deliver high quality and real-time content over the public internet down to the device that's in my hand. CDNs, or Content Delivery Networks, have largely solved this problem, as have some of the largest content creator organizations themselves. And today as we focus more on livestream content, they're at it again to solve the technical challenges of delivering high quality live content on the public internet to wherever you are in the world. So with me today is Nina Bargisen, a subject matter expert in the service provider and CDN spaces, a prolific writer and speaker, and an experienced engineer and network planner designing the very solutions that we use to get our content. So we'll be talking about CDNs, why they exist, how they work, and what the future holds for content delivery. My name is Philip Gervasi, and this is Telemetry Now. Hey, Nina, thank you for coming on today, really appreciate it. I know that you have a very extensive background in this particular topic, so I'm really interested to chat with you about Content Delivery Networks, CDNs. But before we get into it, especially from a technical perspective, I do want to hear a little bit from you about your background in the space and in networking and technology in general. Can you give us a little bit of that?

Nina Bargisen: Yeah, sure, Phil, and thank you for having me. It will be a pleasure I'm sure to chat. So honestly, I'm all self-learned, what you call it. I lack the English word for that. But basically I started out working in telco and technology a couple of decades ago, and I started out as a project manager, and then I did the opposite of what most people did at that time where they went from some technical role and then they needed to grow and do something else, and they turned into project managers. I did project management and then I was like, "This fucking sucks. I am not good with keeping track of people." And I can talk to people and make them work, but then writing all the summaries and yeah, no, not for me. So I wanted to be more one of the tech people who did the cool stuff because a project manager is a facilitator, and that's a great role, it's really good, but I wanted to get my fingers in. So I started asking people, "Hey, how does this work? This IP thing, what's that about?" And eventually I ended up working as a network planner, managing all of the traffic on an ISP's backbone, doing traffic engineering with BGP, handling all the peering relationships, debugging why one of these customers' offices in India was sending some of their traffic one direction around the world and some of their traffic the other direction around the world. This doesn't make sense, what is happening? And then realizing it was due to how traffic was being engineered and load balanced over two different transits, where the prefix their WAN connection came from was routed one way and their LAN was routed another way and it just got messed up.

Philip Gervasi: Yeah, right.

Nina Bargisen: Yeah. And then as a network planner, we get into, in particular, why I know a bit about how CDNs work. At some point, when you run an access network, at some point in the early 2000s, mid 2000s, CDNs show up, and then you first have Akamai, then you have Google, YouTube shows up. We all remember how YouTube grew so very, very fast. And then the video on demand services, Netflix starts launching all over the world, and we figured out that we needed to do this right. So basically I spent a year or so on researching how are the CDNs that we are putting into our networks, how are they working? How should we deploy them inside our network? What should the longer term plan be? And when I did that, I realized there was a lot of limitations and a lot of assumptions that you make when you just look at it from the material they give you and your thoughts about how this works. "Oh, you should go to the nearest server. That sounds great. I'll just put servers all over and then it will work." And then you realize, "Oh, there are methods and there are reasons, and there are definitions of what the nearest server is, and it's not always doing what you wanted it to do unless you really know." So realizing that you need a really good understanding of A, how each CDN works, and B, how is your network designed and what is the architecture? Not just the connectivity or the topology, which had been my focus, but also your name servers, your IP address plan, your planning, your aggregation, all of that shit. And then you run into things like security as well, sort of like, "Oh, you can't do that because we can't announce more specifics, but we need it." And then, "But no, we have put all this security in place to make sure that we are not announcing more specifics, but then this shit doesn't work." And then you have to debate for three months to get permission to write specific policies for the servers inside your network and announce more specifics there. And then you realize, "Oh, you can't do this because all the prefixes are tagged with no-export and shit." So yeah, it was a long journey, but eventually we got there and I learned a lot.
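
The fight over more-specifics comes down to longest-prefix matching: a router always follows the most specific matching prefix, so announcing a more-specific only inside your own network is what pulls traffic toward the embedded servers. Here is a minimal Python sketch of that rule, with made-up prefixes and next-hop labels, just to illustrate the behavior being described, not any particular vendor's implementation:

```python
# Illustrative sketch: why announcing a more-specific prefix steers traffic
# toward an in-network cache. Routers pick the longest (most specific)
# matching prefix. Prefixes and next hops below are made up.
import ipaddress

# A coarse aggregate learned via the normal path, plus a more-specific
# announced only inside the ISP, pointing at the embedded CDN servers.
routing_table = {
    ipaddress.ip_network("203.0.113.0/24"): "transit-upstream",
    ipaddress.ip_network("203.0.113.128/26"): "embedded-cdn-cluster",
}

def lookup(dst: str) -> str:
    """Return the next hop for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(lookup("203.0.113.10"))    # -> transit-upstream (only the /24 matches)
print(lookup("203.0.113.150"))   # -> embedded-cdn-cluster (the /26 is more specific)
```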

Philip Gervasi: Yep, that's the self-taught part. You learn a lot by doing, I think that's probably what you meant, right? You're on the job, but also being self-taught and looking for the materials that you need to be successful at the project that you're working on that day, or that month, or that year. And then you add that to your body of knowledge and then move on. So I kind of feel like a lot of my career was haphazard like that. Not to say that yours was haphazard, but sometimes I feel like-

Nina Bargisen: Oh, it was. Yes.

Philip Gervasi: Yeah, okay. Good, we're on the same page. Well, yeah, I got a VXLAN project and a data center deployment. I'm like, "I've never configured VXLAN before. I guess I'm learning that now." And then such went my career for many years. And then by the end of a decade you have this body of knowledge, and of course, hopefully you're using that knowledge over and over again, so it's valuable. Sometimes not. I configured OTV in a data center once and that was it. That's the Cisco proprietary overlay. Never used it again. So there are those. But yeah, the security conversation that you mentioned, yeah, I've had those. That sounded very reminiscent, where security is like, "You can't do this." "Well, nothing works if we don't do that." And then you have this tension and you go back and forth forever, especially when I think there's that balance between the policy of least privilege, I get it, and then trying to only permit the traffic that you want to permit and all of that, so there's that balance. But then working sometimes with security folks that don't have a super strong technical understanding. And so it was sometimes a hard conversation.

Nina Bargisen: Yeah, no. Actually, that specific part I did not relate to because we did have some very, very good and strong engineers that were also handling the security. So what I had to do with them to win this discussion, or to get to do what I wanted us to do, I had to do the money talk, because we were basically just looking at how we were building our network and we were using Big Iron, as we called it back then, big expensive routers, lots of features, because we were running a multi-service network. We had important traffic, we had less important traffic. We were running voice, we were moving voice, we were moving TV. The cable TV was being moved to run on the IP backbone, still going out on its closed circuit to the TV, so cable TV was not running as IPTV yet, but the transport layer was moved away from the dedicated lines onto the IP backbone. So we had a lot of traffic on the backbone that we needed to make sure had high uptime and all that shit. And then we had all the over-the-top video and we were like, "Well, we are building an expensive backbone, so we don't want this traffic on it because we know our customers want their traffic," but we also knew that we couldn't sell, we couldn't get paid. We were too small to ever win that debate. And honestly, also we did not want to, but our managers wanted to. But yes, not going to happen when you are a network that size.

Philip Gervasi: So you're coming from the service provider space, very, very focused on the service provider space as far as content delivery, which makes sense because that's what CDNs run on, but ultimately it's the idea of traffic engineering and making sure that the services are being delivered the way they're supposed to be delivered, which is different than probably some in our audience who come from an enterprise engineering background. Like traffic engineering is, you do that to an extent, but you're not necessarily engineering a backbone at even a large enterprise necessarily. And you mentioned the word access network, and that term is used differently. So what does an access network in the service provider and CDN space mean?

Nina Bargisen: So actually I think it's one of those terms that can mean so many different things. If you're working at the service provider, the access network is basically after IP, so that's where IP stops; that is the last mile, I think people would call it as well. So that's the line from the GPON or the BRAS, or the mast. So the radio network from a mobile provider would be the access network, all of the DSL lines back when we were using DSL, and now it's the fiber and the GPON and the whole fiber access network out to the consumers. And then that network is connected to the IP level, and then you have an aggregation layer or metro layer, and then you have your core backbone or your backbone. That's one way of using it. As soon as you start working in the CDN space, where I also went after working at the service provider, at the access network I was working at, then you think of a network that connects consumers, or end users, because it can be businesses as well, as an access network, because it provides access to the internet. What about enterprises, what does access network mean for [inaudible]?

Philip Gervasi: Well, it's the same general idea, but the access layer in a three-tier design, the access layer is where end users actually plug into the campus. So it really is the same thing logically, except that instead of these autonomous networks, and when I say autonomous, I mean like an ASN, these large organizations like an entire campus accessing the provider network, it's a literal individual person accessing by plugging into a jack or connecting over the air or whatever it happens to be, or a server connecting into a switch. So it's the same idea and there are technologies and methodologies that go along with that. So it's not as simple as plugging something in. And I'm sure in the service provider world, it's the same. It's preventing layer two loops and broadcast storms, and thinking about spanning tree, and things like that, which you still have to think about in a campus with devices that are spitting out bad traffic sometimes.

Nina Bargisen: Because when you build a metro network, at least back when I was working, that was typically an ethernet network, fiber running ethernet natively on top of that. And so you would be thinking about all that kind of layer two shit that I actually don't know a lot about. I know you have to run spanning tree, and I remember that one broadcast storm. There was a broadcast storm going on in this bar and somebody had to run and do something about it because it was knocking off customers.

Philip Gervasi: Yeah. And as I progressed in my career, I did focus on layer two less and less. I do think that a lot of people think maybe it's less sophisticated networking, and I get that because you're not doing the cool traffic engineering and stuff like that, but that is the point at which people connect to the network and utilize it and consume those services, so it is mission- critical.

Nina Bargisen: And I think you do yourself a disservice if you think that ethernet is simple.

Philip Gervasi: Yeah.

Nina Bargisen: No, there are so many things where the idea seems simple. But if you look at the internet, if you look at IP, it's go from A to B, every individual router has its own table it looks up, and off it goes. It sounds like a simple thing, but we have a lot of people making a living doing research on what the fuck is going on on the internet, because we've built a really, really complex animal that nobody can hold all in their head because of how distributed all the intelligence in the network is.

Philip Gervasi: Oh, yeah.

Nina Bargisen: So I find that fascinating, and it's a little bit the same with ethernet. When you dive into it, it's sort of all these communications and signals that go on all the time, and it's like, yeah.

Philip Gervasi: Yeah, we don't really deal with that day to day as engineers, serialization, how a computer actually turns information into plus and minus voltages on an actual wire or how some of that gets converted to radio frequency and sent over the air. And there's an entire realm of physics that we don't really deal with very much as network engineers, but it's absolutely critical for the proper functioning of networking. I guess there is a little bit like for... Well, if I have to decide on what kind of fiber optic cable to use, maybe having an understanding of how the light travels over the wire might help. But otherwise, yeah, it's really intriguing stuff to me. Very, very much so.

Nina Bargisen: I don't think anybody in our business can be successful if they've never heard of the OSI model and don't understand it, because unless you are really very intuitive and lucky, you'll never be able to debug anything if you can't do it systematically from layer to layer. Because if the lowest layer doesn't work, it doesn't matter how much debugging you do three layers up, right? [inaudible]

Philip Gervasi: Well, when I was troubleshooting issues, I always started at the DNS layer. That's my joke, that's my joke for the podcast.

Nina Bargisen: Does DNS work? And then the second one is, is the power plugged in?

Philip Gervasi: Did you plug your computer in? So why did CDNs develop in the early 2000s like you were saying? Is it because IPTV became a thing? Is it because the technology allowed it? Or is it because these companies, like you mentioned Akamai and then of course I know you were with Netflix for a long time, they developed something and there was a demand for better quality? Which came first and why did that happen all of a sudden? Because it really did in the span of just a few years become a thing.

Nina Bargisen: Yeah. I think it happened because there was a demand for faster connectivity. There was a demand for... I mean, I remember going on the internet back then. First, you would have to wait for the modem to call up, you have all the weird sounds going, beep, beep, beep. And then if you were looking at a photo, you were waiting for it to load from the top. I mean, it's always been fascinating how it was from the top. And after some time, that's just not fun anymore. And so people realized that you need faster delivery, and we were all wrong about 56K being enough for everyone. Who was it? That was Bill Gates, right? So I don't see how anybody can need more than... And I don't even remember if it was 56K or if it was some other bandwidth he was mentioning, but it was hilarious and we've been laughing about it since, right? No, so it was the demand for richer content, the idea or realizing that if you put richer content out there, more people would watch, and then realizing, "Okay, we need to do better with richer content, and we can't really do that if we're waiting for that picture to be replicated from a server over in San Jose when the user is sitting in France. We don't want it to stop, and it's not going away." We have to figure out a way of doing that better. So they came up with the idea, and it was those brilliant folks whose names I always forget at Akamai who came up with the algorithms, who came up with the two important things to think about when you want to try to replicate rich content and put it closer: "Well, how do we figure out which files should go where, and how do we figure out a way of making sure the consumer of the content gets to the closest copy?" And that's basically the idea of a CDN, because that is what a CDN is. You want to put files somewhere, but you need to figure out how to put them there and you need to figure out how to get the consumers of the files to the right file.
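
One of the ideas usually credited to those Akamai folks is consistent hashing, which addresses the "which files go where" half of the problem: hash servers and objects onto the same ring so every object has a stable home, and adding or removing a cache only moves a small slice of the content. A minimal sketch with hypothetical server names, not Akamai's actual implementation:

```python
# A minimal consistent-hashing sketch (illustrative only, hypothetical server
# names): each cache server and each content object is hashed onto a ring,
# and an object lives on the first server clockwise from its hash. Adding or
# removing a server only moves the objects in one segment of the ring.
import bisect
import hashlib

def ring_hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, servers, replicas=100):
        # "replicas" are virtual nodes per server, to spread load more evenly.
        self._points = sorted(
            (ring_hash(f"{s}#{i}"), s) for s in servers for i in range(replicas)
        )
        self._keys = [p for p, _ in self._points]

    def server_for(self, object_name: str) -> str:
        h = ring_hash(object_name)
        idx = bisect.bisect(self._keys, h) % len(self._points)
        return self._points[idx][1]

ring = ConsistentHashRing(["cache-paris", "cache-frankfurt", "cache-london"])
print(ring.server_for("/videos/episode-17/segment-0042.ts"))
```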

Philip Gervasi: Now, is it the geographically closest location? Is that always the case or could it be the location, it might not be as close, but it has better latency?

Nina Bargisen: See, that's the whole thing, because I'm being vague in this definition. As CDNs have evolved over time, the different CDNs have come up with different ways of defining the closest. And you always have to take into account, "Well, what is the method for the end user to get to the server? How do you identify the requester of some content, how can you tell the requester where to go, and what is the metric you have to look at?" So we went from what Akamai did and still does, going, "Well, the end user is using this DNS server, and everybody using this DNS server belongs to this CDN server," as a method of defining closest and directing where it should go. To today, where the very advanced CDNs are constantly measuring latency and the method of reaching the end user (is it transit, peering, or embedded into the end user's ISP?), along with a whole set of quality KPIs that they might be looking at, and then deciding based on that, "Hey, end user, you're going to go to that server to get your content." So the idea is the same, but closest is just defined by many more parameters today than it was back then.
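
As a rough illustration of "closest is defined by many more parameters", here is a toy scoring function that ranks candidate edge clusters by measured latency, how they reach the user's ISP, and current load. The weights, names, and numbers are invented; real CDN mapping systems are proprietary and far more involved:

```python
# A toy version of CDN server selection (illustrative only). Candidate edge
# clusters are scored from measured latency plus how they reach the user's
# ISP and how loaded they are; all data and weights below are made up.
from dataclasses import dataclass

# Rough preference for how the edge reaches the user's network.
PATH_PENALTY_MS = {"embedded": 0, "peering": 5, "transit": 15}

@dataclass
class EdgeCluster:
    name: str
    rtt_ms: float      # measured latency toward the user's resolver or prefix
    path_type: str     # "embedded", "peering" or "transit"
    load: float        # 0.0 (idle) .. 1.0 (full)

def score(c: EdgeCluster) -> float:
    # Lower is better: latency, plus a penalty for less direct paths,
    # plus a penalty as the cluster fills up.
    return c.rtt_ms + PATH_PENALTY_MS[c.path_type] + 50 * c.load

candidates = [
    EdgeCluster("isp-embedded-cache", rtt_ms=8, path_type="embedded", load=0.9),
    EdgeCluster("ix-peering-pop", rtt_ms=12, path_type="peering", load=0.3),
    EdgeCluster("remote-transit-pop", rtt_ms=35, path_type="transit", load=0.1),
]

best = min(candidates, key=score)
print(f"send the user to {best.name}")
```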

Philip Gervasi: And so are CDNs monitoring the quality of connection, quality of the service delivery to the actual end user to like me in my house or to the local CO or to a region or things like that? What are they monitoring specifically?

Nina Bargisen: That depends on the services they're serving.

Philip Gervasi: Really?

Nina Bargisen: And also on the CDN, and different CDNs do it differently. So if we take a service like Facebook, and also some of the more general CDNs that have been around for a long time and have a lot of different services that they're serving, typically they would be monitoring the quality to the resolver that you're using, because they're still using resolvers for mapping you. So they would sort of be equating connectivity and quality of network to the resolver with the end users that are using this resolver. But if you look at a CDN like Netflix that's serving videos, where they have end-to-end control, they have a client who knows stuff. That client is still working pretty simply, but it is monitoring, "Well, how fast am I playing?" So that is the bit rate of the file it is playing. "And how fast am I downloading the next segment of that stream?" And if it's playing faster than the download, the next time it asks, it will ask for a lower bit rate, and then it will go up again. And it has a recipe of which bit rates it should ask for when going up and down, depending on the device that the client is playing on. So there you have some intelligence in the client defining and measuring what is going on in the network. But here very simply and very, very specialized, just looking at, "What am I playing and how fast am I downloading the next segment?" Other streaming technologies could be Sye, which is what Prime Video is using for their live events, where the Sye technology, the Sye server, is constantly monitoring or evaluating or estimating the bandwidth to the client. And again, here we are talking to the client. Again, they're client based, they have end-to-end control, so they can do measurements from the closest edge server to the client. And then based on that, they decide which bit rate the stream pushed out to that client will have. So again, very specialized, but it depends on the application and the content that you're consuming; there's no need to be that advanced when you're just downloading pictures for your website or on-demand video, or ads, or whatever it is, right?
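
A stripped-down sketch of the client-side logic described here: measure how fast the last segment downloaded, compare that against a bitrate ladder, and ask for the highest rendition the connection can sustain. The ladder and numbers are made up, and this is not Netflix's actual algorithm:

```python
# A stripped-down adaptive-bitrate sketch (illustrative only): measure the
# throughput of the last segment download and pick the highest rendition
# that still leaves headroom. The bitrate ladder is hypothetical.
BITRATE_LADDER_KBPS = [235, 750, 1750, 3000, 5800]  # hypothetical renditions

def next_bitrate(last_segment_bits: int, download_seconds: float,
                 headroom: float = 0.8) -> int:
    """Pick the highest bitrate the measured throughput can sustain."""
    throughput_kbps = (last_segment_bits / 1000) / download_seconds
    budget = throughput_kbps * headroom
    eligible = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return eligible[-1] if eligible else BITRATE_LADDER_KBPS[0]

# Example: a 4-second segment at 3000 kbps (12 Mbit) took 2.5 s to fetch,
# so measured throughput is about 4800 kbps and the client stays at
# 3000 kbps rather than stepping up to 5800.
print(next_bitrate(last_segment_bits=12_000_000, download_seconds=2.5))
```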

Philip Gervasi: Yeah, there's no real time component to that. It's an image, pixels, right?

Nina Bargisen: Exactly. Yeah.

Philip Gervasi: Yeah. So I remember it used to be that the content creator wasn't necessarily also the CDN, whereas today you've mentioned some names of CDNs that are also creating the content themselves. So have they taken ownership of that because they just wanted to have that control or maybe because they're just so big they can do it themselves and it's cheaper? Why has that shifted as well?

Nina Bargisen: So it's a good question. As I see it, some content and some content providers became big enough that it made sense for them to build their own, and they were specialized enough to build their own. I remember that Google and Google Cloud started out with just the YouTube servers that were placed around in ISPs' networks, but they have managed to really build on that and now have a big business with Google Cloud running on some of the same embedded servers, right? Yeah. And Netflix as well is the poster child, I guess, of building a very specialized CDN just for the video delivery. We must remember that everything else that Netflix is doing, all the compute, all the customer interactions, the validations, all the security, all of that lives in the cloud, but for this one specialized part of their business, they built a global CDN. It's one of the biggest CDNs in the world today.

Philip Gervasi: But they still have to partner with providers. So when you say build a CDN, they're still co-locating resources embedded in various ISPs around the world to actually deliver it, right?

Nina Bargisen: Yeah, that's a funny way of looking at it because the way I've always looked at it is that, well, they're still part of the CDN, and you must remember the CDN is just the servers and a method to distribute content and a method to direct users to the servers, but then Netflix built their own network. So they have routers, they have a backbone now where they have a lot of their servers placed at their own locations, and then they have the embedded solutions where they give servers to ISPs and say, "Hey, you can put them wherever you want." They actually give them a lot of control over those servers.

Philip Gervasi: Right.

Nina Bargisen: And that is also what Akamai came up with, having their own locations, but also offering ISPs, "Hey, you can put these where you want to put them because you know your network best, but then we will require you to do the physical operation of these servers, but we will do the virtual operation or the logical operation of them."

Philip Gervasi: So the decision to do one or the other can depend on just cost and operational expense, right?

Nina Bargisen: Yes. And ISPs started to think about when it would make sense to do one thing or the other, because if you have an access network, but you have five pops, and in all of those five pops there are internet exchanges and the ability to connect to content through peering, it doesn't really make sense to do embedded. It will save you nothing, but it will give you the extra work and the extra cost of having those servers sitting inside your network, where you could connect to that content for free, or more free, than getting the servers. But if you were an access network in the UK 5, 7, 10 years ago, you would only be able to connect to content in London, and then it would make a lot of sense to say, "Yes, please, I will take some of your servers and put those in Scotland or in Manchester," and in other big locations where your end users are but there's no ability to connect to any content. So it all depends on the topology, the ability to peer, how widely built out the peering ecosystem is, and where your particular network is.

Philip Gervasi: And ultimately all of this is then going to be, from a technical perspective, that's all going to be unicast traffic down to an end user.

Nina Bargisen: Yes.

Philip Gervasi: I remember learning CCNA almost two decades ago and learning about, "Oh yeah, we use multicast." I think I configured multicast for voice over IP applications in my career, I've done that a bunch of times and built out trees that way. There were like one or two other applications; one was a pharmaceutical company that required multicast. Other than that, it really isn't used at all.

Nina Bargisen: No. Also, the same thing for networks who implemented quality of service classes in the network, and they were like, "Oh yeah, we've got to put..." I remember that. So when I was working at Netflix and we were implementing servers into the network or setting up peering, they would go, "Yeah, no, we'll put the video traffic into an assured-forwarding class," because they read in their CCNA book that that's where video traffic belongs. And we would go, "No, don't do that. You need to put it in the internet class." "But..." "No." So modern streaming protocols are built for the internet. They're built to run on the internet. They work very badly if you put them in an assured-forwarding class because of the drop profiles, because assured forwarding is kind of assuming that you're running it on UDP. And even though UDP is coming back into streaming, for live-streaming in particular, all of the video on demand is running on HTTP, a TCP-based protocol, right?
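
The "drop profiles" mentioned here are the weighted-RED style curves attached to a queue: below a minimum queue depth nothing is dropped, above a maximum depth everything is, and in between the drop probability ramps up, which is exactly the behavior a TCP-based stream reacts badly to. A toy version of such a curve, with made-up thresholds:

```python
# A toy WRED-style drop profile (illustrative, made-up thresholds): the kind
# of curve attached to an assured-forwarding queue. A TCP-based adaptive
# stream backs off hard when it hits this ramp, which is the argument for
# leaving streaming video in the best-effort class.
def drop_probability(queue_depth: int, min_th: int = 20, max_th: int = 60,
                     max_prob: float = 0.1) -> float:
    """Probability of dropping an arriving packet at a given queue depth."""
    if queue_depth < min_th:
        return 0.0                      # queue is short: never drop
    if queue_depth >= max_th:
        return 1.0                      # queue is long enough: always drop
    # Linear ramp between the two thresholds, capped at max_prob.
    return max_prob * (queue_depth - min_th) / (max_th - min_th)

for depth in (10, 30, 50, 70):
    print(depth, round(drop_probability(depth), 3))
```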

Philip Gervasi: Yeah, absolutely. I mean, how many Zoom calls do we have where the call is fine? The video is, I mean, obviously it's not super 4K, but it's fine. And the audio is just fine.

Nina Bargisen: And there's no delay, right? But that was super fun. "But we want to do it so well for you." Yeah, but please don't.

Philip Gervasi: So one of the main reasons we moved away from QoS, as far as campus networking and class of service and quality of service go, was that bandwidth was growing so fast and getting so cheap. And today you can deploy 100 gig in an enterprise, QSFPs have come down in price, so that's not even a big deal. When I was still actually turning a wrench, 100 gig was really fancy. But you have this incredible amount of bandwidth, and QoS classes, those kinds of things, don't actually activate until there's contention. And so if you have an incredible amount of bandwidth, who cares? So do you think that a lot of this has been enabled on the provider side, the content delivery network side, because of the advancements in the... No, not advancements, but the increase in bandwidth, both in provider cores and also in the access network? I have gig fiber to my house, which a decade ago was unheard of, to your house at least.

Nina Bargisen: Yeah, I have one gig fiber into my house as well, and it's kind of low.

Philip Gervasi: I mean, to be fair, I never even come close to using it. I have it because I want it. That's the only reason.

Nina Bargisen: But I've got to have it because I can. No, I think yes, obviously it's been all about the prices of the high bandwidths going down. And technology, ethernet provides us with more and more, and they keep increasing the bandwidth just technology-wise. And you develop services that take advantage of that. And I think in particular the QoS going away is because bandwidth is cheap, we might as well, and every single engineer in the world will always choose more bandwidth over QoS configuration. I mean, I remember when we were implementing it, it was sort of like, "It would be nicer if we could just overprovision," but then us in the planning department would go, "Money. We can't overprovision. We are going to run the network this hot because money, and we've calculated it's okay." So that's why they were making these configurations to protect the important services. But if they could just have thrown bandwidth at it, they would've done that.

Philip Gervasi: Yeah, there are real life business constraints, the cost of upgrading infrastructure, engineering, all the way down to power requirements. You want to upgrade devices to accommodate 100 gig, 400 gig. There are different power requirements to the devices themselves. So I've done that where I've had customers that were upgrading bandwidth in their core and then out to their branch offices and stuff like that, and in their data center especially. And the project came to a standstill, multimillion dollar project came to a standstill, because we had to then wait for the electricians to come in and then the local community to deliver more power to the healthcare facility. So there were a lot of considerations just to increasing bandwidth. But it is interesting that as we increase bandwidth, we enable more services, like you said. Increasing bandwidth doesn't solve all our problems. We can still have latency on a high bandwidth link. There can still be delays in a server responding to you that makes an application feel slow, even if you have a gig to your house. So there are certainly other considerations. Some of them network, some of them not.

Nina Bargisen: Yeah, I was just at a conference this week where I heard, actually I'm going to be sad now, I heard the internet in the US being described as how it was in Africa 10 years ago with regards really to [inaudible] and routing.

Philip Gervasi: Oh, really? Yeah.

Nina Bargisen: Yeah.

Philip Gervasi: I mean, I know my regional area in the Northeast US, I know where my traffic goes. I'm in upstate New York and it's going to go to either New York City or Boston. And I know where other cities in the northeast, Washington, D.C., Philadelphia, I know how all that traffic in those metro areas goes, but as far as the entire US, I'm not really sure.

Nina Bargisen: No, I think it was somebody from Austin who was describing how his traffic was routed to Atlanta and then back to San Jose when he was getting his content and he was very sad about that. And then one of the Africa folks go, " Yeah, that was how we were sending traffic to Frankfurt between our networks 10 years ago."

Philip Gervasi: But ultimately it is this desire to consume more services that's driving all of this, right? Now, back in the day it was pictures, text, right?

Nina Bargisen: It is.

Philip Gervasi: Back in the day it was just text.

Nina Bargisen: I remember 12 years ago when I was at dinner parties and I got fed up and didn't want to reply to the so-where-do-you-work question, because I was working at the local telco and they would start talking about their bills. So I would say, "I work in porn," because I was building the internet, and the biggest service on the internet at that time was porn. It's driven by what people are consuming. So we are building a network that can support what people want to consume.

Philip Gervasi: So the very beginnings of content delivery networks years ago really stemmed from the need to deliver high quality audio and video, so streaming movies over the public internet and whatever technologies and methods were used to do that, and that's fine. It almost feels like a solved problem because today what I see the new focus being is livestream content. So for example, every time I watch any kind of a large sporting event, it's live-streamed over the internet and I'm watching it on Prime or whatever, and never on regular TV. And also I don't play online video games, but I know that those big multiplayer online video games, that's huge, not just huge in popularity, but huge business. And so a game like, I don't know, like Counter-Strike, it's a first-person shooter game where you have to track every single bullet coming out of those guns as people are playing and having their battles. And so that seems to be the new driving force for delivering content over the public internet. Am I way off?

Nina Bargisen: It is. I mean, live-streaming is big. And I think there's an interesting thing if we circle back to what we talked about, some video providers being so big that it made sense for them to build a specialized CDN for that. If you look at a live-streaming provider like Prime Video, they've grown out of that model.

Philip Gervasi: Really, yeah.

Nina Bargisen: They're too big to run on their dedicated infrastructure, so they're using all of the CDNs. And to make their events work well, they are not only using all of the CDNs, they are working with the individual CDNs and the individual ISPs. So they're not just checking that the CDN will work well and will do what they want and will support their technology, but they're even out looking at and talking to both the ISP and the CDN: "How's the bandwidth between the two of you?" That's to make sure that they're not running into connectivity issues, or at least not into issues that aren't due to them miscalculating just how popular their game will be.

Philip Gervasi: But how does that work with live-streaming in particular then? Because I can't co-locate files per se, I'm not co-locating files into an ISP or into my own infrastructure, it's live from some source somewhere, like some kind of a sporting event or a political speech. So how does that work? Isn't it getting it direct from the source and not really going over the CDN infrastructure?

Nina Bargisen: No, it's going over the CDN. So basically what happens is that they go from the source and then they send the feed up, the ingest feed, and they have maybe four different redundant connections to where they encode, which is typically in one of the AWS instances, the best connected one. And then from the encoding, they're going down a tree through the CDN. So they hand off from encoding to the CDNs, and they hand off both to CloudFront, their AWS CDN, but also to Edgio, to Fastly, to Akamai, to everybody that they're using. And for their live-streaming they want to make sure that everybody is watching the same frame at the same time, which is what you want when you do sports, so you are not having your neighbor cheering before you see the goal. That is so annoying, and it has happened. So they're using this technology they acquired in 2020 called Sye. And what they've done is that they're working with their CDN partners, so they've provided some part of the Sye stack so they can spin up a Sye instance on the CDNs they're using, and that way they're maintaining the end-to-end control of what's going on.
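
Sye's actual protocol isn't public, but the "same frame at the same time" idea can be sketched simply: if every edge and client agrees on the stream's wall-clock start time, frame rate, and a fixed target delay, each player can compute which frame it should be showing right now instead of free-running. A rough, hypothetical illustration:

```python
# A rough illustration (not the actual Sye protocol) of wall-clock-synchronized
# playback: every client derives the frame to display from a shared stream
# start time and frame rate, so nobody's neighbor sees the goal first.
# The start time, frame rate and delay below are made up.
import time

STREAM_START_EPOCH = 1_687_880_000.0   # hypothetical shared start time (seconds)
FRAME_RATE = 50.0                      # frames per second
TARGET_DELAY_S = 3.0                   # fixed glass-to-glass delay everyone shares

def frame_to_display(now_epoch: float) -> int:
    """Frame index every synchronized client should be showing at this instant."""
    playback_position = now_epoch - STREAM_START_EPOCH - TARGET_DELAY_S
    return max(0, int(playback_position * FRAME_RATE))

# Two clients calling this at the same wall-clock moment get the same frame,
# regardless of which CDN edge delivered their video.
print(frame_to_display(time.time()))
```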

Philip Gervasi: Okay, very interesting. So everybody's watching the same frame at the same time, it is being distributed, so there is a layer of abstraction there from the source. But yeah, to me that's kind of where we're going is the live-streaming component. With bandwidth the way it is, downloading a video isn't the most difficult thing in the world and co-locating video in regional locations is not the most difficult thing in the world, so yeah.

Nina Bargisen: I mean, that's a problem that's solved. What's out there now is the challenge and the fun stuff is the live-streaming. Absolutely.

Philip Gervasi: So now the challenge and fun stuff is the live-streaming and getting it better quality, because I noticed that live-streaming quality is a little lower than downloading a video in 1080 or 4K, and then having that audio spot on and all that. So yeah, I'm looking forward to having just better and better and seeing one day when I have holograms in my living room of Star Wars or whatever I'm watching.

Nina Bargisen: Yeah, we get into the whole augmented reality type of thing. And then we need to get compute really close to where you are, and that's a whole different discussion as well, but that's where I think we will go deeper and deeper and deeper into ISPs' networks, now with compute and not just video files or distribution.

Philip Gervasi: Yeah. Well, Nina, this has been a really great conversation. I think we could easily have a part two or a part three and just talk about really anything you want to talk about at this point. But it was a pleasure, so thank you for joining today. If folks want to reach out and ask a question about CDNs or really anything technology related, how can they do that?

Nina Bargisen: Oh, so I'm on LinkedIn. I'm also on Twitter, but not really that active, so I think LinkedIn would be the best place or the best social network to reach me on. And they can also write an email, old-fashioned email, nina@kentik.com is an easy thing to remember. And I am actually still on IRC, so people can find me there.

Philip Gervasi: Okay, very good. And you can still find me on Twitter, I am still pretty active there, Network_Phil, or search my name on LinkedIn. And if you have an idea for an episode or if you'd like to be a guest on the podcast, please reach out to us at telemetrynow@kentik.com. So for now, thanks for listening and bye-bye.

About Telemetry Now

Do you dread forgetting to use the “add” command on a trunk port? Do you grit your teeth when the coffee maker isn't working, and everyone says, “It’s the network’s fault?” Do you like to blame DNS for everything because you know deep down, in the bottom of your heart, it probably is DNS?

Well, you're in the right place! Telemetry Now is the podcast for you!

Tune in and let the packets wash over you as host Phil Gervasi and his expert guests talk networking, network engineering and related careers, emerging technologies, and more.
