Phil is a veteran engineer with over a decade of experience troubleshooting, building, and designing networks in the field. Phil is also an avid blogger and podcaster with an interest in emerging technology and making the complex easy to understand.
We access most of the applications we use today over the internet, which means securing global routing matters to all of us. Surprisingly, the most common method of securing it is through trust relationships. MANRS, or the Mutually Agreed Norms for Routing Security, is an initiative to secure internet routing through a community of network practitioners that facilitates open communication, accountability, and information sharing.
In this post, learn what a UDR is, how it benefits machine learning, and what it has to do with networking. Analyzing multiple databases using multiple tools on multiple screens is error-prone, slow, and tedious at best. Yet that’s exactly how many network operators perform analytics on the telemetry they collect from switches, routers, firewalls, and so on. A unified data repository consolidates all of that data in a single database that can be organized, secured, queried, and analyzed better than when working with disparate tools.
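As a minimal illustration of the idea (a hedged sketch, not any vendor’s implementation), the snippet below loads hypothetical flow records from three device types into one SQLite table and answers a network-wide question with a single query — the kind of question that would otherwise require hopping between three tools:

```python
import sqlite3

# Hypothetical telemetry exported by three different device types.
router_flows   = [("rtr1", "10.0.0.5", 1200), ("rtr1", "10.0.0.9", 300)]
switch_flows   = [("sw1",  "10.0.0.5", 800)]
firewall_flows = [("fw1",  "10.0.0.5", 150), ("fw1", "10.0.0.9", 50)]

# One unified repository instead of three separate data silos.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE flows (device TEXT, src_ip TEXT, bytes INTEGER)")
for source in (router_flows, switch_flows, firewall_flows):
    db.executemany("INSERT INTO flows VALUES (?, ?, ?)", source)

# A single query across all devices: which source IPs send the most traffic?
top = db.execute(
    "SELECT src_ip, SUM(bytes) FROM flows GROUP BY src_ip ORDER BY 2 DESC"
).fetchall()
print(top)  # [('10.0.0.5', 2150), ('10.0.0.9', 350)]
```

The device names, IPs, and byte counts here are invented for illustration; the point is that once telemetry lands in one queryable store, cross-device analysis becomes one query instead of a multi-screen exercise.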
What do network operators want most from all their hard work? The answer is a stable, reliable, performant network that delivers great application experiences to people. In daily network operations, that means deep, extensive, and reliable network observability. In other words, the answer is a data-driven approach to gathering and analyzing a large volume and variety of network telemetry so that engineers have the insight they need to keep things running smoothly.
A data-driven approach to cybersecurity provides the situational awareness to see what’s happening with our infrastructure, but this approach also requires people to interact with the data. That’s how we bring meaning to the data and make those decisions that, as yet, computers can’t make for us. In this post, Phil Gervasi unpacks what it means to have a data-driven approach to cybersecurity.
eBPF is a powerful framework for observing every interaction between an application and the Linux kernel it relies on. eBPF gives us granular visibility into network activity, resource utilization, file access, and much more, and it has become a primary method for observing our applications on premises and in the cloud. In this post, we’ll explore in depth how eBPF works, its use cases, and how we can use it today, specifically for container monitoring.
Under the waves at the bottom of the Earth’s oceans lie almost 1.5 million kilometers of submarine fiber optic cables. Unnoticed by almost everyone, these cables underpin the entire global internet and our modern information age. In this post, Phil Gervasi explains the technology, politics, environmental impact, and economics of submarine telecommunications cables.
What does it mean to build a successful networking team? Is it hiring a team of CCIEs? Is it making sure candidates know public cloud inside and out? Or maybe it’s making sure candidates have only the most sophisticated project experience on their resume. In this post, we’ll discuss what a successful networking team looks like and what characteristics we should look for in candidates.
Today’s enterprise WAN isn’t what it used to be. These days, a conversation about the WAN is a conversation about cloud connectivity. SD-WAN and the latest network overlays are less about big iron and branch-to-branch connectivity, and more about getting you access to your resources in the cloud. Read Phil’s thoughts about what brought us here.
A cornerstone of network observability is the ability to ask any question of your network. In this post, we’ll look at the Kentik Data Explorer, the interface between an engineer and the vast database of telemetry within the Kentik platform. With the Data Explorer, an engineer can quickly filter and query the database in any manner and get results back in almost any form.
The advent of various network abstractions has meant many day-to-day networking tasks normally done by network engineers are now done by other teams. What’s left for many networking experts is the remaining high-level design and troubleshooting. In this post, Phil Gervasi unpacks why this change is happening and what it means for network engineers.
Traffic telemetry is the foundation of network observability. Learn from Phil Gervasi how to gather, analyze, and understand the data that is key to your organization’s success.
Collecting and enriching telemetry data with DevOps observability data is key to ensuring organizational success. Read on to learn how to identify the right KPIs, collect vital data, and achieve critical goals.
Does flow sampling reduce the accuracy of our visibility data? In this post, learn why flow sampling provides extremely accurate and reliable results while reducing the overhead required for network visibility and increasing our ability to scale our monitoring footprint.
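The statistical intuition behind that claim can be sketched in a few lines. This simulation (synthetic packet sizes, not real device behavior) samples 1 in 1,000 packets, scales the sampled byte count back up by the sampling rate, and compares it to the true total:

```python
import random

random.seed(42)

# Synthetic "ground truth": one million packets with varying sizes.
packets = [random.randint(64, 1500) for _ in range(1_000_000)]
actual_bytes = sum(packets)

# Sample 1-in-N packets, roughly as a router might with sampled flow export,
# then scale the sampled total back up by the sampling rate.
RATE = 1000
sampled = packets[::RATE]  # deterministic 1:N sampling for this sketch
estimated_bytes = sum(sampled) * RATE

error = abs(estimated_bytes - actual_bytes) / actual_bytes
print(f"actual={actual_bytes} estimated={estimated_bytes} error={error:.2%}")
```

At these volumes the scaled estimate typically lands within a few percent of the true total, which is why sampled flow data remains trustworthy for capacity and traffic analysis even though the device inspects only a tiny fraction of packets.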
Machine learning has taken the networking industry by storm, but is it just hype, or is it a valuable tool for engineers trying to solve real-world problems? The reality of machine learning is that it’s simply another tool in a network engineer’s toolbox, and like any other tool, we use it when it makes sense and we don’t when it doesn’t.
Most of the applications we use today are delivered over the internet. That means there’s valuable application telemetry embedded right in the network itself. To solve today’s application performance problems, we need to take a network-centric approach that recognizes there’s much more to application performance monitoring than reviewing code and server logs.
We’re fresh off KubeCon NA, where we showcased our new Kubernetes observability product, Kentik Kube, to the hordes of cloud native architecture enthusiasts. Learn about how deep visibility into container networking across clusters and clouds is the future of k8s networking.
At first glance, a DDoS attack may seem less sophisticated than other types of network attacks, but its effects can be devastating. Visibility into the attack and mitigation is therefore critical for any organization with a public internet presence.
A packet capture is a great option for troubleshooting network issues and performing digital forensics. But is it a good option for always-on visibility, considering that flow data gives us the vast majority of the information we need for normal network operations?
There is a critical difference between having more data and more answers. Read our recap of Networking Field Day 29 and learn how network observability provides the insight necessary to support your app over the network.
At Networking Field Day: Service Provider 2, Steve Meuse showed how Kentik’s OTT analysis tool can help a service provider better understand the services running on their network. Doug Madory introduced Kentik Market Intelligence, a SaaS-based business intelligence tool, and Nina Bargisen discussed optimizing peering relationships.
Investigating a user’s digital experience used to start with a help desk ticket, but with Kentik’s Synthetic Transaction Monitoring, you can proactively simulate and monitor a user’s interaction with any web application.
The theme of augmenting the network engineer with diverse data and machine learning really took the main stage at the World of Solutions. Read Phil Gervasi’s recap.
By peeling back the layers of your users’ interactions, you can investigate what’s going on in every aspect of their digital experience — from the network layer all the way to the application.
When something goes wrong with a service, engineers need much more than legacy network visibility to get a complete picture of the problem. Here’s how synthetic monitoring helps.
In the old days, we used a combination of several data types, help desk tickets, and spreadsheets to figure out what was going on with our network. Today, that’s just not good enough. The next evolution of network visibility goes beyond collecting data and presenting it on pretty graphs. Network observability augments the engineer and finds meaning in the huge amount of data we collect today to solve problems faster and ensure service delivery.