Consolidated Tools Improve Network Management

Alex Henthorn-Iwane

Summary

Stuck with piles of siloed tools, today’s network teams struggle to piece together both the big picture and the actionable insights buried in inconsistent UIs and fragmented datasets. The result is subpar performance for both networks and network teams. In this post we look at the true cost of legacy tools, and how Kentik Detect frees you from this obsolete paradigm with a unified, scalable, real-time solution built on the power of big data.


Freeing Your Network Teams From Legacy Limits

At Kentik, we firmly believe that network engineering, operations, and planning are critically dependent not only on the quality of traffic data but also on the accessibility of that data. While quality is easily defined in terms of accuracy, relevance, and detail, the value of even high-quality data is severely diminished if it can’t be accessed quickly and easily by those who need it. Unfortunately, that’s pretty much the status quo for network teams today. Stuck with piles of siloed tools, they struggle to piece together the big picture and the actionable insights buried in inconsistent UIs and fragmented datasets. The result is subpar performance for both networks and network teams. It’s high time to move away from this legacy paradigm to a unified, scalable, real-time solution built on the power of big data.

Today’s siloed network management tools can be traced back to an earlier era, when design was constrained by the limited computing, memory, and storage capacity of appliances or single-server software deployments. This meant that any given tool could only handle a narrow use case, which led to tool fragmentation. On the one hand, you had large management software vendors like HP, CA, IBM, and BMC who sold big branded “suites” that were actually, under the hood, a collection of separate tools, often from acquired companies. On the other hand were “best of breed” tools, sold by smaller vendors that specialized in one particular area. Either way, it turned out, APIs were mostly architectural afterthoughts, and users ended up with a collection of disparate, narrow tools that couldn’t (even with hefty consulting fees) be integrated into a seamless, efficient whole.

The problem facing network teams who still work with these fragmented tools is described in the Enterprise Management Associates report “The End of Point Solutions: Modern NetOps Requires Pervasive and Integrated Network Monitoring,” by Senior Analyst Shamus McGillicuddy:

“Fragmentation of visibility has long plagued the world of network operations. IT organizations have no shortage of tools that provide them with glimpses of what is happening with network infrastructure, but these tools often provide very narrow views. Some tools present insights gleaned from the collection of device metrics while others use network flows. Other tools gain insight through analysis of packet data, and so on. In many cases, multiple, separate tools receive the same set of source network data but retain different data subsets. While a network operations team can assemble a good understanding of the health and performance of a network with these tools, it is not easy. In fact, as Enterprise Management Associates (EMA) research has shown, a lack of end-to-end network visibility is the top challenge to enterprise network operations today.”

“Integration” via Swivel Chair

[Photo: a NOC engineer at work]

With true cross-tool integration being extremely rare in legacy tools, engineers have to do the “integration” work themselves via swivel-chair operations. High-value engineers are forced into highly inefficient workflows in which they visually correlate peaks on graphs, tables, and other data from multiple tools spread across several screens. This archaic approach is exemplified in the photo above, in which the tools available to a representative network operations center (NOC) engineer include a particularly arcane piece of gear: a telephone.

Poorly integrated tools that discard actionable details on network traffic result in lots of wasted time. But the overall cost goes beyond the efficiency of individual engineers. According to the EMA “End of Point Solutions” report, there’s an inverse relationship between the number of tools juggled by network operations teams and their performance outcomes. One example is problem detection:

“Our research found that IT organizations using fewer network management tools reported more effective network problem detection than organizations that were using more tools. The typical network operations team reported detecting 60% of network problems before end-users experience and report these issues. However, organizations using 11 or more network monitoring and troubleshooting tools detect only 48% of problems before end-users, and organizations using only one to three tools catch 71% of problems before they affect end users.”

Another area where worse outcomes tracked with more tools is network stability:

“Network stability also correlates with the size of a management toolkit. Among organizations that use 11 or more tools, 34% experience several network outages a day and another 28% experience network outages several times a month. Meanwhile, just 6% of organizations using one to three tools experience several outages a day. Instead, 21% of them experience just one or two outages per year, and 18% said they almost never have an outage.”

The Consolidated Solution

Kentik’s founders, who ran large network operations at Akamai, Netflix, YouTube, and Cloudflare, well understand the challenges faced by teams working with siloed legacy tools and fragmented data sets. They knew that network data was filled with valuable operations information, and also how much of that value (and their own time) was being wasted. So they built a big data engine that could ingest diverse traffic data into a unified time-series data set, keep that data unsummarized for months, and make it available to answer any ad-hoc question within moments. The result is Kentik Detect.
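
To make the idea of a unified time-series data set concrete, here is a minimal sketch, in Python, of what joining flow records with BGP, GeoIP, SNMP, and tag context at ingest time might look like. The record layout, field names, and lookup helpers (bgp_lookup, geoip_lookup, tagger) are illustrative assumptions for this post, not Kentik Detect’s actual schema or API.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class EnrichedFlowRecord:
        """Hypothetical unified record: raw flow fields plus routing, geo,
        device, and tag context joined at ingest time, so no cross-tool
        correlation is needed later. All names are illustrative only."""
        timestamp: datetime
        src_ip: str
        dst_ip: str
        protocol: str
        bytes: int
        packets: int
        device: str        # exporting router (from SNMP device inventory)
        interface: str     # ingress interface (from SNMP interface data)
        src_asn: int       # source AS, from a BGP table lookup
        dst_asn: int       # destination AS, from a BGP table lookup
        dst_country: str   # from a GeoIP lookup
        tags: list = field(default_factory=list)  # custom tags applied at ingest

    def enrich(flow, bgp_lookup, geoip_lookup, tagger):
        """Fuse one raw flow sample (a dict) with BGP, GeoIP, and tag context."""
        return EnrichedFlowRecord(
            timestamp=datetime.now(timezone.utc),
            src_ip=flow["src_ip"],
            dst_ip=flow["dst_ip"],
            protocol=flow["protocol"],
            bytes=flow["bytes"],
            packets=flow["packets"],
            device=flow["device"],
            interface=flow["interface"],
            src_asn=bgp_lookup(flow["src_ip"]),
            dst_asn=bgp_lookup(flow["dst_ip"]),
            dst_country=geoip_lookup(flow["dst_ip"]),
            tags=tagger(flow),
        )

Pre-joining this context at ingest, rather than correlating it across tools at query time, is what lets every later question be answered from a single data set.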

With Kentik Detect you can finally get the following data sets in one platform for both hyper-speed queries and streaming anomaly/DDoS detection (see the query sketch after the list for a sense of what that correlation enables):

  • NetFlow, sFlow, IPFIX
  • BGP
  • GeoIP
  • SNMP (device & interface data)
  • Host or sensor-based network performance metrics such as latency, TCP retransmits, errors, out-of-order packets, etc.
  • DNS log data
  • HTTP data such as URLs
  • Your custom tags applied in real time, based on ingested data field values
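
With all of those fields living in one record, an ad-hoc question becomes a single filter-and-aggregate pass rather than a swivel-chair correlation exercise. Continuing the hypothetical sketch above (again, this is not Kentik Detect’s actual query interface), here is the kind of question a unified data set can answer in one step: the top destination ASNs by traffic volume on a given interface over the last five minutes.

    from collections import defaultdict
    from datetime import datetime, timedelta, timezone

    def top_dst_asns(records, interface, minutes=5, limit=10):
        """Illustrative ad-hoc query over the hypothetical unified records:
        top destination ASNs by bytes on one interface in the last N minutes."""
        cutoff = datetime.now(timezone.utc) - timedelta(minutes=minutes)
        totals = defaultdict(int)
        for r in records:
            if r.timestamp >= cutoff and r.interface == interface:
                totals[r.dst_asn] += r.bytes
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:limit]

    # Example (hypothetical interface name):
    # top_dst_asns(enriched_records, interface="xe-0/0/1", minutes=5)

In Kentik Detect, of course, this kind of work is done at scale by a big data backend over months of unsummarized data; the sketch is only meant to show why pre-correlated records make such questions simple to express and fast to answer.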

This data correlation dovetails nicely with the takeaway of the EMA report:

“The days of network operators relying on point tools for network monitoring and troubleshooting are over, even if network managers aren’t yet aware of this change. The time has come for them to put away their point solutions, spreadsheets, and open source tools. EMA research shows that network operations teams are more effective when they use an integrated, consolidated toolset.”

Our customers have found the integration, speed, and power of Kentik Detect to be liberating. To get a feel for what’s possible, watch the Pandora case study video. For more general information about Kentik Detect, visit our product pages. To read the referenced EMA white paper, download it here. And if you’d like to see for yourself what Kentik Detect can do for your network management operations, start a free trial or request a demo.
