Kentik - Network Flow Analytics

NetFlow Collector

What is a NetFlow Collector?

NetFlow is a protocol developed by Cisco Systems for recording statistical, infrastructure, routing, and other information about IP traffic flows traversing a NetFlow-enabled router or switch. A NetFlow collector is one of three typical functional components used for NetFlow analysis:

  • NetFlow Exporter: a NetFlow-enabled router, switch, probe or host software agent that tracks key statistics and other information about IP packet flows and generates flow records that are encapsulated in UDP and sent to a flow collector.

  • NetFlow Collector: an application responsible for receiving flow record packets, then ingesting, pre-processing, and storing the flow records from one or more flow exporters.

  • NetFlow Analyzer: a software application that provides tabular, graphical and other tools and visualizations to enable network operators and engineers to analyze flow data for various use cases, including network performance monitoring, troubleshooting, and capacity planning.
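To make the exporter-to-collector handoff concrete, the sketch below parses a NetFlow v5 export datagram, the kind of UDP payload an exporter sends to a collector. The v5 header and 48-byte record layout follow Cisco's published format; the function name and the fields kept in each flow dict are illustrative choices, not part of any particular product.

```python
import struct

# NetFlow v5 header: 24 bytes (version, count, sysUptime, unix_secs,
# unix_nsecs, flow_sequence, engine_type, engine_id, sampling_interval).
V5_HEADER = struct.Struct("!HHIIIIBBH")
# Each NetFlow v5 flow record is a fixed 48 bytes.
V5_RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")

def parse_v5_datagram(data: bytes):
    """Parse one NetFlow v5 export datagram into a header dict and flow list."""
    (version, count, _sys_uptime, _unix_secs, _unix_nsecs,
     flow_seq, _engine_type, _engine_id, _sampling) = V5_HEADER.unpack_from(data, 0)
    if version != 5:
        raise ValueError(f"unsupported NetFlow version: {version}")
    flows = []
    offset = V5_HEADER.size
    for _ in range(count):
        (src, dst, _nexthop, _in_if, _out_if, pkts, octets, _first, _last,
         sport, dport, _pad1, _tcp_flags, proto, _tos, _src_as, _dst_as,
         _src_mask, _dst_mask, _pad2) = V5_RECORD.unpack_from(data, offset)
        # Keep only a few commonly used fields for illustration.
        flows.append({
            "src": src, "dst": dst, "sport": sport, "dport": dport,
            "proto": proto, "packets": pkts, "bytes": octets,
        })
        offset += V5_RECORD.size
    return {"sequence": flow_seq, "records": count}, flows
```

A real collector would bind a UDP socket (port 2055 is a common convention, though any configured port works) and call a parser like this for each received datagram.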

NetFlow Collector

A NetFlow Collector’s main functions include:

  • Ingesting flow UDP datagrams from one or more NetFlow-enabled devices
  • Unpacking binary flow data into text/numeric formats
  • Performing data volume reduction through selective filtering and aggregation
  • Storing the resulting data in flat files or a SQL database
  • Synchronizing flow data to the NetFlow Analyzer application running on a separate computing resource
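The data volume reduction step above is often a roll-up: records sharing a set of key fields are merged and their counters summed. A minimal sketch, assuming flow records are dicts with the field names shown (the function name and default key fields are illustrative):

```python
from collections import defaultdict

def aggregate_flows(records, key_fields=("src", "dst", "proto")):
    """Reduce flow volume by rolling up records that share the given key
    fields, summing their packet and byte counters."""
    totals = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        totals[key]["packets"] += rec["packets"]
        totals[key]["bytes"] += rec["bytes"]
    return dict(totals)
```

Choosing coarser key fields (e.g. dropping ports) trades per-flow detail for smaller storage, which is exactly the tension the big-data approach discussed below aims to relax.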

NetFlow Collector and NetFlow Analyzer applications are two functions of a NetFlow analysis system or product. In some cases, the NetFlow analysis product implements both functions on the same server. This is appropriate when the volume of flow data generated by exporters is relatively low and localized.

In cases where flow data generation is high, or where sources are geographically dispersed, the collector function can instead run on separate, geographically distributed servers (such as rackmount server appliances). In these cases, the collectors synchronize their data to a centralized analyzer server.
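One way a remote collector might package data for periodic shipment to a central analyzer is as compressed JSON lines. This is a sketch of that idea only; the transport, endpoint, and record schema are deployment-specific assumptions, not a description of any particular product.

```python
import gzip
import io
import json

def serialize_batch(aggregates):
    """Package aggregated flow totals (as produced by a roll-up step,
    keyed by tuples of flow fields) into gzip-compressed JSON lines,
    ready to ship from a collector to a central analyzer."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        for key, counters in aggregates.items():
            line = json.dumps({"key": list(key), **counters}) + "\n"
            gz.write(line.encode())
    return buf.getvalue()
```

The analyzer side would decompress and split on newlines, giving it one JSON object per aggregated flow key.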

Historically, the most common way to run NetFlow collectors was on a physical, rackmounted Intel-based server running a Linux OS variant. More recently, flow collectors have been deployed on virtual machines. In either case, however, compute and storage constraints severely limit the amount of detailed network flow data that can be retained and analyzed.

Most recently, a unified, cloud-scale approach to NetFlow collector and analyzer architectures has emerged.  In this architecture, a horizontally-scalable, big data system replaces physical or virtual collector and analyzer appliances.  Big data systems allow for dramatically higher volumes of data ingest, longer data retention periods, deeper network traffic analytics and more powerful anomaly detection.  To learn more about big data NetFlow analysis, visit the Kentik Platform overview page.


Updated: March 09, 2021