sFlow was originally developed by InMon to address the need for a common, universal standard for exporting Internet Protocol (IP) traffic information from switches, routers, probes, and other network devices. An sFlow collector is one of three typical functional components used for sFlow analysis:

  • sFlow Exporter: an sFlow-enabled router, switch, probe, or host software agent that samples one out of every n packets at random (see the sketch after this list). From these sampled packets, the sFlow exporter generates UDP-based flow records and sends them to an sFlow collector.
  • sFlow Collector: an application responsible for receiving sFlow record packets, then ingesting, pre-processing, and storing the flow records from one or more flow exporters.
  • sFlow Analyzer: a software application that provides tabular, graphical and other tools and visualizations to enable network operators and engineers to analyze flow data for various use cases, including network performance monitoring, troubleshooting, and capacity planning.
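
The 1-in-n sampling mentioned above is statistical rather than strictly periodic: on average one out of every n packets is selected, but the choice is randomized to avoid bias from regular traffic patterns. The following is a minimal sketch of that idea in Python; the sampling rate of 1024 is just an illustrative value, and real sFlow agents implement this in the device's forwarding path.

```python
import random

SAMPLING_RATE = 1024  # hypothetical 1-in-1024 rate configured on the exporter

def should_sample() -> bool:
    """Randomly select a packet with probability 1/n.

    On average one out of every SAMPLING_RATE packets is chosen,
    but the selection is random rather than every n-th packet.
    """
    return random.randrange(SAMPLING_RATE) == 0

# Simulate sampling over one million packets.
sampled = sum(1 for _ in range(1_000_000) if should_sample())
print(f"sampled {sampled} of 1,000,000 packets (~1/{SAMPLING_RATE})")
```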

An sFlow collector’s main functions include:

  • Ingesting sFlow UDP datagrams sent from one or more sFlow exporters (see the collector sketch after this list)
  • Unpacking binary flow data into text/numeric formats
  • Performing data volume reduction through selective filtering and aggregation
  • Storing the resulting data in flat files or a SQL database for post-processing by sFlow Analyzer applications
  • Synchronizing flow data to the sFlow Analysis Application running on a separate computing resource
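
To make the ingest and unpack steps concrete, here is a minimal Python sketch of a collector that listens on UDP port 6343 (the standard sFlow port), unpacks the fixed sFlow v5 datagram header, and keeps a toy in-memory aggregate. A production collector would also decode the flow and counter samples that follow the header, apply filtering, and persist the results; all of that is omitted here.

```python
import socket
import struct
from collections import Counter

SFLOW_PORT = 6343  # standard sFlow collector port

def parse_header(dgram: bytes) -> dict:
    """Unpack the fixed sFlow v5 datagram header (XDR-encoded, big-endian)."""
    version, addr_type = struct.unpack_from("!II", dgram, 0)
    if addr_type == 1:                                  # 1 = IPv4 agent address
        agent, offset = socket.inet_ntoa(dgram[8:12]), 12
    else:                                               # 2 = IPv6 agent address
        agent, offset = socket.inet_ntop(socket.AF_INET6, dgram[8:24]), 24
    sub_agent, seq, uptime_ms, n_samples = struct.unpack_from("!4I", dgram, offset)
    return {"version": version, "agent": agent, "sub_agent": sub_agent,
            "sequence": seq, "uptime_ms": uptime_ms, "sample_count": n_samples}

samples_per_agent = Counter()                           # toy aggregation step
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", SFLOW_PORT))
while True:
    dgram, _src = sock.recvfrom(65535)                  # one sFlow datagram per read
    hdr = parse_header(dgram)
    samples_per_agent[hdr["agent"]] += hdr["sample_count"]
    print(hdr["agent"], hdr["sequence"], dict(samples_per_agent))
```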

sFlow Collector and Analyzer applications are two functions of an sFlow analysis system or product. In some cases, the sFlow analysis product implements both functions on the same server. This is appropriate when the volume of flow data generated by exporters is relatively low and localized. Where flow data generation is high or sources are geographically dispersed, the collector function can run on separate, geographically distributed servers (such as rackmount server appliances). In these cases, the collectors synchronize their data to a centralized analyzer server.
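
As a rough illustration of that synchronization step, a remote collector might periodically ship pre-aggregated summaries to the central analyzer. Note that the endpoint URL, collector ID, and JSON payload below are all hypothetical; real products use their own transport and format.

```python
import json
import urllib.request

ANALYZER_URL = "http://analyzer.example.com/ingest"  # hypothetical central endpoint
COLLECTOR_ID = "nyc-pop-1"                           # hypothetical site identifier

def sync_batch(records: list[dict]) -> None:
    """POST one batch of pre-aggregated flow records to the central analyzer."""
    body = json.dumps({"collector": COLLECTOR_ID, "records": records}).encode()
    req = urllib.request.Request(
        ANALYZER_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=10)

sync_batch([{"agent": "10.0.0.1", "sampled_packets": 4211}])
```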

Historically, the most common way to run sFlow collectors was on a physical, rack-mounted Intel-based server running a Linux OS variant. More recently, flow collectors have been deployed on virtual machines. Unfortunately, in either case, compute and storage constraints severely limit the amount of detailed data that can be retained and analyzed.

Most recently, a unified, cloud-scale approach to sFlow collector and analyzer architecture has emerged. In this architecture, a horizontally scalable big data system replaces physical or virtual collector and analyzer appliances. Big data systems allow for dramatically higher ingest volumes, greater data retention, deeper analytics, and more powerful anomaly detection. To learn more about big data sFlow analysis, visit the Kentik Detect overview page.