Valuable Network Data Deserves Life in the Cloud
Network operators and engineers have been victims of a horrible tragedy: the needless slaughter of innocent NetFlow data records. Every day, all over the globe, decent and hard-working network infrastructure elements like routers and switches diligently record valuable statistics from IP traffic flows and faithfully bundle those statistics into NetFlow, sFlow, and IPFIX records. These records are then sent to NetFlow collectors, where they are supposed to have the opportunity to work, putting their rich stores of valuable statistical information to use towards the universally accepted and common good of improving network activity awareness and performance. These upstanding deeds — like solving congestion problems, planning capacity, defeating DDoS attacks, and the like — are the outcomes that have been promised for a (technology) generation now. This is the vision that the creators of this good and wholesome system imagined.
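To make concrete what those collectors actually receive, here is a minimal sketch of the parsing step a collector performs on arrival, assuming the fixed-format NetFlow v5 export datagram (24-byte header followed by 48-byte flow records). The function and field names are illustrative, not any particular vendor's API.

```python
import socket
import struct

# NetFlow v5 wire format (all fields big-endian).
HEADER_FMT = "!HHIIIIBBH"                 # 24-byte packet header
RECORD_FMT = "!4s4s4sHHIIIIHHBBBBHHBBH"   # 48-byte flow record
HEADER_LEN = struct.calcsize(HEADER_FMT)
RECORD_LEN = struct.calcsize(RECORD_FMT)

def parse_netflow_v5(datagram: bytes):
    """Parse one NetFlow v5 export datagram into a list of flow dicts."""
    (version, count, _uptime, _secs, _nsecs,
     _seq, _etype, _eid, _sampling) = struct.unpack_from(HEADER_FMT, datagram, 0)
    if version != 5:
        raise ValueError(f"expected NetFlow v5, got version {version}")
    flows = []
    for i in range(count):
        offset = HEADER_LEN + i * RECORD_LEN
        (src, dst, _nexthop, _in_if, _out_if, pkts, octets,
         _first, _last, sport, dport, _pad1, _flags, proto,
         _tos, _sas, _das, _smask, _dmask, _pad2) = struct.unpack_from(
            RECORD_FMT, datagram, offset)
        flows.append({
            "src": socket.inet_ntoa(src), "dst": socket.inet_ntoa(dst),
            "sport": sport, "dport": dport,
            "proto": proto, "packets": pkts, "bytes": octets,
        })
    return flows
```

Even this tiny sketch shows how much dimensionality each record carries: addresses, ports, protocol, byte and packet counts, interfaces, AS numbers, TCP flags. That is the richness at stake in the rest of this story.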
How far we have fallen short of that vision.
The details that NetFlow can reveal are ignored in favor of shallow summary reports.
These innocent — and let’s not forget, intrinsically valuable — flow records are not being allowed to live the full productive lives they were promised. Instead, vendors of legacy network visibility systems have perpetrated an insidious program of NetFlow record layoffs. The nearly endless details that NetFlow records can reveal are blithely ignored, with shockingly shallow summary reports foisted on network engineers and operators instead. This fools no one, of course. Network engineers know that a few single-dimension top-talker pie charts don’t give them enough operational detail to really solve problems. Yet somehow the purveyors of data reduction have succeeded in peddling the notion that this is as good as it gets.
A darker place
While the shallow mockery of network visibility is bad enough, the full story goes to an even darker place. If denying proper employment to these valuable data citizens is a shame, then the sinister reality is even more heartbreaking: these NetFlow records are being cruelly disposed of. You read that right. Every day around the world, literally trillions of NetFlow records innocently debark from the network into NetFlow collectors, little anticipating their own impending doom. In most cases, NetFlow, sFlow, and IPFIX records are FIFO’d without conscience within mere minutes of arrival.
All the data that’s tossed away represents value forever lost to the world’s networks.
The global scale of this tragedy is alarming. Consider that a single network device generating 4000 NetFlow records per second amasses nearly 350 million such records per day. Trillions is a conservative estimate of the losses across the globe. But beyond the numbing size of the quantities involved, consider the value being lost to the world’s networks. All that detail, in even one network, could do a world of good, yet today it’s simply tossed away. How much improvement in application performance, capacity planning, DDoS defense, and peering and transit savings could result? If network engineers weren’t denied the kind of detail they really need to do their jobs, how much sweat, how many tears, and how many hours of sleep could be spared?
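The per-device arithmetic above is easy to check. A short sketch (the 4000 records/second figure is the example rate from the text, not a universal constant):

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds in a day

def records_per_day(records_per_second: int) -> int:
    """Flow records amassed per day at a steady export rate."""
    return records_per_second * SECONDS_PER_DAY

daily = records_per_day(4000)
print(f"{daily:,}")  # 345,600,000 — i.e., "nearly 350 million" per day
```

At that rate, a single device crosses a trillion records in roughly eight years; a few thousand such devices worldwide get there in about a day, which is why "trillions" is a conservative global estimate.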
The mandarins of legacy network monitoring will claim that there’s no other way. But that’s just not true. The claim that NetFlow collectors “just can’t keep data around for very long” might have been true in 1999, but not now. We have the cloud, we have big data. Every NetFlow record that wants to live and work for the good of networks can and should have the opportunity to do so. We’re not talking about some Pollyanna-ish notion of data immortality. We’re talking about a useful life of service for NetFlow records and their valuable information. For every NetFlow record, there comes a day when it must join its brethren in the great null and leave behind just a trace of its individuality in summary reporting memories. But the current, anachronistic destruction of NetFlow before its time must come to a stop!
A better way
Despite this bleak picture, we can report that there is now renewed hope for NetFlow. And the great news is that it’s so easy. Through the miracle of SaaS, it literally takes just fifteen minutes to get big data and cloud-scale capacity and analytical power. Proof is available with our free trial.
It’s time for engineers and operators to join in revolt against data destruction.
It’s time for network engineers and operators to reject the data reduction orthodoxy and join the big data cloud revolution. Save the NetFlow!
If you want to join the revolution and put NetFlow records back to work in your network, contact us and we’ll give you a demo of how big data at cloud scale can make that a reality in your network management practice.
P.S. Spread the word to Save the NetFlow!
Twitter: @kentikinc #savethenetflow