Forward-thinking IT managers are already embracing big data-powered SaaS solutions for application and performance monitoring. If hybrid multi-cloud and hyperscale application infrastructure are in your future, ACG Analyst Stephen Collins’ advice for performance monitoring is “go big or go home.”
We live in the age of analytics, powered by incredible advances in distributed computing and big data technology. Companies are turning to data and analytics to improve all aspects of how they do business. Data scientists are the new rock stars as enterprise IT managers are busy retooling for this new age of data-driven insights, operations and decision making.
So it should come as no surprise that big data analytics will play a critical role in managing application performance in hybrid multi-cloud and hyperscale infrastructure. Network data is big data, characterized by massive volume, high velocity, and a wide variety of data types for monitoring application, infrastructure, and network performance. Big data analytics enables IT managers to rapidly correlate data across multiple datasets to extract actionable insights for a spectrum of operational use cases. Consider the many types of telemetry data that need to be collected, processed and stored:
- Wire data extracted directly from packets, including flow metadata
- KPI data from network elements and monitoring probes
- Server, OS, VM and container instrumentation
- Application performance metrics
- Syslog data from various servers and network elements
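Before these disparate telemetry streams can be correlated, they are typically normalized onto a common time series schema. A minimal sketch of that idea, with illustrative (not vendor-specific) field names:

```python
from dataclasses import dataclass, field
import time

@dataclass
class TelemetryRecord:
    """A normalized time series record covering the telemetry types above."""
    timestamp: float   # epoch seconds when the measurement was taken
    source: str        # e.g. "probe", "syslog", "apm", "vm_agent"
    metric: str        # e.g. "flow.bytes", "cpu.util", "http.latency_ms"
    value: float       # the measured value
    tags: dict = field(default_factory=dict)  # dimensions: host, region, app...

def normalize(raw: dict, source: str) -> TelemetryRecord:
    """Map a source-specific payload onto the common schema."""
    return TelemetryRecord(
        timestamp=raw.get("ts", time.time()),
        source=source,
        metric=raw["metric"],
        value=float(raw["value"]),
        tags={k: v for k, v in raw.items() if k not in ("ts", "metric", "value")},
    )

# A flow-metadata sample from a probe and a CPU sample from a VM agent
# land in the same shape, ready for a shared ingest pipeline.
flow = normalize({"ts": 1700000000, "metric": "flow.bytes", "value": 48213,
                  "src_ip": "10.0.0.5", "dst_ip": "10.0.1.9"}, source="probe")
cpu = normalize({"ts": 1700000001, "metric": "cpu.util", "value": 0.73,
                 "host": "vm-42"}, source="vm_agent")
```

Once wire data, KPIs, and instrumentation all share one shape, cross-dataset correlation becomes a query rather than a data-wrangling project.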
This telemetry is primarily time series data, which is often enriched and fused with contextual data from other sources, including:
- CDNs, DNS servers and GeoIP databases
- User, device and provider data from OSS/BSS and CRM servers
- Security threat intelligence feeds
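The enrichment step itself is conceptually a keyed join: a time series sample picks up attributes from the contextual sources listed above. A minimal sketch, with hypothetical in-memory tables standing in for a GeoIP database and a CRM system:

```python
# Hypothetical lookup tables standing in for a GeoIP database and a CRM system.
GEOIP = {"198.51.100.7": {"country": "DE", "asn": "AS3320"}}
CRM = {"user-1001": {"plan": "enterprise", "provider": "ExampleTelco"}}

def enrich(sample: dict) -> dict:
    """Fuse a telemetry sample with contextual data keyed by IP and user ID."""
    enriched = dict(sample)
    enriched.update(GEOIP.get(sample.get("client_ip", ""), {}))
    enriched.update(CRM.get(sample.get("user_id", ""), {}))
    return enriched

sample = {"ts": 1700000000, "metric": "http.latency_ms", "value": 240.0,
          "client_ip": "198.51.100.7", "user_id": "user-1001"}
record = enrich(sample)
# The enriched record now carries country, ASN and subscriber plan
# alongside the raw latency measurement.
```

In production this join happens at ingest time or query time against live services rather than static dicts, but the fusion pattern is the same.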
Modern big data platforms are capable of handling the volume, velocity and variety of performance monitoring data in hybrid multi-cloud and hyperscale environments. Highly scalable big data clusters support the cost-effective storage capacity required for petabytes of data and high-velocity data pipelines capable of ingesting streaming telemetry data in real time. Column-oriented big data repositories enable powerful multi-dimensional analytics on massive time series datasets. IT managers can perform complex queries that correlate across multiple data types in near real-time — seconds vs. minutes — gaining insights into application and network performance that are not possible using existing monitoring tools.
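The advantage of a column-oriented layout is that a query only scans the columns it touches. A toy sketch of a group-by aggregation over columnar data (the field names and values are illustrative):

```python
from collections import defaultdict

# Column-oriented layout: one list per field, aligned by row index.
columns = {
    "ts":     [1, 1, 2, 2],
    "region": ["eu", "us", "eu", "us"],
    "metric": ["latency_ms"] * 4,
    "value":  [120.0, 85.0, 310.0, 90.0],
}

def avg_by(cols: dict, group_col: str, value_col: str) -> dict:
    """Average `value_col` per `group_col`, reading only those two columns."""
    sums, counts = defaultdict(float), defaultdict(int)
    for key, val in zip(cols[group_col], cols[value_col]):
        sums[key] += val
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

result = avg_by(columns, "region", "value")  # average latency per region
# → {"eu": 215.0, "us": 87.5}
```

A real columnar store adds compression, vectorized scans, and distributed execution on top of this layout, which is what makes "seconds vs. minutes" queries over petabyte-scale time series feasible.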
Big data accommodates the large datasets required to execute machine learning algorithms that can automatically detect conditions, trends and anomalies in real time. Since machines are far better at crunching numbers than humans, these systems can automate workflows that would be time-consuming for operators to perform manually, surfacing conditions a human might never detect at all. Machine learning also enables predictive analytics, so IT managers can anticipate problems and take action before they occur. Ultimately, big data analytics and machine learning will provide the closed-loop feedback critical for automating many NetOps, ITOps, SecOps and DevOps workflows, reducing OPEX and improving uptime by eliminating operator errors.
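To make anomaly detection concrete: even the simplest statistical baseline illustrates the pattern that ML-based systems apply at scale. A minimal sketch using a rolling z-score (a deliberately simple stand-in for the production-grade models described above; thresholds and window size are arbitrary):

```python
import statistics

def zscore_anomalies(series, window=10, threshold=3.0):
    """Flag points deviating more than `threshold` standard deviations
    from the trailing window's mean."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mean = statistics.fmean(hist)
        stdev = statistics.stdev(hist)
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Steady latency with one spike at index 15
latency = [100.0 + (i % 3) for i in range(15)] + [400.0] + [100.0] * 5
spikes = zscore_anomalies(latency)  # → [15]
```

Real systems replace the z-score with learned models that account for seasonality and multi-metric correlation, but the closed-loop idea is the same: detect automatically, then feed the result back into an operational workflow.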
While big data can deliver big benefits, deploying and operating big data clusters is resource-intensive, and talent with the necessary expertise is in short supply. This is why many organizations are choosing SaaS-based big data solutions instead of deploying platforms on-premises. With SaaS, IT managers avoid the associated complexity and technology risk while significantly reducing up-front investment in favor of affordable, pay-as-you-grow pricing.
Forward-thinking IT managers are already embracing big data-powered SaaS solutions for application and performance monitoring. If hybrid multi-cloud and hyperscale application infrastructure are in your future, my advice for performance monitoring is “go big or go home.”