Industry 4.0: What Could Possibly Go Wrong?
This series of blog posts will examine many aspects of ensuring application performance in cloud-scale enterprise WANs that use a hybrid of public and private networks. However, in this post and the next, I’d like to take a closer look at the unique challenges of managing network performance inside the new generation of hyperscale data centers supporting the delivery of cloud-scale applications. The scale and performance characteristics of these networks are radically different from those supporting traditional three-tier enterprise data centers, and existing tools and techniques for managing network performance are therefore not adequate.
A key benefit of migrating enterprise IT applications to public clouds is that webscalers like Amazon, Google and Microsoft will end up managing the performance of the supporting hyperscale data centers. However, there will always be mission-critical applications that businesses need to deploy in their own private cloud infrastructure, which means these companies will have to invest in acquiring the new tools and skills needed to deploy and manage hyperscale data centers.
A prime example is the cloud-scale, data-intensive, cognitive computing infrastructure that will support Industry 4.0 applications. Management consultants coined the term “Industry 4.0” to describe the process of digital transformation in the manufacturing domain. McKinsey offers a typical definition:
“We define Industry 4.0 as the next phase in the digitization of the manufacturing sector, driven by four disruptions: the astonishing rise in data volumes, computational power, and connectivity, especially new low-power wide-area networks; the emergence of analytics and business-intelligence capabilities; new forms of human-machine interaction such as touch interfaces and augmented-reality systems; and improvements in transferring digital instructions to the physical world, such as advanced robotics and 3-D printing.”
Digital transformation in this sector will have major implications for enterprise IT systems, software and network infrastructure. Industry 4.0 involves modeling and monitoring the physical world in the digital domain using cyber-physical systems. IoT plays a key role in sensing all of the critical elements in the physical world and delivering a continuous feed of sensor data to a Big Data analytics cluster situated in the cloud. Industry 4.0 “smart factories” consist of multiple cyber-physical system modules creating virtual copies of the physical world, monitoring physical processes and working cooperatively with each other in real time.
What could possibly go wrong?
No doubt, many things can and will, but let’s focus on the overarching challenge: the massive scale of these operations.
It is common for large industrial companies to have multiple factories distributed globally with close ties to key suppliers, which could number in the hundreds. An Industry 4.0 manufacturing operation will involve continuously ingesting huge volumes of sensor data from a vast number of endpoints, typically several orders of magnitude more than even the largest enterprise manages today.
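To make “huge volumes” concrete, here is a back-of-envelope sketch of the ingest rate such an operation might generate. Every figure below is an illustrative assumption, not a measurement: a hypothetical manufacturer with 100 sites, 50,000 sensors per site, one 200-byte reading per sensor per second.

```python
# Back-of-envelope Industry 4.0 telemetry scale.
# All figures are illustrative assumptions, not measurements.
FACTORIES = 100
SENSORS_PER_FACTORY = 50_000
READINGS_PER_SENSOR_PER_SEC = 1
BYTES_PER_READING = 200
SECONDS_PER_DAY = 86_400

# Aggregate readings arriving every second across all sites.
readings_per_sec = FACTORIES * SENSORS_PER_FACTORY * READINGS_PER_SENSOR_PER_SEC

# Sustained ingest bandwidth the analytics cluster must absorb.
ingest_bytes_per_sec = readings_per_sec * BYTES_PER_READING

# Daily event count the Big Data cluster must store and process.
readings_per_day = readings_per_sec * SECONDS_PER_DAY

print(f"{readings_per_sec:,} readings/s")        # 5,000,000 readings/s
print(f"{ingest_bytes_per_sec / 1e9:.1f} GB/s")  # 1.0 GB/s sustained
print(f"{readings_per_day:,} readings/day")      # 432,000,000,000 readings/day
```

Even with these modest per-sensor assumptions, the cluster must absorb millions of events per second around the clock, which is the kind of sustained east-west load that traditional enterprise data center networks were never designed for.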
Private cloud hyperscale data centers will be needed to support Big Data clusters for processing sensor data in real time, along with a whole host of Industry 4.0 applications for modeling, monitoring and controlling the collection of cyber-physical system modules that comprise the full-scale operation. These data centers will be based on a highly efficient leaf-spine switching architecture with full mesh connectivity, enabling network traffic to flow “east-west” between any two servers with minimal latency. This is proving to be the optimal data center network design for DevOps-based cloud computing applications deployed using vast numbers of containers running individual microservices.
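The key property of a leaf-spine fabric is that every leaf switch connects to every spine switch, so any server-to-server path crosses exactly one spine and latency is uniform regardless of which racks the two servers sit in. A minimal sketch of that topology (switch counts are arbitrary illustrative values):

```python
from itertools import combinations

def build_leaf_spine(num_leaves, num_spines):
    """Full mesh: every leaf switch links to every spine switch."""
    return {(leaf, spine)
            for leaf in range(num_leaves)
            for spine in range(num_spines)}

def leaf_to_leaf_hops(links, leaf_a, leaf_b, num_spines):
    """Link count on the shortest path between two distinct leaves.

    Any spine connected to both leaves yields a two-link path
    (leaf_a -> spine -> leaf_b); in a full mesh, every spine does.
    """
    for spine in range(num_spines):
        if (leaf_a, spine) in links and (leaf_b, spine) in links:
            return 2
    return None  # unreachable in a healthy full-mesh fabric

# Illustrative fabric: 8 leaf switches, 4 spine switches.
links = build_leaf_spine(8, 4)

# Every pair of leaves is exactly two links apart -- uniform east-west latency.
assert all(leaf_to_leaf_hops(links, a, b, 4) == 2
           for a, b in combinations(range(8), 2))
```

Because the hop count is the same for every rack pair, east-west latency is predictable, and capacity scales horizontally: adding a spine adds an equal-cost path between every pair of leaves rather than creating a new bottleneck tier.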
In my next blog post, we’ll take a closer look at the unique challenges of managing application performance inside these hyperscale data center environments.