At The Turning Point: FinServ Data Networks

Jim Meehan

Network Engineering

Summary

While data networks are pervasive in every modern digital organization, few industries rely on them more heavily than financial services. In this blog post, we dig into the challenges and highlight the opportunities for network visibility in FinServ.


Challenges & Opportunities for Network Visibility in the Financial Services Industry

While data networks are pervasive in every modern digital organization, few industries rely on them more heavily than the Financial Services Industry (FSI). Since the telegraph era, banks and brokerages have used networks to synchronize transactions and transfer money. Network availability and performance are critically important in banking networks where millions of dollars are transacted every second. Investment firms running high-frequency trading applications have zero tolerance for loss and latency. And anyone who’s been close to an FSI IT department is familiar with the expectations and stress placed on the network team. On top of the internal pressure, a quickening stream of external challenges and technology trends makes the job even harder. A few of these include:

  • Migration from large monolithic applications to distributed applications (a.k.a. microservices)
  • Internet everywhere: it’s now your WAN, your digital supply chain, and your delivery vehicle
  • Moving from statically deployed applications to DevOps and CI/CD development models
  • Migrating from a central internal data center to a hybrid multi-cloud environment
  • Relentless growth of traffic volume and network elements
  • New security requirements due to increased threat activity and regulation

While some of these trends benefit the organization by lowering infrastructure costs or speeding application development cycles, they also create extraordinary challenges for network and security operations teams. The basic traffic details that these teams previously relied on for day-to-day tasks, such as troubleshooting congestion, outages, misconfigurations, and potential threats, are now harder to get because:

  • Traditional tooling can’t deploy to all the places where applications now live (e.g., the public cloud)
  • Current tools can’t scale with traffic growth and still provide on-demand details
  • Network identifiers like IP addresses or interface names have lost their context in the face of dynamically deployed and autoscaled applications

Along with these challenges comes opportunity, however. Tooling can’t be an afterthought — especially in organizations so reliant on the network. Network monitoring and visibility strategies must change to accommodate these trends in the network itself. In order to meet these new challenges, monitoring architecture should mirror the trends and technologies being employed in applications and networks overall. Key considerations for new network tooling include:

  • Web-scale — A buzzword, yes. But also a legitimate philosophy that encompasses modern distributed computing, built-in redundancy and high availability, and horizontally scalable data ingest, storage, and compute. Appliance-based architectures (physical or virtual) don’t scale to the volume of monitoring data produced by today’s networks.

  • Fast and flexible — Past monitoring architectures required a tradeoff: fast answers to predefined questions, or long query times (minutes to hours) with full query flexibility. Neither approach is workable for solving emergent issues in critical networks because the answers you need aren’t available when you need them. To be truly operationally useful, modern tooling must provide answers in seconds with robust data filtering and segmentation capabilities.

  • Proactive insight — Enabling network teams to quickly find “the needle in the haystack” is necessary, but not sufficient. Modern tooling should baseline normal network activity at scale, proactively surface anomalies before customers or users notice, and provide the details that engineers need to solve the problem quickly (a minimal baselining sketch follows this list).

  • Real context — In the past, networks may have been static enough that engineers could identify applications, users, or locations by IP address or network element names alone. That’s no longer possible at today’s scale, especially as application components are dynamically scaled and deployed. For full understanding, modern tooling must label network data with business-level context like application and service names, usernames, or physical locations (see the enrichment sketch below).

  • Deploys everywhere — Monitoring needs to go everywhere your applications and traffic go: your WAN (SD or not), your data centers, and your public cloud instances. A single UI to view all the traffic, anywhere, puts a stop to the network ops swivel chair.

  • Serves everyone — Network data has potential value across multiple teams: network engineering and operations for sure, but also SecOps and DevOps teams, and even management and executive ranks. All those teams will use the data differently, however. To truly provide value across teams, modern tooling must allow data to be curated, and views and workflows to be customized.
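
To make the baselining idea concrete, here is a minimal sketch (in Python) of one common approach: keep a rolling window of recent measurements per interface and flag samples that deviate sharply from the established mean. The data shapes, names, and thresholds here are illustrative assumptions, not a description of any particular product’s implementation.

```python
# A minimal baselining sketch: rolling per-interface window plus a
# standard-deviation check. Names and thresholds are illustrative only.
from collections import defaultdict, deque
from dataclasses import dataclass
import statistics


@dataclass
class Sample:
    interface: str       # e.g. "edge-router-1:xe-0/0/1" (hypothetical)
    bits_per_sec: float  # traffic rate observed in this interval


WINDOW = 288           # e.g. one day of 5-minute samples
MIN_HISTORY = 30       # don't judge until a baseline exists
THRESHOLD_SIGMA = 3.0  # flag samples > 3 std devs from the mean

history: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW))


def check(sample: Sample) -> bool:
    """Return True if this sample deviates from the interface's baseline."""
    window = history[sample.interface]
    anomalous = False
    if len(window) >= MIN_HISTORY:
        mean = statistics.fmean(window)
        stdev = statistics.pstdev(window)
        if stdev > 0 and abs(sample.bits_per_sec - mean) > THRESHOLD_SIGMA * stdev:
            anomalous = True
    window.append(sample.bits_per_sec)
    return anomalous
```

A production system would use more robust statistics and seasonality-aware baselines, but the core idea is the same: learn normal per-element behavior, then alert on deviation rather than on fixed thresholds.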
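And a similarly hedged sketch of what “real context” can mean in practice: enriching raw flow records with service and site labels before they are stored or queried. The subnets, device names, and flow fields below are invented for illustration; in production, these mappings would typically come from a CMDB, an orchestrator, or a cloud provider’s API.

```python
# A minimal enrichment sketch: tag flow records with business context.
# All subnets, device names, and field names are hypothetical.
import ipaddress

SERVICE_BY_SUBNET = {
    ipaddress.ip_network("10.1.0.0/16"): "core-banking",
    ipaddress.ip_network("10.2.0.0/16"): "trading-gateway",
}
SITE_BY_DEVICE = {"nyc-edge-01": "NYC-DC", "lon-edge-02": "LON-DC"}


def enrich(flow: dict) -> dict:
    """Attach service and site labels so queries can group by business terms."""
    src = ipaddress.ip_address(flow["src_ip"])
    flow["service"] = next(
        (name for net, name in SERVICE_BY_SUBNET.items() if src in net),
        "unknown",
    )
    flow["site"] = SITE_BY_DEVICE.get(flow["device"], "unknown")
    return flow


# Example: an engineer can now filter on service="core-banking" instead of
# memorizing which /16 it lives in.
print(enrich({"src_ip": "10.1.4.7", "device": "nyc-edge-01", "bytes": 1500}))
```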

Here at Kentik, we’ve built a next-generation network traffic analytics platform that incorporates these requirements and more. We’ve been helping FSI network and security teams meet and exceed their organizations’ high expectations for network availability, performance, and security, while simultaneously pivoting to new technologies and operational models. Stay tuned for upcoming blogs about specific challenges Kentik has helped customers solve. If you’d like to learn more, contact us to schedule a demo or start a trial.
