I’m excited to be writing my first blog post as VP Marketing at Kentik. I’ve spent most of my career in networking or network management, including stints at Packet Design, a routing and traffic analytics company, as well as CoSine Communications and Lucent Technologies (by way of Livingston Enterprises). Working at Kentik allows me to apply those experiences at a startup with an exceptionally compelling story: Kentik is rewriting the rules of network visibility with a cloud service driven by big data technology.
Before Kentik, network visibility was dominated by single-tier or multi-tier appliance architectures. As the volume of network metric data grows exponentially, the inadequacy of those approaches has become obvious. Kentik Detect, by contrast, uses a big data engine running on a scale-out, back-end infrastructure cluster, and is designed for either SaaS (public cloud) or on-premises (private cloud) deployment. By leveraging cloud efficiencies, Kentik Detect delivers scale, speed, time-to-value, and affordability that can’t be matched by older designs.
The importance of speed and scale for effective visibility has been discussed in previous posts from Kentik’s founders and from Jim Frey, our VP Product. So I thought I’d reflect on Kentik from a slightly different angle: the additional, valuable reasons for a cloud approach to network visibility. If we define “cloud” in the broadest sense as self-service, API-enabled, and multi-tenant, we can see why Kentik Detect is so compelling.
Kentik Detect not only ingests vast amounts of network data, it also offers customers multiple points of self-service access to that data: via Postgres, via the REST API, or via the web portal. Self-service runs deep at Kentik, because even the web UI places a distinct emphasis on flexibility, with rich support for contextually nested selection and filtering of data. To boot, even if you’re not a SQL nerd, the SQL behind every portal-based analysis can be accessed and passed along to others with more SQL expertise, so they can self-serve the data programmatically without having to write queries from scratch. Pretty nice.
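To make the self-service idea concrete, here is a minimal sketch of what programmatic access over the Postgres interface might look like. The table and column names (`all_devices`, `in_bytes`, `ctimestamp`, `dst_as`) are illustrative assumptions for this example, not Kentik’s actual schema; the point is that the output is plain SQL you can hand to any standard Postgres client.

```python
# Sketch of self-service access to flow data via SQL.
# NOTE: table/column names below are hypothetical, for illustration only.

def top_talkers_query(table="all_devices", group_by="dst_as",
                      hours=1, limit=10):
    """Build a SQL query for top traffic destinations over a recent window."""
    return (
        f"SELECT {group_by}, SUM(in_bytes) AS total_bytes "
        f"FROM {table} "
        f"WHERE ctimestamp > NOW() - INTERVAL '{hours} hours' "
        f"GROUP BY {group_by} "
        f"ORDER BY total_bytes DESC "
        f"LIMIT {limit}"
    )

sql = top_talkers_query(group_by="dst_as", hours=24)
print(sql)

# With a standard Postgres client (e.g. psycopg2) you would connect to the
# service endpoint with your credentials and run this query as-is:
#   conn = psycopg2.connect(host=..., user=..., password=..., dbname=...)
#   cur = conn.cursor(); cur.execute(sql); rows = cur.fetchall()
```

The same query string could equally be copied out of the portal and passed to a colleague, which is exactly the hand-off described above.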
I learned the importance of APIs early on when I worked on creating a “carrier-class” RADIUS AAA policy server product back in my days at Livingston Enterprises and Lucent. The standard way of configuring policies at that time was basically an ACL, and initial commercial products simply wrapped a GUI around that concept. But that wasn’t a solid enough foundation on which to build a large services business. So the wizards in our team came up with a plug-in workflow approach that became Alcatel-Lucent’s PolicyFlow™ scripting language. Sure, we had a GUI for simpler use cases, but that scripting language allowed our customers to use the product to implement new service creation in a very unconstrained fashion.
In my observation, the value of programmatic approaches like the above is often overlooked. In most network management products, for example, the focus is on the GUI, with the API at best an afterthought. Very few network management products dogfood their own API, which leaves the API well behind the functionality and performance curve. For example, I once worked closely with a customer that had built a native-language portal for its network ops team to get at network management data, but that capability was never built into an API. I don’t mean to knock the engineering or technical sales teams; they often complained about the API’s neglect. It was largely a matter of habits and perceived business priorities. But it was also because the GUI was totally separate from the API, which is quite typical of enterprise software. You only have so many engineering hours to go around, and the GUI simply has to work, so the API gets pushed down the stack over and over again.
At Kentik, the GUI and the API are one. SQL queries are the foundation. This is a beautiful thing because, unlike some products where the query language is proprietary, SQL is universal and standard. Both the REST API and the GUI are derived directly from SQL queries. Not only is that incredibly efficient from an engineering point of view, it also means that all three are always on par with each other. It means that Kentik transcends “dogfooding” because it’s all steak (interpret that as a carnivorous or vegan steak as you wish). Furthermore, with continuous deployment, new features like extended query filtering options show up in the UI, and since they’re all built on SQL, previously written queries will always keep working.
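The architectural point here can be sketched in a few lines: when every front end compiles down to the same query layer, the GUI and the API cannot drift apart. The function and field names below are hypothetical illustrations of that pattern, not Kentik’s actual internals.

```python
# Sketch: one shared SQL query builder feeding both the portal (GUI)
# and the REST API. All names here are illustrative assumptions.

def build_query(metric, dimension, limit=10):
    """Single source of truth: every front end compiles to this SQL."""
    return (f"SELECT {dimension}, SUM({metric}) AS total "
            f"FROM flows GROUP BY {dimension} "
            f"ORDER BY total DESC LIMIT {limit}")

def portal_view(metric, dimension):
    # The GUI renders the results of the shared query...
    return {"render": "chart", "sql": build_query(metric, dimension)}

def rest_api(metric, dimension):
    # ...and the API serves the results of the very same query.
    return {"endpoint": "/query", "sql": build_query(metric, dimension)}

# Both paths produce identical SQL, so neither can lag the other.
gui_sql = portal_view("in_bytes", "src_as")["sql"]
api_sql = rest_api("in_bytes", "src_as")["sql"]
assert gui_sql == api_sql
```

A new capability added to `build_query` becomes available to the portal and the API simultaneously, which is the engineering efficiency described above.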
Multi-tenancy is, of course, one of the prerequisites for SaaS. But you won’t find true multi-tenancy in offerings that masquerade as cloud-based but are actually a series of separate, single-tenant enterprise software instances deployed on VMs on either a public cloud or private cloud IaaS. Multi-tenancy and true cloud require some level of standardization of the service offering, even when each tenant’s service can be heavily customized to meet specific use cases. That’s the case for Kentik Detect, which is important because it means that all end-users directly benefit from Kentik’s continuous deployment of new functionality. Moreover, if you’re looking to offer Kentik Detect as an embedded service to your end-users, you’re not looking at some brutally awkward deployment architecture. It will work because it was designed to do that.
Taken together, these three additional attributes of Kentik Detect — self-service, API-enabled, and multi-tenant — further enhance the fundamental advantages of Kentik’s cloud-based big data approach to network visibility. And that’s the take-away for this first post. It was fun writing it; I’m having fun being here already. And I’m looking forward to speaking with you about Kentik Detect, and how network visibility in the cloud can enable your organization to maximize the value of its network data.