Contact Us

April 2017

Ultimate Exit Release #1

Fasten your seatbelts, because this one is a big deal. It’s the first release within a bigger plan for end-to-end visibility of your traffic, a holy-grail objective of flow data reconciliation. What do I mean by “end-to-end visibility”? I mean an easy way to figure out what volumes of traffic are flowing in and out of your network, from any source to any destination network.

A great example is assessing potential peer or transit prospects.
How many times have you had to toggle between multiple spreadsheets that contain only approximations of traffic to or from various ASNs?
Hacked together convoluted Excel formulas?
All in order to guess the ROI of what should be a simple decision?

What about trying to figure out how much traffic from a peer is being routed locally versus over more costly long-haul links? 
You need to be able to figure out precisely, at the site and device level (and at the interface level in the future), the traffic flowing between network entry and exit points.

It turns out that the sophistication of flow consolidation and reconciliation needed to achieve this task is beyond many network engineering teams’ home-grown tools, data infrastructure, and software engineering capacity. And for good reason: it’s a hard problem.

Et Voila!

Introducing two newly added destination dimensions (fanfare, please):

  • Ultimate Exit Site
  • Ultimate Exit Device

How do I use these?
Let’s say I am a transit provider.  
I move packets from content providers to eyeball ISPs, and carry them over a costly global backbone.  
I want to look at the traffic I’m exchanging with one of the major content providers like Google, and see where it comes into, and where it goes out of, my network.

Let’s further assume that I run a well-organized network, so my interface description conventions indicate any interconnections with Google.  This means I can easily include these interconnects with a simple filter. For example:

Filtering Google PNIs

BTW, if I know that I’m going to be looking at these often, I also make myself a nice Saved Filter (see below) and just call it anytime I need it instead.

Saved filter: Google PNI

Then I can use that saved filter in any Data Explorer query I’m working on.

Applying Google Saved Filter

Alright, so here’s what I want to look at, in sequence:

  • The site where the traffic enters the network
  • The site where the traffic leaves the network
  • The next-hop Network
  • Which eyeball network the traffic terminates at, i.e. the Destination AS

With my handy new dimensions, I can answer this question with the following query:

ultimate exit dimensions
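For SQL-minded users, the same breakdown can be sketched as a query string. Note that the table and column names below (`all_devices`, `i_device_site_name`, `i_ult_exit_site`, `dst_nexthop_as`, `dst_as`) are illustrative assumptions, not documented schema:

```python
# A hypothetical SQL rendering of the Data Explorer query above.
# All table/column names are illustrative assumptions only.
ULTIMATE_EXIT_QUERY = """
SELECT i_device_site_name AS entry_site,
       i_ult_exit_site    AS exit_site,
       dst_nexthop_as     AS next_hop_asn,
       dst_as             AS dest_asn,
       SUM(both_bytes)    AS total_bytes
FROM all_devices
WHERE ctimestamp > 3600  -- last hour, plus the 'Google PNI' saved filter
GROUP BY 1, 2, 3, 4
ORDER BY total_bytes DESC
LIMIT 20
"""
```

The four selected columns line up one-to-one with the bullets above: entry site, ultimate exit site, next-hop network, and destination AS.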

To get a very useful visual, I’ll select the Sankey display type and voila!

Ultimate Exit Sankey

Looking at the generated Sankey diagram (above), I can now instantly know what traffic is flowing between the entry Site and the Ultimate Exit site, and which eyeball networks are reached.

What you would usually do at this point is look at where transport is the most expensive or least performant between your Entry Site and Ultimate Exit site and optimize for either of them.

In the above Sankey diagram, I can see that I am shipping a lot of traffic from Frankfurt to Marseilles.  So a few questions come to mind that I can explore further using Kentik data:

  • Should I track Google’s ability to PNI in Marseilles and save myself some Frankfurt→Marseilles transport costs?
  • Do I want to review my prices for transport for London→Marseilles based on how much my Google PNI consumes of that capacity?
  • What portion of the private links between Frankfurt and Marseilles is going to those Google PNIs, and therefore what’s the real ROI I’m getting from these links?

You can’t even start this ROI exploration when you’re stuck in spreadsheet hell.
Stay tuned, because there’s a lot more coming over the next few months in this arena.


Custom Dimensions update

Our Custom Dimension infrastructure has been upgraded.  Before the upgrade, the default provisioning rules were:

  • 5 Custom Dimensions max per customer account
  • 12 characters max for each dimension’s values
  • a maximum of 5,000 Populators per dimension (i.e. each dimension could take at most 5,000 different values)

The new infrastructure allows us to now offer:

  • 10 Custom Dimensions max per customer account (double the previous limit)
  • 128 characters max for each dimension’s values (i.e. you can now store much larger values for your business flagging needs)
  • a maximum of 10,000 Populators overall across all dimensions (the previous 5,000-Populators-per-dimension cap has been lifted)
    → i.e. you can now use more than 5,000 Populators on a single Custom Dimension, subject to the overall limit.
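To make the pooled limit concrete, here’s a quick sanity check for a planned Custom Dimension layout. This is our own sketch of the arithmetic, using the limits above; the function and constants are illustrative, not part of the product:

```python
# Sanity-check a planned Custom Dimension layout against the new limits:
# at most 10 dimensions, and at most 10,000 Populators pooled across all
# of them (no more per-dimension cap).
MAX_DIMENSIONS = 10
MAX_TOTAL_POPULATORS = 10_000

def plan_is_valid(populators_per_dimension):
    """populators_per_dimension: list of Populator counts, one per dimension."""
    return (len(populators_per_dimension) <= MAX_DIMENSIONS
            and sum(populators_per_dimension) <= MAX_TOTAL_POPULATORS)

# e.g. one big dimension with 7,000 Populators plus three smaller ones is
# now fine, because the old 5,000-per-dimension cap is gone.
```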

PREVIEW: User Based Filtering

Every now and then we will preview an upcoming feature; this time, we’ll look at User-Based Filtering.

Occasionally we believe there is value in releasing an early, crude version of a feature set in order to get early feedback from our users and iterate on it quickly until it’s exactly the feature they really want.  This is what we have decided to do with User-Based Filtering.

With this feature, ‘member’ (as opposed to ‘admin’) users can now be restricted to certain data via a user filter.
Admin users can set up a given user’s filters on the Users listing page.

Admin > User

Modify user prefs in Admin > User list

Filters are composed in the same fashion as in the Data Explorer filter panel. Once tied to a user, these filters are systematically appended (“AND’d,” if you will) to any query the user runs via:

  • Data Explorer queries via Kentik Portal UI
  • SQL queries from the SQL Query explorer or via PGSQL connections
  • API queries
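Conceptually, the appending works like a logical AND of filter clauses. The sketch below is our own illustration of that composition (the function and the example filter strings are hypothetical, not actual Kentik portal code):

```python
# Illustrative sketch: combine a query's own filter with the admin-set
# user filter so the user can never widen their view beyond it.
def apply_user_filter(query_where: str, user_filter: str) -> str:
    if not user_filter:
        return query_where   # unfiltered user: query runs as-is
    if not query_where:
        return user_filter   # user filter alone
    return f"({query_where}) AND ({user_filter})"
```

For example, a member restricted to `site = 'Ashburn DC3'` who queries `dst_as = 15169` effectively runs `(dst_as = 15169) AND (site = 'Ashburn DC3')`.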

The underlying idea is for Admins to be able to grant (very) granular rights on what specific users are allowed to see and/or query.  For example, only allowing certain users to query flows from backbone routers, as in the below screenshot example:

User Based Filter example

The filter screenshot below allows certain users to only query flows for CUSTOMER interfaces on ‘Ashburn DC3’ and ‘Ashburn DC4’:

User Based Filter example

As explained above, we have released the minimum amount of functionality for this feature and hope to leverage the feedback of interested users in order to iterate on it.

Some open questions we have for this feature include:

  • Should filtered users be made aware in the UI that they are being filtered? In the current version of this feature, the user wouldn’t know.
  • If filtered users are made aware, should we indicate a permanently locked filter setting in the Data Explorer?
  • Should we let users know they are being administratively filtered, but not indicate what the filter constraints are?
  • Should the display of filtering information be administratively configurable at the user level?
  • How do we mention or indicate user filtering in the API and SQL? For example, when a user submits a SQL query, should we return a modified version of the submitted request with the appended filtering in its SQL form?

Please let us know your feedback: Is this a useful feature you would like to rely on?  What should the next iteration of it look like?

Sampling Rate

This one here is for the nerdier users out there. As you may know, our ingest platform includes smart ways of re-sampling flows exported by your devices to match your contracted FPS.  We’ve been improving this functionality quite a lot recently.

Our goal is to resample accurately and keep resampling-induced distortion as close to zero as possible.
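The statistical idea behind distortion-free resampling can be sketched as follows (our own illustration of the principle, not Kentik’s actual ingest code): keep each flow with probability p, and scale its counters by 1/p, so that expected totals stay unchanged.

```python
import random

def resample(flows, incoming_fps, budget_fps, rng=random.random):
    """Thin a flow stream to fit a flows-per-second budget without
    biasing byte/packet totals (expected totals stay constant)."""
    if incoming_fps <= budget_fps:
        return list(flows)  # already under budget: keep everything as-is
    p = budget_fps / incoming_fps
    kept = []
    for f in flows:
        if rng() < p:
            # scale counters by 1/p so expected totals are unchanged
            kept.append({**f, "bytes": f["bytes"] / p,
                              "packets": f["packets"] / p})
    return kept
```

With p = 0.5, for instance, roughly half the flows survive, but each survivor counts double, so aggregate traffic estimates are unbiased.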

In order to keep our engineering work accurate, we actually had to add Sampling Rate to our:

  • Available Dimensions: new dimension, Sampling Rate
  • Available Metrics: new metric, Sampling Rate
  • Available Filters: new filter, Sampling Rate (note that filter values are the sampling rate multiplied by 100, e.g. a 1:2000 sampling rate matches a filter value of 200000)

This could come in handy when debugging potential flow sampling misconfigurations on your end.

Extra Data Explorer niceties

As we see usage of the Data Explorer evolve with our customers, we often throw in additional convenience features that we think streamline the overall user experience.

This time around, we’ve added a couple of convenience tweaks, both geared towards optionally stripping processing to make query return times faster:

  • You can now disable computation of the Total over a metric. This saves processing time in our mid-layer (i.e. returns query results faster) if you already know you aren’t interested in the total value for your breakdown:

Disable total

  • You can also now disable hostname lookups directly from the Data Explorer query panel, which shaves down the time to query response, since IPs won’t need to be reverse-DNS’d before returning the results of an IP/CIDR breakdown:

Data table with reverse DNS enabled

Data table with reverse DNS disabled

Alerting Update

Syslog Alert Notification Channel

We have just added the capability for you to ship alert notifications to good ole Syslog infrastructure.  This has been a recurring ask since we released v3 of our Anomaly Detection / Alerting platform.  Your voice has been heard!

Syslog alerting works in the same way as the JSON Webhook feature does: by offering a new type of notification channel, aptly named ‘Syslog’.

When configuring a threshold in an Alert Policy (Alerting > Alert Policies > edit a policy), you will notice that a new entry has been added to the Create Notification Channel button, alongside the existing Email and JSON Webhook options.  You can tune all the config knobs when you create the channel, including Port, UDP/TCP transport, Syslog Severity, and Syslog Facility.

syslog config panel
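Under the hood, a syslog notification is just a small text message whose PRI header encodes the Facility and Severity you configure (per RFC 3164, PRI = facility * 8 + severity). A minimal UDP sender looks like this; it’s our own sketch of the protocol, and Kentik’s actual message payload will differ:

```python
import socket

def syslog_pri(facility: int, severity: int) -> int:
    # RFC 3164: PRI = facility * 8 + severity
    # e.g. local0 (16) at warning (4) yields <132>
    return facility * 8 + severity

def send_syslog(host: str, port: int, facility: int, severity: int,
                tag: str, message: str) -> None:
    """Fire a single RFC 3164-style message over UDP."""
    line = f"<{syslog_pri(facility, severity)}>{tag}: {message}"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(line.encode("utf-8"), (host, port))
```

The Facility and Severity knobs in the channel config map directly into that PRI value, which is how your collector will classify and route the alert.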

Alerting: new dimensions and filters

We’ve just added new support in our Alert Policies for:

  • IPv6 (for Dimensions as well as Filters)
  • inet_family (for Dimensions as well as Filters – this is to select IPv4 vs IPv6)