The first two demos are the most terrifying.
After arriving at the booth, unpacking my stuff, and hunting down my first cup of coffee, I spent a lot of time wondering what questions I’d be asked.
Side note: In my (not so) humble opinion, an organizer’s respect for sponsors is directly proportional to the number of coffee stations available to attendees before the expo floor opens.
And when the floor opens and attendees come flooding in, searching for answers and swag in equal measure, it takes at least a couple of demos before I find my groove: time enough to understand the major questions and themes on attendees’ minds, and to overcome my imposter syndrome.
After that, it’s just a matter of listening, clarifying, and then showing off Kentik so others come away with the same sense I have of how awesome a solution it is.
My initial event demo jitters aside, though, what else did I find notable about this year’s AWS re:Invent?
The biggest AWS re:Veal
As my colleague Justin commented over on his blog:
"AWS showcased significant updates and improvements in its AI and ML services. Advancements in natural language processing, neural nets for machine learning, and more intelligent tools for developers were the key highlights."
Of course, not everyone has been impressed with what AWS had to show or how Amazon Q is comporting itself. While I’m personally inclined to give products the benefit of the doubt when they’re in their initial release stages, I also understand that the levels of both hype and forced integration Amazon has used to push Q on the public leave little room for that kind of generosity.
re:Flecting on what’s re:Quired
Despite the siren song of AI in the keynotes, visitors to the Kentik booth were far more focused on solving real-world problems. These are the issues that have – with minor variations – plagued IT practitioners for years, if not decades: troubleshooting and validating the performance and availability of their applications, services, and infrastructure.
To be sure, there are differences. What qualifies as “infrastructure” today is vastly more complex and nuanced than in the past. A far more varied range of elements must work in concert – from “traditional” network infrastructure to cloud-based platforms, containers and orchestration, microservices and external APIs, and more.
But even so, we’re still often asking the same questions at the end of the day. And honestly, that’s OK.
As my buddy Phil Gervasi wrote in his own review:
"...it's important we keep that discussion in the context of using generative AI to solve problems rather than awkwardly look for problems to solve that could have been solved more simply."
re:View, re:New, re:Lax, and re:Cover
It’s always important to put a conference into its proper context: Spread across six separate casinos (Caesars, Encore, Mandalay Bay, MGM Grand, The Venetian, and Wynn), over 65,000 attendees attempted to cram in over 2,000 technical sessions and keynotes while also visiting as many of the 400+ vendors on the expo floor as they could.
It’s an impossible ask, even in a city like Las Vegas, where time has no meaning. Even for experienced conference attendees, that level of frenetic energy is hard to ignore.
This is why – even a couple of weeks later – I’m still processing the experiences and ideas I collected. Nevertheless, I’m grateful to the people who stopped in and said hello, and I’m already looking forward to next year’s event.
First demo jitters and all.