Operations | Monitoring | ITSM | DevOps | Cloud

#055 - From Enterprise Java to Kubernetes and AI-Driven Infrastructure with Dan Hicks (Boomi)

Dan breaks down the fundamental similarities and stark differences between application development and platform engineering. He shares the unexpected hurdles he faced during his transition, from complex networking and CoreDNS latency to the harsh realities exposed by chaos testing in cloud environments.

Introducing the StatusGator MCP Server

Your AI agents can now monitor, triage, and respond to cloud outages autonomously. The way enterprises manage cloud infrastructure incidents is changing. AI agents are no longer just chatbots answering questions — they’re becoming first responders in your incident management pipeline. Today, we’re launching the StatusGator MCP Server, giving AI agents direct, structured access to the full power of StatusGator’s cloud status monitoring platform.

KubeCon Europe 2026: AI Is Shipping Code Faster Than Orgs Can Govern It

KubeCon + CloudNativeCon Europe 2026 recently brought the cloud native community to Amsterdam. We were there all week bouncing between the booth, a Braintrust event with engineering leaders from across the community, and more hallway conversations than we can count. One talking point dominated the week: AI is shipping code faster than most engineering orgs can govern it. It also became clear that we weren't the only ones talking about this challenge.

Harness Ships Five Capabilities to Power Confident Releases at AI Speed | Harness Blog

The pace of AI-assisted development has outgrown how most teams actually ship. Harness is closing that gap. Engineering teams are generating more shippable code than ever before — and today, Harness is shipping five new capabilities designed to help teams release confidently. AI coding assistants lowered the barrier to writing software, and the volume of changes moving through delivery pipelines has grown accordingly. But the release process itself hasn't kept pace.

Pull Request Velocity as a Proxy for AI Usage for Software Development

While AI usage has been growing steadily for the last several years, LLMs noticeably improved around the end of 2025. Specifically, they became more viable for software development. We are seeing the results: feature and product delivery has picked up. One way to visualize this is to look at the number of pull requests across your organization or software development teams. This chart shows the number of GitHub pull requests created by a team. Can you spot when AI usage increased?
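If you want to reproduce a chart like this yourself, one approach is to pull PR creation dates (e.g. from the GitHub API or a CSV export, not shown here) and bucket them by ISO week. A minimal sketch; the function name and sample dates are illustrative, not from the original chart:

```python
from collections import Counter
from datetime import date

def weekly_pr_counts(created_dates):
    """Group PR creation dates into (year, ISO week) buckets for charting."""
    counts = Counter(d.isocalendar()[:2] for d in created_dates)
    return dict(sorted(counts.items()))

# Hypothetical sample: three PRs in ISO week 2 of 2026, one in week 3
prs = [date(2026, 1, 5), date(2026, 1, 7), date(2026, 1, 9), date(2026, 1, 14)]
print(weekly_pr_counts(prs))  # {(2026, 2): 3, (2026, 3): 1}
```

Plotting the weekly counts over a year or two of history is usually enough to make an adoption inflection point visible.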

Accelerate Your OpenTelemetry Migrations With Honeycomb's Agent Skills

Since releasing our hosted MCP server last year, we've been thrilled to see customers not just adopt it but build Honeycomb deeply into their agentic development and observability workflows. Users have embraced it, leveraging Honeycomb to stay in conversation with their code and understand how it runs in production.

Mastering AI Prompts: How to Get the Best Out of SQL Prompt AI | The Tony and Tonie show Ep41

How to get the most value from SQL Prompt AI in day-to-day work, whether you're writing new queries or improving existing code. A little prompt-writing knowledge goes a long way with SQL Prompt AI. Tony and Tonie discuss how to build reusable prompts that give the tool the context it needs to return useful results the first time.