
MCP = Observability + Code, a Real-life Example

Our bot is hitting an error, and we can see it in the distributed trace. Here's what happened when we noticed it: Austin fired up Claude Code (hooked up to Honeycomb via its MCP tool) and had it find the error, fix it, deploy, and verify that the fix worked. It got a little overconfident at first, but the ending is happy. In real life this took 22 minutes; the video speeds up the AI agent interactions and cuts out the waiting. The video features Austin Parker, Jessica Kerr, and Ken Rimple.

Why Cribl Copilot Editor is Built for the Human, First and Foremost

I’m genuinely excited about what we're rolling out with Copilot Editor, an update to our AI that’s packed with new capabilities designed to help you automate pipeline development. You can read about those capabilities here. I wanted to take a moment to share our thinking on a core principle that guides how we build, especially in the impactful, and sometimes daunting, world of generative AI.

From RPA to Agentic AI: Understanding the Shifting Landscape of Enterprise Automation

Over the past decade, organizations have embraced automation in waves – starting with basic task scripts and Robotic Process Automation (RPA), then moving to hyperautomation, and now exploring “agentic AI” as the next frontier. Each step in this evolution has expanded the scope of what can be automated and revealed new challenges. This blog offers a detailed comparison of RPA, hyperautomation, and agentic AI, covering their key differences, strategic advantages, and potential drawbacks.

Hyperparameter tuning for LLMs using CircleCI matrix workflows

Hyperparameter tuning is a critical step in optimizing large language models (LLMs). Parameters such as learning rate, batch size, weight decay, and number of training epochs can significantly affect convergence behavior and final model performance. While approaches like grid search and random search are widely used, executing them manually is inefficient, especially when each training run is compute-intensive.
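A CircleCI matrix workflow can fan out a single training job across every combination of hyperparameter values, running the grid in parallel instead of by hand. Here's a minimal sketch of such a config; it assumes a hypothetical `train.py` script accepting `--lr` and `--batch-size` flags, and the job and parameter names are illustrative:

```yaml
version: 2.1

jobs:
  train:
    parameters:
      learning-rate:
        type: string
      batch-size:
        type: string
    docker:
      - image: cimg/python:3.11
    steps:
      - checkout
      - run:
          name: Train with one hyperparameter combination
          command: >
            python train.py
            --lr << parameters.learning-rate >>
            --batch-size << parameters.batch-size >>

workflows:
  hyperparameter-sweep:
    jobs:
      - train:
          # matrix expands this into one job per combination (3 x 2 = 6 runs)
          matrix:
            parameters:
              learning-rate: ["1e-5", "3e-5", "5e-5"]
              batch-size: ["16", "32"]
```

Each expanded job gets a distinct name (e.g. `train-1e-5-16`), so failed combinations are easy to spot and rerun individually.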

AI in Action with Kunal Kushwaha: A Two-Demo Showcase. See What's Possible!

Join Kunal Kushwaha, Field CTO at Civo, for two demos using relaxAI. In the first demo, we'll show you how to deploy your own Large Language Model (LLM) inference engine using Ollama, giving you full control over your AI model. In the second demo, we'll demonstrate how to build custom AI integrations using the relaxAI API, making it easy to add AI features to your existing applications. Whether you're an AI developer, part of an MLOps team, or just curious about AI, this video is for you.