
Getting started with Gemini and CircleCI

AI coding assistants like Gemini are changing how developers write code. They can generate entire functions, debug tricky issues, and help you move faster than ever before. But with that speed comes a new challenge: how do you make sure AI-generated code actually works? AI assistants are powerful, but they’re not perfect. They can introduce subtle bugs, miss edge cases, or generate code that breaks existing functionality. That’s where CI (continuous integration) comes in.
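As a rough illustration (not taken from the article), a minimal CircleCI config that runs a test suite on every push could look like the sketch below; the Python image and `pytest` command are assumptions about the project, not specifics from the post:

```yaml
# .circleci/config.yml — minimal pipeline that gates AI-generated code
# behind an automated test run on every commit.
version: 2.1
jobs:
  test:
    docker:
      - image: cimg/python:3.12   # assumed language/runtime
    steps:
      - checkout
      - run: pip install -r requirements.txt
      - run: pytest               # fails the pipeline if any test breaks
workflows:
  build-and-test:
    jobs:
      - test
```

With a config like this in place, any function an assistant generates still has to pass the same checks as hand-written code before it merges.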

AI Assistant vs Skylar Advisor

What happens when AI understands your entire environment? With Skylar Advisor, you move beyond prompts and responses and get prioritized guidance based on real operational impact. Skylar Advisor identifies what matters most, explains why it matters, and provides clear next steps so even junior IT professionals can operate with confidence.

Getting started with Claude Code and CircleCI

AI-powered coding tools are changing how developers work. Tools like Claude Code can write functions, refactor code, and build features through natural conversation, often faster than you could type them yourself. But speed creates its own risks. AI-generated code can contain subtle bugs, reference packages that don’t exist, or misuse APIs in ways that only surface at runtime. That’s where continuous integration comes in: CI is a safety net that lets you move fast with confidence.

From Chef to Chief Architect: Navigating the Intersection of AI and Data Security | Harness Blog

In the world of enterprise software, the transition from traditional DevOps to modern AI-driven delivery is less like a flip of a switch and more like a high-stakes kitchen. As Devan Shah, Chief Architect at IBM, puts it: the ingredients have changed from food to code, but the need for a precise, governed process remains the same.

Evaluating our AI Guard application to improve quality and control cost

This article is part of our series on how Datadog’s engineering teams use LLM Observability to build, monitor, and improve AI-powered systems. Organizations are building AI agents that help users automate work, analyze data, and interact with complex systems through natural language. As these agents become more capable, they also become more complex and exposed to risks such as prompt injection, data leaks, and unsafe code execution.

When AI Writes the Code, Who Keeps Production Running?

The production environment has become a minefield of code nobody really understands. Here’s what’s happening: Development teams are using Claude Code, Cursor, and GitHub Copilot to ship features at 10x their previous velocity. Product managers are ecstatic. Business stakeholders are thrilled. And somewhere in a war room at 2:17 AM, an SRE is staring at a stack trace for code that was AI-generated three weeks ago, trying to figure out why the payment service just fell over.

The limits of MCP and how Olly surpasses them

Model Context Protocol (MCP) servers act as adapter layers between clients and AI-based workloads. Installing an MCP server into an IDE such as Cursor brings a wealth of information directly into the developer’s primary tool, minimizing context switching and, especially in the world of observability, bringing telemetry closer to the code. MCP is not without its limits, though. They may seem trivial at first, but over time the inherent limitations of a basic MCP implementation become apparent.
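For context, registering an MCP server in Cursor is typically done in a `.cursor/mcp.json` file; the server name and package in this sketch are hypothetical placeholders, not a real observability integration:

```json
{
  "mcpServers": {
    "observability": {
      "command": "npx",
      "args": ["-y", "@example/observability-mcp-server"]
    }
  }
}
```

Once registered, the IDE launches the server as a subprocess and exposes its tools to the assistant, which is what lets telemetry surface next to the code without leaving the editor.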

A 4-Month Bug Fixed in <10 Minutes with Olly

In today’s highly interconnected systems, the subtle relationships between services are rarely obvious. Modern, complex architectures generate telemetry that functions less as “flashing signs” and more as faint “breadcrumbs” to be followed across a vast network of signals. In 2025, about two-thirds of outages involved third-party systems like cloud platforms and APIs.

Get Kafka-Nated S2E2: Viktor Kessler on Apache Iceberg, OSS, and Community

In this episode, we sit down with Viktor Kessler, co-founder of Vakamo, major contributor to Lakekeeper, and organiser of Apache Iceberg Meetup Europe, to explore the evolving world of Apache Iceberg. From architectural deep dives to open-source governance, Viktor shares insights from building an Iceberg REST catalog in Rust, launching a company around open data governance, and growing the European Iceberg community from Dublin to Vilnius.