
Harness AI January 2026 Updates: Human-Aware SRE and Smarter API and Application Security | Harness Blog

Harness AI is starting 2026 by doubling down on what it does best: applying intelligent automation to the hardest "after code" problems: incidents, security, and test setup, with three new AI-powered capabilities. These updates continue the same theme as December: move faster, keep control, and let AI handle more of the tedious, error-prone work in your delivery and security pipelines.

Webinar Recap: What It Really Takes To Make AI Profitable

Right now, 48% of organizations say they're being asked to measure or report on AI-related costs. The problem is that they're still figuring out how to do it. That was a very telling stat from a recent CloudZero webinar on AI and profitability, and it speaks loudly to the reality that many organizations are still struggling to get a grasp on AI spend, which our data shows has been rising sharply as a share of total spend in recent months.

IT as the Proving Ground for AI: Driving Enterprise Innovation

The Enterprise AI Survey, conducted by Digitate in collaboration with Sapio Research, revealed that IT operations have emerged as the primary proving ground for artificial intelligence in the enterprise. With 78% of organizations already deploying AI in IT, 65% identifying ITOps as the biggest AI beneficiary, and adoption outpacing every other function, IT leads enterprise AI maturity.

How Qovery uses Qovery to speed up its AI project

Discover how Qovery leverages its own platform to accelerate AI development. Learn how an AI specialist deployed a complex stack, including LLMs, QDrant, and KEDA, in just one day without needing deep DevOps or Kubernetes expertise. See how the "dogfooding" approach fuels innovation for our DevOps Copilot.

Datadog acquires Propolis

Generative AI enables teams to write and ship code faster than ever. But current methods for testing and quality assurance have not evolved to match the new pace and scale of deployments. Manual and deterministic testing paths quickly become obsolete when new features are released, and they fundamentally can't test AI outputs, leaving a massive untested surface area. To keep up, teams need new testing methods that can define the goals users have and verify that outcomes match them.

Why Context, Not Prompts, Determines AI Agent Performance

Prompt engineering improves single responses, but agent performance is determined by how execution context is captured, replayed, and constrained over time. For the past few years, enterprises have obsessed over prompts, with entire roles emerging around their design and an ecosystem of tooling and templates following close behind. This focus delivered early gains because it allowed teams to rapidly improve outputs without modifying the surrounding system. Over time, those gains flattened.

The Hidden Cost of 30% AI-Generated Code #speedscale #aicoding #devops #technews #ai

AI now writes 30% of Big Tech’s code, but the resulting surge in defects is crashing platforms like AWS and GitHub. Manual testing can no longer keep up with this velocity; it's time to deploy AI Quality Agents to save our systems. Is AI speed worth the decline in code quality, or are we headed for a breaking point? Let me know if you’ve noticed more bugs in your workflow lately. Video collab with @ScottMooreConsultingLLC.

Scaling AI Reliability: Real world lessons from Mistral AI

How does one of the world's leading AI companies keep its infrastructure reliable while shipping new models constantly? In this webinar, Devon Mizelle, Senior SRE at Mistral AI, shares the real story. Devon walks through how Mistral built an automated system that generates synthetic checks for every model the moment it goes live—no manual configuration, no forgotten monitors, no inconsistent alerting. Using monitoring as code, his team eliminated the toil of maintaining hundreds of checks across a rapidly evolving model ecosystem.
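The core idea behind the monitoring-as-code approach described above is that check definitions are generated programmatically from the source of truth (the model registry), so a monitor exists the moment a model goes live. Here is a minimal sketch of that pattern; all names, fields, and thresholds are illustrative assumptions, not Mistral's actual system:

```python
# Hypothetical "monitoring as code" generator: for every model in a
# registry, emit a synthetic-check definition so no model ships without
# a monitor. Field names and defaults are illustrative assumptions.

def generate_checks(models):
    """Build one synthetic check config per deployed model."""
    checks = []
    for model in models:
        checks.append({
            "name": f"synthetic-{model['id']}",
            # Hypothetical health endpoint per model.
            "endpoint": f"/v1/models/{model['id']}/health",
            "interval_seconds": 60,
            "assertions": [
                {"type": "status_code", "expected": 200},
                # Per-model latency SLO, with an assumed default.
                {"type": "latency_ms", "max": model.get("latency_slo_ms", 2000)},
            ],
        })
    return checks

if __name__ == "__main__":
    registry = [{"id": "model-a"}, {"id": "model-b", "latency_slo_ms": 500}]
    for check in generate_checks(registry):
        print(check["name"], check["endpoint"])
```

Because the checks are derived from the registry rather than hand-written, adding a model to the registry automatically adds its monitor, which is how this pattern eliminates forgotten or inconsistent checks.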

How Cisco Revolutionized Platform Engineering with Komodor's Agentic AI

In the world of cloud-native infrastructure, complexity is the silent killer of innovation. For Cisco Outshift, the company’s incubation engine, managing a sprawling environment of AWS EKS clusters and edge-based MicroK8s workloads created a classic bottleneck: the Platform Engineering team was drowning in toil. Facing SRE burnout and the limits of human scaling, Cisco embarked on an ambitious journey to evolve its internal operations from standard DevOps to Agentic AI.