
The latest News and Information on DevOps, CI/CD, Automation and related technologies.

7 best AI deployment platforms for production Kubernetes workloads in 2026

Training a model in a notebook is easy. What breaks teams is the step after: serving it reliably without haemorrhaging cloud budget or burying your SREs in YAML. The common trap is picking a platform that handles the model but not the surrounding stack. An AI deployment platform should orchestrate the full application graph (inference endpoints, vector databases, caching layers, and frontends) inside a single VPC, with GPU autoscaling that doesn't require a dedicated platform engineer to babysit it.
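To make the GPU-autoscaling point concrete, here is a minimal Kubernetes sketch of one common pattern: a HorizontalPodAutoscaler that scales an inference Deployment on average GPU utilization. It assumes NVIDIA's DCGM exporter and a Prometheus custom-metrics adapter are already installed in the cluster, and the Deployment name `llm-inference` is hypothetical.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: llm-inference
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: llm-inference        # hypothetical inference Deployment
  minReplicas: 1
  maxReplicas: 8
  metrics:
    - type: Pods
      pods:
        metric:
          name: DCGM_FI_DEV_GPU_UTIL   # surfaced via DCGM exporter + metrics adapter
        target:
          type: AverageValue
          averageValue: "70"           # target ~70% average GPU utilization
```

This is the kind of knob a platform should manage for you; hand-tuning it per service is exactly the babysitting the article warns about.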

#056 - Cloud Contradictions and Cautionary Tales with Corey Quinn (The Duckbill Group)

In this episode of the Kubernetes for Humans podcast, Itiel sits down with the internet's favorite cloud contrarian, Corey Quinn of the Duckbill Group. Corey shares his unconventional career path as a "cautionary tale," explaining why his knack for fixing horrifying AWS bills makes him a terrible employee, and why he absolutely refuses to touch Kubernetes in production.

Context Engineering: How to Manage AI Context at Scale

Context engineering is the practice of managing the information an AI model sees (documents, tool outputs, memory, and structured metadata about the systems it reasons over) so it can make accurate decisions inside a real engineering organization. Most engineering teams have access to the same AI coding agents: Claude, GPT, Gemini, the major variants everyone is shipping. The model is no longer the differentiator.
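The core mechanic of context engineering can be sketched in a few lines: rank candidate pieces of context (documents, tool outputs, metadata) by relevance and pack the best of them into a fixed token budget before the model call. This is a minimal illustration with a naive word-count tokenizer; the snippet sources and scores are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    source: str       # e.g. "runbook", "tool_output", "wiki"
    text: str
    relevance: float  # retrieval score; higher is better

def build_context(snippets, budget_tokens, tokens=lambda s: len(s.split())):
    """Greedily pack the highest-relevance snippets that fit the budget."""
    packed, used = [], 0
    for snip in sorted(snippets, key=lambda s: s.relevance, reverse=True):
        cost = tokens(snip.text)
        if used + cost <= budget_tokens:
            packed.append(f"[{snip.source}] {snip.text}")
            used += cost
    return "\n".join(packed)

snippets = [
    Snippet("runbook", "Restart the ingest worker after schema changes.", 0.9),
    Snippet("tool_output", "Deploy pipeline failed at step 'migrate-db'.", 0.8),
    Snippet("wiki", "The 2019 architecture overview is archived here.", 0.2),
]
# The low-relevance wiki snippet is dropped once the budget is exhausted.
print(build_context(snippets, budget_tokens=15))
```

Real systems layer retrieval, summarization, and structured metadata on top of this, but the budget-and-rank loop is the part that determines what the model actually sees.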

What happens when you delete everything? Three minutes, or thirty hours.

Last year, at the annual conference for an open source framework you've definitely heard of, I walked up to the founder in a room outside the main stage. He was hunched over his laptop, frantic. We've known each other for a few years. "What's going on? Is everything okay?" He looked up with that specific shade of pale a person only gets when they realize they've made a big mistake.

GitKraken Desktop in 6 Minutes: Open a Repo, Run an Agent, Ship the Change

The fastest way to get up and running in GitKraken Desktop. In this tutorial, you'll open a repo, start an AI coding agent in its own worktree, review the agent's changes against your own work, and ship a pull request without leaving the app. Help Center: help.gitkraken.com.

Share artifacts between parent and child pipelines | Bitbucket Blitz | Atlassian

Bitbucket Pipelines lets you build reusable pipelines and share them across repositories. These reusable, shared pipelines need a way to share artifacts; otherwise, teams have to repeat expensive steps such as downloading and installing dependencies and building the application code. You can now specify an artifacts section for child pipelines, with upload and download keywords. Artifacts listed under upload will be moved from the parent pipeline into the child pipeline, where they can be used and potentially modified.
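Based only on the description above, a child pipeline's artifacts section might look roughly like the sketch below; the exact keyword placement and glob syntax should be checked against the Bitbucket Pipelines documentation, and the step name and paths are invented.

```yaml
# Hypothetical child-pipeline definition using the new artifacts keywords.
- step:
    name: Test with prebuilt dependencies
    artifacts:
      download:
        - node_modules/**   # received from the parent pipeline
      upload:
        - dist/**           # handed back after this child builds it
    script:
      - npm test
```

The point of the feature is that the expensive install/build steps run once in the parent rather than once per child.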

Harness Cursor Plugin Demo: AI for Software Delivery from Your IDE

Stop context-switching between your IDE and your CI/CD dashboards. In this video, we demonstrate the new Harness Cursor Plugin, a native integration that brings the full power of the Harness AI Software Delivery Platform directly into Cursor. Using the Cursor Agent window and the new Harness Model Context Protocol (MCP) server, you can now manage your entire software delivery lifecycle through natural language. From triggering pipelines to governing deployments, this plugin ensures you stay in your flow while maintaining enterprise-grade security and control.