
Check the status of your CircleCI pipeline without leaving your IDE

Waiting on CI is one thing. Keeping tabs on it without breaking focus is another. Most developers track build progress by opening the CircleCI UI, navigating to the project, and digging through pipelines to find the latest run for a specific branch. It’s not hard, but it pulls you out of flow, especially when you’re doing it multiple times a day across projects.
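The browser round-trip described above can also be scripted against CircleCI’s v2 API. Here is a minimal sketch; the project slug format (e.g. `gh/org/repo`) and a personal API token are values you would supply yourself:

```python
import json
import urllib.request

API = "https://circleci.com/api/v2"

def latest_pipeline_for_branch(project_slug: str, branch: str, token: str) -> dict:
    """Fetch the most recent pipeline for a branch via the CircleCI v2 API."""
    url = f"{API}/project/{project_slug}/pipeline?branch={branch}"
    req = urllib.request.Request(url, headers={"Circle-Token": token})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The API returns pipelines newest-first, so the first item is the latest run.
    return data["items"][0]

def summarize(pipeline: dict) -> str:
    """Reduce a pipeline record to a one-line status string."""
    return f"#{pipeline['number']} {pipeline['state']} ({pipeline['id']})"
```

Wiring this into an editor task or status-bar widget gives you the same answer the CircleCI UI would, without leaving the IDE.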

End-to-end testing and deployment of a multi-agent AI system with Docker, LangGraph, and CircleCI

Multi-agent AI systems are transforming how intelligent applications are built. By orchestrating multiple specialized agents that collaborate to solve complex tasks, these systems enable more dynamic and efficient workflows. However, deploying such a system reliably and at scale requires a structured approach to testing, packaging, and automation.

MCP server: Automated test coverage

Learn about a new CircleCI MCP server feature that brings automated test coverage to AI-enabled applications. Using a simple React app, the MCP server scans for AI prompts, recommends tests, and writes them directly into your codebase, so you can test and ship with confidence right from your IDE or CI pipeline.

Trigger CircleCI pipelines from your IDE with natural language

Most CircleCI pipelines are configured to trigger automatically on code commits, but not every development scenario fits that model. Sometimes you need to trigger a build manually—to test pipeline changes, retry a flaky test, or run CI on a colleague’s branch—without the friction of pushing empty commits or navigating to the UI to manually trigger a build.
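For comparison, the manual trigger that the UI performs maps onto a single POST against CircleCI’s v2 API. The sketch below only builds the request object; the slug and token are placeholders you would supply:

```python
import json
import urllib.request

def trigger_pipeline(project_slug: str, branch: str, token: str) -> urllib.request.Request:
    """Build a POST request that triggers a pipeline on a branch
    via the CircleCI v2 API -- no empty commit required."""
    payload = json.dumps({"branch": branch}).encode()
    return urllib.request.Request(
        f"https://circleci.com/api/v2/project/{project_slug}/pipeline",
        data=payload,
        headers={"Circle-Token": token, "Content-Type": "application/json"},
        method="POST",
    )

# To actually send it:
# urllib.request.urlopen(trigger_pipeline("gh/org/repo", "main", token))
```

A natural-language trigger in the IDE ultimately drives the same endpoint, with the assistant filling in the slug and branch for you.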

Prevent pipeline collisions with serial groups in CircleCI

In a single pipeline, it’s easy to control job order. But in a large engineering org with dozens of pipelines, hundreds of contributors, and countless shared environments and services, that control can start to slip. Pipelines interfere with each other. Deploys overlap. Test environments break. Someone merges code, triggers a build, and gets a failure they can’t reproduce. Unfortunately, this kind of instability is a routine byproduct of scale.

How CircleCI implemented llms.txt for better AI discoverability

At CircleCI, we’re committed to making our platform work seamlessly with the AI-powered tools that developers increasingly rely on. Our journey into AI integration is focused on creating a robust Model Context Protocol (MCP) server that allows AI assistants to access and understand CircleCI data in real-time. This enables developers to debug build failures, analyze test results, find and fix flaky tests, and improve pipelines using natural language within their favorite AI tools.

How to set up chaos engineering in your CI/CD pipeline with CircleCI and Chaos Toolkit

Distributed architectures are increasingly common in modern software systems because they bring scalability and flexibility and keep applications resilient under real-world conditions. Unfortunately, this distribution also introduces new points of failure. Traditional testing methods are no longer enough: they focus only on whether a system works, not on whether it keeps working under stress or failure. That is where chaos engineering comes in.
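Chaos Toolkit expresses each experiment as a declarative file: a steady-state hypothesis that must hold before and after, and a method that injects the failure. The experiment below is an illustrative sketch (the health URL and container name are made up), runnable with `chaos run experiment.json`:

```json
{
  "title": "Service stays available when a dependency is killed",
  "description": "Verify the API keeps answering while a dependency container is down.",
  "steady-state-hypothesis": {
    "title": "API responds with 200",
    "probes": [
      {
        "type": "probe",
        "name": "api-returns-200",
        "tolerance": 200,
        "provider": {
          "type": "http",
          "url": "http://localhost:8080/health"
        }
      }
    ]
  },
  "method": [
    {
      "type": "action",
      "name": "kill-dependency-container",
      "provider": {
        "type": "process",
        "path": "docker",
        "arguments": ["kill", "my-dependency"]
      }
    }
  ]
}
```

Because the experiment is just a file plus a CLI invocation, it drops naturally into a CircleCI job step.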

Explore CircleCI projects from your IDE with AI assistance

CircleCI gives you deep visibility into your builds, workflows, and tests, but jumping between browser tabs, copying project URLs, or re-authenticating across tools can slow things down. What if your IDE could just show you the projects you’re working on and let you act on them directly? This post shows how to use the list_followed_projects tool in the CircleCI MCP server to browse and interact with your CircleCI projects by chatting with an AI assistant inside your IDE.
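Under the hood, the assistant invokes that tool over MCP’s JSON-RPC transport. A request has roughly this shape (the empty arguments object here is illustrative, not the server’s documented schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_followed_projects",
    "arguments": {}
  }
}
```

The assistant handles this exchange for you; you only see the resulting project list rendered in the chat.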

Fix flaky CI tests by chatting with your IDE

Flaky tests are a serious productivity problem. When tests sometimes pass and sometimes fail without code changes, they undermine trust in your CI pipeline and drain time from engineering teams. Debugging them often turns into a slow process of chasing logs, rerunning builds, and trying to guess what went wrong. This post shows how to quickly detect and fix flaky tests directly in your IDE by chatting with an AI assistant.

Building a real-time AI autocomplete app with Next.js and Vercel AI SDK

Over the past ten years, Azure has become one of the most prominent cloud computing platforms available, rivaled only by AWS. Part of Microsoft’s suite of Azure services, Azure web apps provide a packaged environment for hosting web applications built in many languages. Because this environment is fully managed by Azure, developers have limited options for control.

Streamline your LangChain deployments with LangServe on GCP

Deploying Large Language Model (LLM) applications can transform ideas into valuable services, but deployment challenges can slow down even experienced developers. In this tutorial, you will build and deploy a LangChain application on Google Cloud Run: a text summarization tool powered by Google’s Gemini model, exposed as an API with LangServe, with testing and deployment automated through CircleCI.

Use AI to resolve CI test failures with zero guesswork

Test failures are inevitable. A broken condition, a missed edge case, or a last-minute refactor can trip up even the most careful changes. That’s part of shipping software. What shouldn’t be part of the job is spending half your afternoon parsing logs and chasing down the root cause. Now, there’s a faster way. This guide shows how to use the CircleCI MCP server to identify, understand, and resolve failing tests in a CI/CD build without ever leaving your editor.