Operations | Monitoring | ITSM | DevOps | Cloud

Agentic AI Explained: How Autonomous Systems Are Changing Cybersecurity

Discover how agentic AI enhances cybersecurity by augmenting security teams' existing tools and workflows. See how Retrieval-Augmented Generation (RAG) enables faster threat detection, streamlined investigations, and smarter incident response, empowering SOC teams to work more effectively. Join cybersecurity experts Lisa Jones-Huff and Mohammed Anas Khatri to learn how agentic AI can help your security team multiply its impact.

Testing AI Code in CI/CD Made Simple for Developers

Generative AI can produce code faster than humans, and developers feel more productive with it integrated into their IDEs. But that productivity is only real if the code is backed by solid, automated CI/CD tests. Without appropriate testing, AI-generated code can introduce production issues you have never seen before. According to the State of Software Delivery 2025 report, 67% of developers spend more time debugging and resolving security vulnerabilities in AI-generated code.
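The kind of automated gate the article argues for can be as simple as unit tests pinned to expected behavior, run on every pipeline build. A minimal sketch, where `slugify` stands in for a hypothetical AI-generated helper:

```python
# Hypothetical AI-generated helper plus the tests that gate it in CI.
# Edge cases are where unreviewed AI code tends to break, so they get
# their own test.
import re

def slugify(title: str) -> str:
    """Turn a title into a URL slug (AI-generated stand-in)."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_edge_cases():
    assert slugify("") == ""
    assert slugify("---") == ""
```

In a CI/CD pipeline these tests would run automatically (e.g. via `pytest` in a test job) so a regression in generated code fails the build instead of reaching production.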

Building and deploying a Python MCP server with FastMCP and CircleCI

Extending Large Language Models (LLMs) with custom tools has become increasingly valuable in today's AI landscape. Model Context Protocol (MCP) servers provide a standardized way to connect external tools and resources to LLMs, extending their capabilities beyond basic text generation. While thousands of pre-built MCP servers exist, building your own lets you address specific workflows and implement use cases that off-the-shelf solutions cannot handle.
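Conceptually, an MCP server is a registry of named tools that an LLM client can call with structured arguments. The sketch below illustrates that idea only, in plain Python; it is not the FastMCP API, which wraps this pattern in decorators plus the actual MCP wire protocol (JSON-RPC over stdio or HTTP):

```python
# Conceptual sketch of what an MCP server does under the hood:
# register tools by name, then dispatch incoming tool calls.
import json

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def tool(self, fn):
        """Register a function as a callable tool (decorator-style)."""
        self._tools[fn.__name__] = fn
        return fn

    def call(self, request_json: str) -> str:
        """Dispatch a tool call: {"name": ..., "args": {...}} -> result JSON."""
        req = json.loads(request_json)
        result = self._tools[req["name"]](**req.get("args", {}))
        return json.dumps({"result": result})

registry = ToolRegistry()

@registry.tool
def word_count(text: str) -> int:
    """A custom tool an LLM could invoke through the server."""
    return len(text.split())
```

A client request such as `{"name": "word_count", "args": {"text": "hello world"}}` would be routed to the registered function and return `{"result": 2}`.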

Enhanced multi-modal Ask Zia, AI-powered workflow builder, guided tours, and more

We've rolled out our biggest GenAI release for the cloud version of ServiceDesk Plus. Explore the enhanced Ask Zia that now sports an LLM-style interface, and the new GenAI-powered workflow builder that can move your workflows from concept to execution in minutes. Plus, learn how to connect the on-premises version of ServiceDesk Plus with over 40 telephony providers that help your support teams place and receive calls directly within the application.

Ep 13: Everyone is winging it: Hope for an AI future

In this episode, we welcome Naomi Buckwalter, Sr. Director of Product Security at Contrast Security, to chat about the evolving landscape of security threats and the dual role of AI in both facilitating and combating these challenges. We explore the increasing sophistication of modern phishing attacks and discuss how security teams must rapidly adapt to stay ahead of emerging threats. We debate the transformative impact of AI on the future job market, where personal qualities and soft skills may increasingly take precedence over traditional technical competencies.

Versatile Automation: Applications of AI Across Different Sectors

From small and medium-sized enterprises to large corporations, virtually every industry is asking its staff to work faster and do more with less: keeping up with an ever-increasing workload, accelerated timelines, repetitive manual tasks, complex systems, and data-intensive workloads in the digital age. The result? Often higher profits, but with greater risk of stress, frustration, and even lower-quality customer service.

How Redgate's Foundry is Shaping the Future of Database Innovation with AI

Learn how Redgate's Foundry drives AI innovation in database management, from intelligent monitoring and ML-based automation to smarter SQL optimization. In today's rapidly evolving database landscape, innovation is essential. With the rise of artificial intelligence (AI), machine learning (ML), and automation, database management is undergoing one of its most significant transformations in decades.

LLM Observability Explained: Prevent Hallucinations, Manage Drift, Control Costs

Large Language Models (LLMs) are transforming how businesses interact with users, automate workflows, and deliver insights in real time. But as powerful as these models are, running them at scale comes with unique challenges, from hallucinations and latency spikes to cost overruns and user trust issues.
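In practice, LLM observability starts with instrumenting every model call to record latency, token usage, and estimated spend. A minimal sketch, where `fake_llm` is a stand-in for a real model client and the per-token price and word-count token proxy are illustrative assumptions:

```python
# Wrap each LLM call to record latency, a crude token count, and
# estimated cost. PRICE_PER_1K_TOKENS is an illustrative number,
# not a real provider price.
import time

PRICE_PER_1K_TOKENS = 0.002
metrics = []

def observed_call(llm, prompt: str) -> str:
    start = time.perf_counter()
    reply = llm(prompt)
    latency = time.perf_counter() - start
    tokens = len(prompt.split()) + len(reply.split())  # word count as token proxy
    metrics.append({
        "latency_s": round(latency, 4),
        "tokens": tokens,
        "est_cost_usd": tokens / 1000 * PRICE_PER_1K_TOKENS,
    })
    return reply

def fake_llm(prompt: str) -> str:
    return "stub answer about " + prompt

answer = observed_call(fake_llm, "monitor drift")
```

Shipping these records to a monitoring backend gives you the dashboards and alerts needed to catch latency spikes and cost overruns before users do.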

Automated RAG pipeline evaluation and benchmarking with RAGAS

Retrieval-Augmented Generation (RAG) pipelines have become an integral part of how Large Language Models (LLMs) access information beyond their training cutoff. These pipelines enable LLMs to deliver current, accurate, and grounded responses. By fetching relevant external documents, RAG mitigates common LLM challenges like factual inaccuracies and hallucinations. However, this methodology introduces a new complexity: evaluating RAG pipeline performance is particularly challenging.
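To make the evaluation problem concrete, here is a simplified sketch of one metric family an evaluator like RAGAS computes: context precision, the fraction of retrieved chunks actually relevant to a reference answer. Crude word overlap stands in for the LLM-as-judge relevance check a real evaluator would use:

```python
# Simplified "context precision" metric: how many retrieved chunks are
# relevant to the reference answer? Word overlap is an illustrative
# stand-in for an LLM-based relevance judgment.
def is_relevant(chunk: str, reference: str, threshold: float = 0.3) -> bool:
    chunk_words = set(chunk.lower().split())
    ref_words = set(reference.lower().split())
    if not chunk_words:
        return False
    return len(chunk_words & ref_words) / len(chunk_words) >= threshold

def context_precision(retrieved: list[str], reference: str) -> float:
    relevant = sum(is_relevant(c, reference) for c in retrieved)
    return relevant / len(retrieved) if retrieved else 0.0

retrieved = [
    "paris is the capital of france",   # relevant
    "bananas are rich in potassium",    # irrelevant
]
score = context_precision(retrieved, "the capital of france is paris")
```

Here one of two chunks is relevant, so the score is 0.5; running such metrics automatically over a benchmark set is what turns RAG evaluation from guesswork into a repeatable pipeline step.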

Bridging the Language Gap: AI Tools That Humanize Technical Content for Global Teams

Working on a global team can be exciting. You get to collaborate with people from different cultures, time zones, and perspectives. But it also comes with challenges, especially when technical content doesn't translate well across languages. A single unclear instruction in a manual or a misinterpreted email can lead to delays, extra meetings, or even costly mistakes.