Operations | Monitoring | ITSM | DevOps | Cloud

AI Test Generation and PR Review in Sentry (Now in Open Beta)

You write code. Open a PR. CI runs. PR merges. Prod’s on fire by 5pm. Maybe you skipped writing some tests. (It's tedious, sometimes unclear, and easy to ignore when you're racing to ship—until something breaks and you realize a test could’ve saved your Friday night.) Maybe the PR review was more of a drive-by from a teammate who barely had time to skim the diff. But reviews and tests matter.

Leading the Way in Accessible Innovation: Voice Input for the AI Platform

What if a broken arm didn't break your workflow? Follow Alice as she faces tough circumstances but still gets the job done. Nursing a broken arm in a sling, she uses built-in accessibility features like Voice Input for Now Assist to set up the ServiceNow AI Platform for a new client—hands-free. This is how we turn accessibility into opportunity—building forward-thinking features that solve real problems and work better for everyone.

How Cursor scaled infrastructure rapidly and reliably using Datadog

At Datadog, we use Cursor to empower our teams to build more quickly. And we know that building and troubleshooting with AI tools like Cursor works best with the right observability data and context. Discover how Cursor rapidly and reliably scaled their infrastructure 100x using Datadog to meet the needs of a fast-growing user base. And learn more about how we're bringing Datadog tools and context to your favorite AI IDEs and agents with our MCP Server and extensions.

The Benefits of Using Juniper's Network Monitoring Tools for IT Operations

More data means more complexity in IT networks, so monitoring them requires the right solution. Many companies struggle without the right tools and lose business opportunities because they can't spot performance issues before they escalate. Network monitoring is thus essential for business success: it helps maintain healthy network performance, saving companies money in the long run.

Generating Playwright Tests With AI: Let's Try the New Playwright MCP Server!

In this video, Stefan (Playwright Ambassador) dives into the integration of AI with the Playwright MCP server to automate end-to-end test generation. Learn about MCP, browser automation and how to combine everything to generate Playwright tests. We'll explore AI capabilities and limits and discuss best practices for generating accurate and reliable Playwright tests. If you're curious about leveraging AI for end-to-end testing with Playwright, this video is for you!

Watch RITA (Resolve IT Agent) fix VPN issues in seconds! #itautomation #ai #agenticai

Say goodbye to ticket chaos. Meet RITA. RITA (Resolve IT Agent) is your intelligent frontline assistant—built to deflect L1 tickets, resolve routine requests instantly, and slash MTTR across the board. In this live demo clip, watch Derek Pascarella, our Global Director of Sales Engineering, show how RITA fixes VPN issues in seconds—no human handoff, no ticket backlog. Ready to fast-track your journey to Zero Ticket IT? This is where it starts.

Insights to keep AI applications reliable

AI has become a massive investment for companies. Engineering teams across industries are integrating AI into their products, whether through homegrown, self-managed models or third-party model integrations. But no matter how much AI shifts the user experience, it's still an application, which means your engineering team still needs to operate it and keep it reliable. At the same time, AI applications add complexity that requires a shift in your approach.

Validating OS-compatibility for locally-run LLMs using Ollama with CI/CD matrix workflows

Large Language Models (LLMs) are becoming increasingly accessible, with growing adoption of open-source models and an expanding ecosystem of tools for running them locally. Compact versions can now run on consumer-grade hardware, so developers are using LLMs on personal devices like Linux workstations, macOS laptops, or even Windows machines. As this trend grows, so does the need to ensure that your LLM-powered applications run reliably across all major operating systems.
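A matrix workflow along these lines can be sketched as a GitHub Actions config (a minimal sketch, not the article's actual workflow: the workflow name, model tag, and prompt are illustrative assumptions, and the macOS/Windows install steps are stubbed out since Ollama's installation differs per OS — check the Ollama docs for those runners):

```yaml
# Sketch: smoke-test an Ollama-backed setup across major OSes.
name: ollama-os-matrix
on: [push]

jobs:
  smoke-test:
    strategy:
      fail-fast: false          # let every OS finish even if one fails
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4

      # Linux install via Ollama's install script; macOS and Windows
      # use different installers, so those steps would differ.
      - name: Install Ollama (Linux)
        if: runner.os == 'Linux'
        run: curl -fsSL https://ollama.com/install.sh | sh

      - name: Pull a small model and run a test prompt (Linux)
        if: runner.os == 'Linux'
        run: |
          ollama serve &          # start the local server in the background
          sleep 5                 # crude wait for the server to come up
          ollama pull llama3.2:1b # compact model suited to CI hardware
          ollama run llama3.2:1b "Reply with the word OK"
```

The `matrix.os` axis is what gives you the cross-OS coverage the blurb describes; `fail-fast: false` ensures a failure on one OS still reports results for the others.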