Operations | Monitoring | ITSM | DevOps | Cloud

AI-Powered LMS: Personalization, Analytics & Automation for Corporate Training

Corporate training systems change operationally once AI is embedded into their learning logic. In LMS environments used for onboarding and workforce development, AI shifts training from scheduled delivery toward continuous adjustment based on employee performance and role context. This shift affects how companies assign onboarding programs, detect skill gaps, and maintain compliance readiness across departments.

Claude Code + OpenTelemetry: Per-Session Cost and Token Tracking

I was looking at our Claude Code spend in the Anthropic console the other day. Aggregate cost, aggregate tokens — no breakdown by developer, no breakdown by session. I knew my Hackathon team had been using it heavily on building out new features for the OpenTelemetry Distro Builder. But heavily how? I had no idea. Turns out Claude Code has been emitting OpenTelemetry signals the whole time. Per-session cost, token counts, every tool call it makes on your codebase.
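Turning that telemetry on is a matter of environment variables. The sketch below follows Anthropic's published monitoring docs; the collector endpoint and protocol are placeholders you would point at your own OpenTelemetry backend.

```shell
# Opt Claude Code in to emitting OpenTelemetry signals
export CLAUDE_CODE_ENABLE_TELEMETRY=1

# Export metrics (cost, token counts) and logs (tool-call events) via OTLP
export OTEL_METRICS_EXPORTER=otlp
export OTEL_LOGS_EXPORTER=otlp

# Placeholder endpoint: point this at your own collector
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
```

With a collector receiving these signals, the per-session breakdown falls out of the metric attributes: cost and token metrics carry a session identifier, so grouping by it in your backend gives the per-developer, per-session view the console doesn't show.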

AI performance reviews for your app with the Flare CLI

The Flare CLI connects to your Flare performance monitoring data and uses AI to turn it into actionable insights, right from your terminal. In this video, you'll see how a single command pulls your real performance data from Flare, then generates a full review: identifying slow endpoints, spotting error trends, and suggesting concrete fixes.

AI for App Resiliency: Automation Without Operational Chaos

Enterprise IT leaders face a persistent contradiction. Digital systems grow more complex each year, but operational stability and resilience do not improve at the same pace. Downtime costs are only the visible part of the problem. For large enterprises, unplanned outages can run into hundreds of thousands of dollars per hour in lost revenue, productivity, and remediation effort. The harder cost to quantify is the reputational damage when critical business services fail at the worst possible time.

Boosting Rust developer productivity with Cursor - Our journey at ilert

AI-assisted coding has evolved from a novelty into an industry standard. At ilert, we started our adoption in mid-2023, quickly realizing that success depends heavily on proper context and workflows. That dependence is particularly acute with Rust: while the language is central to our backend infrastructure, its strict compiler rules and distinct idiomatic approaches make it notoriously difficult for modern LLMs to master.

AI infrastructure cost optimization for scaling teams

This post is also available in German and in French. The 2026 AI landscape has shifted from "Can we build it?" to "How much will it cost to run it?" For CTOs and engineering leaders, the challenge is no longer just model performance: it is the underlying infrastructure sprawl that silently erodes margins. When AI workloads scale, they often inherit the inefficiencies of legacy cloud models: over-provisioned instances, fragmented data pipelines, and a lack of unified context.

How to Implement an AI Governance Framework Using Safe, Ethical and Reliable AI Guardrails

In my time at Ivanti, I've witnessed firsthand how AI acts as a force multiplier across enterprise organizations. When deployed strategically, AI accelerates decision-making and operational execution at scale in a way that teams simply can't sustain manually. However, without clear and enforceable AI guardrails, implementing AI opens organizations up to serious new risks.