
Machine Learning

AI, Privacy and Terms of Service Updates

Like everyone else in the world, we are thinking hard about how we can harness the power of AI and machine learning while also staying true to our core values around respecting the security and privacy of our users’ data. If you use Sentry, you might have seen our “Suggested Fix” button, which uses GPT-3.5 to try to explain and resolve a problem. We’re also developing additional ideas that we’re excited to preview.

Leveraging Argo Workflows for MLOps

As the demand for AI-based solutions continues to rise, there’s a growing need to build machine learning pipelines quickly without sacrificing quality or reliability. However, since data scientists, software engineers, and operations engineers each use tools specialized to their own fields, synchronizing their workflows to create optimized ML pipelines is challenging.

Unlock the power of network forecasting with machine learning

In the dynamic world of IT, traditional network monitoring approaches are no longer sufficient to manage the complexities of today’s networks—be they wired or wireless. To stay ahead of network events, IT administrators must shift from being reactive to adopting a proactive stance. This transition involves a comprehensive approach to network monitoring that includes forecasting future network requirements with the help of machine learning (ML) technology.
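Forecasting future network requirements, as described above, can be as simple as extrapolating a trend from recent utilisation samples. The sketch below is a minimal illustration of that idea, not the approach of any particular monitoring product; the function name and the sample data are illustrative assumptions.

```python
# Minimal capacity-forecasting sketch: fit a least-squares line to
# equally spaced bandwidth samples, then project usage N intervals ahead.

def linear_forecast(samples, steps_ahead):
    """Fit y = a + b*x over samples at x = 0..n-1, then extrapolate."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # Slope from covariance over variance of the time index.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

# Hourly link utilisation (Mbps) trending upward:
usage = [410, 425, 430, 450, 465, 480]
print(round(linear_forecast(usage, 24)))  # projected utilisation a day out
```

Real tools layer seasonality, confidence intervals, and trained models on top of this, but the proactive shift is the same: alert on where the line is heading, not only on where it is now.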

ML and APM: The Role of Machine Learning in Full Lifecycle Application Performance Monitoring

The advent of Machine Learning (ML) has unlocked new possibilities in various domains, including full lifecycle Application Performance Monitoring (APM). Given the diversity of modern applications, maintaining peak performance and seamless user experiences poses significant challenges. So where and how do ML and APM fit together? Traditional monitoring methods are often reactive, addressing issues only after they have already affected the application’s performance.
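One common way ML-style baselining shows up in APM is anomaly detection: instead of a fixed latency threshold, a service is flagged when a measurement deviates sharply from its recent baseline. The sketch below uses a simple z-score over a rolling window; the numbers and function names are illustrative assumptions, not any specific APM product's method.

```python
# Flag a response time that sits far above the recent baseline,
# rather than waiting for a static threshold to be breached.
import statistics

def is_anomalous(baseline_ms, latest_ms, z_threshold=3.0):
    """True if latest_ms is more than z_threshold standard deviations
    above the mean of the baseline window of response times (ms)."""
    mean = statistics.mean(baseline_ms)
    stdev = statistics.stdev(baseline_ms)
    return (latest_ms - mean) / stdev > z_threshold

window = [120, 118, 125, 122, 119, 121, 124, 120]
print(is_anomalous(window, 310))  # large spike -> True
print(is_anomalous(window, 126))  # within normal variation -> False
```

Production systems replace the fixed window and z-score with learned seasonal baselines, but the reactive-to-proactive shift the article describes rests on this same comparison.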

A Guide to Predictive Maintenance & Machine Learning

Various economic pressures on businesses have created a focus on new and innovative ways to manage operational costs. At the same time, businesses are looking at using IT to help manage overall business costs and increase income—for example, by supporting remote working, and in many cases, enabling e-commerce to replace closed retail outlets. Management of infrastructure to minimize downtime has two major benefits: reductions in support and maintenance costs and improvements in service levels.

Building a comprehensive toolkit for machine learning

In the last couple of years, the AI landscape has evolved from a research-focused practice to a discipline delivering production-grade projects that are transforming operations across industries. Enterprises are growing their AI budgets and are open to investing in both infrastructure and talent to accelerate their initiatives, so it’s the ideal time to make sure that you have a comprehensive toolkit for machine learning (ML).

Canonical releases Charmed Kubeflow 1.8

Canonical, the publisher of Ubuntu, announced today the general availability of Charmed Kubeflow 1.8. Charmed Kubeflow is an open source, end-to-end MLOps platform that enables professionals to easily develop and deploy AI/ML models. It runs on any cloud, including hybrid cloud or multi-cloud scenarios. This latest release also offers the ability to run AI/ML workloads in air-gapped environments.

ML for software engineers ft. Gideon Mendels of Comet ML

In this episode, Rob explores the fascinating crossroads of machine learning and software engineering with Gideon Mendels, the co-founder and CEO of Comet ML. Gideon navigates the often ambiguous world of training ML models, focusing on building a common language between software engineers and data science teams. Gain valuable insights into fostering mutual understanding between these two disciplines and aligning the possibilities of ML with organizational needs in this thought-provoking episode.

Optimize your MLOps pipelines with inbound webhooks

In a traditional DevOps implementation, you automate the build, test, release, and deploy process by setting up a CI/CD workflow that runs whenever a change is committed to a code repository. This approach is also useful in MLOps: if you make changes to the machine learning logic in your code, they can trigger your workflow. But what about changes that happen outside of your code repository?
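The idea of an inbound webhook is that an external system, such as a data-labelling tool or model registry, POSTs an event to your pipeline platform, which then decides whether to start a run. The sketch below shows only that decision step; the event schema, event names, and default pipeline name are hypothetical, not the API of any specific CI/CD system.

```python
# Decide whether an incoming webhook payload should trigger an
# MLOps pipeline run, based on the event type it carries.
import json

def should_trigger(raw_body, watched_events=("dataset.updated", "model.promoted")):
    """Return the name of the pipeline to run, or None to ignore the event."""
    event = json.loads(raw_body)
    if event.get("type") in watched_events:
        # Fall back to a default pipeline if the payload names none.
        return event.get("pipeline", "retrain-default")
    return None

payload = json.dumps({"type": "dataset.updated", "pipeline": "retrain-fraud-model"})
print(should_trigger(payload))                                # -> retrain-fraud-model
print(should_trigger(json.dumps({"type": "comment.created"})))  # -> None
```

In practice this logic sits behind an HTTP endpoint exposed by the CI/CD or workflow platform, so events like "new labelled dataset available" can retrain a model without any commit to the code repository.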

10 Practical Machine Learning Use Cases in Observability - Navigate Europe 23

Dive into the world of machine learning and its practical applications in observability with Andrew Maguire from Netdata. Explore a variety of use cases, challenges, and considerations in implementing ML for enhanced monitoring and analytics. Learn about the potential benefits and the importance of human oversight in this insightful presentation.