
The latest news and information on DevOps, CI/CD, automation, and related technologies.

User interface design and its importance in the user experience

A product’s user experience consists of the before, during, and after of a user’s interactions with it, or more precisely, their expectation, satisfaction, and fulfilment. Experience can be influenced by anything from marketing materials, product reviews, and interactions with sales to cost, the installation and onboarding process, product stability, post-sales support, and numerous other factors. So it’s clear that creating a positive experience is a collective responsibility.

What's the reliability of your checkout process?

One of the reasons companies practice Chaos Engineering is to prevent expensive outages, in retail or anywhere else, from happening in the first place. This blog post walks through a common retail outage in which the checkout process fails, then shows how to use Chaos Engineering to keep that outage from ever occurring. Maybe you’ve been there. Let’s dive in.
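The core idea can be sketched in a few lines: establish a steady state, inject a fault into a dependency, and verify the system degrades gracefully. The payment-gateway and retry logic below are illustrative assumptions, not the post's actual stack.

```python
import random

def flaky_payment_gateway(failure_rate=0.0):
    """Simulated payment dependency; failure_rate is the injected fault."""
    def charge(amount):
        if random.random() < failure_rate:
            raise ConnectionError("payment gateway unavailable")
        return {"status": "charged", "amount": amount}
    return charge

def checkout(cart_total, charge, retries=2):
    """Checkout with a simple retry-then-queue fallback, the behavior under test."""
    for attempt in range(retries + 1):
        try:
            return charge(cart_total)
        except ConnectionError:
            if attempt == retries:
                # Graceful degradation: don't lose the order, queue it.
                return {"status": "queued_for_retry", "amount": cart_total}

# Chaos experiment: confirm the steady state, then inject 100% failure.
healthy = checkout(42.50, flaky_payment_gateway(failure_rate=0.0))
broken = checkout(42.50, flaky_payment_gateway(failure_rate=1.0))
print(healthy["status"])  # charged
print(broken["status"])   # queued_for_retry
```

Running the same experiment against a checkout service without a fallback would surface the outage in a test, rather than in production.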

Monitor Apache Ignite with Datadog

Apache Ignite is a computing platform for storing and processing large datasets in memory. Ignite can leverage hardware RAM as both a caching and storage layer to serve as a distributed, in-memory database or data grid. This allows Ignite to ingest and process complex datasets—such as those from real-time machine learning and analytics systems—in parallel and at faster speeds than traditional databases backed only by disk storage.
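The RAM-over-disk pattern Ignite implements can be illustrated with a minimal read-through cache sketch. This is a conceptual toy, not Ignite's API: reads hit an in-memory map first and fall back to slower backing storage, promoting values into memory as they go.

```python
class ReadThroughCache:
    """Conceptual read-through cache: RAM layer over a slower backing store."""
    def __init__(self, backing_store):
        self.memory = {}              # fast in-memory layer
        self.backing_store = backing_store  # stands in for disk storage
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.memory:
            self.hits += 1
            return self.memory[key]
        self.misses += 1
        value = self.backing_store[key]  # simulated slow disk read
        self.memory[key] = value         # promote into the RAM layer
        return value

disk = {"user:1": {"name": "Ada"}, "user:2": {"name": "Grace"}}
cache = ReadThroughCache(disk)
cache.get("user:1")   # miss -> reads from backing store
cache.get("user:1")   # hit  -> served from memory
print(cache.hits, cache.misses)  # 1 1
```

Ignite adds distribution, SQL, and persistence on top of this basic idea, but the hit/miss economics are the reason in-memory grids outperform disk-only databases.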

How To Deploy Artifactory via Operator in OpenShift - John Peterson, Senior Partner Engineer, JFrog

In this lightning talk you will learn the basics of Kubernetes operators and how they work in the OpenShift environment. We will also walk through the OpenShift Operator Lifecycle, explaining the stages and steps involved in getting an operator from OperatorHub and deploying it into your OpenShift environment. A demonstration will show Artifactory being deployed into a new OpenShift cluster, illustrating how quickly and easily this can be done. Finally, we will hold a Q&A session to answer any questions you may have about the integration and how you can use it.
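Deploying an operator from OperatorHub typically means creating an OLM Subscription resource. The sketch below shows the general shape; the package name, channel, namespace, and catalog source are assumptions—check OperatorHub for the values your cluster actually exposes.

```yaml
# Hypothetical OLM Subscription for an Artifactory operator.
# channel, name, namespace, and source below are illustrative assumptions.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: artifactory-operator
  namespace: jfrog
spec:
  channel: stable             # assumed update channel
  name: artifactory-operator  # assumed package name in OperatorHub
  source: certified-operators # assumed catalog source
  sourceNamespace: openshift-marketplace
```

Once the Subscription is applied, OLM resolves the package from the catalog and installs the operator, which then watches for its custom resources.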

Let's Dive In: JFrog Unified Platform and Splunk - John Peterson, Senior Partner Engineer, JFrog

In our lightning talk we will cover the JFrog Unified Platform integration with Splunk for a holistic analytics view into the unified platform logs. Combining these two best-of-breed applications makes tremendous sense for an enterprise; without it, valuable data insights are lost, along with any action the business might have taken on them. We will cover how to set up this integration, the data insights that can be gained, and how you can extend the integration to discover new insights you will wish you had always had.
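Integrations like this ultimately deliver events to Splunk's HTTP Event Collector (HEC) as JSON. The sketch below builds an event in the HEC shape; the source, sourcetype, and index names are illustrative assumptions, not the values the JFrog integration actually uses.

```python
import json
import time

def hec_event(log_line,
              source="jfrog.artifactory",      # assumed source name
              sourcetype="artifactory:log",    # assumed sourcetype
              index="jfrog"):                  # assumed index
    """Build an event in Splunk HTTP Event Collector JSON format."""
    return {
        "time": time.time(),
        "source": source,
        "sourcetype": sourcetype,
        "index": index,
        "event": log_line,
    }

payload = hec_event("2024-01-01T00:00:00Z [INFO] artifactory - request served")
print(json.dumps(payload, indent=2))
```

In practice the payload would be POSTed to `https://<splunk-host>:8088/services/collector/event` with an `Authorization: Splunk <token>` header; the JFrog integration automates this forwarding for you.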

Becoming Hybrid: Operating Your Cloud Environment

It’s a day for celebration! Our migration is complete, and our applications are now running in the cloud environment best suited to their needs. The rest of our application inventory, the ones not cut out for the cloud, remain on-premises where they belong. Actually…we’re not done yet. We still have some work to do to make sure our hybrid environment runs smoothly and delivers the business value we expect. Fortunately, we aren’t the first ones to travel this path.

Stretch Your Reach with Unified JFrog Data and Elastic

DevOps teams rely on Artifactory as their bread-and-butter universal binary repository manager, but observing its operations can be challenging. With multiple high-availability nodes and unification with Xray in the JFrog DevOps Platform, operations data is spread across logs for each service in the JFrog Platform deployment. Operations teams need a view into valuable insights that can only be gained through real-time mining and observation of the platform’s data.
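Before scattered per-service logs can be searched in one place, they have to be normalized into a common structure for indexing. The log-line format below is a simplified assumption—real Artifactory and Xray log formats differ—but it sketches the normalization step such an integration performs.

```python
import re

# Hypothetical log-line shape: "<timestamp> [<level>] <service> - <message>".
# Real JFrog Platform log formats differ; this only illustrates normalization.
LINE = re.compile(r"^(?P<ts>\S+) \[(?P<level>\w+)\] (?P<service>\w+) - (?P<msg>.*)$")

def normalize(raw_lines):
    """Turn raw service log lines into uniform dicts ready for indexing."""
    docs = []
    for line in raw_lines:
        m = LINE.match(line)
        if m:
            docs.append(m.groupdict())
    return docs

logs = [
    "2024-01-01T00:00:00Z [INFO] artifactory - artifact deployed",
    "2024-01-01T00:00:01Z [WARN] xray - scan queue is filling up",
]
docs = normalize(logs)
print(docs[1]["service"])  # xray
```

Once every service's lines share one schema, a single query can answer questions that span Artifactory nodes and Xray alike.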

Creating Organizations and Teams and Managing Permissions In Cloudsmith

One reason for building a ‘single source of truth’ for software assets is that it gives the organization control over who can use what, and when. The ‘Wild West’ of public repositories gives no control at all and can lead to a situation in which packages and dependencies of dubious provenance are integrated into builds without a second thought. Within the Cloudsmith world, we want to have the maximum security and control possible.
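The organization → team → repository permission model the post describes can be sketched as follows. The names and roles here are illustrative, not Cloudsmith's API: access is granted to teams, and a user's effective permission on a repository is the strongest grant of any team they belong to.

```python
from dataclasses import dataclass, field

@dataclass
class Team:
    name: str
    members: set = field(default_factory=set)
    grants: dict = field(default_factory=dict)  # repo name -> "read" | "write"

@dataclass
class Organization:
    name: str
    teams: dict = field(default_factory=dict)

    def can(self, user, repo, action):
        """True if any team containing `user` grants at least `action` on `repo`."""
        order = {"read": 0, "write": 1}
        for team in self.teams.values():
            if user in team.members:
                level = team.grants.get(repo)
                if level is not None and order[level] >= order[action]:
                    return True
        return False

org = Organization("acme")
org.teams["ci"] = Team("ci", members={"builder"}, grants={"internal-pkgs": "write"})
org.teams["devs"] = Team("devs", members={"alice"}, grants={"internal-pkgs": "read"})
print(org.can("alice", "internal-pkgs", "write"))    # False
print(org.can("builder", "internal-pkgs", "write"))  # True
```

Granting to teams rather than individuals keeps the permission surface small: onboarding a new developer means one team membership, not a per-repository audit.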