Latest News

How to Use Observability to Reduce MTTR

When you're operating a web application, the last thing you want to hear is "the site is down." Regardless of the reason, the fact that it is down is enough to make anyone responsible for an app break out into a sweat. As soon as you become aware of an issue, a clock starts ticking (sometimes literally) until the issue is fixed. Minimizing the time between an issue occurring and its resolution, the mean time to resolution (MTTR), is arguably the number one goal for any operations team.
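As a minimal illustration of the metric this article centers on, MTTR is simply the average of each incident's detection-to-resolution interval. The incident timestamps below are hypothetical:

```python
from datetime import datetime, timedelta

def mean_time_to_resolution(incidents):
    """Average resolution time across incidents.

    Each incident is a (detected_at, resolved_at) datetime pair.
    """
    durations = [resolved - detected for detected, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

# Hypothetical incident history: (detected, resolved)
incidents = [
    (datetime(2021, 6, 1, 9, 0),  datetime(2021, 6, 1, 9, 45)),   # 45 min
    (datetime(2021, 6, 3, 14, 0), datetime(2021, 6, 3, 15, 30)),  # 90 min
    (datetime(2021, 6, 7, 2, 0),  datetime(2021, 6, 7, 2, 15)),   # 15 min
]

print(mean_time_to_resolution(incidents))  # 0:50:00
```

Observability shortens the "detected" side of each pair by surfacing the failing component faster; the metric itself stays this simple.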

The Importance of Log Management for Your Home Network

The team at observIQ is just like many of you reading this: avid programmers, gamers, traders, thinkers, and innovators who build elaborate home networks for fun, for work, and for the simple reason that we enjoy technology. We are constantly growing the size and footprint of our home networks and labs, adding custom apps, devices, and servers, which makes it challenging to gauge our technical footprint.

Log Management Challenges in Modern IT Environments

Modern IT environments present organizations with challenges that are difficult to overcome. One such challenge is gaining visibility into their systems. One might argue that cloud computing and virtually limitless storage make some of the conventional visibility challenges easy to solve. However, architectures have shifted toward dynamically scheduled infrastructure and microservices, and both hardware and software are now more complex, each bringing its own set of challenges.

Is Operational Resilience in Financial Services actually just a data problem?

Operational resilience is currently a hot topic in financial services, largely because of the impact COVID has had on how customers interact with financial institutions. Almost overnight, the industry had to cope with a large volume of transactions moving to digital channels at the same time as its employees were forced to set up home offices so they could continue to work remotely.

Announcing LogDNA Agent 3.2 GA: Take Control of Your Logs

The LogDNA Agent is a powerful way for developers and SREs to aggregate logs from their many applications and services into an easy-to-use web interface. With only three kubectl commands, the installation process is quick and simple to complete for any number of connected systems. To help control which logs are stored and surfaced in the LogDNA web interface, users can set Exclusion Rules, which enable excluding certain queries, hosts, and tags directly from the UI.

The Spike Protection Bundle with Index Rate Alerting

For DevOps teams that want to accelerate release velocity and improve reliability, logs can unlock the insights they need to move faster. But for managers and budget owners, logging can be an unpredictable pain. Trying to estimate logging spend, especially with the adoption of microservices and container-based architectures, can seem like an impossible task.
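Index rate alerting is about catching a spend spike before it compounds. A rough sketch of the underlying arithmetic (the rates, threshold, and function names below are illustrative assumptions, not LogDNA's actual pricing or API):

```python
def projected_monthly_gb(daily_index_rate_gb, days=30):
    """Naive projection: assume today's index rate holds for the whole month."""
    return daily_index_rate_gb * days

def spike_alert(today_gb, baseline_gb, threshold=1.5):
    """Flag when today's ingestion exceeds the baseline by the threshold factor."""
    return today_gb > baseline_gb * threshold

baseline = 12.0   # hypothetical average GB indexed per day
today = 20.0      # hypothetical spike

print(projected_monthly_gb(today))   # 600.0 GB if the spike persists
print(spike_alert(today, baseline))  # True -> fire an index rate alert
```

The point of the alert is the gap between the two numbers: a single noisy service can turn a 360 GB month into a 600 GB month before anyone notices on the invoice.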

Using pre-built Monitors to proactively monitor your application infrastructure

SREs, developers, and DevOps staff responsible for mission-critical modern apps know that being notified in real time when, or before, critical conditions occur can make a massive difference in end-user digital experiences and in meeting a 99.99% availability objective.
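For context on what a 99.99% objective actually demands, the permitted downtime (the "error budget") is tiny, which is why proactive alerting matters. A quick back-of-the-envelope calculation:

```python
def allowed_downtime_minutes(availability, period_days=365):
    """Minutes of downtime permitted over the period at the given availability."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - availability)

# 99.99% ("four nines") over a year and over a 30-day month
print(round(allowed_downtime_minutes(0.9999), 1))      # 52.6 minutes per year
print(round(allowed_downtime_minutes(0.9999, 30), 1))  # 4.3 minutes per month
```

At roughly four minutes of budget per month, a monitor that fires before a condition becomes an outage is worth far more than one that merely confirms the outage afterward.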

New in Kibana: How we made it easier to manage visualizations and build dashboards

Our Kibana team has been hard at work executing on a new strategic vision for Kibana: streamline the dashboard creation process and sand down the rough edges of building visualizations for dashboards. We accomplished that goal, reducing the overall time it takes users to go from a blank slate to a meaningful dashboard that conveys insights about their data.

Easily ingest data to Elastic via Splunk

As organizations migrate to Elastic from incumbent vendors, quickly onboarding log data from their current solution is one of the first orders of business. Data onboarding often involves adjusting ingestion architecture and implementing configuration changes across data sources. We want to ensure that users trialing or migrating to Elastic can get data in and start seeing the power of Elastic solutions as quickly as possible.