The latest news and information on databases and related technologies.
While tuning application code and sizing the JVM appropriately are important for enhancing performance, it is equally important to look at how to tune access to the backend database. After all, the response time for a web request depends on the processing time in the Java application tier as well as the query processing time in the database tier.
Imagine some users complaining that querying PostgreSQL is slow (this has never happened, right?), and we have to troubleshoot the problem. I would normally start by checking the environment, specifically PostgreSQL metrics over time. Such monitoring shows whether CPU usage is too high, or what fraction of reads were served from the buffer cache rather than from disk. PostgreSQL logs also give information about the environment, such as how many statements were run and whether any errors occurred.
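The buffer-versus-disk check above boils down to one ratio. A minimal sketch, computing the cache hit ratio from the `blks_hit` and `blks_read` counters that PostgreSQL exposes in the `pg_stat_database` view (the sample counter values below are invented for illustration):

```python
# Sketch: PostgreSQL buffer cache hit ratio, derived from the
# cumulative counters in the pg_stat_database view.
# Counter values here are made up for illustration.

def cache_hit_ratio(blks_hit: int, blks_read: int) -> float:
    """Fraction of block reads served from shared buffers rather than disk."""
    total = blks_hit + blks_read
    if total == 0:
        return 1.0  # no reads recorded yet; nothing to flag
    return blks_hit / total

# Example counters, e.g. obtained with:
#   SELECT blks_hit, blks_read FROM pg_stat_database
#   WHERE datname = current_database();
ratio = cache_hit_ratio(blks_hit=980_000, blks_read=20_000)
print(f"buffer cache hit ratio: {ratio:.2%}")  # → 98.00%
```

A ratio that trends well below the high nineties usually means the working set no longer fits in shared buffers and disk reads are dragging query times up.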
Evolving MySQL operations requires understanding how MySQL works. A good monitoring tool alerts on issues before they impact end users and helps reduce the MTTR of incidents when they do occur. But choosing a database monitoring solution can be tough given the vast number of options available, each with its own pros and cons. In this blog post, I’ll review some of the best MySQL monitoring tools that can help measure and improve database performance.
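Whatever tool you pick, the core measurement is the same: turning MySQL's cumulative status counters into rates. A minimal sketch of deriving queries per second from two samples of the `Questions` counter (a real counter from `SHOW GLOBAL STATUS`; the sample values and interval below are invented for illustration):

```python
# Sketch: deriving a per-second rate from MySQL's cumulative status
# counters, the way monitoring tools compute queries-per-second.
# The Questions counter (SHOW GLOBAL STATUS) is real; the sample
# values below are invented.

def rate_per_second(prev_value: int, curr_value: int, interval_s: float) -> float:
    """Delta of a monotonically increasing counter over the sampling interval."""
    if interval_s <= 0:
        raise ValueError("sampling interval must be positive")
    return (curr_value - prev_value) / interval_s

# Two samples of the Questions counter taken 10 seconds apart:
qps = rate_per_second(prev_value=1_204_000, curr_value=1_206_500, interval_s=10.0)
print(f"queries per second: {qps:.1f}")  # → 250.0
```

Most of the tools reviewed below automate exactly this sampling loop, then alert when a rate deviates from its baseline.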
MarkLogic is a multi-model NoSQL database with support for queries across XML and JSON documents (including geospatial data), binary data, and semantic triples—as well as full-text searches—plus a variety of interfaces and storage layers. Customers include large organizations like Airbus, the BBC, and the U.S. Department of Defense. Because MarkLogic can process terabytes of data across hundreds of clustered nodes, maintaining a deployment is a complex business.