NEW YORK, April 13, 2015 /PRNewswire/ -- End Point, a database expert and full-service ecommerce consulting company, today announced the results of a NoSQL database benchmark comparing Apache Cassandra™, MongoDB, HBase, and Couchbase. The benchmark found that Apache Cassandra outperformed the most recent versions of the other leading NoSQL databases in both throughput and latency across the scale-out tests that were run.
"NoSQL databases have become a standard part of the toolset for high-throughput, horizontally scalable applications with large data sets, and Apache Cassandra proved to be the top performer throughout this study," said Jon Jensen, CTO, End Point. "While we always recommend that anyone assessing a database's performance test it under the specific use-case and deployment conditions intended for a particular production application, general competitive benchmarks like this one can be very useful to those evaluating NoSQL databases."
The report shows that Apache Cassandra performed significantly better than Couchbase 3.0, MongoDB 3.0 (with the WiredTiger storage engine), and HBase 0.98 in both throughput and latency. Using the YCSB benchmark, End Point scaled each database from 1 to 32 nodes across a variety of tests, including load, insert-heavy, read-intensive, analytic, and other typical transactional workloads.
End Point focused the tests on the workloads, data volumes, and conditions most commonly found in production environments. This ruled out small, RAM-only data volumes, the potential for losing data during write tasks, and single-node deployments. Instead, the tests used a multi-node, scale-out deployment, with data volumes that exceeded the RAM capacity of each machine and no possibility of data loss during load/write operations.
The full End Point report titled "Benchmarking Top NoSQL Databases" can be found here.
End Point performed the benchmark on Amazon Web Services EC2 instances, an industry-standard platform for hosting horizontally scalable services such as the NoSQL databases tested. Each test was performed three separate times on three different days to ensure accuracy, and new EC2 instances were used for each test run to further reduce the impact of any "lame instance" or "noisy neighbor" effect on any one test.
About End Point
SOURCE End Point