Co-authored by Boris Kurktchiev (Field CTO, Diamanti) & Jason Mimick (Product Manager, MongoDB)
Organizations today are embracing containers and Kubernetes for the rapid development and portability of stateful applications such as databases. While containers and Kubernetes can abstract away much of the infrastructure, the underlying infrastructure remains an important consideration when containerizing high-performance databases such as MongoDB. Enterprises need to ensure that the infrastructure can support multiple service-level offerings (performance tiers) and can be self-managed across on-premises and cloud environments. In addition, these stateful applications contain business-critical information and demand advanced data services such as data protection, monitoring, and fine-grained RBAC over a transactional data store with encryption at rest and in transit.
MongoDB has traditionally delivered excellent performance-to-value ratios. Typically, these deployments have been bare-metal or VM-based. However, as cloud-native practices continue to grow and dominate enterprise software development and delivery, the ability to offer first-class, enterprise-grade data services on containerized platforms such as Kubernetes is more urgent than ever.
Recently, Diamanti teamed up with MongoDB to showcase the simplicity of running MongoDB on the Diamanti platform with MongoDB Enterprise Advanced. We also tested the performance gains delivered by Diamanti’s I/O acceleration and had those results validated by analyst firm IDC.
In this blog, we’ll highlight the test setup and give you a glimpse of the jaw-dropping results.
Goals of the tests
- Showcase a resilient MongoDB architecture on Kubernetes
- Demonstrate the I/O capabilities of MongoDB and Diamanti’s Kubernetes Platform under different workload scenarios
For these tests, we used two Diamanti D20 clusters, each with three nodes, combined with the MongoDB Enterprise Kubernetes Operator and MongoDB Cloud Manager to showcase the benefits of a purpose-built platform. While we chose Cloud Manager for this test case, MongoDB Ops Manager would typically be used in production scenarios. From a functional and Kubernetes perspective, the entire example is identical with either cluster management solution.
The Diamanti D20 series of modern hyper-converged platforms is packaged with Diamanti Ultima I/O acceleration cards in varying configurations of Intel CPUs, memory, and NVMe storage. Diamanti Ultima is a pair of second-generation PCIe-based I/O acceleration cards that offload networking and storage traffic to deliver dramatically improved performance. They provide enterprise-class storage services for disaster recovery (DR) and data protection (DP) of mission-critical applications, including mirroring, snapshots, multi-cluster asynchronous replication, and backup and recovery.
For these tests, we used Diamanti D20 “Small” nodes to form the two 3-node clusters. The D20 “Small” has 20 physical CPU cores running at 2.2 GHz, 192 GB of RAM, Diamanti Ultima I/O acceleration for 40G networking, and 4 TB of ultra-fast, low-latency NVMe storage.
Diamanti Spektra is a pre-validated, pre-packaged, fully featured software stack that includes Kubernetes, a container runtime, the operating system, enterprise-class DP/DR features, access controls, and a management UI.
The MongoDB database-as-a-service tier is implemented with MongoDB Ops Manager in conjunction with the MongoDB Enterprise Kubernetes Operator. MongoDB Ops Manager is an enterprise database management system that helps you automate database administration tasks such as deployments, upgrades, scaling events, and more through its UI and API. The MongoDB Enterprise Kubernetes Operator introduces native custom resource definitions (CRDs) into your Kubernetes cluster, enabling you to dynamically provision MongoDB clusters and MongoDB Ops Manager instances.
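As a rough sketch of what this looks like in practice (the resource names, version, ConfigMap, and Secret below are illustrative placeholders, not the exact manifests used in these tests), a three-member replica set managed by the operator can be declared along these lines:

```yaml
# Illustrative manifest; names and version are placeholders.
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
  namespace: diamanti-mongo
spec:
  type: ReplicaSet
  members: 3
  version: "4.2.2"
  opsManager:
    configMapRef:
      name: my-project        # ConfigMap pointing at the Ops/Cloud Manager project
  credentials: my-credentials  # Secret holding Ops/Cloud Manager API keys
  persistent: true             # back each member with a PersistentVolume
```

Once applied with `kubectl apply`, the operator reconciles this resource into a running, managed replica set registered with Ops Manager or Cloud Manager.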
Yahoo! Cloud Serving Benchmark (YCSB) was used as a simple, off-the-shelf synthetic test harness. While this tool connects to MongoDB just like any other application (with a secure connection string), we did optimize the software specifically for this project. As such, you may not get the same performance results, and neither these tests nor their results are valid for any other database that YCSB happens to support. You will, however, be able to use the exact same infrastructure and platform tooling for any MongoDB application running on Kubernetes.
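For reference, a typical YCSB session against a MongoDB deployment looks like the following sketch. The connection string, workload, and parameter values here are placeholders, not the tuned settings used in these tests:

```
# Load the initial dataset (placeholder connection string)
bin/ycsb load mongodb -s -P workloads/workloada \
  -p mongodb.url="mongodb://user:password@my-replica-set:27017/ycsb?ssl=true"

# Run the workload; YCSB reports throughput in operations per second
bin/ycsb run mongodb -s -P workloads/workloada \
  -p operationcount=1000000 -p threadcount=32 \
  -p mongodb.url="mongodb://user:password@my-replica-set:27017/ycsb?ssl=true"
```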
You can try out these demonstrations yourself with the example deployments and charts available in the total-cluster GitHub repository. Additional details for installing this in your own clusters are in the Diamanti-MongoDB YCSB Tests How To doc in our repository.
Below you can see the simple deployment of MongoDB on the Diamanti Spektra management console. To test all the workloads, we used a 3-member replica set, giving each member 4 GB of RAM and 4 CPU cores, and configured Diamanti Spektra QoS policies to guarantee 20,000 IOPS and 500 MB/s of network throughput for the containers. The setup is shown below.
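In plain Kubernetes terms, the per-member sizing above corresponds to a pod resource specification roughly like the following. Exact field placement varies by operator version, so treat this as a sketch:

```yaml
# Sketch of the per-pod sizing used in the test: 4 CPU cores, 4 GB of RAM
resources:
  requests:
    cpu: "4"
    memory: 4Gi
  limits:
    cpu: "4"
    memory: 4Gi
```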
Figure 2 shows the Diamanti Spektra dashboard displaying the 3 deployed containers in the namespace diamanti-mongo. Figure 3 shows the detailed metrics of one of the deployed containers.
Moreover, one of the built-in features of the Diamanti platform is the ability to augment Kubernetes with user-defined Quality of Service (QoS) tiers; the default set is shown in the figure below.
The results of the testing were clear: Diamanti delivered at least 27 times greater throughput, as measured by operations per second. For a full report on these findings and access to the complete test results, download the IDC Lab Validation Report.
Enterprises are adopting containers en masse, solidifying containers as the computing foundation for the next generation of applications. Early adopters used Kubernetes predominantly for stateless workloads because of its agility and simplicity. Stateful databases such as MongoDB are increasingly supported on Kubernetes as well, but they require persistent storage and high-performance I/O. While containers and Kubernetes can abstract away much of the infrastructure, the underlying physical I/O systems remain important, particularly for I/O-heavy workloads like MongoDB, so the underlying infrastructure is still a key consideration when containerizing high-performance databases.