Benchmarking is an important, yet often overlooked, aspect of any database management system (DBMS) research and development effort. Despite several advancements over the last decades, deploying a comprehensive testing platform with a diverse set of data sets and workloads is still non-trivial. In many cases, researchers and developers are limited to a small number of workloads when evaluating the performance characteristics of their work. This is due to the lack of a universal benchmarking infrastructure and to the fact that real data and workloads are typically hard to obtain because of privacy and security concerns. The result is much unnecessary engineering effort, limited availability of reference workloads, and scientific results that are difficult to compare.
To remedy these problems, we present OLTP-Bench, an extensible “batteries included” DBMS benchmarking testbed that is tailored for on-line transaction processing (OLTP) and Web-oriented workloads.
Some of the key features are:
- Precise rate control (lets users define, and change over time, the rate at which requests are submitted)
- Precise transaction mixture control (lets users define, and change over time, the percentage of each transaction type)
- Access distribution control (allows emulating evolving hot-spots, temporal skew, etc.)
- Support for trace-based execution (ideal for replaying real workloads)
- Extensible design
- Support for statistics collection (microsecond precision for latency and throughput, second precision for OS resource utilization)
- Elegant management of SQL dialect translations (to target various DBMSs)
- Support for all major relational DBMSs and DBaaS offerings via the standard JDBC interface (tested on MySQL, Postgres, Oracle, SQL Server, DB2, HSQLDB, Amazon RDS MySQL, Amazon RDS Oracle, SQL Azure)
- Stored-procedure-friendly architecture
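Rate and mixture control are typically driven by a workload configuration file. The fragment below sketches what a phased configuration might look like, with two phases that change both the request rate and the transaction mixture over time; the element names and values are illustrative and may differ from the sample configuration files shipped in the repository.

```xml
<?xml version="1.0"?>
<parameters>
    <!-- JDBC connection settings (values are placeholders) -->
    <dbtype>mysql</dbtype>
    <driver>com.mysql.jdbc.Driver</driver>
    <DBUrl>jdbc:mysql://localhost:3306/benchdb</DBUrl>
    <username>user</username>
    <password>secret</password>
    <scalefactor>1</scalefactor>
    <terminals>10</terminals>
    <!-- Two execution phases: each fixes a duration (seconds),
         a target request rate (requests/s), and a transaction
         mixture given as percentage weights over the transaction
         types, in declaration order -->
    <works>
        <work>
            <time>60</time>
            <rate>100</rate>
            <weights>45,43,4,4,4</weights>
        </work>
        <work>
            <time>60</time>
            <rate>500</rate>
            <weights>20,20,20,20,20</weights>
        </work>
    </works>
</parameters>
```

Chaining phases in this way is how evolving conditions (load spikes, shifting mixtures) are emulated without restarting the benchmark.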
The key contributions of OLTP-Bench are its ease of use and extensibility; its support for tight control of transaction mixtures, request rates, and access distributions over time; and its ability to target all major DBMSs and database-as-a-service (DBaaS) platforms. Moreover, it is bundled with ten workloads that differ in complexity and system demands:
- JPAB (Hibernate)
- Resource Stresser
- ScienceWise (SPARQL)
- SIBench (Snapshot Isolation)
By design, we refrain from defining “competition rules” but focus on providing an infrastructure that others can leverage for a broad range of benchmarking purposes. To demonstrate the flexibility of our framework, we conducted an extensive set of experiments using four popular DBMSs and three DBaaS offerings, including side-by-side system comparisons, RAM and multi-core efficiency tests, multi-tenant deployments, and performance-vs-pricing evaluations.
The framework is designed for easy extension: we provide stub code that contributors can use to add a new benchmark while leveraging all of the system's features (logging, rate control, mixture control, etc.).
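To give a feel for the extension pattern, here is a minimal, self-contained sketch. The real stub in the repository is built on the framework's own API classes rather than the simplified interfaces shown here; `Procedure`, `ReadItem`, and `UpdateItem` below are hypothetical stand-ins used only to illustrate the shape of a contributed benchmark (declare your transaction types, let the framework drive them).

```java
import java.util.List;
import java.util.Random;

// Illustrative sketch: a contributed benchmark declares its transaction
// types; the framework supplies workers, rate control, mixture control,
// and statistics collection around them.
public class MyBenchmarkSketch {

    // A transaction type: a name plus the logic to run one request.
    interface Procedure {
        String name();
        void run(Random rng);
    }

    // Two toy transaction types for the hypothetical benchmark.
    static class ReadItem implements Procedure {
        public String name() { return "ReadItem"; }
        public void run(Random rng) { /* issue a SELECT here */ }
    }

    static class UpdateItem implements Procedure {
        public String name() { return "UpdateItem"; }
        public void run(Random rng) { /* issue an UPDATE here */ }
    }

    // The benchmark module's main job: enumerate its transaction types.
    static List<Procedure> makeProcedures() {
        return List.of(new ReadItem(), new UpdateItem());
    }

    public static void main(String[] args) {
        for (Procedure p : makeProcedures()) {
            System.out.println(p.name());
        }
    }
}
```

In the actual framework the equivalent hooks live in the stub classes we provide, so a new workload automatically inherits the phased rate and mixture control described above.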
This project is open-source and the code is hosted on GitHub.
If you use it for your papers or your work, please cite us and let us know so we can add you to our Publications Using OLTPBenchmark page. If you want to contribute by improving the framework or adding your own workloads, we are happy to help you help us. Please contact one of our main contributors (listed below).
- Carlo Curino
- Djellel Eddine Difallah (point of contact)
- Andy Pavlo
- Philippe Cudré-Mauroux
- Dana Van Aken
More than source code
The goal of this website is not only to provide access to the source code, usage examples, and many experimental results, but also to provide a place where others can report their experiments (and have them automatically rendered; see: Experiments), thus making comparisons easier. To bootstrap this effort, we report several hundred experiments, including configuration parameters and results.