TPC Benchmark™ C (TPC-C) simulates a complete computing environment where a population of users executes transactions against a database. The benchmark is centered on the principal activities (transactions) of an order entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses. While the benchmark portrays the activity of a wholesale supplier, TPC-C is not limited to the activity of any particular business segment, but represents any industry that must manage, sell, or distribute a product or service. TPC-C involves a mix of five concurrent transactions of different types and complexity either executed on-line or queued for deferred execution. TPC-C performance is measured in new-order transactions per minute. The primary metrics are the transaction rate (tpmC), the associated price per transaction ($/tpmC), and the availability date of the priced configuration.
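The price/performance metric is simply the total cost of the priced configuration divided by its throughput. A minimal sketch (the figures below are hypothetical, not published results):

```python
def price_per_tpmc(total_system_price_usd: float, tpmc: float) -> float:
    """Price/performance for a TPC-C result: the total cost of the
    priced configuration divided by the new-order rate (tpmC)."""
    return total_system_price_usd / tpmc

# Hypothetical configuration: a $500,000 system achieving 250,000 tpmC.
print(price_per_tpmc(500_000, 250_000))  # → 2.0 ($/tpmC)
```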
TPC Benchmark™ E (TPC-E) is a new On-Line Transaction Processing (OLTP) workload developed by the TPC. The focus of the benchmark is the central database that executes transactions related to the firm's customer accounts. Although the underlying business model of TPC-E is a brokerage firm, the database schema, data population, transactions, and implementation rules have been designed to be broadly representative of modern OLTP systems. The primary metrics are the transactions per second (tpsE), the associated price per transaction ($/tpsE), and the availability date of the priced configuration.
The TPC-H Benchmark (TPC-H) is a decision support benchmark. It consists of a suite of business-oriented ad hoc queries and concurrent data modifications. The queries and the data populating the database have been chosen to have broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions.
The performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@Size), and reflects multiple aspects of the capability of the system to process queries. These aspects include the selected database size against which the queries are executed, the query processing power when queries are submitted by a single stream, and the query throughput when queries are submitted by multiple concurrent users. The TPC-H Price/Performance metric is expressed as $/QphH@Size.
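The composite nature of QphH@Size can be made concrete: the TPC-H specification combines the single-stream power measure and the multi-stream throughput measure as a geometric mean. A sketch using hypothetical measures:

```python
from math import sqrt

def qphh(power_at_size: float, throughput_at_size: float) -> float:
    """TPC-H composite query-per-hour metric: the geometric mean of the
    single-stream Power@Size and the multi-stream Throughput@Size,
    both measured at the same database scale factor."""
    return sqrt(power_at_size * throughput_at_size)

# Hypothetical measures at a given scale factor:
print(round(qphh(120_000.0, 80_000.0), 1))
```

Because it is a geometric mean, a system cannot compensate for weak throughput with strong single-stream power (or vice versa) as easily as it could with an arithmetic mean.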
Not application specific, the SPEC CPU2006 benchmark measures processor, chipset, and compiler speed (SPECint® and SPECfp®) and throughput (SPECint_rate and SPECfp_rate). The SPECint (single-task) and SPECint_rate (multi-task) benchmarks measure compute-intensive integer performance, while SPECfp (single-task) and SPECfp_rate (multi-task) measure compute-intensive floating-point performance. The integer benchmarks are representative of most real-world workloads, while the floating-point benchmarks are more specialized (crash simulations, ocean modeling, etc.) and most closely model high-performance computing (HPC) environments.
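SPEC CPU speed scores are derived by timing each benchmark, dividing the reference machine's run time by the measured run time, and taking the geometric mean of those ratios. A minimal sketch of that scoring scheme (the times below are hypothetical):

```python
from math import prod

def spec_ratio_score(ref_times: list[float], run_times: list[float]) -> float:
    """SPEC CPU-style speed score: each benchmark's ratio is the
    reference system's run time divided by the measured run time;
    the overall score is the geometric mean of the ratios."""
    ratios = [ref / run for ref, run in zip(ref_times, run_times)]
    return prod(ratios) ** (1 / len(ratios))

# Hypothetical three-benchmark suite (reference times vs. measured times):
print(round(spec_ratio_score([9000, 6000, 10000], [450, 300, 250]), 1))
```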
SPECpower_ssj2008 reports power consumption for servers at different performance levels - from 100 percent down to idle in 10-percent segments - over a set period of time. The graduated workload recognizes the fact that processing loads and power consumption on servers vary substantially over the course of days or weeks.
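The graduated levels roll up into a single overall ssj_ops/watt figure: total operations delivered across all load levels divided by total average power drawn across those levels, including active idle. A sketch with hypothetical, abbreviated measurements:

```python
def overall_ssj_ops_per_watt(ssj_ops: list[float], avg_watts: list[float]) -> float:
    """SPECpower_ssj2008-style summary metric: the sum of ssj_ops
    delivered at each target load level divided by the sum of the
    average power at every level, including active idle."""
    return sum(ssj_ops) / sum(avg_watts)

# Hypothetical, abbreviated run (a full run measures 100% down to 10%
# plus active idle; idle delivers zero operations by definition):
ops = [300_000.0, 150_000.0, 0.0]   # e.g. 100% load, 50% load, active idle
watts = [250.0, 180.0, 120.0]
print(round(overall_ssj_ops_per_watt(ops, watts), 1))
```

Note that the idle measurement contributes power but no operations, so a server that idles inefficiently is penalized in the overall score.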
SPECjAppServer2004 tests performance for a representative J2EE application and each of the components that make up the application environment, including hardware, application server software, JVM software, database software, JDBC drivers, and the system network. The workload is an application that emulates information flow among an automotive dealership, manufacturing, supply chain management, and an order/inventory system.
SPECjbb measures server-side Java performance and emulates a three-tier system, which according to SPEC® is the most common form of Java business application. In a three-tier environment, the business logic and object manipulation reside in the middle tier, which is the tier this test predominantly exercises. The clients are the first tier and the database is the third tier.
The SPECweb benchmark is designed to measure a system's ability to act as a web server that services static and dynamic page requests. Its three workloads (banking, E-commerce, and a support downloads site) measure simultaneous user sessions. To more closely model real-world implementations, a database backend is simulated, and the dynamic pages must be executed with a scripting engine.
VMware developed VMmark as a standard methodology for comparing virtualized systems. Each "tile" is a set of 6 workloads (virtual machines): a database server, mail server, file server, web server, Java transaction server, and a standby server (for failover or quick deployments). The workloads comprising each tile are run simultaneously in separate virtual machines. The performance of each workload is measured and then combined with the other workloads to form the score for the individual tile. Multiple tiles can be run simultaneously to increase the overall score.
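The tile-scoring scheme described above can be sketched as follows. This is an illustrative model, not VMmark's exact arithmetic: it assumes each scored workload's throughput is normalized against a reference platform, that the standby server is excluded from scoring, and that per-tile scores are combined as a geometric mean and summed across tiles.

```python
from math import prod

def tile_score(measured: dict[str, float], reference: dict[str, float]) -> float:
    """Illustrative per-tile score: normalize each scored workload's
    throughput to a reference platform, then take the geometric mean
    so no single workload dominates the tile."""
    ratios = [measured[w] / reference[w] for w in reference]
    return prod(ratios) ** (1 / len(ratios))

def overall_score(tile_scores: list[float]) -> float:
    """Illustrative overall score: tiles run concurrently, and adding
    tiles increases the total as long as each tile still passes."""
    return sum(tile_scores)

# Hypothetical reference and measured throughputs for the five scored
# workloads in one tile (standby omitted, as it does no measured work):
ref = {"db": 100.0, "mail": 50.0, "file": 20.0, "web": 80.0, "java": 40.0}
run = {"db": 120.0, "mail": 55.0, "file": 22.0, "web": 96.0, "java": 44.0}
print(round(tile_score(run, ref), 2))
```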