Please DON’T Fiddle with Your Database Block Size!

DBAs must weigh several factors when selecting a database’s block size.  Typically, OLTP and mixed-workload databases use an 8K block size, while data warehouses often use 16K.  The conventional wisdom is that smaller block sizes suit workloads dominated by random I/O (e.g. an OLTP workload), while larger block sizes suit sequential I/O (e.g. a data warehouse).  The decision cannot be made lightly, because changing a database’s block size requires a complete rebuild of the database: stop the applications, export the database via Data Pump or a similar tool, import it into a new database with the “right” block size, and validate that everything came through intact.  How big is your database?  How long is your maintenance window?  This is truly a case of the cure being worse than the disease.
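
To make the cost concrete, here is a minimal sketch of such a rebuild using Oracle Data Pump.  The connect identifiers (ORCL4K, ORCL8K), directory object, and file names are illustrative assumptions, not a prescription; a real migration also involves pre-creating tablespaces, handling grants, and extensive validation.

    # Hypothetical block-size migration via Data Pump.  ORCL8K is assumed to be
    # an empty database pre-created with db_block_size=8192.

    # 1. Quiesce the application, then take a full export of the source database.
    expdp system@ORCL4K full=y directory=DATA_PUMP_DIR \
          dumpfile=full_%U.dmp logfile=exp_full.log parallel=4

    # 2. Import everything into the rebuilt database with the new block size.
    impdp system@ORCL8K full=y directory=DATA_PUMP_DIR \
          dumpfile=full_%U.dmp logfile=imp_full.log parallel=4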

At Pure Storage, we believe one factor should never influence the block size decision: your storage subsystem.  Unlike other vendors, we don’t handcuff flash storage with paradigms from the spinning-disk era, such as the Advanced Format 4K sector size.  Violin Memory, for example, has documented the importance of a 4K database block size on their appliance, because it is architected with a 4K geometry.

The Pure Storage FlashArray is not.  We use a 512-byte geometry, so every I/O, whether 4K, 8K, 16K, or anything else, is a multiple of our “sector”; in effect, we have a variable block size.  As a result, we never suffer block misalignment or write amplification.  An added benefit is that we achieve much better data reduction rates than our competitors, because we deduplicate data in 512-byte chunks.  There is no relationship between the database block size and the I/O issued to physical media, nor between the 512-byte addressing and the physical representation of data on the media.
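
You can see the geometry a LUN presents to a Linux host directly from the block layer.  The device path below is a hypothetical multipath name, not one taken from our environment:

    # Logical and physical sector sizes as reported by the Linux block layer.
    # /dev/mapper/oradata1 is an illustrative device name.
    blockdev --getss   /dev/mapper/oradata1   # logical sector size, e.g. 512
    blockdev --getpbsz /dev/mapper/oradata1   # physical block size, e.g. 512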

Proving It — Benchmark Environment

We used Quest’s Benchmark Factory to drive a TPC-E workload against 4 Oracle databases.  The databases were identical in every way except for the block size, which spanned 4K, 8K, 16K, and 32K.

Hardware

  • HP DL580 G7
  • Intel Xeon E7-4870 2.40GHz – 40 cores
  • 512GB RAM
  • Pure Storage Array – 2 controllers, 2 shelves (11TB raw storage)

Operating System

  • Red Hat Enterprise Linux Server release 6.3
  • Kernel 2.6.32-279.el6.x86_64

Database

  • Oracle ASM 11.2.0.3
  • 10 LUNs for +ORADATA and 2 LUNs for +REDOSSD (disk group setup sketched after this list)
  • Oracle Database 11.2.0.3
  • Parameters for each database are identical (except for block size)
  • 4K, 8K, 16K and 32K block sizes
  • 512-byte block size for redo logs (the default; the array doesn’t care which redo log block size you use)
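
For readability, here is a hedged sketch of what the storage and parameter setup could look like.  The device paths, disk group commands, and instance names are assumptions for illustration, not a record of the exact commands we ran:

    # Hypothetical ASM disk group creation; device paths are illustrative.
    sqlplus / as sysasm <<'EOF'
    CREATE DISKGROUP ORADATA EXTERNAL REDUNDANCY DISK '/dev/mapper/oradata*';
    CREATE DISKGROUP REDOSSD EXTERNAL REDUNDANCY DISK '/dev/mapper/redossd*';
    EOF

    # The four databases share one parameter template; only this line differs:
    #   initDB4K.ora  : db_block_size=4096
    #   initDB8K.ora  : db_block_size=8192
    #   initDB16K.ora : db_block_size=16384
    #   initDB32K.ora : db_block_size=32768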

Benchmark Factory Specifics

  • Benchmark Factory 6.8.1
  • TPC-E OLTP benchmark settings
  • Scale = 10 (approximately 80GB of data and 45GB of indexes)
  • Users = 5 to 60 incrementing by 5 every 10 minutes.
  • Zero think and keying times
  • 1 client workstation running one agent

We ran the exact same Benchmark Factory TPC-E workload on each database instance.

Test Results

For each block size, we scaled the user load from 5 to 60 users in 5-user increments.  Each user load ran for 10 minutes, for a 2-hour overall test duration.  For each iteration, we collected the following metrics (a cross-check query is sketched after the list):

  • Transaction rate (tps)
  • Response time
  • Throughput (KB/s)
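
Benchmark Factory reports all three metrics itself.  As a cross-check, similar figures can be sampled from inside Oracle; the connect identifier below is hypothetical, and the metric names come from v$sysmetric:

    # Optional sanity check during a run: sample Oracle's own workload metrics.
    # DB8K is an illustrative connect identifier.
    sqlplus -s system@DB8K <<'EOF'
    SELECT metric_name, ROUND(value, 2) AS value
    FROM   v$sysmetric
    WHERE  metric_name IN ('User Transaction Per Sec',
                           'SQL Service Response Time',
                           'Physical Write Total Bytes Per Sec');
    EOF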

The graphs below illustrate the average of these metrics over each 2-hour test run.  The performance consistency speaks for itself.

[Graph: average transaction rate, response time, and throughput for each block size]

Similarly, we can see that performance was consistent across the varying user loads over the course of each run.  The largest anomaly was a 1ms difference in response time for the 4K block size at user loads in the 30-60 range:

[Graph: transaction rate, response time, and throughput by user load for each block size]

Conclusion

One of our tenets at Pure Storage is simplicity.  We believe that migrating your database or any other application should be as simple as possible.  You shouldn’t need to change fundamental attributes such as database block size.  Your design decisions should be driven by application characteristics and operational policies, not by vendor-imposed constraints.  For more information about the Pure Storage array’s capabilities, please visit our resource page.