DBAs must consider several factors when selecting a database’s block size. Typically, OLTP and mixed-workload databases use an 8K block size, while data warehouses often use 16K. The conventional wisdom is that smaller block sizes suit workloads with lots of random I/O (e.g., OLTP), while larger block sizes suit workloads with lots of sequential I/O (e.g., a data warehouse). The decision cannot be made lightly, because changing a database’s block size requires a complete rebuild of the database. In other words: stop the applications, export the database via Data Pump or a similar tool, load it into a new database with the “right” block size, and validate that everything came through perfectly. How big is your database? How long is your maintenance window? This is truly a case of the cure being worse than the disease.
At Pure Storage, we believe one factor should never influence the block size decision: your storage subsystem. Unlike other vendors, we don’t handcuff flash storage with paradigms from the spinning-disk era, such as the Advanced Format’s 4K sector size. Violin Memory, for example, has documented the importance of a 4K database block size on their appliance because it is architected with a 4K geometry.
The Pure Storage FlashArray is not. We use a 512-byte geometry, so every I/O, be it 4K, 8K, 16K, or anything else, is a multiple of our “sector”. In effect, we have a variable block size, so we never experience block misalignment or write amplification. An added benefit is that we achieve much better data reduction rates than our competitors because we deduplicate data in 512-byte chunks. There is no relationship between the database block size and I/O to physical media, nor between the 512-byte addressing and the physical representation of data on the media.
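To make the misalignment point concrete, here is a small sketch (our own illustration, not Pure Storage or vendor code) that counts how many physical sectors a single I/O touches on a device with a given sector geometry:

```python
def sectors_touched(offset: int, io_size: int, sector_size: int) -> int:
    """Number of physical sectors a single I/O spans on a device
    with the given sector size, starting at the given byte offset."""
    first = offset // sector_size
    last = (offset + io_size - 1) // sector_size
    return last - first + 1

# An 8K database block on a 512-byte geometry spans exactly 16 sectors
# at any 512-byte-aligned offset -- no partial sectors, no amplification.
print(sectors_touched(0, 8192, 512))      # 16
print(sectors_touched(512, 8192, 512))    # 16

# The same 8K block on a 4K geometry spans 2 sectors when aligned,
# but 3 when it straddles a sector boundary -- forcing the device into
# read-modify-write cycles (write amplification).
print(sectors_touched(0, 8192, 4096))     # 2
print(sectors_touched(512, 8192, 4096))   # 3
```

Since every I/O a database issues is a multiple of 512 bytes and 512-byte aligned, a 512-byte geometry never produces the partial-sector case.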
We used Quest’s Benchmark Factory to drive a TPC-E workload against 4 Oracle databases. The databases were identical in every way except for the block size, which spanned 4K, 8K, 16K, and 32K.
We ran the exact same Benchmark Factory TPC-E workload on each database instance.
For each block size, we scaled the user load from 5 to 60 users in 5-user increments. Each user load ran for 10 minutes, for a 2-hour overall test duration. For each iteration, we collected performance metrics such as transaction rate and response time.
The graphs below illustrate the average of these metrics over each 2-hour test run. The performance consistency speaks for itself.
Similarly, we can see that the performance was consistent across the varying user loads over the course of each run. The largest anomaly was a 1ms difference in response time for the 4K block size for user loads in the 30-60 range:
One of our tenets at Pure Storage is simplicity. We believe that migrating your database or any other application should be as simple as possible. You shouldn’t need to change fundamental attributes such as database block size. Your design decisions should be driven by application characteristics and operational policies, not by vendor-imposed constraints. For more information about the Pure Storage array’s capabilities, please visit our resource page.