Is Your Storage as Quick as a Flash? Part III

Let’s move on to NetApp’s implementation of SSDs in its caching lineup: Flash Pool. This feature gives customers the ability to add SSD-based cache to an existing hard-drive-based aggregate, creating an aggregate-level read/write cache. That matters because, unlike Flash Cache, the cache is persistent across takeovers and givebacks, and caching policies can be set at the volume level. As with Flash Cache, existing customers can use PCS to size their Flash Pools appropriately.

It is important to note that Flash Pool caches repeat random read operations and small-block random overwrite operations. Sequential reads are always served from disk with the help of read-ahead algorithms, and sequential writes are not cached because the system’s write path is already optimized for them.
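To make those eligibility rules concrete, here is a rough Python sketch (this is illustrative only, not NetApp code; the IORequest fields and flash_pool_candidate function are invented names) of how an I/O might be classified as a Flash Pool caching candidate:

```python
from dataclasses import dataclass

BLOCK_LIMIT = 16 * 1024  # small-block overwrite limit described in this post (16KB)

@dataclass
class IORequest:
    op: str            # "read" or "write"
    sequential: bool   # as detected by the system's sequential-I/O heuristics
    size: int          # bytes
    is_overwrite: bool # write to a block that already exists on disk

def flash_pool_candidate(io: IORequest) -> bool:
    """Return True if this I/O pattern is a Flash Pool caching candidate."""
    if io.sequential:
        return False   # sequential reads and writes always go to/from HDD
    if io.op == "read":
        return True    # random reads may be cached for repeat access
    # Random writes: only small overwrites of existing data qualify.
    return io.is_overwrite and io.size < BLOCK_LIMIT
```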

In part two of this blog series, I discussed how read requests are serviced by a NetApp system without the use of cache. That flow remains the same with Flash Pool. Just like Flash Cache, Flash Pool inserts blocks that are evicted from the system memory buffers if they were randomly read from disk. The insertion process, however, is quite different: the data block is added to a pending consistency point (CP) operation and written as part of a RAID stripe to the SSD cache. Subsequent read requests for the same data block are served from the Flash Pool SSDs back into the system buffer cache and forwarded to the requesting host. Frequently accessed blocks in the Flash Pool are given a higher cache priority so they stay in the Flash Pool longer; blocks that are no longer accessed and have gone “cold” are removed by an eviction scanner.
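The read-cache life cycle described above can be sketched as a toy model. The temperature values and scanner policy below are assumptions made for illustration, not ONTAP internals:

```python
class ReadCache:
    """Toy model of the Flash Pool read-cache flow: insert on buffer-cache
    eviction, promote on repeat reads, evict when cold."""

    def __init__(self):
        self.temp = {}  # block id -> "temperature" (higher = hotter)

    def insert_from_buffer_eviction(self, block, was_random_read):
        # Blocks pushed out of system memory are inserted into the SSD cache
        # only if they were originally read randomly from disk.
        if was_random_read:
            self.temp[block] = 1

    def read(self, block):
        if block in self.temp:
            self.temp[block] += 1   # frequently read blocks stay in the cache longer
            return "hit: served from SSD into the system buffer cache"
        return "miss: read from HDD"

    def eviction_scanner(self):
        # Periodic scan: cool every block and drop the ones that have gone cold.
        for block in list(self.temp):
            self.temp[block] -= 1
            if self.temp[block] <= 0:
                del self.temp[block]
```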

Flash Pool write caching offloads small-block random overwrites. The initial write of a data block goes directly to disk. Subsequent updates of the same blocks that are smaller than 16KB and match the write insertion policy are eligible to be inserted into the Flash Pool write cache. Once those blocks are written to the SSDs, the previous versions still residing on the HDDs are invalidated. When blocks in the SSD write cache go “cold,” they are also removed by the eviction scanner, but they must first be written back to the HDDs to ensure data consistency.
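Here is a similarly simplified sketch of that write-cache flow, including the destage-on-eviction step that distinguishes it from the read cache. Again, the class and method names are invented for illustration and do not represent ONTAP code:

```python
class WriteCache:
    """Toy model of Flash Pool write caching for small random overwrites."""

    def __init__(self, hdd):
        self.hdd = hdd  # dict: block id -> data on spinning disk
        self.ssd = {}   # dict: block id -> newest data held in the SSD write cache

    def write(self, block, data, size, random_io=True):
        is_overwrite = block in self.hdd or block in self.ssd
        if random_io and is_overwrite and size < 16 * 1024:
            self.ssd[block] = data     # small random overwrite absorbed by SSD
            self.hdd.pop(block, None)  # stale HDD copy invalidated
        else:
            self.hdd[block] = data     # initial and sequential writes go to HDD

    def evict_cold(self, block):
        # Unlike the read cache, eviction here must destage the block back to
        # HDD so the only current copy of the data is not lost.
        if block in self.ssd:
            self.hdd[block] = self.ssd.pop(block)
```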

It is important to understand the workload characteristics and working set size to reap the benefits of a Flash Pool architecture, and PCS can help with that. As a rule of thumb, we generally find that OLTP workloads are good candidates for Flash Pool; the quick back-of-the-envelope check below illustrates why working set size matters. If you have additional use cases or are running Flash Pool in your environment, leave a comment and tell us about it.
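This is a hypothetical gut-check only, not a substitute for PCS; the function name and the example numbers are made up for illustration:

```python
def cache_coverage(working_set_gb: float, usable_ssd_gb: float) -> float:
    """Fraction of the hot working set that the Flash Pool SSD cache could hold."""
    return min(usable_ssd_gb / working_set_gb, 1.0)

# Example: an OLTP workload with a 400 GB hot working set and 512 GB of usable
# SSD cache is fully covered; a 1 TB working set is only half covered.
print(cache_coverage(400, 512))   # 1.0 -> working set fits entirely in cache
print(cache_coverage(1024, 512))  # 0.5 -> only half of the hot data fits
```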
