Is Your Storage as Quick as a Flash? PART II — Flash Cache Boosts Storage Investment

NetApp’s Flash Cache has been around for some time. It started with PAM (Performance Acceleration Module) cards, evolved into Flash Cache, and currently includes Flash Cache 2, which offers up to 2TB of cache per card and can be combined for up to 16TB of intelligent read cache in a single system.

That is all great stuff, but I want to talk about how it works and, for existing NetApp customers, how you can tell whether adding Flash Cache would benefit an existing workload.

Let’s look at how Flash Cache works, starting with a quick baseline of how a NetApp system behaves without it. When a read request comes in, the system checks its buffer cache in system memory for the data. If the data is not there, it goes to disk to find it. Whether the data is found in the buffer cache depends on many factors, including how busy the system is, the priority of the data blocks, and when the data was last accessed. The buffer cache ejects blocks when memory is needed, clearing the lowest-priority data first.
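To make that read path concrete, here is a minimal Python sketch of a buffer cache with priority-based eviction. It is illustrative only, not Data ONTAP code; the class names, the priority scheme, and `read_from_disk` are all assumptions for the example.

```python
class BufferCache:
    """Illustrative in-memory read cache with priority-based eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = {}  # key -> (data, priority, last_access)
        self.clock = 0    # logical clock for recency

    def get(self, key):
        self.clock += 1
        entry = self.blocks.get(key)
        if entry is None:
            return None                                  # miss: caller goes to disk
        data, priority, _ = entry
        self.blocks[key] = (data, priority, self.clock)  # refresh recency
        return data

    def put(self, key, data, priority=0):
        """Insert a block; return the ejected (key, data) pair, if any."""
        ejected = None
        if len(self.blocks) >= self.capacity:
            # Eject the lowest-priority, then least-recently-accessed, block.
            victim = min(self.blocks, key=lambda k: self.blocks[k][1:])
            ejected = (victim, self.blocks.pop(victim)[0])
        self.clock += 1
        self.blocks[key] = (data, priority, self.clock)
        return ejected

def read(cache: BufferCache, key):
    data = cache.get(key)           # 1. check the buffer cache in system memory
    if data is None:
        data = read_from_disk(key)  # 2. not there: go to disk
        cache.put(key, data)
    return data

def read_from_disk(key):
    return b"..."                   # stand-in for an actual disk read
```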

When we add Flash Cache to the system, qualifying blocks that are ejected from the buffer cache can be placed in Flash Cache instead of simply being discarded. When a read request is made, the system first checks its buffer cache for the data. If the data is not there, the system goes to Flash Cache. If it does not find the data there, it finally goes to disk. “Hot” data that is accessed often cycles from Flash Cache back up to the buffer cache as it is read again and again. Unlike the buffer cache, Flash Cache uses a FIFO (First In, First Out) algorithm to eject “cold” data from the cache.
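Continuing the sketch above, under the same assumptions, the two-tier lookup and the FIFO ejection might look like this:

```python
from collections import OrderedDict

class FlashCache:
    """Illustrative second-tier read cache with FIFO ejection of cold data."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()           # insertion order doubles as age

    def insert(self, key, data):
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)   # FIFO: the oldest entry is ejected
        self.blocks[key] = data

    def take(self, key):
        return self.blocks.pop(key, None)     # on a hit, the block moves back up

def tiered_read(key, buffer_cache, flash_cache):
    data = buffer_cache.get(key)
    if data is not None:
        return data                           # 1. hit in the buffer cache
    data = flash_cache.take(key)              # 2. otherwise, check Flash Cache
    if data is None:
        data = read_from_disk(key)            # 3. finally, go to disk
    # Hot data cycles back up to the buffer cache; whatever the buffer
    # cache ejects may in turn be placed in Flash Cache (qualification of
    # ejected blocks is covered next).
    ejected = buffer_cache.put(key, data)
    if ejected is not None:
        flash_cache.insert(*ejected)
    return data
```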

I mentioned qualifying blocks in the description of how data is ejected from the buffer cache and inserted into Flash Cache. Not all blocks ejected from main memory are inserted into Flash Cache; only qualifying blocks are. Large sequential reads, for instance, go directly to disk because that is the most efficient way to serve that data. For random-read-intensive workloads, the benefit of reading data from Flash Cache is that it can be served as much as 10 times faster than from disk alone.
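An admission check along these lines captures the idea; the threshold and the way a “sequential run” is detected are invented for the example, since the real qualification logic lives inside Data ONTAP:

```python
SEQUENTIAL_RUN_THRESHOLD = 8  # assumed: treat 8+ contiguous blocks as a sequential read

def qualifies_for_flash_cache(run_length: int) -> bool:
    """Decide whether an ejected block should be admitted to Flash Cache.

    Blocks that arrived as part of a long sequential scan are not admitted:
    disks serve sequential data efficiently, and caching it would only push
    out the random-read blocks that gain the most from flash.
    """
    return run_length < SEQUENTIAL_RUN_THRESHOLD

# In the tiered read path above, the ejection step would then become:
#     if ejected is not None and qualifies_for_flash_cache(run_length):
#         flash_cache.insert(*ejected)
```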

NetApp introduced a feature in Data ONTAP 7.3.2 and later called PCS (Predictive Cache Statistics). It allows the storage system to simulate the use of a Flash Cache card before one is actually installed. This lets storage administrators determine the size of the read working set of a particular workload and, consequently, understand the performance benefit of adding cache to the system. It is as simple as enabling PCS, allowing the workload to run, and collecting data throughout the process. The procedure is usually run a few times with different options and cache sizes to determine the optimal configuration.
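PCS itself is enabled through Data ONTAP options and read back through the system’s counters; the idea behind it, though, can be sketched in Python as a “shadow” cache that tracks only block identifiers, so a large cache can be modeled with almost no memory and no flash hardware. This is a hypothetical illustration of the technique, not PCS itself:

```python
from collections import OrderedDict

def simulate_cache_hit_rate(miss_trace, cache_blocks):
    """Estimate the hit rate a flash read cache of `cache_blocks` entries would add.

    `miss_trace` is the sequence of block IDs that missed the buffer cache,
    i.e. the reads that currently go to disk.
    """
    shadow = OrderedDict()
    hits = total = 0
    for block_id in miss_trace:
        total += 1
        if block_id in shadow:
            hits += 1                     # this read would have been a flash hit
            continue
        if len(shadow) >= cache_blocks:
            shadow.popitem(last=False)    # FIFO ejection, as in Flash Cache
        shadow[block_id] = None           # track the ID only, not the data
    return hits / total if total else 0.0

# Rerun the same trace against several candidate sizes, much as PCS is run
# a few times with different options, to find where extra cache stops helping.
trace = []  # fill with block IDs from a representative workload window
for label, blocks in [("256GB", 64_000_000), ("512GB", 128_000_000)]:
    print(label, simulate_cache_hit_rate(trace, blocks))  # 4KB blocks assumed
```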

The ability to model the performance of the system without making any physical changes is powerful. The ease of sizing and deploying this technology combined with the benefits of efficiency and speed make Flash Cache a big win for many customers looking to make the most of their storage investment.
