Since the early days of the storage industry, almost all storage vendors have challenged each other to achieve the highest possible number of IOPS. Many historical storage datasheets show IOPS as the only number, and customers at the time likely made decisions based on that number alone.
Do IOPS really matter?
The short answer here is: “a little bit”. IOPS is one factor among several. Since the data revolution, a lot has changed. The source of data can now be millions of devices in an IoT system, which means millions of clients trying to read and write simultaneously. Workload types vary dramatically, especially in the presence of caching, from write-intensive solutions such as VDI to read-intensive workloads in the database world. And the time to reach the data has become extremely important in time-sensitive architectures such as core banking.
So a headline figure measured in the millions is nothing to be proud of on its own. Let us look at the other factors we need to check before selecting or judging our storage.
How are IOPS measured, and is that relevant to your workload?
Storage vendors tend to run their benchmarks in whatever way produces the highest IOPS number: a small number of clients, which might not match your use case; a small block size such as 4K, which might be far smaller than the one you need; and random workloads with a 50% read/write mix, which might be irrelevant to, for example, VDI or archiving workloads. Reads are usually much faster than writes, especially on RAID arrays. This style of benchmarking yields a huge IOPS figure that may mean little for workloads that need fewer IOPS but more data moved per I/O, and it hides a game-changing factor: latency.
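The block-size point is easy to see with a little arithmetic. A minimal sketch (the figures below are purely illustrative, not from any vendor's datasheet): the same useful throughput can be advertised as a huge IOPS number or a modest one, depending entirely on the block size the benchmark used.

```python
def throughput_mib_s(iops, block_size_kib):
    """Throughput in MiB/s implied by an IOPS figure at a given block size."""
    return iops * block_size_kib / 1024

# The same 500 MiB/s of data moved per second, expressed two ways:
small = throughput_mib_s(128_000, 4)   # 128k IOPS at 4 KiB blocks  -> 500.0 MiB/s
large = throughput_mib_s(8_000, 64)    # 8k IOPS at 64 KiB blocks   -> 500.0 MiB/s
```

Both configurations move exactly the same amount of data, yet the first one sounds sixteen times faster if you only compare IOPS.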
Latency does matter!
Latency is a truly critical factor: never accept those huge IOPS numbers without looking at the latency figures.
Latency is how long a single I/O operation takes to complete. As the workload increases, the storage hardware [controller, cache, RAM, CPU, etc.] will try to keep latency consistent, but things are not that ideal: beyond a certain number of IOPS the storage appliance gets exhausted and increasingly busy, delays in serving data start to be noticed by the application, and problems begin.
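The IOPS-versus-latency trade-off follows directly from Little's law: sustainable IOPS equals the number of in-flight I/Os divided by the average latency. A minimal sketch (the queue depth and latency values are hypothetical examples):

```python
def sustainable_iops(queue_depth, avg_latency_s):
    """Little's law: throughput (IOPS) = concurrent in-flight I/Os / avg latency."""
    return queue_depth / avg_latency_s

# 32 outstanding I/Os at 0.5 ms average latency:
healthy = sustainable_iops(32, 0.0005)   # 64000.0 IOPS

# Same queue depth, but latency has blown out to 5 ms under overload:
degraded = sustainable_iops(32, 0.005)   # 6400.0 IOPS
```

This is why pushing a saturated array harder does not buy more IOPS: once latency climbs, the achievable IOPS at a given queue depth falls with it.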
Databases, for example, are very latency-sensitive workloads. They usually need low latency [5ms or lower], especially for writes; otherwise there will be huge performance degradation and business impact.
So if your business is growing and you have noticed degradation in your database performance, you don’t just need storage with a higher IOPS rate but with lower latency as well. This leads us to a side point, storage flexibility, which Buurst can help you with: in just a few steps you can upgrade your storage to whatever numbers satisfy your workload.
How to get storage that will work?
Generally speaking, a storage datasheet is not written with your specific workload in mind, but it can still be relevant and give you an idea of the general performance of the storage, especially if it includes:
1. Several benchmarks with different block sizes and different read/write ratios, for both sequential and random workloads.
2. The number of clients used in each benchmark, the RAID type, and the storage features enabled [compression, deduplication, etc.].
3. The IOPS/latency charts for each of the above cases, which are the most important thing.
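Those IOPS/latency charts are worth reading with your own latency budget in hand. A minimal sketch of the idea, with hypothetical chart points: the "usable" IOPS figure is not the peak of the curve, but the highest point that still meets your latency requirement.

```python
def usable_iops(curve, latency_budget_ms):
    """Given (iops, avg_latency_ms) points from a datasheet chart, return the
    highest IOPS figure that still meets the latency budget."""
    within_budget = [iops for iops, lat in curve if lat <= latency_budget_ms]
    return max(within_budget) if within_budget else 0

# Hypothetical chart: latency stays low, then spikes as the array saturates.
curve = [(50_000, 0.4), (100_000, 0.9), (200_000, 2.1), (400_000, 9.5)]
print(usable_iops(curve, 5.0))   # 200000 -- not the 400k peak the headline quotes
```

In this made-up example the datasheet could honestly advertise 400k IOPS, yet a workload with a 5 ms budget can only count on half of that.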
That is not all: if you are satisfied with those initial metrics, we recommend asking for a PoC to check how the storage performs in your environment and for your specific case.
Buurst will be happy to help you with the sizing and with the PoC, including a trial license.