Make Smart Cloud Storage Decisions for your Data
There is a prevailing notion out there that more is better. If you have a problem, throw “more” at it, whether it be man-hours or money. In the IT world, you think of bandwidth, memory, or storage. While “more” might temporarily solve the problem, over time it becomes increasingly obvious that simply throwing “more” at it is not sustainable. The longer it takes to come to this realization, the more difficulties you will have in reining in the original issue.
This is true of storage as well. With the amount of data generated daily in our personal lives and at the office, obtaining more storage seems like the simple and easy fix. But when your answer to ever-growing data is simply to buy more storage, there is a tendency to dump everything into one storage solution without thinking about how it will be retrieved when needed, or how quickly it needs to be retrieved.
However, many of the problems above could be prevented by thinking a little more about the storage itself, rather than expanding the amount of storage allocated. By determining what type of data you are storing, you can select the type of storage that best suits your needs. There are many factors to consider, certainly more than can be covered in this short blog. But at a high level, we will attempt to give you some points to consider.
Data Type and Performance
What type of data are you trying to store? Is it user data that needs to be retrieved instantly by an app? Files that must be stored for compliance reasons that seldom get accessed? Or something in between? Understanding the type of data and the level of accessibility and performance you need is crucial to finding the right balance of performance and cost.
For archive data that is seldom accessed, lower-performance object storage might be sufficient. You can also compress and deduplicate the data to save space and reduce storage costs, but be aware that this adds overhead on the processing side (CPU and memory) should you need to retrieve it. Be sure this overhead is factored in when determining whether the selected storage will meet your needs. For data that must be accessible immediately, block storage is typically preferred. But block storage comes in different performance tiers as well, and you should understand the performance characteristics of each.
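The space-versus-CPU trade-off is easy to see for yourself. Here is a minimal sketch using Python's standard zlib module; the log-line payload is purely illustrative, and real archive data (and real deduplication engines) will show different ratios and costs, but the principle is the same: the bytes you save on storage are paid for in processing time at write and, crucially, at retrieval.

```python
import time
import zlib

# Hypothetical archive payload: repetitive text, as much archival
# data (logs, exports, backups) tends to be.
data = b"2024-01-01 INFO request served in 12ms\n" * 10_000

start = time.perf_counter()
compressed = zlib.compress(data, level=9)  # max compression = max CPU cost
compress_secs = time.perf_counter() - start

start = time.perf_counter()
restored = zlib.decompress(compressed)     # this cost is paid on every retrieval
decompress_secs = time.perf_counter() - start

assert restored == data  # lossless round trip
ratio = len(compressed) / len(data)
print(f"original:   {len(data):>9} bytes")
print(f"compressed: {len(compressed):>9} bytes ({ratio:.1%} of original)")
print(f"compress:   {compress_secs * 1000:.2f} ms, "
      f"decompress: {decompress_secs * 1000:.2f} ms")
```

For rarely touched archives, that retrieval cost is paid so infrequently that the storage savings win; for hot data served to an app, the same trade can be a performance problem.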
AWS and Azure both know that one storage type does not fit all, and each provides many different storage options. Below are some links to help you choose the right option for you. Each of the options below is supported by Buurst SoftNAS.
- Amazon S3 Storage Classes – https://aws.amazon.com/s3/storage-classes/
- Amazon EBS Volume Types – https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
- Azure Block Storage (Disk Types) – https://docs.microsoft.com/en-us/azure/virtual-machines/disks-types
- Azure Blob (Block Blob) Storage tiers – https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-performance-tiers
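To make the idea concrete, here is a sketch of an S3 lifecycle rule that moves rarely accessed data to cheaper storage classes over time. The rule ID and key prefix are hypothetical; in practice a dict like this is passed to boto3's `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=...)`, which is omitted here since it requires a live bucket and credentials.

```python
# Hypothetical lifecycle policy: keep new data in S3 Standard, then
# step it down to cheaper classes as it ages out of active use.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-reports",          # hypothetical rule name
            "Filter": {"Prefix": "reports/"},     # hypothetical key prefix
            "Status": "Enabled",
            "Transitions": [
                # After 30 days, drop to Infrequent Access pricing.
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                # After 180 days, move to Glacier for long-term archive.
                {"Days": 180, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

rule = lifecycle_config["Rules"][0]
print(rule["ID"], "->", [t["StorageClass"] for t in rule["Transitions"]])
```

The point is that the tiering decision can be encoded once, per access pattern, rather than buying one big pool of premium storage for everything. Azure Blob storage offers an equivalent mechanism via lifecycle management policies on access tiers.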
Regardless of what flavor or variety of cloud storage is implemented, if it is not compatible with your workload, it is no solution at all. For example, legacy workloads may require additional client-side effort to ensure compatibility. Ideally, your cloud solution, whether it is a SaaS, a cloud NAS such as Buurst’s SoftNAS, or another solution, should support a wide variety of protocols so that the application or workload in question does not need to be changed.
This is the hallmark of a smart storage solution. SoftNAS provides support for iSCSI, CIFS/SMB, and NFS protocols (including older versions), ensuring that in most cases, the application will be able to use SoftNAS hosted storage natively. Because it is based on Linux, it also provides native support for FTP and SFTP file transfer protocols. All of this ensures that little to no effort will be required to connect legacy workloads and applications to AWS or Azure cloud storage.
Scalability, one of the key reasons to move to the cloud, also significantly impacts performance. If you are going through a third party such as a SaaS provider, that scalability might be hampered by pricing. In other words, the more storage you use or need, the more you will need to spend.
Because larger deployments carry higher memory and CPU requirements, storage is one area in which it is not always cheaper to ‘buy in bulk’. Nor is it always efficient to store everything in one central location, given the latency between the central repository and satellite locations. It may be cheaper and more efficient to assess usage locally and design smaller cloud repositories in different regions to support those users.
Whatever solution is best for you, Buurst’s SoftNAS is built to scale, supporting petabytes of data while keeping costs down, as the price of the license does not change based on the amount of storage you require. A single SoftNAS instance can support up to 16 petabytes of storage and can communicate across regions. It can even be deployed for high availability across regions, to ensure additional redundancy. Any number of SoftNAS VMs can be deployed across a given organization to serve regional requirements. No matter the storage solution you deploy, this flexibility and scalability must be present for it to be considered “smart” storage.