The Three Strategies to Increase Performance for Your Applications in AWS

This post is based on the presentation “3 Strategies to Increase Performance for Your Applications in AWS.” You can download the full slide deck on SlideShare.

AWS Application Performance

Many customers built these applications in their own data centers, where they were designed around extremely fast storage. As they migrate their line-of-business applications to the cloud, most of those applications work fine because they are not constrained by storage access.

But a few of those applications will not run in the cloud with acceptable performance because of the demand they place on storage: the type of application, the number of connections, and the throughput or latency required to move data between the application and the storage. Those customers weren’t getting the performance they needed, so they tried AWS EFS, they tried AWS FSx, they tried open-source NAS options, and they still weren’t getting the performance until they discovered SoftNAS.

How SoftNAS makes AWS applications run with great performance

SoftNAS has the right levers to pull and buttons to push to make these applications perform well. In this post, we dive into why, backed by statistics from performance benchmarking we ran that prove these points.

Companies trust Buurst for data performance, data migration, cost control, high availability, control, and security. When we think about performance, there are two camps: managed storage services, which are great because someone else manages the storage layer for you, and a cloud NAS (network-attached storage) like SoftNAS, which sits between your cloud and on-premises clients and your storage.


Managed Storage Service

Consider managed storage services like AWS EFS or AWS FSx. First, they serve many different customers from shared infrastructure, so they build in security to keep company A from reaching company B’s data. The flip side is that they limit the amount of throughput each customer gets to the storage, so that no one becomes a noisy neighbor.

On the back end they offer SSD, HDD, and cold storage tiers, so you can always move to a faster disk to get better performance, but that comes with a price. To increase the performance of a managed storage service, there are really only two levers to pull: purchase more storage or purchase more throughput.

Increase AWS Performance by Increasing Storage

This chart is based on the AWS EFS and FSx websites. What it tells me is that the more gigabytes I purchase, the more throughput I get.

For instance, at the very top of the AWS EFS column, if I purchase 10 GB, I get 0.5 MB/s of throughput and it costs me $3 a month. If my solution requires 100 MB/s of throughput, I need to purchase two terabytes of storage. That will cost me $614 a month, and that’s fine.

To get more performance, I can increase the amount of storage. I can also increase my throughput by purchasing provisioned throughput.

In this case, I needed 100 MB/s of throughput, and I can simply purchase that as provisioned throughput. Cost-wise, it works out to roughly the same as buying two terabytes of storage: even if I only have one terabyte of data, provisioning 100 MB/s effectively puts me at the price of two terabytes.

The same applies further down the chart: for 350 MB/s I’m effectively paying for eight terabytes, and for 600 MB/s I’m effectively paying for 16 terabytes of storage to get the throughput I require. Like I said earlier, for a lot of applications that works well and it’s fine. But for the applications where it doesn’t, customers turn to SoftNAS on AWS; they turn to a cloud NAS.
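As a rough illustration of how EFS ties throughput to capacity, the shell arithmetic below estimates baseline throughput and monthly cost from purchased capacity. It assumes the bursting-mode baseline of 50 KiB/s per GiB and roughly $0.30 per GB-month for standard storage; check current AWS pricing before relying on these figures.

    # Estimate EFS bursting-mode baseline throughput and monthly cost
    # Assumptions: 50 KiB/s baseline per GiB stored, ~$0.30 per GB-month
    STORAGE_GIB=2048                              # e.g., 2 TiB purchased
    BASELINE_MIBS=$(( STORAGE_GIB * 50 / 1024 ))  # 50 KiB/s per GiB -> ~100 MiB/s
    COST_USD=$(( STORAGE_GIB * 30 / 100 ))        # ~$614 per month
    echo "${STORAGE_GIB} GiB -> ~${BASELINE_MIBS} MiB/s baseline, ~\$${COST_USD}/month"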

SoftNAS for better AWS Application Performance


With SoftNAS, I can use the most efficient protocol for each workload. For a SQL Server solution I can use iSCSI; for Linux, NFS; for Windows, CIFS. They can all connect to the same storage, which I control and manage from my NAS.
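For illustration, here is how clients might attach to a NAS over each of those protocols from Linux. The addresses, share names, and target names are placeholders, not SoftNAS defaults.

    # NFS (Linux clients) - placeholder NAS address and export path
    sudo mount -t nfs 10.0.0.10:/pool1/vol1 /mnt/vol1

    # CIFS/SMB (Windows-style share, mounted from Linux here for illustration)
    sudo mount -t cifs //10.0.0.10/vol1 /mnt/vol1 -o username=svcaccount,vers=3.0

    # iSCSI (e.g., a LUN for SQL Server): discover the target, then log in
    sudo iscsiadm -m discovery -t sendtargets -p 10.0.0.10
    sudo iscsiadm -m node -T iqn.2024-01.com.example:pool1-lun0 -p 10.0.0.10 --login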

I can use different disk speeds or more disks; for example, more smaller disks rather than fewer larger disks in my RAID array. For this conversation, though, we’re going to focus on two effective ways to increase performance while managing costs.

I can increase the compute instance size, and I can increase the read/write cache of my SoftNAS. Let’s dive into that. In AWS, my NAS runs on a compute instance. With an m3.xlarge, I have 4 vCPUs and 15 GB of RAM, and I get about 100 MB/s of throughput.

That’s the throughput from my NAS to my clients, because all the storage is directly connected to my NAS. With a c5.9xlarge, 36 vCPUs and 72 GB of RAM, I get about 1,200 MB/s of throughput. Again, that’s from my NAS to my clients.

From my NAS to my storage, the storage is directly connected to the NAS, and in AWS I can connect about a petabyte of it. I can also take advantage of caching, and we know how important caching is. On my NAS, I have an L1 cache, which is RAM, and an L2 cache, which is a dedicated disk assigned to a specific storage pool.

With SoftNAS, by default I use half of the available RAM for the L1 cache. If I have 32 GB of RAM, 16 GB of it is available for L1 cache by default.

For L2 cache, I can use NVMe or SSD. For instance, say I have a SQL Server solution with all of its storage in one array. I can dedicate NVMe to the pool serving that SQL Server, and the solution just hums.

Then I could have another pool, say all of my web server data, in another array, with a larger SSD as its cache. I can tune the solution from my NAS controller.
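SoftNAS pools are ZFS-based, so this tuning maps onto standard ZFS concepts: the L1 cache is the ARC in RAM and the L2 cache is an L2ARC device attached to a specific pool. In practice you would configure this from the SoftNAS UI; the raw ZFS commands below are only a sketch, and the pool and device names are placeholders.

    # Attach an NVMe device as L2 (read) cache to the pool backing the SQL Server volumes
    sudo zpool add sqlpool cache /dev/nvme1n1

    # Attach a larger SSD as L2 cache to the pool serving web-server content
    sudo zpool add webpool cache /dev/xvdf

    # Verify the cache devices and watch cache hit rates
    sudo zpool status sqlpool
    sudo arcstat 5        # if the arcstat utility is installed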

We ran a performance benchmark of AWS EFS against SoftNAS on AWS. So many customers were coming to us saying, “We came to you because you gave us the performance we needed,” that we wanted to dig in and figure out why and how.

To summarize the metrics: throughput is a measurement of how fast your storage can read and write data. For IOPS, the higher the number, the faster you can access the data on the disk. Latency is a measurement of the time it takes a component or subsystem to process a data request or transaction.

How did we do it?

We set up a Linux fio server with four Linux fio clients connecting over NFS to either SoftNAS (with no L2 cache) or AWS EFS, with SSD storage on both sides. We tested three tiers: basic, medium, and high.

Basic was 100 MB/s, medium was 350 MB/s, and high was 600 MB/s. On AWS EFS, we purchased the amount of storage needed to provide each level of throughput.
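The exact fio job files from the benchmark are not reproduced here, but a comparable 70/30 random read/write test against an NFS mount looks roughly like this; the mount point, block size, and queue depth are illustrative rather than the settings we used.

    # 70/30 random read/write mix against an NFS-mounted volume (illustrative parameters)
    fio --name=randrw7030 \
        --directory=/mnt/nfs/testdir \
        --rw=randrw --rwmixread=70 \
        --bs=16k --ioengine=libaio --direct=1 \
        --iodepth=32 --numjobs=4 \
        --size=4G --runtime=300 --time_based \
        --group_reporting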

For storage throughput, what we learned was that the more RAM, CPU, and network we gave the NAS, the better the numbers we got. We were also able to provide continuously sustained throughput and predictable performance because, unlike a multi-tenant service, there were no other customers to throttle for.

Throughput (MiB/s) – Higher is Better

What we found was pretty remarkable. On the basic tier we delivered almost twice the performance, and at the high end we delivered 23 times the performance for sequential read/write and for a 70/30 random read/write mix.

On IOPS, again, what we learned was that more CPU, RAM, and network speed gave us better IOPS. We could have used faster disks such as NVMe (that’s how we got to a million IOPS on AWS), or added more disks to an array to aggregate and increase disk IOPS, but we didn’t do either here.

IOPS – Higher is Better

What we found is, again, almost two times the IOPS on the basic tier, around 18 times on the medium tier, and 23 times the performance on the high end: huge numbers that we’re really proud of.

For latency, the question is how long it takes to get data on and off the disk. What we found was no surprise: increasing the CPU, RAM, and network speed decreases latency. I could also have decreased latency by using NVMe, but that adds substantial cost.

Latency – Lower is Better

I could also have decreased latency by using more, smaller disks rather than fewer, larger ones. The results followed the same pattern: about two times lower latency on the basic tier, about 18 times lower in the midrange, and about 23 times lower at the high end.

We’ve published all of that data on our blog at buurst.com. If you want to dive into it, we’d be more than happy to talk with you about how we achieved these numbers, help you replicate them, or give you a demo of how we think we can increase the performance of your specific application.

Let’s talk about the specific kinds of applications where throughput, IOPS, or latency matters most.

For throughput: if you have many client connections, such as a web server array, cloud virtual desktops, or physical desktops on-premises, you have lots of clients accessing the data, and you need more throughput.

Maybe they are accessing video files, office documents, AutoCAD files, or web server content: again, more throughput. One of our great partners, Petronas, had many clients that needed access to content, and SoftNAS was able to handle the number of clients accessing the data.

When we talk about IOPS, we’re talking about small-block transactions, such as a database server or an email server that reads and writes data in very small chunks. We have a great partner, Halliburton, for instance. Their Landmark application for the oil and gas industry works with seismic data, just massive amounts of data.

It has to pull that data off the disk and render it visually for the engineers. With SoftNAS, they were able to import their application and move it to the cloud in record time, and then have it run at a great performance level. Absolutely fantastic.

For latency, think about applications like banking, stock exchanges, and finance that need data in and out as fast as possible, or streaming. Netflix is a great partner of ours. Delivering that kind of solution on a cloud platform like AWS means getting data from the cloud storage through the NAS to the application with as little delay as possible.

How do I increase performance with AWS applications?

We’ve got to understand what the bottlenecks are. How does the AWS application perform? Does it need throughput? Does it need IOPS? Does it need low latency?

How to Increase Performance for Your AWS Application

  1. Understand your solution and where the bottlenecks are
    • Throughput
    • IOPS
    • Latency
  2. Then understand if managed storage will work for you
  3. Reach out to SoftNAS to better understand your AWS performance options to develop a solution to meet the specification of your workload

When we understand that, we can then determine whether managed storage will work for you. If it will, that’s great. If not, come talk to us: meet with a cloud storage performance professional and talk through some ideas, what you’re seeing out there, and why it won’t work. We’ve helped so many customers.

We have helped tons of customers get their applications running on AWS. We were the first NAS in the cloud; we started that whole industry. We have lots of performance blog posts, and we have an e-book.

We have a dedicated performance web page at buurst.com/performance. We’d love to show you our product: get a demo and talk to a performance professional. At Buurst, we are a data performance company. That’s what we live and breathe: providing you the fastest access to your cloud storage for the lowest price.

Try SoftNAS for Better AWS Application Performance

SoftNAS can increase cloud application performance with 23x faster throughput, 18x better IOPS, and 24x less latency than other cloud storage solutions. It provides NAS capabilities suitable for the enterprise on AWS, Azure, VMware, and Red Hat Enterprise Linux (RHEL).

SoftNAS provides unified storage designed and optimized for high performance, higher-than-normal I/O per second (IOPS), and data reliability and recoverability. It also increases storage efficiency through thin provisioning, compression, and deduplication. See a SoftNAS demo.


EBS Backups on AWS

In this blog post, we discuss how to effectively back up and restore your data on SoftNAS running in AWS. EBS backups have always been a significant part of every organization’s plan to protect against mission-critical data loss due to unforeseen circumstances like failed hardware or accidental deletion.

There are two types of backups you can use to accomplish this goal in AWS.

  1. EBS Backup & Restore
  2. AMI Backups on AWS

AWS Backup is a fully managed service that makes it easy to centralize and automate data protection across AWS services, in the cloud and on-premises. Using this service, you can configure backup policies and monitor activity for your AWS resources in one place.

AWS EBS Backup & Restore:

By utilizing the native EBS snapshots feature on AWS, SoftNAS makes it relatively easy to take consistent backups of your ZFS pool(s) from the SoftNAS UI with just the click of a button.

This type of backup lets you periodically back up your entire ZFS pool(s) by taking snapshots of the underlying EBS volume(s) that make up the SoftNAS pool(s), either through the UI or by running a cron job from the CLI (more on this below) to automate the process.

To access this tool, head over to the SoftNAS UI under Settings >> Administrator >> EBS Backup and Restore >> Create to create your first backup. In the screen below, notice that we created our first backup on pool1 using the steps above.

To create a cron job to automate the process, you can follow the instructions here.

Scheduling EBS Snapshots Using Crontab

If using a backup strategy that involves EBS snapshots, the best practice is to schedule a cron job that leverages the EBS Backup and Restore features of SoftNAS.

We now make this script available to you via the softnas-cmd API. The script stores the snapshots in your AWS account under the EC2 Snapshots console, and they also appear in the SoftNAS Storage Manager console in the “AWS EBS Backups and Restore” tab. Please note that the retention policy has been removed, so you can create as many EBS snapshots as you want, and you can manually clean up old EBS snapshots from the UI.

A step-by-step guide using IAM policy:

  1. Make sure the SoftNAS_HA_IAM role is attached to your SoftNAS instance.
  2. As the ‘root’ user, schedule the cron job below. For this example, we use an 8-hour snapshot interval (*/8 in the hour placeholder):
  3. Run “crontab -e” to edit the crontab.
  4. Add this line: “0 */8 * * * softnas-cmd login softnas <password> && softnas-cmd ebs_backup create > /dev/null 2>&1”

A step-by-step guide using AWS Managed KMS Keys:

  1. As either ‘root’ or ‘ec2-user’, schedule the cron job below. Again, we use an 8-hour snapshot interval (*/8 in the hour placeholder):
  2. Run “crontab -e” to edit the crontab.
  3. Add this line: “0 */8 * * * softnas-cmd login softnas <password> && softnas-cmd ebs_backup create '<yourAccessKey>' '<yourSecretKey>' > /dev/null 2>&1”
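To confirm that the scheduled job is producing snapshots, you can also list them from the AWS CLI. This is a generic sketch; the region is a placeholder.

    # List the ten most recent EBS snapshots owned by this account
    aws ec2 describe-snapshots --owner-ids self --region us-east-1 \
        --query 'reverse(sort_by(Snapshots,&StartTime))[:10].[SnapshotId,VolumeId,StartTime,State]' \
        --output table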

AMI Backups on AWS:

This type of backup allows an entire system backup with all its current configuration and data volumes. It is especially useful when you have a failed/unrecoverable instance, and you need to replace it as quickly as possible with all its data and configuration in place.

To create this type of backup, head over to the AWS Console >> EC2 Dashboard >> Instances >> select the instance you want to back up >> Actions >> Image >> Create Image. Here you can define the name of your image (Image Name), a brief description (Image Description), and the No reboot option. This option is significant for ensuring the backup image is free of I/O errors: we recommend not checking the box, which allows the system to be shut down before the backup is taken.
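The same backup can also be scripted with the AWS CLI. This is a minimal sketch with a placeholder instance ID; note that omitting --no-reboot lets EC2 shut down and reboot the instance before imaging, which matches the recommendation above.

    # Create an AMI backup of the SoftNAS instance (placeholder instance ID)
    # Omitting --no-reboot means the instance is rebooted for a consistent image
    aws ec2 create-image \
        --instance-id i-0123456789abcdef0 \
        --name "softnas-ami-backup-$(date +%Y-%m-%d)" \
        --description "SoftNAS configuration and data volume backup"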

To restore from the AMI backup, navigate to the EC2 Dashboard >> Images >> AMIs >> select your AMI and click Launch. Please refer to the screenshots below for a visual walkthrough.

Once the deployment process is complete, you should have a new SoftNAS instance with all your data as of the time of the AMI backup.

SoftNAS is designed to support a variety of market verticals, use cases, and workload types. Increasingly, SoftNAS is deployed on the AWS platform to provide block and file storage services through Common Internet File System (CIFS), Network File System (NFS), Apple File Protocol (AFP), and Internet Small Computer System Interface (iSCSI).

SoftNAS AWS NAS Storage

SoftNAS is a software-defined AWS NAS delivered as a virtual appliance running within Amazon EC2. It provides NAS capabilities suitable for enterprise storage, including multi-availability-zone high availability with automatic failover in the Amazon Web Services (AWS) cloud.

SoftNAS offers AWS customers an enterprise-ready NAS capable of managing your fast-growing data storage challenges, including availability on AWS Outposts. Dedicated SoftNAS features deliver significant cost savings, high availability, lift-and-shift data migration, and a variety of security protections.

Replacing EMC® Isilon®, VNX® and NetApp® with AWS® and Microsoft® Azure

At Buurst, we have customers continually asking for solutions that will allow them to migrate from their existing on-premises storage systems to the AWS and Microsoft Azure clouds. We hear this often from EMC VNX and Isilon customers, and increasingly from NetApp customers as well.

The ask is usually prompted by the arrival of the latest storage array maintenance bill, or by a more strategic decision to close existing data centers and replace them with the public cloud. In other cases, customers have many remote sites, offices, or factories where there’s simply not enough space to maintain all the data at the edge in smaller appliances. The convenience and appeal of the public cloud have become the obvious answer to these problems.

Customers want to trade out hardware ownership for the cloud subscription model and get out of the hardware management business.

Assessing the Environment

One of Isilon’s primary advantages, and what originally drove broad adoption, is its scale-out architecture, which aggregates large pools of file storage with the ability to add more disk drives and storage racks as needed to scale what are effectively monolithic storage volumes. Isilon has been a hugely successful workhorse for many years. As that gear ages, customers face a fork in the road: continue to pay high maintenance costs in the form of vendor license fees, replacing failing disks, and ongoing expansion with more storage arrays over time, or take a different direction. In other cases, the aged equipment has reached the end of its life and/or support period, or will soon, so something must be done.

The options available today are pretty clear:

  • Forklift upgrade and replace the on-premises storage gear with new storage gear,
  • Shift storage and compute to a hyper-converged alternative, or
  • Migrate to the cloud and get out of the hardware business.

Increasingly, customers are ditching not just the storage gear but the entire data center, and at a much faster pace. Interestingly, some are using their storage maintenance budget to fund the migration out of their data center into the cloud (and many report they have money left over after the migration is complete).

This trend started as far back as 2013 for SoftNAS®, when The Street ditched their EMC VNX gear and used the 3-year maintenance budget to fund their entire cloud migration to AWS. You can read more about The Street’s migration here.

Like Isilon, the public cloud provides its own forms of scale-out, “bottomless” storage. The problem is that this cloud storage is not designed to be NFS or file-based but instead is block and object-based storage. The cloud providers are aware of this deficit, and they do offer some basic NFS and CIFS file storage services, which tend to be far too expensive (e.g., $3,600 per TB/year) and lack enterprise NAS features customers rely upon to protect their businesses and data and minimize cloud storage costs. These cloud file services sometimes deliver good performance but are also prone to unpredictable performance due to their shared storage architectures and the multi-tenant access overload conditions that plague them.

How Do Customers Replace EMC VNX, Isilon, and NetApp with SoftNAS®?

Buurst allows you to easily replace EMC VNX, Isilon, and NetApp with SoftNAS Cloud NAS.

As we see in the example below, the SoftNAS virtual NAS appliance includes a Lift and Shift data migration feature that makes data migration into the cloud as simple as point, click, configure, and go. This approach is excellent for up to 100 TB of storage, where data migration timeframes are reasonable over a 1 Gbps or 10 Gbps network link.

The SoftNAS Lift and Shift feature continuously syncs data from the source NFS or CIFS mount points to the SoftNAS storage volumes hosted in the cloud, as shown below. Over-the-wire migration is convenient for projects where production data is constantly changing: it keeps both sides in sync throughout the workload migration process, making it faster and easier to move applications and dependencies from on-premises VMs and servers into their new cloud-based counterparts.
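Lift and Shift is a built-in SoftNAS feature; the loop below is not that feature, just a rough illustration of the continuous-sync pattern it automates, using rsync between placeholder mount points.

    # Generic illustration only: repeatedly sync an on-premises export to a
    # cloud-hosted NAS volume until the final cutover window.
    SRC=/mnt/onprem-export        # placeholder: on-premises NFS mount
    DST=/mnt/cloud-nas-volume     # placeholder: cloud NAS volume mount
    while true; do
        rsync -a --delete --partial "$SRC/" "$DST/"
        sleep 300                 # re-sync every 5 minutes
    done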

In other cases, there could be petabytes of data that need to be migrated, much of that data being inactive data, archives, backups, etc. For large-scale migrations, the cloud vendors provide what we historically called “swing gear” – portable storage boxes designed to be loaded up with data, then shipped from one data center to another – in this case, it’s shipped to one of the hyper-scale cloud vendor’s regional datacenters, where the data gets loaded into the customer’s account. For example, AWS provides its Snowball appliance and Microsoft provides Azure DataBox for swing gear.

If the swing gear path is chosen, the data lands in the customer account in object storage (e.g., AWS S3 or Azure Blobs). Once loaded into the cloud, the SoftNAS team assists customers by bulk loading it into appropriate SoftNAS storage pools and volumes. Sometimes customers keep the same overall volume and directory structure – other times this is viewed as an opportunity to do some badly overdue reorganization and optimization.

Why Customers Choose SoftNAS vs. Alternatives

SoftNAS is an ideal choice for customers looking to host data in the public cloud because it delivers the most cost-effective storage management available. SoftNAS is able to deliver the lowest costs for many reasons, including:

  • No Storage Tax – Buurst SoftNAS does not charge for your storage capacity. You read that right. You get unlimited storage capacity and only pay for additional performance. 
  • Unique Storage Block Tiering – SoftNAS provides multi-level, automatic block tiering that balances peak performance on NVMe and SSD tiers, coupled with low-cost bulk storage for inactive or lazy data in HDD block storage.
  • Superior Storage Efficiencies – SoftNAS includes data compression and data deduplication, reducing the actual amount of cloud block storage required to host your file data.

The net result is customers save up to 80% on their cloud storage bills by leveraging SoftNAS’s advanced cost savings features, coupled with unlimited data capacity.

Summary and Next Steps

Isilon is a popular premises-based file server that has served customers well for many years. VNX is at end of life. NetApp 7-mode appliances are long in the tooth, with full NetApp 7-mode support ending on 31 December 2020 (six months from this posting). Customers running other traditional NAS appliances from IBM®, HP, Quantum, and others are in similar situations.

Since 2013, as the company that created the Cloud NAS category, Buurst SoftNAS has helped migrate thousands of workloads and petabytes of data from on-premises NAS filers of every kind across 39 countries globally. So you can be confident that you’re in good company with the cloud data performance experts at Buurst guiding your way and migrating your data and workloads into the cloud.

To learn more, register for a Free Consultation with one of our cloud experts. To get a free cost estimate or schedule a free cloud migration assessment, please contact the Buurst team.

7 Cloud File Data Management Pitfalls and How to Avoid Them

There are many compelling reasons to migrate applications and workloads to the cloud, from scalability and agility to easier maintenance. But any time IT systems or applications go down, it can prove incredibly costly to the business. Downtime costs between $100,000 and $450,000 per hour, depending on the applications affected. And these costs do not account for the political costs or the damage to a company’s brand and image with its customers and partners, especially if the outage becomes publicly visible and newsworthy.

Cloud File Data Management Pitfalls

“Through 2022, at least 95% of cloud security failures will be the customer’s fault,” says Jay Heiser, research vice president at Gartner. If you want to avoid being in that group, you need to know the pitfalls to avoid. To that end, here are seven traps that companies often fall into and what can be done to avoid them.

1. No data-protection strategy

It’s vital that your company data is safe at rest and in transit, and that it’s recoverable when (not if) the unexpected strikes. The cloud is no different from any other data center or IT infrastructure: it’s built on hardware that will eventually fail, and it’s managed by humans, who are prone to the occasional error, which is what has typically caused the major large-scale cloud outages of the past five years.

Consider the threats of data corruption, ransomware, accidental data deletion due to human error, or a buggy software update, coupled with unrecoverable failures in cloud infrastructure. If the worst should happen, you need a coherent, durable data protection strategy. Put it to the test to make sure it works.

Most native cloud file services provide limited data protection (other than replication) and no protection against corruption, deletion, or ransomware. For example, if your data is stored in EFS on AWS® and files or a filesystem get deleted, corrupted, or encrypted and ransomed, who are you going to call? How will you get your data back and your business restored? If you call AWS Support, you may well get a nice apology, but you won’t get your data back. AWS and all the public cloud vendors provide excellent support, but they aren’t responsible for your data; you are.

As shown below, a Cloud NAS with a copy-on-write (COW) filesystem, like ZFS, does not overwrite data in place. In this oversimplified example, data blocks A through D represent the current filesystem state. These data blocks are referenced via filesystem metadata that connects a file or directory to its underlying data blocks, as shown in (a). Next, a Snapshot is taken, which is simply a copy of those pointers, as shown in (b). This is how “previous versions” work, like the ability on a Mac to use Time Machine to roll back and recover files or an entire system to an earlier point in time.

Any time we modify the filesystem, instead of a read/modify/write of existing data blocks, new blocks are added, as shown in (c). Block D has been modified (copied, then modified and written), so the filesystem pointers now reference block D+, along with two new blocks E1 and E2. Block B has been “deleted” by removing its pointer from the current filesystem tip, yet the actual block B continues to exist unmodified because it is still referenced by the earlier Snapshot.

Copy on write filesystems use Snapshots to support rolling back in time to before a data loss event took place. In fact, the Snapshot itself can be copied and turned into what’s termed a “Writable Clone”, which is effectively a new branch of the filesystem as it existed at the time the Snapshot was taken. A clone contains a copy of all the data block pointers, not copies of the data blocks themselves.

Enterprise Cloud NAS products use COW filesystems and automate the management of scheduled snapshots, providing hourly, daily, and weekly Snapshots. Each Snapshot provides a rapid means of recovery, without rolling a backup tape or using another slow recovery method that can extend an outage by many hours or days and drive downtime costs through the roof.

With COW, snapshots, and writable clones, recovery takes a matter of minutes, minimizing the outage impact and costs when it matters most. Use a COW filesystem that supports snapshots and previous versions. Before selecting a filesystem, make sure you understand what data protection features it provides. If your data and workload are business-critical, ensure the filesystem will protect you when the chips are down; you may not get a second chance if your data is lost and unrecoverable.
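On a ZFS-based Cloud NAS, the snapshot and writable-clone mechanics described above look roughly like the commands below; the dataset names are placeholders, and an enterprise Cloud NAS would normally schedule and prune these snapshots for you.

    # Take an instant, space-efficient snapshot before a risky change
    sudo zfs snapshot pool1/projects@before-upgrade

    # List available previous versions
    sudo zfs list -t snapshot -r pool1/projects

    # Create a writable clone: a new branch of the filesystem at that point in time
    sudo zfs clone pool1/projects@before-upgrade pool1/projects-recovered

    # Or roll the live dataset back to the snapshot (discards later changes;
    # add -r if more recent snapshots exist)
    sudo zfs rollback pool1/projects@before-upgrade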

2. No data-security strategy

It’s common practice for the data in a cloud data center to be comingled and collocated on shared devices with countless other unknown entities. Cloud vendors may promise that your data is kept separately, but regulatory concerns demand that you make certain that nobody, including the cloud vendor, can access your precious business data.

Think about access that you control (e.g., Active Directory), because basic cloud file services often fail to provide the same user authentication or granular control as traditional IT systems. The Ponemon Institute puts the average global cost of a data breach at $3.92 million. You need a multi-layered data security and access control strategy to block unauthorized access and ensure your data is safely and securely stored in encrypted form wherever it may be.

Look for NFS and CIFS solutions that provide encryption for data both at rest and in flight, along with granular access control.
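As a generic client-side example of in-flight encryption, the mount options below are standard Linux options rather than SoftNAS-specific settings, and they assume Kerberos/Active Directory is already configured; the server names are placeholders.

    # NFS with Kerberos privacy (sec=krb5p encrypts traffic in flight)
    sudo mount -t nfs -o sec=krb5p nas.example.internal:/pool1/finance /mnt/finance

    # SMB 3 with encryption requested via the 'seal' option
    sudo mount -t cifs //nas.example.internal/finance /mnt/finance \
        -o username=svcaccount,vers=3.0,seal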

3. No rapid data-recovery strategy

With storage snapshots and previous versions managed by a dedicated NAS appliance, rapid recovery from data corruption, deletion, or other potentially catastrophic events is possible. This is a key reason there are billions of dollars’ worth of NAS appliances hosting on-premises data today.

But few cloud-native storage systems provide snapshotting or easy rollback to previous versions, leaving you reliant on current backups. And when you have many terabytes or more of filesystem data, restoring from a backup can take many hours to days. Restores from backups are clearly not a rapid recovery strategy; they should be the path of last resort, because they are slow, extend the outage by hours to days, and can push the losses into six figures or more.

You need flexible, instant storage snapshots and writable clones that provide rapid recovery and rollback capabilities for business-critical data and applications. Below, previous-version snapshots are represented as colored folders, along with automatic pruning over time. With the push of a button, an admin can clone a snapshot instantly, creating a writable clone of the entire filesystem that shares all the same file data blocks using a new set of cloned pointers. Changes made to the cloned filesystem do not alter the original snapshot data blocks; instead, new data blocks are written via the usual COW filesystem semantics, keeping your data fully protected.

Ensure your data recovery strategy includes “instant snapshots” and “writable clones” using a COW filesystem. Note that what cloud vendors typically call snapshots are actually deep copies of disks, not consistent instant snapshots, so don’t be confused as they’re two totally different capabilities.

4. No data-performance strategy

Shared, multi-tenant infrastructure often leads to unpredictable performance. We hear the horror stories of unpredictable performance from customers all the time. Customers need “sustained performance” that can be counted on to meet SLAs.

Most cloud storage services lack the facilities to tune performance, other than adding more storage capacity, along with corresponding unnecessary costs. Too many simultaneous requests, network overloads, or equipment failures can lead to latency issues and sluggish performance in the shared filesystem services offered by the cloud vendors.

Look for a layer of performance control for your file data that enables all your applications and users to get the level of responsiveness that’s expected. You should also ensure that it can readily adapt as demand and budgets grow over time.

Cloud NAS filesystem products provide the flexibility to quickly adjust the right blend of (block) storage performance, memory for caching read-intensive workloads, and network speed required to push data at the optimal rate. Several “tuning knobs” are available to optimize filesystem performance to best match your workload’s evolving needs, without overprovisioning storage capacity or cost.
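On the client side, a few of those tuning knobs are ordinary NFS mount options. The values below are illustrative starting points rather than recommendations for any particular workload, and nconnect requires a reasonably recent Linux kernel; the server and export names are placeholders.

    # Larger read/write sizes and multiple TCP connections for a throughput-heavy workload
    sudo mount -t nfs -o rsize=1048576,wsize=1048576,nconnect=8,vers=4.1 \
        nas.example.internal:/pool1/renderfarm /mnt/renderfarm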

Look for NFS and CIFS filesystems that offer the full spectrum of performance tuning options that keep you in control of your workload’s performance over time, without breaking the bank as your data storage capacity ramps and accelerates.

5. No data-availability strategy

Hardware fails, people commit errors, and occasional outages are an unfortunate fact of life. It’s best to plan for the worst, create replicas of your most important data and establish a means to quickly switch over whenever sporadic failure comes calling.

Look for a cloud or storage vendor willing to provide an SLA guarantee that matches your business needs and supports the SLA you provide to your customers. Where necessary, create a failsafe with a secondary storage replica, so that your applications experience a rapid HA failover instead of an outage.

In the cloud, you can get 5-9’s high availability from solutions that replicate your data across two availability zones, i.e., five minutes or less of unplanned downtime per year. Ask your filesystem vendor for a copy of their SLA and uptime guarantee to ensure it is aligned with the SLAs your business team must meet for its own customers.

6. No multi-cloud interoperability strategy

As many as 90% of organizations will adopt a hybrid infrastructure by 2020, according to Gartner analysts. There are plenty of positive driving forces as companies look to optimize efficiency and control costs, but you must properly assess your options and the impact on your business. Consider the ease with which you can switch vendors in the future and any code that may have to be rewritten. Cloud platforms entangle you with proprietary APIs and services, but you need to keep your data and applications multi-cloud capable to stay agile and preserve choice.

You may be delighted with your cloud platform vendor today and have no expectations of making a change, but it’s just a matter of time until something happens that causes you to need a multi-cloud capability. For example, your company acquires or merges with another business that brings a different cloud vendor to the table and you’re faced with the need to either integrate or interoperate. Be prepared as most businesses will end up in a multi-cloud mode of operation.

7. No disaster-recovery strategy

A simple mistake where a developer accidentally pushes a code drop into a public repository and forgets to remove the company’s cloud access keys from the code could be enough to compromise your data and business. It definitely happens. Sometimes the hackers who gain access are benign, other times they are destructive and delete things. In the worst case, everything in your account could be affected.

Maybe your provider will someday be hacked and lose your data and backups. You are responsible and will be held accountable, even though the cause is external. Are you prepared? How will you respond to such an unexpected DR event?

It’s critically important to keep redundant, offsite copies of everything required to fully restart your IT infrastructure in the event of a disaster or full-on hacker attack break-in.

The temptation to cut corners and keep costs down with data management is understandable, but it is dangerous, short-term thinking that could end up costing you a great deal more in the long run. Take the time to craft the right DR and backup strategy and put those processes in place, test them periodically to ensure they’re working, and you can mitigate these risks.

For example, should your cloud root account somehow get compromised, is there a fail-safe copy of your data and cloud configuration stored in a second, independent cloud (or at least a different cloud account) you can fall back on?  DR is like an insurance policy – you only get it to protect against the unthinkable, which nobody expects will happen to them… until it does. Determine the right level of DR preparedness and make those investments. DR costs should not be huge in the cloud since most everything (except the data) is on-demand.

Conclusions

We have seen how putting the right data management plans in place ahead of an outage makes the difference between a small blip on the IT and business radar and a potentially lengthy outage that costs hundreds of thousands to millions of dollars, plus the intangible losses and career impacts that can follow. Most businesses that have operated their own data centers know these things, but are the same measures being implemented in the cloud?

The cloud offers many shortcuts to getting operational quickly. After all, the cloud platform vendors want your workloads running and billing hours on their cloud as soon as possible. Unfortunately, choosing naively upfront may get your workloads migrated faster and running on schedule, but in the long run it can cost you and your company dearly.

Use the above cloud file data management strategies to avoid the 7 most common pitfalls. Learn more about how SoftNAS Cloud NAS helps you address all 7 of these data management areas.


Meeting Cloud File Storage Cost and Performance Goals – Harder than You Think

According to Gartner, by 2025, 80% of enterprises will have shut down their traditional data centers; today, 10% already have. We know this is true because we have helped thousands of these businesses migrate workloads and business-critical data from on-premises data centers into the cloud since 2013. Most of those workloads have been running 24x7 for more than five years, and some of them have been digitally transformed (code for “rewritten to run natively in the cloud”).

The biggest challenge in adopting the cloud isn’t the technology shift – it’s finding the right balance of storage cost vs performance and availability that justifies moving data to the cloud. We all have a learning curve as we migrate major workloads into the cloud. That’s to be expected as there are many choices to make – some more critical than others.

Applications in the cloud

Many of our largest customers operate mission-critical, revenue-generating applications in the cloud today. Business relies on these applications and their underlying data for revenue growth, customer satisfaction, and retention. These systems cannot tolerate unplanned downtime. They must perform at expected levels consistently… even under increasingly heavy loads, unpredictable interference from noisy cloud neighbors, occasional cloud hardware failures, sporadic cloud network glitches, and other anomalies that just come with the territory of large-scale data center operations.

In order to meet customer and business SLAs, cloud-based workloads must be carefully designed. At the core of these designs is how data will be handled. Choosing the right file service component is one of the critical decisions a cloud architect must make.

Application performance, costs, and availability

For customers to remain happy, application performance must be maintained. Easier said than done when you no longer control the IT infrastructure in the cloud.

So how does one negotiate these competing objectives around cost, performance, and availability when you no longer control the hardware or virtualization layers in your own data center?  And how can these variables be controlled and adapted over time to keep things in balance? In a word – control. You must correctly choose where to give up control and where to maintain control over key aspects of the infrastructure stack supporting each workload.

One allure of the cloud is that it’s (supposedly) going to simplify everything into easily managed services, eliminating the worry about IT infrastructure forever. For non-critical use cases, managed services can, in fact, be a great solution. But what about when you need to control costs, performance, and availability?

Unfortunately, managed services must be designed and delivered for the “masses”, which means tradeoffs and compromises must be made. And to make these managed services profitable, significant margins must be built into the pricing models to ensure the cloud provider can grow and maintain them.

In the case of public cloud-shared file services like AWS Elastic File System (EFS) and Azure NetApp Files (ANF), performance throttling is required to prevent thousands of customer tenants from overrunning the limited resources that are actually available. To get more performance, you must purchase and maintain more storage capacity (whether you actually need that add-on storage or not). And as your storage capacity inevitably grows, so do the costs. And to make matters worse, much of that data is actually inactive most of the time, so you’re paying for data storage every month that you rarely if ever even access. And the cloud vendors have no incentive to help you reduce these excessive storage costs, which just keep going up as your data continues to grow each day.

After watching this movie play out with customers for many years, working closely with the largest to smallest businesses across 39 countries, at Buurst™ we decided to address these issues head-on. Instead of charging customers what is effectively a “storage tax” on their growing cloud storage capacity, we changed everything by offering Unlimited Capacity. That is, with SoftNAS® you can store an unlimited amount of file data in the cloud at no extra cost (aside from the underlying cloud block and object storage itself).

SoftNAS has always offered both data compression and deduplication, which when combined typically reduce cloud storage by 50% or more. We then added automatic data tiering, which recognizes inactive and stale data and transparently archives it to less expensive storage, saving up to an additional 67% on monthly cloud storage costs.
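Under the hood, these efficiencies map to standard ZFS dataset properties. SoftNAS exposes them through its UI, but a raw sketch with placeholder dataset names looks like this:

    # Enable inline compression and deduplication on a dataset
    sudo zfs set compression=lz4 pool1/shared
    sudo zfs set dedup=on pool1/shared

    # Check how much the stored data actually shrank
    sudo zfs get compressratio,used,logicalused pool1/shared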

Just like when you managed your file storage in your own data center, SoftNAS keeps you in control of your data and application performance. Instead of turning control over to the cloud vendors, you maintain total control over the file storage infrastructure. This gives you the flexibility to keep costs and performance in balance over time.

To put this in perspective, without taking data compression and deduplication into account yet, look at how Buurst SoftNAS costs compare:

SoftNAS vs NetApp ONTAP, Azure NetApp Files, and AWS EFS

These monthly savings really add up. And if your data is compressible and/or contains duplicates, you will save up to 50% more on cloud storage because the data is compressed and deduplicated automatically for you.

Fortunately, customers have alternatives to choose from today:

  1. GIVE UP CONTROL – use cloud file services like EFS or ANF, pay for both performance and capacity growth, and give up control over your data and your ability to deliver on SLAs consistently
  2. KEEP CONTROL – of your data and business with Buurst SoftNAS, balancing storage costs and performance to meet your SLAs and grow more profitably

Sometimes cloud migration projects are so complex and daunting that it makes sense to take shortcuts just to get everything up and running as a first step. We commonly see customers choose cloud file services as an easy first stepping stone for a migration. Then these same customers move on to the next step, optimizing costs and performance to operate the business profitably in the cloud, and they contact Buurst to take back control, reduce costs, and meet SLAs.

As you contemplate how to reduce cloud operating costs while meeting the needs of the business, keep in mind that you face a pivotal decision ahead. Either keep control or give up control of your data, its costs, and performance. For some use cases, the simplicity of cloud file services is attractive and the data capacity is small enough and performance demands low enough that the convenience of files-as-a-service is the best choice. As you move business-critical workloads where costs, performance and control matter, or the datasets are large (tens to hundreds of terabytes or more), keep in mind that Buurst never charges you a storage tax on your data and keeps you in control of your business destiny in the cloud.

Next steps:

Learn more about how SoftNAS can help you maintain control and balance cloud storage costs and performance in the cloud.

What do we mean when we say “No Storage Tax”?

At Buurst, we’re thinking about your data differently, and that means it’s now possible to bring all your data to the cloud and make it cost-effective. We know data is the DNA of your business, which is why we’re dedicated to getting you the best performance possible, with the best cloud economics.

When you approach a traditional storage vendor, regardless of your needs, they will all tell you the same thing: buy more storage. Why is this? These vendors brought their on-premises storage pricing models with them to the cloud, adding unnecessary constraints and driving organizations down slow, expensive paths. These legacy models sneak in what we refer to as a Storage Tax. We see this happen in a number of ways:

    • Tax on data: paying a cloud NAS vendor for storage capacity you’ve already paid the cloud provider for
    • Tax on performance: when you need more performance, they make you buy more storage
    • Tax on capabilities: paying a premium on storage for NAS capabilities


So how do we eliminate the Storage Tax?

Buurst unbundles the cost of storage and performance, meaning you can add performance, without spinning up new storage and vice versa. As shown in the diagrams below, instead of making you buy more storage when you need more performance, Buurst’s pricing model allows you to add additional SoftNAS instances and compute power for the same amount of data. On the opposite side, if you need to increase your capacity, Buurst allows you to attach as much storage behind it as needed, without increasing performance levels.

Why are we determined to eliminate the Storage Tax?

At Buurst, our focus is on providing you with the best performance, availability, cost management, migration, and control, not on how much we can charge you for your data. We’ve carried this through to our pricing model to ensure you’re getting the best all-up data experience on the cloud of your choice. This means ensuring your storage prices remain lower than the competition’s.


Looking at the figures below illustrates this point further:

  Comparison                        Capacity   Buurst SoftNAS   Alternative    Savings with SoftNAS
  NetApp ONTAP Cloud Storage        10TB       $1,992           $4,910         up to 60%
  Azure NetApp Files (ANF)          8TB        $1,389           $2,410         42%
  AWS Elastic File System (EFS)     8TB        $1,198           $2,457         51%
  AWS Elastic File System (EFS)     512TB      $28,798          $157,286       up to 82%

These figures reflect full-stack costs, including compute instances, storage, and NAS capabilities, in a high availability configuration with the use of SmartTiers, Buurst’s dynamic block-based storage tiering. With 10TB of data, SoftNAS customers save up to 60% compared to NetApp ONTAP. At 8TB of data, compared with Azure NetApp Files and Amazon EFS, customers see cost savings of 42% and 51%, respectively. The savings continue to grow as you add more data: compared to Amazon EFS, SoftNAS users can save up to 82% at the 512TB level. Why is this? Because we charge a fixed fee, meaning the more data you have, the more cost-effective Buurst becomes.

But we don’t just offer a competitive pricing model. Buurst customers also experience benefits around:


    • Data performance: fast enough to handle the most demanding workloads, with up to 1 million IOPS on AWS
    • Data migration: point-and-click file transfers at speeds up to 200% faster than TCP/IP over high-latency and noisy networks
    • Data availability: cross-zone high availability with up to 99.999% uptime, asynchronous replication, and EBS snapshots
    • Data control and security: enterprise NAS capabilities in the cloud, including at-rest and in-flight data encryption

At Buurst, we understand that you need to move fast, and so does your data. Our nimble, cost-effective data migration and performance management solution opens new opportunities and capabilities that continually prepare you for success.

Get all the tools you need so day one happens faster and you can be amazing on day two, month two, and even year two. We make your cloud decisions work for you – and that means providing you data control, data performance, cost-management with storage tiering, and security.

To learn how Buurst can help you manage costs on the cloud, download our eBook:

The Hidden Costs of Cloud Storage