Meeting Cloud File Storage Cost and Performance Goals – Harder than You Think

According to Gartner, 80% of enterprises will shut down their traditional data centers by 2025; today, 10% already have. We see this trend firsthand: since 2013, we have helped thousands of these businesses migrate workloads and business-critical data from on-premises data centers into the cloud. Most of those workloads have been running 24 x 7 for 5+ years, and some have been digitally transformed (code for “rewritten to run natively in the cloud”).

The biggest challenge in adopting the cloud isn’t the technology shift – it’s finding the balance of storage cost, performance, and availability that justifies moving data to the cloud. We all have a learning curve as we migrate major workloads into the cloud. That’s to be expected, as there are many choices to make – some more critical than others.

Applications in the cloud

Many of our largest customers operate mission-critical, revenue-generating applications in the cloud today. The business relies on these applications and their underlying data for revenue growth, customer satisfaction, and retention. These systems cannot tolerate unplanned downtime. They must perform at expected levels consistently… even under increasingly heavy loads, unpredictable interference from noisy cloud neighbors, occasional cloud hardware failures, sporadic cloud network glitches, and other anomalies that come with the territory of large-scale data center operations.

In order to meet customer and business SLAs, cloud-based workloads must be carefully designed. At the core of these designs is how data will be handled. Choosing the right file service component is one of the critical decisions a cloud architect must make.

Application performance, costs, and availability

For customers to remain happy, application performance must be maintained. Easier said than done when you no longer control the IT infrastructure in the cloud.

So how does one negotiate these competing objectives around cost, performance, and availability when you no longer control the hardware or virtualization layers in your own data center?  And how can these variables be controlled and adapted over time to keep things in balance? In a word – control. You must correctly choose where to give up control and where to maintain control over key aspects of the infrastructure stack supporting each workload.

One allure of the cloud is that it’s (supposedly) going to simplify everything into easily managed services, eliminating the worry about IT infrastructure forever. For non-critical use cases, managed services can, in fact, be a great solution. But what about when you need to control costs, performance, and availability?

Unfortunately, managed services must be designed and delivered for the “masses”, which means tradeoffs and compromises must be made. And to make these managed services profitable, significant margins must be built into the pricing models to ensure the cloud provider can grow and maintain them.

In the case of public cloud shared file services like AWS Elastic File System (EFS) and Azure NetApp Files (ANF), performance throttling is required to prevent thousands of customer tenants from overrunning the limited resources actually available. To get more performance, you must purchase and maintain more storage capacity – whether you need that add-on storage or not. As your storage capacity inevitably grows, so do the costs. To make matters worse, much of that data is inactive most of the time, so every month you pay to store data you rarely, if ever, access. And the cloud vendors have no incentive to help you reduce these excessive storage costs, which keep rising as your data grows each day.
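
To make that tradeoff concrete, here is a minimal sketch of how a capacity-tied performance model plays out. The 50 MB/s-per-TB scaling rate and the workload numbers are assumptions chosen for illustration, not published figures for EFS or ANF:

```python
# Illustrative sketch of a capacity-tied throughput model, where a shared file
# service grants baseline throughput in proportion to provisioned capacity.
# The scaling rate below is an assumed figure for the example only.

BASELINE_MBPS_PER_TB = 50          # hypothetical throughput granted per TB stored
required_throughput_mbps = 500     # what the workload actually needs
active_data_tb = 2                 # data the workload actually stores

# Capacity you must pay for just to unlock the required throughput:
capacity_needed_tb = required_throughput_mbps / BASELINE_MBPS_PER_TB

print(f"Capacity required for {required_throughput_mbps} MB/s: {capacity_needed_tb:.0f} TB")
print(f"Capacity the workload actually uses: {active_data_tb} TB")
print(f"Paid-but-unused capacity: {capacity_needed_tb - active_data_tb:.0f} TB")
```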

After watching this movie play out for many years and working closely with businesses of all sizes across 39 countries, we at Buurst™ decided to address these issues head-on. Instead of charging customers what is effectively a “storage tax” on their growing cloud storage capacity, we changed everything by offering Unlimited Capacity. That is, with SoftNAS® you can store an unlimited amount of file data in the cloud at no extra cost (aside from the underlying cloud block and object storage itself).

SoftNAS has always offered both data compression and deduplication, which when combined typically reduces cloud storage by 50% or more. Then we added automatic data tiering, which recognizes inactive and stale data, archiving it to less expensive storage transparently, saving up to an additional 67% on monthly cloud storage costs.
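
Here is a rough back-of-the-envelope sketch of how those two effects can stack, using the figures above (a 50% reduction from compression and deduplication, and up to 67% lower cost for tiered data). The dataset size, the share of inactive data, and the per-TB prices are assumptions for illustration only; actual savings depend on how compressible and how cold your data really is:

```python
# Back-of-the-envelope stack-up of the savings described above.
# All inputs are illustrative assumptions, not quoted prices.

raw_tb = 100.0                                # example dataset size
after_reduction_tb = raw_tb * (1 - 0.50)      # 50% saved by compression + dedup

inactive_share = 0.80                         # assume 80% of remaining data is cold
hot_tb = after_reduction_tb * (1 - inactive_share)
cold_tb = after_reduction_tb * inactive_share

block_cost_per_tb = 100.0                     # illustrative $/TB-month, block storage
object_cost_per_tb = block_cost_per_tb * (1 - 0.67)   # tiered storage ~67% cheaper

naive_bill = raw_tb * block_cost_per_tb
optimized_bill = hot_tb * block_cost_per_tb + cold_tb * object_cost_per_tb

print(f"Unoptimized: ${naive_bill:,.0f}/month; optimized: ${optimized_bill:,.0f}/month")
print(f"Overall reduction: {1 - optimized_bill / naive_bill:.0%}")
```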

Just like when you managed your file storage in your own data center, SoftNAS keeps you in control of your data and application performance. Instead of turning control over to the cloud vendors, you maintain total control over the file storage infrastructure. This gives you the flexibility to keep costs and performance in balance over time.

To put this in perspective, without taking data compression and deduplication into account yet, look at how Buurst SoftNAS costs compare:

SoftNAS vs NetApp ONTAP, Azure NetApp Files, and AWS EFS

These monthly savings really add up. And if your data is compressible and/or contains duplicates, you will save up to 50% more on cloud storage because the data is compressed and deduplicated automatically for you.

Fortunately, customers have alternatives to choose from today:

  1. GIVE UP CONTROL – use cloud file services like EFS or ANF, pay for both performance and capacity growth, and give up control over your data and your ability to deliver on SLAs consistently
  2. KEEP CONTROL – keep control of your data and business with Buurst SoftNAS, balancing storage costs and performance to meet your SLAs and grow more profitably

Sometimes cloud migration projects are so complex and daunting that it’s advantageous to take shortcuts just to get everything up and running as a first step. We commonly see customers choose cloud file services as an easy first stepping stone in a migration. Then, when these same customers move to the next step – optimizing costs and performance to operate the business profitably in the cloud – they contact Buurst to take back control, reduce costs, and meet SLAs.

As you contemplate how to reduce cloud operating costs while meeting the needs of the business, keep in mind that a pivotal decision lies ahead: keep control or give up control of your data, its costs, and its performance. For some use cases – where the data capacity is small enough and the performance demands low enough – the simplicity and convenience of files-as-a-service is the best choice. But as you move business-critical workloads where costs, performance, and control matter, or where the datasets are large (tens to hundreds of terabytes or more), remember that Buurst never charges you a storage tax on your data and keeps you in control of your business destiny in the cloud.

Next steps:

Learn more about how SoftNAS can help you maintain control and balance cloud storage costs and performance in the cloud.

7 Things You Need to Know About the #1 Azure Cloud NAS with High Performance NFS and CIFS

Like many of you, we see tremendous growth in customer demand for enterprise NFS and CIFS/SMB with Active Directory integration on Azure. Companies are now betting the farm on Azure with major business-critical, extremely demanding applications and workloads. We’re talking about mission-critical healthcare, e-discovery, government, revenue-generating SaaS, line-of-business applications, and more. These workloads run 24 x 7 x 365 at hundreds-of-terabytes to petabyte scale with up to millions of users – serious applications that are core to these customers’ businesses.

Customers have chosen Buurst SoftNAS since 2013 because it’s the most mature, proven, cloud-native NAS available.

#1 Azure Cloud NAS with High-Performance NFS and CIFS

Here are key reasons why customers trust SoftNAS Cloud NAS with their most important data in the Azure cloud today:

1. Highest-performing NFS and CIFS with Active Directory.

You get control over the level of NFS performance and fully SMB3-compatible CIFS that supports millions of Active Directory objects and native ACLs – with DEDICATED performance into the hundreds of thousands of IOPS. You also get RAM caching and SSD caching to maximize throughput and I/O with minimal latency.

2. Scalable IOPS with arrays of Premium block storage.

Customers get as many IOPS as needed by joining dozens of Premium SSD or standard SSD block devices into aggregated storage pools that can be thin-provisioned into many volumes and shared via NFS, CIFS/SMB, iSCSI, and AFP.

3. Highly-durable, low-cost Azure blob object storage.

SoftNAS provides the only option to leverage Azure blobs for low-cost, highly durable, bottomless archive storage that’s accessible via NFS, CIFS, and iSCSI with no application rewrites or changes.

4. Automatic tiering across block and object storage.

SoftNAS delivers the only patent-pending block auto-tiering across both block and object storage, providing the high performance of SSD block storage and the convenience of automatically tiering inactive data to lower-cost object storage – reducing cloud storage costs by up to 67%.

5. No downtime guarantee Service Level Agreement.

SoftNAS customers need to know we have skin in the game as a true partner who shares the same level of concern and accountability around delivering on up-time, availability, and meeting internal SLAs. Only SoftNAS provides an SLA (ask your other vendors what kind of SLA and money-back guarantee they provide – you’ll be amazed at some of the replies you get).

6. Full-featured Azure Cloud NAS.

Compression, deduplication, storage snapshots, and thin-provisioning reduce storage costs by up to an additional 80%, further reducing cloud operational costs.

7. Built-in data migration tools.

Only SoftNAS Platinum includes built-in data migration tools to live-sync your data, making complex migration projects easier and reducing migration timeframes. Considering Azure Data Box? We’re also working closely with Microsoft to provide auto-sync in conjunction with Data Box for extremely large data sets (> 50 TB).

When performance counts and you need control and the flexibility to adapt, SoftNAS continues to deliver as the mature #1 NAS in the cloud, as it has since 2013.

Learn more about SoftNAS on Azure.

And now you get more than NFS, CIFS, iSCSI, and AFP. With the Platinum Edition, you get the ability to integrate virtually ANY kind of data type via virtually ANY kind of protocol… mind-blowing, I know. It’s just one of the many ways that SoftNAS continues to run circles around the competition by innovating faster and delivering on customer expectations.

SoftNAS Virtual NAS Appliance offers Microsoft Azure customers an enterprise-ready NAS capable of managing their fast-growing data storage challenges. Dedicated features from SoftNAS deliver significant cost savings, high availability, lift-and-shift data migration, and a variety of security protections.

The Cloud’s Achilles Heel – The Network

SoftNAS began its life in the cloud and rapidly rose to become the #1 best-selling NAS in the AWS cloud in 2014, a leadership position we have maintained and continue to build upon today. We and our customers have been operating cloud native since 2013, when we originally launched on AWS. Over that time, we have helped thousands of customers move everything from mission-critical applications to entire data centers of applications and infrastructure into the cloud.  In 2015, we expanded support to Microsoft Azure, which has become a tremendous area of growth for our business.

By working closely with so many customers in greatly varying environments over the years, we’ve learned a lot as an organization – about the challenges customers face in the cloud, and about getting to the cloud in the first place with large volumes of data in the hundreds of terabytes to petabyte scale.

Aside from security, the biggest challenge area tends to be the network – the Internet.  Hybrid cloud uses a mixture of on-premises and public cloud services with data transfers and messaging orchestration between them, so it all relies on the networks.  Cloud migrations must often navigate various corporate networks and the WAN, in addition to the Internet.

The Internet is the data transmission system for the cloud, much as power lines distribute power throughout the electrical grid. While the Internet has certainly improved over the years, it’s still the wild west of networking.

The network is the Achilles heel of the cloud.

Developers tend to assume that the components of an application operate in close proximity to one another; i.e., a few milliseconds away across reliable networks, and that if there’s an issue, TCP/IP will handle retries and recover from any errors. That’s the context many applications get developed in, so it’s little surprise that the network becomes such a sore spot.

In reality, production cloud applications must hold up to higher, more stringent standards of security and performance than when everything ran wholly contained within our own data centers, over leased lines with conditioning and predictable performance. And the business still expects SLAs to be met.

Hybrid clouds increasingly make use of site-to-site VPNs and/or encrypted SSL tunnels through which web services integrate third-party and SaaS sites and interoperate with cloud platform services. Public cloud provider networks tend to be very high quality between their data center regions, particularly when communications remain on the same continent and within the same provider. For those needing low-latency tunnels, AWS Direct Connect and Azure ExpressRoute can provide additional conditioning for a modest fee, if they’re available where you need them.

But what about corporate WANs, which are often overloaded and plagued by latency and congestion? What about all those remote offices, branch offices, global manufacturing facilities and other remote stations that aren’t operating on pristine networks and remain unreachable by cost-effective network conditioning options?

Latency, congestion and packet loss are the usual culprits

It’s easy to overlook the fact that hybrid cloud applications, bulk data transfers and data integrations take place globally. And globally it’s common to see latencies in the many hundreds of milliseconds, with packet loss in the several percent range or higher.

In the US, we take our excellent networks for granted. The rest of the world’s networks aren’t always up to par with what we have grown accustomed to in pure cloud use cases, especially where many remote facilities are located. It’s common to see latency in the 200 to 300 millisecond range when communicating globally. With satellite, wireless or radio communications, latency and packet loss are even greater.

Unfortunately, the lingua franca of the Internet is TCP over IP; that is, TCP/IP. Here’s a chart that shows what happens to TCP/IP in the face of latency and packet loss resulting from common congestion.

[Chart: TCP/IP effective throughput vs. round-trip latency and packet loss]

The X axis represents round trip latency in milliseconds, with the Y axis showing effective throughput in Kbps up to 1 Gbps, along with network packet loss in percent along the right side.  It’s easy to see how rapidly TCP throughput degrades when facing more than 40 to 60 milliseconds of latency with even a tiny bit of packet loss. And if packet loss is more than a few tenths of a percent, forget about using TCP/IP at all for any significant data transfers – it becomes virtually unusable.
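
The shape of that curve follows from a well-known rule of thumb, the Mathis et al. approximation, which estimates steady-state TCP throughput as roughly MSS/RTT × 1.22/√p for packet-loss rate p. The quick sketch below plugs in a few round-trip times at just 0.1% loss; it’s a simplification, not a model of any particular TCP stack:

```python
# Mathis et al. approximation of steady-state TCP throughput:
#   throughput ≈ (MSS / RTT) * (1.22 / sqrt(loss_rate))
# A simplification, but it captures why latency plus even slight loss
# caps a nominal 1 Gbps path at a few Mbps.
from math import sqrt

def tcp_throughput_bps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss_rate))

# 1460-byte segments, 0.1% packet loss, increasing round-trip latency:
for rtt in (10, 40, 100, 300):
    mbps = tcp_throughput_bps(1460, rtt, 0.001) / 1e6
    print(f"RTT {rtt:>3} ms, 0.1% loss -> ~{mbps:6.1f} Mbps effective")
```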

Congestion and packet loss are the real killers for TCP-based communications. And since TCP/IP is used for nearly everything today, they can affect most modern network services and hybrid cloud operations.

This is because the TCP windowing algorithm was designed to prioritize reliable delivery over throughput and performance. Here’s how it works: each time a packet is lost, TCP cuts its “window” buffer size in half, reducing the number of packets in flight and slowing the throughput rate. When operating over less-than-pristine global networks, sporadic packet loss is very common, and it’s problematic when one must transfer large amounts of data to and from the cloud – TCP/IP’s susceptibility to latency and congestion renders it nearly unusable. This well-known problem has been addressed on some networks by deploying specialized “WAN optimizer” appliances, so it isn’t new – it’s one IT managers and architects are all too familiar with and have been combating for many years.
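
A toy simulation of that halving behavior (additive increase, multiplicative decrease) makes the effect easy to see. The loss rate and window sizes here are arbitrary assumptions; the point is only that sporadic loss keeps the average window far below what a high-latency, high-bandwidth path needs:

```python
# Toy AIMD simulation: the congestion window grows by one segment per
# round trip and is halved whenever a round trip experiences loss.
import random

random.seed(42)
cwnd = 10.0          # congestion window, in segments
loss_rate = 0.01     # assume 1% of round trips see a loss event
history = []

for _ in range(10_000):
    if random.random() < loss_rate:
        cwnd = max(cwnd / 2, 1.0)   # multiplicative decrease on loss
    else:
        cwnd += 1.0                 # additive increase per round trip
    history.append(cwnd)

avg = sum(history) / len(history)
# Filling a 1 Gbps path at 100 ms RTT needs ~8,500 segments of 1460 bytes in flight.
print(f"Average congestion window: ~{avg:.0f} segments")
```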

Latency and packet loss turn data transfers from hours into days, and days into weeks and months

So even though we may have paid for a 1 Gbps network pipe, latency and congestion conspire with TCP/IP to limit actual throughput to a fraction of what it would be otherwise; e.g., just a few hundred kilobits per second.  When you are moving gigabytes to terabytes of data to and from the cloud or between remote locations or over the hybrid cloud, what should take minutes takes hours, and days turn into weeks or months.

We regularly see these issues with customers who are migrating large amounts of data from their on-premises datacenters over the WAN and Internet into the public cloud.  A 50TB migration project that should take a few weeks turns into 6 to 8 months, dragging out migration projects, causing elongated content freezes and sending manpower and cost overruns through the roof vs. what was originally planned and budgeted.
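
The arithmetic behind that 50 TB example is straightforward. The effective rates below are assumptions chosen to match the timelines described above, not measurements:

```python
# Transfer-time arithmetic for the 50 TB migration example.
# Effective throughput figures are illustrative assumptions.

def transfer_days(terabytes: float, effective_mbps: float) -> float:
    bits = terabytes * 1e12 * 8
    return bits / (effective_mbps * 1e6) / 86_400   # 86,400 seconds per day

dataset_tb = 50
print(f"At 1 Gbps line rate:          {transfer_days(dataset_tb, 1000):6.1f} days")
print(f"At ~200 Mbps effective:       {transfer_days(dataset_tb, 200):6.1f} days (a few weeks)")
print(f"At ~25 Mbps over a lossy WAN: {transfer_days(dataset_tb, 25):6.1f} days (~6 months)")
```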

As we repeatedly waited for customer data to arrive in the public cloud to complete cloud migration projects involving SoftNAS Cloud NAS, we realized this problem was acute and needed to be addressed. Many customers approached us and asked whether we had thought about helping in this area – as far back as 2014. Several even suggested we take a look at IBM Aspera, which they said was a great solution.

In late 2014, we kicked off what turned into a multi-year R&D project to address this problem area. Our original approach used machine learning to adapt automatically to changing latency and congestion conditions. It failed to yield the kind of results we wanted.

Eventually, we ended up inventing a completely new network congestion algorithm (now patent-pending) that breaks through to achieve the kind of results shown below.

We call this technology “UltraFast™.”

[Chart: UltraFast effective throughput vs. round-trip latency and packet loss]

As the chart shows, UltraFast overcomes both latency and packet loss to achieve 90% or higher throughput, even when facing up to 800 milliseconds of latency and several percent packet loss. Even when packet loss is in the 5% to 10% range, UltraFast continues to get data through these dirty network conditions.

I’ll save the details of how UltraFast does this for another blog post, but suffice it to say here that it uses a “congestion discriminator” that provides the optimization guidance.  The congestion discriminator determines the ideal maximum rate to send packets without causing congestion and packet loss.  And since TCP/IP constantly re-routes packets globally, the algorithm quickly adapts and optimizes for whatever path(s) the data ends up taking over IP networks end-to-end.
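
As a rough idea of the class of technique involved, the sketch below shows generic delay-based rate pacing: probe the send rate upward while the measured round-trip time stays near the path’s base RTT, and back off as soon as queueing delay appears. This is only a conceptual illustration of that general approach, not UltraFast’s actual (unpublished) algorithm, and all thresholds are made-up values:

```python
# Conceptual delay-based rate pacing, NOT UltraFast's algorithm.
# Rising RTT relative to the path's base RTT is treated as a congestion
# signal, so the sender backs off before queues overflow and packets drop.

def adjust_rate(current_rate_mbps: float, base_rtt_ms: float, measured_rtt_ms: float,
                probe_step: float = 1.10, backoff: float = 0.85,
                queueing_threshold_ms: float = 5.0) -> float:
    queueing_delay_ms = measured_rtt_ms - base_rtt_ms
    if queueing_delay_ms > queueing_threshold_ms:
        return current_rate_mbps * backoff    # queueing detected: slow down
    return current_rate_mbps * probe_step     # path looks clear: probe upward

# Example on a 200 ms base-RTT path:
rate = 400.0
rate = adjust_rate(rate, base_rtt_ms=200.0, measured_rtt_ms=212.0)  # backs off
rate = adjust_rate(rate, base_rtt_ms=200.0, measured_rtt_ms=201.0)  # probes upward
print(f"Paced send rate after two measurements: {rate:.0f} Mbps")
```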

What UltraFast means for cloud migrations

We combine UltraFast technology with what we call “Lift and Shift” data replication and orchestration. This combo makes migration of business applications and data into the public cloud from anywhere in the world a faster, easier operation. The user simply answers some questions about the data migration project by filling in some wizard forms, then the Lift and Shift system handles the entire migration, including acceleration using UltraFast.  This makes moving terabytes of data globally a simple job any IT or DevOps person can do.

Additionally, we designed Lift and Shift for “live migration”: once it replicates a full copy of the data from on-premises into the cloud, it keeps refreshing that copy so the data in the cloud remains synchronized with the live production data still running on-premises. And if there’s a network hiccup along the way, everything automatically resumes from where it left off, so the replication job doesn’t have to start over every time there’s a network issue.
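
The resume-from-where-it-left-off behavior comes down to checkpointing transfer progress. Here is a minimal, generic sketch of the idea (not SoftNAS’s actual replication code); the checkpoint file name and chunk size are arbitrary:

```python
# Generic checkpoint-and-resume sketch for a long-running transfer.
# Progress is persisted after every chunk, so a dropped connection
# restarts from the last confirmed offset instead of from zero.
import json
import os

CHECKPOINT = "replication.checkpoint.json"   # arbitrary example path

def load_offset() -> int:
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["bytes_sent"]
    return 0

def save_offset(bytes_sent: int) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump({"bytes_sent": bytes_sent}, f)

def replicate(source_path: str, send_chunk, chunk_size: int = 8 * 1024 * 1024) -> None:
    """Stream source_path through send_chunk(offset, data), checkpointing as it goes."""
    offset = load_offset()
    with open(source_path, "rb") as src:
        src.seek(offset)
        while chunk := src.read(chunk_size):
            send_chunk(offset, chunk)   # caller-supplied network send
            offset += len(chunk)
            save_offset(offset)
```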

Lift and Shift and UltraFast take a lot of the pain and waiting out of cloud migrations and global data movement.  It took us several years to perfect it, but now it’s finally here.

What UltraFast means for global data movement and hybrid cloud

UltraFast can be combined with FlexFiles™, our flexible file replication capabilities, to move bulk data around to and from anywhere globally. Transfers can be point-to-point, one to many (1-M) and/or many to one (M-1). There is no limitation on the topologies that can be configured and deployed.

Finally, UltraFast can be used with Apache NiFi, so that any kind of data can be transferred and integrated anywhere in the world, over any kind of network conditions.

SUMMARY

The network is the Achilles heel of the cloud. Internet and WAN latency, congestion and packet loss undermine hybrid cloud performance, delay timely and cost-effective cloud migrations, and slow global data integration and bulk data transfers.

SoftNAS’ new UltraFast technology, combined with Lift and Shift migration and Apache NiFi data integration and data flow management capabilities, yields a flexible, powerful set of tools for solving what have historically been expensive and difficult problems – with a purely software solution that runs everywhere; i.e., on VMware or VMware-compatible hypervisors and in the AWS and Azure clouds. This powerful combination puts IT in the driver’s seat and in control of its data, overcoming the cloud’s Achilles heel.

NEXT STEPS

Visit Buurst, Inc to learn more about how SoftNAS is used by thousands of organizations around the world to protect their business data in the cloud, achieve a 100% up-time SLA for business-critical applications and move applications, data and workloads into the cloud with confidence.  Register here to learn more and for early access to UltraFast, Lift and Shift, FlexFiles and NiFi technologies.

ABOUT THE AUTHOR

Rick Braddy is an innovator, leader and visionary with more than 30 years of technology experience and a proven track record of taking on business and technology challenges and making high-stakes decisions. Rick is a serial entrepreneur and former Chief Technology Officer of the CITRIX Systems XenApp and XenDesktop group and former Group Architect with BMC Software. During his 6 years with CITRIX, Rick led the product management, architecture, business and technology strategy teams that helped the company grow from a $425 million, single-product company into a leading, diversified global enterprise software company with more than $1 billion in annual revenues. Rick is also a United States Air Force veteran, with military experience in top-secret cryptographic voice and data systems at NORAD / Cheyenne Mountain Complex. Rick is responsible for SoftNAS business and technology strategy, marketing and R&D.

SoftNAS Named Best Cloud NAS Solution in Network World Review


Network World recently reviewed software-based NAS solutions and concluded:

If you’re looking for a cloud-based solution, SoftNAS is your best bet.

SoftNAS 3.3.3, the latest version of our award-winning Cloud NAS solution, differs from the other solutions reviewed by offering both cloud-based and on-premises versions. It’s available on Amazon EC2, Microsoft Azure and, most recently, CenturyLink Cloud.

Eric Geier, freelance tech writer with Network World, provides a thorough review, citing the following about how SoftNAS is typically deployed:

Commonly, SoftNAS is deployed in AWS VPCs serving files to EC2 based servers within the same VPCs. SoftNAS also supports a hybrid cloud model where one SoftNAS instance is deployed on-premise on a local PC in your office and a second instance in an AWS VPC. In this hybrid model, replication occurs from the local SoftNAS to cloud-based SoftNAS for cloud-based disaster recovery.

SoftNAS is quick and easy to configure and set up in minutes, something we strive to achieve with all our products. Eric alluded to this in the review:

Once we configured the EC2 instance of SoftNAS, we could access the web GUI, which they call the SoftNAS StorageCenter, via the Amazon IP or DNS address. After logging in, you’re prompted to register and accept the terms. Then you’re presented with a Getting Started Checklist, which is useful in ensuring you get everything setup.

Read the full review.

Better yet, give it a try yourself.

SoftNAS 30-day Free Trial

Learn more: https://www.softnas.com
Twitter: @SoftNAS
LinkedIn: https://www.linkedin.com/company/softnas