Designing your storage to meet cost and performance goals

Public cloud platforms like AWS and Azure offer several choices for persistent storage. Today I’m going to show you how to leverage a SoftNAS storage appliance with these different storage types to scale your application and meet your specific performance and cost goals in the public cloud. To get started, let’s take a quick look at the storage types in both AWS and Azure and understand their characteristics. The high-performance disk types are more expensive, and the cost decreases as the performance level decreases. Refer to the table below for a quick reference.

References below:

Azure storage types

Amazon EBS volume types

Because I’m going to use SoftNAS as my primary storage controller in AWS or Azure, I can take advantage of all of the different disk types available on those platforms and design storage pools that meet each application’s performance and cost goals. I can create pools using high-performance devices along with pools that utilize magnetic media and object storage. I can even create tiered pools that utilize both SSD and HDD. Along with the flexibility of using different media types in my storage architecture, I can leverage the extra benefits of caching, snapshots, and file system replication that come along with using SoftNAS. There are tons of additional features I could mention, but for this blog post, I’m only going to focus on the types of pools I can create and how to leverage the different disk types.

High-Performance Pools

I’ll use AWS in this example. For an application that requires low latency and high IOPS, we would think about using SSDs like IO1 or GP2 as the underlying medium. Let’s say we need our application to have 9k available IOPS and at least 2TB of available storage. We can aggregate the devices in a pool to get the combined throughput and IO of all the devices, or we can provision a single IO Optimized volume to achieve the performance target. Let’s look at the underlying math and figure out what we should do.

We know that AWS GP2 EBS gives us 3 IOPS per GB of storage. With that in mind, 2TB would only give us 6k IOPS. That’s 3k short of our performance goal. To reach the 9k IOPS requirement, we would either need to provision 3TB of GP2 EBS disk or provision an IO Optimized (IO1) EBS disk with its IOPS set to 9k.
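A quick sketch of this sizing math in Python. The 3-IOPS-per-GB baseline is the figure cited above; the helper names are illustrative, not an AWS API:

```python
# Hypothetical GP2 sizing helpers based on the 3-IOPS-per-GB
# baseline quoted above. Names are illustrative, not an AWS API.
GP2_IOPS_PER_GB = 3

def gp2_iops(size_gb: int) -> int:
    """Baseline IOPS earned by a GP2 volume of the given size."""
    return size_gb * GP2_IOPS_PER_GB

def gp2_size_for_iops(target_iops: int) -> int:
    """Minimum GP2 capacity (GB) needed to hit a target IOPS level."""
    return -(-target_iops // GP2_IOPS_PER_GB)  # ceiling division

print(gp2_iops(2000))           # 2TB of GP2 -> 6000 IOPS, 3k short
print(gp2_size_for_iops(9000))  # 3000 GB (3TB), or a 9k-IOPS IO1 instead
```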

 

Any of these configurations would allow you to achieve this benchmark using Buurst™ SoftNAS.

Throughput Optimized Pools

If your storage IO specification does not require low latency but does require a higher level of throughput, then ST1 type EBS may work well for you. ST1 disk types are less expensive than GP2 or IO1 devices. The same rules apply regarding aggregating the throughput of the devices to meet your throughput requirements. If we look at the specs for ST1 devices (link above), we are allowed up to 500 IOPS per device and a maximum of 500 MiB/s of throughput per device. If we require a 1TB volume that achieves 1 GiB/s of throughput and 1,000 IOPS, then we can design a pool with those requirements as well. It may look something like below:
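The same aggregation rule can be sketched for ST1, assuming the per-device caps cited above (the helper name is made up, and real pool layouts will also depend on capacity needs):

```python
# Rough ST1 pool sizing, assuming the per-device limits quoted
# above: 500 IOPS and 500 MiB/s each. Not an AWS API.
import math

ST1_MAX_IOPS = 500
ST1_MAX_MIBS = 500  # MiB/s per device

def st1_devices_needed(target_iops: int, target_mibs: int) -> int:
    """Devices to aggregate so the pool meets both targets."""
    return max(math.ceil(target_iops / ST1_MAX_IOPS),
               math.ceil(target_mibs / ST1_MAX_MIBS))

# 1,000 IOPS and 1 GiB/s (1,024 MiB/s) of throughput:
print(st1_devices_needed(1000, 1024))  # 3
```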

Pools for Archive and Less Frequently Accessed Data

If you require storing backups on disk or have a data set that is not frequently accessed, then you could save money by storing this data set on less expensive storage. Your options are going to be magnetic media or object storage, and SoftNAS can help you out with that too. HDD in Azure or SC1 in AWS are good options for this. You can combine devices to achieve high capacity for this infrequently accessed or archival data. Throughput on the HDD type devices is limited to 250 MiB/s, but the capacity is higher, and the cost is much less compared to SSD type devices. If we needed 64TB of cold storage in AWS, it might look like below. The largest device in AWS is 16TB, so we will use four.
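The device count is simple ceiling arithmetic over the largest available device size (16TB, as noted above):

```python
# Sizing the cold pool: with 16TB as the largest single device,
# 64TB of archive capacity takes four devices.
import math

MAX_DEVICE_TB = 16

def cold_devices(total_tb: int) -> int:
    """Number of max-size devices needed for the target capacity."""
    return math.ceil(total_tb / MAX_DEVICE_TB)

print(cold_devices(64))  # 4
```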

Tiered Pools

Finally, I will mention Tiered Pools. Tiered pools are a feature you can use in BUURST™ SoftNAS, whereby you can have different levels of performance all within the same pool. When you set up a tiered pool on SoftNAS, you can have a ‘hot’ tier made of fast SSD devices along with a ‘cold’ tier that is made of slower, less expensive HDD devices. You set block-level age policies that enable the less frequently accessed data to migrate down to the cold tier HDD devices while your frequently accessed data will remain in the hot tier on the SSD devices. Let’s say we want to provision 20TB of storage. We think that about 20% of our data would be active at any time, and the other 80% could be on cold storage. An example of what that tiered pool may look like is below.

The tier migration policy has the following configuration:

  • Maximum block age: Age limit of blocks in seconds.
  • Reverse migration grace period: If a block is requested from the lower tier within this period, it will be migrated back up.
  • Migration interval: Time in seconds between checks.
  • Hot tier storage threshold: If the hot tier fills to this level, data is migrated off.
  • Alternate block age: Additional age used to migrate blocks when the hot tier becomes full.
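The 20TB example and the policy knobs above can be sketched as follows. The split is the 20% hot / 80% cold ratio from the text; the policy keys mirror the bullet list but are hypothetical names with example values, not SoftNAS’s actual configuration fields:

```python
# Illustrative sketch of the 20TB tiered pool described above,
# split 20% hot (SSD) / 80% cold (HDD). Policy keys and values
# are examples only, not SoftNAS's real configuration schema.

def tier_sizes(total_tb: float, hot_fraction: float) -> dict:
    hot = total_tb * hot_fraction
    return {"hot_tb": hot, "cold_tb": total_tb - hot}

migration_policy = {
    "max_block_age_s": 86_400,          # age limit before migrating down
    "reverse_migration_grace_s": 3_600, # re-promote if read within this window
    "migration_interval_s": 300,        # time between migration checks
    "hot_tier_threshold_pct": 80,       # start migrating when hot tier this full
    "alternate_block_age_s": 3_600,     # stricter age if the hot tier fills
}

print(tier_sizes(20, 0.20))  # 4TB hot SSD, 16TB cold HDD
```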

Summary

If you are looking for a way to tune your storage pools based on your application requirements, then you should try SoftNAS. It gives you the flexibility to leverage and combine different storage media to achieve the cost, performance, and scalability you are looking for. Feel free to reach out to the Buurst™ sales team for more information.

Webinar Recap – Three Ways to Slash Your Enterprise Cloud Storage Cost

Above is a recording, and what follows is a full transcript of the webinar, “Three Ways to Slash Your Enterprise Cloud Storage Cost.” You can download the full slide deck on SlideShare.

My name is Jeff Johnson. I’m the head of Product Marketing here at Buurst. In this webinar, we will talk about three ways to slash your Enterprise cloud storage cost.

Companies trust Buurst for data performance, data migration, data availability, data control and security, and what we are here to talk about today is data cost control. We think about the storage vendors out there. The storage vendors want to sell more storage.

At Buurst, we are a data performance company. We take that storage, and we optimize it; we make it perform. We are not driven or motivated to sell more storage. We just want that storage to run faster.

We are going to take a look at how to avoid the pitfalls and the traps the storage vendors use to drive revenue, how to prevent being charged or overcharged for storage you don’t need, and how to reduce your data footprint.

Data is increasing every year. 90% of the world’s data has been created over the last two years. Every two years, that data is doubling. Today, IT budgets are shifting. Data centers are closing – they are trying to leverage cloud economics – and IT is going to need to save money at every single level of that IT organization by focusing on this data, especially data in the cloud, and saving money.

We say now is your time to be an IT hero. There are three things that we’re going to talk about in today’s webinar.

 

We are going to be looking at all the tools and capabilities that you have for on-premises solutions and moving those into the cloud and trying to figure out which of those solutions are already in the cloud or not.

We’ll be taking a look at reducing the total cost of acquisition. That’s just pure Excel-spreadsheet cloud storage numbers: which cloud storage to use that doesn’t tax you on performance. Speaking of performance, we’ll also look at reducing the cost of performance, because some people want to maintain performance but have less expense.

I bet you we could even figure out how to have better performance with less cost. Let’s get right down into it.

Reducing that cost by optimizing data

We think about all of these tools and capabilities that we’ve had on our NAS, on our on-premise storage solutions over the years. We expect those same tools, capabilities, and features to be in that cloud storage management solution, but they are not always there in cloud-native storage solutions. How do you get that?

Well, that’s probably pretty easy to figure out. The first one we’re going to talk about is deduplication. This is inline deduplication. The files are compared block by block to see which blocks we can eliminate and just leave a pointer there. To the end-user, it looks like they have the file, but duplicate blocks are stored only once.

 

 

In most cases, we can reduce that data storage by 20 to 30%, and this becomes especially critical for our cloud storage.

The next one we have is compression. With compression, we reduce the number of bits needed to represent the data. Typically, we can reduce storage costs by 50 to 75%, depending on the types of files that can be compressed, and this is turned on by default in SoftNAS.

 

 

The last one we want to talk about is data tiering. 80% of data is rarely used past 90 days, but we still need it. With SoftNAS, we have data tiering policies, or aging policies, that can move data from more expensive, faster storage to less expensive storage, even all the way back to ice-cold storage.

 

 

We could gain some efficiency in this tiering, and for a lot of customers, we’ve reduced their storage cost with an active data set by 67%.

What’s crazy is when we add all these together. If I take a look at 50 TB of storage at 10 cents per GiB, that’s $5,000 a month. If I dedupe that by just 20%, it comes down to $4,000 a month. If I then compress that by 50%, I can get it down to $2,000 a month. Then if I tier that with 20% SSD and 80% HDD, I can get down to $1,000 a month, reducing my overall cost by 80%, from $5,000 to $1,000 a month.
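That arithmetic, step by step. The tiering step assumes an HDD rate of about 37.5% of the SSD rate, which is what makes the talk’s rounded $1,000 figure come out; the exact blend depends on the HDD price you actually pay:

```python
# Cumulative savings from the paragraph above.
cost = 50 * 1000 * 0.10       # 50 TB at $0.10/GiB -> $5,000/month
cost *= (1 - 0.20)            # 20% deduplication  -> $4,000
cost *= (1 - 0.50)            # 50% compression    -> $2,000
# Tier 20% SSD / 80% HDD; assuming HDD at 37.5% of the SSD rate
# reproduces the talk's rounded $1,000 figure.
hot, hdd_ratio = 0.20, 0.375
cost *= hot + (1 - hot) * hdd_ratio
print(f"${round(cost):,}/month")  # $1,000/month
```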

Again, not everything is equal out in the cloud. With SoftNAS, obviously, we have dedup, compression, and tiering. With AWS EFS, they do have tiering – great product. With AWS FSx, they have deduplication but not compression and tiering. Azure Files doesn’t have that.

Actually, with AWS infrequent access storage, they charge you to write to and read from that cold storage. They charge a penalty to use the data that’s already in there. Well, that’s great.

Reducing the total cost of acquisition: just use the cheapest storage.

Now I see a toolset here that I’ve used on-premise. I’ve always used dedupe on-premises. I’ve always used compression on-premises. I might have used tiering on-premises, but it’s really like NVME type of disk, and that’s great.

I see the value in that, but TCA is a whole different ballpark. It’s self-managed versus managed. It’s different types of disks to choose from. Like I said earlier, it’s just Excel-spreadsheet stuff: what do they charge, what do I charge, and who has the least cost.

We look at this in two different buckets. We have self-managed storage, like NVMe disks and block storage, and we have managed storage as a service, like EFS, FSx, and Azure Files.

If we drill down that a little bit, there are still things that you need to do and there are things that your managed storage service will do for you. For instance, of course, if it’s self-managed, you need to migrate the data, mount the data, grow the data, share the data, secure the data, backup the data. You have to do all those things.

 

 

Well, what are you paying for? Because if I have a managed storage service, I still have to migrate the data. I have to mount the data. I have to share and secure the data. I have to recover the data, and I have to optimize that data. What am I really getting for that price?

The price is: block storage on AWS is 10 cents per GiB per month; in Azure, it’s 15 cents per GiB per month. For those things I’m trying to alleviate – securing, migrating, mounting, sharing, recovery – I am still going to pay 30 cents for EFS, three times the price of AWS SSD; or 23 cents for FSx; or 24 cents for Azure Files. I’m paying a premium for the storage, but I still have to do a lot at the management layer.

 

 

If we dive a little deeper into all that: EFS is really designed for NFS connectivity, so my Linux clients. FSx is designed for Windows clients with SMB (CIFS), and Azure Files likewise serves SMB. That’s interesting.

If I have Windows and Amazon, if I have Windows and Linux clients, I have to have an EFS account and an FSx account. That’s fine. But wait a second. This is a shared access model. I’m in contention with all the other companies who have signed up for EFS.

Yeah, they are going to secure my data, so company one can’t access company two’s data, but we’re all in line for the contention of that storage. So what do they do to protect me and to give me performance? Yeah, it’s shared access.

They’ll throttle all of us, but then they’ll give us bursting credits and bursting policies. They’ll charge me for extra bursting, or I can just pay for increased performance, or I can just buy more storage and get more performance.

At best, I’ll have an inconsistent experience. Sometimes I’ll have what I expect. Other times, I won’t have what I expect – in a negative way. For sure, I’ll have all of the scalability, all the stability and security with these big players. They run a great ship. They know how to run a data center better than all on-premises data centers combined.

But we compare that to self-managed storage. Self-managed, you have a VM out there, whether it’s Linux or Windows, and you attach that storage. This is how we attached storage back in the ‘80s or ‘90s, with a client-server with all its attached storage. That wasn’t a very great way to manage that environment.

Yeah, I had dedicated access, consistent performance, but it wasn’t very scalable. If I wanted to add more storage, I had to get a screwdriver, pop the lid, add more disks, and that is not the way I want to run a data center. What do we do?

We put a NAS in between all of my storage and my clients. We’re doing the same thing with SoftNAS in the cloud. With SoftNAS, we have an NFS protocol, CIFS protocol, or we use iSCSI to connect just the VMs of my company to the NAS and have the NAS manage the storage out to the VMs. This gives me dedicated access to storage, a consistent and predictable performance.

 

 

The performance is dictated by the NAS. The bigger the NAS, the faster the NAS. The more RAM and the more CPU the NAS has, the faster it will deliver that data down to the VMs. I will get that Linux and Windows environment with scalability, stability, and security. Then I can also make that highly available.

I can have duplicate environments that give me data performance, data migration, data cost control, data availability, data control, and security through this complete solution. But you’re looking at this and going, “Yeah, that’s double the storage, that’s double the NAS.” How does that work when you’re talking about Excel spreadsheets kind of data?

 

 

Alright. We know that EBS storage is 10 cents per GiB per month and EFS storage is 30 cents per GiB per month. The chart expands as I add more terabytes to my solution.

If I add a redundant set of block storage and a redundant set of VMs, and then I turn on dedupe and compression, and then I turn on my tiering, the price of the SoftNAS solution is so much smaller than what you pay for storage that it doesn’t affect the storage cost much. This is how we’re able to save companies huge amounts of money per month on their storage bill.

 

 

This could be the single most important thing you do this year because most of the price of a cloud environment is the price of the storage, not the compute, not the RAM, not the throughput. It’s the storage.

If I can reduce and actively manage, compress, optimize that data and tier it, and use cheaper storage, then I’ve done the appropriate work that my company will benefit from. On the one hand, it is all about reducing costs, but there is a cost to performance also.

Reducing the Cost of Performance

No one’s ever come to me and said, “Jeff, will you reduce my performance.” Of course not. Nobody wants that. Some people want to maintain performance and lower costs. We can actually increase performance and lower costs. Let me show you how that works.

We’ve been looking at this model throughout this talk. We have EBS storage at 10 cents with a NAS, a SoftNAS between the storage and the VMs. Then we have this managed storage like EFS with all of the other companies in contention with that storage.

It’s like me working from home, on the left-hand side, and having a consistent experience to my hard drive from my computer. I know how long it takes to boot. I know how long it takes to launch an application. I know how long it takes to do things.

But if my computer is at work in the office and I had to hop in a freeway, I’m in contention with everybody else who’s going to work who also needs to work on their hard drive at the computer in their office. Some days the traffic is light and fast, some days it’s slow, some days there’s a wreck, and it takes them twice as long to reach there. It’s inconsistent. I’m not sure what I am paying for.

If we think about what EFS does for performance – and this is based on their website – you get more throughput the more storage you have. I’ve seen ads and blog articles from a lot of developers about this.

They say, “If I need 100 MB of throughput for my solution and I only need one terabyte worth of data, I’ll put an extra terabyte of dummy data out there on my share so that I can get the performance I want.” I put another terabyte at 30 cents per GiB per month that I’m not even going to use just to get the performance that I need.
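The “dummy data” math works out like this, assuming the EFS baseline rate of the time, roughly 50 MiB/s per TiB stored (check current AWS documentation before relying on this figure; the helper name is made up):

```python
# How capacity-linked throughput forces over-provisioning. Assumes
# a baseline of ~50 MiB/s per TiB stored, the EFS rate at the time.
import math

BASELINE_MIBS_PER_TIB = 50

def tib_for_throughput(target_mibs: float) -> int:
    """TiB you must store to earn the target baseline throughput."""
    return math.ceil(target_mibs / BASELINE_MIBS_PER_TIB)

needed = tib_for_throughput(100)  # want 100 MiB/s -> must store 2 TiB
dummy = needed - 1                # only 1 TiB of it is real data
print(needed, dummy)              # 2 1
```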

Then there’s bursting, then there is throttling, and then it gets confusing. We are so focused on delivering performance. SoftNAS is a data-performance company. We have levels or scales of performance, from 200, 400, and 800 up to 6,400. Those relate to the throughput and IOPS that you can expect from the solution.

We are using storage that’s only 10 cents per GiB on AWS. It’s dedicated performance: you determine the performance you need and then buy that solution. On Azure, it’s a little different. Their denominator for performance is vCPUs: a 200 is 2 vCPUs, and a 1,600 is 20 vCPUs. Then we publish the IOPS and throughput that you can expect for your solution.

Of course, to reduce the cost of performance, use a NAS to deliver the storage in the cloud. Use predictable performance. Use attached storage with a NAS. Use a RAID configuration. You can tune read and write cache through the disks you choose, or, with a NAS, through the amount of RAM that you use.

Pay for performance. Don’t pay more for the capacity to get the performance. We just took a real quick look at three ways to slash your storage cost – optimizing that storage with dedupe, compression, and tiering, making less expensive storage work for you, right, and then reducing the cost of performance. Pay for the performance you need, not for more storage to get the performance you need.

What do you do now? You could start a free trial on AWS or Azure. You can schedule a performance assessment, where you talk with one of our dedicated people who do this 24/7 to look at how to get you the most performance at the lowest price.

We want to do what’s right by you. At Buurst, we are a data-performance company. We don’t charge for storage. We don’t charge for more storage. We don’t charge for less storage. We want to deliver the storage you paid for.

 

 

You pay for the storage from Azure or AWS. We don’t care if you attach a terabyte or a petabyte, but we want to give you the performance and availability that you expect from an on-premises solution. Thank you for today. Thank you for your time.

At Buurst, we’re a data-performance company. It’s your time to be this IT hero and save your company money. Reach out to us. Get a performance assessment. Thank you very much.   

Meeting Cloud File Storage Cost and Performance Goals – Harder than You Think

According to Gartner, by 2025 80% of enterprises will shut down their traditional data centers. Today, 10% already have. We know this is true because we have helped thousands of these businesses migrate workloads and business-critical data from on-premises datacenters into the cloud since 2013. Most of those workloads have been running 24 x 7 for 5+ years. Some of them have been digitally-transformed (code for “rewritten to run natively in the cloud”).

The biggest challenge in adopting cloud isn’t the technology shift – it’s finding the right balance of cost vs. performance and availability that justifies moving to the cloud. We all have a learning curve as we migrate major workloads into the cloud. That’s to be expected as there are many choices to make – some more critical than others.

Many of our largest customers operate mission-critical, revenue-generating applications in the cloud today. Business relies on these applications and their underlying data for revenue growth, customer satisfaction, and retention. These systems cannot tolerate unplanned downtime. They must perform at expected levels consistently… even under increasingly heavy loads, unpredictable interference from noisy cloud neighbors, occasional cloud hardware failures, sporadic cloud network glitches, and other anomalies that just come with the territory of large scale datacenter operations.

In order to meet customer and business SLAs, cloud-based workloads must be carefully designed. At the core of these designs is how data will be handled. Choosing the right file service component is one of the critical decisions a cloud architect must make.

For customers to remain happy, application performance must be maintained. Easier said than done when you no longer control the IT infrastructure in the cloud…

So how does one negotiate these competing objectives around cost, performance, and availability when you no longer control the hardware or virtualization layers in your own datacenter?  And how can these variables be controlled and adapted over time to keep things in balance? In a word – control. You must correctly choose where to give up control and where to maintain control over key aspects of the infrastructure stack supporting each workload.

One allure of the cloud is that it’s (supposedly) going to simplify everything into easily managed services, eliminating the worry about IT infrastructure forever. For non-critical use cases, managed services can, in fact, be a great solution. But what about when you need to control costs, performance, and availability? Unfortunately, managed services must be designed and delivered for the “masses”, which means tradeoffs and compromises must be made. And to make these managed services profitable, significant margins must be built into the pricing models to ensure the cloud provider can grow and maintain them.

In the case of public cloud shared file services like AWS® Elastic File System (EFS) and Azure NetApp® Files (ANF), performance throttling is required to prevent thousands of customer tenants from overrunning the limited resources that are actually available. To get more performance, you must purchase and maintain more storage capacity (whether you actually need that add-on storage or not). And as your storage capacity inevitably grows, so do the costs. And to make matters worse, much of that data is actually inactive most of the time, so you’re paying for data storage every month that you rarely if ever even access. And the cloud vendors have no incentive to help you reduce these excessive storage costs, which just keep going up as your data continues to grow each day.

After watching this movie play out with customers for many years and working closely with the largest to smallest businesses across 39 countries, at Buurst™ we decided to address these issues head-on. Instead of charging customers what is effectively a “storage tax” for their growing cloud storage capacity, we changed everything by offering Unlimited Capacity. That is, with Buurst SoftNAS® you can store an unlimited amount of file data in the cloud at no extra cost (aside from the underlying cloud block and object storage itself).

SoftNAS has always offered both data compression and deduplication, which when combined typically reduces cloud storage by 50% or more. Then we added automatic data tiering, which recognizes inactive and stale data, archiving it to less expensive storage transparently, saving up to an additional 67% on monthly cloud storage costs.

Just like when you managed your file storage in your own datacenter, SoftNAS keeps you in control of your data and application performance. Instead of turning control over to the cloud vendors, you maintain total control over the file storage infrastructure. This gives you the flexibility to keep costs and performance in balance over time.

To put this in perspective, without taking data compression and deduplication into account yet, look at how Buurst SoftNAS costs compare:

Buurst SoftNAS vs. NetApp ONTAP, Azure NetApp Files and AWS EFS

These monthly savings really add up. And if your data is compressible and/or contains duplicates, you will save up to 50% more on cloud storage because the data is compressed and deduplicated automatically for you.

Fortunately, customers have alternatives to choose from today:

  1. GIVE UP CONTROL – use cloud file services like EFS or ANF, pay for both performance and capacity growth, give up control over your data or ability to deliver on SLAs consistently
  2. KEEP CONTROL – of your data and business with Buurst SoftNAS, balance storage costs, and performance to meet your SLAs and grow more profitably.

Sometimes cloud migration projects are so complex and daunting that it’s advantageous to take shortcuts to get everything up, running, and operational as a first step. We commonly see customers choose cloud file services as an easy first stepping stone to a migration. Then these same customers proceed to the next step – optimizing costs and performance to operate the business profitably in the cloud – and they contact Buurst to take back control, reduce costs, and meet SLAs.

As you contemplate how to reduce cloud operating costs while meeting the needs of the business, keep in mind that you face a pivotal decision ahead. Either keep control or give up control of your data, its costs, and performance. For some use cases, the simplicity of cloud file services is attractive and the data capacity is small enough and performance demands low enough that the convenience of files-as-a-service is the best choice. As you move business-critical workloads where costs, performance and control matter, or the datasets are large (tens to hundreds of terabytes or more), keep in mind that Buurst SoftNAS never charges you a storage tax on your data and keeps you in control of your business destiny in the cloud.

Next steps:

Learn more about how Buurst SoftNAS can help you maintain control and balance cloud storage costs and performance in the cloud.

What do we mean when we say “No Storage Tax”?

At Buurst, we’re thinking about your data differently, and that means it’s now possible to bring all your data to the cloud and make it cost effective. We know data is the DNA of your business, which is why we’re dedicated to getting you the best performance possible, with the best cloud economics.

When you approach a traditional storage vendor, regardless of your needs, they will all tell you the same thing: buy more storage. Why is this? These vendors took their on-premises storage pricing models to the cloud with them, but they add unnecessary constraints, driving organizations down slow, expensive paths. These legacy models sneak in what we refer to as a Storage Tax. We see this happen in a number of ways:

  • Tax on data: when you pay for storage to a cloud NAS vendor for the storage you’ve already paid for
  • Tax on performance: when you need more performance, they make you buy more storage
  • Tax on capabilities: when you pay a premium on storage for NAS capabilities

So how do we eliminate the Storage Tax?

Buurst unbundles the cost of storage and performance, meaning you can add performance, without spinning up new storage and vice versa. As shown in the diagrams below, instead of making you buy more storage when you need more performance, Buurst’s pricing model allows you to add additional SoftNAS instances and compute power for the same amount of data. On the opposite side, if you need to increase your capacity, Buurst allows you to attach as much storage behind it as needed, without increasing performance levels.

Why are we determined to eliminate the StorageTax?

At Buurst, our focus is providing you with the best performance, availability, cost management, migration, and control – not how much we can charge you for your data. We’ve carried this through our pricing model to ensure you’re getting the best all-up data experience on the cloud of your choice. This means ensuring your storage prices remain lower than the competition.

 

The figures below illustrate this point further:

  • NetApp ONTAP™ Cloud Storage, 10TB: Buurst’s SoftNAS $1,992 vs. ONTAP $4,910 (60% savings)
  • Azure NetApp Files, 8TB: Buurst’s SoftNAS $1,389 vs. ANF $2,410 (42% savings)
  • AWS Elastic File System, 8TB: Buurst’s SoftNAS $1,198 vs. AWS EFS $2,457 (51% savings)
  • AWS Elastic File System, 512TB: Buurst’s SoftNAS $28,798 vs. AWS EFS $157,286 (82% savings)

These figures reflect full-stack costs, including compute instances, storage, and NAS capabilities, in a high availability configuration, with the use of SmartTiers, Buurst’s dynamic block-based storage tiering. With 10TB of data, SoftNAS customers save up to 60% compared to NetApp ONTAP. At 8TB of data, compared with Azure NetApp Files and Amazon EFS, customers see cost savings of 42% and 51%, respectively. The savings continue to grow as you add more data over time: compared to Amazon EFS, SoftNAS users can save up to 82% at the 512TB level. Why is this? Because we charge a fixed fee, meaning the more data you have, the more cost-effective Buurst will be.
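You can recompute the quoted percentages from the monthly figures themselves (the 10TB ONTAP case works out to just over 59%, which the text rounds to its “up to 60%” claim):

```python
# Savings implied by the monthly price comparisons above.
figures = {
    "NetApp ONTAP, 10TB":      (1992, 4910),
    "Azure NetApp Files, 8TB": (1389, 2410),
    "AWS EFS, 8TB":            (1198, 2457),
    "AWS EFS, 512TB":          (28798, 157286),
}
for name, (softnas, alternative) in figures.items():
    savings = 1 - softnas / alternative
    print(f"{name}: {savings:.0%} savings vs. the alternative")
```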

But we don’t just offer a competitive pricing model. Buurst customers also experience benefits around:

  • Data performance: Fast enough to handle the most demanding workloads with up to 1 million IOPS on AWS
  • Data migration: Point-and-click file transfers with speeds up to 200% faster than TCP/IP over high-latency and noisy networks
  • Data availability: Cross-zone high availability with up to 99.999% uptime, asynchronous replication, and EBS snapshots
  • Data control and security: Enterprise NAS capabilities in the cloud including at-rest and in-flight data encryption.

At Buurst, we understand that you need to move fast, and so does your data. Our nimble, cost-effective data migration and performance management solution opens new opportunities and capabilities that continually prepare you for success.

Get all the tools you need so day one happens faster and you can be amazing on day two, month two, and even year two. We make your cloud decisions work for you – and that means providing you data control, data performance, cost-management with storage tiering, and security.

To learn how Buurst can help you manage costs on the cloud, download our eBook:

The Hidden Costs of Cloud Storage

 

What do some of the most important brands know about SoftNAS?

What do these powerful brands all have in common?

They’re all using SoftNAS.

Many of our Buurst SoftNAS customers tried their own cloud-native storage solutions or used on-premises solutions that were ported to the cloud, only to quickly realize they were difficult to implement and did not provide the cost or performance needed.

Traditional storage solutions just want to sell more storage, driving companies down a slow, expensive path, resulting in a solution that doesn’t scale easily and is unable to take advantage of the capabilities and tools offered on the cloud. So why not learn from their mistakes? Skip the fail step by jumping right to the successful solution that helps you manage and control your data on the cloud, without the hefty price tag. Buurst SoftNAS offers NAS-like tools that support your highest-performing, data-intensive applications while helping you manage costs on the cloud.

So how does SoftNAS provide high performance without a high cost? 

Storage tiering

Automatic dynamic storage tiering applies data aging and access policies to your data, moving aging data from high-performance block storage to less expensive, lower-performance storage – saving you up to 67% in cloud storage costs. Unlike file-based tiering models, SoftNAS SmartTiers is block-based, meaning the portions of large files that are regularly accessed remain in high-performance storage tiers, while irregularly accessed portions are moved to lower-performance tiers for better cost savings.

 

Deduplication and compression

SoftNAS doesn’t stop at storage tiering to keep your costs low. Inline data deduplication and compression greatly reduce your storage footprint, therefore reducing the amount of storage you have to procure. SoftNAS’s data deduplication compares your application files block by block to find and eliminate redundancies. On top of deduplication, data compression will reduce the size of your data by an additional 50-75%. With these drastic drops in storage needs, it’s not difficult to see why so many organizations are looking at SoftNAS to manage cloud costs for their high-performing application data.
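Because deduplication removes redundant blocks first and compression then shrinks what remains, the two savings multiply rather than add. A small sketch using example ratios in the ranges quoted above:

```python
# Dedupe removes redundant blocks first; compression then shrinks
# what remains, so the two reductions compound.
def footprint_gib(raw_gib: float, dedupe: float, compress: float) -> float:
    """Storage left after dedupe then compression (ratios 0..1)."""
    return raw_gib * (1 - dedupe) * (1 - compress)

# 10,000 GiB with 20% dedupe and 50% compression:
print(footprint_gib(10_000, 0.20, 0.50))  # 4000.0 GiB remain
```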

 

CapEx for OpEx

Perhaps one of the more obvious ways that SoftNAS manages your costs is by enabling you to make the shift from CapEx to OpEx. Netflix couldn’t operate at their current scale if they were strictly managing their data on-premises – the costs would be astronomical, and the maintenance would be unfathomable. Procuring on-premises storage doesn’t stop at purchasing another piece of hardware. It means you’re buying hardware, a secondary data center, and paying for the electricity, manpower, and security to manage that hardware. The list goes on. SoftNAS eliminates the need to purchase hardware by making the transition to the cloud fast, easy, and possible. If it worked for Netflix, why can’t it work for you?

If you’re ready to unlock the same cost-effective capabilities many powerful companies have already experienced, check out our 30-day free trial: