Best Practices Learned from 1,000 AWS VPC Configurations


AWS VPC Best Practices with SoftNAS 

Buurst SoftNAS has been available on AWS for the past eight years, providing cloud storage management for thousands of petabytes of data. Our experience in design, configuration, and operation on AWS provides customers with immediate benefits.

Best Practice Topics  

  • Organize Your AWS Environment
  • Create AWS VPC Subnets
  • SoftNAS Cloud NAS and AWS VPCs
  • Common AWS VPC Mistakes
  • SoftNAS Cloud NAS Overview

Organize Your AWS Environment

Organizing your AWS environment is a critical step in maximizing your SoftNAS capability. As new instances, routing tables, and subnets are added, confusion builds quickly, so Buurst recommends the use of tags. The simple use of tags will assist in identifying issues during troubleshooting.

When planning your CIDR (Classless Inter-Domain Routing) block, Buurst recommends making it larger than expected. This is because AWS reserves five IP addresses in every subnet you create, so remember that each new subnet carries a five-address overhead.

Additionally, avoid using overlapping CIDR blocks: a VPC with an overlapping range cannot later be peered with another VPC without complicated workarounds. Finally, there is no cost associated with a larger CIDR block, so simplify your scaling plans by choosing a larger block size upfront.
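The arithmetic above is easy to check with Python's standard ipaddress module. This is a minimal sketch; the function names are illustrative, not part of any AWS SDK:

```python
import ipaddress

# AWS reserves five addresses in every subnet (network address, VPC router,
# DNS, one reserved for future use, and broadcast), so usable capacity is
# the subnet size minus five.
def usable_hosts(subnet_cidr: str) -> int:
    return ipaddress.ip_network(subnet_cidr).num_addresses - 5

# Overlapping CIDR blocks will block future VPC peering, so check upfront.
def cidrs_overlap(cidr_a: str, cidr_b: str) -> bool:
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

print(usable_hosts("10.0.1.0/24"))                     # 251, not 256
print(cidrs_overlap("10.0.0.0/16", "10.0.128.0/17"))   # True: cannot peer
print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))     # False: safe to peer
```

Running a check like this against every planned VPC pair before allocating addresses is far cheaper than renumbering later.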

Create AWS VPC Subnets

A best practice for AWS subnets is to align VPC subnets with your application tiers: for example, a DMZ/proxy or ELB tier with load balancers, an application tier, and a database tier. If a subnet is not explicitly associated with a route table, it falls back to the main route table by default. Missing subnet associations are a common issue where packets do not flow correctly because subnets were never associated with their route tables.

Buurst recommends putting everything in a private subnet by default and using either ELB filtering or monitoring services in your public subnet. A NAT instance is the preferred way to gain access to the public network, ideally as part of a dual-NAT configuration for redundancy. CloudFormation templates are available to set up highly available NAT instances, which require proper sizing based on the amount of traffic passing through them.

Set up VPC peering to access other VPCs within your environment or in a customer or partner environment. Buurst recommends leveraging VPC endpoints for access to services like S3, instead of going out over a NAT instance or an internet gateway to reach services that don't live within the specific VPC. An endpoint is more efficient and offers lower latency than an external link.

Control Your Access

Control access within the AWS VPC, and do not cut corners by using a default route to the internet gateway. This access setup is a common problem many customers spend time on with our Support organization. Again, we encourage redundant NAT instances, leveraging CloudFormation templates available from Amazon or creating highly available redundant NAT instances yourself.

The default NAT instance size is an m1.small, which may or may not suit your needs depending on the traffic volume in your environment. Buurst highly recommends using IAM (Identity and Access Management) for access control, especially configuring IAM roles on instances. Remember that IAM roles cannot be assigned to running instances; they are set up at instance creation time. Using IAM roles spares you from having to populate AWS keys within individual products to gain access to API services.

How Does SoftNAS Fit Into AWS VPCs?

Buurst SoftNAS offers a highly available architecture from a storage perspective, leveraging our SNAP HA capability to provide high availability across multiple availability zones. SNAP HA offers 99.999% availability with two SoftNAS controllers replicating the data into block storage in both availability zones. Buurst customers who run in this environment qualify for our Buurst No Downtime Guarantee.

Additionally, AWS provides no SLA (Service Level Agreement) unless your solution runs in a multi-zone deployment.

SoftNAS uses a private virtual IP address; both SoftNAS instances live within a private subnet and are not accessible externally unless configured with an external NAT or AWS Direct Connect.

SoftNAS SNAP HA™ provides NFS, CIFS and iSCSI services via redundant storage controllers. One controller is active, while the other is a standby controller. Block replication transmits only the changed data blocks from the source (primary) controller node to the target (secondary) controller. Data is maintained in a consistent state on both controllers using the ZFS copy-on-write filesystem, which ensures data integrity is maintained. In effect, this provides a near real-time backup of all production data (kept current within 1 to 2 minutes).

A key component of SNAP HA™ is the HA Monitor. The HA Monitor runs on both nodes that are participating in SNAP HA™. On the secondary node, HA Monitor checks network connectivity, as well as the primary controller’s health and its ability to continue serving storage. Faults in network connectivity or storage services are detected within 10 seconds or less, and an automatic failover occurs, enabling the secondary controller to pick up and continue serving NAS storage requests, preventing any downtime.  
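The detection behavior described above can be sketched as a simple consecutive-failure counter. This is a hypothetical illustration of the pattern, not SoftNAS's actual HA Monitor code; with a 2-second poll interval, five consecutive failures approximates the 10-second detection window:

```python
# Hypothetical sketch of an HA monitor's detection pattern (illustrative
# only): the standby polls the primary and declares a failover after enough
# consecutive failed health checks, so a single dropped ping does not
# trigger an unnecessary takeover.
def needs_failover(health_samples, max_failures=5):
    """health_samples: iterable of booleans, one per poll (True = healthy)."""
    failures = 0
    for healthy in health_samples:
        failures = 0 if healthy else failures + 1
        if failures >= max_failures:
            return True  # secondary should take over and serve storage
    return False

print(needs_failover([True, False, False, True, False]))  # False: brief blip
print(needs_failover([True] + [False] * 5))               # True: sustained outage
```

Requiring consecutive failures is what separates a transient network blip from a genuine fault worth failing over for.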

Once the failover process is triggered, either due to the HA Monitor (automatic failover) or as a result of a manual takeover action initiated by the admin user, NAS client requests for NFS, CIFS and iSCSI storage are quickly re-routed over the network to the secondary controller, which takes over as the new primary storage controller. Takeover on AWS can take up to 30 seconds, due to the time required for network routing configuration changes to take place. 

Common AWS VPC Mistakes

These are the most common support issues in AWS VPC configuration: 

  • Deployments require two NICs per instance, with both NICs in the same subnet. Double-check this during configuration.
  • SoftNAS health checks perform a ping between the two instances, so the security group must allow that traffic at all times.
  • A virtual IP address must not be within the CIDR of the AWS VPC, so select a virtual IP address that falls outside the VPC's range.
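The virtual IP requirement in the last bullet is easy to validate programmatically. A minimal sketch using Python's standard ipaddress module (the function name is illustrative):

```python
import ipaddress

# The HA virtual IP must sit OUTSIDE the VPC's CIDR; SoftNAS then injects a
# host route for it so clients can reach whichever controller holds the VIP.
def valid_vip(vip: str, vpc_cidr: str) -> bool:
    return ipaddress.ip_address(vip) not in ipaddress.ip_network(vpc_cidr)

print(valid_vip("10.0.0.50", "10.0.0.0/16"))    # False: inside the VPC CIDR
print(valid_vip("172.16.0.10", "10.0.0.0/16"))  # True: outside, usable as VIP
```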

SoftNAS Overview

Buurst SoftNAS is an enterprise virtual software NAS available for AWS, Azure, and VMware with industry-leading performance and availability at an affordable cost. 

SoftNAS is purpose-built to support SaaS applications and other performance-intensive solutions requiring more than standard cloud storage offerings.

  • Performance – Tune performance for exceptional data usage 
  • High Availability – From 3-9’s to 5-9’s HA with our No Downtime Guarantee 
  • Data Migration – Built-in “Lift and Shift” file transfer from on-premises to the cloud 
  • Platform Independent – SoftNAS operates on AWS, Azure, and VMware

Learn: SoftNAS on AWS Design & Configuration Guide 


    Common questions related to SoftNAS and AWS VPC: 

    We use VLANs in our data centers for isolation purposes today. What VPC construct do you recommend to replace VLANs in AWS?

    That would be subnets, so you could either leverage the use of subnets or, if you really wanted a different isolation mechanism, create another VPC to isolate those resources further and then peer them together via VPC peering.

    You said to use IAM for access control, so what do you see in terms of IAM best practices for AWS VPC security?

    The most significant thing involves third-party products or custom software that you run on your web server. Anything that uses AWS API resources needs a secret key and an access key. You can store that secret key and access key in some type of text file and reference it, or, the easier way, set the minimum level of permissions you need in an IAM role, create that role, and attach it to your instance at start time. Now, the role itself can only be assigned at start time. However, the permissions of a role can be modified on the fly, so you can add or subtract permissions should the need arise.

    So when you’re troubleshooting complex VPC networks, what approaches and tools have you found to be the most effective?

    We love to use traceroute. I love to use ICMP when it’s available, but I also like to use AWS Flow Logs, which let me see what’s going on at a much more granular level, and tools like CloudTrail, so I know what API calls were made by which user and can understand what happened.

    What do you recommend for VPN intrusion detection?

    There are a lot of them available. We’ve got some experience with Cisco, Juniper, and Fortinet for things like VPN, and as far as IDS goes, Alert Logic is a popular solution. I see a lot of customers that use that particular product. Some people like open-source tools like Snort as well.

    Any recommendations around secure jump box configurations within AWS VPC?

    If you’re going to deploy a lot of your resources within a private subnet and you’re not going to use a VPN, one common approach is to configure a quick jump box. By that I mean taking a server, whether Windows or Linux depending on your preference, putting it in the public subnet, and only allowing access from a limited set of IP addresses, over SSH for Linux or RDP for Windows. It puts you inside the network and allows you to gain access to the resources within the private subnet.

    And are people also using VPNs to access the jump box, for added security?

    Some people do that. Sometimes they’ll put a jump box inside the VPC and VPN into that. It’s just a matter of your organization's security policies.

    Any performance or further considerations when designing the VPC?

    It’s important to understand that each instance has its own available amount of resources, from both a network IO and a storage IO perspective. It's also important to understand what a 10 Gb instance means. Take the c3.8xlarge, which is a 10 Gb instance. That’s not 10 Gb of network bandwidth plus 10 Gb of storage bandwidth; that’s 10 Gb for the instance. So if you push a high amount of IO from both a network and a storage perspective, that 10 Gb is shared between the network and access to the underlying EBS storage network. This confuses a lot of people: it’s 10 Gb for the instance, not a dedicated 10 Gb network pipe.

    Why would you use an elastic IP instead of the virtual IP?

    What if you had some people who wanted to access this from outside of AWS? We do have some customers whose servers are primarily within AWS, but who want access to files from machines that are not inside the AWS VPC. So you could leverage it that way. This was actually the first way we implemented HA, to be honest, because it was the only method at first that allowed us to share an IP address and work around public cloud limitations like the lack of layer 2 broadcast.

    Looks like this next question is around AWS VPC tagging. Any best practices for example?  

    Yeah, so I see people take different services, like web, database, or application, and tag everything, including the security groups, with that particular tag. For people deploying SoftNAS, I would recommend just using the name SoftNAS as the tag. It’s really up to you, but I do suggest that you use tags. It will make your life a lot easier.

    Is storage level encryption a feature of SoftNAS Cloud NAS or does the customer need to implement that on their own?  

    As of our version that’s available today, which is 3.3.3, on AWS you can leverage the underlying EBS encryption. We provide encryption for Amazon S3 as well, and in our next release, due out at the end of the month, we offer encryption directly: you can create encrypted storage pools that encrypt the underlying disk devices.

    Virtual IP for HA: does the subnet the VIP would be part of get added to the AWS VPC routing table?

    It’s automatic. When you select that VIP address in the private subnet, it will automatically add a host route into the routing table, which allows clients to route that traffic.

    Can you clarify the requirement on an HA pair with two NICs, that both have to be in the same subnet?

    So each instance needs two ENIs, and both of those ENIs need to be in the same subnet.

    Do you have HA capability across regions? What options are available if you need to replicate data across regions? Is the data encryption at-rest, in-flight, etc.?

    We cannot do HA with automatic failover across regions. However, we can do SnapReplicate across regions, and you can then do a manual failover should the need arise. The data you transfer via SnapReplicate is sent over SSH across regions. You could replicate across data centers, or even across different cloud platforms.

    Can AWS VPC peering span across regions?

    The answer is no, it cannot.

    Can we create an HA endpoint to AWS for use with AWS Direct Connect?

    Absolutely. You could go ahead and create an HA pair of SoftNAS Cloud NAS, leverage direct connect from your data center, and access that highly available storage.

    When using S3 as a backend and a write cache, is it possible to read the file while it’s still in cache?

    The answer is yes, it is. I’m assuming you’re speaking about the eventual-consistency challenges of the AWS standard region; with the manner in which we deal with S3, treating each bucket as its own hard drive, we do not have to deal with the S3 consistency challenges.

    Regarding subnets, the example where a host lives in two subnets, can you clarify both these subnets are in the same AZ?

    In the examples that I’ve used, each of these subnets is actually within its own availability zone. So, again, each subnet is in its own separate availability zone, and if you want to discuss this further, please feel free to reach out.

    Is there a white paper on the website dealing with the proper engineering for SoftNAS Cloud NAS for our storage pools, EBS vs. S3, etc.?

    Click here to access the white paper, our SoftNAS architectural paper co-written by SoftNAS and Amazon Web Services, covering proper configuration settings and options. We also have a pre-sales architectural team that can help you with best practices and configurations from an AWS perspective. Please contact us and someone will be in touch.

    How do you solve the HA and failover problem?

    We actually do a couple of different things here. When we set up HA, we create an S3 bucket that acts as a third-party witness. Before anything takes over as the master controller, it queries the S3 bucket and makes sure that it’s able to take over. The other thing we do is shut down the old source node after a takeover. You don’t want a situation where a node is flapping up and down, kind of up but kind of not, and keeps trying to take over. So if a takeover occurs, whether manual or automatic, the old source node in that configuration is shut down. That information is logged, and we assume you’ll go investigate why the failover took place. If there are questions about that in a production scenario, our Support team is always available.
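The witness-then-fence sequence described above can be sketched as follows. This is a hypothetical illustration of the pattern, not SoftNAS's actual API; the function names are ours:

```python
# Illustrative sketch of the third-party-witness takeover pattern: the
# standby consults a shared witness (an S3 bucket, in SoftNAS's case)
# before taking over, then fences (shuts down) the old primary so a
# half-alive node cannot flap back and forth.
def attempt_takeover(witness_allows_takeover, shutdown_old_primary):
    """Both arguments are callables supplied by the caller."""
    if not witness_allows_takeover():
        return "takeover-refused"   # witness says the primary still owns storage
    shutdown_old_primary()          # fence the old node to prevent split-brain
    return "takeover-complete"

events = []
print(attempt_takeover(lambda: True, lambda: events.append("old primary fenced")))
print(events)
```

The key design point is ordering: the witness is consulted before any takeover, and fencing happens immediately after, so at no moment can two nodes both believe they are the primary.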

    AWS VPC 101: What is a VPC, topology, VPC access, and packet flow


    AWS VPC 101

    Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS. In this blog we will learn:

    What is an AWS VPC (Virtual Private Cloud)?


    A VPC (Virtual Private Cloud) is a virtual network that’s specific to your environment. An Amazon Virtual Private Cloud, or AWS VPC, provides you with your own virtual private data center and private network within your AWS account. Once you have created a VPC, you can create resources, such as EC2 instances, in your VPC to keep them logically isolated from other resources in AWS.

    A VPC gives you configuration options that allow you to tune your VPC environment. These options include configuring your private IP address ranges, setting up private subnets, controlling routing via route tables, setting up different networking gateways, and getting granular with security settings through both network ACLs and security groups.

    AWS Virtual Private Cloud Flexibility


    One of the greatest features of the AWS VPC is flexibility. You configure what is going to be your IP address range, how the VPC routing works, whether or not you’re going to allow VPN access, and what the architecture of the different VPC subnets is going to be. You have security options such as security groups and network ACLs and specific routing rules that can be configured to enable different features, such as running multiple NIC interfaces, configuring static private IP addresses and VPC peering.

    VPCs can be leveraged to create an AWS hybrid cloud by leveraging the AWS Direct Connect service. AWS Direct Connect allows you to connect your on-prem resources into the AWS cloud over a high bandwidth, low latency connection.

    It is possible to connect one VPC to another VPC. Both VPCs may be in the same organization, or you can connect to another organization's VPC for specific services. Flow logs can be enabled to help you troubleshoot connectivity issues to specific services within the VPC itself.

    AWS VPC Topology


    VPCs are regional, but they span multiple availability zones: each subnet you create lives in a single availability zone, and different subnets can be placed in different zones. All of the subnets that you create within a VPC can route to each other by default.

    The overall network size for a single VPC can be anywhere between a /16 and a /28 for the overall CIDR (Classless Inter-Domain Routing) of the VPC, and the size is configurable for each of the subnets that you set within the AWS VPC.

    You have the ability to choose your own IP prefix. Whether you want the 10.x.x.x range, the 172.16.x.x range, or another private address space, the IP prefix is configurable within your AWS VPC network topology.
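Python's standard ipaddress module makes it easy to compare the VPC sizes you can choose between (a /16 is the largest allowed VPC CIDR, a /28 the smallest):

```python
import ipaddress

# Compare raw capacity across the allowed VPC CIDR sizes: the difference
# shows why choosing a larger block upfront simplifies later scaling.
for prefix in (16, 24, 28):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    print(f"/{prefix}: {net.num_addresses} addresses")
# /16: 65536 addresses
# /24: 256 addresses
# /28: 16 addresses
```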

    Accessing the AWS VPC


    There are several types of gateways to connect into and out of a VPC. One of the questions often asked is what do each of these gateways do? How do they work?

    The internet gateway allows you to point specific resources within your VPC, via route tables, to the outside world; alternatively, you can leverage a NAT (Network Address Translation) instance for outbound access. You can also enable VPN access to your VPC, using AWS Direct Connect or another VPN solution, to reach resources inside the VPC.

    There are two parts involved in setting up a VPN to your VPC:
    • The VPG (Virtual Private Gateway), which is the AWS side of a VPN connection.
    • The customer gateway, which is the customer side of a VPN connection. Most of the major VPN hardware vendors have supported template configurations that can be downloaded directly from the virtual private gateway interface within your VPC via the AWS console.

    AWS VPC Packet Flow


    How do the packets flow within a Virtual Private Cloud?

    Let’s use the above example setup of an AWS VPC and discuss how the packets will flow.

    In this example, we have three subnets and three instances in the AWS VPC:

    • instance A connected to subnet 1
    • instance C connected to subnet 3
    • instance B has two elastic network interfaces, or ENIs, that are connected to two subnets:
      • subnet 1
      • subnet 2

    AWS VPC Packet Flow (Instance A and Instance B)


    How do instance A and instance B connect to each other over subnet 1?

    • Instance A and B both live in the same subnet, so by default the routing table is checked first; it automatically has a route covering all traffic within the overall CIDR of the VPC.
    • Next, the packet hits the ARP (Address Resolution Protocol) table, then the outbound portion of the firewall, and a source/destination check occurs, which is a configurable option within AWS.
    • Then it hits the outbound security group, which by default is wide open: all traffic is allowed out.
    • It then goes over to the other instance and checks the inbound security group, a second source/destination check, and finally the firewall before the packet flows into instance B.

    When people run into VPC networking issues, they say, “I can SSH, or I can’t SSH, or I can ping, or I can’t ping.” Most of the connectivity problems people hit while troubleshooting are on the inbound side of the security groups, because the outbound side is open by default. The inbound security group is usually the first place to check when you’re having a connectivity issue within the AWS VPC: make sure it is not blocking the traffic, based on source or destination IP or port number, that may be impairing your connectivity.
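A security group's inbound evaluation can be modeled as a small allow-list check. This sketch uses an illustrative rule format, not the actual AWS API, but it mirrors the logic you should verify first when connectivity fails:

```python
import ipaddress

# Security groups are allow-only: any single matching rule admits the
# packet; if nothing matches, the packet is dropped.
def inbound_allowed(rules, src_ip, port):
    """rules: list of (cidr, from_port, to_port) tuples (illustrative format)."""
    for cidr, from_port, to_port in rules:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr)
                and from_port <= port <= to_port):
            return True
    return False

rules = [("10.0.0.0/16", 22, 22),   # SSH allowed only from inside the VPC
         ("0.0.0.0/0", 443, 443)]   # HTTPS allowed from anywhere

print(inbound_allowed(rules, "10.0.1.5", 22))  # True: SSH from within the VPC
print(inbound_allowed(rules, "8.8.8.8", 22))   # False: SSH from the internet
```

Walking the actual rule set this way, source by source and port by port, is exactly the mental exercise to do before blaming routing or the instance itself.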

    AWS VPC Packet Flow (Instance B and Instance C)


    How would the packets flow to instances B and C?

    Remember, instance B lives in two subnets (subnet 1 and 2), and instance C lives in subnet 3.

    If instance B wanted to talk to instance C, it can go one of two ways. It could go out of subnet 1 or subnet 2. Regardless of which subnet it goes out on, the same rules apply. It’s going to hit the routing table, go to the firewall, source destination check, and security group out.

    It’s going to check the route table to make sure it has a route to that destination network. Because it’s going to a different network, it then checks the outbound network ACL, and on the reverse side, coming back in, it checks the inbound network ACL before it checks the security group. So there are additional checks for instances that live in different subnets.

    Understanding the above flow can be very useful. Once you understand how the packets flow, where they go, and how everything is checked, you can better troubleshoot VPC network connectivity issues.

    Once you grasp the VPC networking concepts in this blog, we suggest you have a look at our “AWS VPC Best Practices” blog post. In it, we share a detailed look at best practices for the configuration of an AWS VPC and common VPC configuration errors.

    SoftNAS AWS NAS Storage Solution

    SoftNAS offers AWS customers an enterprise-ready NAS capable of managing your fast-growing data storage challenges including AWS Outpost availability. Dedicated features from SoftNAS deliver significant cost savings, high availability, lift and shift data migration, and a variety of security protection.

    SoftNAS AWS NAS Storage Solution is designed to support a variety of market verticals, use cases, and workload types. Increasingly, SoftNAS is deployed on the AWS platform to enable block and file storage services through Common Internet File System (CIFS), Network File System (NFS), Apple Filing Protocol (AFP), and Internet Small Computer System Interface (iSCSI). Watch the SoftNAS Demo.

    On-Premise vs AWS NAS Storage – Which is Best for Your Business?


    On-Premise vs AWS Cloud NAS Storage: which is best for your business? Should you keep your on-premises NAS, pay maintenance, upgrade, or move to the public cloud? Download the full slide deck on Slideshare.

    The maintenance bill is due for your on-premises SAN/NAS, or it just increased. It costs hundreds of thousands or millions of dollars just to keep your existing storage gear under maintenance. And you know you will need to purchase more storage capacity for this aged hardware.

    • Do you renew and commit another 3-5 years by paying the storage bill and further commit to a data center architecture?
    • Do you make a forklift upgrade and buy new SAN/NAS gear or move to hyper-converged infrastructure?
    • Do you move to the AWS cloud for greater flexibility and agility?
    • Will you give up security and data protection?

    Difference between on-premise vs Hyper-converged vs AWS

    We’re going to be talking about the on-premises-to-cloud conversation. We’re going to show you the differences between on-premise, hyper-converged, and AWS, and tell you why you should choose AWS over on-premise and hyper-converged. Then we will tell you a little bit about SoftNAS and how it helps with your cloud migrations.

    On-premise vs AWS: upgrade, paid maintenance, or public cloud?

    Let’s focus on a storage dilemma that we have happening with IT teams and organizations all across the world.

    The looming question is what do I do when my maintenance renewal comes up?

    Teams are left with three options. You can stay on-premise and pay the renewal fee for your maintenance bill, a continuously increasing expense, or you can consider a forklift upgrade, buying a new NAS or SAN or moving to a hyper-converged platform.

    The drawback with that option is that you still haven’t solved all your problems: you’re still on-prem, you’re still using hardware, and the next maintenance renewal is only 12 to 24 months away. Finally, customers can lift and shift data to AWS, where hardware is no longer required and data centers can be unplugged.


    Maintenance costs keep rising for support, disk swaps, and downtime. Pricing for SSD drives and SSD storage is exorbitant; you pay an arm and a leg just to ensure your environment works as advertised. You face never-ending pressure from the business to add more storage capacity: we need it, we need it now, and we need more of it. There is a lack of low-cost, high-performance object storage, and business owners pressure you for agile infrastructure.

    On-Premise vs Hyper-Converged vs AWS Cloud

    The business is growing; data is growing. You need to stay well ahead of the curve to keep up. Let’s take a look and do a side-by-side comparison of the three options: On-Premise, Hyper-Converged, and AWS Cloud.


    From a security standpoint, all three options deliver a secure environment: the rules and policies you’ve already designed to protect your environment travel with you. From an infrastructure and management standpoint, on-premise and hyper-converged still require your current staff to maintain and update the underlying infrastructure.

    That’s where we’re talking about your disk swaps, your networking, your racking and un-racking. AWS can help you limit this IT burden with its managed infrastructure.

    From a scalability standpoint, I dare you to call your NAS or SAN provider and tell them that you think you bought too much storage last year and want to give some back. In AWS, you get exactly that option. You can scale up or scale down, allowing you to grow as needed and not as estimated.


    On-premise vs AWS Management


    For your on-premise and hyper-converged systems, you control and manage everything from layer one all the way up to layer seven. In an AWS model, you can remove jobs like maintenance, disk swapping, and monitoring the health of your system, and hand them over to AWS. You’re still in control of managing user accounts and access in your application, but you can wave goodbye to hardware, maintenance fees, and forklift upgrades.

    8 reasons to choose AWS for your storage infrastructure.

    From a scalability standpoint, we talked about this earlier: AWS gives you the ability to grow your storage as needed, not as estimated. For the storage gurus out there, you know exactly what that means.


    I’ve definitely been in rooms with people doing predictive modeling on how much our data will grow over the next quarter or the next year. I can tell you for a fact that I have never been in a room where we’ve come up with an accurate number. It’s always been an idea, a hope, a dream, a guess.

    With AWS, scalability means you can grow your storage as you go and pay only for what you use. That in itself is worth its weight in gold. Not only that, you get a chance to end your maintenance renewals and stop paying the maintenance ransom that holds access to your data hostage until it's paid.

    There are also huge benefits in trading the CAPEX model for the OPEX model. There is no more long-term commitment. When you’re finished with a resource, you hand it back to the provider: when you’re done using your S3 storage, you turn it off and give it back to Amazon.

    You also gain freedom from having to make a significant long-term investment in equipment that you and I both know will eventually break down or become outdated. And you get a reliable infrastructure: S3 with its eleven nines of durability, and EC2 with four nines of availability for multi-AZ deployments. You have functions like disaster recovery, ease of use and management, and best-in-class security to protect your data.

    If you’re currently using an on-premises NAS system and it’s coming up for maintenance renewal, what do you intend to do?
    • Are you going to do an in-place upgrade, where you use your existing hardware but update the software?
    • Are you going to do a forklift upgrade, where you buy new hardware and software?
    • Are you going to move to a hyper-converged system?
    • Or are you considering the public cloud, whether it’s AWS or other options?

    I think most of you are intending to move to the public cloud, whether it’s AWS or others. It also looks like a lot of you are interested in in-place NAS and SAN upgrades, which is interesting.

    A lot of you are also considering moving to hyper-converged. For those of you who answered "other," we would be curious to learn more about your plans. On the questions pane, you’re more than welcome to write in what you’re intending to do.

    Lift and Shift Data Migration while keeping enterprise NAS features

    Lifting and shifting to the cloud can be done in multiple ways. At petabyte scale, you can use AWS Import/Export with Snowball. You can connect directly using AWS Direct Connect, or you can use tools like rsync, lsyncd, and Robocopy.
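    To gauge which migration path fits, it helps to estimate raw transfer time. Here is a rough sketch; all figures are hypothetical, and real throughput depends on protocol overhead and link contention:

```python
def transfer_days(data_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Estimate days to copy `data_tb` terabytes over a link rated at
    `link_mbps` megabits/s, assuming only `efficiency` of the rated
    bandwidth is usable (protocol overhead, contention). Illustrative only."""
    bits = data_tb * 8 * 1e12                      # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86400

# A 100 TB dataset over a 1 Gbps Direct Connect at ~70% efficiency:
days = transfer_days(100, 1000)       # roughly two weeks
# The same dataset over a 10 Gbps link:
days_10g = transfer_days(100, 10000)
```

    At roughly two weeks for 100 TB over 1 Gbps, it becomes clear why shipping a Snowball appliance is attractive at petabyte scale.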

    Once your data is in the cloud, the question is how you maintain the same enterprise level of experience you're used to. With SoftNAS Cloud NAS, you have that ability. We give you a no-downtime SLA, and we're the only company that offers that guarantee.

    Lift and Shift Data Migration

    SoftNAS allows you to lift and shift while maintaining the same enterprise level of service and experience you are used to. We are the only company that offers a no-downtime SLA. We give you the same enterprise feel in the cloud that you are used to on-premises, whether that's serving out your data via NFS, CIFS, or SMB for the apps that need it.

    We deploy within minutes. We give you the ability, as we demonstrated, to take storage snapshots. The GUI itself is easy to learn and easy to use, with no training required; you don't need to send your teams back for training to use the SoftNAS software.

    We allow you to use the standard protocols. We leverage AWS’s storage elasticity. SoftNAS enables the existing applications to migrate unchanged. Our software provides all the Enterprise NAS capabilities — whether it’s CIFS, NFS, or iSCSI — and it allows you to make the move to the cloud and preserve your budget for the innovation and adaptations that translate to improved business outcomes.

    SoftNAS can also run on-premises as a virtual machine, creating a virtual NAS from a storage system and connecting to AWS for cloud-hosted storage.

    What storage workloads do you intend to move to the AWS Cloud?

    For those of you who are interested in moving to AWS, is it going to be NFS, CIFS, iSCSI, or AFP, or are you not intending to move to AWS at all? It looks like over 40% of you intend to move NFS workloads to the AWS Cloud, which is consistent with what we've seen.

    Interest in CIFS and iSCSI is fairly balanced as well. A couple of you have no interest in moving to the AWS Cloud. If that's you, please let us know why on the questions page.

    Easily Migrate Existing Applications to AWS Cloud

    SoftNAS in a nutshell is an enterprise NAS filer that exists on a Linux appliance with a ZFS backing in the cloud or on-premise.


    We have a robust API and CLI, and a cloud base that integrates with AWS S3, EBS, on-premises storage, and VMware. This allows us to provide data services like block replication for access to cloud disks, storage enhancements such as compression and inline deduplication, multi-level caching, writable snap clones, and encryption of your data at rest and in flight.

    We continue to deliver best-in-class services by working with our industry-leading partners: names you'll know, like Amazon, Microsoft, and VMware. We continue to partner with them to enhance both our offerings and theirs.

    We recommend that you have your technical leads try out SoftNAS Virtual NAS Appliance. Tell them to visit our website and they’ll be able to go and try out SoftNAS.

    For some of the more technical people in our audience, we invite you to go to our AWS page where you can learn a little bit more about the details of how SoftNAS AWS NAS works.

    Why choose SoftNAS over the AWS storage options?

    From a SoftNAS standpoint, we give you the ability to encrypt the data. As we discussed in the webinar, we're the only appliance that offers a no-downtime SLA, and we stand by it because we designed our software to address and take care of your storage needs. We also have the ability to connect to blob storage, and we are available on platforms other than AWS, such as Azure and CenturyLink, among others.

    Say you have an application that you need to move to the cloud, but rewriting it to support S3 or any kind of block storage would take you six months to a year. We give you the ability to migrate that data by setting up an NFS or CIFS share, whatever that application already uses in your enterprise.

    How SoftNAS Solved Isilon DR on AWS and Azure Clouds


    Isilon disaster recovery (DR) on AWS & Azure Storage

    We have customers continually asking for solutions that will let them migrate, or continuously replicate, from their existing storage systems to AWS and Azure cloud storage. The ask is generally predicated on a cost-savings analysis, receipt of the latest storage refresh bill, or a recent disaster recovery (DR) event. In other cases, customers have many remote sites, offices, or factories where there's simply not enough space to maintain all the data at the edge. The convenience of the public cloud is an obvious answer. We still don't have elegant solutions to all of our cloud problems, but with the release of SoftNAS Platinum, we have at least solved one. Increasingly, we see Dell/EMC Isilon customers who want to trade hardware ownership for the cloud subscription model and get out of the hardware management business.

    I will focus on one such customer and the Isilon system for which we were tasked to provide a cloud-based storage DR solution.

    Problem Statement

    Can the cloud be used as a Disaster Recovery (DR) location for an on-premises storage array?

    The customer’s Isilon arrays provide scale-out NFS for their on-premises datacenters. These arrays were aging and due for renewal. The company had been interested in public cloud alternatives, but as much as the business leaders were pushing for a more cost-effective and scalable solution, they were also risk-averse. Their IT staff was caught in the “change it but don’t change it” paradox. The project lead and his SoftNAS team were asked to provide an Isilon disaster recovery solution that would meet the immediate cost-savings goal while providing the flexibility to scale and modernize the infrastructure as the company moves more data and workloads into the public cloud.

    Assessing the Environment

    One of Isilon’s primary advantages is its scale-out architecture, which aggregates large pools of file storage with an ability to add more disk drives and storage racks as needed to scale what are effectively monolithic storage volumes. The Isilon has been a hugely successful workhorse for many years.

    Like Isilon, the public cloud provides its own forms of scale-out, “bottomless” storage. The problem is that this native cloud storage is not natively designed to be NFS or file-based but instead is block and object-based storage. The cloud providers are aware of this deficit, and they do offer some basic NFS file storage services, which tend to be far too expensive (e.g., $3,600 per TB per year). These cloud file solutions also tend to deliver unpredictable performance due to their shared storage architectures and the multi-tenant access overload conditions that plague them.

    How to Replace Isilon DR Storage on AWS and Microsoft Azure Cloud?

    Often, Isilon replacement projects begin with the disaster recovery (DR) datacenter. It is the easiest component to justify and move, so it poses the least risk for risk-averse IT shops that want to prove out an Isilon replacement with public cloud storage, and it provides a safe place for employees to build cloud skills.

    Companies are either tired of paying millions of dollars per year for traditional DR datacenters and associated standby equipment that has rarely, if ever, been used, or they lack a DR facility altogether and need an affordable DR solution, preferably one that leverages the public cloud instead of financing yet another private datacenter.

    SoftNAS solves this common customer need by leveraging the cloud provider’s scale-out block and object storage, providing enhanced data protection and tools to help in the migration and continuous sync of data.

    SoftNAS leverages the cloud providers’ block and object storage

    Isilon Disaster Recovery (DR) Solution Requirements

    • Active cloud provider subscriptions with the ability to provision from the marketplace
    • 1 x SoftNAS Platinum v4.x Azure VM (recommend a VM with at least 8 virtual cores, 32 GB memory, and a minimum 1 Gbps network)
    • Network connectivity to an existing AD domain controller (CIFS) from both on-prem and SoftNAS VMs
    • Source dataset hosted on the on-prem Isilon and accessible via SMB/CIFS shares and NFS exports
    • Internet access for both on-prem and SoftNAS VMs

    Isilon Disaster Recovery (DR) Step by Step

    1. Spin up SoftNAS from the marketplace and present a hot file system (share).
    2. Set up another SoftNAS machine as a VMware® VM on-prem using the downloadable SoftNAS OVA.
    3. From the on-prem SoftNAS VM, use the built-in Lift & Shift wizard to create a replication schedule that initially copies the data from the Isilon, through the SoftNAS VM, into the public cloud. If the network has issues during the transfer, a suspend/resume capability ensures large transfers do not have to start over (very important with hundreds of terabytes and lengthy initial transfer jobs).
    4. Once the initial synchronization copy completes, Lift & Shift enters Continuous Sync mode, ensuring that any production changes to local Isilon volumes are replicated to the cloud immediately.
    5. With the data replicated to the cloud, it is instantly available should on-prem services cease to function. This affords customers a quick transition to cloud services during DR events, whether local applications need to be spun up temporarily or eventually migrate fully to the cloud.


    A detailed description of technologies used in providing the solution

    Who is SoftNAS?

    SoftNAS NAS Filer has been focused 100% on cloud-based file storage and NAS solutions since 2013. The SoftNAS product has been tried and tested by thousands of customers in 36 countries across the globe. SoftNAS has brought enterprise features like high availability (HA), replication, snapshot data protection, and AD integration to the two major cloud providers, AWS and Azure. The software is listed in the marketplace and can be spun up as an on-demand instance or as a BYOL instance.

    How is DR data replication handled by SoftNAS?

    After deploying the instances, the next challenge is establishing replication and maintaining DR synchronization from the on-premises Isilon shares to the public-cloud SoftNAS DR system(s). SoftNAS provides an integrated, flexible data synchronization tool along with a fill-in-the-blanks “Lift and Shift” feature that makes replication and sync quick and easy.

    Easy to setup

    The admin just chooses the source file server (and any subdirectories) via the SMB or NFS share, then configures the transfer job. Once the transfer job starts, it securely synchronizes the files from the Isilon source into the SoftNAS filer running in the cloud.

    Monitor Progress

    A progress dashboard shows the percent complete of the replication job, along with detailed status information as the job progresses. If for any reason the job gets interrupted, it can simply be resumed from where it left off. If there’s a reason to temporarily suspend a large transfer job, there is a Pause/Resume feature.

    Less than optimal network links

    Sometimes remote Isilon file servers (or Windows or other file servers) also need to be replicated to the same cloud-based DR region: for example, from factories, remote offices, branch offices, or other edge sites over limited, high-latency WAN or Internet/VPN connections. SoftNAS addresses this barrier with its built-in UltraFast™ global data accelerator feature.

    SoftNAS UltraFast provides end-to-end optimization and data acceleration for network connections where packet loss and high round-trip times commonly limit the ability of the cloud to be used for file replication.

    SoftNAS UltraFast allows remote sites to sustain high-speed, large-file transfers of 600 Mbps or more, even when facing 300 milliseconds of latency and up to 2% packet loss on dirty or congested networks, or across packet-radio, satellite, or other troublesome communications links. UltraFast can also throttle or limit network bandwidth consumption using its built-in bandwidth scheduling feature.
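    The reason plain TCP struggles under those conditions can be sketched with the well-known Mathis throughput model, which bounds a standard TCP stream by MSS, round-trip time, and packet loss. This is a rough analytical upper bound, not a measurement of any specific product:

```python
import math

def mathis_tcp_mbps(mss_bytes: float, rtt_s: float, loss: float) -> float:
    """Mathis et al. model: TCP throughput <= (MSS / RTT) * (C / sqrt(p)),
    with C ~ sqrt(3/2). A coarse upper bound for a single standard stream."""
    C = math.sqrt(3 / 2)
    bytes_per_s = (mss_bytes / rtt_s) * (C / math.sqrt(loss))
    return bytes_per_s * 8 / 1e6   # bytes/s -> megabits/s

# 300 ms RTT and 2% packet loss with a typical 1460-byte MSS:
cap = mathis_tcp_mbps(1460, 0.300, 0.02)
```

    At 300 ms RTT and 2% loss, the model caps a single standard TCP stream at well under 1 Mbps, which is why an accelerated transport layer matters on such links.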

    Why the Customer Chose SoftNAS vs. Alternative DR Solutions

    SoftNAS provides an ideal choice for the customer who is looking to host data in the public cloud in that it delivers the most granular configuration capabilities of any cloud storage solution. It enables customers to custom-build their cloud NAS solution using familiar native cloud compute, storage, networking, and security services as building blocks.

    Cost optimization can be achieved by combining the correct instance type with the backend pool configuration to meet throughput, IOPS, and capacity requirements. The inclusion of SoftNAS SmartTiers, SoftNAS UltraFast, and Lift and Shift as built-in features makes SoftNAS the most complete cloud-based NAS solution available as a pre-integrated cloud DR and migration tool. Although the customer started with the Isilon DR solution, it now has the cloud infrastructure for DR replication of all its remote Windows and other file servers as a next step.

    With the alternatives, customers must pick and choose point products from different vendors, act as their own systems integrator, and develop their own DR and migration systems. SoftNAS’s Platinum product provides all these capabilities in a single, cost-effective, ready-to-deploy package that saves time, reduces costs, and minimizes the risks and frustrations typically encountered with those methods.

    Summary and Next Steps

    Isilon is a popular, premises-based file server that has served customers well for many years. Now that enterprises are moving into the hybrid cloud and public cloud computing era, customers need alternatives to on-prem file servers and dedicated DR file servers. A common first step we see customers taking at SoftNAS® is to start by replacing the Isilon file server in the DR datacenter as part of a bigger objective to eliminate the DR datacenter in its entirety and replace it with a public cloud. In other cases, customers do not have a DR datacenter and are starting out using the public cloud as the DR datacenter, while keeping the Isilon, NetApp, EMC, and Windows file servers on-prem.

    In other cases, customers wish to replace both on-prem primary and DR datacenter file servers with a 100% cloud-based solution, then either rehost some or all the applications in the public cloud or access the cloud-hosted file server via VPN and high-speed cloud network links. In either case, SoftNAS virtual NAS appliance is combined with native cloud block and object storage to deliver the high-performance, cloud-based file server solutions the customers want in the cloud, without compromising on performance, availability, or overspending on cloud storage.

    Tired of Slow EFS Performance? Need Predictable NFS and CIFS with Active Directory?


    I’m Rick Braddy, Founder and CTO of Buurst. Like you, I have been in a situation where I was dependent upon Storage-as-a-Service (SaaS) and forced to live with slow performance, with no control or say in how NFS storage was structured or delivered. In my case, it was on-premise on VMware®, using NFS-mounted multi-user, shared Isilon® storage arrays in a private cloud, where we could not install our own storage appliances. We were stuck and ended up having to switch data centers to resolve this storage performance issue that plagued our business. It’s very frustrating when the users complain about slow application performance, then the bosses get riled up, and there’s little to nothing you can do about it. So I feel your pain.

    Today customers have choices, along with the freedom to choose. So if you’re experiencing slow or inconsistent performance with your current storage provider (EFS or other) and need answers, you came to the right place. On the other hand, if you haven’t yet deployed and are evaluating your options, this is a great time to get the facts before you find yourself stuck and have to switch horses later, as we see all too often.

    We commonly see AWS customers who have tried AWS EFS (and other options) and hit slow performance, high costs, and other barriers that do not meet their business requirements. But it’s not just throttling from burst limits that slows performance: the very nature of shared, multi-tenant filesystems makes them prone to bottlenecks.

    A shared, multi-tenant filesystem must deal with millions of concurrent filesystem requests from thousands of other companies, who compete for filesystem resources within a Shared Data Service Lane. It’s easy to see why performance becomes slow and unpredictable in a shared, multi-tenant filesystem that must service so many competing requests. In such an environment, bottlenecks and collisions at the network and storage levels are unavoidable at times. 

    Contrast that with having a Dedicated Data Express Lane like Buurst SoftNAS, your own exclusive Cloud NAS with its caching and resources focused on your data alone, where your applications’ needs are met consistently and constantly at maximum performance. 

    Consider the published EFS burst limits. As applications use more throughput, burst credits are consumed. Applications that use too much I/O are penalized, and EFS throttles performance, one of the potential causes of slow AWS EFS performance.
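    The burst mechanism can be modeled roughly as a credit balance that accrues while throughput is below a size-based baseline and drains while above it. The 50 MiB/s-per-TiB baseline below reflects AWS's published bursting figures at the time of writing; treat the numbers as illustrative and check the current EFS documentation:

```python
def efs_baseline_mibps(stored_tib: float) -> float:
    """Bursting-mode baseline throughput: ~50 MiB/s per TiB stored
    (published AWS figure at the time; verify against current docs)."""
    return 50.0 * stored_tib

def credit_balance_after(stored_tib, used_mibps, hours, start_credits_gib=0.0):
    """Simplified credit model: credits accrue when throughput is under the
    baseline and drain when over it. Returns the balance in GiB."""
    net_mibps = efs_baseline_mibps(stored_tib) - used_mibps
    return start_credits_gib + net_mibps * hours * 3600 / 1024   # MiB -> GiB

# A 1 TiB filesystem pushing a sustained 100 MiB/s drains credits at 50 MiB/s:
bal = credit_balance_after(1.0, 100.0, hours=1, start_credits_gib=500.0)
```

    Once the balance hits zero, throughput falls back to the baseline, which is the throttling behavior customers experience as a sudden slowdown.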

    EFS is a shared, multi-tenant filesystem, subject to burst limits and multi-tenant bottlenecks. SoftNAS is a dedicated, high-performance filesystem with no burst limits and access to full AWS EBS performance, regardless of how much storage capacity you use. This is the first core difference between the two – shared vs. dedicated infrastructure. 

    There’s a place in the market where each type of filesystem fits and serves its individual purpose. The question is, which approach best meets your application and business needs? 

    Performance results can also vary based upon many other factors and each application’s unique behavior. Therefore, we recommend customers run benchmarks and test their applications at scale with simulated user loads before finalizing the components that will be placed into production as part of their cloud stack. This is often best accomplished during a proof-of-concept stage before applications go into production. Unfortunately, customers often discover too late in the process that they have performance issues and are in urgent need of a solution. 

    Since 2013, SoftNAS has provided thousands of customers with high-performance, dedicated Cloud NAS solutions. We originally pioneered the “cloud NAS” category as the #1 Best-selling NAS in the Cloud. By coupling industry standards like ZFS on Linux with our patented cross-zone high-availability and high-performance AWS EBS and S3-backed storage technologies, SoftNAS has consistently provided customers with the most flexibility, choice, and value business-critical NFS and CIFS with Active Directory for AWS®. 

    What do customers say about AWS EFS Slow Performance?

    EFS has been in the marketplace for several years now, so there’s been plenty of time to hear from customers about their experiences. It’s helpful to consider what customers see and report.

    Here are some examples of the comments our Support Team sees.

    You can search for “AWS EFS slow performance” to do your own research.

    “We want to migrate about 5 percent of our 150+ Centos based servers from on-prem to AWS and need NFS and CIFS but AWS does not offer EFS in Canada.”
    “Hi we were suggested to contact you guys from our AWS rep in helping us deal with the issues EFS has due to the slowness of I/O … our AMI’s currently mount our EFS mount point and due to the nature of the software we are using it tends to use a lot of file_get_contents, is_dir functionality which is very slow with EFS compared to a standalone server with SSD. Is this something you guys can help us with and get it up and running in our VPC?”
    “We host multiple Magento based websites for our clients in AWS and looking for a solution that we can use to replace AWS EFS as it is very slow with many small files and it is not yet available in many regions.”
    “I am trying the trial of SoftNAS in our AWS cloud for potential use for SMB/CIFS & NFS to be used as a replacement for EFS. We have reached the limit of the capabilities of EFS and so looking into other products.”
    “I need a demo by which I can decide the SoftNAS is suitable AMI for my production use case. This is very high priority as I need to implement this for persistence storage for Kubernetes as EFS is not available in Mumbai region.”

    “Need ability to use S3 as the backend storage instead of EBS/EFS as they are much too expensive for us.”

    The above is just a sample of what we typically hear from customers after evaluating or deploying EFS. EFS is a solid NFS-as-a-service offering that meets many customers’ cloud file storage needs. Some customers, however, need more than a multi-tenant, shared filesystem can deliver. 

    What’s needed is a full-featured, enterprise-grade Cloud NAS that delivers: 

    • A tunable level of dedicated, predictable performance 
    • Guaranteed cross-zone high-availability with an up-time SLA 
    • Storage efficiency features (e.g., compression, deduplication) that mitigate the costs of cloud storage 
    • Support for EBS (Provisioned IOPS, SSD, magnetic) and highly durable, low-cost S3 storage 
    • A full-featured POSIX-compliant filesystem with full NFS that supports all the expected features at scale 
    • CIFS with Active Directory integration and full ACL support for Windows workloads 
    • Ability to scale storage into the hundreds of terabytes or petabytes while maintaining performance consistency at a reasonable cost level that doesn’t break the bank or derail the project 
    • Storage auto-tiering that maximizes performance while minimizing storage costs 
    • Data checksumming and other data integrity features that verify retrieved data is accurate and correct 
    • Instant storage snapshots and writable clones that do not involve making expensive, time-consuming copies of data for rapid recovery and previous versions 
    • Integrated data migration tools that make lifting and shifting production workloads into the cloud faster and easier 

      How can SoftNAS address all these issues better than AWS EFS?

      Simple. That’s what it was designed to do from the beginning – provide cloud-native, enterprise-grade “cloud NAS” capabilities in a way that’s designed to squeeze the maximum performance from EC2, EBS, and S3 storage. And cloud storage and data management is all we have done, every day, since 2013, so we are among the world’s top cloud storage experts.

      SoftNAS runs as a Linux virtual machine image on EC2. First, EBS and S3 cloud storage is assigned to the SoftNAS instance launched from AWS Marketplace or the AWS Console. Next, that storage is aggregated into ZFS filesystems as “storage pools”. Then these pools can be thin-provisioned into “volumes” that are shared out as NFS, CIFS, AFP, or iSCSI for use by applications and users.

      SoftNAS leverages arrays of native EBS block devices and S3 buckets as underlying, scale-out cloud storage. Data is striped across arrays of EBS and S3 devices, thereby increasing the available IOPS (I/O per second).  To add more performance and storage capacity, add more devices at any time. 
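      A back-of-the-envelope sketch of that striping effect follows. The per-device IOPS figure is illustrative, and real ceilings depend on RAID layout, queue depth, and instance network limits:

```python
def aggregate_iops(device_iops: float, devices: int, raid_penalty: float = 1.0) -> float:
    """Striping reads across N block devices scales IOPS roughly linearly.
    `raid_penalty` (0-1) discounts for parity/mirror overhead on writes;
    all numbers are illustrative, not vendor specifications."""
    return device_iops * devices * raid_penalty

# Eight gp2-style EBS volumes at ~3,000 IOPS each, read-striped:
total = aggregate_iops(3000, 8)   # theoretical ceiling before instance limits
```

      The same logic motivates "add more devices at any time": each additional device raises the theoretical ceiling until the instance's own network or EBS bandwidth limit becomes the bottleneck.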

      SoftNAS runs on a dedicated instance within your AWS account, so everything is under your control – how many and what types of storage devices are attached, how much RAM is available for caching, how much direct-attached SSD to use (based on instance choice), along with how much CPU to allocate to compression, deduplication, and performance. Incidentally, compression and deduplication can reduce your actual storage costs by up to 80%, depending upon the nature of your data (you don’t get any storage efficiency features with EFS). 

      And because ZFS is a copy-on-write filesystem and SoftNAS automatically creates storage snapshots for you based on policies, you always have previous versions of your data available at your fingertips, in case something ever happens. You can quickly go back and recover your data without rolling a backup restore (you don’t get any storage snapshots or the ability to recover with EFS). 

      ZFS provides complete POSIX-compliant semantics as the SoftNAS filesystem. CentOS Linux provides full NFS v4.1 and NFS v3 filesystem semantics, including the metadata-heavy access patterns behind calls like file_get_contents and is_dir. And because SoftNAS is built upon Linux, you get native NFS support with no functional limitations. 

      SoftNAS supports Windows CIFS with SMB 3 protocol, with Active Directory integration for millions of AD objects. All of these details matter when you’re deploying applications that rely on proper locking, filesystem permissions, full NFS semantics, NTFS ACLs, etc.   

      SoftNAS delivers reliable performance with two levels of caching: RAM plus direct-attached SSD for massive read caching, which together provide predictable performance that YOU control. And later, if you need even more performance, you can upgrade your EC2 instances to add more IOPS and greater throughput with more network bandwidth. 
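      The benefit of layered read caches can be sketched as an expected-latency calculation. All hit ratios and latency figures here are hypothetical round numbers, not measured SoftNAS or EBS values:

```python
def effective_latency_us(ram_hit, ssd_hit, ram_us=0.1, ssd_us=100.0, ebs_us=1000.0):
    """Expected read latency with a RAM cache (level 1) and SSD read cache
    (level 2) in front of EBS. Inputs are hypothetical round numbers."""
    miss = 1.0 - ram_hit - ssd_hit          # fraction of reads going to EBS
    return ram_hit * ram_us + ssd_hit * ssd_us + miss * ebs_us

# 70% of reads served from RAM, 20% from SSD, 10% falling through to EBS:
lat = effective_latency_us(0.70, 0.20)
```

      Even with only a modest RAM hit rate, most reads avoid the backend entirely, which is why cache sizing (instance RAM and attached SSD) is such a large lever on perceived performance.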

      SoftNAS keeps you in control instead of turning your data over to the multi-tenant EFS filesystem and living with its various limitations. 
      Cost management across storage tiers:
      • Tier 1: highest performance, most expensive storage, most active data
      • Tier 2: moderate performance, medium-expense storage, less active data
      • Tier 3: lowest performance, least expensive storage (highly durable S3 object storage), least active data

      Using the combination of three data efficiency methods, SoftNAS can reduce cloud storage costs more than any other alternative available today. No other vendor offers these kinds of cloud storage cost savings while maximizing performance and providing the level of data protection and control that SoftNAS delivers. Additionally, by working with our Sales team, you can get volume discounts for larger capacity implementations to save you even more. 

      You may wonder “who else uses SoftNAS?” Recognizable customers across 36 countries globally have chosen SoftNAS for their business-critical cloud application deployments and trust their business data to it on a 24 x 7 x 365 basis. 


      You will find that customers say that Buurst provides the best technical support and cloud expertise available, along with its world-class SoftNAS Cloud NAS product. 

      By the way, over the years, we have developed an excellent relationship with AWS. AWS suggested that we create this and other comparison material to help customers understand the differences between EFS and SoftNAS. We are great partners with one common objective – to make our customers successful together. 

      How did this come about?

      I started Buurst in 2012 as a former traditional storage customer, so I know what you’re dealing with and what you need and want in a storage software product and partner. I believed in AWS and the future of the public cloud before it was apparent to everyone that it would be the next big thing. As a result, we were early to market in 2013 and have many years of head start vs. the alternatives. 

      Today, I ensure our team delivers excellence to our customers every day. Our management team ensures we hire, train, and provide the best technical support resources available globally. In addition, we bring years of cloud storage and multi-cloud performance experience to the table to ensure your success. 

      Customers always come first at Buurst. Our cloud experts are here to help you quickly address your most pressing cloud data management needs and ensure your success in the cloud – something we’ve done for thousands of customers since 2013. You can count on Buurst to be there with you. We help customers with all issues around networking, security, VPCs, and other areas all the time. 

      For example, our networking and storage experts have helped customers deploy thousands of VPCs in just about every imaginable security and network topology.

      Check out our popular Best Practices Learned from 1,000 VPC Configurations. This is just one of many examples of how Buurst helps its customers make the journey to the cloud faster, easier, and better than going it alone. 

      What regions is SoftNAS available in?

      Our products are available in all regions globally. We typically add support for new regions within 60 days or less (as soon as our QA team can test and certify the product in the new region). This means you can count on Buurst to be where you need it to be when you need it there. 

      What are my options to get started?

      There are many ways to get started with Buurst. If you need help fast, I recommend you reach out to our solutions team or call us at 1-832-495-4141 to see how Buurst can address your cloud file storage and data management needs. 

      Need help migrating your data from on-premise or EFS?

      SoftNAS now includes automated data migration capabilities that take the pain out of migration. We can also assist you in the planning and actual migration.  Learn more.

      Prefer some help getting started?

      If you’d prefer to see a demo first or get some assistance evaluating your options further, schedule a demo and free consultation with one of our experts now and get your questions answered one-on-one. 

      Need more information before proceeding?

      That’s understandable. Here are some helpful links to more information:

      Remember: we are here and ready to help. You don’t have to go it alone anymore. Reach out, and let’s schedule a time together to explore how our cloud experts can be of service and quickly address your needs, at no cost and no obligation. Click here to schedule a free consultation now. 

      Thank you for visiting Buurst. Here’s to your cloud success! We look forward to being of service. 

      Got the Public Cloud Storage Summertime Blues? I’ve Got the Cure.


      The SoftNAS SmartTiers Cost Savings Calculator shows you how much you can potentially save on storage costs from public cloud vendors.

      We’ve heard the many benefits of moving to the public cloud including:

      • How scalable it is.
      • How fast it is to set up and tear down an environment.
      • The unlimited and bottomless storage availability.
      • The increased levels of security.

      • The ease with which testing can be accomplished and, of course, how inexpensive it is.

      Sounds a lot like a salesperson promising you the world with all upside and no downside, doesn’t it? Well, much of the hype about the public cloud is true, for the most part. “Wait,” you say, “how can this be true?” Public cloud storage can be pricey and adds up quickly if you have any data-hungry applications running in the cloud. Not only can this create the Summertime blues; it can bring on the blues for every season as well.

Friends (if I may call you that), I’ve got the cure for what ails you. What if I told you that I have a way for you to save up to two-thirds of your public cloud storage costs – would you believe me? Heck, you don’t know me; maybe I’m one of those aforementioned salespeople promising you the world. Well, I’m not, and I can even prove that I can save you money on your public cloud storage costs. The best part: you can prove it to yourself, with your own information and no “sleight of hand” on my part.

SoftNAS® announced and made available its SoftNAS® 4 release, which contains a powerful, patent-pending feature called SoftNAS® SmartTiers™. This feature (currently available for beta testing) offers automated storage tiering that moves aging data from more expensive, high-performance block storage to less expensive object storage according to customer-set policies (up to 4 storage tiers), reducing public cloud storage costs by up to 67% without sacrificing application performance.
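SmartTiers itself is proprietary, but the general idea of policy-based, age-driven tiering can be sketched in a few lines. Everything below (tier names, age thresholds, the `assign_tier` helper) is an illustrative assumption, not SoftNAS internals:

```python
from datetime import datetime, timedelta

# Hypothetical tiering policy: each entry is (tier name, maximum data age).
# Data that ages past a threshold falls through to the next, cheaper tier.
TIER_POLICY = [
    ("ssd-block", timedelta(days=30)),   # hot data stays on fast block storage
    ("hdd-block", timedelta(days=90)),   # warm data on cheaper block storage
    ("object", timedelta.max),           # cold data ages out to object storage
]

def assign_tier(last_access: datetime, now: datetime) -> str:
    """Return the first tier whose age threshold the data still satisfies."""
    age = now - last_access
    for tier, max_age in TIER_POLICY:
        if age <= max_age:
            return tier
    return TIER_POLICY[-1][0]

now = datetime(2018, 7, 1)
print(assign_tier(datetime(2018, 6, 20), now))  # recently touched -> "ssd-block"
print(assign_tier(datetime(2018, 1, 1), now))   # aged out -> "object"
```

The design choice worth noting is that the policy is just ordered data: adding a fourth tier or changing a threshold means editing the list, not the logic.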

      SoftNAS SmartTiers

“That’s great,” you say, “but how can I prove it to myself?” Well, that’s what I’m so excited to tell you about, because you can prove it to yourself using your own data.

SoftNAS has just released the SoftNAS SmartTiers Cost Savings Calculator on its corporate website. The calculator shows how much you can potentially save on storage costs from public cloud vendors like Amazon Web Services™ (AWS) and Microsoft® Azure™ by using the automated storage tiering feature of SoftNAS® 4: SoftNAS SmartTiers.

The SmartTiers Cost Savings Calculator provides a simple, clean interface for entering a few key data points; it then presents, in graphical form, the cost of public cloud storage both with and without SoftNAS SmartTiers, including the potential savings you could achieve. After filling out the quick registration form, you will also receive a detailed multi-page email report that breaks down the potential costs and savings over a three-year period.
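The savings math behind a calculator like this is straightforward to sketch. The prices, capacity, and hot-data split below are assumed example figures (not the calculator's actual inputs or AWS/Azure list prices), chosen only to show the shape of the with/without comparison:

```python
# Assumed example prices, in $/GB-month (illustrative, not vendor list prices).
BLOCK_PRICE_GB_MONTH = 0.10    # high-performance block storage
OBJECT_PRICE_GB_MONTH = 0.023  # object storage

def monthly_cost(total_gb: float, hot_fraction: float) -> float:
    """Cost when hot_fraction of the data stays on block storage
    and the remainder has been tiered down to object storage."""
    hot = total_gb * hot_fraction
    cold = total_gb - hot
    return hot * BLOCK_PRICE_GB_MONTH + cold * OBJECT_PRICE_GB_MONTH

total_gb = 100_000  # assumed 100 TB workload
without_tiers = monthly_cost(total_gb, hot_fraction=1.0)  # everything on block
with_tiers = monthly_cost(total_gb, hot_fraction=0.2)     # 20% stays hot
savings = 1 - with_tiers / without_tiers
print(f"${without_tiers:,.0f}/mo vs ${with_tiers:,.0f}/mo -> {savings:.0%} saved")
```

With these assumed numbers the savings come out around 62%; the actual figure depends entirely on your data's access pattern and your cloud provider's pricing, which is exactly what the calculator lets you plug in.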

      SoftNAS® SmartTiers™ Storage Cost Savings Calculator

      So, my friends, do you still have the public cloud storage Summertime blues, or have I cured them for you? Please try out the new SoftNAS SmartTiers Cost Savings Calculator and let us know your thoughts. Thank you.

      About the Author:

      John Bedrick, Senior Director of Product Marketing at Buurst

Bringing more than four decades of technology experience to SoftNAS, John is responsible for all product marketing functions, including pricing, content development, and competitive, industry, and market analysis. John has held leadership roles with Microsoft, Intel, Seagate, CA Technologies, Cisco Systems, National Semiconductor, and several startup companies. John began his career in technology in IT management with Time-Warner and Philip Morris, where he was responsible for introducing personal computing devices and networking to both mainframe-centric companies. He has helped numerous companies increase their bottom-line revenue by over USD $1 billion. John holds a Bachelor of Science degree in Economics and Marketing from Wagner College and joint master’s and doctorate degrees in Metaphysics from ULC, as well as an advanced certificate in international business from Harvard Business School. John is an internationally recognized subject matter expert in the areas of security, data privacy, and storage. He is an accomplished speaker and has been featured in numerous international venues.