Using AWS Disaster Recovery to Close Your DR Datacenter

Maintaining physical Disaster Recovery (DR) data centers grows more cost-prohibitive every year. Moving your DR data center to the Amazon Web Services (AWS) Cloud enables faster disaster recovery and greater resiliency without the cost of a second physical data center. In this article, we’ll discuss how to use AWS to manage disaster recovery in the cloud.

How to Use AWS Disaster Recovery to Close Your DR Datacenter

We’ll give a brief overview of:

  • How AWS disaster recovery works
  • The benefits of using AWS DR
  • An overview of DR architectures
  • An AWS disaster recovery example

AWS Disaster Recovery Overview

Before we begin, there are four terms you should be familiar with when discussing disaster recovery:

Business Continuity:

Ensuring that your organization’s mission-critical business functions continue to operate, or recover quickly, after a serious incident.

Disaster Recovery:

Disaster recovery is all about preparing for and recovering from a disaster: any event that hurts your business, such as hardware failures, software failures, power outages, or physical damage to your building from fire, flooding, hurricanes, or even human error. Disaster recovery is about planning for those incidents.

Recovery Point Objectives (RPO):

RPO is the acceptable amount of data loss measured in time. If your disaster hits at 12:00 PM and your RPO is one hour, your system should recover all the data that was in the system before 11:00 AM. Your data loss will only span from 11:00 AM to 12:00 PM.

Recovery Time Objective (RTO):

RTO is the time it takes after a disruption to restore your business processes to their agreed-upon service levels. If your disaster occurs at noon and your RTO is eight hours, you should be back up and running by no later than 8:00 PM.

Keep your Primary Datacenter, But Shift Disaster Recovery to AWS

We’re not saying that you need to shut down all of your data centers and migrate them to AWS. You can keep your primary data center, but close your DR data center and migrate those workloads to AWS.

[Image: AWS disaster recovery vs. on-premises datacenter]

In the image above, the traditional DR architecture is on the left: a primary datacenter replicating to a DR datacenter. As soon as a disaster hits the primary datacenter, you can recover from the DR site, so your users stay up and running without too much of an impact.

With traditional DR, you have all the infrastructure required to support the duplicate environment: the physical location, power, cooling, security to ensure the site is protected, procuring storage, and enough server capacity to run all of your mission-critical services — including user authentication, DNS, monitoring, and alerting.

On the right, we have Disaster Recovery managed on AWS. What you see is the main datacenter, but you can set up replication to the AWS Cloud using a couple of services including Amazon S3, Route 53, Amazon EC2, and SoftNAS Cloud NAS.

You’ll get all the benefits of your current DR datacenter but without having to maintain any additional hardware or have to worry about overprovisioning your current datacenter.

Physical Datacenter On-Premises vs. AWS

[Image: AWS disaster recovery benefits]

Let’s compare a physical DR data center vs. AWS Disaster Recovery. With your DR data center, there’s a high cost to build and maintain it. You’re responsible for storage, power, networking, the internet, and more. There are a lot of capital expenditures involved in maintaining the DR datacenter.

Storage, backup tools, and retrieval tools are expensive. It often takes weeks to add in more capacity because planning, procurement, and deployment just take time with physical datacenters. It’s also challenging to verify your DR plans. Testing for DR on-site is time-consuming and takes a lot of effort to make sure it’s working correctly.

On the right, we have the benefits of using AWS for DR. There are not a lot of capital expenditures when using AWS. The beauty of using AWS to manage DR is it’s all on-demand so you’re only going to pay for what you use.

There’s also a consistent experience across the AWS environments. AWS is just highly durable and highly available. There’s a lot of protection in making sure that your AWS DR is going to be up and running to go. You can also automate your recovery. Finally, you can even set up disaster recovery per application or business unit. So different business units within an organization can have different recovery objectives and goals.

Managing AWS DR Infrastructure

[Image: AWS disaster recovery infrastructure responsibilities]

The beauty of using AWS to manage your DR is you’re only responsible for your snapshot storage. AWS is handling routers, firewalls, operating systems, and more. AWS just takes all that off your hands, so you can focus on more important tasks and projects.

How does AWS disaster recovery mirror your DR data center?

[Image: Mapping traditional DR services to AWS services]

The image above maps traditional DR services to AWS services. For example: for DNS, AWS has Route 53; for load balancers, AWS has Elastic Load Balancing; web servers map to EC2 instances with Auto Scaling; and data centers map to Availability Zones.

Because everything is on AWS, enterprise security standards are met. AWS is always up to date with certification, whether it’s ISO, HIPAA, ITAR, or other compliance standards.

There is also the physical security aspect. AWS datacenters are highly secure, located in nondescript facilities, and physical access is strictly controlled. AWS logs all physical access to its data centers. For hardware, software, and networking, there is systematic change management: updates are phased and storage is safely decommissioned. There is also automated monitoring, self-auditing, and advanced network protection.

AWS Disaster Recovery Architecture

Disaster recovery is the process by which an organization anticipates and addresses technology-related disasters. IT systems in any company can go down unexpectedly due to unforeseen circumstances, such as power outages, natural events, or security issues.

Let’s look at some DR architectures and scenarios for AWS. There are four main AWS DR architectures:

  1. Backup & Restore
  2. Pilot Light
  3. Hot Standby
  4. Multi-site

[Image: The four AWS disaster recovery architectures]

Before we discuss the architectures, let’s look at some of the AWS services involved with DR. For backup and restore, you’re not using too many AWS core services. You’re going to be using Amazon S3, Glacier, SoftNAS virtual NAS appliance (for replication), Route 53, and VPN.

[Image: AWS services used for disaster recovery]

As you move on to Pilot Light, you add in CloudFormation, EC2, EBS volumes, Amazon VPCs, and Direct Connect. For Hot Standby, you add in Auto Scaling and Elastic Load Balancing, and you also set up multiple Direct Connect links. For Multi-site, there’s a whole host of AWS services to add in.

1) AWS Backup & Restore Architecture

The way backup and restore works is shown in the image below. We have an on-premises data center on the left and AWS DR infrastructure on the right.

[Image: AWS disaster recovery backup and restore architecture]

On the left, you have a data center using the iSCSI protocol, with a virtual NAS on top of that managing file storage. With AWS DR, you can use a combination of SoftNAS Cloud NAS, Amazon S3, EC2, and EBS to manage your backup architecture.

In traditional environments, data is backed up to the DR data center. It’s offsite, so if something fails, restoring your system takes a long time because you have to retrieve the backups from the DR site and then restore from them.

Amazon S3 and Glacier are really good for this. Using a service like SoftNAS Cloud NAS enables you to take snapshots of your on-premises data and copy them into S3 for backup. The benefit is that your snapshot data volumes are stored in S3, giving you a highly durable backup.
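As a rough illustration of the backup side of this pattern, the sketch below uses boto3 to upload a backup file to S3 and add a lifecycle rule that moves older backups to Glacier. The bucket name, file path, key prefix, and 30-day threshold are placeholders for illustration, not values from this article.

```python
import boto3

# Hypothetical bucket and file names; substitute your own.
BUCKET = "example-dr-backups"
s3 = boto3.client("s3")

# Upload a local backup file (for example, an exported snapshot) to S3.
s3.upload_file("/backups/app-data-2024-01-01.tar.gz", BUCKET,
               "nightly/app-data-2024-01-01.tar.gz")

# Transition older backups to Glacier to keep long-term retention cheap.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "nightly/"},
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```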

[Image: How the backup and restore architecture works]

It’s cost-effective. In case of disaster, you retrieve your backups from S3 and bring up the required infrastructure: EC2 instances from prepared AMIs, load balancing, and so on. You restore the system from a backup.

With the backup and restore architecture, it’s a little bit more time-consuming. It’s not instant, but there is a workaround for that.

With SoftNAS Cloud NAS, you can set up replication with SnapReplicate. Your data is instantly available instead of having to wait for it to download and restore, so your RTO and RPO drop from hours or days to minutes, or at most an hour or two.

2) AWS Pilot Light Architecture

Moving on to the Pilot Light architecture, this is a scenario in which a minimal version of your primary data center’s architecture is always running in the cloud.

[Image: AWS disaster recovery Pilot Light architecture]

It’s pretty similar to the Backup and Restore scenario. With AWS, you maintain the pilot light by configuring and running the most critical core elements of your system in AWS. When the time comes for recovery, you can rapidly provision a full-scale production environment around that critical core.

To prepare for the Pilot Light architecture, you replicate all of your critical data to AWS and prepare all of the resources required for an automatic start, including AMIs, network settings, and load balancing. We even recommend reserving a few instances.

[Image: How the Pilot Light architecture works]

In case of disaster, you automatically bring up the resources around the replicated core data set. Then you can scale the system as needed to handle your current production traffic. Again, one of the benefits of AWS is you can scale higher or lower based on your current needs.
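To make the "bring up the resources" step concrete, here is a minimal boto3 sketch that launches instances from a pre-baked AMI into a prepared subnet. The AMI, subnet, and security group IDs, the instance type, and the instance count are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# All IDs below are placeholders for the AMIs and network settings
# you prepared in advance as part of the pilot light.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # pre-baked application AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=4,                              # scale out around the core data set
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Purpose", "Value": "dr-pilot-light"}],
    }],
)
print([i["InstanceId"] for i in response["Instances"]])
```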

3) AWS Hot Standby Architecture

Moving on to the Hot Standby architecture: this is a DR scenario in which a scaled-down but fully functional version of your environment is always running in the cloud. A warm standby extends the Pilot Light elements and further reduces recovery time, because some of your services are always running in AWS; they’re not idle, and there’s no startup delay for them. By identifying your business-critical systems, you can fully duplicate them on AWS and have them always on.

[Image: AWS disaster recovery Hot Standby architecture]

Hot Standby handles production workloads pretty well. To prepare, you again replicate all of your critical data to AWS and prepare all of your required resources and your reserved instances. In case of disaster, you automatically bring up the resources around the replicated core dataset and scale the system as needed to handle your current production traffic.
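At failover time, scaling the warm standby up to production size might look like the following boto3 sketch; the Auto Scaling group name and sizes are assumptions for illustration only.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# "dr-web-asg" is a placeholder name for the scaled-down group already
# running in AWS; at failover time you raise it to production size.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="dr-web-asg",
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)
```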

[Image: How the Hot Standby architecture works]

The objective of Hot Standby is to get you up and running almost instantly. Your RTO can be about 15 minutes, and your RPO can vary from 1 to 4 hours.

4) AWS Multi-site Architecture

[Image: AWS disaster recovery Multi-site architecture]

Finally, we have the Multi-site architecture. This is where your AWS DR infrastructure runs alongside your existing on-site infrastructure. Instead of an active-passive setup, it’s an active-active configuration. This is the most instantaneous architecture for your DR needs: the moment your on-premises data center goes down, AWS picks up the workload almost immediately.

[Image: How the Multi-site architecture works]

AWS will be running your full production load without any decrease in performance, because it immediately fails over all of your production traffic. All you have to do is adjust your DNS records to point to AWS. Your RTO and RPO are within minutes, so there’s no need to worry about spending time re-architecting everything.
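Adjusting DNS to point at AWS can be done with a Route 53 record change. The sketch below upserts a weighted record for a hypothetical domain and load balancer; the hosted zone ID, record name, and ELB DNS name are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Hosted zone ID, domain, and ELB DNS name are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",
    ChangeBatch={
        "Comment": "Shift traffic to the AWS side of the multi-site deployment",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": "aws-site",
                "Weight": 100,   # raise the AWS weight, lower the on-prem weight
                "ResourceRecords": [{
                    "Value": "my-elb-1234567890.us-east-1.elb.amazonaws.com"
                }],
            },
        }],
    },
)
```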

AWS Disaster Recovery Example

[Image: AWS disaster recovery customer use case]

Here’s an example showing how our customers are using disaster recovery on AWS. The customer uses AWS to manage their business applications, which they’ve broken down into Tier 1, Tier 2, and Tier 3 apps.

For Tier 1 apps that need to be up and running 24/7, they’ve got their EC2 instances for all services running at all times. Their in-house and their AWS infrastructure are load balanced and configured for auto-failover. They do the initial data synchronization using in-house backup software or FTP. Finally, they set up replication with SoftNAS Cloud NAS to automatically failover in minutes.

With Tier 2 apps, they’re configuring only the critical core elements of the system – they don’t configure everything. Again, they’ve got their EC2 instances running only for the critical services. They pre-configured their AMIs for the Tier 2 apps that can be quickly provisioned. Their cloud infrastructure is load balanced and configured for AMI failover. They did the initial data sync with their backup software. Finally, replication was set up with SoftNAS Cloud NAS.

For Tier 3 apps, where the RPO and RTO aren’t as strict, they replicated all their data into S3 using SoftNAS Cloud NAS. Again, they did a sync with their backup software and pre-configured their AMIs. Their EC2 instances are spun up from objects within S3 through a manual process, but they’re able to get there pretty quickly.

To start using AWS Disaster Recovery, try SoftNAS AWS NAS Storage to start managing DR in the cloud.

SoftNAS Introduces Product Support Connection Feature on AWS Marketplace

[Image: AWS SoftNAS Product Support Connection]

On November 1, 2016 Amazon Web Services (AWS) introduced Product Support Connection (PSC), a new program that gives vendors like SoftNAS more visibility into the end customer in order to more easily provide product support. SoftNAS is proud to be the only storage launch partner and one of a handful of partners including Chef and Sophos.

What is Product Support Connection?

With PSC, SoftNAS customers on the AWS Marketplace can directly share their contact data with AWS. AWS then provides the data to SoftNAS via an API that quickly allows us to verify your info and streamline any support requests you might have.

How does it work?

You can learn more about the AWS PSC program by clicking the links below:

How does this benefit me?

SoftNAS customers on the AWS Marketplace will now benefit from world-class support. With your contact information kept up to date by AWS, we are able to provide support immediately because we already have it on hand.

Thanks again to AWS for launching such a great feature! We’re looking forward to sharing even more exciting news about SoftNAS during re:Invent 2016, from November 28th to December 2nd. If you will be at re:Invent and want to learn more about SoftNAS, stop by and say hello to us at booth #844. Alternatively, you can contact us to set up a time and date that work for you.

What is SoftNAS for AWS?

SoftNAS for AWS is a software-defined NAS filer that delivers enterprise-class NAS features with NFS, CIFS, iSCSI and AFP unified file services and S3 compatible object storage connectivity. You can try SoftNAS – with a free $100 AWS credit – for 30 days on AWS by clicking the link below:

[Image: AWS credit for the SoftNAS trial]

AWS Sydney Outage: How SoftNAS Customers Avoided Downtime

Amazon Web Services (AWS) in Sydney, Australia faced an outage on Sunday, June 5, 2016. Many websites and apps went down causing IT teams to scramble to get their services back up and running. Outages are a fact of life, whether data centers, public/private/hybrid clouds, or the power grid. But were your cloud applications built to failover during this outage?

Amazon did a great job handling the outage by providing a detailed summary of what happened and getting their instances back up and running in a few hours. But it does serve as a good lesson. Just like with on-premises data centers, outages in the public cloud happen. SoftNAS customers running in a high-availability configuration in the AWS Sydney region did not have any downtime.

How to Avoid Storage Downtime on AWS

SoftNAS recommends architecting your solutions on AWS to account for failure at the server level or the availability zone level. This means running your storage in a high-availability (HA) configuration and across AWS availability zones. HA aims to maintain uninterrupted service – typically 99.999% availability, which works out to only about five minutes of downtime per year.
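The five-minute figure follows directly from the availability target; the small calculation below (plain Python, no AWS dependencies) shows the downtime budget for a few common targets.

```python
# Downtime budget implied by an availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} availability -> ~{downtime:.1f} minutes of downtime per year")
```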

Businesses should also take advantage of Amazon’s Cross-Region Replication, a feature that automatically replicates your data across different AWS regions.
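If your data lives in S3, one way cross-region replication can be enabled is with boto3, as sketched below. The bucket names, regions, and IAM role ARN are invented for illustration, and both buckets must already exist (with versioning, which the sketch turns on) before the replication rule is applied.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder buckets: one in the source region, one in a second region.
SOURCE = "example-src-sydney"
DESTINATION = "example-dest-singapore"

# Replication requires versioning on both buckets.
for bucket in (SOURCE, DESTINATION):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

s3.put_bucket_replication(
    Bucket=SOURCE,
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder
        "Rules": [{
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Prefix": "",
            "Destination": {"Bucket": f"arn:aws:s3:::{DESTINATION}"},
        }],
    },
)
```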

The primary reason why your business needs high availability and cross-region replication is in the economics. You’re likely to be offline more often and for longer periods of time without an HA strategy. Plus the cost of downtime to your business is going to be large.

There’s more to it than just the cold hard numbers. Here are a few reasons why high availability and cross-region replication are so crucial to businesses in the cloud:

  • Your reputation will improve as your brand is known for its reliability versus your competitors
  • Reduce customer impact during planned maintenance. In many cases, you can completely avoid service disruption during planned maintenance events
  • If your customers are in two geographic locations, you can maintain object copies in AWS regions that are geographically closer to your users to minimize latency.

SoftNAS High Availability and No Downtime Guarantee

[Image: AWS Sydney outage – no downtime for SoftNAS customers]

During the AWS Sydney outage, SoftNAS customers in Australia were able to keep their services, applications, and websites up and running because they took advantage of SoftNAS’ High Availability features, backed by the SoftNAS No Downtime Guarantee. Our customers automatically failed over to a new instance and experienced no interruptions in service.

You can always check on the status of AWS by visiting the AWS Service Health Dashboard. Be sure to subscribe to a service’s RSS feed to immediately be notified of interruptions.

Have you calculated the cost of downtime to your business?

Docker Persistent Storage on AWS

Persistent storage is critical when running applications across containers on AWS. In this article, we cover how to build persistent storage for Docker containers on AWS. Learn best practices to spin-up, spin-down and move containerized applications across AWS environments, whether running Docker or Amazon EC2 Container Services (ECS).

You can jump to different sections of the article by clicking the hyperlinks below:

  1. Resources (Recording video, slides, whitepapers)
  2. What is Docker?
  3. Virtual Machines vs. Containers
  4. Why Does Docker Persistent Storage Matter?
  5. Application Delivery with Persistent Storage
  6. Amazon EC2 Container Service
  7. Docker and SoftNAS Cloud NAS
  8. SoftNAS Cloud NAS Overview
  9. Docker Persistent Storage Q&A

SlideShare: How to Build Docker Persistent Storage on AWS

SoftNAS Cloud NAS on the AWS Marketplace: Visit SoftNAS on the AWS Marketplace

What is Docker?

[Image: What is Docker?]

What is Docker and what are containers?

Containers running on a single machine share the same operating system kernel. They start instantly and make more efficient use of RAM. Images are constructed from layered file systems, so they share common files, which makes disk usage and image downloads much more efficient. Docker containers are based on open standards, which allows them to run on major Linux distributions and Microsoft operating systems. Containers isolate applications from each other and from the underlying infrastructure, providing an added layer of protection for the application.

Virtual Machines vs. Containers

[Image: Virtual machines vs. containers]

People often ask, “How are virtual machines and containers different?” Containers have resource isolation and allocation benefits similar to virtual machines, but a different architectural approach that makes them much more portable and efficient. Virtual machines include the application and its necessary binaries and libraries, but they also carry the overhead of an entire guest operating system, which can take tens of gigabytes. That’s a challenge the virtual desktop world has had to take on.

Containers take a very different approach: many containers run inside a single instance or virtual machine. They isolate processes and user space without being tied to any specific infrastructure, and they’re much more portable, able to run virtually anywhere that has a Docker infrastructure. The benefits of Docker containers over VMs are less overhead, faster instantiation, better isolation, and easier scalability. Containers are also a great fit for automation.

So why does DevOps care? Again, it’s all about automation, setup, launch and run. Don’t worry about what hardware you’re on. Don’t worry about finding the drivers for your servers. Now you can focus on your life cycle repeatability and not worry about keeping your infrastructure going.

Why Does Docker Persistent Storage Matter?

[Image: Why persistent storage matters for DevOps]

Why does persistent storage matter for Docker? We need to think about what our storage options are. Docker containers have their own internal storage, and you can use the storage inside the container if you want. The huge problem with that is simple: it goes away when the container is gone. Container storage is useful as a scratch pad, but not great if you have data you want to keep. So that’s the storage problem.

Docker containers can also mount directories from the host instance on AWS. In that case, the storage can be shared by all containers that run within that host. So what are the issues? You typically deploy a cluster of instances to house containers, and your containers move around those different instances and hosts. The host storage is persistent, but there’s no guarantee of how you can share that storage as containers move between hosts.
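For reference, mounting a host directory into a container with the Docker SDK for Python looks roughly like this; the image and paths are arbitrary examples, not part of the article’s setup.

```python
import docker

client = docker.from_env()

# Bind-mount a directory from the host container instance so data written by
# the application outlives the container itself. Paths are illustrative.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    volumes={"/data/app-uploads": {"bind": "/usr/share/nginx/html", "mode": "rw"}},
)
print(container.id)
```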

[Image: Container storage options]

Network storage is a much better option, because now you can share storage the way you’re used to and access it from anywhere. Then there’s cloud storage such as EBS and S3. If you want block storage, block doesn’t share very well; if you want S3, you have to code your containers to work directly with object storage. SoftNAS Cloud NAS gives you that middle-ground option, combining network storage with native cloud storage: put CIFS shares, NFS shares, and so on onto your cloud storage and have a complete solution.

Application Delivery with Persistent Storage

[Image: Application delivery with persistent storage]

Let’s talk about application delivery with persistent storage. If you look at container services, there are really three components in a container service. There’s your front-end service; think of that as what you see, the part that presents information, often on a webpage. Your back-end service provides the APIs and the execution part of the application within Docker. Then there are data storage services. If you use SoftNAS Cloud NAS as your data storage service, you get high availability and persistence.

So what does it mean to use ECS and SoftNAS together? You’re going to use Amazon’s clustering to kick off a cluster of container instances, along with auto-scaling.

[Image: Using AWS ECS with SoftNAS Cloud NAS]

By doing this, we can have a SoftNAS Cloud NAS instance in one availability zone using our virtual IP address and mount that storage a couple of ways. You can mount it directly into the containers so that they use NFS directly, or you can mount SoftNAS Cloud NAS into the container instance. The latter lets each container use it as local storage, which lessens the amount of capacity you need on your container instances while still providing the level of shareability you need.
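One way to wire this up from the Docker side is to create an NFS-backed named volume that points at the NAS export and mount it into each container. In the sketch below, the address, export path, volume name, and image are assumptions standing in for your own SoftNAS virtual IP and configuration.

```python
import docker

client = docker.from_env()

# The address and export path stand in for the SoftNAS virtual IP and NFS
# export; adjust them to match your own deployment.
nfs_volume = client.volumes.create(
    name="shared-app-data",
    driver="local",
    driver_opts={
        "type": "nfs",
        "o": "addr=10.0.1.50,rw",
        "device": ":/export/app-data",
    },
)

# Every container that mounts this volume sees the same shared data set.
client.containers.run(
    "nginx:latest",
    detach=True,
    volumes={"shared-app-data": {"bind": "/usr/share/nginx/html", "mode": "rw"}},
)
```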

The other thing we stress with ECS and Docker containers is you really want to stretch those across a couple of availability zones. That way if an AZ completely goes out, your auto-scaling can help by bringing up new container instances. This distributes the load on new containers, and by continuing access to SoftNAS virtual IP, you’ll then be able to keep up with your storage and stay online.

Amazon EC2 Container Service

[Image: Amazon EC2 Container Service]

Now let’s go into Amazon EC2 Container Service (ECS). It’s a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster of Amazon EC2 instances. Amazon ECS lets you launch and stop container-enabled applications with simple API calls, allows you to get the state of your cluster from a centralized service, and gives you access to many familiar Amazon features.

Amazon ECS schedules and places containers across your cluster based on resource needs, isolation policies, and availability requirements; having that scheduling ability is important. ECS also eliminates the need for you to operate your own cluster management and configuration management systems, so you don’t have to worry about scaling your management infrastructure.

One of the benefits of ECS is being able to easily manage clusters at any scale, with flexible container placement. You generally want containers to flow across availability zones.
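For a feel of the API calls involved, the boto3 sketch below creates a cluster, registers a simple task definition, and runs two copies of it. The cluster name, task family, image, and resource sizes are illustrative placeholders, and the cluster would still need registered container instances behind it.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Cluster and task names are placeholders for illustration.
ecs.create_cluster(clusterName="web-cluster")

ecs.register_task_definition(
    family="web-app",
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",
        "memory": 256,
        "portMappings": [{"containerPort": 80, "hostPort": 80}],
    }],
)

# Launch two copies of the task onto whichever container instances have room.
ecs.run_task(cluster="web-cluster", taskDefinition="web-app", count=2)
```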

[Image: Benefits of Amazon EC2 Container Service]

Docker and SoftNAS Cloud NAS

[Image: Docker and SoftNAS Cloud NAS]

Let’s talk about why SoftNAS matters to Docker. Again, storage for Docker and container services lives in multiple places: temporary storage inside the containers, and storage on the server or instance the containers run on. But if you really want your storage to be portable, accessible, and highly available, then you need the capabilities that SoftNAS Cloud NAS provides. We try to make it simple.

Part of that is making sure we have APIs that plug into the automation system that goes along with ECS. The setup we showed you had high availability configured as part of that CloudFormation template deployment. Having that kind of capability in a quick deployment matters when you’re trying to be agile; when you think about DevOps, teams want to move fast.

We have a feature called SnapClone. Scheduled snapshots, such as hourly snapshots during business hours, can be turned on as part of that container cluster deployment. Any of those scheduled snapshots, or an on-demand snapshot, can be mounted as what we call a SnapClone: a space-efficient, writable snapshot. You can then use it for a DevOps test case where you want to test against real production data without, heaven forbid, damaging your real production data. SnapClones are very useful that way, and they also make continuous deployment easier.

What are the read/write latency, maximum throughput, and IO costs for SoftNAS? We don’t use ephemeral storage except as read cache on the EC2 instance, so the backing storage is EBS General Purpose or EBS Provisioned IOPS. What percentage of the IO is SoftNAS overhead? It’s very light relative to overall IO.

The reason you might want to do that is our fault system. We have background scanning, so if you want an additional layer of protection, it lets us recognize that some bit rot occurred or something went wrong in EBS, and the underlying fault system can fix it. The reason I bring this up in the context of overhead: in this case I configured it for mirroring, which means every write is two writes, while every read still maps to one read and goes to the most available storage. If we’re doing RAID 5, we do a parity write, but there’s no read-back.

One additional thought: in some scenarios, SoftNAS will actually improve the IO profile simply because of the way we use the file system on the backend with read caching. If you’re re-reading something that’s been pulled into cache, you’ll typically see an even better IO profile than what the underlying storage typically provides. There are definitely some performance considerations associated with the way the product is structured and designed.

SoftNAS Cloud NAS Overview

[Image: SoftNAS Cloud NAS architecture]

Let’s talk about SoftNAS Cloud NAS and what it is. SoftNAS is a software-based Cloud NAS filer. We deliver on Amazon through the AWS Marketplace as an EC2 instance. One of the huge benefits you get from SoftNAS is being able to use cloud native storage, to deliver files such as NFS for Linux and CIFS for Microsoft, and blocks through an iSCSI interface.

You can layer that on different types of EBS volumes, whether Provisioned IOPS, General Purpose, or any of the other flavors, or on object storage such as Amazon S3. SoftNAS takes those S3 and EBS devices, aggregates them into storage pools, then carves the storage pools into volumes and exports them through the interfaces I mentioned, along with AFP. Being able to take that cloud-native storage and provide it to software that expects to work with shared files is a huge benefit.

Another huge benefit is our ability to replicate data and, through data replication, also provide high availability with a pair of instances. The data is replicated, most often into separate availability zones, and the secondary monitors the primary’s health through network and storage heartbeats, then performs a takeover and continues to provide uninterrupted service to the servers, the users, and the files. That’s just as important with Docker: when you spin up two, four, or five hundred containers that all want shared storage, you want to make sure your infrastructure stays up the entire time so that you don’t have any outages. Since we go across availability zones, the setup can survive the loss of a whole Amazon availability zone within a region, and through auto provisioning of your containers and a SoftNAS takeover you get completely uninterrupted service.

Docker Persistent Storage Q&A

  1. SnapReplicate and public Elastic IPs: if everything is private, how do you serve storage using private IPs, which are specific to a subnet or availability zone in AWS?
    • We have two modes of HA, both with a virtual IP. For the longest time we’ve supported HA through Amazon’s Elastic IP, and that does use a public IP, of course. The other mode is our private virtual IP, and in that case everything is completely private: we manage the route tables between availability zones and move the virtual IP between the primary and secondary instance. That’s how we deliver that.
  2. Can EBS volumes be encrypted with AWS key management service?
    • We have encryption built into our product for data storage. We use common third-party encryption software called LUKS, so we can encrypt the data on disk, and we also have a pretty nice application guide on how to do data-in-flight encryption for both NFS and CIFS.
  3. Is there a way to backup the SoftNAS managed storage? What types of recovery can we leverage?
    • We’ve built into our product the ability to back up storage pools through EBS snapshots. If you’re familiar with EBS snapshots, they take a full copy of your volume, copy it into EBS, and manage the changes from there. We’ve built that into the storage pool panel in the UI, so you can take full backups that way and restore the full storage pool as well. But that’s just one avenue for backup. What we’re doing with storage in the public cloud, Amazon in this case, mirrors any other enterprise-class storage product or enterprise-class NAS, and we highly recommend that you have a complete backup and recovery strategy. There are a lot of really good products on the market today that we’ve integrated and tested in our lab, and a lot of others that I’m sure work just fine because we’re completely about open standards. It’s very important, with our storage or anybody’s storage, to have a comprehensive backup plan and use those third-party products.
  4. What RAID types are being used under SoftNAS?
    • In the setup we built for containers, that’s RAID 1, but that was just a choice. The short answer is that we support RAID 0, RAID 1, RAID 5, and RAID 6. On Amazon’s already durable storage I would probably use RAID 1, and if you deploy us in a data center on raw drives, that’s where you want to look hard at RAID 5 or RAID 6. As drives get bigger and bigger, rebuilds take a while, and if a replaced drive’s rebuild hits a bit error on another drive, you want to be able to recover from the remaining drives. Those kinds of factors come into it. We support the whole gamut of RAID levels.
  5. What version of NFS do we support?
  6. What is the underlying file system used by SoftNAS?
    • At SoftNAS we are very much an open standards, open source company. We’ve built the Z file system commonly referred to as ZFS into our product.
  7. What is the maximum storage capacity of the SoftNAS instance?
    • We don’t really have a limit that we enforce ourselves. Amazon has certain rules for the number of drive mounts it provides, but if you’re using S3, our capacity range is virtually limitless; we’ll quote up to 6PB. On the AWS Marketplace, we do have editions based on the capacity they’ll manage: our Express edition manages up to 1TB, our Standard edition up to 20TB, and then we have a BYOL edition.
  8. Is it possible to get a trial version of SoftNAS?
    • Yes it is. Through the AWS marketplace, we have a 30 day trial as long as you’ve never tried our product before. It just works out of the box there, just off the console. If you would like to try it through BYOL, then contact our sales team at sales@softnas.com.
  9. Is it a good idea to use SoftNAS as a backup target?
    • Yes. It’s a common use case for us since we enable native cloud storage. Even with on-premises storage, you could have a backup plan where you keep your nightly backups on very fast storage such as EBS or SSD in your data center, but also have a storage pool made of object storage such as S3 in the cloud and use that for your weekly archival. It’s very common for people using a product like VMware to use SoftNAS as a backup target.
  10. Is it a good idea to replace a physical NAS device with SoftNAS?
    • The considerations with replacing that type of solution come down to where you want to store your backups: whether you want to leverage the cloud or retain them locally. If you want to retain them locally, we have an offering that allows you to connect to local disks, and you have a lot of flexibility in the types of disks you can attach, including local attached disks and iSCSI targets, as well as tying into S3. You can have a local instance that’s tied into S3 for all object storage.
    • Additionally, a secondary option would be to deploy a SoftNAS node within the cloud and use it as a backup target; you’re essentially getting a two-for-one with that strategy. By backing up to that target, you get backup storage that you don’t have to keep on-premises, and it also provides a disaster recovery strategy because your backups are stored offsite. Those are two approaches that might make sense for that particular scenario.
  11. Is it advisable to utilize an AWS-based SoftNAS instance for on premise apps?
    • I advise against deploying SoftNAS into an Amazon VPC and accessing it remotely through NFS or CIFS; those protocols are very chatty and degrade over long distances. What is common is to deploy SoftNAS into your VMware cluster in your local data center, mount Amazon S3 into a storage pool, and back up your applications and storage pools for use in your data center. It’s great for backups.
    • You’ll have to be somewhat sensitive to latency. There are applications this wouldn’t be great for, because IO latency happens between your data center and the Amazon region where the S3 storage is. For example, it wouldn’t be a good idea to use it with a database doing transactional IO. For backups with S3, you put SoftNAS into your data center and back up to the storage pool.
    • A typical customer use case is when they segment out their hot data which is highly active. It’s typically a smaller subset of their overall data set. One use case is to tie into object storage that you host in the cloud for cool data. Use on premise storage that is not affected by latency to service hot data requirements. Then leverage S3 object storage as the backend larger repository for your cool data.
  12. If we use S3 as a storage pool for on premise, does it provide write back caching?
    • With on-premises deployments, we’re able to leverage high-performance local disk as a block cache file to front-end the S3 storage. It functions like a page file for read and write operations, essentially providing caching for S3 access and enhancing overall performance.
    • Using the local cache for both reads and writes allows read-aheads and makes read/write handling easier.

We hope that you found the content useful and that you gained something out of it. Hopefully, you don’t feel we marketed SoftNAS Cloud NAS too much. Our goal here was just to pass on some information about how to build docker persistent storage on AWS. As you’re making the journey to the cloud, hopefully this saved you time from tripping over some common issues. 

We’d like to invite you to try SoftNAS Cloud NAS on AWS. We offer a SoftNAS 30-day trial.

 

How to Build a Hybrid Cloud with Amazon Web Services

We’ve seen a lot of interest from our customers in building an AWS hybrid cloud architecture. Maybe you’re just a little bit too hesitant to move your entire infrastructure to the public cloud, so we’ll be talking about how to build a hybrid cloud that gives you the best of both worlds.

We’re going to be covering a couple of use cases on why you might use a hybrid cloud architecture instead of a private or public cloud. We’re also going to talk a little bit more about how SoftNAS works with the AWS architecture. We’ll then go in and give you a step-by-step guide on how to build an AWS hybrid cloud with AWS, SoftNAS Cloud NAS, and whatever existing equipment that you might have.

Best practices for building an AWS hybrid cloud solution

We’ll cover some best practices for building an AWS hybrid cloud solution, and we’ll also talk more about how to backup, protect and secure your data on AWS.

See the slides: How to Build an AWS Hybrid Cloud

Read the Whitepaper

Visit SoftNAS on the AWS Marketplace

In this article, we’ll cover:

AWS Hybrid Cloud Architecture:

Learn how SoftNAS can be installed both in Amazon EC2 and on-premises

How To Build an AWS Hybrid Cloud:

Watch us create a hybrid cloud using existing equipment, AWS, and SoftNAS Cloud NAS

Best Practices for Building an AWS Hybrid Cloud:

Tips and tricks from the SoftNAS team on how to get your hybrid cloud up and running in 30 minutes.

Buurst SoftNAS is a hybrid cloud data integration product, combining a software-defined, enterprise-class NAS virtual storage appliance, backups, and data movement; and data integration/replication for IT to manage and control data centrally. Customers save time and money while increasing efficiency.

How to Build A Hybrid Cloud with AWS

[Image: AWS hybrid cloud benefits]

Why build a hybrid cloud architecture with AWS? This boils down to several things. A lot of people are excited about the opportunity to leverage cloud technologies, but they’ve already got a lot of investment made with on-premises equipment. To get the best of both worlds, let’s manage both on-premises and the cloud to create a best-in-class solution that can satisfy our business needs and the use cases that present themselves to us.

[Image: AWS hybrid cloud services]

Since we are talking about how to build an AWS hybrid cloud, AWS has brought a lot of capabilities to assist with this process, from integrated networking to the ability to very easily create a direct connection from your on-premises network to a virtual private cloud (VPC) sitting in AWS to integrated identity.

If we need to federate your existing active directory or other LDAP provider, we can do that. We can manage these objects just as you would on-premises with perhaps vCenter and System Center. We’ve got a lot of options for deployment, and backup, and we’ve got some metered billing inside of the marketplace that makes things very unique as it relates to consuming cloud resources. 

We’re seeing a lot of people interested in the cloud, yet it appears that many companies aren’t ready to make that full, all-in commitment. They’re putting their feet in the water, starting to see what kind of workloads they can transfer to the cloud, what kind of storage they can get access to, and how it’s really going to play into their use case. Fifty-five percent is a lot of organizations, and as such, we need a solution that’s going to provide for that scenario and scale with them accordingly; we think that the best-in-class solution is AWS and SoftNAS.

The chart below shows a survey done by SoftNAS where we asked our customers what their main cloud architecture is:

[Image: Survey of customers’ main cloud architectures]

AWS Hybrid Cloud Use Cases

What are some good use cases for a hybrid AWS cloud? Some common use cases are:
  • Elastic scaling
  • Backup & Archive
  • Legacy application migration
  • Dev & Test

Let’s dive into the use cases in more detail. If we talk about elastic scaling, the ability to get access to the resources you need when you need them, be it an increase in access or even a decrease in access, the cloud has many benefits. Traditionally in the absence of elastic scaling, we have to purchase something. We’re purchasing compute or additional storage, but, there’s a time where we’re going to be overprovisioned and there’s probably going to be a time where we’re under-provisioned. So in those peaks and valleys, there’s some inefficiency.

With elastic scaling, we can accommodate our needs, be it seasonal escalations in utilization or be it the retiring of certain applications that require a scale down in usage of compute and storage. So elastic scaling is something that is very beneficial and something we should consider when we talk about leveraging the cloud in any scenario, be it public, private, hybrid, etc.

Backup & archiving is a great use case for the hybrid cloud. Traditionally, we’re dealing with a lot of very expensive tapes, and we routinely struggle to get our entire data set backed up to that physical media in the windows available to us before the data actually changes. Then we have to start worrying about the viability of those tapes when it actually comes time to restore or gather data from them, and this only becomes more complex as we add more and more data. Are our procedures still adequate? Are our tapes still adequate?

Now we can start leveraging the cloud to get real-time backups of data, or very low RPO access to data: replicate data within 60 seconds, replicate it in real time, and so on. Rather than having a window representative of a backup that occurred eight hours ago, we have access to a separate, replicated data set in the cloud that we can reach whenever and however we need it, whether across regions, geographies, or data centers.

AWS Hybrid Cloud Best Practices

[Image: AWS hybrid cloud best practices]

What are the best practices when considering an AWS hybrid cloud architecture? What makes the best use case for an AWS hybrid cloud? Is there anything to look out for? The first thing to consider: if you’ve got an application that’s absolutely mission-critical and you can’t handle the hiccups involved in connecting over the network to the cloud, you might want to consider the hybrid cloud for it.

We can provide local storage and we can back it up or synchronize it to the cloud, but we keep the main storage, the highest performing storage, local, and where it’s needed. But by that same token, if you’re in the environment where seasonally you see a tremendous spike in traffic, those seasonal workloads are prime targets for upload and offload into a cloud infrastructure.

As it relates to cost, if you’re getting ready to consider replacing that five year old SAN or NAS, or making a tremendous investment in compute, it might be worth evaluating what kind of benefit that a hybrid or public scenario is going to bring to the table. As far as security’s concerned, we’ve come a long way with how we can manage security in a cloud environment and in a hybrid environment with the ability to federate our access and authentication environments with the ability to manage policies in the cloud as you would on-premises. We can lock down this information and manage this information almost as granular as you would previously when managing everything on site.

AWS Hybrid Cloud Architecture and SoftNAS Cloud NAS

At a high level, we’ve talked about some of the benefits of the AWS hybrid cloud, some of the different use cases, and now we need to start driving in on how SoftNAS Cloud NAS is going to integrate with AWS hybrid cloud architecture and the value we’re going to bring to the scenario. Let’s look at the architecture and features of SoftNAS Cloud NAS and how you can use it to build an AWS hybrid cloud.

[Image: AWS hybrid cloud architecture with SoftNAS Cloud NAS]

I want to talk about how the AWS hybrid cloud architecture works with SoftNAS Cloud NAS. SoftNAS Cloud NAS, at its base functionality is an enterprise-class Cloud NAS filer. SoftNAS Cloud NAS is going to sit in front of your storage, whether that storage is storage in Amazon S3 object storage, Amazon EBS storage, on-premises storage, it doesn’t matter. But in front of that storage we’re going to provide file system access and block access to those backing disks. So we’re going to give you your CIFS, NFS, AFP, and iSCSI access to all of that storage. And the ability to provision access to your consumers or creators of data, your applications, your servers, your users of data.

These are services they’re used to connecting to, so we’re not going to require you to rewrite those old legacy applications just to leverage or archive data in S3. We’re going to allow you to connect to that CIFS share, that NFS export, and so on.

Once we have it set up to provision storage in such a way, we also have the ability to set up a secondary storage environment. Perhaps that secondary storage environment sits in another data center or in a cloud, the AWS cloud here in this instance, and now we have the ability to replicate information between these two environments. We have a feature called SnapReplicate, which we’ll talk about in just a little bit, but essentially we can replicate via an RPO of 60 seconds all our data between those locations however we see fit, however the business case dictates.

Once we have our system set up and we’re leveraging SnapReplicate, we also have the ability to introduce what we call HA or Snap HA (high availability) to the equation. Inside AWS, or on-premises in the same network, we can set up two systems, two SoftNAS instances, two filers, replicate data between those two instances, and place them inside an HA-protected cluster. If we lose access to the primary, behind the scenes we’ll automatically adjust route tables, handle the allocation and assignment of virtual network interfaces, and automatically allow end users to create and consume data from the backup set, all while remaining seamless and transparent to the end user.
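To give a sense of what a route-table adjustment of this kind can look like at the AWS API level, here is a minimal boto3 sketch that repoints a /32 route for a virtual IP at a different network interface. The route table ID, virtual IP, and ENI ID are invented placeholders, and this illustrates the general mechanism only; it is not SoftNAS’s actual failover code.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholders: the route table shared by the availability zones, the virtual
# IP served by the NAS, and the secondary node's elastic network interface.
ec2.replace_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.99.0.10/32",          # the virtual IP
    NetworkInterfaceId="eni-0123456789abcdef0",    # secondary node's ENI
)
```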

Does SoftNAS work with Veeam for offsite replication? Yes. We have several white papers on what we can do and how we can work with Veeam explicitly, but at a high level we’re able to provide access via an iSCSI LUN or even a share to any of the backup servers, so they can write data to a specific repository that could be cloud-backed, whether it’s EBS, S3, etc., and then we can even leverage some of the advanced features beyond that scenario that are built inside of Amazon to archive that data that’s sitting in S3 based on aging all the way to Glacier, so we definitely work with Veeam and we definitely can work with other backup solutions, and we’ve got some pretty compelling stories there that we’d like to talk with you in more detail about.

How to Build an AWS Hybrid Cloud with SoftNAS Cloud NAS

What I’d like to get into now is actually showing you how to build an AWS hybrid cloud using SoftNAS AWS NAS. What you’re looking at now is what we call our SoftNAS Storage Center. Whether you’re on-premises, whether you’re sitting in the cloud, you’ve either gotten an OVA that you’re standing up via VMware vCenter, or you’re using an AMI that’s published through the AWS Marketplace. The net result is the same; you’ve got a CentOS based operating system that’s hosting the SoftNAS Cloud NAS solution, and the way SoftNAS Cloud NAS works is once we’ve gone through the initial setup, what we’re going to do is we’re going to provision disk devices to this system.

Now, these disk devices could be physical disks attached to the local VMware instance, on-premises object storage that we’re connecting to this instance, or even cloud-based objects provided by AWS. To add these disks, we’re going to have you go through a simple wizard. This wizard has many options for connecting to many different types of storage providers; the goal is to let you connect to and consume the resources you need regardless of who the provider might be. In this particular case, I’m simply going to choose Amazon S3 storage. We could just as easily add local S3 storage as well.

In choosing an S3 disk, we simply step through a very simple wizard. That wizard automatically pulls in your AWS access key and secret access key based on an IAM role in the background, and then we define where we’d like this storage to sit. I’m going to select Northern California, and then I’m going to provide the allocation: how much disk space do I want allocated to this S3 disk I’m defining? I’m going to define this as 10GB, and then I’m going to either enable disk encryption as provided by Amazon or not.
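For context, the raw AWS calls behind a step like this might look something like the sketch below, which creates a bucket in the Northern California region (us-west-1) and turns on server-side encryption. The bucket name is a placeholder, and this is only an approximation of what the wizard automates, not SoftNAS’s actual implementation.

```python
import boto3

s3 = boto3.client("s3", region_name="us-west-1")

BUCKET = "example-softnas-s3-disk"   # placeholder bucket name

# Create the bucket in the Northern California region.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": "us-west-1"},
)

# Optional server-side encryption, comparable to ticking the wizard's
# disk-encryption checkbox.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```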

Once we create this disk, or these disks depending on how we’re allocating storage, we then have to turn them into something SoftNAS can manage, and we do that by attaching the disks to what we call a pool. I’m creating several disks here so we can manage things across multiple disks, but once we get beyond the disk devices and their allocations, we go to the Storage Pools applet within our console to get into the pool side of things.

Now, storage pools are how SoftNAS manages all of our storage. To create one of these pools, we go through a simple wizard that should look relatively similar to the last wizard. We’re trying to keep things very similar and very comfortable with regards to the UI, and we’re trying to abstract you from the heavy lifting that’s going on in the background. Now, as far as the pool’s concerned, we are going to give it a name. I could call this S3 data just so I know that this pool’s going to be made up of our S3 disk and this is where I’m going to have everything ultimately residing. Now, once we’re giving it a pool name, we do have the ability to introduce different levels of RAID. In my situation, if we’re using S3 disks, we’ve already got an incredible level of redundancy, so I’m just simply going to choose RAID 0 to get some increase in performance striping across all those disks that we’re using in this pool.

Once we’ve defined our RAID level, we select the disks we’d like to assign to this pool, and then we have the option to enable encryption at the SoftNAS pool level. So disk encryption is provided by the hosting provider, and LUKS encryption is provided by SoftNAS in this case. Once we’re happy with those selections, we can go ahead and create the pool. Once the pool’s defined, we can then create access points, the particular volumes and LUNs that our consumers or creators of data connect to in order to read or write data to that S3 backend, in this case.

So we’ve got a storage pool called S3 data. We then go to our volumes and LUNs tab to create those shares that our users are going to use to connect to. In this particular instance, maybe we want to store all of our Windows users’ home drives on this S3 pool, and we’re going to create a volume name.  Volume name’s important because that’s going to be the root name of your export and the root name of your CIFS share, so we want it to be something that fits within the naming parameters of your organization. We then assign it to one of those storage pools. S3 data’s the only one we happen to have right now, but once we have that created, we’re going to choose the file system we’d like to apply to this share.

By default, we automatically check export via NFS. We also have CIFS for our Windows users, and we even have the Apple Filing Protocol if you’re a Mac shop and need that. We can also define an iSCSI LUN connecting to this volume. Any time we provide an iSCSI LUN, we have to go in and grab a LUN target. That target is provided automatically by the system, and once we grab it we can continue the process, ultimately ending up with an iSCSI LUN target that people can connect to and consume as if it were another disk on their system. For this exercise, I’ll simply leave NFS checked and also choose CIFS.

The next option deals with the provisioning of this volume. Thin provisioning means that as people connect to home drives and write data to it, the volume will scale its capacity up to the maximum capacity of this pool, 17.9GB in this case. Thick provisioning limits it to a subset of that available size: 5GB, 10GB, whatever that might be. Also, at the pool level, we can enable compression and deduplication, so if your data can benefit from either of these, we can architect accordingly and leverage one or both for this particular volume.

The second thing that’s interesting about how SoftNAS manages volumes is that we’re going to give you the ability by default to manage snapshots at the volume level, and snapshots are a great thing. It is your data at that point in time, and the great thing about how SoftNAS manages this is that at any point in time, we can go back into a snapshots tab that I’ll show you very shortly and pull data out from that point in time and mount as another readable and writable volume completely independent of your production data. So we can define those schedules for snapshots here, and then once we’re happy we can go ahead and create those shares.

Now at this point, if this controller is sitting on-site, you’re free to let users connect to those CIFS shares, connect to the NFS export, or mount that iSCSI LUN. We can also leverage Kerberos and LDAP for access, and we can integrate with Active Directory. I don’t have an Active Directory here to integrate with directly, but we do step you through a very simple wizard that takes the name of your Active Directory, the name of your domain controller, and appropriate credentials; we do all the heavy lifting behind the scenes, join your Active Directory, and synchronize users and groups, and you can continue accessing your data as you always have within a Windows network.

Once our volume is out there and we’ve got data written to that storage pool, remember that we created a snapshot schedule when we built the volume, so we have the snapshots tab. There’s nothing currently in this tab because we just created the volume, but as the schedule runs, snapshots will start to populate this list. Here I’m simply forcing a few snapshots so you can see the process. The point is that if someone comes to you, or a business case arises, requiring access to data at a prior point in time, you simply choose that point in time from this list and create what we call a SnapClone from the snapshot. A SnapClone is a readable, writable volume that contains the data set from that point in time.

It’s automatically created and placed in your Volumes tab, so you can give that application, business unit, or customer access to the data they need to recover, without adversely impacting your production data set. It’s very simple to do, could even be automated via REST APIs, and is very, very powerful.
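
As a rough illustration of what that automation could look like, here is a hedged sketch of scripting a SnapClone request over HTTPS. The endpoint paths, parameter names, and session handling are hypothetical placeholders, not the documented SoftNAS API; consult the SoftNAS API reference for the real calls. The point is only that “give me yesterday’s data as a writable volume” can be turned into a script.

```python
"""Hypothetical sketch of automating SnapClone creation via a REST call.
Endpoints and parameters are placeholders, not the real SoftNAS API."""
import requests

NAS_URL = "https://nas.example.internal"   # hypothetical admin address


def create_snapclone(volume: str, snapshot: str,
                     user: str, password: str) -> dict:
    session = requests.Session()
    session.verify = False  # demo instance with a self-signed certificate
    # Hypothetical login endpoint.
    session.post(f"{NAS_URL}/api/login",
                 json={"user": user, "password": password}, timeout=30)
    # Hypothetical SnapClone endpoint: clone one named snapshot of a volume.
    resp = session.post(
        f"{NAS_URL}/api/snapclone",
        json={"volume": volume, "snapshot": snapshot},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(create_snapclone("home_drives", "home_drives@2024-01-15-1200",
                           user="softnas", password="changeme"))
```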

Now that we have the concept of snapshotting and the ability to go back to a point in time, we can start extending the functionality of this SoftNAS implementation. We’ve talked about filer access, the ability to give that backend storage, regardless of storage type, CIFS, NFS, iSCSI, and AFP access. We’ve talked about snapshotting data at the volume level. Now we can talk about snapshotting and replicating data at the level of the entire SoftNAS instance, which we call SnapReplicate. To show how SnapReplicate works, I’m going to share a different browser where both instances are open. What you’re seeing now is the same Storage Center app I just showed you, except it’s a second instance.

So the notion is, I have one instance of SoftNAS on-premises or in the cloud and another instance of SoftNAS either on-premises or in the cloud, a whole hybrid solution. If I go to my ‘Volumes and LUNs’ tab here and refresh, we can see the volumes we created: the home drives volume sitting on top of S3 data, and that SnapClone. To replicate this entire data set, we go into the SnapReplicate/Snap HA applet and set up a second SoftNAS machine, which I’ll call ‘target’ in this case.

To get this target machine ready to accept the replicated data set, the first thing we need to do is make sure we have enough storage provisioned to the device to accommodate the storage pools. If I look at the storage pool on the machine we just built, S3 data, I only have 17.9GB, and I’ve got 40GB provisioned on my target machine, so we have enough disk space. The disks don’t have to be identical: we could have local high-speed, direct-attached SSD on the on-premises source, while the target might be in the cloud, completely backed by S3 disks.

What matters is capacity. Once we have the capacity, the second step is to create the storage pool we want to replicate into. Here we create an S3 data storage pool: we select the RAID level, assign the disks we’d like to use, and hit ‘Create’. Once this pool is created, we’re completely set up and ready for the target machine to accept replication from the source machine, from our on-premises VMware infrastructure to our AWS cloud infrastructure. We do that by opening the SnapReplicate app I mentioned earlier.

I’m going to open that up on both sides, and we start the process by adding replication. When we add replication, we take some simple information from you and do a lot of work behind the scenes. As far as the simple information is concerned, we need the hostname or IP address of the target machine; in this case, it’s 10.23.101.8. The next thing we need is permission to modify that target machine, specifically the SoftNAS Storage Center.

We do have a user there, so I’ll provide those credentials. Once we provide that access, we set everything up behind the scenes and build the replication pair. We then send information about how SoftNAS is configured from the primary machine over to the target, build the volumes and LUNs on the target just as they’re built on your primary machine, and start replicating the data from those volumes and LUNs across to the target pair.

Some of this takes a little while, so you might see a failure just because we rushed things and there’s some latency involved, but the process keeps retrying until the first full mirror is complete. Once the mirror is complete, we’ve got an RPO of 60 seconds: every minute, we send another snapshot to the target machine representing all the data that has changed. So now you have all of your data being replicated from on-premises to AWS, as one scenario.
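
To make the one-minute RPO mechanics concrete, here is a purely conceptual sketch of an incremental snapshot replication loop on plain ZFS (SoftNAS’s file system is ZFS-based, as noted later). This is not SoftNAS’s SnapReplicate implementation; SnapReplicate handles the scheduling, configuration sync, and retries for you. The dataset name and target host are hypothetical, and the sketch assumes ZFS on both hosts plus passwordless SSH to the target.

```python
"""Conceptual sketch only: a 60-second incremental ZFS replication loop,
to illustrate the mechanics behind a one-minute RPO. Names are hypothetical."""
import subprocess
import time

DATASET = "naspool1/home_drives"        # hypothetical source dataset
TARGET = "replica.example.internal"     # hypothetical target host


def snapshot_name() -> str:
    return f"{DATASET}@rep-{int(time.time())}"


def replicate_forever(interval_seconds: int = 60) -> None:
    previous = snapshot_name()
    subprocess.run(["zfs", "snapshot", previous], check=True)
    # Initial full send seeds the target with the whole data set.
    subprocess.run(
        f"zfs send {previous} | ssh {TARGET} zfs receive -F {DATASET}",
        shell=True, check=True,
    )
    while True:
        time.sleep(interval_seconds)
        current = snapshot_name()
        subprocess.run(["zfs", "snapshot", current], check=True)
        # Incremental send: only blocks changed since the previous snapshot.
        subprocess.run(
            f"zfs send -i {previous} {current} "
            f"| ssh {TARGET} zfs receive {DATASET}",
            shell=True, check=True,
        )
        previous = current


if __name__ == "__main__":
    replicate_forever()
```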

There are several use cases for this: you could use it as a means to migrate or ingest data into the cloud, you could use it as a backup solution, or you could use it as a DR node. As it relates to business continuity and disaster recovery, since we have a data set sitting somewhere else, if the primary server becomes unavailable for whatever reason we can point everybody to the secondary node, which effectively elevates the secondary node to primary status. That’s a manual process, and there’s some networking to deal with on the back end, but it’s not too bad, something we could write a procedure around.

But it would be great if we could automate failover. To that end, we have a feature called Snap HA. Snap HA does require that the two nodes sit on the same network. If you’re connecting from on-premises to AWS, you need a direct connection or VPN so the network is shared between the two nodes. If you’re entirely in AWS, the nodes sit within the same VPC, separated by subnets.

To set up Snap HA, we again go through a very simple wizard. We click the ‘Add Snap HA’ button, and the first piece of information we need is how your users will connect to this data set. Conceptually, we’re going to put a virtual IP in front of the source and target nodes, and all of our users will connect to that virtual IP any time they need storage. They mount via that virtual IP, or the DNS name pointing to it, and route tables behind the scenes take care of which node they’re getting their storage from, whether it’s the source node or the target node.

I’m going to provide an arbitrary virtual IP here since I’m completely private. The wizard takes my AWS access key and secret key so it can start modifying route tables in the AWS cloud, specifically the ones belonging to the VPC we’re a member of, and then it installs the HA feature. There’s quite a bit going on in the background right now, but the notion is that when we’re done we’ll have a SnapReplicated pair that is now highly available. All of our users will connect to and consume their storage through that virtual IP in this scenario. The graphics on both nodes change to illustrate that, and we monitor the health and status of the primary node; if it becomes unavailable for whatever reason, we automatically fail over to the target node.

Very quickly upon noticing the service is no longer available on the source node, we modify route tables, adjust the assignment of virtual network interfaces if required depending on how you’re connecting to the storage, and elevate the target to primary status. We break the high-availability pair so there’s time to go back and remedy whatever problem caused the source node to become unavailable in the first place, and your users are none the wiser. If you’re a CIFS user connected to your files, you still see your files. If you happen to lose access while we’re failing over, you might see the spinning icon for a couple of seconds longer than normal, but you ultimately get access to your data, and again that data is subject to a 60-second RPO. As you can see, my graphics have changed to show that both machines are now set up: one is the primary and one is the secondary. We’re connecting and consuming through that private adapter, and at this point we could demonstrate a failover if we needed to.
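
For readers who want to see what the route-table side of such a failover looks like, here is a hedged boto3 sketch that repoints a /32 route for a virtual IP from a failed node to the surviving node. This is roughly the kind of change Snap HA automates; it is not SoftNAS’s own code, and Snap HA also handles health checks, network-interface handling, and promoting the target, which this sketch does not. The route table ID, virtual IP, and instance ID are hypothetical.

```python
"""Hedged sketch of a virtual-IP failover via a VPC route-table update.
IDs and addresses are hypothetical placeholders."""
import boto3

ROUTE_TABLE_ID = "rtb-0123456789abcdef0"    # hypothetical VPC route table
VIRTUAL_IP_CIDR = "10.23.101.200/32"        # hypothetical virtual IP
STANDBY_INSTANCE_ID = "i-0fedcba987654321"  # hypothetical target node


def fail_over_virtual_ip() -> None:
    ec2 = boto3.client("ec2")
    # Point the virtual IP's /32 route at the standby storage instance.
    ec2.replace_route(
        RouteTableId=ROUTE_TABLE_ID,
        DestinationCidrBlock=VIRTUAL_IP_CIDR,
        InstanceId=STANDBY_INSTANCE_ID,
    )


if __name__ == "__main__":
    fail_over_virtual_ip()
```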

We’re covering a lot of information at a high level, so again I encourage you to reach out if you need additional information or more detail on any of the functions we covered, or if you see something in the background that we haven’t covered. Let us know and we’d love to have those conversations. Carrying on from the demo, I’d like to switch back to the PowerPoint, and we can take this time to address some questions.

I see one question here about capacity: does this also show performance info? It does, to a certain degree, and I’ll show you why by sharing my desktop again. We have a dashboard that gives you certain metrics about the health and status of your environment. Once that loads, you can see percent CPU, megabytes per second of I/O, and cache performance. If you need the actual performance of the volume itself, we can run tests outside of SoftNAS, and we can help you configure those tests or choose the tools to use. Hopefully that addresses the question; if it didn’t, let us know and we can go into more detail.
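
For a very rough client-side number, here is a minimal sketch of a sequential-write throughput check against a mounted share. Purpose-built tools such as fio give far more meaningful results; this is only to show the idea, and the mount point is a hypothetical path to a share exported by the NAS.

```python
"""Minimal sequential-write throughput sketch against a mounted share.
For rough numbers only; use a proper tool such as fio for real testing."""
import os
import time

MOUNT_POINT = "/mnt/home_drives_nfs"     # hypothetical mounted share
FILE_SIZE_MB = 512
BLOCK = b"\0" * (1024 * 1024)            # 1 MiB write block


def sequential_write_mb_per_s() -> float:
    path = os.path.join(MOUNT_POINT, "throughput_test.bin")
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE_MB):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())             # ensure data actually reached the NAS
    elapsed = time.time() - start
    os.remove(path)
    return FILE_SIZE_MB / elapsed


if __name__ == "__main__":
    print(f"~{sequential_write_mb_per_s():.1f} MB/s sequential write")
```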

And let’s not forget the use cases that are sometimes overlooked: test, dev, QA, and production. When leveraging SoftNAS Cloud NAS with AWS, we have a very easy way to replicate data from a point in time into test, dev, and QA environments, keeping them up to date in a quick and efficient manner. That is very important for many organizations, and again something we’d love to talk to you about.

So this is just the tip of the iceberg. We definitely want to continue this conversation, but let’s drill down into some of the features and extended options of SoftNAS that we didn’t cover in great detail in the demo. We talked about SnapReplicate, and I’m mentioning it again here because SnapReplicate really is a great solution. It not only makes sure your data is replicated from point A to point B, across availability zones or across regions; it also makes sure your SoftNAS configuration is replicated.

So in the event of a catastrophic loss, you would have that replicated data set, and you would also have a storage controller waiting and ready to service the needs of your consumers and creators of data: your applications, your servers, your users, and so on. SnapReplicate doesn’t have the same requirements as Snap HA; all it needs is network access. The nodes don’t have to sit on the same network, so it operates across regions and between data centers, and you can build a replica set based on your particular scenario and requirements.

As far as data protection is concerned, we leverage many of the features AWS provides. With a virtual private cloud, we can completely isolate the SoftNAS controllers if we need to, and we can control access to that VPC via the security groups and IAM policies AWS provides. We also have data encryption provided by AWS at the disk level, and standing on top of that, we have the ability to do more at the SoftNAS level.
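
As an example of the security-group side of that isolation, here is a hedged boto3 sketch that restricts NFS, SMB/CIFS, and HTTPS admin access on a storage instance’s security group to the VPC’s own address range. The group ID, VPC CIDR, and port list are hypothetical assumptions; adjust them to your environment and the protocols you actually expose.

```python
"""Hedged sketch: limit common NAS ports to VPC-internal traffic only.
Group ID, CIDR, and port choices are hypothetical placeholders."""
import boto3

SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # hypothetical NAS security group
VPC_CIDR = "10.23.0.0/16"                   # hypothetical VPC address range

# (protocol, from_port, to_port) -- common NAS-related ports.
RULES = [
    ("tcp", 2049, 2049),   # NFS
    ("tcp", 445, 445),     # SMB/CIFS
    ("tcp", 111, 111),     # rpcbind (NFSv3)
    ("tcp", 443, 443),     # HTTPS admin console
]


def restrict_to_vpc() -> None:
    ec2 = boto3.client("ec2")
    for protocol, from_port, to_port in RULES:
        ec2.authorize_security_group_ingress(
            GroupId=SECURITY_GROUP_ID,
            IpPermissions=[{
                "IpProtocol": protocol,
                "FromPort": from_port,
                "ToPort": to_port,
                "IpRanges": [{"CidrIp": VPC_CIDR,
                              "Description": "VPC-internal access only"}],
            }],
        )


if __name__ == "__main__":
    restrict_to_vpc()
```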

As far as encryption is concerned, we can encrypt at the pool level using LUKS. Depending on which versions of SMB and NFS you use, we can leverage in-flight encryption. We can use SSL to encrypt access to the admin console, and we can tunnel through SSH. In short, we can do a lot to protect data at rest and in flight, and to protect it across multiple data sets via SnapReplicate, snapshotting, and so on.

So just to give you, again, a high-level overview of the product: SoftNAS, the enterprise cloud NAS filer, is our core functionality, and everything we’ve talked about so far can be done through the GUI, abstracted from the heavy lifting behind the scenes. The notion is that you have a business case, and we want you to be able to configure storage for that business case without needing a degree in storage or having to understand the bits and bytes of every piece of the equation.

We also have a REST API and a CLI if you ever need to develop against SoftNAS or automate it, and several other options are available. We support cross-data center replication and cross-zone replication within a VPC, we have a file gateway, and our admin UI is based on HTML5. We run on CentOS, and our file system is based on ZFS. As far as integration is concerned, we’re naturally talking about AWS here, but we’re not limited to AWS by any means: we support AWS, Microsoft Azure, CenturyLink Cloud, and several providers of on-site storage, and we have a number of technology partners, briefly highlighted on the next slide, that bring different means of accessing and provisioning storage for on-site solutions.

As file sharing protocols, we offer NFS and CIFS, the traditional protocols you see everywhere, plus AFP for cases where you have a native Macintosh network, as well as iSCSI. As far as data services are concerned, we offer compression, inline deduplication, and snapshotting. We also have multi-level caching, and we can implement read caching and write logging for increased storage performance.

We manage everything via storage pools and offer thin provisioning so volumes can grow dynamically. Writable SnapClones, to revisit them quickly, are a very important and effective feature: they provide access to data at a prior point in time, something that is not always easy to do. SnapClones are readable, writable, and completely independent of your production data, exactly as we need them to be. And again, we’ve also talked about encryption: AES-256 LUKS encryption at rest, and we can also support encryption in flight.

SoftNAS Cloud NAS Overview

To cover the different product offerings: we have SoftNAS Cloud NAS, the product illustrated in the demo. SoftNAS is an enterprise-grade cloud NAS filer, and it supports both on-premises implementations and public cloud implementations (e.g., AWS hybrid cloud). We also have SoftNAS for Storage Providers, for those who want to host SoftNAS and provide managed storage services to their end users. If you have any questions on either of these, or specifically want to dive into what we offer for service providers, definitely reach out and let us know. We’d very much enjoy talking about that.

We also have a fantastic white paper that we co-developed with AWS where you can learn more about how SoftNAS Cloud NAS architecture works in conjunction with AWS hybrid cloud architecture. If you do have more questions, or you have any specific AWS hybrid cloud use cases that you think SoftNAS might be able to help you with, please feel free to contact us.


Best Practices for Amazon EBS with SoftNAS


This post is all about AWS EBS best practices using SoftNAS Cloud NAS. AWS Elastic Block Store (EBS) volumes are block-level, durable storage devices that attach to your Amazon EC2 instances. EBS volumes can be used as the primary storage device for an EC2 instance or database, or for throughput-intensive systems requiring constant disk scans.

AWS EBS volumes exist independently of your Amazon EC2 instances and can be retained after the associated EC2 instance has been terminated. AWS provides various types of EBS volumes, allowing you to tailor the right volume to meet your budget and application performance requirements.

SoftNAS AWS EBS best practices

Amazon Elastic Block Store (Amazon EBS) provides persistent block-level storage volumes for use with Amazon EC2 instances in the AWS Cloud.  EBS is not about any specific device type, it’s about providing EC2 instances with highly available and durable storage volumes. To achieve this, each Amazon EBS volume is automatically replicated within its Availability Zone. Amazon EBS volumes offer the consistent and low-latency performance needed to run your workloads.

According to Amazon documentation, these are the Amazon EBS limits within an AWS account:

Amazon EBS limits

With large numbers of EBS volumes, the time to attach them during boot can be long, and attachment can potentially fail.

When a volume does not attach, SoftNAS resolves the issue through an AWS-supplied API to ensure each volume is attached before the boot completes. When SoftNAS is deployed on Amazon Web Services, it has been proven to scale to very large numbers of EBS volumes without boot-attach issues.
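
To illustrate why retrying through the EC2 API avoids boot-time attach failures, here is a minimal sketch of the general attach-and-verify pattern: request the attachment, then wait until EC2 reports the volume as in use before proceeding. This is not SoftNAS’s internal code, and the volume ID, instance ID, and device name are hypothetical.

```python
"""Minimal attach-and-verify sketch for an EBS volume at boot time.
Not SoftNAS's implementation; IDs below are hypothetical placeholders."""
import time

import boto3
import botocore.exceptions


def attach_with_retry(volume_id: str, instance_id: str, device: str,
                      attempts: int = 5) -> None:
    ec2 = boto3.client("ec2")
    waiter = ec2.get_waiter("volume_in_use")
    for attempt in range(1, attempts + 1):
        try:
            ec2.attach_volume(VolumeId=volume_id,
                              InstanceId=instance_id,
                              Device=device)
            # Block until EC2 reports the volume as attached.
            waiter.wait(VolumeIds=[volume_id])
            return
        except (botocore.exceptions.ClientError,
                botocore.exceptions.WaiterError):
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff


if __name__ == "__main__":
    attach_with_retry("vol-0123456789abcdef0",   # hypothetical IDs
                      "i-0123456789abcdef0",
                      "/dev/xvdf")
```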

Best practices for using AWS EBS volumes with SoftNAS:

  • Each EBS volume attached to an instance is provisioned on independent storage hardware within the AWS infrastructure. Configure SoftNAS storage pools as RAID 0 to stripe across multiple EBS volumes and gain the highest possible bandwidth and performance (see the boto3 sketch after this list).
  • SoftNAS disk device protection (RAID levels 1, 10, 5, 6, 7) is unnecessary and should not be used in a storage pool with EBS volumes; any RAID level beyond RAID 0 merely increases storage costs with little benefit in failure rate or performance. EBS volumes are IOPS-limited: EBS General Purpose SSD is limited to 10,000 IOPS per volume, EBS Provisioned IOPS is limited to 20,000 IOPS per volume, and EBS Magnetic is far more limited at 40-200 IOPS, reflecting the capabilities of spinning media. In testing, we have seen EBS SSDs deliver higher IOPS in short bursts, but they appear to be throttled under longer sustained IOPS usage. By striping across multiple EBS volumes of any type, aggregate IOPS can exceed what a single EBS volume can provide. Of course, workload and queue depth dictate whether the higher IOPS are actually achieved.
  • AWS publishes an EBS annual failure rate (AFR) of between 0.1% and 0.2%. Aggregating multiple EBS volumes within a SoftNAS storage pool multiplies the AFR: a pool’s AFR is roughly the number of EBS volumes multiplied by the per-volume AFR. Our recommendation is to understand the risk and size storage pools appropriately. Using 5 EBS volumes within a storage pool (totaling up to 80 TB of capacity) yields an acceptable AFR for most use cases, and many use cases can tolerate an even higher AFR.
  • Use multiple SoftNAS storage pools for very high-capacity deployments. EBS volumes placed in separate storage pools do not compound each other’s AFR.
  • Use SoftNAS backup to create EBS snapshots of storage pools to further protect data. EBS snapshots are arguably the most useful, and the most misunderstood, feature of EBS. You can back up the data on your EBS volumes to Amazon S3 by taking point-in-time snapshots. EBS snapshots are incremental: to create a subsequent snapshot, EBS saves to S3 only the disk blocks that have changed since the previous snapshot. SoftNAS integrates EBS snapshots with one-button backup and restore options for storage pools (snapshot calls are also shown in the sketch after this list).
  • SoftNAS Snap HA provides data protection and high availability. Data is replicated across availability zones, and failover is managed between a pair of SoftNAS instances in private or public VPCs. Snap HA is recommended for a complete data protection strategy, replicating storage from one availability zone to another.
  • SoftNAS SnapReplicate can be used as part of a disaster recovery strategy by replicating data to a remote region or another data center.
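
Referenced from the RAID 0 and snapshot bullets above, here is a hedged boto3 sketch of provisioning several same-sized EBS volumes to stripe across in a RAID 0 SoftNAS storage pool, then taking point-in-time snapshots of each. The instance ID, availability zone, sizes, and device names are hypothetical assumptions; the actual pool creation and one-button backup happen inside SoftNAS itself.

```python
"""Hedged sketch: create and attach EBS volumes for a RAID 0 stripe,
then snapshot each one. All IDs and sizes are hypothetical placeholders."""
import boto3

INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical SoftNAS EC2 instance
AVAILABILITY_ZONE = "us-east-1a"      # must match the instance's AZ
VOLUME_SIZE_GB = 500
DEVICES = ["/dev/xvdf", "/dev/xvdg", "/dev/xvdh", "/dev/xvdi"]


def provision_stripe_members() -> list:
    ec2 = boto3.client("ec2")
    volume_ids = []
    for device in DEVICES:
        vol = ec2.create_volume(AvailabilityZone=AVAILABILITY_ZONE,
                                Size=VOLUME_SIZE_GB,
                                VolumeType="gp2")
        volume_id = vol["VolumeId"]
        # Wait for the volume to be ready before attaching it.
        ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])
        ec2.attach_volume(VolumeId=volume_id,
                          InstanceId=INSTANCE_ID,
                          Device=device)
        volume_ids.append(volume_id)
    return volume_ids


def snapshot_pool_members(volume_ids: list) -> list:
    ec2 = boto3.client("ec2")
    # Incremental, point-in-time snapshots of each pool member volume.
    return [ec2.create_snapshot(VolumeId=v,
                                Description="Storage pool member backup")["SnapshotId"]
            for v in volume_ids]


if __name__ == "__main__":
    members = provision_stripe_members()
    print(snapshot_pool_members(members))
```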

SoftNAS AWS NAS Storage

SoftNAS is a software-defined AWS NAS delivered as a virtual appliance running within Amazon EC2. It provides NAS capabilities suitable for enterprise storage, including multi-availability-zone high availability with automatic failover in the Amazon Web Services (AWS) Cloud.

SoftNAS offers AWS customers an enterprise-ready NAS capable of managing fast-growing data storage challenges, including availability on AWS Outposts. SoftNAS features deliver significant cost savings, high availability, lift-and-shift data migration, and a variety of security protections.