Migrating Existing Applications to AWS Without Reengineering

Migrate Existing Applications to AWS Cloud without Re-engineering... Download the full slide deck on Slideshare

Designing a cloud data system architecture that protects your precious data when operating business-critical applications and workloads in the cloud is of paramount importance to cloud architects today.

Ensuring the high availability of your company’s applications and protecting business data is challenging and somewhat different than in traditional on-premise data centers. For most companies with hundreds to thousands of applications, it’s impractical to build all of these important capabilities into every application’s design architecture.

The cloud storage infrastructure typically only provides a subset of what’s required to protect business data and applications properly. So how do you ensure your business data and applications are architected correctly and protected in the cloud? In this webinar, we covered: Best Practices for protecting business data in the cloud, How To design a protected and highly-available cloud system architecture, and Lessons Learned from architecting thousands of cloud system architectures.

Migrate Applications to AWS Cloud without Re-engineering

For our agenda today, this is going to be a technical discussion. We’ll be talking about security and data concerns for migrating your applications to AWS Cloud. We will also talk a bit about the design and architectural considerations around security and access control, performance, backup and data protection, mobility and elasticity, and high availability.

Best practices for designing cloud storage for existing apps on AWS.

First, let’s talk about SoftNAS a little bit. We’re just going to give our story and why we have this information and why we want to share this information.

SoftNAS was born in the cloud in 2012. Our founder, Rick Braddy, went out to find a solution that would give him access to cloud storage. When he went looking, he couldn’t find anything out there that fit, so he took it upon himself to create the solution that we now know as SoftNAS.

SoftNAS is a cloud NAS. It’s a virtual NAS storage appliance that exists on-premise or within the AWS Cloud and we have over 2,000 VPC deployments. We focus on no app left behind. We give you the ability to migrate your apps into the cloud so that you don’t have to change your code at all. It’s a Lift and Shift bringing your applications to be able to address cloud storage very easily and very quickly.

We work with Fortune 500, SMB companies, and thousands of AWS subscribers. SoftNAS also owns several patents including patents for HA in the cloud and data migration.

 Security and data concerns compound design requirements

The first thing that we want to go through and we want to talk about is cloud strategy. Cloud strategy, what hinders it? What questions do we need to ask? What are we thinking about as we go through the process of moving our data to the cloud?

First, some context on IDG, the number one tech media company in the world. You might know them for creating CIO.com, Computer World, Info World, IT World, and Network World. Basically, if it has technology and a “world” next to it, it’s probably owned by IDG.

IDG polls its customers every single year with the goal of measuring cloud computing trends among technology decision-makers: figuring out usage and plans across various cloud services, deployment models, and investments, and identifying business strategies and plans for the rest of the IT world to focus on. From that survey, we took the top five concerns.

 

The top five concerns, believe it or not, all have to do with our data.

Concern number one: where will my data be stored? Is my data going to be stored safely and reliably? Is it going to be stored in a data center? What type of storage is it going to be stored on? How can I figure that out? Especially when you’re thinking about moving to a public cloud, these are the questions on people’s minds.

Concern number two: the security of cloud computing solutions. Is there a risk of unauthorized access? How is data integrity protected? These are all concerns about the security of what I’m going to have in that environment.

We also have concerns about vendor lock-in. What happens if my vendor of choice changes costs, changes offerings, or simply goes away? Number four is the concern surrounding integration with existing infrastructure and policies. How do I make the information available outside the cloud while preserving the uniform set of access privileges that I have worked for the last 20 years to develop?

Number five: concerns about the ability to maintain enterprise or industry standards, such as ISO, PCI, and SOC. We will share some of the questions our customers have asked early in the design stage of moving their data to the cloud, in the hope that they will be beneficial to you.

Security and Access Control

Question number one, and it’s based on the same IDG poll: how do I maintain or extend security and access control into the cloud? We often think from the outside in when we design for threats.

This is how our current on-prem environment is built. It was built with protection from external access. It then goes down to physical access to the infrastructure. Then it’s access to the files and directories. All of these protections need to be considered and extended to your cloud environment, so that’s where the setup of AWS’s infrastructure plays hand-in-hand with SoftNAS NAS Filer.

First, there’s setting up a firewall. Amazon already gives you the ability to utilize firewalls through the access controls on your security groups: setting up stateful protection in your VPCs, setting up network access protection, and then moving into cloud-native security. Next is access to infrastructure. Who has access to your infrastructure?

By setting up IAM roles and IAM users, you have the ability to control that alongside your security groups. Then there’s encrypting the data. If everything else fails and users do get the ability to touch my data, how do I make sure that even when they can see it, it’s encrypted and it’s something they cannot use?

We also talk about extending enterprise authentication schemes. How do I make sure that I am tying into Active Directory or into LDAP, which already exist in my environment?

Backup and Data Protection

The next question is structured around backups and data protection: how to safeguard against data loss or corruption. We get asked this question probably 10 to 15 times a day: I’m moving everything to the cloud; do I still need a backup? Yes, you still need a backup. An effective backup strategy is still needed, even though your data is in the cloud and you already have redundancy.

Everything has been enhanced, but you still need a point in time, or an extended period of time, that you can go back to and grab that data from. Do I still need antivirus and malware protection? Yes, antivirus and malware protection is still needed. You also need the ability to take snapshots and roll back, and that’s one of the things you want to design for as you decide to move your storage to the cloud.

How do I protect against user error or compromise? We live in a world of compromise. A couple of weeks ago, we saw companies all over Europe run into the problem of ransomware; it was hitting companies left and right. From a snapshot and rollback standpoint, you want a point in time that you can quickly roll back to so that your users will not experience a long period of downtime. You need to design your storage with that in mind.

We also need to talk about replication. Where am I going to store my data? Where am I going to replicate my data to? How can I make sure that the data redundancy I am used to in my on-prem environment comes with me to the cloud, so that I have data resiliency?

I also need to think about how I protect myself from data corruption. How do I design my environment to ensure data integrity, choosing the protocols and underlying infrastructure that protect me from the different scenarios that could cause my data to lose its integrity?

Also, you want to think about data efficiency. How can I minimize the amount of data while designing for cost savings? Am I thinking about how to dedupe and compress that data? All of these things need to be taken into account as you go through the design process, because it’s easier to think about them and ask those questions now than to try to shoehorn them in after you’ve moved your data to the cloud.

Performance

The next thing that we need to think about is performance. We get asked this all the time. How do I plan for performance? How do I design for performance, not just for now but for the future as well? Yes, we could design a solution that is ideal for our current situation; but if it doesn’t scale with us for the next two years, the next five years, it’s not beneficial. It gets archaic very quickly.

How do I structure it so that I am designing not just for now but for two years from now, five years from now, potentially ten years from now? There are different concerns that need to be taken into account at this point. We need to talk about dedicated versus shared infrastructure. Dedicated instances give you the ability to tune performance. That’s a big thing, because as your use case changes, you need to make sure you can actually tune performance for it.

Shared infrastructure. Although shared infrastructure can be cost-effective, multi-tenancy means that tuning is programmatic. If I use a shared infrastructure and I have to tune for my specific use case, or multiple use cases within that environment, I have to wait for a programmatic change, because the infrastructure is not dedicated to me solely; it is used by many other users.

Those are different concerns that you need to think about when it actually comes to the point of, am I going to use dedicated or am I going to use a shared infrastructure?

We also need to think about bursting and bursting limitations. You always design with the ability to burst beyond peak load; that is rule number one. I know my regular load is going to be X, but I need to make sure I can burst beyond X. You also need to understand the pros and cons of burst credits. There are different infrastructures and solutions out there that introduce burst credits.

If burst credits are used, what do I have the ability to burst to? Once that burst credit is exhausted, what happens? Does it bring me down to subpar or does it bring me back to par? These are questions that you need to ask as you go through the process of designing for storage and deciding what the look and feel of your storage is going to be while you’re moving to the public cloud.

You also need to look at and consider predictable performance levels. If I am running in production, I need to know that I have a baseline. That baseline should not fluctuate, even though I have the ability to burst beyond it and use more when I need to. I need to know that when I am at the baseline, it is predictable and my performance levels are something I don’t have to worry about.

You should already be thinking about using benchmark programs to validate the baseline performance of your system. There are tons of freeware tools out there, and this is something you definitely need to do while you’re going through the design process for performance within the environment.
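
As a concrete illustration of baseline benchmarking, here is a minimal sketch that drives the open-source fio tool from Python and reads back the measured IOPS. It assumes fio is installed on a Linux instance and that /mnt/nas is the mount point under test; both are placeholders for your own environment.

```python
import json
import subprocess

# Hypothetical mount point for the NAS share under test.
TEST_FILE = "/mnt/nas/fio-baseline.test"

# Mixed 8K random read/write for two minutes, JSON output for easy parsing.
cmd = [
    "fio", "--name=baseline", f"--filename={TEST_FILE}",
    "--rw=randrw", "--bs=8k", "--size=4G",
    "--ioengine=libaio", "--iodepth=32",
    "--runtime=120", "--time_based",
    "--output-format=json",
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
report = json.loads(result.stdout)

job = report["jobs"][0]
print("Read IOPS: ", round(job["read"]["iops"]))
print("Write IOPS:", round(job["write"]["iops"]))
```

Running the same job on day one and again after the system has filled up gives you the predictable-baseline comparison discussed above.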

Storage, throughput, and IOPS. What storage tier is best suited for my use case? In every environment, you’re going to have multiple use cases. Do I have the ability to design my application or the storage that’s going to support my application to be best suited for my use case?

EBS Volume types

From a performance standpoint, you have multiple options for your storage tiers. You could go with gp2, the general-purpose SSD drives. There’s Provisioned IOPS. There’s Throughput Optimized. There are Cold HDDs. All of these are options that you need to make a determination on.

A lot of times, you’ll think, “I need general-purpose IOPS for this application, but Throughput Optimized works well with that application.” Do I have the ability, and the elasticity, to use both? What’s the thought behind doing it?

AWS gives you the ability to address multiple storage types. The question that you need to ask yourself is: based on my use case, what is most important to my workload? Is it IOPS? Is it throughput?

The answer to that question gives you a much better idea of what storage to choose. This slide breaks it down a little: if you need anything greater than 65,000 IOPS, or anything you need higher throughput from, what type of storage should you actually focus on?

These are things that we work through with our customers on a regular basis to steer them to the right choice, one that is cost-effective and also effective for their applications.
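
To make the tiers concrete, here is a minimal boto3 sketch that provisions one volume of each common EBS type. The region, availability zone, sizes, and IOPS figure are assumptions; substitute your own values.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
AZ = "us-east-1a"                                    # assumed availability zone

# General-purpose SSD (gp2): balanced price and performance.
gp2 = ec2.create_volume(AvailabilityZone=AZ, Size=500, VolumeType="gp2")

# Provisioned IOPS SSD (io1): pay for a guaranteed IOPS floor.
io1 = ec2.create_volume(AvailabilityZone=AZ, Size=500, VolumeType="io1", Iops=5000)

# Throughput Optimized HDD (st1): large, sequential workloads.
st1 = ec2.create_volume(AvailabilityZone=AZ, Size=1000, VolumeType="st1")

# Cold HDD (sc1): infrequently accessed data at the lowest cost.
sc1 = ec2.create_volume(AvailabilityZone=AZ, Size=1000, VolumeType="sc1")

for vol in (gp2, io1, st1, sc1):
    print(vol["VolumeType"], vol["VolumeId"])
```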

Amazon S3

 Then AWS S3. A cloud NAS should have the ability to address object storage because there are different use cases within your environment that would benefit from being able to use object storage.

We were at a meetup a couple of weeks ago that we did with AWS, and there was a question from the crowd. They said, “If I need to be able to store multiple or hundreds of petabytes of data, what should I use? I need to be able to access those files.”

The answer back was: you could definitely use S3, but you’ll need to write to the S3 API to use it correctly. With a cloud NAS, you should have the ability to use object storage without having to utilize the API yourself.

How do you actually get to the point that you’re using APIs already built into the software to be able to use S3 storage or object storage the way that you would use any other file system?
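
For contrast, this is roughly what “talking to the S3 API yourself” looks like with boto3: every read and write becomes an explicit API call against a bucket. The bucket and key names here are made up for illustration; a cloud NAS hides this layer and presents the same data as an ordinary file share.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-app-data"          # hypothetical bucket name
KEY = "reports/2017/q2-summary.csv"  # hypothetical object key

# Writing a "file" means uploading an object...
s3.upload_file("q2-summary.csv", BUCKET, KEY)

# ...and reading it back means downloading the object again.
s3.download_file(BUCKET, KEY, "q2-summary-restored.csv")

# There is no rename, append, or byte-range write as on a POSIX filesystem;
# applications written against NFS/CIFS would need changes to work this way.
```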

Mobility and Elasticity.

What are my storage requirements and limitations? You would think these would be the first questions asked as we go through this design process with companies, but a lot of times they’re not. People are not aware of their capacity limitations, so they make a decision to use a certain platform or a certain type of storage while unaware of its maximums. What’s my total growth?

What is the largest size that this particular medium will support? What’s the largest file size? What’s the largest folder size? What’s the largest block size? These are all questions that need to be considered as you go through the process of designing your storage. You also need to think about platform support. What other platforms can I quickly migrate to if needs be? What protocols can I utilize?

From tiered storage support and migration, if I start off with Provisioned IOPS disks, am I stuck with Provisioned IOPS disks? If I realize that my utilization is that of the need for S3, is there any way for me to migrate from Provisioned IOPS storage to S3 storage?

We need to think about that as we’re going with designing storage in the backend. How quickly can we make that data mobile? How quickly can we make it something that could be sitting on a different tier of storage, in this case, from object to block or vice versa, from block back to object?

And automation. Thinking about automation, what can I do quickly? What can I script? Is this something that I could spin up using CloudFormation? Are there any tools? Is there an API associated with it, a CLI? What can I do to make a lot of the work that I regularly do quick, easy, and scriptable?
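
As one illustration of the scripting angle, the sketch below launches a CloudFormation stack from Python. The template URL, stack name, and parameter names are placeholders; the point is that the spin-up you would otherwise do by hand can be reduced to a repeatable API call.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")  # assumed region

response = cfn.create_stack(
    StackName="storage-tier-dev",  # hypothetical stack name
    # Hypothetical template location; point this at your own template.
    TemplateURL="https://s3.amazonaws.com/example-templates/storage-tier.yaml",
    Parameters=[
        {"ParameterKey": "VolumeSizeGiB", "ParameterValue": "500"},
        {"ParameterKey": "Environment", "ParameterValue": "dev"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
)
print("Stack creation started:", response["StackId"])

# Block until the stack finishes building.
cfn.get_waiter("stack_create_complete").wait(StackName="storage-tier-dev")
```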

Application Modernization or Application Migration

Application Modernization or Application Migration? We get asked this question a lot, too. What strategy should I choose to get my data or application to the cloud? There are two strategies out there right now.

There is the application modernization strategy which comes with its pros and cons. Then there is also the Lift and Shift strategy which comes with its own pros and cons.

With application modernization, the pro is that you build a new application. You can modify, delete, and update existing applications to take advantage of cloud-native services. That’s definitely a huge pro: it’s born in the cloud, it’s changed in the cloud, and you have access to it.

However, the con associated with that approach is a slower time to production. More than likely, there will be significant downtime as you migrate that data over. The timeline we’re looking at is months to years. Then there are the costs associated with ensuring it’s built correctly, tested correctly, and then implemented correctly.

Then there is the Lift and Shift migration. The pro of Lift and Shift is a faster time to cloud production. Instead of months to years, we’re looking at the ability to do this in days to weeks or, depending on how aggressive you can be, hours to weeks. It totally depends.

Then there are the cost savings associated with it. You’re not recreating the app, and you’re not going back and rewriting code that is only going to be beneficial for the move. Instead, your developers write features and improvements to your code that actually benefit you and continue to support your customers.

The con associated with the Lift and Shift approach is that your application is not API-optimized, but you can decide whether that matters or is needed for your app. Does your app need to speak the API, or does it just need to address the storage?

High Availability (HA)

The next thing we want to discuss is high availability. This is key in any design process: how do I plan for failure or disruption of services? If anything happens, if anything goes wrong, how do I cover myself to make sure that my users don’t feel the pain? My failover needs to be non-disruptive.

How can I make sure that if something fails, if an instance fails, my users come back and don’t even notice that a problem happened? I need to design for that. How do I ensure that during this process I am maintaining my existing connections? It shouldn’t be that failover happens and then I need to go back, recreate all my tiers, and repoint my pointers to different APIs and different locations.

How do I make sure that I have the ability to maintain my existing connections with a consistent IP? How do I make sure that what I’ve designed fulfills my recovery point and recovery time objectives?

Another question that generally comes to our team is: is this HA per app, or HA for the infrastructure? When you go through the process of app modernization, you’re actually doing HA per app.

When you are looking at a more holistic solution, you need to think in advance. In your on-premises environment, you’re doing HA for infrastructure. How do you migrate that HA for infrastructure over to your cloud environment? That’s where the cloud NAS comes in.

Cloud NAS solves many of the security and design concerns.

We have Keith Son, with whom we did a webinar a couple of weeks ago; it might have been last week, I don’t remember off the top of my head. Keith loves the software and is constantly coming to us asking for different ways he could tweak or use our software more.

Keith says, “Selecting SoftNAS has enabled us to quickly provision and manage storage, which has contributed to a hassle-free experience.” That’s what you want to hear when you come to design. It’s hassle-free. I don’t have to worry about it.

We also have a quote there from John Crites from Modus. John says that he’s found that SoftNAS is cost-effective and a secure solution for storing massive amounts of data in the AWS environment.

Cloud NAS addresses security and access concerns. You need to be able to tie into Active Directory. You need to be able to tie into LDAP. You need to be able to secure your environment using IAM roles. Why? Because I don’t want my secret keys to be visible to anybody. I want to utilize the security that’s already provided by AWS and have it flow through my app.

VPC security groups. We ensure that, with your VPC and with the security groups that you set up, only the users that you want have access to your infrastructure. From a data protection standpoint, there’s block replication. How do I make sure that my data is also somewhere else?

Data redundancy. We’ve been preaching that for the last 20 years: the only way I can make sure that my data is fully protected is if it’s redundant. In the cloud, although we get extended redundancy, how do I make sure that my data is faultlessly redundant? We’re talking about block replication. For data protection, we’re also talking about encryption: how can you encrypt that data so that even if someone did get access to it, they couldn’t do anything with it? It would be gobbledygook.

You need the ability to take instant snapshots. How can I go in, based on my scenario, and create an instant snapshot of my environment, so that in the worst case, if anything happens, I have a point in time that I can quickly come back to? Writable snap clones: how do I stand up my data quickly? Worst case, if anything happens and I need to revert to a point in time before I was compromised, how can I do that quickly?
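
On the AWS side, the point-in-time piece of this maps onto EBS snapshots. Below is a minimal boto3 sketch that takes a snapshot before a risky change and later stands up a fresh volume from it. The volume ID and zone are placeholders, and this does not reproduce the filesystem-level ZFS snapshots and snap clones a cloud NAS provides; it only illustrates the rollback idea at the block layer.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
VOLUME_ID = "vol-0123456789abcdef0"                  # hypothetical volume ID

# Take a point-in-time snapshot before making changes.
snap = ec2.create_snapshot(
    VolumeId=VOLUME_ID,
    Description="Pre-maintenance point-in-time copy",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Worst case: stand up a new volume from that snapshot and attach it instead.
restored = ec2.create_volume(
    AvailabilityZone="us-east-1a",                   # assumed zone
    SnapshotId=snap["SnapshotId"],
    VolumeType="gp2",
)
print("Restored volume:", restored["VolumeId"])
```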

High availability in ZFS and Linux. How do I protect my infrastructure underneath?

 

Then performance. A cloud NAS gives you dedicated infrastructure, which means it is tunable. If my workload or use case increases, I have the ability to tune my appliance to grow as my use case grows or needs to grow. Performance and adaptability: from disk to SSD to networking, how can I make my environment or my storage adaptable to the performance I need? No-burst limitations, dedicated throughput, compression, and deduplication are all things we need to be considering as we go through this design process. The cloud NAS gives you the ability to do that.

Then flexibility. What can I grow to? Can I scale from gigabyte to petabyte with the way that I have designed? Can I grow from petabytes to multiple petabytes? How do I make sure that I’ve designed with the thought of flexibility? The cloud NAS gives you the ability to do that.

We are also talking about multiplatform, multi-cloud, block, and object storage. Have I designed my environment so that I could switch to new storage options? Cloud NAS gives you the ability to do that.

We also need to get to the point of the protocols. What protocols are supported: CIFS, NFS, and iSCSI? Can I thin-provision these systems? Yes. The cloud NAS gives you the ability to do that.

Easily Migrate Applications to AWS Cloud

I just want to give a very quick overview of SoftNAS and what we do. SoftNAS, as we said, is a cloud NAS. It’s the most downloaded and most utilized cloud NAS in the cloud.

We give you the ability to easily migrate those applications to AWS. You don’t need to change your applications at all. As long as your applications connect via CIFS, NFS, iSCSI, or AFP, we are agnostic. Your applications connect exactly the same way they connect on-premise. We give you the ability to address cloud storage, whether that’s in the form of S3, Provisioned IOPS, or gp2.

Anything that Amazon offers as storage, SoftNAS gives you the ability to aggregate into a storage pool and then share out via volumes over CIFS, NFS, or iSCSI, giving you the ability to have your applications move seamlessly.

These are some of our technology partners. We work hand-in-hand with Talon, SanDisk, NetApp, and SwiftStack. All of these companies love our software, and we work hand-in-hand as we deliver solutions together.

We talked about some of the functions that are native to SoftNAS, like cloud-native IAM role integration and the ability to encrypt your data at rest or in transit.

Then also the fact of firewall security, we have the ability to be able to utilize that too. From a data protection standpoint, it’s a copy-on-write file system so it gives you the ability to be able to ensure the data integrity of the information in your storage.

We’re talking about instant storage snapshots, whether manual or scheduled, and rapid snapshot rollback. We support RAID across the board with all AWS EBS volume types, and also with S3-backed storage.

From a built-in snapshot standpoint for your end users, this is one of the things our customers love: Windows previous versions support. IT teams love this because, if a user makes a mistake, instead of having to go back in and recover a whole volume to get the data back, they just tell the user to go into Windows previous versions, right-click, and restore the previous version. It gives your users something they are used to on-premise, immediately available within the cloud.

High performance: scaling up to gigabytes per second of throughput. For performance, we talked about no burst limits, protecting against split-brain on HA failover, and giving you the ability to migrate those applications without writing or rewriting a single piece of code.

We talked about automation and the ability to utilize our REST APIs: very robust REST API cloud integration using ARM or CloudFormation templates, available in every AWS and Azure region.

10 Architectural Requirements for Protecting Business Data in the Cloud

“10 Architectural Requirements for Protecting Business Data in the Cloud”. Download the full slide deck on Slideshare

Designing a cloud data system architecture that protects your precious data when operating business-critical applications and workloads in the cloud is of paramount importance to cloud architects today. Ensuring the high availability of your company’s applications and protecting business data is challenging and somewhat different than in traditional on-premise data centers.

For most companies with hundreds to thousands of applications, it’s impractical to build all of these important capabilities into every application’s design architecture. The cloud storage infrastructure typically only provides a subset of what’s required to properly protect business data and applications. So how do you ensure your business data and applications are architected correctly and protected in the cloud?

In this post, we covered: Best Practices for protecting business data in the cloud, How To design a protected and highly-available cloud system architecture, and Lessons Learned from architecting thousands of cloud system architectures.

10 Architectural Requirements for Protecting Business Data in the Cloud!

We’re going to be talking about 10 architectural requirements for protecting your data in the cloud. We will cover some best practices and some lessons learned from what we’ve done for our customers in the cloud. Finally, we will tell you a little bit about our product, SoftNAS Cloud NAS, and how it works.

1) High Availability (HA)

High Availability (HA) is one of the most important of these requirements. It’s important to understand that not all HA solutions are created equal. The key aspect to high availability is to make sure that data is always highly available. That would require some type of replication from point A to point B.

You also need to ensure, depending upon which public cloud infrastructure you may be running this on, that your high availability properly supports the redundancy that’s available on the platform. Take something like Amazon Web Services, which offers different availability zones. You would want an HA solution that can run in different zones and provide availability across those availability zones, to the point of making sure that you have all your data stored in more than one zone.

You want to ensure that you have greater uptime than a single compute instance can provide. The high availability available today from SoftNAS, for example, allows you on a public cloud infrastructure to deploy an instance in each of two separate availability zones. It allows you to choose different storage types and to replicate that data between the two. In case of an incident that requires a failover, your data should be available to you again within 30 to 60 seconds.
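
As a rough sketch of that cross-zone layout, the snippet below launches one storage controller instance in each of two availability zones. The AMI, instance type, and subnet IDs are all hypothetical; an HA layer such as the one described above then handles replication and failover between the pair.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Hypothetical subnets, one per availability zone.
subnets = {
    "us-east-1a": "subnet-0aaa1111bbbb22222",
    "us-east-1b": "subnet-0ccc3333dddd44444",
}

controllers = []
for az, subnet_id in subnets.items():
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical NAS controller AMI
        InstanceType="m5.xlarge",          # assumed size
        MinCount=1,
        MaxCount=1,
        SubnetId=subnet_id,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": f"nas-controller-{az}"}],
        }],
    )
    controllers.append(resp["Instances"][0]["InstanceId"])

print("Controller pair:", controllers)
```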

You also want to ensure that whatever HA solution you’re looking at avoids what we like to call the split-brain scenario, where data ends up on one node but not the other, or newer data ends up on the target node after an HA takeover. Whatever solution you find that provides high availability, it needs to meet the requirement of ensuring there is no split-brain between nodes.

2) Data Protection

The next piece that we want to cover is data protection. I want to stress that when we talk about data protection, there are multiple different ways to perceive that requirement.

We are looking at data protection from the storage architecture standpoint. You want to find a solution that supports snapshots and rollbacks. We look at snapshots as a form of insurance: you buy them hoping you never need to use them.

I want to point out that snapshots do not take the place of a backup either. You want to find a solution that can replicate your data, whether that would be from replicating your data on-premise to a cloud environment, whether it would be replicating it from different regions within a public cloud environment, or even if you wanted to replicate your data from one public cloud platform to the other to ensure that you had a copy of the data in another environment.

You want to ensure that you can provide RAID mirroring. You want to ensure that you have efficiency with your data, being able to provide features like compression, deduplication, and so on. A copy-on-write file system is a key aspect to avoid data integrity risks. Being able to support things like Windows previous versions for rollbacks. These are all key aspects of a solution that should provide you with proper data protection.

3) Data security and access control

Data security and access control are always top of mind with everyone these days, so you want a solution that supports encryption. You want to ensure that the data is encrypted not only at rest but during all aspects of data transmission: data-at-rest and data-in-flight.
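
On AWS, at-rest encryption can come from the storage layer itself. The sketch below creates a KMS-encrypted EBS volume and writes an S3 object with server-side encryption; the key alias, bucket name, and zone are placeholders, and in-flight protection would additionally come from TLS or SSH on the transport.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
s3 = boto3.client("s3")

# Block storage encrypted with a customer-managed KMS key (alias is hypothetical).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=200,
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="alias/storage-at-rest",
)
print("Encrypted volume:", volume["VolumeId"])

# Object storage encrypted server-side with KMS as it lands in the bucket.
s3.put_object(
    Bucket="example-protected-data",      # hypothetical bucket
    Key="finance/ledger-2017.db",
    Body=b"...",
    ServerSideEncryption="aws:kms",
)
```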

You also need the ability to provide proper authentication and authorization, whether that’s integration with LDAP for NFS permissions, for example, or leveraging Active Directory for Windows environments. You want to ensure that whatever solution you find can support the native cloud IAM roles or service principal roles available on the different public cloud platforms.

You want to ensure that you’re using firewalls and limiting access to who can gain access to certain things to limit the amount of exposure you have.

4) Performance

Performance is always at the top of everyone’s mind. If you take a look at a solution, you want to ensure that it uses dedicated storage infrastructure so that all applications can have the performance throughput that’s required.

No-burst limitations. You will find that some cloud platform vendor solutions use a throttling mechanism in order to give you only certain aspects of performance. You need a solution that can give you guaranteed predictable performance.

You want a solution where, when you start it up on day one with 100,000 files, the performance is the same on day 300 when there are 5 million files. It’s got to be predictable and it can’t change. You also have to look at what your actual storage throughput and IOPS requirements are before you deploy a solution. This is a key aspect.

A lot of people come in and look to deploy a solution without really understanding what their performance requirements are and sometimes we see people who undersize the solution; but a lot of times, we see people who oversize the solution as well. It’s something to really take into consideration to understand.

5) Flexibility and Usability

You want a solution that’s very flexible from a usability standpoint: something that can run on multiple cloud platforms; that finds a good balance of cost for performance; broad support for protocols like CIFS, NFS, iSCSI, and AFP; some type of cloud integration for automation; and the ability to support automation via scripts, APIs, command lines, and all of these types of things.

Something that’s software-defined, and something that allows you to create clones of your data so that you can test your usable production data in a development environment; this is a key aspect that we found.

If you have the functionality, it allows you to test out what your real performance is going to look like before going into production.

6) Adaptability

You need a solution that’s very adaptable: the ability to support all of the available instance and VM types on the different platforms, whether you want to use high-memory instances or your requirements mandate some type of ephemeral storage for your application.

Whatever that instance may be or VM may be, you want a solution that can work with it. Something that will support all the storage types that are available; whether that would be block storage, so EBS on Amazon, or Premium storage on Microsoft Azure, to also being able to support object storage.

You want to be able to leverage that lower-cost object storage, like Azure Blobs or Amazon S3, to place specific data that doesn’t have the same throughput and IOPS requirements as everything else onto that lower-cost storage.

This goes back to my point about understanding your throughput and IOPS requirements so that you can select the proper storage to run your infrastructure on. You also want something that can support both on-premise and cloud, or a hybrid cloud environment, with multiple cloud support and the ability to adapt to requirements as they change.

7) Data Expansion Capability

You need to find a solution that can expand as you grow. If you have a large storage appetite and your need for storage grows, you want to be able to extend it on the fly. This is going to be one of the huge benefits of a software-defined solution running on cloud infrastructure: there is no more rip and replace to extend storage. You just attach more disks and extend your usable data sets.

This goes hand in hand with dynamic storage capacity and being able to support large maximums for files and directories. We’ve seen certain solutions where, once they get to a million files, performance starts to degrade. You need something that can handle billions of files and petabytes worth of data so that you know what you deploy today will meet your data needs five years from now.

8) Support, Testing, and Validation

You need a solution that has that support safety net: available 24/7, 365, with different levels of support, and accessible through multiple channels.

You would also want to find a solution that has design support and offers a free trial or a proof-of-concept version. Find out what guarantees, warranties, and SLAs different solutions can provide to you. Look for monitoring integration: integration with things like CloudWatch, integration with things like Azure Monitoring and reporting, uptime requirements, and all of your audit log and system log integration.
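
As one small example of the CloudWatch integration mentioned above, this sketch raises an alarm when an EBS volume’s I/O queue starts backing up, which is often the first sign that storage is underprovisioned. The volume ID, SNS topic, and threshold are all assumptions to adapt to your own environment.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

cloudwatch.put_metric_alarm(
    AlarmName="nas-volume-queue-backlog",
    Namespace="AWS/EBS",
    MetricName="VolumeQueueLength",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # hypothetical
    Statistic="Average",
    Period=300,                 # 5-minute samples
    EvaluationPeriods=3,        # sustained for 15 minutes
    Threshold=32.0,             # assumed acceptable queue depth
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[
        "arn:aws:sns:us-east-1:123456789012:storage-alerts",  # hypothetical SNS topic
    ],
)
```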

Make sure that whatever solution you find can handle all of the troubleshooting you may need. How will a vendor stand behind its offer? What’s their guarantee? Get a documented guarantee from each vendor that spells out exactly what’s covered, what’s not covered, and, if there is a failure, how that is covered from the vendor’s perspective.

9) Enterprise Readiness

You need to make sure that whatever solution you choose to deploy is enterprise-ready. You need something that can scale to billions of files, because we’re long past millions of files. We are dealing with customers that have billions of files and petabytes of data.

It needs to be highly resilient, support a broad range of applications and workloads, help you meet your DR requirements in the cloud, and give you reporting and analytics on the data that you have deployed and in place.

10) Cloud Native

Is the solution cloud-native? Was the solution built from the ground up to reside in the public cloud, or is it a solution that was converted to run in a public cloud? How easy is it to move legacy applications onto the solution in the cloud?

You should outline your cloud platform requirements. Honestly take the time and outline what your costs and your company’s requirements for the public cloud are. Are you doing this to save money or to get better performance? Maybe you’re closing a data center. Maybe your existing hardware NAS is up for a maintenance renewal, or it’s requiring a hardware upgrade because it is no longer supported. Whatever those reasons are, they are very important to understand.

Look for a solution that has positive product reviews. If you look in the Amazon Web Services Marketplace for any type of solutions out there, the one thing about Amazon is it’s really great for reviews.

Whether that’s deploying a software solution out of the marketplace or going and buying a novel on Amazon.com, check out all of the reviews. Look at third-party testing and benchmark results. Run your own tests and benchmarks; this is what I would encourage you to do. Look at different industry analysts and customer and partner testimonials, and find out whether you have a trustworthy vendor.

About SoftNAS Cloud NAS

I’d like to talk to you for just a few seconds now about SoftNAS Cloud NAS. What SoftNAS NAS Filer offers is a fully featured Enterprise cloud NAS for primary data storage.

It allows you to take your existing applications, which may be residing on-premise and need legacy protocol support like NFS, CIFS, or iSCSI, and move them over to a public cloud environment. The solution allows you to leverage all of the underlying storage of the public cloud infrastructure. Whether that’s object storage or block storage, you can use both. You can mix and match.

We offer a solution that can run not only on VMware on-premise but it can also run on public cloud environments such as AWS and Microsoft Azure. We offer a full high availability cross-zone solution on Amazon and a full high availability cross-network in Microsoft Azure.

We support all storage types on all platforms, whether that’s hot blob or cool blob on Azure, or magnetic EBS. We allow you to create pools on top of that and essentially give you file-server-like access to these particular storage mediums.

Best Practices Learned from 1,000 AWS VPC Configurations

AWS VPC Best Practices with SoftNAS 

Buurst SoftNAS has been available on AWS for the past eight years providing cloud storage management for thousands of petabytes of data. Our experience in design, configuration, and operation on AWS provides customers with immediate benefits.  

Best Practice Topics  

  • Organize Your AWS Environment
  • Create AWS VPC Subnets
  • SoftNAS Cloud NAS and AWS VPCs
  • Common AWS VPC Mistakes
  • SoftNAS Cloud NAS Overview
  • AWS VPC FAQs

Organize Your AWS Environment

Organizing your AWS environment is a critical step in maximizing your SoftNAS capability. Buurst recommends the use of tags. The addition of new instances, routing tables, and subnets can create confusion. The simple use of tags will assist in identifying issues during troubleshooting.  

When planning your CIDR (Classless Inter-Domain Routing) block, Buurst recommends making it larger than expected. This is because every VPC subnet created uses five IP addresses for the subnet. Thus, remember that all newly created subnets have a five IP overhead.  

Additionally, avoid using overlapping CIDR blocks as any future updates in pairing the VPC with another VPC will not function correctly with complicated VPC pairing solutions. Finally, there is no cost associated with a larger CIDR block, so simplify your scaling plans by choosing a larger block size upfront. 
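
A minimal sketch of those two recommendations follows: creating a VPC with a deliberately roomy CIDR block and tagging it (and a first subnet) at creation time. The CIDR ranges and tag values are examples only.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# A /16 leaves plenty of room to carve out subnets later, at no extra cost.
vpc = ec2.create_vpc(
    CidrBlock="10.20.0.0/16",
    TagSpecifications=[{
        "ResourceType": "vpc",
        "Tags": [
            {"Key": "Name", "Value": "prod-storage-vpc"},   # example tags
            {"Key": "Environment", "Value": "production"},
        ],
    }],
)
vpc_id = vpc["Vpc"]["VpcId"]

# Remember: AWS reserves five addresses in every subnet you create.
subnet = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="10.20.1.0/24",
    AvailabilityZone="us-east-1a",
    TagSpecifications=[{
        "ResourceType": "subnet",
        "Tags": [{"Key": "Name", "Value": "prod-storage-private-a"}],
    }],
)
print(vpc_id, subnet["Subnet"]["SubnetId"])
```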

Create AWS VPC Subnets

A best practice for AWS subnets is to align VPC subnets to as many different tiers as possible: for example, the DMZ/proxy layer, the ELB layer for load balancers, the application layer, or the database layer. If a subnet is not associated with a specific route table, then by default it falls back to the main route table. Missing subnet associations are a common issue where packets do not flow correctly because subnets were not associated with their route tables.  

Buurst recommends putting everything in a private subnet by default and using either ELB filtering or monitoring services in your public subnet. A NAT is preferred for gaining access to the public network, as it can become part of a dual-NAT configuration for redundancy. CloudFormation templates are available to set up highly available NAT instances, which require proper sizing based on the amount of traffic going through the network.      

Set up VPC peering to access other VPCs within the environment or from a customer or partner environment. Buurst recommends leveraging the endpoints for access to services like S3 instead of going out either over a NAT instance or an internet gateway to access services that don’t live within the specific VPCs. This setup is more efficient with a lower latency by leveraging an endpoint rather than an external link.  
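
The sketch below shows two of the pieces called out above: explicitly associating a subnet with its route table (so packets don’t silently fall through to the main table) and adding an S3 gateway endpoint so that bucket traffic stays inside AWS instead of going out through a NAT or internet gateway. All IDs are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

VPC_ID = "vpc-0123456789abcdef0"                 # hypothetical
PRIVATE_SUBNET_ID = "subnet-0aaa1111bbbb22222"   # hypothetical

# Dedicated route table for the private tier, explicitly associated with its subnet.
route_table = ec2.create_route_table(VpcId=VPC_ID)
rt_id = route_table["RouteTable"]["RouteTableId"]
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=PRIVATE_SUBNET_ID)

# A gateway endpoint keeps S3 traffic on the AWS network, lowering latency
# and avoiding the NAT/internet-gateway path entirely.
ec2.create_vpc_endpoint(
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=[rt_id],
)
```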

Control Your Access

Control access within the AWS VPC by not cutting corners with a default route to the internet gateway. This access setup is a common problem many customers spend time on with our Support organization. Again, we encourage redundant NAT instances, leveraging the CloudFormation templates available from Amazon to create highly available, redundant NAT instances.

The default NAT instance size is an m1.small, which may or may not suit your needs depending on the traffic volume in your environment. Buurst highly recommends using IAM (Identity and Access Management) for access control, especially configuring IAM roles on instances. Remember that IAM roles cannot be assigned to running instances and are set up at instance creation time. Using those IAM roles means you do not have to keep populating AWS keys within specific products to gain access to those API services. 
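
Here is a rough sketch of attaching an IAM role at launch time via an instance profile, so the instance receives temporary credentials instead of stored access keys. The profile name, AMI, and instance type are assumptions; the role and profile are presumed to exist already.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # hypothetical AMI
    InstanceType="m5.large",             # assumed size
    MinCount=1,
    MaxCount=1,
    # Instance profile wrapping a pre-existing IAM role with least-privilege
    # permissions; no AWS keys ever need to be copied onto the box.
    IamInstanceProfile={"Name": "storage-node-role"},
)
```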

How Does SoftNAS Fit Into AWS VPCs?

Buurst SoftNAS offers a highly available architecture from a storage perspective, leveraging our SNAP HA capability, allowing us to provide high availability across multiple availability zones. SNAP HA offers 99.999% HA with two SoftNAS controllers replicating the data into block storage in both availability zones. Buurst customers who run in this environment qualify for our Buurst No Downtime Guarantee.

Additionally, AWS provides no SLA (Service Level Agreement) unless your solution runs in a multi-zone deployment.    

SoftNAS uses a private virtual IP address in which both SoftNAS instances live within a private subnet and are not accessible externally, unless configured with an external NAT, or AWS Direct Connect.

SoftNAS SNAP HA provides NFS, CIFS and iSCSI services via redundant storage controllers. One controller is active, while another is a standby controller. Block replication transmits only the changed data blocks from the source (primary) controller node to the target (secondary) controller. Data is maintained in a consistent state on both controllers using the ZFS copy-on-write filesystem, which ensures data integrity is maintained. In effect, this provides a near real-time backup of all production data (kept current within 1 to 2 minutes). 

A key component of SNAP HA™ is the HA Monitor. The HA Monitor runs on both nodes that are participating in SNAP HA™. On the secondary node, HA Monitor checks network connectivity, as well as the primary controller’s health and its ability to continue serving storage. Faults in network connectivity or storage services are detected within 10 seconds or less, and an automatic failover occurs, enabling the secondary controller to pick up and continue serving NAS storage requests, preventing any downtime.  

Once the failover process is triggered, either due to the HA Monitor (automatic failover) or as a result of a manual takeover action initiated by the admin user, NAS client requests for NFS, CIFS and iSCSI storage are quickly re-routed over the network to the secondary controller, which takes over as the new primary storage controller. Takeover on AWS can take up to 30 seconds, due to the time required for network routing configuration changes to take place. 

Common AWS VPC Mistakes

These are the most common support issues in AWS VPC configuration: 

  • Deployments require two NIC interfaces with both NICs in the same subnet. Double-check during configuration.  
  • SoftNAS health checks perform a ping between the two instances requiring the security group to be open at all times  
  • A virtual IP address must not be in the same CIDR as the AWS VPC. So, if the CIDR is 10.0.0.0/16, select a virtual IP address outside that range.  

SoftNAS Overview

Buurst SoftNAS is an enterprise virtual software NAS available for AWS, Azure, and VMware with industry-leading performance and availability at an affordable cost. 

SoftNAS is purpose-built to support SaaS applications and other performance-intensive solutions requiring more than standard cloud storage offerings.

  • Performance – Tune performance for exceptional data usage 
  • High Availability – From 3-9’s to 5-9’s HA with our No Downtime Guarantee 
  • Data Migration – Built-in “Lift and Shift” file transfer from on-premises to the cloud 
  • Platform Independent – SoftNAS operates on AWS, Azure, and VMware

Learn: SoftNAS on AWS Design & Configuration Guide 

    AWS VPC FAQs

    Common questions related to SoftNAS and AWS VPC: 

    We use VLANs in our data centers for isolation purposes today. What VPC construct do you recommend to replace VLANs in AWS?

    That would be subnets, so you could either leverage the use of subnets or if you really wanted to get a different isolation mechanism, create another VPC to isolate those resources further and then actually pair them together via the use of VPC pairing technology.

    You said to use IAM for access control, so what do you see in terms of IAM best practices for AWS VPC security?

    The most significant thing is when you deal with either third-party products or customized software that you made on your web server. Anything that requires the use of AWS API resources needs a secret key and an access key. You can either store that secret key and access key in some type of text file and have the software reference it, or, the easier way, set the minimum level of permissions that you need in an IAM role, create the role, and attach it to your instance at start time. Now, the role itself cannot be assigned except at start time. However, the permissions of the role can be modified on the fly, so you can add or subtract permissions should the need arise.

    So when you’re troubleshooting complex VPC networks, what approaches and tools have you found to be the most effective?

    We love to use traceroute.  I love to use ICMP when it’s available, but I also like to use the AWS Flow Logs which will actually allow me to see what’s going on in a much more granular basis, and also leveraging some tools like CloudTrail to make sure that I know what API calls were made by what user to understand what’s gone on.

    What do you recommend for VPN intrusion detection?

    There are a lot of them available. We’ve got some experience with Cisco and Juniper for things like VPN, and Fortinet, whatever you have. As far as IDS goes, Alert Logic is a popular solution; I see a lot of customers use that particular product. Some people also like open-source tools like Snort and things like that.

    Any recommendations around secure jump box configurations within AWS VPC?

    If you’re going to deploy a lot of your resources within a private subnet and you’re not going to use a VPN, one of the ways a lot of people handle this is to configure a quick jump box. What I mean by that is to take a server, whether Windows or Linux depending on your preference, put it in the public subnet, and only allow access from a certain set of IP addresses, over SSH for Linux or RDP for Windows. That puts you inside the network and allows you to gain access to the resources within the private subnet.

    And do jump boxes sometimes also work with VPNs? Are people using VPNs to access the jump box too, for added security?

    Some people do that. Sometimes they’ll put a jump box inside the VPC and VPN into that. It’s just a matter of your organization’s security policies.

    Any performance or further considerations when designing the VPC?

    It’s important to understand that each instance has its own available amount of resources, not only from a network I/O perspective but also from a storage I/O perspective. It’s also important to understand what a 10Gb instance means. Take the c3.8xlarge, which is a 10Gb instance. That’s not 10Gb of network bandwidth plus 10Gb of storage bandwidth; that’s 10Gb for the instance. So if you’re pushing a high amount of I/O from both a network and a storage perspective, that 10Gb is shared, not only by the network traffic but also by access to the underlying EBS storage network. This confuses a lot of people: it’s 10Gb for the instance, not just a 10Gb network pipe.

    Why would you use an elastic IP instead of the virtual IP?

    What if you had some people that wanted to access this from outside of AWS? We do have some customers whose servers are primarily within AWS, but they want access to files from clients that are not inside the AWS VPC. So you could leverage it that way. To be honest, this was the first way that we actually created HA, because at first it was the only method that allowed us to share an IP address and work around some public cloud limitations, like the lack of layer-2 broadcast.

    Looks like this next question is around AWS VPC tagging. Any best practices for example?  

    Yeah, so I see people that basically take different services, like web and database or application, and they tag everything within the security groups and everything with that particular tag.  For people that are deploying SoftNAS, I would recommend just using the name SoftNAS as my tag. It’s really up to you, but I do suggest that you use them. It will make your life a lot easier.

    Is storage level encryption a feature of SoftNAS Cloud NAS or does the customer need to implement that on their own?  

    So as of our version that’s available today which is 3.3.3, on AWS you can leverage the underlying EBS encryption. We provide encryption for Amazon S3 as well, and coming in our next release which is due out at the end of the month we actually do offer encryption, so you can actually create encrypted storage pools which encrypt the underlying disk devices.

    Virtual IP for HA: does the subnet it would be part of get added into the AWS VPC routing table?

    It’s automatic. When you select that VIP address in the private subnet, it will automatically add a host route into the routing table, which allows clients to route that traffic.

    Can you clarify the requirement on an HA pair with two NICs, that both have to be in the same subnet? 

    So each instance needs two NIC ENIs, and each of those ENIs actually needs to be in the same subnet.

    Do you have HA capability across regions? What options are available if you need to replicate data across regions? Is the data encryption at-rest, in-flight, etc.?

    We cannot do HA with automatic failover across regions.  However, we can do SnapReplicate across regions. Then you can do a manual failover should the need arise. The data you transfer via SnapReplicate is sent over SSH and across regions. You could replicate across data centers. You could even replicate across different cloud markets.

    Can AWS VPC pairings span across regions?

    The answer is, no, that it cannot.

    Can we create an HA endpoint to AWS for use with AWS Direct Connect?

    Absolutely. You could go ahead and create an HA pair of SoftNAS Cloud NAS, leverage direct connect from your data center, and access that highly available storage.

    When using S3 as a backend and a write cache, is it possible to read the file while it’s still in cache?

    The answer is, yes, it is. I’m assuming that you’re speaking about the eventual consistency challenges of the AWS standard region; with the manner in which we deal with S3 where we treat each bucket as its own hard drive, we do not have to deal with the S3 consistency challenges.

    Regarding subnets, the example where a host lives in two subnets, can you clarify both these subnets are in the same AZ?

    In the examples that I’ve used, each of these subnets is actually within its own availability zone. So, again, each subnet is in its own separate availability zone, and if you want to discuss this more, please feel free to reach out.

    Is there a white paper on the website dealing with the proper engineering for SoftNAS Cloud NAS for our storage pools, EBS vs. S3, etc.?

    Click here to access the white paper, which is our SoftNAS architectural paper which was co-written by SoftNAS and Amazon Web Services for proper configuration settings, options, etc. We also have a pre-sales architectural team that can help you out with best practices, configurations, and those types of things from an AWS perspective. Please contact sales@softnas.com and someone will be in touch.

    How do you solve the HA and failover problem?

    We actually do a couple of different things here. When we have an automatic failover, one of the things that we do when we set up HA is we create an S3 bucket that has to act as a third-party witness. Before anything takes over as the master controller, it queries the S3 bucket and makes sure that it’s able to take over. The other thing that we do is after a take-over, the old source node is actually shut down.  You don’t want to have a situation where the node is flapping up and down and it’s kind of up but kind of not and it keeps trying to take over, so if there’s a take-over that occurs, whether it’s manual or automatic, the old source node in that particular configuration is shut down.  That information is logged, and we’re assuming that you’ll go out and investigate why the failover took place.  If there are questions about that in a production scenario, support@softnas.com is always available.

    On-Premise vs AWS NAS Storage – Which is Best for Your Business?

    On-Premise vs AWS Cloud NAS Storage – Which is Best for Your Business? Should you keep your On-Premises NAS: Upgrade, Pay Maintenance, or Public Cloud? Download the full slide deck on Slideshare

    The maintenance bill is due for your on-premises SAN/NAS, or it just increased. It’s hundreds of thousands or millions of dollars just to keep your existing storage gear under maintenance. And you know you will need to purchase more storage capacity for this aging hardware.

    • Do you renew and commit another 3-5 years by paying the storage bill and further commit to a data center architecture?
    • Do you make a forklift upgrade and buy new SAN/NAS gear or move to hyper-converged infrastructure?
    • Do you move to the AWS cloud for greater flexibility and agility?
    • Will you give up security and data protection?

    Difference between on-premise vs Hyper-converged vs AWS

    We’re going to be talking about the on-premises to cloud conversation. We’re going to show you the difference between on-premise, hyper-converged, and AWS. We will also tell you why you should choose AWS over on-premise and hyper-converged. Then we will tell you a little bit about SoftNAS and how it helps with your cloud migrations.

    On-premise vs AWS: upgrade, paid maintenance, or public cloud?

    Let’s focus on a storage dilemma that we have happening with IT teams and organizations all across the world.

    on-premises VS hyper-converged VS AWS
    The looming question is what do I do when my maintenance renewal comes up?

    Teams are left with three options. You can stay on-premise and pay the renewal fee for your maintenance bill, which is a continuously increasing expense. You can consider a forklift upgrade, where you buy a new NAS or SAN or move to a hyper-converged platform.

    The drawback with this option is that you still haven’t solved all your problems: you’re still on-prem, you’re still using hardware, and the next maintenance renewal is about 12 to 24 months away. Finally, customers can Lift and Shift data to AWS, where hardware will no longer be required and data centers can be unplugged.

    on premises nas upgrade or not

    Maintenance costs keep increasing for support, for swapping disks, for downtime. SSD drives and SSD storage carry exorbitant pricing, and you pay an arm and a leg to ensure that your environment works as advertised. You have never-ending pressure from the business to add more storage capacity: we need it, we need it now, and we need more of it. There is a lack of low-cost, high-performance object storage, and you’re pressured by the business owners for agile infrastructure.

    On-Premise vs Hyper-Converged vs AWS Cloud

    The business is growing; data is growing. You need to be way ahead of the curve to actually keep up. Let’s take a look and do a stare-and-compare of the three options: On-Premise, Hyper-Converged, and AWS Cloud.

    on premise vs hyper converged vs aws cloud computing

    From a security standpoint, all three of these options deliver a secure environment; the rules and policies that you’ve already designed to protect your environment travel with you. From an infrastructure and management standpoint, on-premise and hyper-converged still require your current staff to maintain and update the underlying infrastructure.

    That’s where we’re talking about your disk swaps, your networking, your racking and un-racking. AWS can help you limit this IT burden with its managed infrastructure.

    From a scalability standpoint, I dare you to call your NAS or SAN provider and tell them that you think you bought too much storage last year and you want to give some of it back. In AWS, you get just that option. You can scale up or scale down, allowing you to grow as needed and not as estimated.

     

    On-premise vs AWS Management

    AWS vs. On-Premises NAS Storage

    For your on-premise and hyper-converged systems, you control and manage everything from layer one all the way up to layer seven. In an AWS model, you can remove the need for jobs like maintenance, disk swapping, and monitoring the health of your system, and hand that over to AWS. You’re still in control of managing user account access in your applications, but you can wave goodbye to hardware, maintenance fees, and forklift upgrades.

    8 reasons to choose AWS for your storage infrastructure.

    From a scalability standpoint, which we talked about earlier, AWS gives you the ability to grow your storage as needed, not as estimated. For the people who are storage gurus, you know exactly what that means.

    reasons to choose AWS for your storage infrastructure

    I’ve definitely been in rooms sitting with people doing predictive modeling about how much data we are going to grow by for the next quarter or the next year. I can tell you for a 100% fact that I have never ever been in a room where we’ve come up with an accurate number. It’s always been an idea, a hope, a dream, a guess.

    With AWS and the scalability it provides, you can grow your storage as you go and pay only for what you use. That in itself is worth its weight in gold. Not only that, you get a chance to end your maintenance renewals and stop paying that maintenance ransom, where access to your data is held hostage until the bill is paid.

    There are also huge benefits to trading in the CAPEX model for the OPEX model. There is no more long-term commitment. When you’re finished with a resource, you send it back to the provider: when you’re done using your S3 disk, you turn it off and hand it back to Amazon.

    You also gain freedom from having to make a significant long-term investment in equipment that you and I know will eventually break down or become outdated. You also get a reliable infrastructure: S3 with its eleven nines of durability, and EC2 with four nines of availability for your multi-AZ deployments. You have functions like disaster recovery, ease of use and management, and you’re utilizing best-in-class security to protect your data.
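
    To make those numbers concrete, here is a quick, hedged calculation of what an availability figure translates to in downtime per year. Note that S3’s eleven nines describe durability (the odds of losing an object), not uptime:

        # Rough model: expected unavailability per year for a given availability level.
        def downtime_minutes_per_year(availability):
            return (1 - availability) * 365 * 24 * 60

        print(round(downtime_minutes_per_year(0.9999), 1))  # four nines -> roughly 52.6 minutes/year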

    If you’re currently using an on-premises NAS system and it’s coming up for maintenance renewal, what do you intend to do?
    • Are you going to do an in-place upgrade, where you use your existing hardware but update the software?
    • Are you going to do a forklift upgrade, where you buy new hardware and software?
    • Are you going to move to a hyper-converged system? Or are you considering the public cloud, whether it’s AWS or other options?

    I think most of you are intending to move to the public cloud, whether it’s AWS or others. Looks like a lot of you are interested in in-place NAS and SAN upgrades, so it’s interesting.

    A lot of you are also considering moving to hyper-converged. For those of you who answered other, we would be curious to learn more about what your other plans are. On the questions pane, you’re more than welcome to write what you’re intending to do.

    Lift and Shift Data Migration while keeping enterprise NAS features

    Lifting and shifting to the cloud can be done in multiple ways. At petabyte scale, you can use AWS Import/Export with Snowball. You can connect directly using AWS Direct Connect, or you can use tools and programs like Rsync, Lsync, and Robocopy.
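
    As a small, do-it-yourself illustration (not a replacement for Snowball or Direct Connect at scale), a few lines of Python with boto3 can copy a directory tree into an S3 bucket. The bucket name and paths below are placeholders:

        # Hedged sketch: walk a local directory and upload each file to S3.
        import os
        import boto3

        s3 = boto3.client("s3")
        BUCKET = "example-lift-and-shift-bucket"  # placeholder bucket name

        def upload_tree(local_root, prefix=""):
            for dirpath, _dirs, files in os.walk(local_root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    rel = os.path.relpath(path, local_root).replace(os.sep, "/")
                    key = "/".join([prefix, rel]).strip("/")
                    s3.upload_file(path, BUCKET, key)

        upload_tree("/mnt/old-nas/projects", prefix="projects")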

    Once your data is in the cloud, the question is how you’re going to maintain the same enterprise-level experience that you’re used to. With SoftNAS Cloud NAS, you have that ability. We give you a no-downtime SLA, and we’re the only company that gives that guarantee.

    Lift and Shift Data Migration

    SoftNAS allows you to lift and shift while maintaining the same enterprise level of service and experience that you are used to. We are the only company that gives a no-downtime guarantee SLA. We will give you the same enterprise feel in the cloud that you are used to on-premise. Whether that’s serving out your data via NFS, CIFS, or SMB for the apps that need it, we can do that.

    We deploy within minutes. We give you the ability, as we demonstrated, to do storage snapshots. The GUI itself is easy to learn and easy to use, with no training. You don’t need to send your teams back for training to be able to use the SoftNAS software.

    We allow you to use the standard protocols. We leverage AWS’s storage elasticity. SoftNAS enables the existing applications to migrate unchanged. Our software provides all the Enterprise NAS capabilities — whether it’s CIFS, NFS, or iSCSI — and it allows you to make the move to the cloud and preserve your budget for the innovation and adaptations that translate to improved business outcomes.

    SoftNAS can also run on-premise via a virtual machine and create a virtual NAS from a storage system and connect to AWS for cloud-hosted storage.

    What storage workloads do you intend to move to the AWS Cloud?

    For those of you who are interested in moving to AWS, is it going to be NFS, CIFS, iSCSI, AFP, or are you not intending to move to AWS at all? It looks like over 40% of you are intending to move NFS workloads to the AWS Cloud, which is pretty common with what we’ve seen.

    Interest in CIFS and iSCSI is fairly balanced too. A couple of you just have no interest in moving to the AWS Cloud. For those of you who don’t have an interest in moving to the AWS Cloud, again, on the questions pane, please let us know why you don’t intend to move to AWS.

    Easily Migrate Existing Applications to AWS Cloud

    SoftNAS in a nutshell is an enterprise NAS filer that exists on a Linux appliance with a ZFS backing in the cloud or on-premise.

    migrate application to aws cloud

    We have a robust API and CLI that integrate with AWS S3, EBS, on-premise storage, and VMware. This allows us to provide data services like block replication, which allows you to access cloud disks. We give you storage enhancements such as compression and in-line deduplication, multi-level caching, the ability to produce writable snap clones, and encryption for your data at rest or in flight.

    We continue to deliver the best brand of services by working with our industry-leading partners, some of whom you might know: Amazon, Microsoft, and VMware. We continue to partner with them to enhance both our offerings and theirs.

    We recommend that you have your technical leads try out SoftNAS Virtual NAS Appliance. Tell them to visit our website and they’ll be able to go and try out SoftNAS.

    For some of the more technical people in our audience, we invite you to go to our AWS page where you can learn a little bit more about the details of how SoftNAS AWS NAS works.

    Why choose SoftNAS over the AWS storage options?

    We give you the ability, from a SoftNAS standpoint, to encrypt the data. As we spoke about in the webinar, we’re the only appliance that gives a no-downtime SLA. We stand by that SLA because we have designed our software to address and take care of your storage needs. We also have the ability to connect to blob storage, and we are on platforms other than AWS, such as CenturyLink and Azure, among others.

    Say you have an application that you need to move to the cloud, but rewriting that application to support S3 or any kind of block storage is going to take you six months to a year. We give you the ability to migrate that data by setting up an NFS or CIFS share, whatever that application is already used to in your enterprise.

    Consolidating File Servers into the Cloud

    Cloud File Server Consolidation Overview

    Maybe your business has outgrown its file servers and you’re thinking of replacing them. Or your servers are located throughout the world, so you’re considering shutting them down and moving to the cloud. It might be that you’re starting a new business and wondering if an in-house server is adequate or if you should adopt cloud technology from the start.

    Regardless of why you’re debating a physical file server versus a cloud-based file server, it’s a tough decision that will impact your business on a daily basis. We know there’s a lot to think about, and we’re here to show why you should consolidate your physical file servers and move your data to the cloud.

    We’ll discuss the state of the file server market and talk about the benefits of cloud file sharing. What we’re going to talk about is some of the challenges and some of the newest technologies to step up to the challenges of unstructured data not only sitting in one place but scattered around the world.

    Managing Unstructured Data

    The image below is how Gartner looks at unstructured data in the enterprise. The biggest footprint of data that you have as an enterprise or a commercial user is your unstructured data. It’s your files.

    cloud file server consolidation unstructured data

    That is where you buy a single large platform, maybe a petabyte or even larger, to house all of that file data. But what creeps up on us is the data that doesn’t live in the data center, the data that isn’t right under your nose and surrounded by best practices. Then there are the distributed file servers that live around the world: an enterprise with 50 or 100 locations, be they branch offices, distribution centers, manufacturing facilities, oil rigs, etc., effectively has at least 50 or 100 small data centers.

    The analyst community (Gartner, Forrester and 451) tell us that almost 80% of the unstructured data you’re dealing with actually sits outside of your well protected data center. This presents challenges for an enterprise because it’s outside of your control.

    It’s been difficult to leverage the cloud for unstructured data. Customers by and large are being fairly successful moving workloads and applications to the cloud, along with the storage those applications use. However, when you’re talking about user data and your users are all around the world, you’re dealing with distance, latency, network unavailability in general, and multiple hops through routing.

    This has led to some significant challenges, such as data islands popping up everywhere. You have massive amounts of corporate data that isn’t subject to the same kind of data management and security that you would have in an enterprise data center, including backup, recovery, audit, compliance, secure networks, and even physical access.

    And that is what has led to a real “bleeding from the neck” problem: how am I going to get this huge amount of data around the world under our control?

    Unstructured Data Challenges

    These are some of the issues that you find: Security problems, lost files. Users calling in and saying, “Oops, I made a mistake. Can you restore this for me?” And the answer quite often is, “No. You people in that location are supposed to be backing up your own file server.”

    Bandwidth issues are significant as people are trying to have everyone in the world work from a single version of the truth and they’re trying to all look at the same data. But how do you do that when it’s file data?

    You have a location in London trying to ship big files to New York. NY then makes some changes and ships the files to India. Yet people are in different time zones. How do you make sure they’re all working off of the same version of information? That has led to the kind of problems driving people to the cloud. Large enterprises are trying to get to the cloud not only with their applications, but with their data.

    If you look at what Gartner and IDC say about the move to the cloud, you see that larger enterprises have a cloud-first strategy. We’re seeing SMBs (small and medium businesses) and SMEs (small and medium enterprises) also have a cloud-first strategy. They’re embracing the cloud and moving significant amounts of their workloads to the cloud.

    cloud file server consolidation

    More companies are going to install a cloud IT infrastructure at the expense of private clouds. We see customers all the time that are saying, “I have a 300,000 sq. ft. data center. My objective is to have a 100,000 sq. ft. data center within the next few months.”

    NAS/SAN vs. Hyperconverged vs. The Cloud

    And so many customers are now saying, “What am I going to do next? My maintenance renewal is coming up. My capacity is reaching its limit because unstructured data is growing in excess of 30% annually in the enterprise. So what is the next thing I am going to do?”

    Am I going to add more on-premise storage to my files? Am I going to take all of my branch offices that are currently 4 terabytes and double them to 8 terabytes?

    You probably have seen the emergence of hyperconverged hardware — single instance infrastructure platforms that do applications, networking and storage. It’s a newer, different way of having an on-premise infrastructure. With a hyperconverged infrastructure, you still have some forklift upgrade work both in terms of the hardware platform and in terms of the data.

    nas vs hyperconverged vs cloud

    Customers that are moving off of traditional NAS and SAN systems onto hyperconverged have to bring in the new hardware, migrate all the data, get rid of the old hardware, so it’s still lift and shift from a datacenter as well as a footprint.

    Because of that, a lot of SoftNAS customers are asking, “Is it possible to do a lift and shift to the cloud? I don’t want to get the infrastructure out of my data center and out of my branch offices. I don’t want to be in the file server business. I want to be in the banking, or the retail, or the transportation business.”

    I want to use the physical resources of the cloud providers, whether Azure, AWS, or Google, but it’s my data and I want everybody to have access to it. That’s opened the world to a lift and shift into a cloud-based infrastructure. That means you and your peers are going through a pros and cons discussion. If you look at on-premises versus hyperconverged versus the cloud, the good news is all of them have a secure infrastructure available. That could be from the level of physical access, authentication, and encryption, whether in transit, at rest, or in use, all the way down to rights management.

    nas vs hyperconverged vs cloud

    What you’ll find is that all the layers of security apply across the board. In that area, cloud has become stronger in the last 24 months. In terms of infrastructure management — which is getting to be a really key budget line item for most IT enterprises — for on-premise and hyperconverged, you’re managing that. You’re spending time and effort on physical space, power, cooling, upgrade planning, capacity planning, uptime and availability, disaster recovery, audit and compliance.

    The good news with the cloud is you get to offload that to someone else. Probably the biggest benefit that we see is in terms of scalability. It’s in terms of the businesses that say, “I have a pretty good handle on the growth rates of my structured data, but my unstructured data is a real unpredictable beast. It can change overnight. We may acquire another company and find out we have to double the size of our unstructured data share. How do I do that?” Scalability is a complicated task if you’re running an on-premise infrastructure.

    With the cloud, someone else is doing it — either at AWS, Azure, Google, etc. From a disaster recovery perspective, you pretty much get to ride on the backs of established infrastructure. The big cloud providers have great amounts of staff and equipment to ensure that failover, availability, pointing to a second copy, roll-back etc, has already been implemented and tested.

    Adding more storage becomes easy too. From a financial perspective, the way you pay for an on-premise environment is that you buy your infrastructure and then you use it. It’s the same thing with hyperconverged, although they have lower price points than traditional legacy NAS and SAN. But the fact is only the cloud gives you the ability to say, “I’m going to pay for exactly what I need. I’m not buying 2 terabytes because I currently need 1.2 terabytes and I’m growing 30% per annum.” If you’re using 1.2143 terabytes, that’s what you pay for in the cloud.
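
    A back-of-the-envelope sketch of that difference, using an assumed price of $0.023 per GB-month purely for illustration (actual AWS pricing varies by region, storage class, and tiering):

        # Pay for usage vs. buying capacity ahead of demand (illustrative numbers only).
        PRICE_PER_GB_MONTH = 0.023      # assumed price, not a quote
        used_tb = 1.2143                # what you actually consume this month
        provisioned_tb = 2.0            # what you might have bought up front on-premises

        pay_as_you_go = used_tb * 1024 * PRICE_PER_GB_MONTH
        bought_ahead = provisioned_tb * 1024 * PRICE_PER_GB_MONTH

        print(f"Pay for what you use:    ${pay_as_you_go:,.2f} per month")
        print(f"Pay for what you bought: ${bought_ahead:,.2f} per month equivalent")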

    A Single Set of Data

    But just as important, they have found out that there is a business use-case. There is the ability to do things from a centralized consolidated cloud viewpoint which you simply cannot do from the traditional distributed storage infrastructure.

    If you think about what customers are asking for now, more and more enterprises are saying “I want centralized data.” That’s one of the reasons they’re moving to the cloud. They want security. They want to make sure that it’s using best practices in terms of authentication, encryption, and token management. And whatever they use has to be able to scale up for their business.

    cloud file server consolidation unstructured data

    But how about from a use case perspective? You need to make sure you have data consistency. Meaning, if I have people on my team in California, New York and London, I need to make sure they’re not stepping on each other’s work as they collaborate on projects.

    You need to make sure you have flexibility. If you’re getting rid of old infrastructure in 20 or 30 branch offices, then you need to get rid of them easily and quickly spin up the ability for them to access centralized data within minutes. Not within hours and weeks of waiting for new hardware to come in.

    Going back to data consistency, if I’m going to have one copy of the truth that everyone is using, I need to make sure that distributed file locking works. Because, face it, that’s what file servers do. That has been the foundation of file servers since they were invented. Those are the types of benefits people get when they move their file servers into the cloud: they cut costs and increase flexibility.

    Cloud File Server Reference Architecture

    Here’s an example. In the image below, a SoftNAS customer needed to build a highly available 100 TB Cloud NAS on AWS. The NAS needs to be accessed in the cloud via the CIFS protocol, and they need to have the data available elsewhere: not just the primary location, but across regions and different continents.

    cloud file server consolidation reference architecture

    They needed to have access from remote offices. They also needed Active Directory integration and distributed file locking.

    The solution, provided along with Talon FAST, deployed two SoftNAS instances, in this case in two separate availability zones: controller A in one zone and controller B in the second zone. We leveraged S3 and EBS for different types of applications based on their SLAs.

    We set up replication between the two nodes so the data is available in two different places, each within its own zone. We deployed HA on top of that to give availability with minimal downtime. So we give you the flexibility to migrate data or flip to another node without manual intervention.

    Next Steps

    You can also try SoftNAS Cloud NAS free for 30 days to start consolidating your file servers in the cloud:

    softnas cloud nas free trial

    Docker Persistent Storage on AWS

    Persistent storage is critical when running applications across containers on AWS. In this article, we cover how to build persistent storage for Docker containers on AWS. Learn best practices to spin-up, spin-down and move containerized applications across AWS environments, whether running Docker or Amazon EC2 Container Services (ECS).

    You can jump to different sections of the article by clicking the hyperlinks below:

    1. Resources (Recording video, slides, whitepapers)
    2. What is Docker?
    3. Virtual Machines vs. Containers
    4. Why Does Docker Persistent Storage Matter?
    5. Application Delivery with Persistent Storage
    6. Amazon EC2 Container Service
    7. Docker and SoftNAS Cloud NAS
    8. SoftNAS Cloud NAS Overview
    9. Docker Persistent Storage Q&A

    SlideShare: How to Build Docker Persistent Storage on AWS

    SoftNAS Cloud NAS on the AWS Marketplace: Visit SoftNAS on the AWS Marketplace

    What is Docker?

    docker persistent storage what is

    What is Docker and what are containers?

    Containers running on a single machine share the same operating system kernel. They start instantly and they make more efficient use of RAM. Images are constructed from layered file systems, so they share common files, which makes disk usage and image downloads much more efficient. Docker containers are based on open standards, which allows containers to run on major Linux distributions and Microsoft operating systems. Containers isolate applications from each other and from the underlying infrastructure, which provides an added layer of protection for the application.

    Virtual Machines vs. Containers

    docker persistent storage containers vs vm

    People often ask, “How are virtual machines and containers different?” Containers have similar resource isolation and allocation benefits as virtual machines, but they take a different architectural approach that allows them to be much more portable and efficient. Virtual machines include the application and its necessary binaries and libraries, but they also carry the overhead of an entire guest operating system, which can take tens of gigabytes. It’s a challenge that virtual desktop people have taken on.

    Containers take a very different approach, where multiple containers run inside a single instance or virtual machine. They isolate processes and user space without being tied to any specific infrastructure, so they’re much more portable and can run virtually anywhere that has a Docker infrastructure. The benefits of Docker containers over VMs are less overhead, faster instantiation, better isolation, and easier scalability. Another benefit of containers over virtualization is that containers are a great fit for automation.

    So why does DevOps care? Again, it’s all about automation, setup, launch and run. Don’t worry about what hardware you’re on. Don’t worry about finding the drivers for your servers. Now you can focus on your life cycle repeatability and not worry about keeping your infrastructure going.

    Why Does Docker Persistent Storage Matter?

    docker persistent storage devops

    Why does persistent storage matter for Docker? We need to think about what our storage options are. The Docker containers themselves have their own storage. You can use the storage that’s in the container if you want. The huge problem with that is simple: it goes away when the container is gone. The container is useful as a scratch pad, but not great if you have data you want to keep. So that’s the storage problem.

    Docker containers can also mount directories from the host instance on AWS. In that case, the storage can be shared by all containers that run within that host. So what are the issues? You typically deploy a cluster of instances to house containers, and your containers move around those different instances and hosts. The host storage is persistent, but there’s no guarantee of how you can share that storage across hosts.

    docker persistent storage container storage options

    Network storage is a much better option because now you can share storage like you used to and access it from anywhere. Then there’s native cloud storage such as EBS and S3: if you want block, block doesn’t share very well; if you want S3, you have to code your containers to work directly with object storage. SoftNAS Cloud NAS gives you the middle ground between network storage and native cloud storage: put CIFS shares, NFS shares, etc., onto your cloud storage and have a complete solution.
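
    As a hedged illustration of the network-storage option, the sketch below uses the Docker SDK for Python to bind-mount a host path (assumed to already be an NFS mount from a cloud NAS) into a container, so the data outlives any individual container. The image and paths are examples only:

        # Hedged sketch: run a container whose data directory lives on an NFS-backed host path.
        import docker

        client = docker.from_env()

        container = client.containers.run(
            "nginx:alpine",                   # example image
            detach=True,
            volumes={"/mnt/nfs/data":         # host path, assumed to be an NFS mount
                     {"bind": "/usr/share/nginx/html", "mode": "rw"}},
        )
        print(container.id)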

    Application Delivery with Persistent Storage

    docker persistent storage application delivery

    Let’s talk about application delivery with persistent storage. If you look at container services, there are really three components in a container service. There’s your front-end service; think of that as what you see, the part that presents information, often on a webpage. Your back-end service provides the APIs behind the front end and the execution part of the application within Docker. Then there are data storage services. If you use SoftNAS Cloud NAS as your data storage service, you now get high availability and persistence.

    So to really use EC2 and SoftNAS together, what does that mean? You’re going to use Amazon’s clustering to kick off a cluster of containers and instances, and auto-scaling.  

    docker persistent storage aws ecs

    By doing this, we can have a SoftNAS Cloud NAS instance in one availability zone using our virtual IP address and mount that storage a couple of ways. You can mount it directly in the containers so that they’re using NFS directly, or you can mount SoftNAS Cloud NAS on the container instance. This lets each of the containers use it as local storage, which lessens the amount of capacity you need on your container instances and still provides the level of sharability that you planned on.
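
    For illustration, here is a hypothetical boto3 sketch of an ECS task definition whose container mounts a host path that the container instances have already NFS-mounted from the NAS virtual IP. The family name, image, and paths are assumptions, not values from this webinar:

        # Hedged sketch: an ECS task whose container mounts shared, NFS-backed host storage.
        import boto3

        ecs = boto3.client("ecs", region_name="us-east-1")

        ecs.register_task_definition(
            family="web-with-shared-storage",
            volumes=[{"name": "shared-data",
                      "host": {"sourcePath": "/mnt/softnas"}}],   # NFS mount on each instance
            containerDefinitions=[{
                "name": "web",
                "image": "nginx:alpine",
                "memory": 256,
                "mountPoints": [{"sourceVolume": "shared-data",
                                 "containerPath": "/usr/share/nginx/html",
                                 "readOnly": False}],
            }],
        )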

    The other thing we stress with ECS and Docker containers is you really want to stretch those across a couple of availability zones. That way, if an AZ completely goes out, your auto-scaling can help by bringing up new container instances. This distributes the load onto new containers, and by continuing to access the SoftNAS virtual IP, you’ll be able to keep your storage and stay online.

    Amazon EC2 Container Service

    docker persistent storage amazon ec2

    Now let’s go into Amazon EC2 Container Service. It’s a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster of Amazon EC2 instances. Amazon ECS lets you launch and stop container-enabled applications with simple API calls, allows you to get the state of your cluster from a centralized service, and gives you access to many familiar Amazon features.

    Amazon ECS schedules container placement across your cluster based on resource needs, isolation policies, and availability requirements; having that ability to schedule is important. ECS also eliminates the need for you to operate your own cluster management and configuration management systems, so you don’t have to worry about scaling your management infrastructure.

    One of the benefits of ECS is being able to easily manage clusters at any scale. There is flexible container placement; you want containers to flow across availability zones most of the time.

    docker persistent storage amazon ec2 benefits

    Docker and SoftNAS Cloud NAS

    docker persistent storage softnas cloud nas

    Let’s talk about why SoftNAS matters to Docker. Again, the storage for Docker and container services lives in multiple places. There’s temporary storage in the containers. There’s global storage on the server, the instance that the containers are running on. But if you really want your storage to be portable, accessible, and highly available, then you need the capabilities that SoftNAS Cloud NAS provides. We try to make it simple.

    Part of that is making sure we have APIs that plug into the automation system that goes along with ECS. The setup we showed you had high availability configured as part of that CloudFormation template (CFT) deployment. Having that kind of capability in a quick deployment matters, because when you think about DevOps, they want to be fast and they want to be agile.

    We have a feature called SnapClone that works with scheduled snapshots, such as the hourly snapshots during business hours that were turned on as part of that container cluster deployment. Each of those scheduled snapshots, or a snapshot taken on demand, can be mounted as what we call a SnapClone, a space-efficient writable snapshot. You can then use that for a DevOps test case where you want to test against real production data without, heaven forbid, damaging your real production data. SnapClones are very useful that way, and they also make continuous deployment easier.

    What are the read/write latencies, maximum throughputs, and IO costs for SoftNAS? We don’t use the ephemeral storage of an EC2 instance except as read cache, so the backing storage is EBS General Purpose or EBS Provisioned IOPS. What percentage of that is due to SoftNAS overhead? It’s very light in terms of overall IO.

    The reason you might want to do that is because of our file system. We have background scanning, so if you want an additional layer of protection, we can recognize that some bit rot occurred or something went wrong in EBS, and with our file system underneath we can fix that. The reason I bring that up in terms of overhead is that in this case I configured it for mirroring, which means every write is two writes, while every read still maps to one read and goes to the most available storage. If we’re doing RAID 5, of course, then we do a parity write, but there’s no read-back.

    One additional thought on that: in some scenarios, SoftNAS will actually improve the IO profile simply because of the way that we use ZFS in the backend with read caching. In the event that you’re re-reading something that’s been pulled into cache, you’ll typically see an even better IO profile than what the underlying storage provides. There are definitely some considerations associated with performance and the way that we’re structured and the way the product is designed.

    SoftNAS Cloud NAS Overview

    docker persistent storage softnas architecture

    Let’s talk about SoftNAS Cloud NAS and what it is. SoftNAS is a software-based Cloud NAS filer. We deliver on Amazon through the AWS Marketplace as an EC2 instance. One of the huge benefits you get from SoftNAS is being able to use cloud-native storage to deliver file services such as NFS for Linux and CIFS for Microsoft, and block storage through an iSCSI interface.

    We layer that on the different types of EBS volumes, whether it’s Provisioned IOPS, General Purpose, or any of the other flavors, or on object storage such as Amazon S3. SoftNAS takes S3 and EBS, treats them as devices that are aggregated into storage pools, then carves those storage pools into volumes and exports them over the interfaces I mentioned, along with AFP. Being able to take that cloud-native storage and provide it to software that was written to work with shared files is a huge benefit.

    Another huge benefit is our ability to replicate data, and through data replication also provide high availability with a pair of instances. The data is replicated continuously, usually into separate availability zones, and the secondary monitors the primary’s health through network and storage heartbeats, then does a takeover and continues to provide uninterrupted service to the servers, the users, and the files. That’s just as important with Docker: you’re going to spin up two, four, five hundred containers that all want shared storage, and you want to make sure your infrastructure stays up that entire time so that you don’t have any outages. Since we go across availability zones, even if a whole Amazon availability zone within a region goes down, then through auto-provisioning of your containers and a SoftNAS takeover you have completely uninterrupted service.

    Docker Persistent Storage Q&A

    1. SnapReplicate and public elastic IPs: if it’s private, how do you have the service storage using private IPs, which are specific to a subnet or availability zone in AWS?
      • We have two modes of HA, each with a virtual IP. For the longest time we’ve supported HA through Amazon’s Elastic IP, and that does in fact use a public IP. The other mode is our private virtual IP, and in that case everything is completely private. We manage the route tables between availability zones and move that virtual IP from the primary to the secondary instance; that’s how we deliver it.
    2. Can EBS volumes be encrypted with AWS key management service?
      • We have encryption built into our product for data storage. We’re using a common third-party encryption package called LUKS, so we can encrypt the data on disk, and we also have a pretty nice application guide on how to do in-flight data encryption for both NFS and CIFS.
    3. Is there a way to backup the SoftNAS managed storage? What types of recovery can we leverage?
      • We’ve built into our product the ability to back up our storage pools through EBS snapshots. If you’re familiar with EBS snapshots, they take a full copy of your volume and manage the changes from there. We’ve built that into the storage pool panel in the UI, so you can take full backups that way and then restore the full storage pool as well. But that’s just one avenue for backup. What we’re doing with storage in the public cloud, Amazon in this case, mirrors any other enterprise-class storage product or enterprise-class NAS, and we highly recommend that you have a complete backup and recovery strategy. There are a lot of really good products on the market today that we’ve integrated and tested within our lab, and a lot of others that I’m sure work just fine because we’re completely about open standards. It’s very important, with our storage or anybody’s storage, to have a very comprehensive backup plan and use those third-party products.
    4. What RAID types are being used under SoftNAS?
      • In the setup built for containers, that’s RAID 1, but that was just a choice. The short answer is we support RAID 0, RAID 1, RAID 5, and RAID 6. On Amazon, where the underlying storage is already quite durable, you’d probably use RAID 1. If you deploy us in a data center on raw drives, that’s where you want to look hard at RAID 5 and RAID 6 and use them, because as drives get bigger and bigger the rebuilds take a while, and you want to be covered if a rebuild hits a bit error on another drive and you need to recover from it. Those kinds of factors go into the choice. We support the whole gamut of RAID levels.
    5. What version of NFS do we support?
    6. What is the underlying file system used by SoftNAS?
      • At SoftNAS we are very much an open standards, open source company. We’ve built the Z file system commonly referred to as ZFS into our product.
    7. What is the maximum storage capacity of the SoftNAS instance?
      • We don’t really have a limit that we enforce ourselves. Amazon has certain rules for the number of drive mounts that they provide, but if you’re using S3, our capacity range is virtually limitless; we’ll quote up to 6PB. On the AWS Marketplace, we do have editions based on the capacity they’ll manage: our Express edition manages up to 1TB, our Standard up to 20TB, and then we have a BYOL edition.
    8. Is it possible to get a trial version of SoftNAS?
      • Yes it is. Through the AWS marketplace, we have a 30 day trial as long as you’ve never tried our product before. It just works out of the box there, just off the console. If you would like to try it through BYOL, then contact our sales team at sales@softnas.com.
    9. Is it a good idea to use SoftNAS as a backup target?
      • Yes. It’s a common use case for us since we enable native cloud storage. Even with on-premise storage you could have a good backup plan, maybe keeping your nightly backups on very fast storage such as EBS or SSD spindles in your data center, but then also have a storage pool made out of S3 object storage up in the cloud and use that for your weekly archival. It’s very common for people that are using a product like VMware to use SoftNAS as a backup target.
    10. Is it a good idea to replace a physical NAS device with SoftNAS?
      • The considerations with replacing that type of solution come down to where you want to store your backups: whether you want to leverage the cloud or retain them locally. If you want to retain them locally, we have an offering that allows you to connect to local disks, and you have a lot of flexibility on the types of disks that you can attach to: local attached disks, iSCSI targets, as well as tying into S3. You can have a local instance that’s tied into S3 for all object storage.
      • Additionally a secondary option would be to have a SoftNAS node that’s deployed within the cloud and use that as a backup target, and essentially you’re getting a 2-for-1 with that type of a strategy.  By backing up to the target, you get a backup storage resource that you don’t have to store on premise but secondarily it offers a disaster recovery strategy by being able to take your backups and store those offsite.  So those are two approaches that might make sense for that particular scenario.
    11. Is it advisable to utilize an AWS-based SoftNAS instance for on premise apps?
      • I’d advise against deploying SoftNAS into an Amazon VPC and accessing it remotely through NFS or CIFS; those protocols are very chatty and degrade over long distances. What is common is to deploy SoftNAS into your VMware cluster in your own data center, then mount Amazon S3 into a storage pool and back up the applications and storage pools used in your data center. It’s great for backups.
      • You’ll have to be somewhat sensitive to latency. There are applications this would not be great for, because there is IO latency between your data center and the Amazon region where the S3 bucket lives. For example, it wouldn’t be a good idea for a database with transactional IO. For backups with S3, you put SoftNAS in your data center and back up to the storage pool.
      • A typical customer use case is when they segment out their hot data which is highly active. It’s typically a smaller subset of their overall data set. One use case is to tie into object storage that you host in the cloud for cool data. Use on premise storage that is not affected by latency to service hot data requirements. Then leverage S3 object storage as the backend larger repository for your cool data.
    12. If we use S3 as a storage pool for on premise, does it provide write back caching?
      • With on-premises deployments, we’re able to leverage high-performance local disk as a block cache file to front-end S3 storage. It functions like a page file for read and write operations, essentially providing caching for S3 access and enhancing overall performance.
      • Using the block cache for both reads and writes enables read-aheads and makes it easier to handle reads and writes.

    We hope that you found the content useful and that you gained something from it. Hopefully, you don’t feel we marketed SoftNAS Cloud NAS too much; our goal here was just to pass on some information about how to build Docker persistent storage on AWS. As you make the journey to the cloud, hopefully this saves you from tripping over some common issues.

    We’d like to invite you to try SoftNAS Cloud NAS on AWS. We do have a SoftNAS 30-day trial.