Webinar: Consolidate Your Files in the Cloud

The following is a recording and full transcript from the webinar, “Consolidate Your Files in the Cloud”. You can download the full slide deck on Slideshare.

Consolidating your file servers into the AWS or Azure cloud can be a difficult and complicated task, but the rewards can outweigh the hassle.

Covered in this webinar:

– The state of the file server market today
– How to conquer unstructured data
– Benefits of file consolidation in the cloud
– Real customer use cases

Full Transcript: Consolidate Your Files in the Cloud

Trace Hunt:             Good morning, good afternoon, good evening, no matter where on this beautiful planet you may be. We’ll be starting this webinar here in a couple of minutes. We’re waiting for everyone to join before we get started. Just be patient with us, and we will get started. Thank you!
To the people who have just joined, we are going to be starting here in about two minutes. We’re waiting for everyone who wants to join to get in. Thank you.

We’ll be starting here in about one minute.

Let’s get started. Again, good morning, good afternoon, or good evening no matter where you may be on this beautiful planet. I am Trace Hunt, the vice-president of sales at SoftNAS, and we are happy to have you here.

We are in a webinar series; this is the third webinar we have put on here. We are very excited to have the opportunity to speak with you about what we do well at SoftNAS and how we go about solving customers’ problems in the cloud.

Joining me here today will be Kevin Brown, our solutions architect. He’ll be going through some of the product details. On the agenda today, I’ll cover the housekeeping and give an overview of what we’re talking about here: the file server and storage consolidation pieces.

I’ll hand it over to Kevin to talk about the product itself and the features it has. Then we’ve got a customer success story at the end that we’ll speak to. Then we’ll wrap it up with some calls to action and also take any questions.

The goal here is to get everything on this part done probably in the next 40 minutes, at the most, maybe 45. We’ve got time on the backend to take care of questions and any other thing that you may want to ask or cover with us here.

With that in mind, let’s get started.

Some housekeeping pieces here if you’re not used to using the application. Make sure that if you want to listen through your computer speakers, you have the mic and speakers on. Everyone is muted.

If you need to use the telephone, the number is up there along with the access code and audio PIN. This session is being recorded. Any questions you may have, you can put them in the Questions pane.

Some people use the Chat as well, but please use the Questions pane. We will address those as we go on here and try to make sure that we’re able to handle and answer those for everyone.

Let’s go on. What we want to talk about here today is that everyone is on this cloud journey, and where you are on that journey will vary from client to client, depending on the organization and maybe even on which parts of the infrastructure or the applications are moving over.

The migration to the cloud is here and upon us. You’re not alone out there. We talk to many customers. SoftNAS has been around since 2012. We were born in the cloud; that’s where we cut our teeth and developed our expertise.

Today, we’ve done approximately 3,000 VPCs. There’s not a whole lot in the cloud that we haven’t seen, but at the same time, I am sure we will see more and we do that every day with it here.

What you’re going to find out there right now as we talk to companies is that storage is always increasing. I was at a customer’s site a month ago, a major health care company. Their storage grows 25% every year, so it doubles every refresh cycle.

This whole idea of growing capacity is with us. They talk about 94% of workloads being in the cloud, and 80% of enterprise information, in the form of unstructured data, being there.

A lot of the data analytics and IoT you’re seeing now are all being built in the cloud.

Four or five years ago, they were talking about cloud migration and reducing CAPEX. Now, it’s become much more strategic. The first area we help them solve is understanding, “Where do we start?”
If I am an enterprise organization and I’m looking to move to the cloud, where do I start? Or maybe you’re already there but looking to solve other issues.

The first use case we want to talk about, as you saw in the title, is file server and storage consolidation. What we mean by that is that these enterprise organizations have file servers and storage arrays that, as I like to say, are rusting out.

What I mean by that is probably around three [axis 0:07:35]. One, you could be coming up on a refresh cycle because your company is on a normal refresh cycle, due to a lease or just the overall policy and budget for how they refresh.

Two, you could be coming up on the end of the initial three-year maintenance that you bought with that file server or storage array, getting ready for that fourth year. If you’ve been around this game enough, you know that the fourth and subsequent years are always hefty renewals.

That might be coming up, or you may be getting to a stage here where the end of services or end of life is happening with that particular hardware.

What we want to talk about here today and show you is how you lift that data into the cloud and move it there so you can start using it.

That way, when you go to this refresh, there is no need to go refresh those particular file servers and/or storage arrays, especially where you’ve got secondary and tertiary data where it’s mandatory to keep it around.

I’ve talked to clients where they’ve got so much data it would cost more to figure out what data they have, versus just keeping it. If you are in that situation out there, we can definitely help you on this.

The ability to go move that now, take that file server or storage array, take that data, move it to the cloud, and use SoftNAS to access that data is what we are here to talk to you about today.

This also applies in DR situations, and even overall data center situations, anytime you’re looking to make a start in the cloud or you’ve got this old gear sitting around.

You’re looking at the refresh and trying to figure out, what do I do with it? Definitely give us a ring about that too.

As we talk about making the case for the cloud here, the secondary and tertiary data is probably one of the biggest problems that these customers deal with because it gets bigger and bigger.

It’s expensive to keep on-premise, and you’ve got to migrate it. Anytime you have a refresh or buy new gear, you’ve got to migrate this data. No matter what tool sets you have, you’ve got to migrate every time you go through a refresh.

Why not just migrate once, get it done with, and just add more as needed over time? Now, the cloud is much safer, easier to manage, and much more secure.

A lot of the misconceptions we’ve had in the past about the cloud’s security aspects have all been taken care of.

What you’ll find here with SoftNAS, and what makes us very unique, is that our customers continue to tell us, “By using your product, I’m able to run at the same speed and see the same customer experience in the cloud as I do on-prem.”
A lot of that is because we’re able to tune our product. A lot of it is because of the way we designed our product, and more importantly, our smart people behind this who can help you make sure that your experience in the cloud is going to be the same as you have on-prem.

Or if you’re looking for something less than that, we can help you with that piece too. What you’re going to see us offering is around migration and movement of the data, speed, performance, and high availability, which is a must, especially if you’re running any type of critical applications or if this data has to be highly available.

You’re going to see that we’re scalable; we can move all the way up to 16 PB. We’ve got compliance, so anyone around your security pieces would be happy to hear that, because we take care of the security and the encryption of the data as well to make sure it works in a seamless fashion for you.

We are very proud of what we’ve done here at SoftNAS, and we are very happy to have the opportunity to bring this to you as part of this series.

What I’m going to do next here is I’m going to hand it over to Kevin Brown to walk you through what’s underneath the covers of what SoftNAS does and talk a little bit more about some of the capabilities.

If questions come up on that, please ask and we’ll be ready to handle them. Kevin, I’m going to give you the keyboard and mouse, and you should be able to start it off.

Kevin Brown:             Not a problem. Thank you, Trace. SoftNAS is a virtual appliance, whether it’s in your cloud platform or sitting on-prem in your VMware environment.

We are storage agnostic so we don’t care where that storage comes from. If you’re in AWS, we’re going to allow you to utilize S3 all the way up to your provisioned IOPS disk.

If you’re in Azure, from cool blob all the way to premium. If you are on-prem, whatever SSDs or hard drives that you have in your environment that’s connected to your VMware system, we also have the ability to do that.

As much as we are storage agnostic on the backend, on the frontend, we’re also application agnostic. We don’t care what that application is. If that application is speaking general protocols such as CIFS, NFS, AFP, or iSCSI, we are going to allow you to address that storage.

That gives you the benefit of being able to utilize backend storage without having to make API calls to that backend storage. It helps customers move to the cloud without having to rewrite their applications to be cloud-specific, whether that’s AWS or Azure. SoftNAS gives you access to backend storage without the need to talk to the APIs associated with that backend storage.
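As a rough illustration of that difference, an application writing through a NAS mount just uses ordinary file paths, while a cloud-native rewrite would have to call the object store's API directly. The sketch below is illustrative only; the mount path and bucket name are hypothetical.

```python
import os

def save_report(mount_root: str, name: str, data: bytes) -> str:
    """Write through an ordinary file path, which is all an unmodified
    application needs when the NAS presents cloud storage as a mount."""
    path = os.path.join(mount_root, name)
    with open(path, "wb") as f:
        f.write(data)
    return path

# By contrast, a cloud-native rewrite would have to talk to the backend
# API itself, e.g. with boto3 (shown only for comparison, not executed):
#   s3.put_object(Bucket="my-bucket", Key=name, Body=data)
```

The point is that the first version needs no cloud SDK at all: the same code runs against local disk, an NFS/CIFS mount, or a NAS fronting S3.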

Let me see, and try to move to the next slide. I’m sorry. Trace, is it possible to move me to the next slide? The keyboard and mouse is not responding.

Trace:             Give me the control back and I’ll do that. There you go.

Kevin:              All right, perfect. Benefits that come out of utilizing SoftNAS as a frontend to your storage: since I’ve been at this company, our customers have been asking us for one thing in particular, and that is, can we get a tiering structure across multiple storage types?

This is regardless of my backend storage. I want my data to move as needed. We go into multiple environments, and what we see is that probably about 10% to 20% of that data is considered hot data, with the other 80% to 90% being cool if not cold data.

When customers know their data lifecycle, it allows them to save money on their deployment. We have customers that come in and say, “My data lifecycle is: the first 30 to 60 days, it’s heavily hit. The next 60 to 90 days, somebody may or may not touch it. Then at 90 to 120 days, it needs to move to some type of archive tier.”
SoftNAS gives you the ability to do that by setting up smart tiers within that environment. Due to the aging policy associated with the blocks of data, it migrates that data down by tier as need be.

If you’re going through the process, after your first 30 to 60 days, the aging policy will move you down to tier two. If afterwards that data is still not touched after the 90 to 120 days, it will move you down to an archive tier, giving you the cost-savings that are associated with being in that archive tier or being in that lower-cost tier two storage.
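The aging behavior described above can be modeled in a few lines. This is a hypothetical sketch of such a policy, not SoftNAS code; the day thresholds mirror the 30/60/90/120-day example, and promotion moves one tier at a time, as in the audit scenario that follows.

```python
def tier_for_age(age_days: int) -> int:
    """Demote by age: hot (tier 1) for roughly the first 60 days,
    tier 2 until about 120 days, then the archive tier (tier 3)."""
    if age_days < 60:
        return 1
    if age_days < 120:
        return 2
    return 3

def on_access(current_tier: int) -> int:
    """Promotion works the same way in reverse: a touched block moves
    up one tier at a time (tier 3 -> tier 2 -> tier 1)."""
    return max(1, current_tier - 1)
```

With defaults like these, a block untouched for 200 days sits in the archive tier, and two accesses in a row during an audit would walk it back up to hot storage.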

The benefit also is that, as much as you could migrate down these tiers, you could migrate back up these tiers. So you get into a scenario where you’re going through a review, and this is data that has not been touched for a year. However, you’re in the process of going through a review.

Whether it’s a tax review or it’s some other type of audit, what will happen is that as that data continues to be touched, it will first migrate. It will move from tier three back up to tier two.

If that data continues to be touched, it will move back all the way up to tier one and start that lifecycle policy going all the way back down.

Your tier three could be multiple things. It could be object storage. It could be EBS magnetic or cold HDDs. Your tier two, depending on the platform that you’re on, could be EBS throughput optimized or GP2 magnetic, and so on.

Your hot tier could be GP2, provisioned IOPS, premium disk, or standard disk; it depends on what your performance needs are.

SoftNAS has patented high availability on all platforms that we support. Our HA is patented on VMware, it’s patented within Azure and also within AWS.

What happens within that environment is that there is a virtual IP that sits between two SoftNAS instances. The two SoftNAS instances have their data stores associated with each instance.

There is a heartbeat that goes between those two instances; the application talks to the virtual IP between them. If an issue or a failure occurs, your primary instance will shut down and service will move to your secondary instance, which now becomes your primary instance.

The benefit behind that is that it’s a seamless transition for any kind of outage that you might have within your environment.
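A toy model of the heartbeat-and-VIP behavior just described (a hypothetical sketch; the actual patented mechanism is internal to the product, and the miss threshold here stands in for the configurable read checks mentioned later in the Q&A):

```python
class VipPair:
    """Two NAS instances behind one virtual IP (VIP). Clients always
    talk to the VIP; failover repoints the VIP to the secondary after
    repeated missed heartbeats, so the transition is seamless."""

    def __init__(self, required_misses: int = 3):
        self.active = "primary"
        self.misses = 0
        self.required_misses = required_misses

    def heartbeat(self, ok: bool) -> str:
        """Record one heartbeat check; return which node holds the VIP."""
        self.misses = 0 if ok else self.misses + 1
        if self.misses >= self.required_misses and self.active == "primary":
            # Primary is shut down and the secondary takes over the VIP;
            # clients keep using the same address throughout.
            self.active = "secondary"
        return self.active
```

Requiring several consecutive misses before failing over is what keeps a single dropped heartbeat packet from triggering an unnecessary takeover.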

It’s also structured per the providers’ best practices: those instances are placed in an availability set (Azure) or in different availability zones (AWS) so that you have the ability to utilize the SLAs associated with the provider.

If we can move to the next slide. There we go. We’ll go ahead and flesh this out all the way.

In this scenario, we have your HA setup, and your HA setup is within the California region. Within the California region, you have the system set up with SnapReplication and HA because that’s what’s needed within that region.

It allows you to do a failover in case any issue happens to an instance itself. In an Azure environment, by having it within an availability set, neither one of those instances will exist on the same hardware or the same infrastructure, and it allows you five 9s worth of durability.

Within AWS, it’s structured so that you can do that by using availability zones. Using availability zones gives you application durability of up to five 9s there as well within AWS.

Up until a year ago, you could say that an availability zone or a region had never gone down for any one of the providers.

But about a year ago and about a month apart from each other, AWS had a region go down and Azure also had a region go down. A customer came to us and asked for a solution around that.

The solution that we gave them was DR to a region that’s outside of that availability zone or that region altogether. That’s what it shows in this next picture.

Although you have SnapReplicate within that region to protect you, you also have DR replication that sits entirely outside the zone to ensure that your data is protected in case a whole region fails.

The other thing our customers have been asking for: as we come into their environments, they have tons of data. The goal is to move to the cloud, either as quickly as possible or as seamlessly as possible.

They’ve asked us for multiple solutions to help them to get that data to the cloud. If your goal is to move your data as quickly as possible to the cloud — and we’re talking about Petabytes, hundreds of Terabytes of data — your best approach at that particular point in time is taking the FedEx route.

Whether it’s Azure Data Box or AWS Snowball, you can utilize that to send the data over to the cloud and then import that data into SoftNAS to make it easier for you to manage.

That is going to be a cold cut-over. That means that at some point in time, on-prem, you’re going to have to stop that traffic and say, “This is what is considered enough. I’m going to send this over to the cloud, then populate and start over with my new data set in the cloud and have it run that way.”
If you’re looking for a one-time cut-over, that’s what you’re [inaudible 0:24:59], and we’re not talking about petabytes’ worth of data. The other way we explain to customers, and give them a means of actually doing it, would be Lift & Shift.

By using SoftNAS and its Lift & Shift capability, you can do a warm cut-over: data is still running on your legacy storage servers while being copied over to the SoftNAS device in the cloud.

Then once that copy is complete, you just roll over to the new instance in the cloud. SoftNAS has congestion algorithms in UltraFAST that basically allow that data to go over highly congested lines.

What we’ve seen within our testing and within different environments is that we could actually push data up to 20X faster by using UltraFAST across lines between the two SoftNAS [inaudible 0:26:12].
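The warm cut-over described above is essentially a loop of delta passes: each pass copies the backlog while the live source keeps changing, until the remaining delta is small enough for one brief final freeze-and-copy. A simplified model of that convergence (hypothetical numbers; real behavior depends on your line speed and change rate):

```python
def warm_cutover_passes(initial: int, churn: int, rate: int) -> int:
    """Count replication passes until the remaining delta fits in one
    brief final frozen copy. Units are arbitrary (e.g. GB per pass).
    Converges only if the copy rate exceeds the churn rate."""
    if rate <= churn:
        raise ValueError("copy rate must exceed source churn to converge")
    backlog, passes = initial, 0
    while backlog > churn:
        # Copy one pass' worth, then absorb the writes that landed meanwhile.
        backlog = max(0, backlog - rate) + churn
        passes += 1
    return passes + 1  # the final, brief frozen pass
```

This also shows why the congested-line throughput matters: if the copy rate can't outrun the rate of new writes, the backlog never shrinks and a cold cut-over (Data Box/Snowball) becomes the better option.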

This is where the decision comes in. Are you cold-cutting that data to send it over to the cloud via Azure Data Box or Snowball, or is it possible for you to use Lift & Shift and do a warm cut-over to the SoftNAS device you would have in the cloud?

Trace:              Thank you, Kevin. I appreciate you taking some time to talk about what SoftNAS does and what our features are. In the next part here, we want to take a moment and talk about a customer success story.

We’ve got a customer up in New York City called iNDEMAND. You may have never heard of them, but they are very much part of our lives. They are the number one distributor for all your video-on-demand and pay-per-view types of products.

What we mean by that is that they distribute EST to over 200 North American TV operators, which in turn reach well over 60 million homes. When you see that availability with your local dish, or even in a hotel for pay-per-view, they order these particular movies from iNDEMAND, and then iNDEMAND uploads them into their system.

It’s been a very good relationship. They’ve been with us since around 2015, and they are one of the raving fans that we have. We’ll go ahead and explain the challenge they had.

It was definitely a file server and storage array consolidation that they were trying to solve, for large video files. Kevin, I’ll bring it back to you to tell the crowd a little bit more about what the story was and what they solved.

Kevin:              Not a problem. SoftNAS and Keith Son who is the director of infrastructure at iNDEMAND discovered each other at an AWS summit. At that point in time, it was the goal of iNDEMAND to increase their flexibility by running their storage in the cloud.

It had gotten to the point that managing their over 1 PB of on-premise storage was costly. They were paying an outrageous sum of money at every refresh time just for the maintenance cost associated with that storage.

Keith came by. He wanted to focus more on flexibility. How can I save money by moving to the cloud? How can I get it so that I am not cutting down on my performance?

We got the time actually to sit down and talk with him and did an ad-hoc design of what that environment should look like. One of his biggest stress-points was that he wanted to remove the complicated programming from his administrators.

He did not want the administrators to have to dig in to figure out, if I’m sending it to S3, I need to use this API; if I’m sending it to an EBS disk, I need to do this.

He wanted it to be brain-dead simple and that’s what SoftNAS brought to him. One of his biggest griefs was the fact that it took them months to be able to add storage to the environment and that was not working for them.

Given the fact that they were bringing in new data on a regular basis, they needed to make decisions as quickly as possible; they needed to have storage when they needed it, not months later.

As we went through the design of what SoftNAS could do and was supposed to do in their environment, Keith fell in love with it. iNDEMAND fell in love with it. Within a couple of hours after having that conversation, they had spun up an instance within AWS and added storage to that device.

They had production storage within an hour after having that conversation, and them actually going through and testing the environment.

Our solution allowed iNDEMAND to add storage in minutes instead of months. It scales out up to potentially 16 PB. We have customers that are running hundreds and hundreds of Terabytes within multiple cloud environments.

Some of the things that we were also able to help Keith do: Keith was able to run multiple workloads in the cloud with minimal to no application changes at all.

Being able to lift that application to the cloud, attach it to the storage, and utilize it the same way that they would on-prem.

Trace:              Today, Keith runs about 200 TB in SoftNAS on the AWS side. He’s told me many times that the beauty of it was, “I just have to present a mount point to my workload, and from there, I know that everything is safe, protected, and it’s going to work. I don’t have to worry about SoftNAS being up because it’s always up.”

It’s been a great success story here. As you saw Kevin talk through there, they had a nice [inaudible 0:32:10] shop. They have a Petabyte on the ground. They didn’t want to buy anymore. They had enough.

This was a way that they could offset that cost and grow that piece out of it without having to go invest in the infrastructure or invest in the footprint in the data center in Manhattan.

Success stories: we have got other ones here, but we wanted to highlight this one for you.

How do I get to SoftNAS? We are available on both marketplaces in AWS and Azure. In both marketplaces, you have the ability to go do a free trial. What you see right there is an example out of it.

If you go to our web page, you can find access to the free trial, as well as our pricing in the marketplace.

Within the marketplace, what you’re going to find is that we’ve got listings up there that you can pick from, and you can also pick a particular instance or VM and get an idea of what that costs.

Again, we vary our storage on the backend. You’ll need to figure that part in if you’re looking at total cost of ownership for that piece. If you’re looking for help, we do have a Quick Start Program.

What that means is that if you want to go do a free trial, you’re more than welcome. We welcome you to go do that and do it on your own. But if you need that white glove help, come out and talk to us.

I’ve got staff here that are anxious to help you through that and help you with the onboarding piece too. We have much more information on our website: more webinars, architectural white papers, as well as a way to contact us.

I know it’s in the AWS Marketplace. At the same time, we also run in the Azure Marketplace and are very successful around that piece too. Let me give you another piece of information before the question comes up: “How do I buy?”
The first place I would go to buy is the marketplaces. You buy on the marketplaces, and you buy Pay-As-You-Go. On AWS, you have choices between a 1 TB, a 10 TB, a 20 TB, and a 50 TB listing.

Also, we offer a Consumption listing on the AWS side, but unfortunately, there is not a free trial associated with that. With consumption, you use as much as you need out of it and you pay by the drink.

If you need 2 TB only, you can buy 2 TB. If you need 102, you can do 102 TB. We have those available. On the Azure side, they come in 1 TB, 10 TB, 20 TB, and the 50 TB.

If you’re looking for something greater than that, or if you want a longer type of contract, you can buy directly from us or through our partners or resellers.

You can buy BYOL, but BYOL contracts are for a minimum of 12 months. Gold Support, which is 24/7 support, is always available with all of these. We promise you it is some of the best support that you’ll ever see.

We constantly get praise around our support side because when you need help, you’ve got 24/7 help there to help you with that piece.

If you don’t go with the Gold Support, our Silver Support is 9:00 to 5:00, follow-the-sun. If you’re in Australia and you’re on Silver Support, 9:00 to 5:00, we’ll get someone in your region to help you.

With that, I’m going to go next and go look for any questions that we may have out there.

I got a question here. Is it the same product on Azure and AWS?

It is. In fact, the core is the same and the feature sets are the same. If you’re running BYOL, there is no difference at all.

The only thing that Azure doesn’t offer that AWS offers is the Consumption listing. Everything else around the other listings of 1, 10, 20, and 50 TB are all the same.

We’re just starting to hear people wanting to talk about multi-cloud strategies and being able to use multiple clouds. You can use this in a multi-cloud setup, with the same core, and we can talk between both clouds.

If you’re looking for that type of use case with it, please come back to us. Any other questions? Do you see any questions out there, Kevin?

Kevin:              Yes, I had a question out there. Do we run on-prem? Yes, we do. We are a virtual appliance. We have an OVA that you could put within your VMware environment then you could actually run it on-prem and we would be able to address any storage that was connected to your ESXi Server.

Trace:              We have another question here, and I’ll let you answer this one, Kevin. How long does it take in an HA environment to fail over to the secondary node?

Kevin:              I’ll actually answer that in two parts. Between each node, that data is synced every 60 seconds. The way that data is synced is by sending the deltas over from primary to secondary.

It’s only deltas that are being sent over from primary to secondary, so replication should be quick, easy, and simple. It’s also ZFS-based, so it’s only changed blocks of data that we’re sending over.
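Conceptually, each per-minute sync ships only the blocks that changed since the previous snapshot, not the whole dataset. A minimal sketch of that delta computation (illustrative only; the real mechanism is ZFS snapshot replication):

```python
def changed_blocks(prev: dict, curr: dict) -> dict:
    """Compare the current snapshot to the previous one and return only
    the blocks whose content differs -- the delta that a sync cycle
    actually has to send from primary to secondary."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}
```

Because an unchanged dataset produces an empty delta, steady-state replication traffic stays proportional to the write rate rather than to the total data size.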

As for how long it takes to fail over to an instance, it depends. If there is a failure and you are looking for it to fail over, you as a customer determine how many read checks you need to do before it does a failover.

If you’re just going with the defaults, you could actually failover within 60 seconds. If you’re looking to do a takeover, that is instantaneous. Literally, you could go into the SoftNAS device, go to the secondary and tell it to take over, and that way, you actually have the VIP routed to the secondary and immediately you have access to the storage behind it.

Trace:              Any other questions out there? Excellent. Again, I want to thank everyone for coming. I appreciate your attendance. If you’d like to find out any more information about SoftNAS, you saw that information, or just email us at sales@softnas.com.

We are going to have another webinar next month, where we’re going to talk about how to migrate your application data, how to look at particular mission-critical workloads, and how to run those in the cloud and get the same type of performance you have on-prem while keeping all the data protection capability you have on-prem.

We appreciate your time. We appreciate your attendance and have a nice day. Take care.

Webinar: Migrating Existing Applications to AWS Without Reengineering

The following is a recording and full transcript from the webinar, “Migrating Existing Applications to AWS Without Reengineering”. You can download the full slide deck on Slideshare.

Designing a cloud data system architecture that protects your precious data when operating business-critical applications and workloads in the cloud is of paramount importance to cloud architects today.

Ensuring the high availability for your company’s applications and protecting business data is challenging and somewhat different than in traditional on-premise data centers. For most companies with hundreds to thousands of applications, it’s impractical to build all of these important capabilities into every application’s design architecture.

The cloud storage infrastructure typically only provides a subset of what’s required to properly protect business data and applications. So how do you ensure your business data and applications are architected correctly and protected in the cloud? In this webinar, we covered:

– Best practices for protecting business data in the cloud
– How to design a protected and highly-available cloud system architecture
– Lessons learned from architecting thousands of cloud system architectures

Full Transcript: Migrating Existing Applications to AWS Without Reengineering

Taran Soodan:             Good afternoon everyone, and welcome to a SoftNAS webinar today on No App Left Behind – Migrate Your Existing Applications to AWS Without Re-engineering.

Definitely a wordy title, but today’s webinar is going to focus on migrating the existing applications that you have on your on-premises systems to AWS without having to rewrite a single line of code.

Our presenter today will be myself, Taran Soodan, and our senior solutions architect, Kevin Brown. Kevin, go ahead and say hey.

Kevin Brown:             Hello! How are you doing? I’m looking forward to presenting today.

Taran:             Thank you, Kevin. Before we begin this webinar, we do want to cover a couple of housekeeping items. Number one, the webinar audio is available through your mic and speakers on your desktop or laptop, or you can use your telephone as well. The local numbers are available in the GoTo mini control panel.

Kevin, if you just go back. We will also be dedicating some portion of time at the end to answering any questions that you may have during today’s webinar in the questions pane.

If you have any questions that pop up during the webinar, please feel free to post them in the questions pane here and we’ll go and answer them at the end.

Finally, this webinar is being recorded. For those of you who want to share the slides or the audio with your colleagues, you’ll receive a link later on today. The next slide please, Kevin.

To thank everyone for joining today’s webinar, we are offering a $100 AWS credit, and we’ll provide the link for that credit a little bit later on.

For our agenda today, this is going to be a technical discussion. We’ll be talking about security and data concerns for migrating your applications.

We will also talk a bit about the design and architectural considerations around security and access control, performance, backup and data protection, mobility and elasticity, and high-availability.

A spoiler alert – we are going to talk about our SoftNAS product and how it’s able to help you migrate without having to re-engineer your applications.
Finally, we will close out this webinar with a Q&A session. Kevin, I’ll go ahead and turn it over to you.

Kevin:              All right, not a problem. Thank you very much, Taran. Hello. My name is Kevin Brown, and I thank you for joining us today from wherever you’ve logged in. Our goal for this webinar is to discuss the best practices for designing cloud storage for existing apps on AWS.

We’ve collected a series of questions that we get asked regularly as we work with our customers and potential customers as they go through the decision of migrating their existing environments to the cloud. We hope that you’ll find this informative as you go through your decision-making and your design processes.

First, let’s actually talk about SoftNAS a little bit. We’re just going to give our story: why we have this information and why we want to share it.

SoftNAS was born in the cloud in 2012. Our founder, Rick Braddy, went out to find a solution that would give him access to cloud storage.

When he went out and looked for one, he couldn’t find anything that was out there. He took it upon himself to go through the process of creating the solution that we now know as SoftNAS.

SoftNAS is a cloud NAS. It’s a virtual storage appliance that exists on-premise or within the AWS Cloud and we have over 2,000 VPC deployments. We focus on no app left behind.

We give you the ability to migrate your apps into the cloud so that you don’t have to change your code at all. It’s a Lift and Shift, bringing your applications over so they can address cloud storage very easily and very quickly.

We work with everyone from Fortune 500 companies to SMBs, and we have thousands of AWS subscribers. SoftNAS also owns several patents, including patents for HA in the cloud and data migration.

The first thing that we want to go through and we want to talk about is cloud strategy. Cloud strategy, what hinders it? What questions do we need to ask? What are we thinking about as we go through the process of moving our data to the cloud?

Every year, IDG, the number one tech media company in the world, runs a survey. You might know them for creating CIO.com, Computerworld, InfoWorld, ITworld, Network World. Basically, if it has technology and a “world” next to it, it’s probably owned by IDG.

IDG polls its customers every single year with the goal of measuring cloud computing trends among technology decision-makers: figuring out uses and plans across various cloud services, development models, and investments, and identifying business strategies and plans for the rest of the IT world to focus on.

From that, we actually took the top five concerns. The top five concerns, believe it or not, all have to do with our data. Concern number one: where will my data be stored?

Is my data going to be stored safely and reliably? Is it going to be stored in a data center? What type of storage is it going to be stored on? How could I figure that out?

Especially when you’re thinking about moving to a public cloud, these are some questions that weigh on people’s minds.

Concern number two: the security of cloud computing solutions. Is there a risk of unauthorized access? What about data integrity protection? These are all concerns that are out there about the security of what I’m going to have in that environment.

We also have concerns out there, number three, about vendor lock-in. What happens if my vendor of choice changes costs, changes offerings, or just goes away?

There are concerns out there, number four, surrounding integration with existing infrastructure and policies. How do I make the information available outside the cloud while preserving the uniform set of access privileges that I have worked the last 20 years to develop?

Number five. You have the concerns about the ability to maintain enterprise or industry standards: that’s your ISO, PCI, SOC, SOX. What we plan to do in the upcoming slides is share some of the questions our customers have asked, and wish they had asked earlier in the design stage of moving their data to the cloud, so that this will be beneficial to you.

Question number one, and it’s based off of the same IDG poll: how do I maintain or extend security and access control into the cloud? We often think from the outside in when we design for threats.

This is how our current on-prem environment is built. It was built with protection from external access. It then goes down to physical access to the infrastructure. Then it’s access to the files and directories. All of these protections need to be considered and extended to your cloud environment, so that’s where the setup of AWS’s infrastructure plays hand-in-hand with SoftNAS.

First, it’s setting up a firewall, and Amazon already gives you the ability to utilize firewalls through your access controls and your security groups.

Setting up stateful protection from your VPCs, setting up network access protection, and then you go into cloud native security. Access to infrastructure. Who has access to your infrastructure?

By setting up your IAM roles or your IAM users, you have the ability to control that along with security groups. Then there’s encrypting that data: if everything else fails and users get the ability to touch my data, how do I make sure that even when they can see it, it’s encrypted and is something that they cannot use?
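
As a concrete illustration of locking storage access down to specific actions and resources, here is a sketch of an IAM policy document built in Python. The bucket name `my-app-data` and the statement ID are placeholder assumptions, not anything from this webinar; the JSON structure follows AWS’s published IAM policy grammar.

```python
import json

# Illustrative IAM policy granting an application role access to one
# bucket. "my-app-data" is a placeholder name, not from the webinar.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAppBucketAccess",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-app-data",          # the bucket itself
                "arn:aws:s3:::my-app-data/*",        # every object in it
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A policy like this, attached to an instance role, is how an appliance can reach its own bucket without any secret keys being stored on the box.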

We also talk about the extension of enterprise authentication schemes. How do I make sure that I am tying into Active Directory or tying into LDAP, which already exist in my environment?

The next question that we want to ask is structured around backups and data protection: how do I safeguard against data loss or corruption? We get asked this question probably about 10-15 times a day.

We get asked, “I’m moving everything to the cloud; do I still need a backup?” Yes, you still do need a backup. An effective backup strategy is still needed, even though your data in the cloud already has redundancy.

Everything has been enhanced, but you still need a point in time, or an extended set of points in time, that you can actually go back to and grab that data from.

Do I still need antivirus and malware protection? Yes, antivirus and malware protection is still needed. You also need the ability to have snapshots and rollbacks and that’s one of the things that you want to design for as you decide to move your storage to the cloud.
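
To make the snapshot-and-rollback design point concrete, here is a small sketch of a retention scheduler in Python. The grandfather-father-son scheme and the specific counts (7 dailies, 4 weeklies, 12 monthlies) are illustrative assumptions, not any vendor’s actual policy.

```python
from datetime import date, timedelta

def snapshots_to_keep(snapshot_dates, daily=7, weekly=4, monthly=12):
    """Toy grandfather-father-son retention: keep the last `daily` days,
    one snapshot per week (Sundays) for `weekly` weeks, and one per
    month (the 1st) for `monthly` months."""
    ordered = sorted(snapshot_dates, reverse=True)
    keep = set(ordered[:daily])                               # recent dailies
    weeklies = [d for d in ordered if d.weekday() == 6][:weekly]
    monthlies = [d for d in ordered if d.day == 1][:monthly]
    return keep | set(weeklies) | set(monthlies)

# 90 days of daily snapshots; the scheme keeps only a dozen of them.
today = date(2017, 7, 1)
history = [today - timedelta(days=i) for i in range(90)]
kept = snapshots_to_keep(history)
print(f"retaining {len(kept)} of {len(history)} snapshots")
```

The point of the sketch is the trade-off it encodes: fine-grained restore points for recent mistakes, coarse ones for old data, and a bounded storage bill.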

How do I protect against user error or compromise? We live in a world of compromise. A couple of weeks ago, we saw all of Europe run into the problem of ransomware. Ransomware was basically hitting companies left and right.

From a snapshot and rollback standpoint, you want to be able to have a point in time that you could quickly rollback so that your users will not experience a long period of downtime. You need to design your storage with that in mind.

We also need to talk about replication. Where am I going to store my data? Where am I going to replicate my data to? How can I make sure that the data redundancy I am used to in my on-prem environment carries over, that I have the ability to bring some of that to the cloud and have data resiliency?

I also need to think about how I protect from data corruption. How do I design my environment to ensure that data integrity is going to be there, that the protocols and the underlying infrastructure protect me from the different scenarios that would cause my data to lose its integrity?

Also, you want to think of data efficiency. How can I minimize the amount of data while designing for cost savings? Am I thinking, in this scenario, about how I dedupe and how I compress that data?

All of these things need to be taken into account as you go through that design process because it’s easier to think about it and ask those questions now than to try to shoehorn or fit it in after you’ve moved your data to the cloud.
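
As a toy illustration of the dedupe-and-compression questions above, the sketch below fingerprints fixed-size blocks and compresses only the unique ones. Real storage systems do this inline at the block layer; this is just standard-library Python showing why redundant data can shrink dramatically.

```python
import hashlib
import zlib

def dedupe_and_compress(blocks):
    """Toy data-efficiency pipeline: drop duplicate fixed-size blocks by
    SHA-256 fingerprint, then zlib-compress the unique ones.
    Returns (raw_bytes, stored_bytes)."""
    seen = set()
    unique = []
    for block in blocks:
        digest = hashlib.sha256(block).digest()
        if digest not in seen:          # keep only the first copy of a block
            seen.add(digest)
            unique.append(block)
    stored = sum(len(zlib.compress(b)) for b in unique)
    raw = sum(len(b) for b in blocks)
    return raw, stored

# Highly redundant sample data: 100 blocks, only two distinct patterns.
blocks = [b"A" * 4096, b"B" * 4096] * 50
raw, stored = dedupe_and_compress(blocks)
print(f"{raw} raw bytes -> {stored} stored bytes")
```

Your real data won’t be this compressible, but the exercise of measuring raw versus stored bytes is exactly the cost-savings question the design stage should answer.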

The next thing that we need to think about is performance. We get asked this all the time. How do I plan for performance? How do I design for performance, not just for now? How do I design for performance in the future also?

Yes, we could design a solution that is ideal for our current situation; but if it doesn’t scale with us for the next five years, for the next two years, it’s not beneficial – it becomes obsolete very quickly.

How do I structure it so that I am designing not just for now but for two years from now, for five years from now, for potentially ten years from now? There are different concerns that need to be taken at this point.

We need to talk about dedicated versus shared infrastructure. Dedicated instances provide the ability for you to tune performance. That’s a big thing, because as your use case changes, you need to make sure that you can actually tune performance for it.

Shared infrastructure. Although shared infrastructure can be cost-effective, multi-tenancy means that tuning is programmatic. If I use shared infrastructure and have to tune for my specific use case, or multiple use cases within that environment, I have to wait for a programmatic change, because the infrastructure is not dedicated to me solely; it is used by many other users.

Those are different concerns that you need to think about when it actually comes to the point of, am I going to use dedicated or am I going to use a shared-infrastructure.

We also need to think about bursting and bursting limitations. You always design with the ability to burst beyond peak load. That is number one, the 101 of it: I know that my regular load is going to be X, but I need to make sure that I can burst beyond X.

You also need to understand the pros and cons of burst credits. There are different infrastructures and different solutions out there that introduce burst credits.

If burst credits are used, what do I have the ability to burst to? Once that burst credit is done, what does it do? Does it bring me down to subpar or does it bring me back to par?
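
To make the burst-credit math concrete, here is a back-of-the-envelope model in Python of AWS’s published gp2 behavior at the time: roughly 3 IOPS per provisioned GiB with a 100 IOPS floor, bursting to 3,000 IOPS from a 5.4-million-credit bucket. Treat the numbers as illustrative assumptions and check the current EBS documentation before relying on them.

```python
def gp2_profile(size_gib):
    """Back-of-the-envelope model of published gp2 behavior:
    baseline = 3 IOPS per provisioned GiB (floor of 100), bursting to
    3,000 IOPS from a 5.4-million-credit bucket. Returns (baseline,
    seconds of full-speed burst), or (baseline, None) when the volume
    is big enough that its baseline meets or exceeds the burst rate."""
    baseline = max(100, 3 * size_gib)
    if baseline >= 3000:
        return baseline, None
    # Credits drain at (3000 - baseline) per second while bursting flat out.
    return baseline, 5_400_000 / (3000 - baseline)

base, burst_secs = gp2_profile(100)   # a 100 GiB gp2 volume
print(f"baseline {base} IOPS, full burst lasts ~{burst_secs:.0f}s")
```

Under this model, a 100 GiB volume has a 300 IOPS baseline and can sustain the full 3,000 IOPS burst for about 2,000 seconds before falling back to par, which is exactly the “what happens when the credit is done” question above.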

These are questions that I need to ask, or you need to ask as you go into the process of designing for storage and what the look and feel of your storage is going to be while you’re moving to the public cloud.

You also need to look at and consider predictable performance levels. If I am running any production workload, I need to know that I have a baseline. That baseline should not fluctuate, even as I have the ability to burst beyond it and use more when I need to use more.

I need to know that when I am at the base, I am secure in the fact that my baseline is predictable and my performance levels are going to be just that: something that I don’t have to worry about.

You should already be thinking about using benchmark programs to validate the baseline performance of your system.

There are tons of freeware tools out there, and that’s something that you definitely need to do while you’re going through that development or design process for performance within the environment.
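
If you don’t have a benchmarking tool handy, even a few lines of Python can give a first sanity check on sequential write throughput. This is a crude sketch, not a replacement for a proper disk benchmark; caching, filesystem, and instance effects will all color the number it reports.

```python
import os
import tempfile
import time

def sequential_write_mbps(total_mb=16, block_kb=1024):
    """Crude sequential-write check: write `total_mb` MB of random data
    in `block_kb` KB chunks, fsync, and report MB/s. Only a first
    sanity check, not a substitute for a real benchmark tool."""
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    with tempfile.NamedTemporaryFile() as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # force data to disk before the clock stops
        elapsed = time.perf_counter() - start
    return total_mb / elapsed

mbps = sequential_write_mbps()
print(f"sequential write: {mbps:.1f} MB/s")
```

Running something like this on day one, and again after any tuning, gives you the baseline-versus-burst picture discussed above in your own numbers.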

Storage, throughput, and IOPS. What storage tier is best suited for my use case? In every environment, you’re going to have multiple use cases. Do I have the ability to design my application or the storage that’s going to support my application to be best-suited for my use-case?

From a performance standpoint, you have multiple options for your storage tiers. You could go with gp2 – these are the general-purpose SSD drives. There’s Provisioned IOPS. There’s Throughput Optimized HDD. There are Cold HDDs. All of these are options that you need to make a determination on.

A lot of times, you’ll think about the fact that “I need general-purpose IOPS for this application, but Throughput Optimized works well with that application.” Do I have the ability, and what’s my elasticity, to use both? What’s the thought behind doing it?

AWS gives you the ability to address multiple storage. The question that you need to ask yourself is, based on my use case, what is most important to my workload? Is it IOPS? Is it Throughput?

Based on the answer to that question, you get a larger idea of what storage to choose. This slide breaks it down a little: are you moving anything that needs greater than 65,000 IOPS? Is there anything you need higher throughput for? What type of storage should you actually focus on?
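
The IOPS-versus-throughput decision can be sketched as a simple rule of thumb. The thresholds below are illustrative assumptions for the EBS families mentioned (gp2, io1, st1, sc1), not official AWS sizing guidance.

```python
def suggest_ebs_type(iops_needed, mbps_needed, cold=False):
    """Rough rule of thumb (illustrative thresholds, not official AWS
    sizing guidance) for picking an EBS volume family from headline
    workload numbers."""
    if cold:
        return "sc1"              # infrequently accessed, lowest cost
    if iops_needed > 10000:
        return "io1"              # Provisioned IOPS for IOPS-bound work
    if mbps_needed > 160 and iops_needed < 500:
        return "st1"              # Throughput Optimized, large sequential I/O
    return "gp2"                  # General Purpose SSD covers most cases

print(suggest_ebs_type(iops_needed=20000, mbps_needed=100))   # io1
```

The real value of writing the rule down is that it forces you to answer “is my workload IOPS-bound or throughput-bound?” before you provision anything.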

These are things that we work through with our customers on a regular basis to steer them to the right choice, so that it’s cost-effective and also effective for their applications.

Then S3. A cloud NAS should have the ability to address object storage, because there are different use cases within your environment that would benefit from being able to use object storage.

We were at a meetup a couple of weeks ago that we did with AWS, and there was an ask from the crowd. They said, “If I have a need to store multiple or hundreds of petabytes of data, what should I use? I need to be able to access those files.”

The answer back was, you could definitely use S3, but you’ll need to be able to create the API to be able to use S3 correctly. With a cloud NAS, you should have the ability to use object storage without having to utilize the API.

How do you actually get to the point that you’re using APIs already built into the software to be able to use S3 storage or object storage the way that you would use any other file system?

Mobility and elasticity. What are my storage requirements and limitations? You would think that this would be the first question that gets asked as we go through this design process.

Companies come to us and we discuss it, but a lot of times, it’s not. People are not aware of their capacity limitations, so they make a decision to use a certain platform or a certain type of storage, and they are unaware of the maximums. What’s my total growth?

What is the largest size that this particular medium will support? What’s the largest file size? What’s the largest folder size? What’s the largest block size? These are all questions that need to be considered as you go through the process of designing your storage.

You also need to think about platform support. What other platforms can I quickly migrate to if need be? What protocols can I utilize?

From a tiered storage support and migration standpoint, if I start off with Provisioned IOPS disks, am I stuck with Provisioned IOPS disks? If I realize that my utilization is better suited to S3, is there any way for me to migrate from Provisioned IOPS storage to S3 storage?

We need to think about that as we’re going with designing storage in the backend. How quickly can we make that data mobile? How quickly can we make it something that could be sitting on a different tier of storage, in this case, from object to block or vice versa, from block back to object?

And automation. Thinking about automation, what can I quickly do? What can I script? Is this something that I could spin up using CloudFormation? Are there any tools? Is there an API associated with it, a CLI? What can I do to make a lot of the work that I regularly do quick, easy, and scriptable?
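
As a small example of the scripting idea, here is a minimal CloudFormation template built as a Python dictionary and serialized to JSON. It declares a single gp2 EBS volume; the availability zone, size, and tag value are placeholder assumptions, not values from the webinar.

```python
import json

# Minimal, illustrative CloudFormation template declaring one gp2 EBS
# volume. The availability zone, size, and tag value are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DataVolume": {
            "Type": "AWS::EC2::Volume",
            "Properties": {
                "AvailabilityZone": "us-east-1a",
                "Size": 100,              # GiB
                "VolumeType": "gp2",
                "Tags": [{"Key": "Name", "Value": "app-data"}],
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Generating templates programmatically like this is one way to make the “spin it up again” workflow a one-command operation instead of a console checklist.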

We get asked this question also a lot. What strategy should I choose to get the data or application to the cloud? There are two strategies that are out there right now.

There is the application modernization strategy which comes with its pros and cons. Then there is also the Lift and Shift strategy which comes with its own pros and cons.

With application modernization, the pro is that you build a new application. You can modify, delete, and update existing applications to take advantage of cloud-native services. It’s definitely a huge pro. It’s born in the cloud. It’s changed in the cloud. You have access to it.

However, the con associated with that is a slower time to production. More than likely, there will be significant downtime as you try to migrate that data over. The timeline we’re looking at is months to years.

Then there are also the costs associated with that. It’s ensuring that it’s built correctly, tested correctly, and then implemented correctly.

Then there is the Lift and Shift. The pro for the Lift and Shift is a faster time to cloud production. Instead of months to years, we’re looking at the ability to do this in days to weeks or, depending on how aggressive you can be, hours to weeks. It totally depends.

Then there are the cost savings associated with it. You’re not recreating that app. You’re not going back and rewriting code that is only going to be beneficial for your move. Your developers keep writing features and new things into your code that actually benefit you and continuously support your customers.

The con associated with the Lift and Shift approach is that your application is not API-optimized, but that’s something you can make a decision on: whether or not that’s beneficial or needed for your app. Does your app need to speak the API, or does it just need to address the storage?

The next thing that we want to discuss, we want to discuss high availability. This is key in any design process. It’s how do I make sure or plan for failure or disruption of services? Anything happens; anything goes wrong, how do I cover myself to make sure that my users don’t feel the pain? My failover needs to be non-disruptive.

How can I make sure that if a read fails, if an instance fails, my users come back and don’t even notice that a problem happened? I need to design for that.

How do I ensure that during this process that I am maintaining my existing connections? It’s not the fact that failover happens and then I need to go back and recreate all my tiers, repoint my pointers to different APIs to different locations.

How do I make sure that I have the ability to maintain my existing connections with a consistent IP? How do I make sure that what I’ve designed fulfills my RPO needs?

Another thing that comes up, and this is one of the questions that generally comes in to our team, is: is this HA per app or HA for infrastructure? When you go through the process of app modernization, you’re actually doing HA per app.

When you are looking at a more holistic solution, you need to think in advance. On your on-premise environment, you’re doing HA for infrastructure. How do you migrate that HA for infrastructure over to your cloud environment? And that’s where the cloud NAS comes in.

The cloud NAS solves many of the security and design concerns. We have a couple of quotes up there. They are listed on our website for some of our users on the AWS platform.

We have Keith Son, who we did a webinar with a couple of weeks ago. It might have been last week; I don’t remember off the top of my head. Keith loves the software, constantly coming to us asking for different ways he can tweak or use our software more.

Keith says, “Selecting SoftNAS has enabled us to quickly provision and manage storage, which has contributed to a hassle-free experience.” That’s what you want to hear when you come to design. It’s hassle-free. I don’t have to worry about it.

We also have a quote there from John Crites from Modus. John says that he’s found that SoftNAS is cost-effective and a secure solution for storing massive amounts of data in the AWS environment.

Cloud NAS addresses the security and access concerns. You need to be able to tie into Active Directory. You need to be able to tie into LDAP.

You need to be able to secure your environment using IAM roles. Why? Because I don’t want my security, my secret keys, to be visible to anybody. I want to be able to utilize the security that’s already initiated by AWS and have it just flow through my app.

VPC security groups. We ensure that, with your VPC and with the security groups that you set up, only the users that you want to have access to your infrastructure have access to your infrastructure.

From a data protection standpoint, block replication. How do we make sure that my data is somewhere else?

Data redundancy. We’ve been preaching that for the last 20 years. The only way I can make sure that my data is fully protected is if it’s redundant. In the cloud, although we’re given extended redundancy, how do I make sure that my data is faultlessly redundant?

We’re talking about block replication. For data protection, we’re talking about encryption. How can you actually encrypt that data to make sure that even if someone did get access to it, they couldn’t do anything with it? It would be gobbledygook.

You need to be able to have the ability to do instant snapshots. How can I go in, based on my scenario, go in and create an instant snapshot of my environment? So worst case scenario if anything happens, I have a point in time that I could quickly come back to.

Writable snap clones. How do I stand up my data quickly? Worst-case scenario, if anything happens and I need to revert to a point in time before I was compromised, how can I do that quickly?

High availability on ZFS and Linux. How do I protect the infrastructure underneath? Then performance. A cloud NAS gives you dedicated infrastructure, which means it gives you the ability to be tunable.

If my workload or use-case increases, I have the ability to tune my appliance to be able to grow as my use-case grows or as my use-case needs growth.

Performance and adaptability. From disk to SSD to networking, how can I make my environment or my storage adaptable to the performance that I need? No burst limitations, dedicated throughput, compression, and deduplication are all things that we need to be considering as we go through this design process. The cloud NAS gives you the ability to do it.

Then flexibility. What can I grow to? Can I scale from gigabyte to petabyte with the way that I have designed? Can I grow from petabytes to multiple petabytes? How do I make sure that I’ve designed with the thought of flexibility? The cloud NAS gives you the ability to do that.

We are also talking about multiplatform, multi-cloud, block and object storage. Have I designed my environment so that I can switch to new storage options? Cloud NAS gives you the ability to do that.

We also need to get to the point of the protocols. What protocols are supported: CIFS, NFS, iSCSI? Can I thin-provision these systems? Yes. The cloud NAS gives you the ability to do that.

With that, I just wanted to give a very quick overview of SoftNAS and what we do from a SoftNAS perspective. SoftNAS, as we said, is a cloud NAS. It’s the most downloaded and most utilized cloud NAS in the cloud.

We give you the ability to easily migrate those applications to AWS. You don’t need to change your applications at all. As long as your applications connect via CIFS, NFS, iSCSI, or AFP, we are agnostic. Your applications connect exactly the same way they do on-premise. We give you the ability to address cloud storage, whether in the form of S3, Provisioned IOPS, or gp2.

Anything that Amazon has available as storage, SoftNAS gives you the ability to aggregate into a storage pool and then share out via volumes that are CIFS, NFS, or iSCSI, giving you the ability to move your applications seamlessly.

These are some of our technology partners. We work hand-in-hand with Talon, SanDisk, NetApp, and SwiftStack. All of these companies love our software, and we work hand-in-hand as we deliver solutions together.

We talked about some of the functions that are native to SoftNAS. There’s Cloud-Native IAM Role integration, and the ability to encrypt your data at rest or in transit.

Then there’s also firewall security, which we give you the ability to utilize too. From a data protection standpoint, it’s a copy-on-write file system, so it gives you the ability to ensure the data integrity of your storage.

We’re talking about instant storage snapshots, whether manual or scheduled, and rapid snapshot rollback. We support RAID across the board with all EBS, and also the ability to do it with S3-backed storage.

From a built-in snapshot standpoint for your end users, this is one of the things that our customers love: the Windows Previous Versions support.

IT teams love this; because if they have a user that made a mistake, instead of having to go back in and recover a whole volume to get the data back, they just tell the user to go ahead: go into Windows Previous Versions, right-click on the file, and restore the previous version. It gives your users something that they are used to on-premise, immediately, within the cloud.

High performance: scaling up to gigabytes per second of throughput. For performance, we talked about no burst limits. We protect against split-brain on HA failover, giving you the ability to migrate those applications without writing or rewriting a single piece of code.

We talked about automation and the ability to utilize our REST APIs: a very robust REST API, and cloud integration using ARM or CloudFormation templates, available in every AWS and Azure region.

Brands you know that trust SoftNAS. With the list of logos that we have on the screen, everywhere from your Fortune 500 all the way down to SMBs, they are using us in multiple different use cases within the environment – all the way from production, to DevOps, to UAT, enabling their end-users or development teams or production environments to be able to scale quickly and utilize the redundancy that we have, from a SoftNAS standpoint.

You could try SoftNAS for free for 30 days. Very quickly, very easily, our software stands up within 30 minutes and that’s all the way from deploying the instance to creating a tier.

Creating your disks, aggregating those disks into a storage pool, creating your volumes, and setting up your tiers – 30 minutes. Very quick, very easy, and you can actually try it for 30 days. Figure out how it fits in your environment. Test it, test the HA, test the ability to use a virtual IP between two systems – very quick, very easy, very simple.

Taran:              Okay, Kevin, let me cover a few quick items before the Q&A. To thank everyone for joining today’s webinar, we are giving out $100 AWS credits for free. All we need is for you to click on the link that I just posted in the chat window.

Just go in, click on that link, and it will take you to a page where all you have to do is provide us your email address; then within 24 hours, you will receive a free $100 AWS credit.

For those of you who are interested in learning more about SoftNAS on AWS, we welcome you to go visit softnas.com/aws or the Bitly link that’s right here on the screen.

Just go ahead and visit that link to learn more about how SoftNAS works on AWS. If you have any questions or you want to get in contact with us, please feel free to visit our “Contact us” page. Basically, you can submit a request to learn more about how SoftNAS works.

Kevin and our other SAs are more than happy to work through your use case to learn about what you may be using cloud storage for and how we can help you out with that.

Then also, we do have a live chat function on our website as well so if you want to speak to one of our people immediately, you can just go ahead and use that live chat feature and our team will answer your questions pretty quickly.

We’ll go ahead and start the Q&A now. We have a couple of questions here and let’s go ahead and knock them out. It should be about five minutes. The first question that we have here is can we download the slide deck?

You absolutely can download the slide deck. We’re going to upload it to SlideShare shortly after this webinar is over. Later on in the afternoon, we are going to send you an email with a SlideShare link and a YouTube link for the recording of the webinar.

The second question that we have here is can SoftNAS be used in a hybrid deployment? Kevin?

Kevin:              SoftNAS can be used in a hybrid deployment. The same code that exists within the AWS environment can also be deployed on VMware. Each SoftNAS device gives you the ability to address cloud storage, so you could still utilize EBS or S3 storage on the backend.

Taran:              Fantastic. Thank you, Kevin. The next question that we have here is if I do choose to migrate my applications to AWS, can I do it natively without SoftNAS or do I have to migrate with SoftNAS?

Kevin:              That is an interesting question. It depends. It depends on how much storage you actually have behind that application. If you’re looking at something that is a one server solution and you’re not concerned directly with HA for that environment, then yes. You could definitely take that one server, bring it up to the cloud and you’d be able to do that.

However, if you’re looking to recreate an enterprise-like solution within your environment, then it would make sense to consider some type of NAS-like solution to be able to have that redundancy in your data taken care of.

Taran:              Great. Thanks, Kevin. The next question that we have here is, any integration with OpenStack for hybrid cloud, VIO VMware OpenStack?

Kevin:              You guys, I don’t know who’s asking that question, but we do have the ability to integrate with OpenStack. We would love to talk to you about your use case. If you could actually reach out to our sales team, we’ll go ahead and schedule a meeting with an SA so we could talk through that.

Taran:              Awesome. Thank you, Kevin. That was from Paulo, by the way.

Kevin:              OK, thanks a lot.

Taran:              That’s all the questions that we had for today’s webinar. Before we end this webinar, we do want to let everyone know that there is a survey available after this webinar is over. If you could please fill out that survey just so we can get some feedback on today’s webinar and let us know what topics we should definitely work on for our future webinars.

With that, our webinar today is over. Kevin, thanks again for presenting. We want to thank all of you for attending as well. We look forward to seeing you at our future webinars. Everyone have a great rest of the day.

Webinar: 12 Architectural Requirements for Protecting Business Data in the Cloud

The following is a recording and full transcript from the webinar, “12 Architectural Requirements for Protecting Business Data in the Cloud”. You can download the full slide deck on Slideshare.

Designing a cloud data system architecture that protects your precious data when operating business-critical applications and workloads in the cloud is of paramount importance to cloud architects today. Ensuring the high availability for your company’s applications and protecting business data is challenging and somewhat different than in traditional on-premise data centers.

For most companies with hundreds to thousands of applications, it’s impractical to build all of these important capabilities into every application’s design architecture. The cloud storage infrastructure typically only provides a subset of what’s required to properly protect business data and applications. So how do you ensure your business data and applications are architected correctly and protected in the cloud?

In this webinar, we covered:

– Best practices for protecting business data in the cloud
– How to design a protected and highly available cloud system architecture
– Lessons learned from architecting thousands of cloud system architectures

Full Transcript: 12 Architectural Requirements for Protecting Business Data in the Cloud

Taran Soodan:             Hello everyone, and welcome to a SoftNAS webinar today on the 12 architectural requirements for protecting business data in the cloud. My name is Taran Soodan, and along with me, I have our presenter today, Eric Olson, the VP of engineering for SoftNAS.

Eric, do you want to go ahead and take a second to say hi to everyone?

Eric Olson:             Good afternoon everyone. Thank you for joining.

Taran:             Awesome. Thanks for that, Eric. Before we begin today’s webinar, we do want to cover a couple of housekeeping items. Just as a reminder, the webinar audio today can be streamed through either your computer or your telephone.

If you want to go ahead and dial-in to today’s webinar, the information is available for you on the top right of the GoToWebinar control panel. Also, we will have a Q&A session at the end of today’s webinar.

If you have any questions that come up regarding some of the content that we are talking about or questions about how SoftNAS works, just go ahead and post your questions in the questions pane and we will go ahead and answer them at the end of today’s webinar.

Today’s session is being recorded. For those of you who want to be able to watch the webinar recording on-demand or have access to the slides, we’ll share links with you after today’s webinar in a couple of hours.

Moving on to the agenda for today’s webinar, basically, we’re going to be talking about those 12 architectural requirements for protecting your data. We will cover some best practices, some lessons learned from what we’ve done for our customers in the cloud.

Finally, we will tell you a little bit about our product SoftNAS Cloud NAS and how it works. Then we will close it off with a Q&A to answer any questions that might pop up.

With that, I’ll go ahead and hand it over to Eric to go ahead and walk you through those architectural requirements. It’s all you, Eric.

Eric:              Thanks, Taran. Good afternoon everyone. Good morning to those of you on the West Coast. Thanks for joining us. I am here to talk to you about the 12 architectural requirements for protecting your data in the cloud.

I’d like to start with the first one, and these are in no particular order. However, I would like to stress this is probably one of the most important of these requirements and that would be high availability.

It’s important to understand that not all HA solutions are created equal. The key aspect to high availability is to make sure that data is always highly available. That would require some type of replication from point A to point B.

You also need to ensure – depending upon what public cloud infrastructure you may be running this on – that your high availability properly supports the redundancy that’s available on the platform.

Take something like Amazon Web Services, which offers different availability zones. You would want to find an HA solution that is able to run in different zones and provide availability across those availability zones, which is to the point of making sure that you have all your data stored in more than one zone.

You would want to ensure that you have greater uptime than a single compute instance can provide. The high availability that is available today from SoftNAS, for example, allows you on a public cloud infrastructure to deploy instances in separate availability zones.

It allows you to choose different storage types. It allows you to replicate that data between those two. And in case of a failover or an incident that would require a failover, your data should be available to you within 30 to 60 seconds.

You also want to ensure that whatever HA solution you’re looking for avoids what we like to call the split-brain scenario, which means that data could end up on a node that is not on the other node or newer data could end up on the target node after an HA takeover.

You want to ensure that whatever type of solution you find that provides high availability meets the requirement of ensuring there is no split-brain between nodes.

The next piece that we want to cover is around data protection. I want to stress that when we talk about data protection, there’s multiple different ways to perceive that requirement.

We are looking at data protection from the storage architecture standpoint. You want to find a solution that supports snapshots and rollbacks. We look at snapshots as a form of insurance – you buy them hoping you never need to use them.

I want to point out that snapshots do not take the place of a backup either. You want to find a solution that can replicate your data – whether that’s replicating your data from on-premises to a cloud environment, replicating between different regions within a public cloud environment, or even replicating your data from one public cloud platform to another to ensure that you have a copy of the data in another environment.

You want to ensure that you can provide RAID mirroring. You want to ensure that you have efficiency with your data – being able to provide different features like compression, deduplication, etc.

A copy-on-write file system is a key aspect to avoid data integrity risks. Being able to support things like Windows previous versions for rollbacks. These are all key aspects to a solution that should provide you the proper data protection.

Data security and access control. This is always top of mind with everyone these days, so you want a solution that supports encryption. You want to ensure that the data is encrypted not only at rest but during all aspects of data transmission – data-at-rest, data-in-flight.

The ability to provide the proper authentication and authorization, whether that’s integration with LDAP for NFS permissions, for example, or leveraging Active Directory for Windows environments.

You want to ensure that whatever solution you find can support the native cloud IAM roles or service principal roles available on the different public cloud platforms.

You want to ensure that you’re using firewalls and limiting access to who can gain access to certain things to limit the amount of exposure you have.

Performance is always at the top of everyone’s mind. If you take a look at a solution, you want to ensure that it uses dedicated storage infrastructure so that all applications can have the performance throughput that’s required.

No burst limitations. You will find that some cloud platform vendor solutions use a throttling mechanism to give you only certain levels of performance. You need a solution that can give you guaranteed, predictable performance.

You want a solution where, when you start it up on day one with 100,000 files, the performance is the same on day 300 when there are 5 million files. It’s got to be predictable, and it can’t change.

You have to ensure that you look at what your actual storage throughput and IOPS requirements are before you deploy a solution. This is a key aspect.

A lot of people come in and look to deploy a solution without really understanding what their performance requirements are. Sometimes we see people undersize the solution, but a lot of times we see people oversize the solution as well. It’s something to really take into consideration and understand.

You want a solution that’s very flexible from a usability standpoint – something that can run on multiple cloud platforms where you can find a good balance of cost for performance; broad support for protocols like CIFS, NFS, iSCSI, AFP; some type of automation with cloud integration; the ability to support automation via scripts, APIs, command lines – all of these types of things.

Something that’s software-defined, and something that allows you to create clones of your data so that you can test with your real production data in a development environment; this is a key aspect that we’ve found.

If you have the functionality, it really allows you to test out what your real performance is going to look like prior to going into production.

You need a solution that’s very adaptable – the ability to support all of the available instance and VM types on different platforms, whether you want to use high-memory instances or your requirements mandate some type of ephemeral storage for your application.

Whatever that instance may be or VM may be, you want a solution that can work with it. Something that will support all the storage types that are available; whether that would be block storage, so EBS on Amazon, or Premium storage on Microsoft Azure, to also being able to support object storage.

You want to be able to leverage that lower-cost object storage like Azure Blob or Amazon S3 for specific data that maybe doesn’t have the same throughput and IOPS requirements as something else, so you can take advantage of that lower-cost storage.

This goes back to my point of understanding what your throughput and IOPS requirements are so that you can select the proper storage to run your infrastructure.

Something that can support an on-premises, a cloud, or a hybrid cloud environment – multiple-cloud support and being able to adapt to the requirements as they change.

You need to find a solution that can expand as you grow. If you have a larger storage appetite and your need for storage grows, you want to be able to extend it on the fly.

This is going to be one of the huge benefits that you’d find in a software-defined solution run on cloud infrastructure. There is no more rip-and-replace to extend storage. You can just attach more disks and extend your usable data sets.

This leads to dynamic storage capacity and being able to support maximum files and directories. We’ve seen certain solutions where, once they get to a million files, performance starts to degrade.

You need something that can handle billions of files and petabytes worth of data so that you know that what you deploy today will meet your data needs five years from now.

You need a solution that has that support safety net – availability 24/7, 365, with different levels of support so that you can access it through multiple channels.

You would probably want to find a solution that offers design support and a free trial or proof-of-concept version. Find out what guarantees, warranties, and SLAs different solutions can provide to you.

The ability to provide monitoring integration – integration with things like CloudWatch, integration with things like Azure monitoring and reporting, uptime requirements, and all of your audit log and system log integration.

Make sure that whatever solution you find can handle all of your troubleshooting needs. The SLA, which I covered – how will a vendor stand behind their offer? What’s their guarantee?

Get a documented guarantee from each vendor that spells out exactly what’s covered, what’s not covered, and if there is a failure, how it is handled from a vendor perspective.

You need to make sure that whatever solution you choose to deploy is enterprise-ready. You need something that can scale to billions of files because we’re long past millions of files.

We are dealing with people and customers that have billions of files with petabytes of data.

It can be highly resilient. It can provide a broad range of applications and workloads. It can help you meet your DR requirements in the cloud and also can give you some reporting and analytics on the data that you have deployed and in-place.

Is the solution cloud-native? Was the solution built from the ground up to reside in the public cloud, or is it a solution that was converted to run in a public cloud? How easy is it to move legacy applications onto the solution in the cloud?

You should outline your cloud platform requirements. Really, honestly, take the time and outline what your costs and your company’s requirements for the public cloud are. Are you doing this to save money or to get better performance?

Maybe you’re closing a data center. Maybe your existing hardware NAS is up for a maintenance renewal or it’s requiring a hardware upgrade because it is no longer supported. Whatever those reasons, they are very important to understand.

Look for a solution that has positive product reviews. If you look in the Amazon Web Services Marketplace for any type of solutions out there, the one thing about Amazon is it’s really great for reviews.

Whether that’s deploying a software solution out of the marketplace or buying a novel on Amazon.com, check out all the reviews. Look at third-party testing and benchmark results.

Run your own tests and benchmarks. This is what I would encourage you. Look at different industry analysts, customer and partner testimonials and find out if you have a trustworthy vendor.

I’d like to talk to you for just a few seconds now about SoftNAS and our product SoftNAS Cloud NAS. What SoftNAS offers is a fully featured Enterprise cloud NAS for primary data storage.

It allows you to take your existing applications that maybe reside on-premises and need legacy protocol support like NFS, CIFS, or iSCSI, and move them over to a public cloud environment.

The solution allows you to leverage all of the underlying storage of the public cloud infrastructure. Whether that would be object storage or block storage, you can use both. You can mix and match.

We offer a solution that can run not only on VMware on-premise but it can also run on public cloud environments such as AWS and Microsoft Azure. We offer a full high availability cross-zone solution on Amazon and a full high availability cross-network in Microsoft Azure.

We support all types of storage on all platforms. Whether that’s hot blob or cool blob on Azure, or magnetic EBS or S3 on Amazon, we can allow you to create pools on top of that and essentially give you file-server-like access to these particular storage mediums.

At this point, I’d like to go ahead and take a pause. If you have any questions, please feel free to put them into that chat. I’ll go ahead and turn it over to Taran here to wrap things up and to begin our Q&A session.

Hopefully, you found this part of the webinar to be useful and good information. Taran, over to you.

Taran:             Awesome. Thank you for that, Eric. What we would like to let everyone know is that at SoftNAS, we partner with a lot of cloud technology companies – Amazon Web Services, Microsoft Azure, VMware – and cloud consulting companies like 2ndWatch, Relis, and other well-known cloud providers.

Eric, could you move on to the next slide, please. Just to give you guys a sense of who is using SoftNAS, we have companies of all sizes using our product, whether they are small businesses or large enterprises like Nike, Netflix, Samsung, Deloitte, Symantec, and Raytheon. A lot of well-known and large companies are using SoftNAS.

If you have any concerns about whether or not SoftNAS is able to meet your needs, just know that some of the largest companies in the world that had these legacy on-premises systems have shifted to the cloud and they are using SoftNAS for that.

The next slide, please, Eric. Before we move on to the Q&A, we just want to let everyone know that you are able to try SoftNAS free for 30 days on either AWS, Azure, or even on-premises through VMware vSphere.

We’ve got some links. You’re able to click here on the right. If you want to try SoftNAS on AWS, just go to softnas.com/trynow and you will be able to go and download SoftNAS and choose which platform you want to use SoftNAS for.

Moving on to our Q&A, it looks like we’ve got a bunch of questions here so let’s just go ahead and get these knocked out. The first question that we have here is, “Is SoftNAS a physical device? How does it work on AWS?”

Eric:              SoftNAS is a software-based solution. On VMware, for example, the solution is packaged up as an OVA and deployed as a VM. On Amazon, it is available via the AWS Marketplace as an AMI, or Amazon Machine Image. On Azure, the solution is available via the Azure Marketplace as well.

Taran:             Thanks for that Eric. The next question here, “What are some of the use-cases for SoftNAS in the cloud?”

Eric:              That’s a very broad question so I’ll try to cover a couple of them fairly simply just to give you an idea. One of them would be let’s say that you had a bunch of [inaudible 19:21] that require access to an NFS share, so being able to provide NFS share access.

The same thing applies if you require CIFS access. We see a lot of our customers deploy SoftNAS, for example, as a target for backups in the cloud. That’s another great use-case.

We also see a lot of customers that deploy us to take existing applications and SaaSify them, so to speak – turn them into a Software-as-a-Service application. We see a lot of customers that deploy us, and those particular use-cases are fairly common.

Taran:             Thanks, Eric. The next question that we have here is, “What file protocols are supported?”

Eric:             We support CIFS. We support NFS version 3 and 4. We support iSCSI. We also support AFP.

Taran:             The next question that we have here is, “Will I be able to get access to the slides?” The answer to that is yes you will. After we’re done here within the next hour or two, we’re going to send you an email with a link to the slides on SlideShare along with a recording of this webinar on YouTube.

The final question that we have here is, “I have about 15 TB of storage I need to move from my legacy NAS. Do you recommend moving all the data at once or should it be done in phases?”

Eric:              Without knowing more about what your particular use-case and problem that you are trying to solve is, I would have to defer that. For the person that asked that question, if you’d follow up and contact us at sales@softnas.com, I’d be happy to further discuss your particular application and use-case.

Taran:             Thanks for that, Eric. Everyone, that’s all the questions that we have for today’s webinar. Just as a reminder, you are able to go to softnas.com and try our product for free for 30 days.

After this webinar is over, there is a quick survey. We’d like to ask that everyone fill out that survey so that we know how to better improve future webinars that we do for our customers and everyone else who is interested in learning more about SoftNAS.

With that, we want to go ahead and thank everyone for joining today’s webinar and we look forward to seeing you again in the future.

Best Practices Learned from 1,000 AWS VPC Configurations

AWS VPC Best Practices with SoftNAS 

Buurst SoftNAS has been available on AWS for the past eight years providing cloud storage management for thousands of petabytes of data. Our experience in design, configuration, and operation on AWS provides customers with immediate benefits.  

Best Practice Topics  

  • Organize Your AWS Environment
  • Create AWS VPC Subnets
  • SoftNAS Cloud NAS and AWS VPCs
  • Common AWS VPC Mistakes
  • SoftNAS Cloud NAS Overview

Organize Your AWS Environment

Organizing your AWS environment is a critical step in maximizing your SoftNAS capability. Buurst recommends the use of tags. The addition of new instances, routing tables, and subnets can create confusion. The simple use of tags will assist in identifying issues during troubleshooting.  

When planning your CIDR (Classless Inter-Domain Routing) block, Buurst recommends making it larger than expected. This is because AWS reserves five IP addresses in every VPC subnet created. Thus, remember that all newly created subnets carry a five-IP overhead.

Additionally, avoid using overlapping CIDR blocks, as any future peering of the VPC with another VPC will not function correctly when the CIDR blocks overlap. Finally, there is no cost associated with a larger CIDR block, so simplify your scaling plans by choosing a larger block size upfront.
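As a rough sketch of this planning step, Python’s standard `ipaddress` module can verify both rules upfront. The CIDR blocks and subnet below are hypothetical examples for illustration, not recommendations for any particular environment:

```python
import ipaddress

# Hypothetical CIDR blocks for illustration only.
vpc = ipaddress.ip_network("10.0.0.0/16")        # generously sized VPC block
peer_vpc = ipaddress.ip_network("172.16.0.0/16") # a VPC you may peer with later
subnet = ipaddress.ip_network("10.0.1.0/24")     # one subnet carved from the VPC

# VPC peering fails when CIDR blocks overlap, so verify before building.
assert not vpc.overlaps(peer_vpc), "overlapping CIDRs cannot be peered"
assert subnet.subnet_of(vpc), "subnet must be carved from the VPC block"

# AWS reserves 5 addresses in every subnet, so usable capacity
# is the total address count minus that five-IP overhead.
usable = subnet.num_addresses - 5
print(f"{subnet} provides {usable} usable addresses")  # 251 for a /24
```

Running checks like these against your planned address layout before creating anything in the console makes the "choose a larger block upfront" advice concrete: a /16 VPC leaves ample room for many /24 subnets.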

Create AWS VPC Subnets

A best practice for AWS subnets is to align VPC subnets to your application tiers – for example, a DMZ/proxy layer, an ELB/load-balancer layer, an application layer, or a database layer. If a subnet is not associated with a specific route table, then by default it falls back to the main route table. Missing subnet associations are a common issue where packets do not flow correctly because subnets were never associated with their route tables.

Buurst recommends putting everything in a private subnet by default and using either ELB filtering or monitoring services in your public subnet. A NAT instance is preferred for access to the public network, ideally as part of a dual-NAT configuration for redundancy. CloudFormation templates are available to set up highly available NAT instances, which require proper sizing based on the amount of traffic going through the network.

Set up VPC peering to access other VPCs within the environment or from a customer or partner environment. Buurst recommends leveraging VPC endpoints for access to services like S3 instead of going out over a NAT instance or an internet gateway to reach services that don’t live within the specific VPC. This setup is more efficient, with lower latency, by leveraging an endpoint rather than an external link.
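The missing-association pitfall described above can be caught with a simple inventory check before traffic problems appear. This is an illustrative sketch with made-up subnet and route-table names, not a SoftNAS or AWS tool:

```python
# Hypothetical inventory: subnets and their explicit route-table associations.
subnets = ["web-a", "web-b", "app-a", "db-a"]
associations = {            # subnet -> explicitly associated route table
    "web-a": "rtb-public",
    "app-a": "rtb-private",
    "db-a": "rtb-private",
}

def unassociated(subnets, associations):
    """Return subnets with no explicit route-table association.

    These subnets silently fall back to the main route table -- a
    common cause of packets not flowing where expected.
    """
    return [s for s in subnets if s not in associations]

print(unassociated(subnets, associations))  # ['web-b']
```

An audit like this (fed from your actual VPC inventory, e.g. via `aws ec2 describe-route-tables`) turns an easy-to-miss misconfiguration into an explicit checklist item.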

Control Your Access

Control access within the AWS VPC by not cutting corners with a default route to the internet gateway. This access setup is a common problem many customers spend time on with our Support organization. Again, we encourage redundant NAT instances, leveraging CloudFormation templates available from Amazon or creating highly available redundant NAT instances.

The default NAT instance size is an m1.small, which may or may not suit your needs depending on traffic volume in your environment. Buurst highly recommends using IAM (Identity and Access Management) for access control, especially configuring IAM roles for instances. Remember that IAM roles cannot be assigned to running instances and are set up at instance creation time. Using those IAM roles will allow you to avoid having to populate AWS keys within the specific products to gain access to those API services.

How Does SoftNAS Fit Into AWS VPCs?

Buurst SoftNAS offers a highly available architecture from a storage perspective, leveraging our SNAP HA capability, allowing us to provide high availability across multiple availability zones. SNAP HA offers 99.999% HA with two SoftNAS controllers replicating the data into block storage in both availability zones. Buurst customers who run in this environment qualify for our Buurst No Downtime Guarantee.

Additionally, AWS provides no SLA (Service Level Agreement) unless your solution runs in a multi-zone deployment.

SoftNAS uses a private virtual IP address in which both SoftNAS instances live within a private subnet and are not accessible externally, unless configured with an external NAT, or AWS Direct Connect.

SoftNAS SNAP HA™ provides NFS, CIFS and iSCSI services via redundant storage controllers. One controller is active, while another is a standby controller. Block replication transmits only the changed data blocks from the source (primary) controller node to the target (secondary) controller. Data is maintained in a consistent state on both controllers using the ZFS copy-on-write filesystem, which ensures data integrity is maintained. In effect, this provides a near real-time backup of all production data (kept current within 1 to 2 minutes). 

A key component of SNAP HA™ is the HA Monitor. The HA Monitor runs on both nodes that are participating in SNAP HA™. On the secondary node, HA Monitor checks network connectivity, as well as the primary controller’s health and its ability to continue serving storage. Faults in network connectivity or storage services are detected within 10 seconds or less, and an automatic failover occurs, enabling the secondary controller to pick up and continue serving NAS storage requests, preventing any downtime.  

Once the failover process is triggered, either due to the HA Monitor (automatic failover) or as a result of a manual takeover action initiated by the admin user, NAS client requests for NFS, CIFS and iSCSI storage are quickly re-routed over the network to the secondary controller, which takes over as the new primary storage controller. Takeover on AWS can take up to 30 seconds, due to the time required for network routing configuration changes to take place. 

Common AWS VPC Mistakes

These are the most common support issues in AWS VPC configuration: 

  • Deployments require two NICs (ENIs), with both NICs in the same subnet. Double-check this during configuration.
  • SoftNAS health checks perform a ping between the two instances, requiring the security group to be open at all times.
  • A virtual IP address must not be within the CIDR block of the AWS VPC; select a virtual IP address outside that range.
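That last rule is easy to validate programmatically. A minimal sketch, assuming a hypothetical VPC CIDR of 10.0.0.0/16:

```python
import ipaddress

# Hypothetical VPC CIDR for illustration only.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")

def valid_vip(vip: str) -> bool:
    """A virtual IP is valid for SNAP HA only if it falls OUTSIDE the VPC CIDR."""
    return ipaddress.ip_address(vip) not in vpc_cidr

print(valid_vip("10.0.5.10"))    # False: inside the VPC CIDR, would conflict
print(valid_vip("172.30.0.10"))  # True: outside the CIDR, usable as a VIP
```

Because the VIP lives outside the VPC’s address range, failover can redirect it by rewriting a host route in the VPC routing table rather than reassigning an in-subnet address.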

SoftNAS Overview

Buurst SoftNAS is an enterprise virtual software NAS available for AWS, Azure, and VMware with industry-leading performance and availability at an affordable cost. 

SoftNAS is purpose-built to support SaaS applications and other performance-intensive solutions requiring more than standard cloud storage offerings.

  • Performance – Tune performance for exceptional data usage 
  • High Availability – From 3-9’s to 5-9’s HA with our No Downtime Guarantee 
  • Data Migration – Built-in “Lift and Shift” file transfer from on-premises to the cloud 
  • Platform Independent – SoftNAS operates on AWS, Azure, and VMware 
  • Data Tiering – Intelligent storage management reducing overall storage costs 


    Common questions related to SoftNAS and AWS VPC: 

    We use VLANs in our data centers for isolation purposes today. What VPC construct do you recommend to replace VLANs in AWS?

    That would be subnets, so you could either leverage the use of subnets or, if you really wanted a different isolation mechanism, create another VPC to isolate those resources further and then pair them together via VPC peering.

    You said to use IAM for access control, so what do you see in terms of IAM best practices for AWS VPC security?

    The most significant thing is when you deal with either third-party products or customized software on your web server. Anything that requires the use of AWS API resources needs a secret key and an access key. You can store that secret key and access key in some type of text file and have the software reference it, or – the easier way – just set the minimum level of permissions that you need in an IAM role, create that role, and attach it to your instance at start time. Now, the role itself cannot be assigned except during start time. However, the permissions of the role can be modified on the fly, so you can add or subtract permissions should the need arise.

    So when you’re troubleshooting complex VPC networks, what approaches and tools have you found to be the most effective?

    We love to use traceroute. I love to use ICMP when it’s available, but I also like to use AWS Flow Logs, which allow me to see what’s going on at a much more granular level, and also leveraging tools like CloudTrail to make sure that I know what API calls were made by which user to understand what’s gone on.

    What do you recommend for VPN intrusion detection?

    There are a lot of them that are available. We’ve got some experience with Cisco and Juniper for things like VPN, and Fortinet, whichever you have. As far as IDS goes, Alert Logic is a popular solution. I see a lot of customers that use that particular product. Some people like some of the open-source tools like Snort and things like that as well.

    Any recommendations around secure jump box configurations within AWS VPC?

    If you’re going to deploy a lot of your resources within a private subnet and you’re not going to use a VPN, one of the ways that a lot of people do this is to just configure a quick jump box. What I mean by that is just to take a server, whether it be Windows or Linux depending upon your preference, put it in the public subnet, and only allow access from a certain set of IP addresses over either SSH from a Linux perspective or RDP from a Windows perspective. It puts you inside the network and allows you to gain access to the resources within the private subnet.

    And do jump boxes sometimes also work with VPNs? Are people using VPNs to access the jump box too for added security?

    Some people do that. Sometimes they’ll put a jump box inside of the VPC and you VPN into that. It’s just a matter of your organization’s security policies.

    Any performance or further considerations when designing the VPC?

    It’s important to understand that each instance has its own available amount of resources, not only from a network I/O but from a storage I/O perspective. It’s also important to understand that a 10-gigabit instance – let’s say the c3.8xlarge, which is a 10-gigabit instance – doesn’t get 10 gigabits worth of network bandwidth plus 10 gigabits worth of storage bandwidth. That’s 10 gigabits for the instance. So if you’re pushing a high amount of I/O from both a network and a storage perspective, that 10 gigabits is shared, not only by the network but also by access to the underlying EBS storage network. This confuses a lot of people: it’s 10 gigabits for the instance, not just a 10-gigabit network pipe.
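The shared-pipe point can be sketched with simple arithmetic. The storage throughput figure below is hypothetical; only the 10-gigabit instance figure comes from the discussion above:

```python
# Illustrative only: an instance's advertised bandwidth is shared by
# network traffic AND EBS storage traffic (network-attached storage).
instance_bandwidth_gbps = 10.0   # the 10-gigabit instance figure above
storage_io_gbps = 6.0            # hypothetical EBS throughput in use

# Whatever storage I/O consumes comes out of the same shared pipe.
network_headroom = instance_bandwidth_gbps - storage_io_gbps
print(f"network headroom: {network_headroom} Gb/s")  # 4.0 Gb/s, not 10
```

Budgeting both sides of the pipe this way during sizing helps avoid the common surprise that heavy EBS traffic throttles the network throughput clients actually see.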

    Why would you use an elastic IP instead of the virtual IP?

    What if you had some people that wanted to access this from outside of AWS? We do have some customers whose servers are primarily within AWS, but they want access to files from clients that are not inside the AWS VPC. So you could leverage it that way. This was actually the first way that we created HA, to be honest, because this was the only method at first that allowed us to share an IP address and work around some of the public cloud limitations like no layer-2 broadcast.

    Looks like this next question is around AWS VPC tagging. Any best practices for example?  

    Yeah, so I see people that take different services, like web, database, or application, and tag everything within the security groups and elsewhere with that particular tag. For people that are deploying SoftNAS, I would recommend just using the name SoftNAS as the tag. It’s really up to you, but I do suggest that you use tags. They will make your life a lot easier.

    Is storage level encryption a feature of SoftNAS Cloud NAS or does the customer need to implement that on their own?  

    So as of our version that’s available today which is 3.3.3, on AWS you can leverage the underlying EBS encryption. We provide encryption for Amazon S3 as well, and coming in our next release which is due out at the end of the month we actually do offer encryption, so you can actually create encrypted storage pools which encrypt the underlying disk devices.

    Virtual IP for HA: does the subnet the VIP is part of get added into the AWS VPC routing table?

    It’s automatic. When you select that VIP address in the private subnet, it will automatically add a host route into the routing table, which allows clients to route that traffic.
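    The effect of that host route can be sketched with a toy longest-prefix-match lookup. The addresses and route targets below are invented for illustration, not taken from a real VPC:

```python
import ipaddress

# Toy longest-prefix-match lookup to illustrate the effect of the /32 host
# route added for the virtual IP (all addresses here are made up).
route_table = {
    "10.0.0.0/16": "local",      # the VPC CIDR
    "0.0.0.0/0": "igw-1234",     # default route
}

def next_hop(dest, routes):
    """Return the target of the most specific route matching dest."""
    dest = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(p), t) for p, t in routes.items()
               if dest in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Before HA setup: traffic to the VIP just follows the generic routes.
print(next_hop("172.16.0.10", route_table))   # igw-1234
# HA setup installs a host route pointing the VIP at the active node's ENI.
route_table["172.16.0.10/32"] = "eni-active-node"
print(next_hop("172.16.0.10", route_table))   # eni-active-node
```

Because the /32 entry is the most specific match, clients in the VPC reach the active node; failover just repoints that one route.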

    Can you clarify the requirement on an HA pair with two NICs, that both have to be in the same subnet?

    So each instance needs two NICs (ENIs), and each of those ENIs actually needs to be in the same subnet.

    Do you have HA capability across regions? What options are available if you need to replicate data across regions? Is the data encryption at-rest, in-flight, etc.?

    We cannot do HA with automatic failover across regions. However, we can do SnapReplicate across regions, and then you can do a manual failover should the need arise. The data you transfer via SnapReplicate is sent over SSH. You could replicate across regions, across data centers, or even across different cloud vendors.

    Can AWS VPC peerings span across regions?

    The answer is, no, that it cannot.

    Can we create an HA endpoint to AWS for use with AWS Direct Connect?

    Absolutely. You could go ahead and create an HA pair of SoftNAS Cloud NAS, leverage direct connect from your data center, and access that highly available storage.

    When using S3 as a backend and a write cache, is it possible to read the file while it’s still in cache?

    The answer is, yes, it is. I’m assuming you’re speaking about the eventual consistency challenges of the AWS standard region. With the manner in which we deal with S3, where we treat each bucket as its own hard drive, we do not have to deal with the S3 consistency challenges.

    Regarding subnets, the example where a host lives in two subnets, can you clarify both these subnets are in the same AZ?

    In the examples that I’ve used, each of these subnets is actually within its own availability zone. So, again, each subnet is in its own separate availability zone, and if you want to discuss this further, please feel free to reach out.

    Is there a white paper on the website dealing with the proper engineering for SoftNAS Cloud NAS for our storage pools, EBS vs. S3, etc.?

    Click here to access the white paper, which is our SoftNAS architectural paper which was co-written by SoftNAS and Amazon Web Services for proper configuration settings, options, etc. We also have a pre-sales architectural team that can help you out with best practices, configurations, and those types of things from an AWS perspective. Please contact sales@softnas.com and someone will be in touch.

    How do you solve the HA and failover problem?

    We actually do a couple of different things here. When we set up HA, one of the things we do is create an S3 bucket that acts as a third-party witness. Before anything takes over as the master controller, it queries the S3 bucket and makes sure that it’s able to take over. The other thing that we do is, after a takeover, the old source node is actually shut down. You don’t want a situation where a node is flapping up and down, kind of up but kind of not, and it keeps trying to take over. So if a takeover occurs, whether it’s manual or automatic, the old source node in that particular configuration is shut down. That information is logged, and we’re assuming that you’ll go out and investigate why the failover took place. If there are questions about that in a production scenario, support@softnas.com is always available.
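    The witness idea can be sketched in a few lines of Python. This is not SoftNAS’s actual protocol, just a minimal illustration of checking a shared third-party store before promotion, with the losing node fenced (shut down) rather than left flapping:

```python
# A dict stands in for the S3 witness bucket; node names are invented.
witness = {"master": "node-a"}

def try_takeover(node, peer_alive, store):
    """Promote `node` only if the peer is unreachable; record the new
    master in the witness store and fence the old source node."""
    if peer_alive:
        return False            # healthy peer: never take over
    if store.get("master") == node:
        return True             # already the master
    store["master"] = node      # witness records the new master
    store["fenced"] = True      # old source node gets shut down (fenced)
    return True

# A takeover attempt against a healthy peer is refused.
assert not try_takeover("node-b", peer_alive=True, store=witness)
# With the peer down, node-b wins the witness check and is promoted.
assert try_takeover("node-b", peer_alive=False, store=witness)
print(witness["master"])  # node-b
```

The witness prevents split brain: both nodes consult the same third party, so only one can ever believe it is the master.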

    Webinar: AWS vs. On-Premises NAS Storage – Which is Best for Your Business?

    The maintenance bill is due for your on-premises SAN/NAS, or it just increased. It’s hundreds of thousands or millions of dollars just to keep your existing storage gear under maintenance. And you know you will need to purchase more storage capacity for this aged hardware. Do you renew and commit another 3-5 years by paying the storage bill and further commit to a data center architecture? Do you make a forklift upgrade and buy new SAN/NAS gear or move to hyperconverged infrastructure? Do you move to the AWS cloud for greater flexibility and agility? Will you give up security and data protection?

    The following is a recording and full transcript from a webinar that aired live on 08/30/16. You can download the full slide deck on Slideshare

    Full Transcript: AWS vs. On-Premises NAS Storage

    Taran Soodan:             Good afternoon everyone. My name is Taran Soodan, and welcome to our webinar today on on-premises NAS upgrade, paid maintenance or public cloud. Our speaker today is going to be Kevin Brown who is a solutions architect here at SoftNAS. Kevin, do you want to go ahead and give a quick hello?

    Kevin Brown:              Hello? How are you guys doing?

    Taran:                         Thanks, Kevin. Before we begin the webinar, we just want to cover a couple of housekeeping items with you all. In order to listen to today’s webinar, you’ll click on the telephone icon that you see here on the GoTo Meeting Control Panel.

    Any questions that you might have during the webinar can be posted in the questions box, and we’re going to be answering those questions at the end of the webinar so please feel free to ask your questions.

    Finally, this webinar is also being recorded. For those of you who are unable to make it or have colleagues that this might be of interest to, you’ll be able to share the webinar recording with them later on today. Please keep an eye out for an email, and we’ll send that information your way.

    Also, as a bonus for attending today’s webinar, we are going to be handing out $100 AWS credits, and we’ll have more information about that later on at the end of the webinar.

    Finally, for our agenda today, we’re going to be talking about on-premises to cloud conversation. We’re going to show you the difference between on-premises versus hyperconverged versus AWS.

    We are going to demo how to move your on-premises storage to AWS without having to modify any of your applications. We will also tell you why you should choose AWS over on-premises and hyperconverged.

    We’ll also give you some information about the actual total cost of ownership for AWS, and we’ll show you how to use the AWS TCO Calculator. Then we will tell you a little bit about SoftNAS and how it helps with your cloud migrations.

    Finally, we’ll have a Q&A at the end where we answer any and all questions that you might ask. With that being said, I’ll go ahead and hand it over to Kevin to begin the webinar.

    Kevin, you are now the presenter. Thanks, Kevin. I think you might be on mute.

    Kevin:                         Not a problem. Can you guys see my screen?

    Taran:                         Yes, we can.

    Kevin:                         All right, perfect. Good morning, good afternoon, good evening, and we appreciate that you’ve logged in today from wherever you are in the world. We thank you for taking the time and joining us for this webinar.

    Today, we’re actually going to focus on a storage dilemma facing IT teams and organizations all across the world. The looming question is: what do I do when my maintenance renewal comes up?

    Teams are left with three options. Either you stay on-premises and pay the renewal fee for your maintenance bill, which is a continuously increasing expense, or you could consider a forklift upgrade, where you’re buying a new NAS or SAN or moving to a hyper-converged platform.

    The drawback with this option is that you still haven’t solved all your problems: you’re still on-prem, you’re still using hardware, and the next maintenance renewal is only 12 to 24 months away.

    Finally, customers can Lift and Shift their data to AWS – hardware will no longer be required, and data centers could be unplugged. Does this sound familiar to anybody on this call?

    There is an increase in maintenance costs for support, for swapping disks, for downtime. There is exorbitant pricing for SSD drives and SSD storage, a premium you need to pay to ensure that your environment works as advertised.

    You have a never-ending pressure from business to add more storage capacity – we need it, we need it now, and we need more of it. There is a lack of low-cost high-performance object storage, and you’re pressured by the business owners for agile infrastructure.

    The business is growing; data is growing. You need to be ahead of the curve and way ahead of the curve to actually keep up. Let’s take a look. Let’s do a stare and compare of all these three options that are there so On-Premise, Hyper-Converged, and AWS Cloud.

    From a security standpoint, all these three options deliver a secure environment with all the rules and policies that you’ve already designed to protect your environment — they travel with you.

    From an infrastructure and management scenario, your on-premise and hyper-converged options still require your current staff to maintain and update the underlying infrastructure.

    That’s where we’re talking about your disk swaps, your networking, your racking and un-racking. AWS can help you limit this IT burden with its managed infrastructure.

    From a scalability standpoint, I dare that you call your NAS or SAN provider and tell them that you think you bought too much storage last year and you want to give them back some. Tell me how that plays out.

    I say this in jest but, in AWS, you get just that option. You have the ability to scale up or scale down allowing you to grow as needed and not as estimated. We talked a bit, on the last slide, about the infrastructure and how AWS can help lessen some of that IT burden.

    For your on-premise and hyper-converged system, you control and manage everything from layer one all the way up to layer seven. In an AWS model, you can remove the need for jobs like maintenance, disk swapping, monitoring the health of your system and hand that over to AWS.

    You’re still in control of managing user-account access in your application, but you can wave goodbye to hardware, maintenance fees, and forklift upgrades.

    With that, I’d like to share eight reasons for you to choose AWS for your storage infrastructure. From a scalability standpoint, we talked about this earlier: giving you the ability to grow your storage as needed, not as estimated. For the people who are storage gurus, you know exactly what that means.

    I’ve definitely been in rooms sitting with people with predictive modelling about how much data we are going to grow by for the next quarter or for the next year. I could tell you for 100% fact, I have never ever been in a room where we’ve come up with an accurate number. It’s always been an idea, a hope, a dream, a guess.

    With AWS and the scalability that it provides, we give you the ability to grow your storage, pay for it as you go, and only pay for what you use. That in itself is worth its weight in gold.

    Not only that, you get a chance to end your maintenance renewals and no longer pay that maintenance ransom, where vendors hold access to your data until the bill is paid.

    There are also huge benefits to trading in CAPEX for the OPEX model. There is no more long-term commitment. When you’re finished with a resource, you send it back to the provider. You’re done with using your S3 disk? You turn it off and send it back to Amazon.

    You also gain freedom from having to make a significant long-term investment in equipment that you and I know will eventually break down or become outdated. You also have a reliable infrastructure.

    We’re talking about S3 and its eleven nines of durability, and EC2, which gives you four nines of availability for your multi-AZ deployments. You have functions like disaster recovery, ease of use, ease of management, and you’re utilizing best-in-class security to protect your data.
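    For a rough sense of scale on those durability numbers, here is a quick bit of arithmetic (the object count below is an arbitrary example, not from the webinar):

```python
# "Eleven nines" of durability means a 99.999999999% chance each object
# survives a given year, i.e. roughly a 1e-11 annual loss probability.
annual_loss_prob = 1 - 0.99999999999   # ~1e-11 per object per year

# Expected number of objects lost per year if you store 10 million objects:
expected_losses = 10_000_000 * annual_loss_prob
print(expected_losses)  # on the order of 1e-4: about one object per 10,000 years
```

That is the sense in which object storage durability dwarfs what a handful of replicated disks can offer on-premises.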

    Taran:                         Thanks for that, Kevin. What we are going to do now is just ask a quick poll question to the audience here. If you’re currently using an on-premises NAS system and it’s coming up for maintenance renewal, what do you intend to do?

    Are you going to do an in-place upgrade, where you use your existing hardware but update the software? Are you going to do a forklift upgrade, where you buy new hardware and software?

    Or are you going to move to a hyper-converged system? Or are you considering the public cloud whether it’s AWS or other options? I’ll go ahead and give just a few more seconds for everyone to answer.

    Looks like about half of you have voted already, so I’ll give that about 10 more seconds and I’ll close the poll. I’ll go ahead and close the poll and share the results with everyone.

    As you can see here, we have a good mix of what you all intend to do. Most of you are intending to move to the public cloud, whether it’s AWS or others. Looks like a lot of you are interested in in-place NAS and SAN upgrade, so it’s interesting.

    A lot of you are also considering moving to hyper-converged. For those of you who answered other, we would be curious to learn more about what your other plans are. On the questions pane, you’re more than welcome to write what you’re intending to do.

    With that, I’ll go ahead and hand it back to Kevin to handle the demo. Kevin.

    Kevin:                         Not a problem. Can you guys see my screen?

    Taran:                         I can.

    Kevin:                         Perfect. For the sake of my feed, I went ahead and did a recording of this just to make sure that no issues or problems would happen with the demo, so I can walk through. I’m going to talk through this video that I’m playing.

    We’re talking about lifting and shifting to the cloud, and that can be done in multiple ways. At petabyte scale, you can incorporate AWS Import/Export using Snowball. You can connect directly using AWS Direct Connect, or you could use tools and programs like rsync, lsyncd, and Robocopy.

    Once your data is in the cloud, the question is how you’re going to maintain the same enterprise level of experience that you’re used to. With SoftNAS, you have that ability. We give you the opportunity to have a no-downtime SLA, and we’re the only company that gives that guarantee.

    We will be able to walk through the demo. Let me show you. One of the things I’m going to show is the ease of use we have in attaching storage and sharing that storage out to your data consumers.

    In this demo, we’re going to go ahead and we’re going to add a device. We are going to add some S3 storage. Very simple and easy, we click “add device.” It comes up “Cloud Disk Expander,” and we’re going to choose Amazon Web Services, S3.

    We’re going to go ahead and we’re going to click next. Because we’ve defined our IAM Role, you don’t have to share your access keys or security keys. We do give you the ability to go in and select your region.

    For this demo, my instances exist in Northern California, so I’m going to select Northern California. If you had an S3 region that’s closer to you, you’d have the ability to select that instead.

    We give you the ability to choose the bucket name, and we increment the S3 bucket names as they are created. For the sake of this demo, I’m going to go ahead and select a 10GB drive just to make sure that this creates quickly and easily for you to see.

    We also give you the ability to encrypt that disk; that would be Amazon’s encryption that we allow you to pass through. We also give you the ability to encrypt data at a different level.

    From this, we see that our S3 disk has been created and is now available to be assigned to a storage pool. We are also going to add some EBS disk. We know S3 is not Amazon’s most performant disk. For data that requires a more performant backend, we allow you to add Amazon’s EBS disk.

    With that, we already told you about the IAM role, so we don’t have to use our keys. For the sake of this, we’re going to do 15GB. We have the ability to encrypt it again. This is disk-level encryption.

    Then we also give you the ability to choose the storage that is best suited to your purpose. We have General Purpose SSD. You could use Provisioned IOPS disks for your more performance-sensitive data, or you could choose standard.

    For the sake of this demo, we will choose general purpose, and I will select to use three disks. With that, we’re just going to go ahead and create that EBS disk.

    As you see, the wizard in the background is creating it. It’s going through the process of creating, attaching, and partitioning those disks to be used by the SoftNAS device.

    We give it a couple of seconds and we’ll see that this completes. All right, now we have our system. Both our S3 and our EBS disks are now available for us to assign to a storage pool.

    Our storage pool is what we use to aggregate our disks. We then have the ability to add some enhancements as we go through the process. We’ll go ahead and select “Create.”

    At this point in time, I am very descriptive with what I name my disks. I am going to call this EBS pool. I’m going to go ahead and select RAID zero, and that’s because we’re already working with highly performing, redundant disks.

    At that point, nothing that I add can improve on the performance of the system Amazon has, but we can stripe across those disks, which should add performance. We’ll go ahead.

    We’ll select that drive, and we’ll go ahead. As we proceed, it comes up saying that we’ve chosen no RAID, and that’s okay because we’re talking about cloud and highly redundant disks. We’re in.

    We’re going to do the same thing for our S3 pool. We’re going to go in. We’re going to call it S3_pool if I could spell it correctly. We’re going to go ahead and we’re going to select our S3 storage. We’re going to select RAID zero so we could stripe across.

    Remember, striping across disks is similar to what you would have in your environment with a server. The more disks you’re striping across, the more performance you get out of the system.
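    The striping idea can be sketched as a toy round-robin block layout. This is purely illustrative, not how ZFS or EBS actually place data:

```python
# RAID 0 striping, conceptually: consecutive logical blocks are spread
# round-robin across member disks, so N disks can serve N I/Os in parallel.
def stripe(blocks, n_disks):
    """Map each logical block index to a (disk, offset) pair round-robin."""
    return {b: (b % n_disks, b // n_disks) for b in blocks}

layout = stripe(range(8), n_disks=4)
print(layout[0], layout[1], layout[4])  # (0, 0) (1, 0) (0, 1)
# Consecutive blocks land on different disks, so a sequential read of
# blocks 0-3 touches all four disks at once -- that's the throughput win.
```

The trade-off, of course, is no redundancy; in the cloud scenario above that is acceptable because the underlying EBS and S3 devices are already redundant.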

    We’re going to go ahead. We’re going to create this pool. Now we have an EBS pool and we also have an S3 pool set up. We’re going to go in. These are some of the other enhancements that we could add to the system.

    In front of my S3 pool, I’m going to add some read cache. With the instance type that I chose, I have an ephemeral drive associated with it. This is an m3.xlarge. With that, I’ll be able to put that ephemeral drive in front of my system as a way to make my S3 disk more performant.

    I’m going to go ahead and select it, just as simple as that, in “Select/Add Cache.” Just as simple as that, I’ve made my S3 disk more performant: I’m utilizing not only striping, but also the fact that a read cache now sits in front of the system to make those disks faster.

    Now that we’ve added our disk devices, we’ve aggregated them in a storage pool. The next thing that we have to do is that we have to share this out to our end-users, to our applications, to our servers.

    The way that we do that is we create volumes. As for how this is going to be shared out, your volume name should adhere to your company policies. Whatever share-naming rules you’re used to, you should adhere to those.

    From a volume name standpoint, we’re going to go with user-share, and I’m going to go ahead and select that storage pool. It’s S3. I’m going to select the storage pool.

    I don’t need my user-shares, or access to my user-shares, to be the most performant. I’m going to share it out via NFS and via CIFS. I also have the ability to make it thin provisioned or thick provisioned.

    We also have the ability to do compression and deduplication at this level. They both come with a resource cost. If you are looking at doing compression, you’re going to add a little bit more CPU. For dedupe, you’re going to want to add some more memory.

    We’ll talk about snapshots, which we get right out of the box. By default, we enable snapshotting of the system, and that gives us the ability to sustain the readable/writable clones which we will show you later.

    By default, we snapshot the volume every three hours, but it’s definitely tunable if something else would be better for your environment, or you could schedule it as needed.

    We also have a scheduled snapshot retention policy. We start off with ten hourly, six daily, and one weekly. We can go ahead and create that volume. Now we have a volume created that allows you to access it within the cloud via CIFS or NFS, and it’s backed by S3 storage.
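    A retention policy like that ten-hourly/six-daily/one-weekly default can be sketched as a pruning function. This is an illustrative sketch, not SoftNAS’s actual scheduler:

```python
from datetime import datetime, timedelta

# Keep the N most recent snapshots, plus the newest snapshot of each of
# the most recent D days and W ISO weeks (a common retention shape).
def to_keep(snapshot_times, hourly=10, daily=6, weekly=1):
    """Given snapshot timestamps, return the set that is retained."""
    newest_first = sorted(snapshot_times, reverse=True)
    keep = set(newest_first[:hourly])                  # most recent snapshots
    days = {t.date() for t in newest_first}
    for d in sorted(days, reverse=True)[:daily]:       # newest per recent day
        keep.add(max(t for t in newest_first if t.date() == d))
    weeks = {t.isocalendar()[:2] for t in newest_first}
    for w in sorted(weeks, reverse=True)[:weekly]:     # newest per recent week
        keep.add(max(t for t in newest_first if t.isocalendar()[:2] == w))
    return keep

# A snapshot every three hours, as in the default above, for five days:
now = datetime(2016, 8, 30, 12, 0)
snaps = [now - timedelta(hours=3 * i) for i in range(40)]
kept = to_keep(snaps)
print(len(snaps), "snapshots ->", len(kept), "retained")  # 40 snapshots -> 14 retained
```

The point is that retention bounds snapshot count (and therefore space) while still keeping recent history dense and older history sparse.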

    We are now going to go ahead and create more performant storage. I’m setting up a website that I’m going to host in the cloud, and I want to have some fast disk sitting behind it.

    I probably wouldn’t call this a website in production. However, for the sake of this demo, I’m going to go ahead and call it website. I go ahead. I select my storage pool. It’s an EBS disk.

    I’m going to go ahead. I’m going to share this out via NFS, and I’m sharing it out via CIFS. We talked about compression and deduplication and snapshots. We’re going to go ahead, and we’re just going to click create.

    Now we have both volumes created; they are backed by AWS disk. We’re going to go in and we’re going to talk about snapshotting.

    QA did something that ended up breaking our environment. Because they broke our environment, they want to be able to test that the fix they have created works.

    What I’m going to do is I’m going to go in. I can’t let them use my production data. What I could do is do a snapshot of my existing environment, and I’m going to create a readable/writable clone so that they could test their fix on that. It’s just as simple.

    This is now a point in time instance of the data that existed in website and it’s useable, readable, writable, and shared out in my environment to my users. We also have the ability to attach and integrate with LDAP and Active Directory to ensure that security of your environment is intact.

    With that, that is in a nutshell what we did to give you access to your data. I’m going to run another video. This video should be a little faster. We’re going to go in, and what we’re actually going to do is set up SnapReplicate between two instances within AWS.

    We talked about the SLA within Amazon that gives you five nines worth of availability for instances that you have in multiple AZs. SoftNAS allows you to do that.

    I have two instances in this environment, both in the west: one is in west-1a and one is in west-1b. With that, I am going to set up replication of my data between both of my instances.

    The first thing to know is that we require the storage size to be the same on both nodes. It doesn’t have to be the exact same type of storage, but it needs to be the same amount of storage.

    We also require that you name the storage pools the same. I’ll go in. For the sake of time, I went ahead and I added the drives in advance. With that, I’ll just go ahead and I’ll create my storage pools.

    Let me create my EBS storage pool if I could spell it correctly, again. Selecting RAID zero, and then I’m going to go ahead and select “Create.” We’re being warned again.

    We are very concerned about your data. We want to make sure that it’s extremely secure. With that, we have EBS pool. We’re going to go ahead and we’re going to create our S3 pool or pool of S3 disks.

    We’re going to go and we’ll be able to create. Now we have both systems ready to do replication between them. Everything that exists on the LUNs and volumes will be automatically synced from the source node over to the target.

    Just verifying one last time to make sure that the pool names are correct in there. Now we should be able to go to SnapReplicate and set up the replication between the source and the target node.

    We’re going to go in. We’re going to add. We’re going to click next. All that it is asking me for is the host name of the instance on the other side. I’ll go ahead and enter that, and then it’s going to prompt me for the password for the other side.

    We go ahead. Just with that click, I’ve made my system more resilient. I’m going through the process right now of replicating my data into a different availability zone.

    This gives me the ability to do a manual failover, if push came to shove, and access my data from a different node. If you notice, during this process we are now taking everything that we created, even the readable/writable clones, and moving it over to the secondary instance.

    I’ll show you. We have source node, we have a target node, and it’s being asynchronously replicated between nodes. This is great. We have the ability to have our data in two locations. It took little or no time in order to do that.

    What if we could stand up two instances, put a VIP in front of those instances, and give you the ability to fail over in case of an issue? We have the ability to do that with SnapHA.

    We are going to select our virtual IP. You have the ability to use a virtual IP or an elastic IP. For the sake of this demo, we are going to select a virtual IP and go ahead.

    I’m going to select next. Because of the IAM role, we don’t need to share the access key. We are going to go ahead and click install. While we are installing this, there is a bunch of heavy lifting being done in the background that SoftNAS is handling on its own.

    We are updating routing tables. We are ensuring that S3 is accessible. We are making sure that each instance could actually talk to each other. We are setting the heartbeats and putting the heartbeats in place.

    With that VIP that sits in front of these instances, we want to make sure that if your source node goes down that your target node is going to stand up and be able to give you no downtime access to your data.

    We are going to hover around 30% for a couple of seconds, jump to 50, and within a second we’re at a hundred. There we go. This is real time; it was not sped up. That’s just to show how easy it is to set up an HA environment, which allows you, from a cloud standpoint, to lift and shift your applications, giving them the ability to use the protocols they are familiar with. That’s NFS, CIFS, AFP, and iSCSI if you see fit.

    With that, I’ll end the video showing part of the demo and I’ll go back to showing more of the PowerPoint slides. I’m sorry. Let me take a sip of water. I apologize.

    I know if you’re in a room with a bunch of folks or even if you’re by yourself, you’re saying, “It looks good but how much is this going to cost?”

    I am actually very glad that you asked that question. I want to introduce you to the TCO Calculator. You can use this calculator to compare the costs of running your applications on-premises with what it would actually cost you within an AWS environment.

    All you have to do is describe a bit of your configuration and you’d be able to do it. Let me go ahead. I’m going to show it fairly quickly. Let’s calculate. It’s very easy.

    You’re going in. You’re selecting currency. It’s going to ask you what type of environment it is, the region, your application, the number of VMs you’re looking at, and storage: what are you looking at from a storage standpoint?

    For the sake of this demo, we went ahead and did a three-year total cost of ownership for AWS for a 40TB deployment. What you can actually see is that there is a huge cost savings in going from an on-premise system to an AWS system, and the majority of this cost savings is actually located in infrastructure.

    Getting rid of the need to manage the underlying infrastructure accounts for about 61% of the cost savings.
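    With made-up dollar figures, the arithmetic behind a savings percentage like that looks as follows. Only the idea of comparing three-year totals comes from the webinar; the inputs below are hypothetical:

```python
# Savings as a percentage of the on-premises baseline. The dollar amounts
# are invented for illustration; plug in your own TCO Calculator outputs.
def savings_pct(on_prem_cost, aws_cost):
    """Percent saved by moving from the on-prem cost to the AWS cost."""
    return round(100 * (on_prem_cost - aws_cost) / on_prem_cost)

# Hypothetical 3-year TCO: $1.0M on-premises vs. $390K on AWS.
print(savings_pct(1_000_000, 390_000))  # 61
```

Running the same comparison with your own renewal quote and calculator output gives you the number that matters for your budget conversation.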

    How do you pay for this? Right after “how much does it cost?”, the next question is “how can we pay for it?” The challenge is you want to move to AWS, but where’s your budget for AWS?

    We talked earlier in the slides about how budgets are not increasing. Budgets are being squeezed, but you’re being forced to think about moving to the cloud and how to move to the cloud.

    What we suggest as an innovative method is to allocate your existing maintenance budget to making your shift to the cloud. We do have a mock-up right now in the slide.

    We are seeing that for a 50TB on-premise system, the maintenance renewal budget is around $450,000. For $265,000, you could stand up an environment in AWS that is 20TB larger. That in itself is cost effective enough.

    Once you’ve made that decision, here are some of the steps you might take. Pick a tier 2 application to migrate to AWS and test the waters with that application. From your learnings, you can then create a work plan to migrate the remaining apps, workloads, and data.

    When all the migration is done, it’s time for you to unplug your on-premise hardware and have a party because of all the money that you are going to be saving.

    Lift and Shift. SoftNAS allows you to lift and shift while maintaining the same enterprise-level of service and experience that you are used to. We are the only company that gives a no downtime guarantee SLA.

    We will give you the same enterprise feel in the cloud that you are used to on-premise. Whether or not that’s serving out your data via NFS for your apps that need NFS, CIFS, or SMB, we have the ability to do that.

    We deploy within minutes. We give you the ability, as we demonstrated, to do storage snapshots. The GUI in and of itself is easy to learn and easy to use, with no training required. You don’t need to send your teams back for training to be able to use the SoftNAS software.

    It is there, it is intuitive, and it’s easy to use. We talked about disaster recovery and HA, and being able to move to a monthly or annual subscription.

    Help us help you migrate your existing applications to the cloud. We allow you to use the standard protocols. We leverage AWS’s storage elasticity. SoftNAS enables existing applications to migrate unchanged.

    Our software provides all the Enterprise NAS capabilities — whether it’s CIFS, NFS, iSCSI — and it allows you to make the move to the cloud and preserve your budget for the innovation and adaptations that translate to improved business outcomes.

    SoftNAS can also run on-premise via a virtual machine, create a virtual NAS from an existing storage system, and connect to AWS for cloud-hosted storage.

    Taran:                         Thanks for that, Kevin. Going on to our next poll, I’m going to launch that real quick. What storage workloads do you intend to move to the AWS Cloud? For those of you who are interested in moving to AWS, is it going to be NFS, CIFS, iSCSI, AFP, or are you just not intending to move to AWS at all?

    It looks like we’ve gotten about half of you to vote so far, so I’ll give it about 10 more seconds. I’ll go ahead and close the poll and pull up our results. It looks like over 40% of you are intending to move NFS workloads to the AWS Cloud, which is pretty consistent with what we’ve seen.

    Interest in CIFS and iSCSI is fairly balanced as well. A couple of you just have no interest in moving to the AWS Cloud. For those of you who don’t have interest in moving to the AWS Cloud, again, in the questions pane, please let us know why you don’t intend to move to AWS. I’ll go ahead and hand it back over to you, Kevin.

    Kevin:                         Not a problem. I’ll do a very quick review of SoftNAS. SoftNAS in a nutshell is an enterprise filer that runs as a Linux appliance with a ZFS backing, in the cloud or on-premise.

    We have a robust API and CLI, and we integrate with AWS S3 and EBS, on-premise storage, and VMware. This allows us to provide data services like block replication and gives you access to cloud disks.

    We give you storage enhancements such as compression and in-line deduplication, multi-level caching, and the ability to produce writable snap clones, and encrypt your data at rest or in-flight.
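    Since the appliance is ZFS-backed, these storage enhancements correspond to familiar ZFS primitives. The following is a hedged sketch with a hypothetical pool and volume name, assuming shell access to a ZFS-backed system; it is not the SoftNAS management interface, which exposes these features through its GUI and API:

```shell
# Hypothetical sketch on a ZFS-backed appliance; pool/volume names are placeholders.

# Enable inline compression and deduplication on a volume
sudo zfs set compression=lz4 pool0/appdata
sudo zfs set dedup=on pool0/appdata

# Take a point-in-time snapshot, then create a writable clone from it
sudo zfs snapshot pool0/appdata@before-upgrade
sudo zfs clone pool0/appdata@before-upgrade pool0/appdata-test
```

    The writable clone initially shares blocks with the snapshot, which is why snap clones are fast to create and space-efficient until they diverge.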

    We continue to deliver best-of-breed services by working with our industry-leading partners. Some of these names you might know, such as Amazon, Microsoft, and VMware. We continue to partner with them to enhance both our offerings and theirs.

    These are brands that you trust, and they trust us too, because these companies are using SoftNAS in many different use cases across their environments.

    To name just a few – Netflix, The Weather Channel, and too many more to list.

    Taran:                         Thanks, Kevin. I’m just going to go ahead and take over the screen share really quick. To thank everyone for attending this webinar, we are handing out AWS credits. If you click on this Bitly link right here, you’ll actually be able to register for a $100 AWS credit.

    We’re only giving out 100 of these so the first 100 people to register for it will get it. I’m also going to paste the link here in the chat window and you’ll be able to click on that link to register for the credit.

    You just have to go to the page, put in your email address, and one of our team members will get back to you later today with your free credit information.

    As far as next steps go, for CXOs in our audience, we invite you to try out that AWS TCO Calculator. If you have any questions about it, feel free to contact us. Just visit our “contact us” page here on the bottom and we’ll be happy to answer any questions that you might have about cloud costs and how SoftNAS and AWS result in more cost savings over an on-premise solution.

    We also recommend that you have your technical leads try out SoftNAS. Tell them to visit the softnas.com/tryaws page and they’ll be able to go and try out SoftNAS free for 30 days.

    For some of the more technical people in our audience, we invite you to go to our AWS page where you can learn a little bit more about the details of how SoftNAS works on AWS. Any questions that you might have, again, please feel free to reach out to us.

    We also have a whitepaper available that covers how the SoftNAS architecture works on AWS and gets into some technical details that should help you out.

    That covers everything that we had to talk about today, so now let’s go on to the Q&A session. We thank you for asking questions throughout the webinar. I’ll go ahead and start with a few of them.

    Our first question here is: why choose SoftNAS over the AWS storage options?

    Kevin:                         I think that that’s a good question. Why? We give you the ability, from a SoftNAS standpoint, to be able to encrypt the data. As we spoke about in the webinar, we’re the only appliance that gives a no down-time SLA.

    We stand by that SLA because we have designed our software to be able to address and take care of your storage needs.

    Taran:                         Thanks for that, Kevin. On to the next question — is Amazon the only S3 target or are there other providers as well?

    Kevin:                         Amazon is not the only S3 provider. We do have the ability to connect to blob storage, and we are in other platforms other than AWS – that would be CenturyLink, Azure, among some of the others.

    Taran:                         Thanks Kevin. Our next question; do you have any performance data for running on standard EBS non-provisioned volumes?

    Kevin:                         I think that might merit its own session. I believe that question came from James. James, feel free to reach out to us. We’ll definitely get on a call with you, and that’s something that we could dig into more as we discuss your use-case. Depending on your use-case and the enhancements that we could put in front of those disks, we could definitely come up with a solution that would be beneficial for you.

    Taran:                         Thanks Kevin. Then onto the next question. If I move my storage to the cloud, do I have to move SSL too?

    Kevin:                         It totally depends on your use-case. This is another use-case question, and depending on your internet connectivity, that’s something that would be defined by that. The last mile of your connectivity into the cloud is always what’s going to determine your level of performance.

    Each use case is different, and that’s something that we could definitely do a deep-dive and dig into your use-case again.

    Taran:                         Thanks Kevin. It looks like we have two more questions. We’ve got quite a bit of time so for those of you who are interested in asking questions, just go ahead and paste them in the questions pane and we’ll be happy to answer them.

    As far as our next question goes, what kind of use-cases are best for SoftNAS on AWS?

    Kevin:                         That’s a very good question. There are multiple use-cases. If you have production data that you need CIFS access for, SoftNAS gives you the ability to do that. If you have a web back-end that needs iSCSI or higher-performing disks, we give you the ability to do that as well.

    Like we talked about in this webinar, say you have an application that you need to move to the cloud, but rewriting it to support S3 or any other kind of block storage would take you six months to a year.

    We give you the ability to migrate that data by setting up an NFS or a CIFS [inaudible 50:29] whatever that application is already used to in your enterprise.

    Taran:                         Thanks Kevin. It looks like that’s all the questions for today. Before we close this webinar, we just want to invite you to try out SoftNAS free for 30 days on AWS — just visit softnas.com/tryaws.

    Also, at the end of this webinar, we have a quick survey available. We ask that you please fill out the information so that we can better serve you in the future. Just knowing what kind of webinars really interest you will help us improve.

    With that, that’s all we had for you today. We hope you guys have a great day.

    Why SoftNAS on AWS?

    SoftNAS Enterprise and SoftNAS Essentials for AWS extend the native storage capabilities of AWS, providing the POSIX compliance and storage access protocols needed to create a virtual cloud NAS without re-engineering existing customer applications.

    SoftNAS products allow customers to migrate existing applications and data to AWS with NFS, CIFS/SMB, iSCSI or AFP support. Customers gain the performance, data protection and flexibility needed to move to the cloud cost effectively and ensure successful project outcomes (e.g., snapshots, rapid recovery, mirroring, cloning, high availability, deduplication and compression).

    Each customer’s journey to the cloud is unique and SoftNAS solutions are designed to facilitate adopting cloud projects according to what makes the most immediate business sense, resulting in the highest return on invested resources (budget, people, time). Whether your need is to consolidate file servers globally, utilize the cloud for data archival or backup, migrate SaaS and other business applications, or carry out Big Data or IoT projects, SoftNAS products on AWS deliver effective, tangible results.

    When to use SoftNAS versus Amazon EFS?

    SoftNAS is a best-in-class software-defined, virtual, unified cloud NAS/SAN storage solution for businesses that need control of their data as well as frictionless and agile access to AWS cloud storage. SoftNAS supports AWS EBS and S3 object storage, and is deployed by thousands of organizations worldwide, supporting a wide array of application and workload use cases. Amazon EFS, by contrast, provides basic and scalable file-based NFS access for use with Amazon EC2 instances in the AWS Cloud. As a basic NFS filer, Amazon EFS is easy to use, allowing quick and simple creation and configuration of file systems. The multi-tenant architecture of EFS accommodates elastic growth and scales up and down for AWS customers that require basic cloud file services.

    If you need more than a basic cloud filer on AWS, SoftNAS is the right choice:

    1. For mission-critical cloud data requirements that demand low latency and high availability with the highest-performance cloud storage I/O possible.
    2. For customers who want SoftNAS ObjectBacker™, which increases storage I/O performance by up to 400%, approaching EBS performance at the price of S3 storage.
    3. For environments with multiple cloud storage projects that require an enterprise-class virtual NAS storage solution with flexible, granular pricing and control, plus performance and instance-selection capabilities.
    4. For customers with modest cloud storage requirements (low TBs) who don’t want to overprovision storage to get the desired performance.
    5. For requirements that demand broad POSIX compliance and the full NFS 4.1 feature set for storage access.
    6. For enterprises requiring multi-cloud (i.e., AWS and other clouds) capabilities, flexibility, and data risk mitigation.
    7. For organizations that need the industry’s most complete NAS filer feature set, including data protection provided by the patented cross-zone high availability architecture, at-rest and in-flight encryption (360-degree Encryption™), and full replication using SnapReplicate™. Additional features include High Availability (SNAP HA™* and DCHA**), snapshots, rapid recovery, deduplication, compression, cloning, and mirroring.

    *   = SNAP HA is available only in SoftNAS Enterprise and SoftNAS Platinum

    ** = DCHA is available as an optional add-on to SoftNAS Essentials via BYOL only

    How and Why is SoftNAS a Better Way to Store & Control Data?

    Moving to SoftNAS on AWS is helping thousands of public organizations and private enterprises around the world replace the old way of controlling and utilizing data storage with a new and better one. Never-ending backups, storage capacity bottlenecks, file server proliferation, applications kept out of the cloud by migration complexity, and spiraling data storage costs become things of the past.


    30-Day Free Trial of SoftNAS on AWS

    Put SoftNAS to the test and try it for 30 days. Contact a SoftNAS representative for more details and a demo to learn more about both SoftNAS products on AWS and the beta of SoftNAS Platinum, so that you can make the right decision.

    A free trial is available on the AWS Marketplace for up to 30 days and up to 20 TB. Just launch and go in minutes.