Webinar: Consolidate Your Files in the Cloud
Buurst Staff

The following is a recording and full transcript from the webinar, “Consolidate Your Files in the Cloud.” You can download the full slide deck on SlideShare.

Consolidating your file servers in AWS or Azure cloud can be a difficult and complicated task, but the rewards can outweigh the hassle.

Covered in this webinar:

– The state of the file server market today
– How to conquer unstructured data
– Benefits of file consolidation in the cloud
– Real customer use cases

Full Transcript: Consolidate Your Files in the Cloud

Trace Hunt:             Good morning, good afternoon, good evening, no matter where on this beautiful planet you may be. We’ll be starting this webinar here in a couple of minutes. We’re waiting for everyone to join before we get started. Just be patient with us, and we will get started. Thank you!
To the people who have just joined, we are going to be starting here in about two minutes. We’re waiting for everyone who wants to join to get in. Thank you.

We’ll be starting here in about one minute.

Let’s get started. Again, good morning, good afternoon, or good evening no matter where you may be on this beautiful planet. I am Trace Hunt, the vice-president of sales at SoftNAS, and we are happy to have you here.

We are in a webinar series; this is the third webinar that we have put on here. We are very excited to have the opportunity to speak with you about what we do well at SoftNAS and how we go about solving customers’ problems in the cloud.

Joining me here today will be Kevin Brown, our solutions architect. He’ll be going through some of the product details. On the agenda today, I’ll handle the housekeeping and give an overview of what we’re talking about here: the file server and storage consolidation pieces.

I’ll hand it over to Kevin to talk about the product itself and the features it has. Then we’ve got a customer success story at the end that we’ll speak to. Then we’ll wrap it up with some calls to action and also take any questions.

The goal here is to get everything on this part done probably in the next 40 minutes, at the most, maybe 45. We’ve got time on the backend to take care of questions and any other thing that you may want to ask or cover with us here.

With that in mind, let’s get started.

Some housekeeping pieces here if you’re not used to the application. If you want to listen through your computer speakers, make sure that you have the mic and speakers on. Everyone is muted.

If you need to use the telephone, the number is up there along with the access code and audio PIN. This session is being recorded. Any questions you may have, you can put in the Questions panel.

Some people use the Chat as well, but please use the Questions panel. We will address those as we go along and try to make sure that we’re able to answer them for everyone.

Let’s go on. What we want to talk about today is that everyone is on this cloud journey, and where we are on that journey varies from client to client, depending on the organization, and maybe even on which parts of the infrastructure or which applications are moving over.

The migration to the cloud is here and upon us. You’re not alone out there. We talk to many customers. SoftNAS has been around since 2012. We were born in the cloud; that’s where we cut our teeth and built our expertise.

Today, we’ve deployed into approximately 3,000 VPCs. There’s not a whole lot in the cloud that we haven’t seen, but at the same time, I am sure we will see more, and we do, every day.

What you’re going to find right now as we talk to companies is that storage is always increasing. I was at a customer’s site a month ago, a major health care company. Their storage grows 25% every year, so they double every refresh cycle.

This whole idea of capacity growing is with us. They talk about 94% of workloads being in the cloud, and 80% of enterprise information, in the form of unstructured data, being there.

A lot of the data analytics and IoT which you’re seeing now are all being stored in the cloud.

Four or five years ago, they were talking about cloud migration and about reducing CAPEX. Now, it’s become much more strategic. The area we help them with is understanding, “Where do we start?”
If I am an enterprise organization and I’m looking to move to the cloud, where do I start? Or maybe you’re already there but looking to solve other issues.

The first use case we want to talk about, as you saw in the title, is file server and storage consolidation. What we mean by that is that these enterprise organizations have file servers and storage arrays that are, as I like to say, rusting out.

What I mean by that falls along probably three axes. One, you could be coming up on a refresh cycle because your company is going through its normal refresh, due to a lease or just to overall policy and budget, how they refresh.

Two, you could be coming up on the end of the initial three-year maintenance that you bought with that file server or storage array. You’re getting ready for that fourth year, and if you’ve been around this game long enough, you know that the fourth and subsequent years are always hefty renewals.

That might be coming up, or you may be getting to a stage here where the end of services or end of life is happening with that particular hardware.

What we want to talk about here today and show you is how to lift that data into the cloud and move it there so you can start using it.

That way, when you go through this refresh, there is no need to refresh those particular file servers and/or storage arrays, especially where you’ve got secondary and tertiary data that it’s mandatory to keep around.

I’ve talked to clients where they’ve got so much data it would cost more to figure out what data they have, versus just keeping it. If you are in that situation out there, we can definitely help you on this.

The ability to take that file server or storage array, take that data, move it to the cloud, and use SoftNAS to access it is what we are here to talk to you about today and how we can solve this.

This can also help in DR situations, even in overall data center situations, any time you’re looking to make a start in the cloud or you’ve got this old gear sitting around.

You’re looking at the refresh and trying to figure out, what do I do with it? Definitely give us a ring about that too.

As we talk about making the case for the cloud here, the secondary and tertiary data is probably one of the biggest problems that these customers deal with because it gets bigger and bigger.

It’s expensive to keep on-premises, and you’ve got to migrate it. Any time you have a refresh or you buy new gear, you’ve got to migrate this data. No matter what tool sets you have, you’ve got to migrate every time you go through a refresh.

Why not migrate once, get it done with, and just add more as needed over time? Plus, the cloud is much safer, easier to manage, and much more secure.

A lot of the misconceptions we’ve had in the past about security in the cloud have all been taken care of.

What you’ll find here with SoftNAS and what makes us very unique is that our customers continue to tell us that, “By using your product, I’m able to run at the same speed or see the same customer-experience in the cloud as I do on-prem.”
A lot of that is because we’re able to tune our product. A lot of it is because of the way we designed our product, and more importantly, our smart people behind it who can help make sure that your experience in the cloud is going to be the same as you have on-prem.

Or if you’re looking for something less than that, we can help you with that too. What you’re going to see us offering is around migration and movement of data, speed, performance, and high availability, which is a must, especially if you’re running any type of critical application or if this data has to be highly available.

You’re going to see that we’re scalable. We can move all the way up to 16 PB. We’ve got compliance, so anyone focused on your security pieces will be happy to hear that, because we take care of the security and the encryption of the data as well, to make sure it works in a seamless fashion for you.

We are very proud of what we’ve done here at SoftNAS, and we are very happy to have the opportunity to bring this to you as part of this series.

What I’m going to do next here is I’m going to hand it over to Kevin Brown to walk you through what’s underneath the covers of what SoftNAS does and talk a little bit more about some of the capabilities.

If questions come up on that, please ask and we’ll be ready to handle them. Kevin, I’m going to give you the keyboard and mouse, and you should be able to start it off.

Kevin Brown:             Not a problem. Thank you, Trace. SoftNAS is a virtual appliance. It’s a virtual appliance whether or not it’s in your cloud platform or if it’s sitting on-prem in your VMware environment.

We are storage agnostic so we don’t care where that storage comes from. If you’re in AWS, we’re going to allow you to utilize S3 all the way up to your provisioned IOPS disk.

If you’re in Azure, from cool blob all the way to premium disk. If you are on-prem, whatever SSDs or hard drives you have in your environment connected to your VMware system, we have the ability to use those as well.

As much as we are storage agnostic on the backend, on the frontend, we’re also application agnostic. We don’t care what that application is. If that application is speaking general protocols such as CIFS, NFS, AFP, or iSCSI, we are going to allow you to address that storage.

And that gives you the benefit of being able to utilize backend storage without having to make API calls to it. It helps customers move to the cloud without having to rewrite their applications to be cloud-specific,

whether that’s AWS or Azure. SoftNAS gives you access to backend storage without the need to talk to the APIs associated with that backend storage.
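The point about avoiding provider APIs can be sketched in a few lines. This is purely an illustration, not SoftNAS code: the mount point and file names are hypothetical, and the commented-out boto3 call just shows what an application would otherwise need to do.

```python
from pathlib import Path

# Without a file gateway, an application has to speak the provider's SDK, e.g.:
#     import boto3
#     body = boto3.client("s3").get_object(Bucket="media", Key="q1.csv")["Body"].read()
# (bucket and key names above are made up for illustration)

def read_through_share(mount_point: Path, relative_path: str) -> bytes:
    """Read a file through a mounted NFS/CIFS share: plain POSIX I/O, no cloud SDK.

    `mount_point` would be wherever the share is mounted, e.g. /mnt/softnas.
    """
    return (Path(mount_point) / relative_path).read_bytes()
```

Because the share looks like any other directory, legacy applications keep working unchanged; only the mount configuration knows the data ultimately lives on S3 or blob storage.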

Let me see, and try to move to the next slide. I’m sorry. Trace, is it possible to move me to the next slide? The keyboard and mouse is not responding.

Trace:             Give me the control back and I’ll do that. There you go.

Kevin:              All right, perfect. Benefits that come out of utilizing SoftNAS as a frontend to your storage. Since I’ve been at this company, our customers have been asking us for one thing in particular, and that is: can we get a tiering structure across multiple storage types?

This is regardless of the backend storage: I want my data to move as needed. We go into multiple environments, and what we see is that probably about 10% to 20% of that data is considered hot data, with the other 80% to 90% being cool if not cold data.

When customers know their data lifecycle, it allows them to save money on their deployment. We have customers that come in and say, “My data lifecycle is: the first 30 to 60 days, it’s heavily hit. The next 60 to 90 days, somebody may or may not touch it. Then after 90 to 120 days, it needs to move to some type of archive tier.”
SoftNAS gives you the ability to do that by setting up smart tiers within that environment. Due to the aging policy associated with the blocks of data, it migrates that data down by tier as need be.

If you’re going through the process, after your first 30 to 60 days, the aging policy will move you down to tier two. If afterwards that data is still not touched after the 90 to 120 days, it will move you down to an archive tier, giving you the cost-savings that are associated with being in that archive tier or being in that lower-cost tier two storage.
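As a rough sketch of how such an aging policy behaves, the toy function below demotes blocks as they age and promotes them again when touched. The tier names and day thresholds are illustrative stand-ins, loosely mirroring the 30/60/90-day lifecycle described above, not SoftNAS’s actual configuration.

```python
# Illustrative tiers, hottest first; thresholds are invented for the sketch.
TIERS = ["hot", "warm", "archive"]
DEMOTE_AFTER_DAYS = {"hot": 60, "warm": 120}  # age at which a block moves down a tier

def next_tier(tier: str, days_since_access: float) -> str:
    """Demote an aged block one tier down, or promote a freshly touched one back up."""
    i = TIERS.index(tier)
    if days_since_access == 0 and i > 0:
        return TIERS[i - 1]          # touched again: move back up one tier
    if tier in DEMOTE_AFTER_DAYS and days_since_access >= DEMOTE_AFTER_DAYS[tier]:
        return TIERS[i + 1]          # aged out: move down one tier
    return tier                       # otherwise stay put
```

Re-running the policy as blocks age (or are accessed again) walks them one tier at a time down to archive and back up, matching the demotion-and-promotion behavior the transcript describes.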

The benefit also is that, just as you can migrate down these tiers, you can migrate back up them. Say you get into a scenario where you’re going through a review of data that has not been touched for a year.

Whether it’s a tax review or it’s some other type of audit, what will happen is that as that data continues to be touched, it will first migrate. It will move from tier three back up to tier two.

If that data continues to be touched, it will move back all the way up to tier one and it will start that live policy going all the way back down.

Your tier three could be multiple things. It could be object storage. It could be EBS magnetic or cold HDDs. Your tier two, depending on the platform that you’re on, could be EBS throughput optimized or GP2 magnetic, and so on.

Your hot tier could be GP2, provisioned IOPS, premium disk, or standard disk – it depends on what your performance needs are.

SoftNAS has patented high availability on all platforms that we support. Our HA is patented on VMware, it’s patented within Azure and also within AWS.

What happens within that environment is that there is a virtual IP that sits between two SoftNAS instances. The two SoftNAS instances have their data stores that are associated with the instance.

There is a heartbeat that goes between those two instances, and the application talks to the virtual IP between them. If an issue or a failure occurs, your primary instance will shut down and service will move to your secondary instance, which now becomes your primary instance.

The benefit behind that is that it’s a seamless transition for any kind of outage that you might have within your environment.

It’s also structured per the providers’ best practices: those instances are placed in an availability set or in different availability zones, so that you can take advantage of the SLAs associated with the provider.
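The heartbeat-and-failover decision can be modeled in a few lines. This is a toy version of the pattern just described, with made-up node labels and a made-up miss threshold, not the product’s actual logic:

```python
def choose_owner(heartbeats: list, missed_threshold: int = 3) -> str:
    """Decide which node should hold the virtual IP.

    `heartbeats` holds the most recent primary-node heartbeat results,
    oldest first (True = heartbeat answered). Only after `missed_threshold`
    consecutive misses does ownership move, so a single dropped packet
    doesn't trigger a failover.
    """
    recent = heartbeats[-missed_threshold:]
    if len(recent) == missed_threshold and not any(recent):
        return "secondary"   # primary looks dead: promote the secondary
    return "primary"         # otherwise the primary keeps the VIP
```

Because clients only ever talk to the virtual IP, moving the VIP to the surviving node is what makes the cut-over seamless from the application’s point of view.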

If we can move to the next slide. There we go. We’ll go ahead and flesh this out all the way.

In this scenario, you have your HA setup within the California region. Within the California region, you have the systems set up with SnapReplicate and HA because that’s what’s needed within that region.

It allows you to do a failover in case any issue happens to an instance itself. In an Azure environment, by having it within an availability set, neither one of those instances will exist on the same hardware or infrastructure, which allows you to get five 9s’ worth of durability.

Within AWS, it’s structured so that you can do that by using availability zones. Using availability zones gives you application durability, up to five 9s there as well.

Up until a year ago, you could say that an availability zone or a region had never gone down for any one of the providers.

But about a year ago and about a month apart from each other, AWS had a region go down and Azure also had a region go down. A customer came to us and asked for a solution around that.

The solution that we gave them was DR to a region that’s outside of that availability zone or that region altogether. That’s what it shows in this next picture.

It’s that although you have SnapReplicate within that region to protect you, you also have DR replication that sits entirely outside that region, to ensure that your data is protected in case a whole region fails.

The other thing that our customers have been asking for: as we come into their environments, they have tons of data. The goal is to move to the cloud, either as quickly as possible or as seamlessly as possible.

They’ve asked us for multiple solutions to help them to get that data to the cloud. If your goal is to move your data as quickly as possible to the cloud — and we’re talking about Petabytes, hundreds of Terabytes of data — your best approach at that particular point in time is taking the FedEx route.

Whether it’s Azure Data Box or AWS Snowball, you can utilize that to send the data over to the cloud and then import that data into SoftNAS to make it easier for you to manage.

That is going to be a cold cut-over. That means that at some point in time, on-prem, you’re going to have to stop that traffic and say, “This is what is considered enough. I’m going to send this over to the cloud, and then I’m going to populate and start over with my new data set in the cloud and have it run that way.”
If you’re looking for a one-time cut-over, that’s what you’re [inaudible 0:24:59], and we’re not talking about petabytes’ worth of data. The approach we explain to customers, and give them a way of actually using, would be Lift & Shift.

By using SoftNAS and its Lift & Shift capability, what you can actually do is a warm cut-over: data is still running on your legacy storage servers while being copied over to the SoftNAS device in the cloud.

Then once that copy is complete, you just roll over to the new instance in the cloud. SoftNAS has congestion algorithms in UltraFAST that basically allow that data to go over highly congested lines.

What we’ve seen within our testing and within different environments is that we could actually push data up to 20X faster by using UltraFAST across lines between the two SoftNAS [inaudible 0:26:12].

This is where the decision comes. Are you cold-cutting that data and sending it over to the cloud via Azure Data Box or Snowball, or is it a possibility for you to use Lift & Shift and do a warm cut-over to the SoftNAS device you have in the cloud?
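A warm cut-over can be sketched as repeated delta-sync passes against the live source until a pass finds nothing left to copy, at which point writes are frozen and clients are repointed. The toy version below uses plain file copies and modification times; the real Lift & Shift feature (and UltraFAST’s congestion handling) is considerably more involved.

```python
import shutil
from pathlib import Path

def changed_files(src: Path, dst: Path):
    """Yield (source, target) pairs for files missing or stale on the target."""
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            if not target.exists() or target.stat().st_mtime < f.stat().st_mtime:
                yield f, target

def sync_pass(src: Path, dst: Path) -> int:
    """Copy one round of deltas; return how many files were copied."""
    copied = 0
    for f, target in changed_files(src, dst):
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)  # copy2 preserves mtime, so unchanged files skip next pass
        copied += 1
    return copied

def warm_cutover(src: Path, dst: Path, max_passes: int = 10) -> None:
    """Sync while the source stays live; stop when a pass copies nothing."""
    for _ in range(max_passes):
        if sync_pass(src, dst) == 0:
            break
    # Here the source would be frozen and clients repointed to the new storage.
```

Each pass shrinks the remaining delta, so the final freeze-and-switch window is short — the practical difference between a warm cut-over and shipping one cold copy on a Snowball or Data Box.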

Trace:              Thank you, Kevin. I appreciate you taking some time to talk about what SoftNAS does and what our features are. In the next part, we want to take a moment and talk about a customer success story.

We’ve got a customer up in New York City called iNDEMAND. You may have never heard of them, but they are very much part of our lives. They are the number one distributor for all your video-on-demand and pay-per-view types of products.

What we mean by that is that they distribute EST to over 200 North American TV operators, which in turn reach well over 60 million homes. When you see that availability on your local dish, or even pay-per-view in a hotel, the operators order those particular movies from iNDEMAND, and iNDEMAND uploads them into their system.

It’s been a very good relationship. They’ve been with us since around 2015, and they are one of the raving fans that we have. We’ll go ahead and explain the challenge they had.

It was definitely a file server and storage array consolidation that they were trying to solve, for these large video files. Kevin, I’ll bring it back to you to tell the crowd a little bit more about the story and what they solved.

Kevin:              Not a problem. SoftNAS and Keith Son who is the director of infrastructure at iNDEMAND discovered each other at an AWS summit. At that point in time, it was the goal of iNDEMAND to increase their flexibility by running their storage in the cloud.

It had gotten to the point that managing their over 1 PB of on-premises storage was costly. They were paying an outrageous sum of money every refresh cycle just for the maintenance cost associated with that storage.

Keith came by. He wanted to focus more on flexibility. How can I save money by moving to the cloud? How can I get it so that I am not cutting down on my performance?

We got the time actually to sit down and talk with him and did an ad-hoc design of what that environment should look like. One of his biggest stress-points was that he wanted to remove the complicated programming from his administrators.

He did not want the administrators to have to dig in and figure out, “If I’m sending it to S3, I need to use this API. If I’m sending it to an EBS disk, I need to do this.”

He wanted it to be brain-dead simple and that’s what SoftNAS brought to him. One of his biggest griefs was the fact that it took them months to be able to add storage to the environment and that was not working for them.

Given the fact that they were bringing in new data on a regular basis and needed to make decisions as quickly as possible, they needed to have storage when they needed it, not months later.

As we went through the design of what SoftNAS could do and was supposed to do in their environment, Keith fell in love with it. iNDEMAND fell in love with it. Within a couple of hours after that conversation, they had spun up an instance within AWS and added storage to that device.

They had production storage within an hour after having that conversation, and them actually going through and testing the environment.

Our solution allowed iNDEMAND to add storage in minutes instead of months. It scales out up to potentially 16 PB. We have customers that are running hundreds and hundreds of Terabytes within multiple cloud environments.

One of the things we were also able to help Keith do: Keith was able to run multiple workloads in the cloud with minimal to no application changes at all.

Being able to lift that application to the cloud, attach it to the storage, and utilize it just the same way that they always had.

Trace:              Today, Keith runs about 200 TB in SoftNAS on the AWS side. He’s told me many times that the beauty of it was, “I just have to present a mount point to my workload, and from there, I know that everything is safe, protected, and it’s going to work. I don’t have to worry about SoftNAS being up, because it’s always up.”

It’s been a great success story here. As you saw Kevin talk through there, they had a nice [inaudible 0:32:10] shop. They have a petabyte on the ground. They didn’t want to buy any more. They had enough.

This was a way that they could offset that cost and grow that piece out of it without having to go invest in the infrastructure or invest in the footprint in the data center in Manhattan.

As for success stories, we’ve got other ones here, but we wanted to highlight this one for you.

How do I get to SoftNAS? We are available on both marketplaces, AWS and Azure. In both marketplaces, you have the ability to do a free trial. What you see right there is an example of it.

If you go to our web page, you can find access to the free trial, and our marketplace pricing is up there too.

Within the marketplace, you’re going to find listings you can pick from; you can also pick a particular instance or VM and get an idea of what that costs.

Again, the storage on the backend varies. You’ll need to figure that part in if you’re looking at total cost of ownership. If you’re looking for help, we do have a Quick Start Program.

What that means is that if you want to go do a free trial, you’re more than welcome. We welcome you to go do that and do it on your own. But if you need that white glove help, come out and talk to us.

I’ve got staff here that are anxious to help you through that and help with the onboarding piece too. We have much more information on our website: more webinars, architectural white papers, as well as a contact-us page.

I know it’s in AWS Marketplace. At the same time, we also run in the Azure Marketplace and are very successful around that piece too. Let me give you another piece of information before a question comes up of, “How do I buy?”
The first place I would go: you buy on the marketplaces, and you buy Pay-As-You-Go. On AWS, you have choices between a 1 TB, a 10 TB, a 20 TB, and a 50 TB listing.

Also, we offer a Consumption listing on the AWS side, but unfortunately, there is no free trial associated with that. With Consumption, you use as much as you need and you pay by the drink.

If you need 2 TB only, you can buy 2 TB. If you need 102, you can do 102 TB. We have those available. On the Azure side, they come in 1 TB, 10 TB, 20 TB, and the 50 TB.

If you’re looking for something greater than that, or if you want to go with a longer type of contract, you can buy directly from us or through our partners or resellers.

You can buy BYOL, but BYOL contracts are for a minimum of 12 months. Gold Support, which is 24/7 support, is always available with all of these. We promise you it is some of the best support that you’ll ever see.

We constantly get praise around our support side because when you need help, you’ve got 24/7 help there to help you with that piece.

If you don’t go with Gold Support, our Silver Support is 9:00 to 5:00, follow-the-sun. If you’re in Australia and you’re on Silver Support, we’ll get someone in your region, 9:00 to 5:00, to help you.

With that, I’m going to go next and go look for any questions that we may have out there.

I got a question here. Is it the same product on Azure and AWS?

It is. In fact, the core is the same and the feature sets are the same. If you’re running BYOL, there is no difference at all.

The only thing that Azure doesn’t offer that AWS offers is the Consumption listing. Everything else around the other listings of 1, 10, 20, and 50 TB are all the same.

We’re just starting to hear people wanting to talk about multi-cloud strategies. You can use this in a multi-cloud setup, with the same core in both clouds, and we can talk between the two.

If you’re looking for that type of use case with it, please come back to us. Any other questions? Do you see any questions out there, Kevin?

Kevin:              Yes, I had a question out there. Do we run on-prem? Yes, we do. We are a virtual appliance. We have an OVA that you can put within your VMware environment; you can run it on-prem, and we can address any storage connected to your ESXi server.

Trace:              We have another question here, and I’ll let you answer this one, Kevin. How long does it take in an HA environment to fail over to the secondary node?

Kevin:              I’ll actually answer that in two parts. Between the two nodes, data is synced every 60 seconds. The way that data is synced is by sending the deltas over from primary to secondary.

Since only deltas are sent over from primary to secondary, replication should be quick, easy, and simple. It’s also ZFS-based, so it’s only blocks of data that we’re sending over.

As for failing over to an instance, it depends. If there is a failure and you are looking for it to fail over, you as a customer determine how many read checks need to miss before it does a failover.

If you’re just going with the defaults, you could actually fail over within 60 seconds. If you’re looking to do a takeover, that is instantaneous. Literally, you can go into the SoftNAS device, go to the secondary, and tell it to take over; that way, the VIP is routed to the secondary and you immediately have access to the storage behind it.
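The delta-only replication Kevin describes can be modeled simply: compare the current block map against the last snapshot sent and ship only what changed, the way a ZFS incremental send avoids re-transmitting the whole dataset. The block-map structure here is a simplification for illustration, not the product’s on-wire format.

```python
def delta_blocks(prev_snapshot: dict, current: dict) -> dict:
    """Blocks (keyed by block number) that are new or changed since the last snapshot."""
    return {n: data for n, data in current.items() if prev_snapshot.get(n) != data}

def replicate(prev_snapshot: dict, current: dict, send) -> dict:
    """Ship only the delta to the secondary, then return the new baseline snapshot."""
    send(delta_blocks(prev_snapshot, current))
    return dict(current)  # current state becomes the next cycle's baseline
```

Run on a 60-second cycle, each payload stays proportional to what changed in that window, which is why the per-cycle sync can be quick even on large datasets.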

Trace:              Any other questions out there? Excellent. Again, I want to thank everyone for coming. I appreciate your attendance. If you’d like to find out any more information about SoftNAS, you saw that information on the slides, or just email us at sales@softnas.com.

We are going to have another webinar next month, where we’ll talk about how to migrate your application data: how to look at particular mission-critical workloads, run them in the cloud, and get the same type of performance you have on-prem while keeping all the data protection capabilities you have on-prem.

We appreciate your time. We appreciate your attendance and have a nice day. Take care.
