Webinar Recap – Three Ways to Slash Your Enterprise Cloud Storage Cost

The above is a recording, and the following is a full transcript of the webinar, “Three Ways to Slash Your Enterprise Cloud Storage Cost.” You can download the full slide deck on SlideShare.

My name is Jeff Johnson. I’m the head of Product Marketing here at Buurst. In this webinar, we will talk about three ways to slash your Enterprise cloud storage cost.

Companies trust Buurst for data performance, data migration, data availability, data control and security, and what we are here to talk about today is data cost control. We think about the storage vendors out there. The storage vendors want to sell more storage.

At Buurst, we are a data performance company. We take that storage, and we optimize it; we make it perform. We are not driven or motivated to sell more storage. We just want that storage to run faster.

We are going to take a look at how to avoid the pitfalls and the traps the storage vendors use to drive revenue, how to prevent being charged or overcharged for storage you don’t need, and how to reduce your data footprint.

Data is increasing every year. 90% of the world’s data has been created over the last two years. Every two years, that data is doubling. Today, IT budgets are shifting. Data centers are closing – they are trying to leverage cloud economics – and IT is going to need to save money at every single level of that IT organization by focusing on this data, especially data in the cloud, and saving money.

We say now is your time to be an IT hero. There are three things that we’re going to talk about in today’s webinar.

We are going to look at all the tools and capabilities you have in your on-premises solutions, what it takes to move them into the cloud, and which of those capabilities already exist in cloud-native storage and which don’t.

We’ll also take a look at reducing the total cost of acquisition. That’s pure Excel-spreadsheet math: which cloud storage to use and what it costs, without taxing you on performance. And speaking of performance, we’ll look at reducing the cost of performance, because some people want to maintain performance but spend less.

I bet you we could even figure out how to have better performance with less costs. Let’s get right down into it.

Reducing that cost by optimizing data

We think about all of these tools and capabilities that we’ve had on our NAS, on our on-premise storage solutions over the years. We expect those same tools, capabilities, and features to be in that cloud storage management solution, but they are not always there in cloud-native storage solutions. How do you get that?

Well, that’s probably pretty easy to figure out. The first one we’re going to talk about is deduplication – inline deduplication. Blocks are compared against each other to see which duplicates we can eliminate, leaving just a pointer in their place. To the end user, it looks like they still have the full file, but the duplicate blocks are only stored once.
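To make the idea concrete, here is a minimal Python sketch of block-level deduplication with pointers. It illustrates the general technique only, not SoftNAS’s actual implementation, and the 4 KiB block size is an arbitrary assumption.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size, for illustration only


def dedupe(data: bytes):
    """Split data into fixed-size blocks, store each unique block once,
    and keep an ordered list of pointers (hashes) describing the file."""
    store = {}      # hash -> unique block contents
    pointers = []   # ordered hashes that reconstruct the file
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # a duplicate block is not stored again
        pointers.append(digest)
    return store, pointers


def rebuild(store, pointers) -> bytes:
    """The end user still sees the whole file: just follow the pointers."""
    return b"".join(store[h] for h in pointers)


if __name__ == "__main__":
    data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated content dedupes well
    store, pointers = dedupe(data)
    assert rebuild(store, pointers) == data
    print(f"logical blocks: {len(pointers)}, unique blocks stored: {len(store)}")
```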

In most cases, deduplication reduces that data storage by 20 to 30%, and this becomes exponentially more important with cloud storage.

The next one we have is compression. With compression, we reduce the number of bits needed to represent the data. Typically, we can reduce the storage cost by 50 to 75%, depending on the types of files out there that can be compressed, and this is turned on by default with SoftNAS.

The last one we want to talk about is data tiering. 80% of data is rarely used past 90 days, but we still need it. With SoftNAS, we have data tiering policies, or aging policies, that move data from more expensive, faster storage to less expensive storage, and eventually all the way back to ice-cold storage.
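As a rough illustration of what an age-based tiering policy does, here is a small Python sketch. The tier names and the 30- and 90-day thresholds are assumptions for the example; actual aging policies are configurable.

```python
from datetime import datetime, timedelta

# Assumed thresholds for illustration; real aging policies are configurable.
TIER_RULES = [
    (timedelta(days=90), "cold_object_storage"),  # rarely touched past 90 days
    (timedelta(days=30), "standard_hdd"),
    (timedelta(days=0), "fast_ssd"),
]


def choose_tier(last_access, now=None):
    """Return the cheapest tier whose age threshold the data has crossed."""
    now = now or datetime.utcnow()
    age = now - last_access
    for threshold, tier in TIER_RULES:
        if age >= threshold:
            return tier
    return "fast_ssd"


if __name__ == "__main__":
    now = datetime(2020, 6, 1)
    for days in (5, 45, 200):
        print(f"data last touched {days:>3} days ago -> "
              f"{choose_tier(now - timedelta(days=days), now)}")
```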

We can gain real efficiency from this tiering, and for a lot of customers, we’ve reduced the storage cost of an active data set by 67%.

What’s crazy is when we add all of these together. If I take 50 TB of storage at 10 cents per GiB, that’s $5,000 a month. If I dedupe that by just 20%, it comes down to $4,000 a month. If I then compress it by 50%, I can get it down to $2,000 a month. Then if I tier it with 20% on SSD and 80% on HDD, I can get down to $1,000 a month, reducing my overall cost by 80% – from $5,000 to $1,000 a month.
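The arithmetic in that example can be checked with a few lines of Python. The dedupe and compression ratios come straight from the talk; the cold-tier price below is an assumption chosen so the tiered total lands near the $1,000 figure, since the exact HDD rate isn’t quoted.

```python
def monthly_cost(tb, price_per_gib):
    # The webinar math treats 1 TB as 1,000 GiB for round numbers.
    return tb * 1000 * price_per_gib


raw_tb = 50
ssd_price = 0.10     # $/GiB-month, quoted in the talk
hdd_price = 0.0375   # assumed cold-tier rate, picked to land near $1,000

baseline = monthly_cost(raw_tb, ssd_price)         # $5,000
after_dedupe = baseline * (1 - 0.20)               # $4,000
after_compress = after_dedupe * (1 - 0.50)         # $2,000

stored_tb = raw_tb * (1 - 0.20) * (1 - 0.50)       # 20 TB actually stored
tiered = (monthly_cost(stored_tb * 0.20, ssd_price)     # 20% stays on SSD
          + monthly_cost(stored_tb * 0.80, hdd_price))  # 80% ages to HDD

print(f"baseline     ${baseline:,.0f}/month")
print(f"deduplicated ${after_dedupe:,.0f}/month")
print(f"compressed   ${after_compress:,.0f}/month")
print(f"tiered       ${tiered:,.0f}/month ({1 - tiered / baseline:.0%} saved)")
```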

Again, not everything is equal out in the cloud. With SoftNAS, obviously, we have dedupe, compression, and tiering. AWS EFS does have tiering – great product. AWS FSx has deduplication but not compression or tiering. Azure Files has none of these.

Actually, with AWS EFS Infrequent Access storage, they charge you to write to and read from that cold tier. They charge a penalty to use the data that’s already in there. Well, that’s great.

Reducing the total cost of acquisition: just use the cheapest storage

Now, I see a toolset here that I’ve used on-premises. I’ve always used dedupe on-premises. I’ve always used compression on-premises. I might have used tiering on-premises, but there it was really tiering across disk types like NVMe, and that’s great.

I see the value in that, but TCA is a whole different ballgame here. It’s self-managed versus managed. There are different types of disks to choose from. Like I said earlier, this is just Excel-spreadsheet stuff: what do they charge, what do I pay, and which option costs the least.

We look at this in two different buckets. We have self-managed storage, like NVMe disks and block storage, and we have managed storage as a service, like EFS, FSx, and Azure Files.

If we drill down a little, there are things that you still need to do and things that your managed storage service will do for you. Of course, if it’s self-managed, you need to migrate the data, mount the data, grow the data, share the data, secure the data, and back up the data. You have to do all those things.

Well, what are you really paying for? Because even with a managed storage service, I still have to migrate the data. I have to mount the data. I have to share and secure the data. I have to recover the data, and I have to optimize that data. What am I really getting for that price?

On price: for block storage, AWS is 10 cents per GiB per month and Azure is 15 cents per GiB per month. For the things I’m trying to offload – securing, migrating, mounting, sharing, recovery – I’m still going to pay 30 cents for EFS, three times the price of AWS SSD; or 23 cents for FSx; or 24 cents for Azure Files. I’m paying a premium for the storage, but I still have to do a lot at the management layer.

Let’s dive a little deeper into all that. EFS is really designed for NFS connectivity, so my Linux clients. AWS FSx is designed for Windows clients with CIFS/SMB, and Azure Files likewise serves CIFS/SMB. That’s interesting.

If I’m on Amazon and have both Windows and Linux clients, I have to have an EFS account and an FSx account. That’s fine. But wait a second. This is a shared access model. I’m in contention with all the other companies who have signed up for EFS.

Yeah, they are going to secure my data, so company one can’t access company two’s data, but we’re all in line, in contention for that storage. So what do they do to protect me and still give me performance? Yeah, it’s shared access.

They’ll throttle all of us, but then they’ll give us bursting credits and bursting policies. They’ll charge me for extra bursting, or I can just pay for increased performance, or I can just buy more storage and get more performance.

At best, I’ll have an inconsistent experience. Sometimes I’ll have what I expect. Other times, I won’t have what I expect – in a negative way. For sure, I’ll have all of the scalability, all the stability and security with these big players. They run a great ship. They know how to run a data center better than all on-premises data centers combined.

But compare that to self-managed storage. Self-managed, you have a VM out there, whether it’s Linux or Windows, and you attach storage to it. This is how we attached storage back in the ‘80s and ‘90s – client-server with directly attached storage. That wasn’t a great way to manage an environment.

Yeah, I had dedicated access, consistent performance, but it wasn’t very scalable. If I wanted to add more storage, I had to get a screwdriver, pop the lid, add more disks, and that is not the way I want to run a data center. What do we do?

We put a NAS between all of my storage and my clients. We’re doing the same thing with SoftNAS in the cloud. With SoftNAS, we use the NFS protocol, the CIFS protocol, or iSCSI to connect just my company’s VMs to the NAS and have the NAS manage the storage out to those VMs. This gives me dedicated access to storage and consistent, predictable performance.

The performance is dictated by the NAS. The bigger the NAS, the faster the NAS. The more RAM and the more CPU the NAS has, the faster it will deliver that data down to the VMs. I will get that Linux and Windows environment with scalability, stability, and security. Then I can also make that highly available.

I can have duplicate environments that give me data performance, data migration, data cost control, data availability, data control, and security through this complete solution. But you’re looking at this and going, “Yeah, that’s double the storage, that’s double the NAS.” How does that work when you’re talking about Excel spreadsheets kind of data?

Alright. We know that EBS storage is 10 cents per GiB per month and EFS storage is 30 cents per GiB per month. The gap widens as I add more terabytes to my solution.

If I add a redundant set of block storage and a redundant set of VMs, and then I turn on dedupe and compression, and then I turn on my tiering, the price of the SoftNAS solution itself is so much smaller than what you pay for the storage that it barely affects the storage cost. This is how we’re able to save companies huge amounts of money per month on their storage bill.
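For a spreadsheet-style comparison, here is a hedged Python sketch. The EBS and EFS rates are the ones quoted above; the per-instance NAS cost and the dedupe and compression ratios are placeholder assumptions, since the real numbers depend on instance size and workload.

```python
def managed_cost(tb, efs_price=0.30):
    """EFS-style managed file storage: pay per GiB stored."""
    return tb * 1000 * efs_price


def nas_on_block_cost(tb, ebs_price=0.10, nas_instances=2, nas_monthly=350.0,
                      dedupe=0.20, compress=0.50):
    """Redundant block storage behind an HA pair of NAS VMs, with dedupe and
    compression turned on. nas_monthly is an assumed figure, not a quoted price."""
    stored_tb = tb * (1 - dedupe) * (1 - compress)
    storage = 2 * stored_tb * 1000 * ebs_price  # duplicate storage for HA
    return storage + nas_instances * nas_monthly


for tb in (10, 50, 100):
    print(f"{tb:>4} TB   managed ${managed_cost(tb):>9,.0f}/mo"
          f"   NAS on block ${nas_on_block_cost(tb):>9,.0f}/mo")
```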

This could be the single most important thing you do this year because most of the price of a cloud environment is the price of the storage, not the compute, not the RAM, not the throughput. It’s the storage.

If I can reduce and actively manage, compress, optimize that data and tier it, and use cheaper storage, then I’ve done the appropriate work that my company will benefit from. On the one hand, it is all about reducing costs, but there is a cost to performance also.

Reducing the Cost of Performance

No one’s ever come to me and said, “Jeff, will you reduce my performance?” Of course not. Nobody wants that. Some people want to maintain performance and lower costs. We can actually increase performance and lower costs. Let me show you how that works.

We’ve been looking at this model throughout this talk. We have EBS storage at 10 cents with a NAS, a SoftNAS between the storage and the VMs. Then we have this managed storage like EFS with all of the other companies in contention with that storage.

It’s like me working from home, on the left-hand side, and having a consistent experience to my hard drive from my computer. I know how long it takes to boot. I know how long it takes to launch an application. I know how long it takes to do things.

But if my computer is at work in the office and I have to hop on a freeway, I’m in contention with everybody else who’s going to work and also needs to get to the hard drive in the computer at their office. Some days the traffic is light and fast, some days it’s slow, some days there’s a wreck and it takes twice as long to get there. It’s inconsistent. I’m not sure what I am paying for.

If we think about what EFS does for performance – and this is based on their website – you get more throughput the more storage you have. I’ve seen blog articles where a lot of developers describe a workaround.

They say, “If I need 100 MB/s of throughput for my solution and I only have one terabyte worth of data, I’ll put an extra terabyte of dummy data out there on my share so that I can get the performance I want.” That’s another terabyte at 30 cents per GiB per month that I’m not even going to use, just to get the performance I need.
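To see why that workaround gets expensive, here is a short Python sketch. It assumes the bursting-mode baseline the example relies on, roughly 50 MB/s of throughput per TB stored, and the 30 cents per GiB rate quoted earlier.

```python
EFS_PRICE_PER_GIB = 0.30       # $/GiB-month, rate quoted in the talk
BASELINE_MBPS_PER_TB = 50      # assumed bursting baseline: ~50 MB/s per TB stored


def padding_cost(real_data_tb, needed_mbps):
    """Extra monthly cost of storing dummy data just to hit a throughput target."""
    required_tb = needed_mbps / BASELINE_MBPS_PER_TB
    padding_tb = max(0.0, required_tb - real_data_tb)
    return padding_tb, padding_tb * 1000 * EFS_PRICE_PER_GIB


if __name__ == "__main__":
    pad_tb, dollars = padding_cost(real_data_tb=1, needed_mbps=100)
    print(f"dummy data needed: {pad_tb:.1f} TB -> extra ${dollars:,.0f} per month")
```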

Then there’s bursting, then there’s throttling, and then it gets confusing. We are focused on delivering performance – SoftNAS is a data-performance company. We have performance levels, or scales: 200, 400, 800, up to 6,400. Those relate to the throughput and IOPS that you can expect from the solution.

We are using storage that’s only 10 cents per GiB on AWS. It’s dedicated performance: you determine the performance you need and then buy that solution. On Azure, it’s a little different. Their denominator for performance is vCPUs: a 200 is 2 vCPUs, a 1,600 is 20 vCPUs. Then we publish the IOPS and throughput that you can expect from your solution.

So, to reduce the cost of performance: use a NAS to deliver the storage in the cloud; get predictable performance; use attached storage behind a NAS; use a RAID configuration. You can tune read and write cache, whether through the different disks you use or through the amount of RAM on the NAS.

Pay for performance. Don’t pay more for the capacity to get the performance. We just took a real quick look at three ways to slash your storage cost – optimizing that storage with dedupe, compression, and tiering, making less expensive storage work for you, right, and then reducing the cost of performance. Pay for the performance you need, not for more storage to get the performance you need.

What do you do now? You could start a free trial on AWS or Azure. You can schedule a performance assessment, where you talk with one of our dedicated people who do this 24/7 to look at how to get you the most performance at the lowest price.

We want to do what’s right by you. At Buurst, we are a data-performance company. We don’t charge for storage. We don’t charge for more storage. We don’t charge for less storage. We want to deliver the storage you paid for.

You pay for the storage from Azure or AWS. We don’t care if you attach a terabyte or a petabyte, but we want to give you the performance and availability that you expect from an on-premises solution. Thank you for today. Thank you for your time.

At Buurst, we’re a data-performance company. It’s your time to be this IT hero and save your company money. Reach out to us. Get a performance assessment. Thank you very much.   

Learn the new rules of cloud storage

SoftNAS is now Buurst, and we’re about to change the enterprise cloud storage industry as you know it.

Watch the recording of our groundbreaking live webinar announcement on 4/15/20 and learn how:

  • To reduce your cloud storage costs by up to 80% and increase performance (yes, you read that right!)
  • Applying configuration variables will maximize data performance, without storage limitations
  • Companies such as Halliburton, SAP, and Boeing are already taking advantage of these rules and effectively managing Petabytes of data in the cloud

Who should watch?

  • Cloud Architects, CIO, CTO, VP Infrastructure, Data Center Architects, Platform Architects, Application Developers, Systems Engineers, Network Engineers, VP Technology, VP IT, VP BI/Data Analytics, Solutions Architects
  • Amazon Elastic File System (EFS) customers, Amazon FSx customers, Azure NetApp Files customers, Isilon customers

Webinar: Get your On-Premises NAS in the Azure Cloud

The following is a recording and full transcript from the webinar, “Get your On-Premises NAS in the Azure Cloud.” You can download the full slide deck on SlideShare.

Full Transcript: Get your On-Premises NAS in the Azure Cloud

David Mitchell: Okay, folks, we’re just on the hour now, so let’s get started. I want to click on record. Okay, it’s done. First of all, welcome to today’s webinar. Today we’re going to be talking about getting your on-premises NAS into the Azure cloud. Today’s presenter is Matt Blanchard, a solutions architect with us here at SoftNAS.

My name is David Mitchell. Before I hand you over to Matt, I just have a couple of slides to cover. As I mentioned, Matt is our presenter today and I’ll hand you over to him shortly.

It looks like everyone has safely got into GoToWebinar. Hopefully, you can see and you can hear us. If it’s your first time using GoToWebinar, you do have a couple of options for audio. You can either use the mic and speakers or the telephone.

If you’re using the telephone, we do have a direct dial-in for most countries so make sure you do that and enter in your audio pin. If not, use your mic and speakers. You may need just to configure that if you have a couple of different options there in your local device.

Throughout the session today, we are going to have everyone on mute so the best way to handle a question, we found, is to use the questions pane. As Matt goes through the slides and the demo, if you have any questions please post them there.

We have allocated some time at the very end to go over the questions. I’m sure Matt will remind you as he goes through the webinar.

Lastly, as I mentioned and as you probably heard me saying about recording, we are recording this session. If you do need to leave or if a colleague couldn’t make it or if you know of someone else who’s interested and maybe couldn’t make it, we will be sending out a link to the recording after this and also a link to the slides.

We post our slides on SlideShare, so don’t worry about writing down notes or anything like that; you should get access to all the material. That’s it. I am going to now hand you over to Matt.

Matt, if you want to unmute your line. I’m going to make you presenter. I can see your slide, Matt, but I can’t hear you.

Matt Blanchard: Can you hear me now?

David: Yeah, loud and clear.

Matt: Great! Do you see the slides?

David: I do. You want to put it into presentation mode so we make sure we can.
Matt: I thought we were in presentation mode there. How’s that?

David: No, I can just see them in regular view.

Matt: We’ll try this one. How’s that?

David: No, it’s still the same for me, Matt.

Matt: I am sorry. Are you seeing the car on the background?

David: No, I’m just seeing a picture of the slides.

Matt: Let me do this.

David: I guess everyone is seeing on the webinar. I don’t know if you want to put a comment in the questions pane there if everyone is seeing the same thing.

Matt: How about that? Now, do you see just this?

David: Yeah, that’s perfect now. That’s it.

Matt: We will go from there. I’m sorry.

David: It’s perfect. Over to you, Matt.

Matt: I’m sorry about that David. Starting off once again, my name is Matt Blanchard. I am a principal solutions architect here at SoftNAS. Today, we’re going to talk about some of the advantages of using Microsoft Azure for your cloud storage devices inside the cloud and helping you make plans to move from your on-premise solution today into the cloud of tomorrow.

This is not a new concept. This is the trend we’ve seen for the last several years: the build-versus-buy question. We get a great economy of scale whenever we buy assets from, or partner with, an OpEx provider and use that partnership to advance our IT needs, versus a low economy of scale if I have to invest my own money to build up the information systems and buy from large SAN suppliers, networking, storage networks, and so forth.

Hosting and building all of that out myself takes a lot of capital investment. This is the paradigm: on-premises versus cloud architecture. A lot of the things we have to provide for ourselves on-premises are assumed and given to us as configurations in the cloud. With Microsoft Azure, for example, we have full-fledged VMs running inside our Azure environment and accessing our SoftNAS virtual SANs. We are able to give you network access control over all your storage needs within a small, packaged, usable space.

What does this afford us? I don’t have to build my own data center. I can have all my applications running in the cloud as services, versus running them on-premises on physical hardware that I have to maintain.

Think about rebuilding applications for the next generation of databases, or installing the next generation of server componentry that may not have the correct driver sets for our applications and having to rebuild all those things. It makes moving your architecture forward quite tedious.

However, when we start to blur those lines and move into, say, a hosting provider or a cloud service, those dependencies on the actual hardware devices and the physical device drivers start to fade away, because we’re running these applications as services and not as physically supported, siloed architectures.

This movement towards Azure in the cloud, it makes quite a bit of sense whenever you start looking at the economies of scale, how fast we could grow in capacity, and things like bursting control whenever we have large amounts of data services that we’re going to have to supply on-demand versus things that we have on a constant day-to-day basis.

Say we are a big software company or a big game company that’s releasing the next new Star Wars game. I’ll have to TM that or something in my conversation. You’ll have to see us. It might be some sort of online game that needs extra capacity for the first weekend out just to support all the new users who’re going to be accessing that.

This burstability and expandability into the cloud makes all the sense in the world, because who wants to spend money on hardware to build out infrastructure for something that may or may not continue to be that large of an investment in the future? We can scale it down over time or scale it up over time, either way. Maybe we undersized our build. You can think of it in that respect.

It really makes sense – this paradigm switch into the cloud mantra.

At SoftNAS, we’ve built our architecture to be flexible and adaptable inside of this cloud architecture. We’ve built a Linux virtual machine; it’s built on CentOS. It runs ZFS as our file system on that kernel.

We run all of our systems on open controllable systems. We have staff on-site that contribute into these open-source amalgams to make these systems better into CentOS and ZFS. We contribute a lot of intellectual property to help advance these technologies into the future.

We, of course, run HTML5 for our admin UI, we have PHP, and Apache is our web server. We have all these open systems so that we can take advantage of a great open-source community out there on the internet.

We integrate with multiple different service providers. If you have customers that are currently running in AWS or CenturyLink Cloud and they are looking to migrate into Azure – to make a change – it’s very easy for us to come in and help make that data migration happen, because inserting a SoftNAS instance into both of those service providers and then simply migrating the data is a very simple and easy task.

We are actually going to cover that in our demonstration here in just a short few slides. I promised David I would not slide you all to death today. We’re going to go through a few of these slides further, then we’re going to get into a demonstration, then we’ll touch on a few that we’re going to end up with, and then we’ll do a quick Q&A.

As I said, we really do take in responses. We want to be flexible. We want to be open. We want all of our data resources to support multiple use cases. We are able to offer a full-featured NAS service that does all of these things in the data services tab.

Block replication, we can do inline deduplication, caching, storage pools, thin provisioning, writable snapshots, and snapclones. We can do compression, encryption. All of these different offerings, we are able to give you in a single packaged NAS solution.

Once again, all the things that you think you’ll come back in like, “I’m going to have to implement all of that stuff. I’m going to have to buy all these different componentry and insert them into my hardware,” those are things that are assumed and used and we are able to go ahead and give you directly in our NAS solution.

How does SoftNAS work? To be very forthcoming, it’s basically a gateway technology. We can present storage capacity whether it’s CIFS/SMB access for a Windows file share, an NFS share for Linux machines, an iSCSI block device, or even the Apple Filing Protocol (AFP) for entire-machine backups.

If you have end-users or end-devices that need storage repositories of multiple different protocols, we are able then to store that data into say an Azure Blob Storage or even a native Azure storage device.

We are able then to translate those protocols into an object protocol, which is not a native language. We don’t speak in object whenever we’re going through a normal SMB connection, but we do also speak native object directly into Azure Blob. We offer the best of both worlds with this solution.

Just the same with native block devices: we have a native block protocol that we can use to talk directly to Azure disks attached to these machines. We are able to create flexible containers that make data uniformly accessible.

How does this play out and work in the real world? What we’re basically going to do is we’re going to present a single IP point of access that all of these file systems will land on. All of our CIFS access, all of our NFS exports, all of the AFP shares will all be enumerated out on a single SoftNAS instance and they will be presented to these applications, servers, and end-users.

The storage pools are nothing more than conglomerations of disks that have been offered up by the Microsoft Azure platform. Whether it’s Microsoft Blob or it’s just native disks, if it’s even another type of object device that you’ve imported into these drives, we can support all of those device types and create storage pools of different technologies.

And we can attach volumes and LUNs that have shares of different protocols to those storage pools so it allows us to have multiple different connection points to different storage technologies on the backend.

And we do this as a basic translation and it’s all seamless to the end-user or the end device.

We’re going to go really quick into a demonstration of this. If you don’t mind, just stick with me here. David, please interrupt me if my screen does not show up correctly here. I should be showing my screen now that has my Azure portal on it.

What we’re going to do right now is we’re going show you how easy it is to deploy a SoftNAS virtual machine into my Azure portal. I’ve got both the virtual portal up here as well as Microsoft Azure…
My Azure portal has timed out on me so I’ll just come back to this one here. I’m going to show you how to deploy this VM within the gallery. It’s very simple. All we have to do is come down and click on new.

I’m going to select compute in virtual machine. Once I select that, I’m going to select gallery and it’s going to bring up a selector. I could simply come in here and insert SoftNAS. Once I type soft, it’s actually going to appear here and I can select my instance that I would like to provision.

I’m not going to go all the way through this provisioning wizard, but you can kind of get the gist. In the interest of time, I’m not going to actually build these machines up for us now.

We’re going to go through and call this MS Blob demo. You would then select your different platforms. If we’re going to have an A2. I think D4 is one of our standard offerings. We can build out these machines in a multitude of different ways according to your data needs.

If you’re going to be doing quite a bit of caching for read/cache, we might want to increase the RAM size because ZFS is very heavy on RAM for caching. We might come in here and add more memory, 28 gigs of memory.

We can come in then and create a user. Let’s call it SoftNAS and give it a password. Create a password. I’m sorry I’m not great at talking while I’m typing. Then we just continue forward.

After we select our password, we can come in and create a new cloud service or select a cloud service that we’ve already created before. Then we’ll come in and add some DNS names for this.

We can come in and add some different information for our network in our subnet if we wanted to select a different network. The last piece that we would need to use is set up SSH access as well as HTTPS. Where is HTTPS? There it is.

Once that is created, we are ready to go. We would be able to come in here and click next, next, next, and it would create this instance. I’m going to go ahead and kill this and show you all what we are going to be presented with.

You are going to be presented with a machine that looks something like this. After this machine has built up and everything is lined out correctly, you’re going to have a SoftNAS machine that you’re going to log into and be presented with this UI.

Now how do I add disk repositories to this? How do I add resources? If I want to add a native Microsoft disk to this or an Azure disk, I can come back into my Azure portal and simply select my system that I would like to add it. I am going to come in and I’m going to click on the dashboard.

Then down here at the bottom… Oops! I’m on the wrong one. I think I need to be on the SoftNAS one here. Yes, this is the one I need to be on.

Down at the bottom here, you’ll see attach and I can attach a new disk. Create a disk, I’ll call this one 10 gigs and attach. It will go through the attachment process of this disk.

Once it finishes, the disk will be available for use. We could move forward with adding a protocol as we chose. In this instance, I’m going to go ahead and show you all how to add a blob device as well.

A brand new option that we’ve just released is adding blob devices for use inside of our SoftNAS storage system. I’m back into my SoftNAS virtual machine – it’s running on my Azure system.

I’m going to come in and I’m going to add a device. I’m going to select Azure Blob. After I select Azure Blob, you’ll notice that I’m given my user name. I can put in MBlanchard is my user name.

I could come in and add my access key. I’m not going through the rigmarole of typing my access key out or copying and pasting it. I’m sorry, I don’t want to show that off to the whole world.

I’ll add a container base name here. We would want to customize this so I’m going to call this Matt Blob or something like that. You’ll notice that once I select off of that area, the Matt Blob container base name pops itself down here into the container name.

And that’s basically just coming in and creating a custom container. All containers in the world will have to be named something unique so we go ahead and throw in some unique characters here at the end of your base name to make sure that it’s completely randomized and unique.
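Conceptually, the randomized container name looks something like the sketch below. This is just an illustration of the naming idea, not SoftNAS’s exact scheme; it normalizes the base name to Azure’s container-name rules (lowercase letters, digits, and hyphens, 3 to 63 characters) and appends a random suffix.

```python
import random
import re
import string


def unique_container_name(base, suffix_len=8):
    """Normalize a user-supplied base name to Azure container-name rules and
    append random characters so the resulting name is effectively unique."""
    base = re.sub(r"[^a-z0-9-]", "", base.lower()) or "softnas"
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits,
                                    k=suffix_len))
    return f"{base}-{suffix}"[:63]


if __name__ == "__main__":
    print(unique_container_name("Matt Blob"))  # e.g. mattblob-4k2x9qpa
```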

We can select our disk size as we would with any maximum disk size. This is thin provisioned by default, but we’re going to have to set a maximum ceiling limit. Then we can select whether we’d like to encrypt this disk as well and give it a password to encrypt it with.

Once again, I would have to add my access key in here to create the blob devices, and I’m not going to go through that rigmarole. In the interest of time, I have gone ahead and added some blob devices and gotten us ready for the rest of the demonstration.

The rest of this demonstration is going to be going through and configuring two SoftNAS machines to talk with a synchronized ZFS replication running between them. Right now, what I have set up is two different machines.

You see they are pretty much identical, both machines have disk drives that have already been provisioned, and I have already provisioned these devices for use on a pool on my second machine.

On my second machine, I’ve already configured this pool, but I have not added any protocols – basically no files or data on these pools. I created this storage pool in the interest of time so I don’t have to create it twice.

I’m going to replicate this data on my primary instance. This primary instance could be in a datacenter that I am going to be using as my primary datacenter and my primary means of access.

Once that primary datacenter is up and running, which would be this machine, we were going to have storage repositories and protocols attached to this machine and all the data will be asynchronously replicated across the wire to our secondary machine.

This happens on a schedule about every one minute and it’s a ZFS sync replication that goes on. And after the copy happens, it will happen another one minute afterward, and one minute afterward, and so forth.

The two things that have to match for this replication are the name of the pool and the size of the pool. Both of those need to be the same in order for replication to happen.
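A trivial pre-flight check for that rule might look like the following sketch; the pool attributes are illustrative, not the actual SnapReplicate configuration format.

```python
def can_replicate(source_pool, target_pool):
    """SnapReplicate-style precondition: the pool on both nodes must have
    the same name and the same size before replication is set up."""
    return (source_pool["name"] == target_pool["name"]
            and source_pool["size_gib"] == target_pool["size_gib"])


primary = {"name": "microsoft_blob", "size_gib": 20}
secondary = {"name": "microsoft_blob", "size_gib": 20}
print("ok to replicate" if can_replicate(primary, secondary)
      else "pools do not match")
```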

Let’s go ahead and set up a pool that is equal to the pool that we’ve set up on our secondary machine. Our secondary machine, I have called Microsoft Blob and it has 10 gig disks. If you look at our details here, you’ll see that it has two disks that are hosted on this SoftNAS instance from Azure.

Let’s go ahead and do that on my primary machine. I come in here and click “create” on the pool creation wizard. I will name it Microsoft Blob just like my other one.

You’ll see that I have several different RAID options to use. I can use a JBOD array. I can use RAID 0 for striping. I can use RAID 10 for mirrors and stripes. And there’s RAID 5, 6, and 7 for parity: single parity, dual parity, and triple parity.

Just for demonstration’s sake, I used RAID 0 to give me the maximum speed possible across these two disks. I can select the two disks I would love to use. At the bottom, you’ll see a couple of different options.

I can force creation which basically says, “Hey, if there was a pool already created on these disks, overwrite it.” If you do have a pool on this disk and you’re trying to create another pool on top of it, we’re going to warn you because ZFS is very resilient and it can recover from a lot of errors.

If you do happen to have an issue where you disconnect a disk and it had a pool on it and now you reconnect it, we don’t want you to lose that data. It’s going to flag it and say, “Hey don’t use this disk. It already has a pool on it.”
LUKS encryption – that’s the Linux encryption system. We supply a password, and confirm it, to enable AES encryption. The last one is sync mode, which is a write checksum: making sure writes are landing on the disk correctly.

We have three options; standard, which does its best case to check the write on every write. If not, it comes back for it and checks it later. Always, it reserves CPU time to check every write. And disable which we don’t ever [inaudible 26:02] people using. That never checks that write. It just goes on forward and goes along its business. It is the fastest mechanism, but it is also the most careless and worrisome.

I’m going to go ahead and create this pool. Now we will have a pool of equal size and equal name to the target pool on my secondary instance.

There are a couple of other options I can do on this tab if I did come in later and I need to extend this for more data volume. I could come in and click “expand –add any disks to this array” and it’s going to add those disks along and make that storage larger.

I can import any ZFS pools that have come in orphaned. In a disaster recovery scenario, we can bring in those disks, attach them directly, and import the pools.

We can add a read/cache. If we have high-speed local disks, that would be great usage for read cache to allow us to have a certain percentage size space for read caching.

By default, ZFS takes up to half the system RAM for hot read caching, so we automatically have that much resource for caching. Adding a read cache device layers on top of that as a level-two cache to give us even more caching.

The last piece here is write logging. This is the ZIL, the ZFS Intent Log. It gives us write security for small writes, those under 32K. Anything we’re writing to disk gets recorded in the ZIL, and we can use it to replay where those writes had landed previously.

We can also add a hot spare device in here if we care to, but I’m not going to go into those any further. The next piece after we’ve created our storage pool is we need to create our writable protocols or our volumes or shares.

Let’s go over here to our Volumes and LUNs tab and create some volumes. We’re going to call this first one just Vol and keep it very simple. We will attach it to Microsoft Blob.

We can say let’s just do CIFS and AFP for this tab. We will thin provision this. Notice we can choose to thick or thin provision. We can choose if we’d like to use compression or deduplication.

A bit of warning: compression uses a little more CPU time, and deduplication is intensive on RAM, so we advise you to bump your RAM up by about 1 GB per terabyte of deduped data.
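That rule of thumb turns into a one-line sizing calculation; the base RAM figure below is an assumption for illustration, not a SoftNAS requirement.

```python
def recommended_ram_gb(deduped_tb, base_ram_gb=16):
    """Rule of thumb from the talk: add roughly 1 GB of RAM per TB of
    deduplicated data, on top of whatever the VM needs anyway (assumed here)."""
    return base_ram_gb + deduped_tb


for tb in (1, 10, 28):
    print(f"{tb:>3} TB deduped -> plan for ~{recommended_ram_gb(tb)} GB RAM")
```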

This is inline dedupe – ZFS is an inline file system, so everything is deduped on the fly and ready to go. Once again, we can set our sync mode directly on the volume versus on the pool. Either way, you can set it on volumes or pools.

Also, notice we have a snapshots tab. This allows us to select which type of snapshotting we’d like. If we’d like to have a default schedule which is about every three hours or so; 24/7, which is every single hour for every 24 hours. You can come in and edit that schedule or create schedules as you would like.

We also have a retention policy here and sets that retention times for each of the types of snapshots. These are ZFS snapshots that are stored on the volume itself. I’m going to go ahead and create a couple of these volumes to be used for our data just to demonstrate that whenever we do our replication that that data is actually replicated across the wire.

I’m going to select Vol 2, and this time we’ll do maybe NFS and CIFS. I’ll create it. Then we’ll create a block device on this last one.

Vol 3, and we’ll call this Microsoft Blob once again. This time I’m going to do an iSCSI LUN. Notice that whenever we select the iSCSI block device, our thick provisioning button is automatically selected. This is basically because most of the time, whenever you do have an iSCSI device, it has a finite LUN size.

I’m going to say 5 gigs. Also, notice that we have a LUN targets tab up here. That means we just need to generate an IQN for the devices to hook into. We’ll generate the IQN here so that all of our iSCSI initiators can connect to those targets, and click “create.”

Everything now is basically created. We’ve created disk repository shares ready for users to start dropping data into. If we wanted somebody to come in here and write to Vol 1, we would say, “Hey, this one’s a CIFS share. Go to \\<IP address>\microsoft_blob\vol1,” and you would have access to these volumes.

If you had an NFS share, you could come in and do the same with Vol 1 and Vol 2. All of these exports are all ready to go and ready to be written to.

Notice that we do have the ability to integrate directly with Active Directory. It’s a simple Active Directory wizard that asks you for your domain name, asks you for your NetBIOS name, and then asks for an administrator (a user that can add machines into AD). It basically handles joining the machine to AD.

Once that’s all done, this machine over here is added into Active Directory. You can then assign user rights and group rights to all its file shares and so forth within Windows.

Now we have everything set up. However, if we look at our secondary machine, we don’t have any data here. If we look at our volumes and LUNs tab, there is no data on this secondary machine.

We want to now have a backup, a replicated copy of data on this second machine. Let’s go ahead and set that up through something that we call SnapReplicate.

I’m going to go ahead and add our replication and we’re going to replicate to these other machines. This is 49.121.150.65. I’m going to give it its password. Let’s see which one is this?

Make sure this is the right password. I’m not sure if it is or not. Wrong password. Let’s try again. Next, and finish. Notice now on the background, work is in progress to set up this replication. Replication is now underway so you can see all the mirrors are going. Mirror complete, mirror on the way. Complete.

Now we’ve basically taken all that data, and if we did have volumes of user-data in here, we would have all that data now and it is now copied to our secondary machine. If I refresh here, I will now see all of my data repositories.

We can demonstrate how our replication works by simply coming in and saying, let’s go to a volume here. In volume 1, let’s come down and create a snapshot. Oops, we already have snapshots.

If somebody came to me and said, “Hey I’ve got information on Vol 1. I need to recover that and it’s in an NFS share.” I could say, “Okay, let me go ahead and build you a snapclone. Then you could mount that snapclone and grab that data for yourself.”
We already support Windows Previous Versions. If this was in a CIFS directory that was configured for Previous Versions, they would be able to do this all on their own.

However, in this instance, this is someone coming from an NFS background saying, “Hey I need access to this machine.” Notice now, I’ve created a snapclone of this information. Then they would be able to come in and mount that data.

Let’s come in over here and make sure that my replication is happening. Oh, I’ve just had a failure on it. Something happened. I’m sorry. I grabbed the wrong snapshot for that.

I would need to have a full snapshot in order to create a snapclone in order for it to be replicated. But basically, that’s the idea. All of our data is going to be copied from one machine directly over to the other machine. Every one minute, we will be doing a replication of that data.

That’s basically all we have for the demonstration. We’re going to jump back over to this slide where I talk about a couple of little use-cases. Then we’ll end up and close, get some questions answered, and finish up here.

Let me bring up the slideware one more time. David, can you see my slideware?

David:  Yeah, we can see that, Matt.

Matt: Great. A couple of use cases where SoftNAS and Azure really make sense. I’m going to go through these and talk about the challenge. The challenge: a company needs to quickly SaaS-enable a customer-facing application on Azure, but the app doesn’t support blob. They also need AD or LDAP integration for that application.

What would the solution be? One option would be rewriting your application to support blob and AD authentication, but it’s highly unlikely that would ever happen.

What else could you do? Instead of rewriting that application to support blob, continue to do business the way you always have. That machine needs access via NFS, fine. We’ll just support that via NFS through SoftNAS.

Drop all that data on a Microsoft Azure backend, store it in blob, and let us do the translation. Very simple access so then we could have access for all of our applications on-premise or in the cloud directly to whatever data resources they need and it could be presented with any protocol that’s listed – via CIFS, NFS, AFP, iSCSI.

The next use case, disaster recovery. This is what we did on the demonstration. The challenge is we have got a company that needs reliable off-site data protection.

Maybe they have a big EMC array at their location that they have several years of support left on. They need to be able to meter the use to it, but they need to be able to have a simple integration solution. What would be the solution?

It would be very easy to spin up a SoftNAS instance on the premise, directly access that EMC array and utilize the data resources for SoftNAS. We can then represent those data repositories to their application servers and end-users on site and replicate all that data using Snapreplicate into Microsoft Azure.

We would have our secondary blob storage in Azure and we’d be replicating all that data that’s on-premise into the cloud.

What’s great about this solution is that it becomes a gateway. When I get to the end of support on that EMC array and say, “We need to go buy a new array, or we need to keep paying support on that array,” we’ve got this thing running in Azure already – why don’t we just cut over? It’s the exact same thing that’s running in Azure. We could just start directing our application resources to Azure. It’s a great way to get you moving into the cloud and get a migration strategy moving forward.

The last one is hybrid on-premise usage and I alluded to this one earlier about the burst to cloud type of thing. This is a company that has performance sensitive applications that need a local LAN. They need off-site protection or capacity.

The solution basically would be to set up replication to Azure and then have that expand capacity. So basically whenever they run out of space on-premise, we would then be able to burst out into Azure and create more and more virtual machines to access that data.

Maybe it’s a web services account that has a web portal UI or something like that that just needs a web presence. Then we’re able to spin up multiple copies of different web servers, all load balanced and all accessing the same data on top of Microsoft Azure through SoftNAS.

All of these use cases are very possible. These are all use cases that I have had customers experience today.

Last, a SoftNAS overview of where our products land. SoftNAS Cloud is our main offering. It’s offered on Azure, AWS, vCloud Air, and CenturyLink Cloud. It is a public cloud NAS, so any resources locally available on that cloud offering are available to SoftNAS, as well as any object offering throughout the world. We can connect to any object store throughout the world and access it.

SoftNAS File Gateway. This is an on-premise NAS. It would be built off of a VMware architecture, so this is basically a SoftNAS VM that has access to your local NAS files as well as local disk storage.

SoftNAS Object Filer. This is aimed at somebody who doesn’t have local data resources but wants to utilize an object resource, either in the cloud or an object device locally. We give them an object filer that has just S3 object access included, so they can simply use object data repositories on that installation.

Last is SoftNAS for Service Providers, which creates a multi-tenant NAS solution. It has a REST API so you can integrate billing and tiering into the solution. It also has iSCSI connections to object storage, so we are able to use that type of connection with a multitude of different backend offerings.

Some of our last things are technology partners. We’d like to thank all of them – Microsoft, the Amazons, the VMwares. All these folks out there help us make our product great. We wouldn’t be here without Microsoft Azure helping us promote our product and go forward with a great solution.

Lastly, here is our brand sheet, people that you know that are today SoftNAS customers and we have many hundreds of customers out there that are not listed here.

Here’s just some of our customers that we work with directly — Netflix, Coca Cola, Nike, Boeing. We have all sorts of customers out there from all different verticals using our product in all different ways.

With that, I’m going to give it back to David. I’m going to take a look at some of the questions. While he finishes that up, I’ll go through some questions and we’ll go back to it.

David:  Okay, Matt. Thanks a lot. Again, just a reminder, if you have any questions, please use the questions pane, but I also have a few here that I’ll read out. Just on next steps: I’m sure for most of you it’s your first time hearing about SoftNAS and our solution.

If you want to learn more, we do have a free 30-day trial version of SoftNAS cloud on Azure that you can try. If you go to softnas.com/azure, you can download that version there and we can help you out.

If you want to learn a bit more, you can go to our website softnas.com/azure. If you want to contact us, you can go to the contact page there. If you have any follow-up questions for the likes of Matt and the team, you can go and also make sure to follow us on Twitter.

Matt, if you want to jump at it, there’s one question there and I have a few questions here that I can call out.

Matt:  The question is, “For a BDR solution, Cloud File Gateway for the client side with replication to SoftNAS.” You’ll want to replicate that data from on-premise file gateway up into the SoftNAS on Microsoft Azure. That’s correct.

David: Another question here, Matt. What version of NFS is supported?
Matt:  We support both version three and four for NFS. The follow-up that probably will be the question that will be asked is what versions of SMB do we support? We support 2 and 3 SMB.
David:  What’s the max latency SoftNAS will support for site replication?

Matt:  The max latency that we support for site replication is really not a fixed number. We are flexible enough to handle latency from any reasonable network. It’s not a set-in-stone figure where 200 milliseconds of latency is an acceptable or unacceptable range. We are very flexible with our solution. As long as we have a fairly reliable connection, we can make up for the latency and keep that SoftNAS snap replication going.

David:  Someone has a question here on RAID. What type of RAID is being used under SoftNAS?

Matt:  It’s built around software RAID, but we don’t tell you what type of RAID you have to use. It depends on your situation. If you’re inside Microsoft Azure and you trust their local disk storage enough that you’re not going to worry about RAID in your solution, or it’s not that pressing of a dataset, you can go ahead and use RAID 0 and get the fastest capabilities out of it.

However, if you’re on-premise and you don’t have a hardware RAID solution, we give you the ability to use up to RAID 7. If you wanted to use RAID 6 to give a really good performance and redundancy at the same time, you are welcomed to do that.

David:   I see Travis has another question there on the questions pane. How much would encryption inhibit or prevent deduplication benefits?

Matt:  That’s a tricky question. Deduplication actually happens on the fly, so we’re going to be doing the dedupe inline. Encryption is not going to come into play there. The encryption is going to happen on the actual container itself.

We are going to encrypt the channel itself and then whenever we drop the data in there it’s going to dedupe.

David:   A couple of more questions up here. Is it a good idea to use SoftNAS as a backup target? I think you covered that in one of our use cases there, I believe.

Matt: Absolutely, that is one of our biggest use cases. Can we use it as a backup target? I guess I didn’t touch on it as much in the use cases. I have done a previous webinar directly on this subject, where we demonstrated using a Veeam backup solution from a Windows 2012 server with SoftNAS as our target.

It is a great solution for backup solutions. We have used it here locally with a backup solution for SoftNAS incorporated. It is absolutely a perfect solution for that because we can provide fast access to any protocol that your backup solution needs.

David: That’s right. That webinar you can find on softnas.com in the webinars archive section, if you’re interested in playing it back. Just the last question that I have here: does SoftNAS provide performance reports to show or to see hot versus cold data volumes?

Matt: Absolutely. We do provide a dashboard that gives you access to all that data, so you can actually come in and see which data disks are getting hit the hardest and where we have data that’s just stored and asleep, basically never touched. We do have that dashboard view to see the data as it reports in. And we can actually export it via an SMTP server as well, so you can integrate it with SMTP or SNMP via things like WhatsUp Gold or a similar product.

David: I think that’s all the questions we have. That looks like that’s it. If you have any further questions, as I mentioned, there are a few places where you can contact SoftNAS. If you want to reach out and learn more or download a demo, please do that. I’m sure Matt and the team will be involved in that POC.

Matt, any addition to add at the end? Any common things that you see or what people should look out for or we’ve covered most of the areas?

Matt: No, I think I got everything covered. But yes, if you do have any questions, please do not hesitate to contact us. My email address is mblanchard@softnas.com. I am more than willing to answer any questions you have about SoftNAS and assist you all in doing a free trial and setting this up and getting it running.

David: Thanks, Matt. Thanks to everyone for attending. As you leave today, there will be a short survey. So if you can provide some feedback there, that also gives us an indication of any topics you’d like for future webinars.

As I mentioned at the start, recording and slides will be sent out very shortly. I hope to see you at the next webinar and have a good day. Thank you.

Webinar: Using AWS Disaster Recovery to Close Your DR Datacenter

The following is a recording and full transcript from the webinar, “Using AWS Disaster Recovery to Close Your DR Datacenter.” You can download the full slide deck on SlideShare.

Full Transcript: Using AWS Disaster Recovery to Close Your DR Datacenter

Taran Soodan:   Good morning, good afternoon, and good evening everyone. Welcome to a SoftNAS webinar today on how to use AWS for disaster recovery and close your DR data center.

In today’s webinar, we’ll be discussing how you can use AWS to manage your disaster recovery. We will give a brief overview of how it works, some of the benefits of using AWS, and an overview of some of the architectures that you can build to give you a better sense of just how easy it is to manage your DR with AWS.

Before we begin today’s webinar, we just want to cover a couple of housekeeping items. Webinar audio can be done through either your computer speakers or the telephone.

By clicking the telephone option, you’re able to go in and dial-in using your cell phone, or desk phone, or whatever phones you may have available.

We will also be answering questions at the end of the webinar. For those of you currently in attendance, if you have any questions that pop up during the webinar, post them in the questions pane and we’ll go ahead and answer them at the end.

Finally, the session is being recorded. For those of you who want to share this recording with some of your colleagues or you just want to watch it on-demand, we’ll go ahead and send you a link to the recording of the webinar and a link to the slides shortly after today’s webinar is over.

Also, to thank all of you for joining us today, we are offering $100 AWS credits. Later on at the end of the webinar, we are going to give you a link which you can click and go to earn a free $100 AWS credit as thanks for attending today’s webinar.

On the agenda today, we’ll be talking about AWS’s disaster recovery.

We’ll give you a brief overview of how it works and some of the benefits of using AWS for your disaster recovery.

We are going to demo how to actually build a Hot Standby DR environment on AWS. We will also give a brief overview of some DR architectures. We’ll talk a bit about HA on AWS. The main reason is, the more prepared you are for a disaster the better it will work out for you.

Finally, at the end of the webinar, we will be doing a Q&A. Again, if you have any questions during today’s webinar, please post them in the questions pane here on GoToWebinar and we’ll answer your questions at the end.

With that, let’s go ahead and get this party started. Briefly talking a bit about AWS’s DR, we want to knock out some terminology here in the beginning. There are four terms that you’re going to be hearing throughout this webinar – business continuity, disaster recovery, recovery point objectives or RPO, and recovery time objectives, otherwise known as RTO.

Business continuity is basically just ensuring that your organization’s mission-critical business functions continue to operate, or recover quickly from serious incidents.

Disaster recovery is all about preparing for and recovering from a disaster, meaning any event that has a negative impact on your business: hardware failures, software failures, power outages, physical damage to your buildings like fire, flooding, or hurricanes, or even human error. Disaster recovery is all about planning for those incidents.

The recovery point objective is the acceptable amount of data loss measured in time. For example, if your disaster hits at 12:00 PM, noon, and your RPO is one hour, basically, your system should recover all the data that was in the system before 11:00 AM, so your data loss will only span from 11:00 AM to 12:00 PM.

Finally on to the recovery time objective. That’s basically the time it takes after a disruption to restore your business processes back to the agreed upon service levels. For example, if your disaster occurs at noon and your RTO is eight hours, you should be back up and running by no later than 8:00 PM.
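As a quick sanity check of the two worked examples above, here is a minimal Python sketch; the noon disaster time, one-hour RPO, and eight-hour RTO are just the illustrative values from the talk:

```python
from datetime import datetime, timedelta

# Illustrative values only, matching the examples above.
disaster = datetime(2017, 1, 9, 12, 0)     # disaster hits at noon
rpo = timedelta(hours=1)                   # acceptable data loss, measured in time
rto = timedelta(hours=8)                   # acceptable time to restore service

oldest_guaranteed_data = disaster - rpo    # 11:00 AM -- data written after this may be lost
back_online_by = disaster + rto            # 8:00 PM  -- service restored no later than this

print(oldest_guaranteed_data.strftime("%I:%M %p"))   # 11:00 AM
print(back_online_by.strftime("%I:%M %p"))           # 08:00 PM
```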

What we want to stress during today’s webinar is we’re not saying that you need to shut down all of your datacenters and migrate them to AWS. All we are saying today is that you can keep your primary datacenter. That DR data center that you’re currently paying for, we’re saying you can close that down and migrate those workloads to AWS.

Here you can see on the far left, we have our traditional DR architecture. You have your primary data center, and you have your DR data center.

With the DR data center, there’s replication between the primary and the DR so that it can recover as soon as some kind of disaster happens in the primary datacenter – power outage, some kind of hardware failure, a fire, or flood. This way, your users are still able to be up and running without too much of an impact.

With your traditional DR, you have, at a minimum, the infrastructure that’s required to support the duplicate environment. These are things like the physical location, including power and cooling; security to ensure the place is protected; making sure that you have enough physical space; procuring storage; and enough server capacity to run all those mission-critical services, including user authentication, DNS, monitoring, and alerting.

What you’re seeing here on the far right with the AWS DR is that you keep your main data center, but you can set up replication to AWS using a few of the services we’ve listed out here: S3, Route 53, EC2, and SoftNAS.

You can use those to create your cloud-based disaster recovery. You’ll get all the benefits of your current DR data center but without having to maintain any additional hardware or have to worry about overprovisioning your current data center.

Here, we’ll give a brief overview comparing a DR data center to AWS. With your DR data center here on the left, there is a high cost to build it. You are responsible for storage, power, networking, and the internet connection. There is a lot of capital expenditure involved in maintaining that DR data center.

There’s also a high ongoing cost: storage is expensive, backup tools are expensive, and retrieval tools are expensive as well, and backup can take time. It often takes weeks to add more capacity because planning, procurement, and deployment just take time with your DR data centers.

It’s also challenging to verify your DR plans. Testing for DR on-site is very time-consuming and it takes a lot of effort to make sure that it’s all working correctly.

Here on the far right, we have the benefits of using AWS for your DR. There is very little capital expenditure when you go to AWS. The beauty of using AWS to manage DR is it’s all on-demand, so you’re only going to be paying for what you use.

There’s also a consistent experience across the AWS environments. AWS is highly durable and highly available, so there’s a lot of protection in making sure that your disaster recovery is going to be ready to go.

You can also automate your recovery, as well, and we’ll talk a bit about that later on during today’s webinar. Finally, you can even set up disaster recovery per application or business unit.

Different business units within your organization can have different disaster recovery objectives and goals.

Here, we’ll just talk a bit about the benefits of using the cloud for disaster recovery. You’ve got the automated protection and replication of your virtual machines. You can monitor the health of your data center through AWS.

The recovery plans are pretty customizable. You can also do no-impact recovery plan testing. You can test without having to worry about messing up any of your production data. You can orchestrate recovery when needed. Finally, you can do replication to and recovery in AWS.

Here, we’ll talk a bit about managing your DR infrastructure. Right now, on the left, with your current DR data center, you have to manage the routers, the firewalls, the network, the operating system, your SAN or NAS or whatever you may be using, your backup, and your archive tools.

The beauty of using AWS to manage your DR is you’re really only responsible for your snapshot storage. AWS is handling the routers, the firewalls, your backup, your archiving, and your operating systems. AWS just takes all that off your hands, so you don’t have to worry about these things, and you can focus on more important tasks and projects.

To give you a sense of just how AWS is able to map your DR data center, what this slide is showing is your services and then AWS’s services on the far right.

For example, with DNS, AWS has Route 53. For load balancers, AWS has Elastic Load Balancing. Your web servers can be EC2 with Auto Scaling. Data centers map to Availability Zones. Finally, disaster recovery can be multi-region.

Because everything is on AWS, there are some enterprise security standards that are definitely met. AWS is always up to date with certifications, whether it’s ISO, HIPAA, Sarbanes-Oxley, or other compliance standards that are constantly getting updated.

There is also the physical security aspect of it as well. AWS data centers are very secure, they are nondescript facilities, the physical access is strictly controlled, and they are logging all the physical access to the data centers.

Even with the hardware and software network, they have systematic change management, updates are phased, they also do safe storage decommission, there is also automated monitoring and self-audit, and then finally, advanced network protection.

Before we move on to the demo, we do want to have you all do just a quick poll here. Give me just a second to load that up.

To get a better understanding of everyone here in the audience, we currently just want to know, are you managing your disaster recovery right now with a DR data center or are you doing it with AWS?

If you’re not sure, that’s okay. It’s one of those things that you can probably easily find out by talking to your IT admins. Let’s go ahead. It looks like about half of you have voted, so we’ll give that about another 10 seconds before we close it up.

I’m going to go ahead and close the poll now. Looking at the results here, it looks like about half of you here are using DR data centers. For those of you who are using the DR data centers, one thing that we would love to know is exactly why you joined today’s webinar.

For those of you who are on the DR data centers; if you’re interested in moving your DR to AWS or you’re just curious, go ahead and in the questions pane, just let us know. The more information we have, the better we can assist you in the future.

It looks like a couple of you are not sure about how you’re managing disaster recovery. Again, that’s perfectly fine. What we recommend you do is you reach out to your IT admins and get some feedback from them to understand “We have a DR data center,” or you might be on AWS.

It also looks like a few of you are on AWS. For those of you who are on AWS, we’re going to give you a couple of tips and tricks, during today’s demo, on how to further leverage AWS’s DR capabilities.

That being said, now we’re ready to go ahead and demo how to build a standby environment on AWS. I’m going to go ahead and turn it over to one of my colleagues, Joey Wright, who is a solutions architect here at SoftNAS.

Joey, I’m going to give you screen control.

Joey Wright: Thanks, Taran. Hey everyone. Joey Wright, solutions architect. Give me one moment to share my screen. We’ve painted this picture of why AWS is a good fit for your DR data center.

I’d like to illustrate how SoftNAS fits in the process and how we make this transition to a cloud-based DR data center much more attainable, much more manageable, and much more available above and beyond what AWS natively offers within their system.

SoftNAS obviously at its core is a NAS product. What we’re going to do is we’re going to abstract all those modern storage offerings that AWS has.

All those different flavors of EBS Storage, the S3 Object storage, we’re going to abstract all of that from our services. All of those applications and all of the users that need to consume those applications. They won’t know what’s going on in the background. They’ll simply be able to access and consume that data, regardless of where it comes from, via native file system protocols.

What this means is that as you progress from on-prem DR to cloud-based DR, you don’t necessarily have to modernize all of your applications to consume cloud natively.

We can get those applications that aren’t targeted for modernization, be it budget, be it timeframe, be it lack of interest. We can get them to the cloud. We can put the SLA for availability and uptime on the shoulders of AWS and SoftNAS and we can be certain that that application is going to run because its data sits and is accessible natively within the cloud.

To walk you through the soup-to-nuts overview at a high level, this process starts within our AWS console. We’re assuming that you’ve already got your infrastructure laid out and you’ve created your VPCs, your network, your security, etc.

If you haven’t, let us know. We have got some great partners that can assist with that effort. Once that’s created, we want to install Buurst SoftNAS. We want to turn this modern storage into something that’s shared and available to all of our different services.

To that end, we’ve built some EC2 instances. As it relates to SoftNAS, these instances can be built via a self-contained AMI that exists on the AWS Marketplace – it also exists within the community AMIs.

We have several different offerings to fit your needs. Within the marketplace, we have a 20 TB SoftNAS Standard offering. We have a 1 TB SoftNAS Express offering. We even have a Pay-As-You-Go SoftNAS Consumption offering.

What’s great about this is they are through the marketplace, which means they are billed through your AWS subscription. You don’t have to deal with SoftNAS from a procurement perspective.

Another great thing about these offerings within the AWS Marketplace is that they all come with a free trial. That means you can evaluate either one of these offerings, based on your capacity needs, for 30-days without any SoftNAS subscription billing.

You’re still subject to any AWS infrastructure charges, but as far as SoftNAS is concerned, we give you 30 days for free. If you leave it up or would like to use it beyond those 30 days, that’s when the subscription starts.

We also have a community AMI offering. The difference between the community AMI and the marketplace offering is that the capacity is determined in 1 TB increments and a license key to expose that capacity to your AMI is provided by SoftNAS.

This is really for those scenarios that don’t fit within the current licensing model of the marketplace, so if you need beyond 20 TB (for example, 100 TB or a petabyte or whatever it might be), we will work with you to get the appropriate license key for that. That can either be billed directly through SoftNAS, or we can even go through AWS for that as well, but it does require a touch with the SoftNAS organization in order to leverage it.

Once you’ve selected the appropriate image, we do go ahead and we select an instance size. We go ahead and build this system and let it create itself. It does take about nine minutes to go through that entire process of initialization.
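For reference, launching an instance from a marketplace AMI can also be scripted. The boto3 sketch below uses entirely hypothetical IDs (AMI, key pair, subnet, security group); the actual SoftNAS AMI ID and the recommended instance size depend on your region and capacity needs, so treat this as a generic outline rather than SoftNAS guidance:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

# Hypothetical IDs -- substitute the marketplace AMI for your region,
# plus your own key pair, subnet, and security group.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder marketplace AMI
    InstanceType="m4.xlarge",             # size to your capacity/performance needs
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
instance_id = response["Instances"][0]["InstanceId"]

# Wait for the instance to pass its status checks before browsing to its IP.
ec2.get_waiter("instance_status_ok").wait(InstanceIds=[instance_id])
```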

Once it’s initialized, we’re going to browse to the IP address of that machine. I’m actually going to a public facing IP address because I am not directly connected to my AWS system at this point.

You’re going to log in to a web-based, HTML5 Storage Center GUI. This is the SoftNAS Storage Center. I’m going to log into that really fast. This is the first experience you will see if you were to walk through this process with me, if you were to evaluate it on your own, etc.

When we finally log into this console, we’re going to be greeted with a “Getting Started” page. What this is going to do is it’s going to coach us through all the processes necessary to get the system up so we can start getting data in the AWS so we can start leveraging a lot of these failover processes.

We’ll do some general housekeeping. We’ll make sure it can talk to our network. We’ll update it. We’ll apply license keys, etc. The point is we need to get capacity first because we’ve got applications that need to talk via CIFS or via SMB to a bunch of storage.

We have got Linux machines that need to talk via NFS to do whatever it is they do. Maybe we’ve got some old legacy applications for which we need to turn S3 into our own personal SAN. This is where we’re going to do this.

We’re going to start this process after the maintenance is complete by adding storage or provisioning storage to SoftNAS. When you log into this disk/devices applet for the first time, what you’re going to see are any disks that are already associated with your instance.

If you happen to pick an EC2 compute instance that has ephemeral drives, you actually see those ephemeral drives here. If you’re on-premises and you’re doing this in VMware, for example, you might see any disk you have already provisioned to that virtual machine here as well.

The point is, this is where we take the modern cloud storage offerings — the EBS and the S3 offerings within Amazon — and we turn them into a virtual disk that a file system can understand, one that the CIFS protocol, the NFS protocol, the AFP protocol, and the iSCSI protocol can actually consume.

If we want to add S3, it’s as simple as choosing S3 as a cloud disk extender, and we can very quickly assign an S3 bucket as a virtual disk to SoftNAS. These buckets can be sized in either gigabytes or terabytes. We can actually create 500 TB buckets if we want and have multiple buckets aggregate into a pool, which we’ll talk about in just a minute.

We can support over a Petabyte on a general size instance, within Amazon if we need to, depending upon the workload. Once you select that bucket, you have the option to turn on Amazon’s bucket level encryption if we need to. We can just turn that on.

We can go ahead that fast and provision 500 TB (or half a petabyte in this case) of S3 storage to SoftNAS. Such is the beauty of Amazon that we’re not consuming half a petabyte of S3 at this point. We are actually taking up about 2 MB worth of metadata inside that bucket, but that’s about it. We pay as we go as it relates to S3.
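For readers who prefer the API view, the AWS-side equivalent of what the cloud disk wizard does here (creating a bucket and switching on bucket-level encryption) looks roughly like the boto3 sketch below; the bucket name and region are placeholders:

```python
import boto3

s3 = boto3.client("s3", region_name="us-west-1")
bucket = "my-softnas-disk-bucket"   # placeholder; bucket names are globally unique

# Outside us-east-1, a LocationConstraint matching the region is required.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "us-west-1"},
)

# Turn on default (bucket-level) SSE-S3 encryption, comparable to the
# encryption option mentioned above.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```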

If we need to add EBS disk offerings, we can also add them on the same device. So we can actually create a SoftNAS machine that has multiple different storage types based on capacity, based on performance — however, we need to do this — and we can manage all of this through one machine.

I could have general purpose SSDs. I can make some Provisioned IOPS, Throughput Optimized – whatever the offerings are at that particular time within your region inside of AWS.

If we want to create some Throughput Optimized hard drives, we simply select the hard drives, we select the capacity we want, and we create those disks. We can go ahead and just create one of those disks so you can see that in action.
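Behind that console button, the equivalent AWS call for a Throughput Optimized HDD looks roughly like this boto3 sketch; the region, availability zone, size, and instance ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

# Create a Throughput Optimized HDD (st1) volume; AZ and size are placeholders.
volume = ec2.create_volume(
    AvailabilityZone="us-west-1a",
    Size=500,                 # GiB; st1 volumes have a 125 GiB minimum
    VolumeType="st1",
)

# Wait for the volume, then attach it so it shows up as a candidate virtual disk.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
    Device="/dev/sdf",
)
```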

The good thing about this is I don’t have to go back to my AWS console to do this. We can actually assign these capabilities or these responsibilities for managing storage services to someone else within the organization and not actually give them access to the AWS console.

Once we have provisioned AWS storage as virtual disk on this device, we can now start creating these logical groups that represent capacity. We call these pools. We aggregate these disk devices to these pools.

Now, these pools can be created to reflect your business unit — maybe you’ve got a backup business unit; maybe you’ve got HR. However you need to logically define pools, you can.

You could create them based on project. Perhaps we’ve got a cloud migration project or a SAP HANA project or something like that. We can create a pool name that represents that particular project.

The benefit with that is if you have a chargeback or some other scenario that you have to deal with, in addition to the app that you’re going through, you could very easily come through and say, “Okay, I have assigned 10 TB worth of capacity to this effort, and now I know this is how much they have left available, this is how much they’re consuming, etc.” We can get creative with how we manage these pools.

Once you have the pool name created, we’re also able to define a RAID level that exists on top of the RAID levels that AWS already has inherent to their system.

One thing we can do is if we’re using multiple disks — maybe we’re using several 100 Terabyte buckets to make up a Petabyte — we can stripe across those buckets using a software level RAID zero at the SoftNAS server and extend the performance characteristics of that.

To do the same thing with block devices, etc, obviously, you just need to have multiple devices if you’re selecting RAID.

In this particular case, I’m choosing no RAID because I don’t necessarily want to have multiple disks in this pool. One other thing I want to cover on this particular exercise is that we also have an additional option to turn on encryption.

We can turn on block-level encryption at the pool level. So you have the option to turn on encryption at the disk level, and you have the option to turn on file-level at-rest encryption.

Now, if someone were to hypothetically grab a disk from a rack at your DR data center that has your data on it, you’ve got AWS encryption they’ve got to get through before they can get to your data. And once they get to your data, now you have at-rest encryption they have to get through. We can satisfy a lot of different requirements with how we configure this.

I’ve already got a pool created and that’s simply because I want to show you some things that require historical information in just a minute. One of the frequent use-cases that we see are organizations using storage for home drives, so I have created pools specifically for that effort.

In your DR scenario, if you’re using Windows Active Directory and your users have home drives (with those pictures of the animals on the background that are critical to getting their job done efficiently), it’s obviously important that when we fail over, this information is still available. We have a pool here for that.

The interesting thing about this particular pool that I built is that, if I look at the details, I’m making this pool with S3. S3 has some performance limitations. That’s the way it’s architected.

We can scale to monumental levels here, but it is limited on throughput, it is limited on IOPS, and that’s one of the reasons it is so inexpensive to leverage.

What we can do since we do have to read and we have to write to this system is we can actually leverage some of the additional features of SoftNAS to change the performance characteristics of the backend storage, and this isn’t specific to S3. I’m just using it as an example. We can do this to block level storage as well.

We can augment the way reads and writes occur. When we read from a system, we grab that object, that file, that data, or whatever it might be from this remote system. If this is a DR data center, the backend storage might be different than your production storage.

You might have a nice fast on-prem Isilon or NetApp or whatever it might be, but you might actually have a Throughput Optimized, magnetic, or cold HDD, or maybe even S3 in your DR for purposes of cost savings.

We still need to be able to use this. The understanding in this scenario is performance is going to be different, but we need access to our data. For a DR scenario, chances are we’re going to be reading a lot of that data pretty heavily. And the way we read by default is we copy things and we put them in memory. So when somebody else needs to access that bit of data another time, it comes from RAM rather than that backend storage. It’s so great because RAM is really fast, but we run out of that.

We can actually take another EBS offering, maybe it’s general SSD or maybe it’s one of those local ephemeral disks that come with an instance – those guys that run 100,000 IOPS – and we can use that to buffer these reads. Now all the recently used data and the most frequently used data is coming from the super high-speed SSD drive rather than S3.

Now your users are none the wiser that they’re on a performance-limited system. We’re not hammering S3 and having S3 come back and tell the system to leave us alone and put up walls. We’re still allowing things to perform.

We can also do the same thing with writes. SoftNAS treats your data and the durability of your data as paramount. It’s important that our data has integrity, it’s correct, and it’s available.

When we write something, we by default write it to two places at the same time. We write it to what’s called a write log that sits in memory, and we write it to that backend storage.

The minute we get that object written to memory on SoftNAS since it is a storage controller, we can tell your application to go ahead and send something else because we’ve taken care of what you’ve already sent us previously.

In reality, we don’t delete or remove that object from memory until that backend storage not only says they’ve written it to disk but we’ve checked and we’ve verified that it is correct and not corrupted in any way, shape, or form.

This is great because that means we’ve got two instances of your data at any given point. Until we verify that everything is correct, we’re not going to get rid of it in memory until it is correct.

The problem is memory is finite. Again, we have a certain amount of RAM associated with these EC2 instances. If by chance we have no more room in memory to accommodate additional writes that are coming to the machine, we’ve got IO congestion; we’ve got a problem.

In the case of something like S3, where we can only write so much to it before S3 literally sends us a message that says “Hey, back off,” we have a problem. We can either scale up our EC2 instances to give us a whole bunch of RAM, or we can augment that write log process with another high-speed SSD.

We could take some of those general SSDs or those Provisioned IOPS drives that EBS has available, we could assign them as a virtual disk, and now we can buffer that RAM with these high-speed offerings.

The good thing is, now, the write characteristics are at the speed of memory and this SSD, not the backend storage. So if we are going to experience congestion on the backend because we’re write limited, or perhaps the block offering is not performing well enough to keep up, the storage controller is going to eat that overhead. Your users, your servers, your applications actually leveraging the storage don’t have to. This is very important, especially when we’re considering DR and we are potentially looking at scenarios where we are trying to save on cost by using different tiers or different performance levels of service on the backend.
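To make the write path described above concrete, here is a conceptual Python sketch, not SoftNAS code: acknowledge a write once it is buffered in the fast write log, and only evict that buffered copy after the slower backend copy has been verified.

```python
import hashlib

class WriteLogSketch:
    """Conceptual illustration only (not SoftNAS code): acknowledge a write as
    soon as it is safely buffered, and keep the buffered copy until the slower
    backend confirms a verified, uncorrupted copy of its own."""

    def __init__(self, backend):
        self.backend = backend    # any object with .put(key, data) and .get(key)
        self.pending = {}         # buffered writes awaiting backend verification

    def write(self, key, data):
        self.pending[key] = data  # copy #1: the fast write log (RAM or SSD)
        self._flush(key)          # copy #2: the backend store (EBS, S3, ...)
        return True               # the caller can send its next write right away

    def _flush(self, key):
        data = self.pending[key]
        self.backend.put(key, data)
        # Only drop the buffered copy once the backend copy checks out.
        if hashlib.sha256(self.backend.get(key)).digest() == hashlib.sha256(data).digest():
            del self.pending[key]


# Minimal in-memory backend, just for demonstration.
class DictBackend(dict):
    def put(self, key, data): self[key] = data
    def get(self, key): return dict.__getitem__(self, key)

log = WriteLogSketch(DictBackend())
log.write("block-0001", b"hello")
```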

Once we have the capacity defined — so we create these pools based on our projects, business units, customers, or whatever it might be — we need to give them access to it and we give them access by creating volumes and LUNs.

This is where we’re actually going to build that Windows CIFS or SMB share, or that NFS mount, or the Apple File Protocol share, or even the iSCSI LUN. This is where we’re either creating shares or we’re turning AWS storage offerings into our personal SAN.

We do that very simply by going to another wizard that should look very similar to what you’ve seen this far. What we’re going to do here is we create volumes.

This is our root mount for NFS. This is our root share for SMB, etc. Once you create that volume name, you then assign it to one of those storage pools you’ve created. I can see my storage pool home-drives here. I can see how much free space is available. I’ll just go ahead and assign it.

The next step is where we define which protocol we’re going to use to access this capacity. Are we going to use a file-level protocol (NFS, CIFS, or AFP), or are we going to use a block-level protocol (iSCSI)? It’s as simple as checking the boxes or switching to a block-level device.

As it relates to these file systems, we do allow you to offer the same volume over multiple protocols. We can allow access to this NFS root share via NFS and SMB (CIFS in this case).

Obviously, we need to make sure security is going to permit how we configure this. The benefit is, now, if you have to fail over Linux machines that write to a specific share — maybe they are dumping logs, or maybe they are dumping invoices, or whatever it might be — and you also have to fail over Windows devices that need to get that information to process it through the ERP system web [Logix 0:34:14], or something along those lines.

Now you can house that data in the same volume, rather than having mechanisms that have to copy it from one place to another just so multiple systems can access and consume the same data. Yes, we can put both of them together if we need to.

Once we choose the file system, then we can choose how we actually provision the capacity. Are we going to allow this to dynamically grow based on the space available within that pool, or are we going to put a quota against this? Are we going to thick provision?

As it relates to thick provisioning, this means we’re going out and we’re pre-allocating whatever you define here as space that no one else can use within that pool. This is important because thick provisioning equals utilization, as far as AWS billing is concerned.

If you provision block devices here, obviously, you as the customer pay for those EBS devices once you provision them; but with S3, you pay-as-you-consume. If you thick provision something, you consume it. If I do 10 GB here, you’re going to see your consumption go up by 10 GB.

The same thing applies with iSCSI. If I provision object storage as a block device to you and I thin provision that, it could be 100 TB. But since we more than likely have to format that block device so your computer can access it through a virtual machine or whatever it might be, we then consume that capacity. So once you format that 100 TB, we consume 100 TB. Just be mindful of that, but we do give you the option to manage toward either scenario.

Beyond that, we get to choose whether or not we enable in-line compression and deduplication. This is important because, since we are talking about cloud, the amount of bits that go across the wire and the amount of bits that sit at rest equal money.

If we can reduce those bits that are travelling and sitting somewhere, we can reduce your cost, so compression is a good thing even if it’s not very effective against your particular dataset. Obviously, the better the compression, the more money we will save; but the note is, here, to always compress especially when you are in the cloud.

Deduplication is really dependent upon your type of data. If you have structured data, if we’re migrating your backups from Veeam to the cloud as a component of this DR exercise and Veeam is not handling dedupe, we can turn dedupe on and we can save you a lot of space in that effort.

You can even turn it on in addition to what you’re already doing, but you really need to make sure that your EC2 compute instance has enough memory to handle those tables that are operating, etc.

But once you’ve configured your volume, since we’re talking the continuity of data here — you’ve just experienced a DR event if you’re using this — we want to configure how we give you access to these objects within this volume, should you need them, from a prior point in time.

To paint a picture for you, SoftNAS provides to you that file system access to that volume and the file system we use on the backend is what’s called a copy-on-write system.

That means if you were to write an Excel file to SoftNAS, the first time you send that Excel file, we save it as a persistent object. The next time you modify that Excel file, we save those modifications separate to the first object, and this just keeps building on and on as you change this file — this Excel object.

If you open that Excel object, what you’re seeing are all those different pieces of that file, viewed from the top down; you’re seeing the current version. What we can do that’s interesting is we can place these snapshots, or these bookmarks or whatever you want to call them, in between all these different points of modification.

This is where we define where we’re putting those snapshots or those bookmarks. By default, we do this every three hours between 6:00 AM and 6:00 PM, Monday through Friday. You can build a schedule that does it every hour on the hour, or every day of the week if you want, or any combination thereof.

We then go ahead and say, of all those snapshots we’re now taking, how many do I actually want to retain and how long do I want to retain them? Once we define that, we’ve basically set the bar that says, “Now that I’ve failed over to this new data center and I can see my data, how far back in time do I want to be able to give our users or services access to different versions of this data?”
That means once we define the snapshot cycle like, “This has been running since December and we were maintaining three or four months worth of snapshots,” we could pick a snapshot from January and instantly give our user read/write access to that file as it existed in January.

The reason we can do it instantly is because the data is already pre-existing. It’s not a redundant backup copy. It’s already there. I’ll explain more about that in just a second.
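To put numbers on the default schedule mentioned above, a few lines of Python show how many snapshots it produces and roughly how many would be retained over the three-to-four-month window in the example:

```python
# Default schedule described above: every 3 hours, 6:00 AM - 6:00 PM, Mon-Fri.
snapshots_per_day = len(range(6, 18 + 1, 3))   # 6, 9, 12, 15, 18 -> 5 snapshots
weekdays = 5
weeks_retained = 13                            # roughly three months, as in the example

total_retained = snapshots_per_day * weekdays * weeks_retained
print(snapshots_per_day, total_retained)       # 5 snapshots/day, ~325 retained
```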

The last tab is for when we choose iSCSI. It allows us to create an iSCSI target, a LUN target, so we can actually connect to and consume that block-level device.

I’m not going to create that right now because I already have one in place. I have this user volume called users that sits on my home-drives pool. This will be where the actual users sit – John, Bill, Mary, or whoever it might be.

The reason I have this pre-existing is because I wanted you to see what the snapshotting process looks like. If I click on this snapshotting tab, you can see I’m maintaining all the way back to December 1st.

If I need access to January 9th 2017, from 1200 GMT, I simply select that snapshot from the list and I hit “Create Snapclone.” What we’re doing here is we’re creating a brand new volume exactly like the original volume.

In this case it’s the users volume; we’re changing the name to reflect that it’s a clone from a certain date and time. Now your users or services or whomever can simply connect to that volume, via NFS or CIFS in this case, and they can see that data from that date.

If they want, they can modify that data and it will be saved in this brand new volume. We can even create a branch snapshotting routine for this so we could branch our data and keep it proper, today, if we need to.

Or if we need to overwrite production, we simply copy information out of that directory, paste it into the users volume, and now we’ve effected that change in production. Since we’re copy-on-write, we’ve got a snapshot that allows us to go back from that if we need to.

We can get very creative. From a durability perspective, we can maintain a lot of durability with regard to our file system. Somebody accidentally deletes something, a virus corrupts something, whatever the scenario; we can recover from that.

We are talking DR here. What if we want to have a redundant copy of this? Maybe we are considering closing that on-prem data center and we’re looking for redundancy for the cloud.

We do give you the ability to take all this information that’s now stored on EBS or S3 and managed by SoftNAS and replicate it to a second device. I actually got a second device configured in a different availability zone of the same region.

I’m going to login to that device right now. I’ll let that login while I go back here. What I want to illustrate is how easy it is to start replicating data to an entirely separate data center and this can be on-prem to the cloud, it can be within the cloud (region to region), it could be cloud to cloud.

Replication is just a process of tunneling through one port and we’ll handle all the transfer back-and-forth at the block level using delta-based replication every 60 seconds.

You can see I’ve got no replication defined right now. I need to set up that second machine to handle the acceptance of all this data. The way we do that is we decide which storage pools we want to migrate.

In this case, I want to migrate home-drives. It’s 50 gigs of storage. I need to go to this second machine. I need to make sure it has a disk capable of consuming that 50 gigs so I need 50 gigs or higher of virtual disk.

Since this is a replicate partner, the best case scenario is it’s the same exact disk type. That way, if we ever have to use it, you’re going to get the same performance.

Again, this is a DR scenario and the technology permits us to use a different disk type. I could use S3 here even though I might be using general SSD on the other side.

Obviously, your workload needs to be compatible with the performance characteristics of what I’m choosing. But the point is, here, we’re not limited to using the same type of storage. A general SSD on the front side and maybe magnetic on the backside; however we choose to do it, we can get creative.

This instance is just like my first. I have that ephemeral storage. We don’t want to use that. Ephemeral storage is not persistent. If we have to move this machine, we will get a different machine that’s not the same device, and we’d lose all our data. Great for read caching, not great for storing persistent objects, so I need to add some disk storage here.

I’ll go through and I’ll add a 50 gig S3 bucket. Let me remove one of these zeros and hit “Create” for that. Now I’ve got the capacity I need to replicate home drives. Now I need to configure the pool for home drives.

This is how we control what replicates. You might have a dozen pools but not all of them warrant the SLA or the overhead of replication. What we will do is we will allow you to define which pools you want to replicate simply by recreating that pool.

I need to recreate home drives. I need to give it 50 gigs of space, and I need to create that. I need to choose the correct RAID options. Once I do that, we’ll create.

It’s going to warn me that I’m going to erase all the information that’s already existing, etc., and then it’s going to build. That’s all I need to do. I don’t need to worry about the volumes that are on that, etc. All of that metadata will come across. All of that will get recreated for us.

Once that’s created, I’ll actually show you that my volumes and my LUNs are empty. I haven’t pre-staged any of this. This is live, so hopefully everything goes well and there are no hiccups.

Once this volumes and LUNs applet loads, you’ll see there is nothing there yet; that will automatically come over.

Subsequent to this tab, what I’m going to do is I’m going to open up the “Start to Replicate Tab.” You can see how the SoftNAS Storage Center GUI actually illustrates or shows you replication occurring.

You’ll see the same graphic we saw on the other system in just a second. It basically says nothing’s to find. We’re not really doing anything. I’m going to go back over to our primary machine.

This is the machine that’s hypothetically already up. This is going to be the primary file services in the system and we want to replicate it. I go over to my Snap replication tab and I go through a very simple process.

I am assuming the person sitting in front of this keyboard is not a storage expert; he might be a code or Linux expert, and he might not be a networking expert. I can’t assume he understands the nuances associated with replication or the nuances associated with configuring things within AWS to allow it to occur.

I can assume he is somewhat familiar with IT and he knows the IP address of the second system, 10.0.2.197. I can assume he can type better than I can. Then I’m also going to assume he knows the credentials of the user that has the ability to login to that SoftNAS Storage Center.

Once I do that, we’ll go ahead and set up replication. What’s going to happen right now is we will mirror all the data from those pools that we’ve selected. We’ll send that over to that secondary machine – that target.

Once we complete that mirror, what’s going to happen is, every 60 seconds thereafter, we’ll ship a delta. We’ll ship just the blocks that have changed in those 60 seconds. If you don’t have a high data change rate, there’s not going to be much to send, so it’s going to be very fast.

Obviously, if you have a higher data change rate, then it takes a little longer to send it. Once that’s there, we’re maintaining a 60-second recovery point objective between these two machines. That means, worst case scenario, we lose the primary, and we’ve got a copy that’s maintained within 60 seconds sitting in another availability zone, in another data center, wherever it might be.
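A conceptual sketch of that mirror-then-delta cycle (an illustration of the idea, not the SoftNAS implementation) might look like this; the worst-case RPO is one replication interval:

```python
import time

def snap_replicate_sketch(source, target, interval=60, cycles=3):
    """Conceptual sketch only: mirror everything once, then every `interval`
    seconds ship only the blocks that changed. The worst-case RPO between the
    two copies is one interval."""
    last_sent = dict(source)      # initial full mirror
    target.update(last_sent)
    for _ in range(cycles):
        time.sleep(interval)
        delta = {blk: data for blk, data in source.items()
                 if last_sent.get(blk) != data}
        target.update(delta)      # only the changed blocks cross the wire
        last_sent.update(delta)
```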

That’s great because it’s not only a redundant set of data; it’s actually a redundant SoftNAS system, which means if I refresh these volumes and LUNs, I have the same exact volumes already created and available via NFS, CIFS, and so on.

The only thing I need to do is I need to redirect all my users to this DNS, this new IP address, whatever it might be. You work your networking magic however long it takes to propagate across your network and you’re back up and running.

The manual process for failover could be pretty efficient. If your RTO allows you to take that time to manually bring it up, then this is a great solution. But if your RTO dictates that, “Hey, I need high availability,” we can add that here as well.

High availability is a little different from a network requirement perspective in snap replicate. The two machines have to be within the same network for HA. In the case of AWS, they sit within the same VPC. They are simply separated by availability zone. They are sitting in two different physical locations but they are within the same Virtual Private Cloud within AWS.

I’m sitting in US West (Northern California) right now, and my machines are separated across availability zones 2A and 2B. In order to add SnapHA, all I need to do is enter a notification email. I’ll enter my SoftNAS address, and then we are going to be off to the races.

The first thing I’m going to do here is choose whether or not I want this SoftNAS machine to be available through the public interface — meaning over the internet — or only within my AWS network.

I’m going to choose a virtual IP. I want this thing contained privately. I don’t need to expose it to the internet. Once I choose that, then the only rule we have is that we need to choose a virtual IP, which all of our users are going to use to consume this storage, that’s outside of what’s called the CIDR block – outside the network range that you have.

The net here is that it can’t be a 10.x IP address like I have for all the other machines. Since it’s internal only, I can choose any address that’s outside that networking scheme. We’ll take care of all the heavy lifting to make this function, i.e., the modification of route tables and the infrastructure within AWS to make it happen for you.
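To give a sense of what that heavy lifting involves under the hood, the route-table side of a virtual-IP failover can be expressed with a boto3 call like the hypothetical sketch below (route table ID, ENI ID, and virtual IP are placeholders); SoftNAS automates this for you:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

# Placeholders: your route table, the ENI of the node that should be primary,
# and a virtual IP chosen outside the VPC's CIDR block (e.g. VPC is 10.0.0.0/16).
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
PRIMARY_ENI_ID = "eni-0123456789abcdef0"
VIRTUAL_IP     = "172.31.255.10/32"

# Point the /32 virtual IP at the primary node's network interface.
# (Use create_route the first time; replace_route on subsequent failovers,
# swapping in the standby node's ENI to redirect clients.)
ec2.replace_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock=VIRTUAL_IP,
    NetworkInterfaceId=PRIMARY_ENI_ID,
)
```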

I simply enter that virtual IP address and then SoftNAS does everything else behind the scenes to make HA actually function. We’re doing a lot of complex exercises behind the scenes. We are creating buckets as witnesses.

We are installing services. We’re making sure these two machines have the appropriate communication going back and forth so we can provide HA service. There is a lot going on the background that you don’t have to worry about, which means you don’t have to manage it.

You don’t have to go through lengthy documentation of this is how failovers actually occur, etc. You simply light up HA, turn it on via this exercise, and you’re off to the races.

Once this service is installed and running, the graphics that you see up here that right now say “Current status, source node primary” will change to let you know that you are now in an HA relationship.

You’ll be able to tell at a glance, between the source and the target, which is actually the primary node. You’ll be able to see that virtual IP address, so we know at a glance this is the IP address that we assigned to that DNS name in networking, and that is how all of our users consume storage.

SoftNAS will take care of what server is providing service to that DNS name or that IP address. But the point is we don’t have to.

It just takes a few minutes to install. Once it installs, again, you will see the graphics change. But essentially what’s going on now is, behind the scenes, we’re checking the health of this machine.

We’re not just pinging this machine and hoping everything is okay. We are actually going in and we’re checking the health of those virtual disks. We are checking the health of those pools and the volumes.

Obviously, we are looking for things to actually be down rather than just a ping failure. Once we fail, though, what we’ll do is automatically update route tables in AWS, so this target node becomes the primary provider of file services.

We’ll break the replication so there’s no split-brain, and within some 30 seconds, your users are now consuming data within a 60-second RPO. That’s us at a very high-level. There’s a lot of features.

I know I talked about a lot, but you can definitely follow on to this as you see fit. I’m going to hand this back over to Taran right now.

Taran:  Fantastic. Thanks for that detailed overview, Joey. Let me share my screen here. Joey just talked a lot about what SoftNAS is capable of doing on AWS. Now, we want to talk through a bit of some DR architectures and scenarios for disaster recovery on AWS.

Now we’re going to talk about four main DR architectures. The first is your Backup and Restore scenario: when your data center goes down, you pull your backups from AWS.

We’re also going to be talking about Pilot Light, your Hot Standby, and then finally your Multi-Site DR architecture. To give you a sense of what AWS services are involved with these architectures and scenarios:

For the backup and restore, you’re not using too many of AWS’s core services. You’re going to be using S3 Glacier and SoftNAS specifically for the replication that Joey talked about.

You’ll also be using their Route 53 service and their VPN service. Then as you move on to the Pilot Light, you’re going to add in CloudFormation, EC2, and maybe a couple of EBS volumes, along with VPCs and Direct Connect.

Once you move over to Hot Standby, you add in the Auto scaling and the Elastic Load Balancing or ELB for short and then you set up multiple Direct Connects.

Finally, for the Multi-Site, you’ll be using a whole host of AWS services. Now, talking a bit about the backup and restore architecture: the way it works here is, on the screen, we’ve got your on-premises data center here on the left. Then on the right, we’ve got the AWS infrastructure as well.

Over here on the left, you’ve got a data center, you’ve got a physical SAN using the iSCSI storage protocol, and then you’ve got a virtual appliance on top of that managing your storage.

What we are saying that you can do with AWS’s DR is you can use a combination of SoftNAS, S3, EC2, or EBS to basically go and manage your backup architecture.

In the traditional environment, data is backed up to tape. You have to send it off quite regularly. If something fails, it’s going to take a long time to restore your system because you have to retrieve those tapes and pull the data from them. That’s what makes Amazon’s S3 and Glacier really good for this.

You can transfer your data to S3 and back it up at one cent per gigabyte per month. Using a service like SoftNAS enables you to take snapshots of your on-premises data and copy them into S3 for your backup.

The beauty of this is you can also snapshot those data volumes to give you a highly durable backup. Again, backing up is only half the equation here. The other half is actually restoring.
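One inexpensive way to implement the S3-plus-Glacier idea described above is a lifecycle rule that ages backup objects out of standard S3 into Glacier. A hypothetical boto3 sketch follows; the bucket name, prefix, and day counts are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket/prefix: age backups out of standard S3 into Glacier
# after 30 days, and expire them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-dr-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```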

We move on to the next slide here. The way the backup and restore works with AWS for your DR is it’s simple to get started. It is pretty cost-effective. You’re not paying a lot of money for it.

Then in case of disaster, what happens is you’re going to retrieve your backups from S3. You bring up your required infrastructure – these are the EC2 instances with prepared AMIs, Load Balancing, etc.

You restore the system from a backup. Basically, the objective here for RTO is that it’s as long as it takes to bring up your infrastructure and restore your system from backups. Your RPO is the time since your last backup.

With the backup and restore architecture, it’s a little bit more time consuming and it’s not instant, but there is a workaround for that. The reason we keep bringing up SoftNAS is because you can set up that replication with SnapReplicate so your data is instantly available.

Instead of you having to wait for it to download and back it up, it’s now all instantly available to you so your RTO and your RPO go from hours or days into minutes or just one or two hours.

Moving on to the Pilot Light architecture, this is basically a scenario in which a minimal version of the environment is always running in the cloud. The idea of this is you can think of it as a gas heater.

On a gas heater, a small flame is always on that can quickly ignite the entire furnace to heat up your home. It’s probably the best analogy I can come up with. It’s pretty similar to a backup and restore scenario.

For example, what happens with AWS is you can maintain that Pilot Light by configuring and running your most critical core elements of your system in AWS; so when the time comes for recovery, you can rapidly provision a full-scale production environment around that critical core.

Again, it’s very cost-effective. In order to prepare for the Pilot Light phase, you replicate all of your critical data to AWS. You prepare all of your required resources for your automatic starts – that’s the AMI, the network settings, load balancing. Then we even recommend reserving a few instances as well.

What happens in case of disaster is you automatically bring up those resources around the replicated core data set. You can scale the system as needed to handle your current production traffic.

Again, the beauty of AWS is you can scale higher or lower based on your current need. You also need to switch over to the new system. Point your DNS records to go from your on-premises data center to AWS.
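That DNS switchover step can be as small as a single Route 53 UPSERT. The boto3 sketch below uses placeholder values for the hosted zone, record name, and AWS-side address, and keeps the TTL low so the cutover propagates quickly:

```python
import boto3

route53 = boto3.client("route53")

# Placeholders: your hosted zone, your application's record, and the AWS-side
# address (an Elastic IP; an ELB alias target would work similarly).
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Comment": "DR cutover: repoint app from on-prem to AWS",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com.",
                    "Type": "A",
                    "TTL": 60,   # a low TTL helps the cutover take effect quickly
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
        ],
    },
)
```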

Moving on to the Hot Standby architecture, this is a DR scenario which is a scaled down version of a fully-functional environment that’s always running in the cloud. A warm standby extends the Pilot Light elements.

It further reduces the recovery time because some of your services are always running in AWS – they are not idle and there is no downtime with them. By identifying your business-critical systems, you can fully duplicate them on AWS and have them always on.

One advantage is that it handles production workloads pretty well. In order to prepare for it, you replicate all of your critical data to AWS. You prepare all your required resources and your reserved instances.

In case of disaster, what’s going to happen is you automatically bring up the resources around the replicated core data set. You scale the system as the need be to handle your current production traffic.

The objective of Hot Standby is to get you up and running almost instantly. Your RTO can be about 15 minutes, and your RPO can vary from one to four hours. It’s meant to get you up and running for your tier 2 applications or workloads.

Finally we have the Multi-Site architecture. This is basically where your AWS DR infrastructure is running alongside your existing on-site infrastructure. So instead of it being active and inactive, it’s going to be an active/active configuration.

This is the most instantaneous architecture for your DR needs. The way it works is, at any moment, once your on-premises data center goes down, AWS will go ahead and pick up the workload almost immediately. You’ll be running your full production load without any kind of decrease in performance.

It fails over all your production load immediately; all you have to do is adjust your DNS records to point to AWS. Basically, your RTO and your RPO are within minutes, so there’s no need to worry about spending time re-architecting everything. You are up and running again within minutes.

To give you an example of how our customers are using disaster recovery on AWS: one of our customers right now is using AWS to manage their business applications, and they’ve broken them down into tier 1, tier 2, and tier 3 apps.

For the tier 1 apps that need to be up and running 24/7, 365 days a year, what they are doing is they’ve got their EC2 instances for all services running at all times.

Their in-house and their AWS infrastructure are load balanced and configured for auto failover and they do the initial data synchronization using in-house backup software or FTP. And finally, they set up replication with SoftNAS.

So in case a disaster happens, they automatically go ahead and fail over in minutes, and they don’t lose any productivity, data, or anything like that. With the tier 2 apps, what they’re doing is they go ahead and configure the critical core elements of the system. They don’t configure everything.

Again, they’ve got their EC2 instances running only for the critical services, so not all services. They go ahead and they’ve preconfigured their AMIs for the tier 2 apps that can be quickly provisioned. Their cloud infrastructure is load balanced and configured for automatic failover. Again, they did the initial data sync with the backup software and they did replication with SoftNAS.

Finally for the tier 3 apps where the RPO and the RTO isn’t too strict, they’ve basically replicated all their data into S3 using SoftNAS. They did, again, the data sync with the backup software. They went ahead and preconfigured their AMIs. Then their EC2 instances are spun up from objects within S3 buckets. It’s a manual process but they are able to get there quickly.

Again, using SoftNAS’s Snapreplicate feature, their backup and restore is a lot quicker than it would normally be just using AWS by itself.

Here, we’ll talk a bit about our highly available architecture. I know we’re running past our schedule here. Joey, if you can cover this in about two minutes, I think we should be good to go.

Joey: Certainly. We definitely talked about this in our demo. One of the great things about our HA solution is, if you architect it per our standards, we will offer you a five-nines uptime guarantee, so there’s an additional SLA there on top of the SLA that AWS already provides.

We’ll go ahead and forward. This is a very high-level architecture, but again, the point is, here, we’re going to give you the ability to have cross-zone HA available either via an elastic IP which this illustrates or on the very next slide a virtual IP.

The notation is there. We can keep everything private. Or if you need to scale out and offer storage services to services that exist outside of your VPC, we do give you the ability to leverage that.

All the replication between these two machines, again, is block level replication and it is delta-based. And we do give you the ability to effectively have some 30 seconds failover between two machines that have data independent of one another separated by availability zone.

I hand it back to Taran. If you have any additional questions and you want to dive deeper into HA, definitely let me know and we can reach out and schedule a conversation.

Taran: Thanks for that, Joey. We’ll go ahead and move on to our next poll question here. To get a sense of which DR architectures you all intend to build…
I’m sorry. I clicked on the wrong link there. There we go. We’ll go ahead and launch this poll. Of the four DR architectures that we just talked about, which ones do you intend to use with AWS? Are you going to do the Backup and Restore, the Pilot Light, the Hot Standby, or the Multi-Site DR architecture?

We will go ahead and give that probably about 10 more seconds. It looks like nearly half of you have voted. Okay, we’re going to close the poll now.

Let’s go ahead and share the results. It looks like most of you are not sure of which DR architecture you want to use, and that’s perfectly fine. DR can be complicated for your potential use cases.

For those of you who aren’t sure, we recommend that you reach out to us. Go to softnas.com/contact or email sales@softnas.com and we’ll go ahead and reserve some time to talk to you about how you are using disaster recovery and how we can help you best pick the use case that you can be using it for.

Then it looks like a lot of you are interested in the Pilot Light architecture, which is great. Pilot Light gets you up and running quickly at a much lower cost than having a DR data center.

Moving on, we covered SoftNAS quickly here. What we want to cover also is to give you guys an idea of our technology ecosystem. We do partner with a lot of well-known technology partners. AWS is one of our partners.

You are able to go in and download SoftNAS, on AWS, free for 30 days to try it out. Then also, we do partner with companies like Talon, SwiftStack, 2ndWatch, NetApp, and a couple of other companies as well.

To give you a sense of the companies that are using SoftNAS: large, well-known brands are using SoftNAS to manage their storage in the cloud. You’ve got Nike, Boeing, Coca-Cola, Palantir, Symantec. All these names on the screen are managing hundreds of terabytes of data on AWS using SoftNAS.

In order to thank everyone for joining today’s webinar, we are offering $100 AWS credits that I’m going to go ahead and post in the chat window here on GoToWebinar.

If you click on that link, it will let you go in and register for your free $100 AWS credit. All we need is your email address. That’s the only information that we need from you. Once you put in your email address, you’ll receive a code for a free $100 AWS credit from one of our colleagues.

Finally, before we get to the Q&A here, we do want to let you know about a couple of more things. For those of you who are curious about how SoftNAS works on AWS or you’re just interested in learning more, go and click on this first link here. It will take you to our website and you’ll be able to learn more about how SoftNAS works on AWS.

You’ll learn about the use cases, some technical specifications, and you can also download a couple of whitepapers as well.

We do also offer a free 30-day trial of SoftNAS on AWS. For those of you who liked what you saw or are curious about a couple of things, just go ahead and click on that “Try Now” button and you’ll be able to go in and start using SoftNAS and get up and running in less than 30 minutes.

We know we’ve covered a lot of content here today. For those of you who have any questions or you want things explained further, just go ahead and contact us. Our solutions architects like Joey are happy to sit down, talk with you, and answer any questions that you might have about disaster recovery or anything else on AWS.

Finally, if you are using SoftNAS and you have a couple of questions or you need some help, just go ahead and reach out to our support team and they’ll go ahead and answer any questions that you may have and they are also readily available to help you out.

With that, let’s go ahead and get on to the questions here. It looks like we have a lot of questions coming in today so we’ll go ahead and answer just a few of them here today.

The first question we have here is, “How do you recommend moving tier 1 applications like SAP to AWS?”

Joey: What we’re going to do is we’re going to look at the performance needs or requirements of these particular applications. How many connections are they making? How many in-flight files do they have in any given point in time? What’s the average file size?

We need to look at IOPS, throughput, and so on. The whole point of this exercise is to create a storage controller that can meet or exceed those requirements for IOPS, throughput, latency, etc.

The one caveat: we are network-attached storage, so we are always subject to the network, which has usually been the slowest link within the system. Provided the networking is not the issue, it's just a matter of architecting a system that can meet your data needs for both capacity and performance.
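To put rough numbers on that kind of exercise, here is a back-of-the-envelope sketch in Python. The workload figures are hypothetical and this is not SoftNAS's sizing methodology, just an illustration of the arithmetic:

# Hypothetical workload profile, purely for illustration.
in_flight_files = 200          # concurrent files in flight
avg_file_size_mb = 8           # average file size in MB
target_completion_s = 10       # time in which that working set should move
io_size_kb = 64                # typical I/O size for this workload

# Aggregate throughput the storage controller must sustain (MB/s).
required_throughput_mbs = in_flight_files * avg_file_size_mb / target_completion_s

# Approximate IOPS implied by that throughput at the given I/O size.
required_iops = required_throughput_mbs * 1024 / io_size_kb

print(f"~{required_throughput_mbs:.0f} MB/s and ~{required_iops:.0f} IOPS needed")
# Then pick an instance type and storage layout that meets or exceeds both,
# leaving headroom for the network, which is usually the slowest link.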

Taran:  Thanks so much, Joey. The next question that we have here is, “What is AWS running in the backend as a hypervisor?”

Joey:  They are running the Xen hypervisor.

Taran:  The next question that we have here is, "Is there any whitepaper that discusses the performance of SoftNAS on AWS specifically? I'm looking for a reference architecture."

Joey:  I definitely have a reference architecture. I don’t believe it is published as of yet, but it does cover SoftNAS and all of its components within a multi-AZ infrastructure with HA so you can see how everything is configured and running within that architecture. I can definitely provide that to you. Reach out to sales@softnas.com if you’d like or even my personal email address and we’ll get back to you.

As far as the performance numbers are concerned, we do have some general, very high-level recommendations available on our website. More granularity is coming very shortly to that list. Beyond that, we have numbers for various instance sizes against different metrics, and that's something we can share with you if you'd like to continue this conversation.

We also have some recommendations for how we size instances based on your needs, so anything that deviates from the prescribed guidance we have out there now is going to be published very soon.

Taran:  Thanks for that, Joey. It looks like we don't have any more questions. We also want to recommend that you go to softnas.com/aws, where you're able to access some of our resources that provide more technical information on how SoftNAS works on AWS.

Before we end this webinar, we do want to thank everyone for attending. We also want to let you know that there is a survey at the end of this webinar. Please go ahead and fill that out. It only takes about two minutes.

The main reason is that your feedback gives us a better sense of how to prepare for our future webinars. If you're happy with today's webinar, great. If you're unhappy with today's webinar, just let us know. That will give us a better sense of what we need to do better in the future.

With that, we want to thank everyone again for joining today's webinar. We look forward to seeing you at the future webinars we'll be doing throughout the rest of the year. Thanks again, everyone, and have a great day.

Webinar: Best Practices Learned from 2,000 AWS VPC Configurations


The following is a recording and full transcript from the webinar, "Best Practices Learned from 2,000 AWS VPC Configurations". You can download the full slide deck on Slideshare.

Full Transcript: Best Practices Learned from 2,000 AWS VPC Configurations

John Bedrick: Good afternoon, everyone. Welcome to the SoftNAS webinar today on Lessons Learned from 2,000 Amazon VPC Configurations. As the title says, during today's webinar, we'll be talking about some of the lessons that we've learned from configuring over 2,000 VPCs for our customers on AWS.

Our presenter today will be Eric Olson, the VP of Engineering here at SoftNAS. Eric has personally configured most of these VPCs for our customers. Eric, I'll go ahead and give you a second to say hi.

Eric Olson: Good morning and good afternoon for everyone. Thanks for taking the time out to join our webinar today.

Taran: Fantastic. Thank you, Eric

Eric:  You are welcome.

Taran:   Before we begin today's webinar, we just want to cover a couple of housekeeping items with everyone. Just so everyone's aware, you can listen to this webinar through your phone or through your computer's audio. You can select the telephone option if you want to dial in through your phone, or the mic and speakers option if you want to listen in through your computer.

We are also going to be taking questions during this webinar. On the GoTo webinar control panel, you’ll see a questions pane. Please feel free to ask any questions that you might have during this webinar.

We know that with Amazon VPCs there tend to be a lot of questions and we’re here to answer any questions that you may have. Eric is personally looking forward to answering all your questions.

Finally, this session is being recorded. After today’s webinar, we are going to send out a link that has both the webinar video and the slide deck readily available.

For those of you who are interested in using SoftNAS, you’ll have that information on-hand. For our partners, you’ll be able to share this webinar in the slide deck with your customers.

As a bonus for joining today’s webinar, we are offering $100 AWS credits. The first 100 attendees to register during this webinar will receive a $100 AWS credit, and the link for that credit will be announced at the end of the webinar.

For our agenda today, we’re going to be talking about what is a Virtual Private Cloud or VPC. We’re going to talk about the 10 lessons that we’ve learned from configuring all these VPCs. We’ll also tell you a bit about how SoftNAS uses VPCs, and we’ll give you a bit of an overview about SoftNAS as well.

Finally, at the end of the webinar, we are going to be answering any questions that may pop up during the webinar.

Just a little bit of background here. We’ve configured over 2,000 Amazon VPCs for our customers and some of the customers that we’ve configured the VPCs for are listed out here.

We've got a wide range of experience in both the SMB and the Fortune 500 market. Companies like Nike, Boeing, Autodesk, ProQuest, and Raytheon have all had their VPCs configured by SoftNAS.

Just to give you a brief overview of what we mean by SoftNAS. SoftNAS is our product that we use for helping manage storage on AWS. You can think of it as a software-defined NAS.

Instead of having a physical NAS as you do in a traditional data center, our NAS is software-defined and it's based fully in the cloud with no hardware required. It's easy to use.

You can get up and running in under 30 minutes and it works with some of the most popular cloud computing platforms so Amazon, VMware, Azure, and CenturyLink Cloud.

With that, I’ll go ahead and hand it over to Eric to talk about lessons learned from our VPCs. Eric, I’m going to go ahead and make you the presenter.

Eric: Thank you very much, Taran. Hopefully, everybody can hear me well as well as see my screen. Thanks again for taking the time out to join this morning or this afternoon, depending upon your time zone.

Let’s go ahead and dive into it. Just one other reminder; if you have any questions, go ahead and stick them into the chat, and at the end, we will get to them and I will hopefully answer as many of them as I can.

Let's get started with some terminology and some overview before we dive in. What is a VPC, or a Virtual Private Cloud? It can be broken down in a couple of different ways, and we're going to break this down from how Amazon Web Services looks at it.

It’s a virtual network that’s dedicated to you. It’s essentially isolated from other environments in the AWS Cloud. Think of it as your own little mini data center inside of the AWS data center.

It's a location that you launch resources into, and it allows you to logically group them together for control. It gives you configuration flexibility: you use your own private IP addresses, create different subnets and routing, decide whether you want to allow VPN access in and how you want to do internet access out, and configure different security settings from a security group and access control list point of view.

The main things to look at that I see are around control. What is your IP address range? How is the routing going to work? Are you going to allow VPN access? Is it going to be a hardware device at the other end? Are you going to use Direct Connect? How are you going to architect your subnets?

These are all questions. I'm going to cover some of the tips and tricks that I have learned throughout the years. Hopefully, these will be things that help everyone, because there is not really a great VPC book or solid guidance out there. It's just a smattering of different tidbits and tricks from different people.
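To make that configuration flexibility concrete, here is a minimal boto3 sketch of creating a VPC with a CIDR block you choose. The region, address range, and tag values are placeholders of mine, not values from the webinar:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

# Create a VPC with your own private address range.
vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Name it right away; tagging comes up again in the lessons learned below.
ec2.create_tags(Resources=[vpc_id],
                Tags=[{"Key": "Name", "Value": "webinar-demo-vpc"}])
print("Created", vpc_id)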

You also get security groups and ACLs, as well as some specific routing rules. There are some specific features that are available only in VPCs. You can configure multiple NIC interfaces.

You can set static private IPs so that you don't ever lose that private IP when the machine is stopped and started, and certain instance types, such as the T2s and the M4s, can only be launched within a VPC.

This is the way that you can perform your hybrid cloud setup or configuration. You could use Direct Connect, for example, to securely extend your on-premises location into the AWS Cloud, or you could use a VPN connection over the internet to also extend your on-premises location into the cloud.

You can peer the different VPCs together. You can actually use multiple different VPCs and peer them together for different organizational needs. You can also peer them together with other organizations for access to certain things — think of a backend supplier potentially for inventory control data.

Then there are flow logs that help you with troubleshooting. Think of this, for those of you that have a Linux background or any type of networking background, like a tcpdump or a Wireshark ability to look at the packets and how they flow, and that can be very useful when you're trying to do some troubleshooting.

Just some topology guidance, so hopefully you'll come away with something useful here. A VPC is used in a single region but can span multiple availability zones.

It will extend across at least two zones because you're going to have multiple subnets. Each subnet lives in a single availability zone. If you configure multiple subnets, you can spread them across multiple zones. You can take the default, or you can dedicate a specific subnet to a specific zone.

All the local subnets have the ability to reach each other and route to each other by default. The subnet sizes can be from a /16 to a /28 and you can choose whatever your IP prefix is.
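As an illustration of that layout, here is a hedged boto3 sketch that carves two subnets out of such a VPC, each pinned to a different availability zone. The VPC ID, CIDRs, and zone names are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC ID

# One subnet per availability zone; each subnet lives in exactly one AZ.
subnet_a = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.1.0/24",
                             AvailabilityZone="us-east-1a")
subnet_b = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.2.0/24",
                             AvailabilityZone="us-east-1b")
print(subnet_a["Subnet"]["SubnetId"], subnet_b["Subnet"]["SubnetId"])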

How can you access traffic within the virtual private cloud environment? There are multiple different gateways. What do these gateways mean and what do they do? You hear these acronyms IGW, VGW, and CGW. What does all this stuff do?

These gateways generally are provisioned at the time of VPC creation, so keep that also in mind. The internet gateway is an ingress and egress for internet access.

You can point specific machines or specific routes in your VPC out over the internet gateway to access resources outside of the VPC, or you can restrict that and not allow it to happen. That's all based upon your organizational policy.
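For illustration, here is a boto3 sketch that creates an internet gateway and routes only a dedicated public route table through it, rather than the main route table; the IDs are placeholders. This lines up with the lesson later in the webinar about not pointing your default route at the internet gateway:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"               # placeholder
public_subnet_id = "subnet-0aaa1111bbb2222cc"  # placeholder

igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# A dedicated route table for the public subnet only; the main route
# table keeps no internet route.
rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet_id)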

A virtual private gateway is basically the AWS side of a VPN connection. If you're going to have VPN access into your VPC, this is the gateway on the AWS side of that connection, and the customer gateway is the customer side of the VPN connection within a specific VPC.

For VPN options, you have multiple choices. I mentioned Direct Connect, which would essentially give you dedicated bandwidth to your VPC. If you wanted to extend your on-premises location up into the cloud, you could leverage Direct Connect for your higher-bandwidth, lower-latency connections.

Or, if you just wanted to make a connection quickly and you didn't necessarily need that level of throughput or performance, you can stand up a VPN channel.

Most VPN vendors like Cisco and others are supported, and you can easily download a template configuration file for those major vendors directly.

Let’s talk a little bit about how the packets flow within a VPC. This is one of the things that I really wish I had known earlier on when I was first delving into configuring SoftNAS instances inside of VPCs and setting up VPCs for customers in their environments.

There is not really great documentation out there on how packets get from point A to point B under specific circumstances. We're going to come back to this a couple of different times, but you've got to keep in mind that we've got three instances here — instance A, B, and C — installed on three different subnets, as you can see across the board.

How do these instances communicate with each other? Let's look at how instance A communicates with instance B. The packets hit the routing table. They hit the node table. They go out through the outbound firewall.
They hit the source and destination check that occurs, and then the outbound security group is checked. Then, on the receiving side, the inbound security group, the source and destination check, and the firewall are checked.

This gives you an idea if you make different configuration changes in different areas, where they actually impact and where they actually come into play. Let’s just talk about how instances would talk B to C.

Let’s go back to this diagram. We’ve already shown how A would communicate with B. How do we get over here to this other network? What does that actually look like from a packet flow perspective?

This is how it looks from an instance B perspective to try to talk to instance C, where it’s actually sitting on two subnets and the third instance (instance C) is on a completely different subnet.

It actually shows how these instances and the packets would flow out to a completely different network, and this depends on which subnet each instance is configured in.

Some of the lessons that we've learned over time. These are personal lessons that I have learned, the things I wish somebody had handed me on a piece of paper on day one: what I would want to have known going into setting up different VPCs, and some of the mistakes that I've made throughout my time.

Number one is tag all of your resources within AWS. If you’re not doing it today, go do it. It may seem trivial, but when you start to get into multiple machines, multiple subnets, and multiple VPCs, having everything tagged so that you can see it all in one shot really helps not make big mistakes even bigger.
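As a sketch of what that tagging discipline might look like with boto3 (the resource IDs and tag keys here are just examples, not a prescribed scheme):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Resource IDs are placeholders; tag VPCs, subnets, instances, and ENIs alike.
resources = ["vpc-0123456789abcdef0", "subnet-0aaa1111bbb2222cc"]
ec2.create_tags(Resources=resources, Tags=[
    {"Key": "Name",        "Value": "prod-storage"},
    {"Key": "Environment", "Value": "production"},
    {"Key": "Owner",       "Value": "storage-team"},
])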

Plan your CIDR block very carefully. Once you set this VPC up, you can't make it any bigger or smaller. That's it; you're stuck with it. Go a little bit bigger than you think you may need, because everybody who undersizes their VPC ends up wishing they hadn't.

Remember that AWS takes five IPs per subnet. They just take them away for their use. You don’t get them. Avoid overlapping CIDR blocks. It makes things difficult.

Save some room for future expansion, and remember, you can't ever add more. There are no more IPs once you set up the overall CIDR.
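A quick way to sanity-check a CIDR plan with Python's standard library, remembering the five addresses AWS reserves in every subnet:

import ipaddress

vpc = ipaddress.ip_network("10.20.0.0/16")
subnet = ipaddress.ip_network("10.20.1.0/24")

usable = subnet.num_addresses - 5   # AWS reserves 5 IPs in each subnet
print(f"{subnet} gives {usable} usable addresses")            # 251
print(f"{vpc} holds {vpc.num_addresses} addresses in total")  # 65536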

Control the network properly. What I mean by that is use your ACLs, use the things in the security groups. Don’t be lazy with them. Control all those resources properly. We have a lot of resources and flexibility right there within the ACLs and the security groups to really lock down your environment.

Understand what your subnet strategy is. Is it going to be smaller networks, or are you just going to hand out class Cs to everyone? How is that going to work? If your subnets aren't associated with a very specific routing table, know that they are associated with the main routing table by default, and only one routing table is the main.

I can’t tell you how many times I thought I had configured a route properly but hadn’t actually assigned the subnet to the routing table and put the entry into the wrong routing table. Just something to keep in mind — some of these are little things that they don’t tell you.
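Here is the kind of explicit association that avoids that mistake, as a hedged boto3 sketch with placeholder IDs:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

route_table_id = "rtb-0123456789abcdef0"   # placeholder
subnet_id = "subnet-0aaa1111bbb2222cc"     # placeholder

# Explicitly tie the subnet to the routing table you intend it to use,
# instead of letting it silently fall back to the main route table.
ec2.associate_route_table(RouteTableId=route_table_id, SubnetId=subnet_id)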

I've seen a lot of people configure things by aligning their subnets to different tiers. They have the DMZ tier, the proxy tier, and so on, each with its own subnet. There are subnets for load balancing, subnets for the application, and subnets for databases. If you're going to use RDS instances, you're going to have to have at least three subnets, so keep that in mind.

Set your subnet permissions to “private by default” for everything. Use Elastic Load Balancers for filtering and monitoring frontend traffic. Use NAT to gain access to public networks. If you decide that you need to expand, remember the ability to peer your VPCs together.

Also, Amazon has endpoints available for services that exist within AWS such as S3. I highly recommend that you leverage the endpoints capability within these VPCs, not only from a security perspective but from a performance perspective.

Understand that if you try to access S3 from inside of the VPC without an endpoint configured, it actually goes out to the internet before it comes back in so the traffic actually leaves the VPC. These endpoints allow you to actually go through the backend and not have to go back out to the internet to leverage the services that Amazon is actually offering.
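A boto3 sketch of adding a gateway endpoint for S3 so that traffic stays inside the VPC; the service name shown is for us-east-1, and the IDs are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",             # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 gateway endpoint service
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder
)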

Do not set your default route to the internet gateway. This means that everybody is going to be able to get out. And in some of the defaulted wizard settings that Amazon offers, this is the configuration so keep it in mind. Everyone would have access to the internet.

Do use redundant NAT instances if you're going to go with the instance mode, and there are some CloudFormation templates that exist to make this really easy to deploy.

Always use IAM roles. It's so much better than access keys. It's so much better for access control, and it's very flexible. In just the last 10 days, you can now actually attach an IAM role to a running instance, which is fantastic. It's even easier to leverage now that you don't have to redeploy compute instances to attach and set IAM roles.
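And here is roughly what attaching an IAM role, via its instance profile, to an already running instance looks like in boto3; the profile name and instance ID are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach an existing instance profile (which wraps the IAM role) to a
# running instance, with no need to redeploy the instance itself.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "softnas-s3-access"},  # placeholder profile
    InstanceId="i-0123456789abcdef0",                  # placeholder instance
)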

After we've been through all of that, how does SoftNAS actually fit into using a VPC, and why is this important? SoftNAS has a high-availability architecture leveraging our SNAP HA feature, which provides failover across zones, so multi-AZ high availability. We leverage our own secure block replication using SnapReplicate to keep the nodes in sync, and we can provide a no-downtime guarantee within Amazon if you deploy SoftNAS in the multi-AZ configuration in accordance with our best practices.

This is how it looks, and we actually offer two modes of high availability within AWS. The first is the Elastic IP-based mode, where essentially two SoftNAS controllers are deployed in a single region, each of them in a separate zone.

They would be deployed in the public subnet of your VPC and given Elastic IP addresses, and one of these Elastic IPs would act as the VIP, or virtual IP, to access both controllers. This would be particularly useful if you have on-premises resources, for example, or resources outside of the VPC that need to access this storage, but it is not the most commonly deployed use case.

Our private virtual IP address configuration is really the most common way that customers deploy the product today, and this is probably at this point about 85 to 90-plus percent of our deployments is in this cross-zone private approach, where you deploy the SoftNAS instance in the private subnet of your VPC.

It's not sitting in the public subnet, and you pick any IP address that exists outside of the CIDR block of the VPC in order to have high availability. Then you just mount your NFS clients or map your CIFS shares to that private virtual IP that exists outside of the CIDR block for the overall VPC.

Some common mistakes that we see when people have attempted to deploy SoftNAS in a high availability architecture in VPC mode. You need to deploy two ENIs or Elastic Network Interfaces on each of the SoftNAS instances.

If you don’t catch this right away when you deploy it…Of course, these ENIs can be added to the instance after it’s already deployed, but it’s much easier just to go ahead and deploy the instances with the network interface attached.

Both of these NICs need to be in the same subnet. If you deploy an ENI, you need to make sure that both of them are in the same subnet. We do require ICMP to be open between the two zones as part of our health check.
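A hedged boto3 sketch of opening ICMP between the two SoftNAS nodes for that health check, assuming both nodes share one security group; the group ID is a placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

node_sg = "sg-0123456789abcdef0"   # placeholder: security group on both nodes

# Allow all ICMP between instances that carry this security group.
ec2.authorize_security_group_ingress(
    GroupId=node_sg,
    IpPermissions=[{
        "IpProtocol": "icmp",
        "FromPort": -1,            # -1 means all ICMP types
        "ToPort": -1,
        "UserIdGroupPairs": [{"GroupId": node_sg}],
    }],
)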

The other problem we see is that people are not providing access to S3. As part of our HA, we provide a third-party witness, and that third-party witness is an S3 bucket.

So, therefore, we require access to S3, so that would require an endpoint or access out of your data infrastructure.

For Private HA mode, the VIP IP must not be within the CIDR of the VPC in order to overcome some of the networking limitations that exist within Amazon. Taran, I’m going to turn it back over to you. That concludes my portion of the presentation.

Taran: Thanks, Eric. I just want to confirm that, yes, my screen is showing. Just to give everyone a brief overview of SoftNAS Cloud: basically, what we are is a Linux virtual appliance that's available on the AWS Marketplace. You are able to go to SoftNAS on AWS, spin up an instance, and get up and running in about 30 minutes.
As you can see on the slide here, our architecture is based on ZFS on Linux. We have an HTML5 GUI that’s very accessible and easy to use. We do work on a number of cloud platforms including AWS, as well as Amazon S3 and Amazon EBS.

To give you a brief overview of our partners here, we do partner heavily with Amazon Web Services and that’s why we’re doing this webinar. We also do partner with a couple of other technology providers as well – NetApp, SanDisk, VMware; and some other cloud partners as well including 2ndWatch, CloudHesive, and RELIS.

As we said earlier on in the webinar, in order to reward everyone for joining today’s webinar, if you go ahead and click on the link here, we’re offering $100 AWS credits to everyone who joined in.

The first 100 people to register by clicking on this link right down here on the bottom will be able to go in and register for a free $100 AWS credit. Then someone from our team will be in touch with you and will send you a code for that credit.

Before we begin the Q&A session, I’d just like to remind everyone, if you do have any questions that you wanted to ask us, please feel free to put those questions in the questions pane. I can see we’ve gotten a number of questions already. We’ll go ahead and answer them in just a few moments.

What we would like to invite everyone to do is to try SoftNAS free for 30 days on AWS. You’ll see here, we have some links on the right that will go ahead and give you some more information about SoftNAS.

This first link, “Learn More,” takes you to our page and lets you learn more about how SoftNAS works on AWS and answers any questions you might have about some of the features and the tech specs required in order to use SoftNAS.

Also, we do offer a 30-day free trial as well. For those of you who are interested in trying out SoftNAS, just go ahead and click on the “Free Trial” link and we will go ahead and allow you to do a free 30-day trial of SoftNAS.

During the free 30-day trial, our support team is available to answer any questions that you may have, and we are happy to work with you and help you with your specific use-case.

We also do invite you to contact us as well. If you have any questions about your specific use-case or how SoftNAS can help you with cloud migrations, setting up your VPCs, just go ahead and contact us and someone from our team will be in touch with you shortly.

If you also want to see a one-on-one demo of SoftNAS, that’s how you can get the demo as well. Finally, our support team is also available to answer any questions you may have. For those of you who are currently using SoftNAS or are thinking about using SoftNAS, go ahead and reach out to our support team.

That covers this slide. Now we’ll go ahead and answer some of the questions that all of you have been asking during today’s webinar.

The first question that we have is, “How do you assign an IP that’s not in any of the subnets of the VPC?” Eric?

Eric: Actually, that's pretty simple. I know this sounds crazy, but you just make it up. It just needs to be any IP address that's not within the CIDR block of the VPC. When you go and set up SoftNAS high availability, it will ask you what that's going to be.

Once that’s done, we’ll go ahead and take care of everything from there by adding the static routes into the routing table automatically. It’s just any IP that’s not within the CIDR block of the VPC.
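To illustrate the idea behind that automation (this is only a sketch, not SoftNAS's actual code; the VIP, route table ID, and ENI ID are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vip = "99.99.99.99/32"                     # any /32 outside the VPC CIDR
route_table_id = "rtb-0123456789abcdef0"   # placeholder
active_node_eni = "eni-0123456789abcdef0"  # ENI of the current primary node

# Point the virtual IP at the active controller's network interface.
# On failover, the same route would be repointed at the standby's ENI.
ec2.create_route(RouteTableId=route_table_id,
                 DestinationCidrBlock=vip,
                 NetworkInterfaceId=active_node_eni)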

Taran: The next item that we have here is probably more of a comment. Regarding redundant NAT instances, it would be good to stress that NAT gateways are now available, which makes NAT use very easy. Any comments on that, Eric?

Eric: Yeah, they are great. Some people can use them if your organization permits. They are fantastic. NAT gateways are great and it’s basically like a service. Instead of having to deploy physical compute, you can leverage the services for it.

Taran: The next question we have here is, “SoftNAS basically works like a NAS but in the cloud, is that correct?”

Eric: That’s correct, yes.

Taran: The next question that we have here is, “When I move storage to the cloud, do I also have to move my SQL server as well or can that stay on-prem?”

Eric: It can stay on-prem. However, you would be much better served by keeping all of the applications that use the storage as close together as possible. Putting a WAN between the storage that the database is leveraging and the database itself could lead to some performance challenges, so take that into consideration before going ahead and putting it into production.

Taran: The next question that we have here is, “What’s the storage limit for SoftNAS on AWS?”

Eric: The storage limit for SoftNAS is 16 petabytes on AWS and that can be achieved via the use of both EBS and a combination of EBS and S3.

Taran: The next question that we have here is, “I saw on one of the slides that SoftNAS supports NFS, CIFS, and iSCSI. Is it true that SoftNAS also supports AFP for my Mac clients?”

Eric: That is true.

Taran: That looks like that’s all the questions that we have today. We want to thank everyone for joining today’s webinar. If you have any questions for SoftNAS, please feel free to reach out and we’ll be happy to answer any questions that you may have.

Again, we invite everyone to try out SoftNAS for 30 days on AWS, and we'll be happy to go ahead and set you up and help you out.

As a reminder, this webinar recording is being sent out as a link shortly after today’s webinar is over. So if you want to pass that email or that link around to your colleagues, you’ll be able to do that through the email that we send. The video will also be on YouTube and the slides will be on SlideShare.

Once again, we want to thank everyone for joining today’s webinar. All of you have a fantastic day.

Webinar: How To Reduce Public Cloud Storage Costs


The following is a recording and full transcript from the webinar, "How To Reduce Public Cloud Storage Costs". You can download the full slide deck on Slideshare.

Full Transcript: How To Reduce Public Cloud Storage Costs

John Bedrick:  Hey! Welcome to SoftNAS webinar on how to reduce public cloud storage costs. We’ll be starting officially in about one to two minutes to allow all the attendees the option of getting in at the beginning of the webinar. Thank you, and just stay tuned.

Hey! We’ll get started in about one minute.

John: All right, welcome to SoftNAS webinar on how to reduce public cloud storage costs. My name is John Bedrick. I am the head of product marketing management at SoftNAS, and I’ll be conducting this webinar.

Please use the chat capability on the side to ask any questions which we will address during the question and answer session at the end of our webinar.

If you wish to reach out to me after the webinar, to ask some specific questions that you didn’t want to ask in a public forum, please feel free to do so. My email address is jbedrick@softnas.com. Let’s get started.

What we’re going to cover today are just a few items. We are going to cover the trends in data growth. We’re also going to cover, sort of as a primary if you will, what to look for in public cloud solutions.

From there, we're going to transition over to how SoftNAS can help with our product, SoftNAS 4. We'll also conduct a brief demo of our brand-new Storage Cost-Savings Calculator, which will help you understand how much you can save by utilizing SoftNAS 4 in a public cloud storage environment.

Last, we'll have some closing remarks, and then we'll follow up with the Q&A. Again, please use the chat facility to ask any of your questions.

The amount of data that is being created by businesses is staggering. It's on the order of doubling every 18 months. This is really an unsustainable long-term issue when you compare it with how IT budgets are growing.

IT budgets on average are growing maybe about 2 to 3% annually. Obviously, according to IDC, by 2020 which is not that far off, 80% of all corporate data growth is going to be unstructured — that’s your emails, PDF, Word Documents, images, etc. — while only about 10% is going to come in the form of structured data like databases, for example. And that could be SQL databases, NoSQL, XML, JSON, etc.

Meanwhile, by 2020, we're going to be reaching 163 zettabytes worth of data at a pretty rapid rate. If you compound that with some brand-new sources of data that we hadn't really dealt with much in the past, it's really going to be challenging for businesses to try to control and manage that when you add in things like the Internet of Things and big data analytics, all of which will create gaps between where the data is produced versus where it's going to be consumed, analyzed, and backed up.

Really, if you look at things even from a consumer standpoint, almost everything we buy these days generates data that needs to be stored, controlled, and analyzed – from your smart home appliances, refrigerators, heating, and cooling, to the watch that you wear on your wrist, and other smart applications and devices.

If you look at 2020, the number of people that will actually be connected will reach an all-time high of four billion and that’s quite a bit. We’re going to have over 25 million apps. We are going to have over 25 billion embedded and intelligent systems, and we’re going to reach 50 trillion gigabytes of data – staggering.

In the meantime, the data isn't confined merely to traditional data centers anymore, so there's a gap between where it's stored and where it's consumed, and the preferred place for data storage is not going to be your traditional data center anymore.

Businesses are really going to be in a need of a multi-cloud strategy for controlling and managing this growing amount of data.

If you look at it, 80% of IT organizations will be committed to hybrid architectures, and this is according to IDC. In another study, the "Voice of the Enterprise" by the 451 Research Group, it was found that 60% of companies will actually be operating in a multi-cloud environment by the end of this year.

What to do? While data is being created faster than IT budgets are growing, you can see from the slide that there's a huge gap, which leads to frustration in the IT organization.

Let’s transition to how do we address and solve some of these huge data monsters that are just gobbling up data as fast as it could be produced and creating a huge need for storage.

What do we look for in a public cloud solution to address this problem? Well, some of these have been around for a little while.

Data storage compression. Now, for those of you who haven’t been around the industry for very long, data storage compression basically removes the whitespace between and in data for more efficient storage.

If you compress the data that you're storing, then you get a net benefit of savings in your storage space, and that, of course, immediately translates into cost savings. Now, all of those savings are subject to the types of data that you are storing.

Not all cloud solutions, by the way, include the ability to compress data. One example that comes to mind is a very well-promoted cloud platform vendor’s offering that doesn’t offer compression. Of course, I am speaking about Amazon’s Elastic File System or EFS for short.

EFS does not offer compression. That means you either need a third-party compression utility to compress your data prior to storing it in the cloud on EFS or solutions like it, which can lead to all sorts of potential issues down the road, or you need to store your data in an uncompressed format; and of course, if you do that, you're paying unnecessarily more money for that cloud storage.

Another technology is referred to as deduplication. What is deduplication? Deduplication sounds and is exactly what it sounds like; it is the elimination or reduction of data redundancies.

If you look at all of the gigabytes, terabytes, petabytes that you might have of data, there is going to be some level of duplication. Sometimes it’s a question of multiple people may be even storing the exact same identical file on a system that gets backed-up into the cloud. All of that is going to take up additional space.

If you're able to deduplicate the data that you're storing, you can achieve some significant storage-space savings, which translate into cost savings, and that, of course, is subject to the amount of repetitive data being stored.
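To make the idea concrete, here is a toy Python sketch of block-level deduplication: hash each fixed-size block and store each unique block only once. This is purely to illustrate the concept, not how any particular storage product implements it:

import hashlib

BLOCK_SIZE = 4096

def dedupe(path, store):
    # Read a file in fixed-size blocks; keep only blocks not seen before.
    refs = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:        # a new unique block gets stored
                store[digest] = block
            refs.append(digest)            # duplicates become pointers
    return refs

store = {}
refs = dedupe("example.bin", store)   # "example.bin" is a placeholder file
print(f"{len(refs)} blocks referenced, {len(store)} unique blocks stored")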

Just like I mentioned previously with compression, not all solutions in the cloud include the ability to deduplicate data. Just as in the previous example that I had mentioned about Amazon’s EFS, EFS also does not include native deduplication.

Either, again, you’re going to need a third-party dedupe utility prior to storing it in EFS or some other similar solution, or you’re going to need to store all your data in an un-deduped format on the cloud. That means you’re, of course, going to be paying more money than you need to.

Let’s just take a look at an example of two different types of storage at a high level. What you’ll take away from this slide, I hope, is that you will see that object storage is going to be much more cost-effective especially in the cloud.

Just a word of note. All the prices that I am displaying in this table are coming from the respective cloud platform vendors on the West Coast pricing. They offer different prices based on different locations and regions. In this table, I am using West Coast pricing.

What you will see is that the more high-performing public cloud block storage costs are relatively more expensive than the lower performing public cloud object storage.

In this example, you can see ratios of five or six or seven to one where object storage is less expensive than block storage. In the past, typically what people would use that object storage for would be to put less active storage and data into the object storage.

Sort of more of a longer-term strategy. You can think of it as maybe akin to the more legacy-type drives that are still being used today.

Of course, what people would do will be putting their more active data in block storage.

If you follow that and you're able to make use of object storage in a way that's easy for your applications and your users to access, then that works out great.

If you can’t…Most solutions out in the market today are unable to utilize access to cloud-native object storage and so they need something in between to be able to get the benefit of that.

Similarly, being able to get cloud-native access to block storage also would require access to some solution for that and there are a few out in the market and, of course, SoftNAS is one of those. I’ll go into that a little bit later.

If you’re able to make use of object storage, what are some of the cool things you can do to save more money besides using object storage just by itself?
A lot of applications require high availability. High availability is exactly what it sounds like. It is maintaining a maximum amount of up-time for applications and access to data.

In the past, on legacy on-premises storage systems, there has been the ability to have two compute instances access a single storage pool, where they both share access to the same storage pool, and it hasn't been fully brought over into the public cloud until recently.

If you're able to do this as this diagram shows, having two compute instances access an object-storage storage pool, that means you're relying on the robust nature of public cloud object storage. The figures typically quoted for public cloud object storage durability are at least ten or more 9s. That would be 99.99999999% or better, which is pretty good.

The reason why you would have two compute instances is that the SLA for the compute is not the same as the SLA for the storage. Your compute instance can go down in the cloud just like it could on an on-premises system, but at least your storage would remain up, using object storage.

If you have two compute instances running in the cloud and one of those — we'll call it the primary node — was to fail, then the failover would be to the second compute instance, or as I'm referring to it on this diagram, the secondary node, and it would pick up.

There would be some amount of delay switching from the primary to the secondary. There will be a gap if you are actively writing to the storage during that period, but then you would pick back up within a period of time, call it less than five minutes, for example, which is certainly better than being down for the complete duration until the public cloud vendor gets everything back up.
Just remember that not every vendor offers this solution, but it can reduce your overall public cloud storage cost by half. If you don't need to have twice the storage for a fully highly available system, and you can do it all with object storage and just two compute instances, you're going to save roughly 50% of what the cost would normally be.

The next area of savings is one that a lot of people don’t necessarily think about when they are thinking about savings in the cloud and that is your network connection and how to optimize that high-speed connection to get your data moved from one point to another.

Traditional approaches involve filling lots of physical hard drives or storage systems, putting them on a truck, and having that truck drive over to your cloud provider of choice, then taking those storage devices and physically transferring the data from them into the cloud or possibly mounting them. That can be very expensive and filled with lots of hidden costs. Plus, you really do have to recognize that you run the risk of your data getting out of sync between the originating source in your data center and the ultimate cloud destination, all of which can cost you money.

Another option, of course, is to lease some high-speed network connections between your data center or data source and the cloud provider of your choice. That also can be very expensive. A 1G network connection or a 10G network connection is pricey.

Having the data transfer taking longer than it needs to means that you have to keep paying for those leased lines, those high-speed network connections, longer than you would normally want.

The last option, which would be transferring your data over slower, noisier, more error-prone network connections, especially in some parts of the world, is going to take longer due to the quality of your network connection and the inherent nature of the TCP/IP protocol.

If it needs to retransmit that data, sometimes because of those error conditions or drops or noise or latency, the process is going to become unreliable.

Sometimes the whole process of data transfer has to start over right from the beginning so all of the previous time is lost and you start from the beginning. All of that can result in a lot of time-consuming effort which is going to wind up costing your business money. All of those factors should also be considered.

The next option I'm going to talk to you about is one that's interesting. That is, assuming that you can make use of both object storage and block storage together, creating tiers of storage where you are making use of the high-speed, higher-performing block storage on one tier and then also using other tiers which are less performant and less expensive.

If you can have multiple tiers, where your most active data is only contained within the most expensive higher performing tier, then you are able to save money if you can move the data from tier to tier.

A lot of solutions out in the market today are doing this via a manual process. Meaning that a person, typically somebody in IT, would be looking at the age of the data and moving it from one storage type to another storage type, to another storage type.

If you have the ability to create aging policies that can move the data from one tier to another tier, to another tier and back again, as it’s being requested, that can also save you considerable money in two ways.

One way is, of course, that you're only storing and using the data on the tier of storage that is appropriate at the given time, so you're saving money on your cloud storage. Also, if it can be automated, you're saving money on the labor that would otherwise manually move the data from one tier to another. It can all be policy-driven, so you'll save money on the labor for that.
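A toy sketch of what such an aging policy boils down to: classify data by age and assign it to a tier. The thresholds, tier names, and per-file approach here are made up for illustration; real tiering products generally work at the block level:

import os
import time

# Hypothetical policy: maximum age in days mapped to a storage tier.
POLICY = [(30, "tier1-block-ssd"), (90, "tier2-block-magnetic")]
DEFAULT_TIER = "tier3-object"

def tier_for(path, now=None):
    now = now or time.time()
    age_days = (now - os.path.getmtime(path)) / 86400
    for max_age_days, tier in POLICY:
        if age_days <= max_age_days:
            return tier
    return DEFAULT_TIER

for root, _dirs, files in os.walk("/data"):    # "/data" is a placeholder path
    for name in files:
        path = os.path.join(root, name)
        print(path, "->", tier_for(path))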

These are all areas in which you should consider looking at to help reduce your overall public cloud storage expense.

I’m going to transition, here now, to what SoftNAS can offer in helping you save money in the public cloud. We are going to address all of the same areas that we talked about before plus a little bit more besides.

The first one that we had started with before was compression. SoftNAS cloud contains a compression utility – it’s built into the product, it’s no extra charge, it’s just one of our features.

SoftNAS utilizes the LZ4 lossless data compression algorithm, which is focused on compression and decompression speed, so it is optimized for speed.

Of course, your storage space savings is going to be dependent on data type. However, on average, your savings will fall somewhere between 2X and 90X of space savings. Yes, that’s quite a range.

If you look at the table that I have off to the right, just as some examples, you can see the percent range for some common file types. You can see that, for example, a text file can go from 0% all the way up to 99% space savings with an average space savings of about 73%.

Then you can see some Word files, Executable files, and Photo files. You can see things like picture files in a jpeg format. They’ll compress down as well as some other files but that’s going to vary.

Again, just to give you some idea that data type does have an absolute impact on ultimately the compression ratio that you are going to achieve and the space savings that that’s going to translate into.
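If you want to see for yourself how data type drives the ratio, here is a small Python sketch that measures percent space savings. It uses zlib from the standard library purely for illustration; SoftNAS itself uses LZ4, so the absolute numbers will differ:

import os
import zlib

def space_savings(data: bytes) -> float:
    compressed = zlib.compress(data)
    return 100 * (1 - len(compressed) / len(data))

text = b"the same phrase repeated many times " * 1000   # compresses well
already_random = os.urandom(256_000)                    # barely compresses

print(f"text-like data: {space_savings(text):.1f}% saved")
print(f"random data:    {space_savings(already_random):.1f}% saved")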

If there’s any other questions on compression a little bit later, we can pick them up during the Q&A.

The next is deduplication. SoftNAS is built upon OpenZFS, and our in-line deduplication comes to us via ZFS in-line dedupe. The storage space savings from deduplication, again, are totally dependent upon your data redundancy. However, on average, they're going to fall somewhere between 2X and 20X of space savings, and that's also going to translate into a tremendous amount of storage space savings and significant cost savings in the public cloud.

Now, I realized I broke these two apart which I did also at the beginning of the webinar. But if you combine the two, both the compression from LZ4 lossless compression and the ZFS in-line deduplication, you could see that there can be very significant amounts of savings in your public cloud storage.

Again, just a reminder. Not all public cloud offerings contain deduplication and compression. Again, as an example, EFS does not. If you have 1TB of data that’s uncompressed and not deduplicated, if you put it into EFS, you’re going to require 1TB of storage so you gain nothing from the fact that your data could have been deduped and could have been compressed. Just keep that in mind as you’re looking at public cloud offerings.

The next thing I want to talk to you about is SoftNAS ObjFast, and that's our object storage accelerator. The ObjFast capability within SoftNAS takes cloud object storage, for example AWS S3 or Azure Blob, and helps it run very close to block-level performance while taking advantage of object storage pricing, which can result in cost savings on storage of up to two-thirds. You could save up to two-thirds of your storage costs by using SoftNAS and our feature for this called ObjFast.

How does ObjFast work? I’ve realized that we have a lot of bar charts here. By the way, this chart is for Azure. I have another one I’ll follow up with on AWS.

SoftNAS ObjFast makes cloud object storage like Azure Blob run at near block-level performance. You can see that you can actually achieve savings of up to 70% versus using block storage alone. How does ObjFast work?
ObjFast works by throttling data transfer to cloud object storage so we achieve a speed that is as fast as possible but without exceeding Azure object storage read/write capabilities. I’ll talk a little bit more about that in a bit, but let’s switch over to Amazon – AWS.

One other point that I wanted to make. All of the charts that you see for Azure are using the DS15 compute capability. All of the graphs that you see on this slide in Amazon are using an Amazon EC2 instance of c5.9xlarge, just to put that into perspective.

Again here, ObjFast is making cloud object storage like AWS S3 run at near block-level performance, and you can get resulting savings of up to 67% on Amazon versus block-level storage alone.

Next, I'm going to go into what SoftNAS is calling our Dual Controller High Availability, or DCHA for short. DCHA uses object storage only, and it's an excellent way of leveraging object storage resiliency to lower costs as well as lower replication overhead, meaning you don't need to have two destination copies of the data.

The public cloud vendors have worked really hard to make their object storage offerings highly scalable and reliable, with ten to twelve 9s of durability. That means you can achieve high availability using a single storage pool of object storage versus needing to double that storage amount, as is required by regular high availability as done by SoftNAS and some other vendors that use replication to move data from one storage pool to another.

Using SoftNAS’s patent-pending Dual Controller High Availability, you can save at least half of your storage that you would need for standard high availability, and of course, that’s going to result in significant public storage cloud savings.

Switching to the WAN optimization section that we had talked about earlier, SoftNAS has a patented UltraFast technology and this saves costs by saving on the time needed to accelerate global bulk data.

UltraFast is able to operate up to 20 times faster than standard TCP/IP and also at one-tenth of the cost of alternative bulk data transfer solutions. SoftNAS UltraFast accelerates the transfer of data into, out of, and across private/public hybrid clouds and multi-clouds.

UltraFast can overcome up to 4,000 milliseconds of latency and up to 10% of packet loss due to network congestion connecting remote locations with the cloud over any kind of network conditions. This is also a significant time-saver as well as a money saver.

The next feature I want to talk about that’s within SoftNAS is our SoftNAS SmartTiers. SoftNAS SmartTiers is our patent-pending automated storage tiering feature that moves aging data from more expensive high-performance block storage to less expensive object storage, and this is all done according to your own customer-set policies while reducing public cloud storage cost.

What makes the SoftNAS solution interesting is that, as you can see in this slide, there are multiple tiers of storage. Let's call tier 1 the high-speed block storage; tier 2 might be a medium-speed block storage; and tier 3 might be a lower-performing object storage.

From an application or a user-perspective, SoftNAS hides all that and to the application and to the user, it looks just like any other cloud volume. You would access it just like you would any other drive or network share, but behind the scenes, there’s these multiple tiers of storage that are all policy-driven for moving data up and down from tier to tier, to tier and back up again.

In the case of SoftNAS SmartTiers, we're saying that you can save up to 67% on AWS or up to 70% on Azure of your public cloud storage cost. We do this by moving data blocks from tier 1, in this chart, down to tier 3. Although, in our user interface, we can support up to four tiers if you really need that.

We move the data blocks from tier 1, through tier 2, to tier 3, and even tier 4 if you’ve configured that via a policy that you set up. And then we will migrate it back up if you need access to that data.

Obviously, the goal is to only keep the most active data in the most expensive storage types and move the data as quickly as possible to the least expensive least performing tier of storage.

Let's take a look at an example. This is AWS. If I had 100TB of storage that I needed to store and I were to put it all into high-performing block-level EBS SSD storage at Amazon, my cost would be about $123,000.

If I was to put it all in object storage assuming that I could, based on the application, then my cost would be about $37,000. That’s quite a jump between object storage and block storage.

By using SmartTiers, if I chose a ratio of 20% in block-level storage and 80% in object-level storage, I would put 20TB in EBS and 80TB in S3, and my cost would be about $54,000, which is a fairly significant savings compared to what all-EBS would cost.

Let’s take a look at the same thing but in Azure. In Azure, I could save up to 70% using the same example. If I had 100TB in Premium SSD on Azure, that would cost me about $142,000. If I was to do the same 100TB in object storage in Azure Cool Blob, it would cost me about $19,000.

Then again, if I was using SmartTiers and I did the 20%-80% ratio, where 20% is in premium storage and 80% is in object storage, then I would be at about $43,000, so fairly significant savings.
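The arithmetic behind both of those blended numbers is straightforward. Here it is spelled out in Python, using the rounded figures from the examples above:

def blended_cost(block_cost, object_cost, block_share=0.20):
    # Weighted cost when block_share of the data stays on block storage.
    return block_share * block_cost + (1 - block_share) * object_cost

# AWS example: 100TB all-EBS is about $123,000; all-S3 is about $37,000.
print(f"AWS 20/80 split:   ~${blended_cost(123_000, 37_000):,.0f}")   # ~$54,200

# Azure example: 100TB Premium SSD is about $142,000; Cool Blob about $19,000.
print(f"Azure 20/80 split: ~${blended_cost(142_000, 19_000):,.0f}")   # ~$43,600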

We’re nearing the end. I’m going to quickly show you a demo of getting to the Smart Tiers cost-savings calculator for cloud storage.

You navigate to the SoftNAS web page. Then you would scroll down and you would see a “Discover How Much You Could Save On Public Cloud Storage Cost” and a “Calculate My Savings” button. You click on that and you get taken over to the SoftNAS SmartTiers Storage Cost Savings Calculator.

This is meant to be as easy as possible. You come over to “Choose a cloud platform vendor.” In this example, I’m going to choose Amazon Web Services, AWS. The UI wants to know how much you’re going to put of data initially. I’m going to put in 50TB.

Will I be using high-availability? In this example, I’m just going to say no. How many tiers of storage will I be using? I’m going to say three.

I'm going to choose how much year-over-year data growth I'm going to assume I'll have. On this calculator, you can go up to 500% data growth, but in my example here, I'm doing 20%.
What’s my initial storage type? Within Amazon, I’m going to choose one of this high-performing block-level storage so I’m choosing Provisioned IOPS. In my tier 2, I’m going to take Medium Magnetic. For my tier 3 storage, I’m going to go ahead and pick S3 Standard Storage.

How much do I want to keep in tier 1 and how much do I want to keep in tier 2 as a ratio? Those defaults are good for me. The tier 3 by default would be 70%.

I get a bar chart automatically generated for me. It says that my three-year cost without SmartTiers is the bar on the left, and my three-year cost with Smart Tiers is the bar on the right. Also, it calculates what my savings would be over three years.

If I really want to get a detailed report, all I need to do is fill out this form. This is live information. Yes, that’s my real name. I type that in. I type in my company. My company email again, jbedrick@softnas.com if you need to reach me.

Once I fill in all that information, I’m just going to click the “Download my detailed report” and automatically, you’re going to see your report will be generated for you. It’s thinking and thinking. Eventually, in the lower left…
I am using Chrome, in this example, for my browser, by the way. You’ll see the report. I’m going to click on that but you can save it. What I would advise you to do is save your report to your local system.

This is what your report will look like. As we scroll down, we’re going to confirm your entries for your selections. I chose AWS. For my tier 1, 2, and 3, those are the types of storage that I chose.

Scrolling down, you're going to see the same familiar bar chart that you had before, and all the numbers are there. You no longer need to hover to see what they are. A savings of almost $215,000 over three years, using SmartTiers in this example.

What’s my storage growth? Because my data in tier 1 was chosen to be 10%, tier 2 was 20%, and tier 3 by default is 70%, you can see the growth of data over three years in each of those tiers.

What’s my savings over three years? In each year, you can see what my cost would be without SmartTiers and what my cost would be with Smart Tiers. And what my savings would be, in these green bar charts, in year one, two, and three, by using Smart Tiers.

We also give you a table with the actual numbers. You’ll be able to plug those in to any analysis that you’re doing in selecting cloud vendors. That’s all available to you. We hide nothing.
Then we also give you breakdown pie charts for three years. Year one: your cost between tiers 1, 2, and 3. Year two: your cost between tiers 1, 2, and 3. And year three: your cost between tiers 1, 2, and 3.

Please visit the website to get to the calculator at softnas.com. Don’t forget you can also pick up the telephone and reach any of our sales people as well.

Why do customers choose SoftNAS? We have a number of very selective customers who need to have their data available to maximum uptime.

The first reason that they chose us is that we've been in business since 2013 and we've been cloud specialists all that time. We're cloud-native experts. Cloud is what we live, eat, breathe, and sleep, day in and day out.

I also invite you to reach out to any of our cloud experts for consultations so that we can help you with your journey to the cloud. We help you with cost-savings without sacrificing performance.

A lot of people could say that they can save you some money and some may and some may not. If you add in that second part, “Without sacrificing performance,” that’s a critical aspect.

I showed you those bar charts of our performance, of accelerating object storage, for example. We are also fast in our block-level storage access, and pretty much, we will be one of the fastest offerings out there in the market.

Our support team is second to none. We have received more accolades on our support team than we could cover during this webinar.

One of the last couple of things that I want to mention is that we’re very flexible in our offerings so that you can pretty much adopt it and grow with it as your business grows, whether that’s on-premises to private cloud, on-premises to public cloud in some kind of hybrid option or even multi-cloud scenarios.

We have all of that flexibility and our technology just works. That’s what we hear from our customers – it just works.

In terms of some of the customers that trust us — and these are all international companies for the most part — you can see that we have pretty much a who's who in this space, and they span all different industry verticals, running from SMB up to the Fortune 50. They all trust SoftNAS with their data.

At this point, I am going to pause and see if there are any questions that I can answer for you as we go along during this webinar.

We have one question that came in. The question is: we say that we're less expensive than Amazon Elastic File System, or EFS, but we didn't account for the cost of the compute instance, which is included with EFS. That is correct.

We do actually have a slide that I can jump to that will show you an example of EFS. Here, you can see the same slide that I had before except, now, I’m using EFS.

You can see, with EFS, it's about $369,000 for that initial 100TB, and the S3 and the SmartTiers numbers are still the same. I didn't build into this slide what the compute costs would be alongside EFS.

If I click and reveal the next slide, you'll see that I just picked three EC2 instances, and you can see how high the cost of those, over three years, would be. With those EC2 instances added in, you can see we're still significantly cheaper than Amazon EFS. Hopefully that answered your question. Looking for the second question.

The next question. Is your ObjFast patented? Our ObjFast technology is patent-pending and we do expect to receive a patent on that. I do want to mention that SmartTiers is also patented-pending. Our UltraFast technology is patented and not just patent-pending.

Looking around, do we have any other questions from anybody? No. All right. I appreciate everybody's time here today. We thank you, and we hope that you've gotten some ideas of how you can save costs in the public cloud.

We invite you to do several things. First, if you want to learn more, go to our website at softnas.com. We also offer a 30-day free trial so please feel free to go to softnas.com/trynow.

If you are wanting to speak to one of our cloud experts, again, go to softnas.com. There should be a chat feature available on our website. Feel free to go ahead and connect with that and you will be connected with one of our cloud experts.

Last but not least, again, if you would like to reach out to me directly to clarify or answer any other questions that you felt shy about asking in a public forum, please feel free to do so at jbedrick@softnas.com.

With that, we are going to wrap up this webinar. I thank you again for your attendance. It was a great pleasure to be able to present this to you. Thank you, and have a great rest of your day.