Webinar Recap – Three Ways to Slash Your Enterprise Cloud Storage Cost

The following is a recording and full transcript from the webinar, “Three Ways to Slash Your Enterprise Cloud Storage Cost.” You can download the full slide deck on Slideshare.

My name is Jeff Johnson. I’m the head of Product Marketing here at Buurst. In this webinar, we will talk about three ways to slash your Enterprise cloud storage cost.

Companies trust Buurst for data performance, data migration, data availability, and data control and security. What we are here to talk about today is data cost control. Think about the storage vendors out there: storage vendors want to sell more storage.

At Buurst, we are a data performance company. We take that storage, and we optimize it; we make it perform. We are not driven or motivated to sell more storage. We just want that storage to run faster.

We are going to take a look at how to avoid the pitfalls and the traps the storage vendors use to drive revenue, how to prevent being charged or overcharged for storage you don’t need, and how to reduce your data footprint.

Data is increasing every year. 90% of the world’s data has been created over the last two years. Every two years, that data is doubling. Today, IT budgets are shifting. Data centers are closing – they are trying to leverage cloud economics – and IT is going to need to save money at every single level of that IT organization by focusing on this data, especially data in the cloud, and saving money.

We say now is your time to be an IT hero. There are three things that we’re going to talk about in today’s webinar.


We are going to be looking at all the tools and capabilities that you have in on-premises solutions, moving those into the cloud, and seeing which of those capabilities are already available in cloud storage and which are not.

We’ll be taking a look at reducing the total cost of acquisition. That’s just pure Excel-spreadsheet cloud storage numbers: which cloud storage to use that doesn’t tax you on performance. And speaking of performance, we’ll look at reducing the cost of performance, because some people want to maintain performance but with less expense.

I bet we could even figure out how to get better performance at lower cost. Let’s get right down into it.

Reducing that cost by optimizing data

We think about all of these tools and capabilities that we’ve had on our NAS, on our on-premise storage solutions over the years. We expect those same tools, capabilities, and features to be in that cloud storage management solution, but they are not always there in cloud-native storage solutions. How do you get that?

Well, that’s probably pretty easy to figure out. The first one we’re going to talk about is deduplication. This is inline deduplication: files are compared block by block to see which blocks we can eliminate, leaving just a pointer in their place. End-users still think they have the full file, but duplicate blocks are stored only once.



In most cases, deduplication reduces data storage by 20 to 30%, and this becomes even more critical with cloud storage.
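The block-pointer idea above can be sketched in a few lines. This is a toy illustration of hash-based block deduplication, not SoftNAS’s actual implementation; the 4 KiB block size is an assumption for the example:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size for this illustration


def dedupe(data: bytes):
    """Split data into fixed-size blocks; store each unique block once
    and keep an ordered list of hashes that acts as the 'pointers'."""
    store = {}      # hash -> block contents (each unique block stored once)
    pointers = []   # ordered hashes; this is the "file" the user still sees
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)  # duplicate blocks are not stored again
        pointers.append(h)
    return store, pointers


def restore(store, pointers):
    """Rebuild the original data by following the pointers."""
    return b"".join(store[h] for h in pointers)


# Ten logical blocks, but only two distinct ones
data = b"A" * BLOCK_SIZE * 8 + b"B" * BLOCK_SIZE * 2
store, ptrs = dedupe(data)
print(f"logical blocks: {len(ptrs)}, stored blocks: {len(store)}")
```

Here ten logical blocks deduplicate down to two stored blocks, while `restore` still hands the user back the full file.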

The next one we have is compression. With compression, we reduce the number of bits needed to represent the data. Typically, we can reduce the storage cost by 50 to 75%, depending on how compressible the files are, and this is turned on by default with our SoftNAS product.
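The achievable ratio depends entirely on the data. As a quick illustration (using Python’s zlib, not the compression SoftNAS itself uses), highly repetitive log-like text compresses far beyond the 50% mark:

```python
import zlib

# Text-like data with lots of repetition, as is typical of logs and documents
data = b"user=alice action=login status=ok\n" * 2000

compressed = zlib.compress(data, level=6)
ratio = 1 - len(compressed) / len(data)
print(f"{len(data)} bytes -> {len(compressed)} bytes ({ratio:.0%} smaller)")
```

Already-compressed formats (JPEG, video, archives) will show little or no gain, which is why real-world savings land in a range rather than at a fixed number.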



The last one we want to talk about is data tiering. 80% of data is rarely used past 90 days, but we still need it. With SoftNAS, we have data tiering policies, or aging policies, that move data from more expensive, faster storage to less expensive storage, all the way back to ice-cold storage.
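A toy sketch of such an aging policy: move files whose last access is older than 90 days into a cheaper tier. Real NAS tiering works at the block level and leaves a stub so clients still see the file in place; the function and directory names here are purely illustrative:

```python
import os
import shutil
import time

NINETY_DAYS = 90 * 24 * 3600


def tier_cold_files(hot_dir, cold_dir, now=None):
    """Move files not accessed in 90 days from hot_dir to cold_dir.
    A toy stand-in for a NAS aging policy; real tiering is block-level
    and keeps a stub so clients still see the file at its old path."""
    now = now or time.time()
    moved = []
    for name in os.listdir(hot_dir):
        path = os.path.join(hot_dir, name)
        if os.path.isfile(path) and now - os.stat(path).st_atime > NINETY_DAYS:
            shutil.move(path, os.path.join(cold_dir, name))
            moved.append(name)
    return moved
```

Note that many filesystems mount with relaxed atime updates (`relatime`, `noatime`), so a production policy would track access times itself rather than trust `st_atime`.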



We could gain some efficiency in this tiering, and for a lot of customers, we’ve reduced their storage cost with an active data set by 67%.

What’s crazy is what happens when we add all these together. If I take a look at 50 TB of storage at 10 cents per GiB, that’s $5,000 a month. If I dedupe that by just 20%, it comes down to $4,000 a month. If I then compress that by 50%, I can get it down to $2,000 a month. Then if I tier that with 20% SSD and 80% HDD, I can get down to $1,000 a month, reducing my overall cost by 80% – from $5,000 to $1,000 a month.
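The arithmetic above can be reproduced in a few lines. Note the talk’s round numbers treat 1 TB as 1,000 GiB, and the cold-tier (HDD) rate below is an assumed figure chosen to land on the quoted $1,000; actual cloud rates vary:

```python
GIB_PER_TB = 1000      # the talk's round numbers treat 1 TB as 1,000 GiB
SSD_RATE = 0.10        # $/GiB/month (illustrative block-SSD rate)
HDD_RATE = 0.0375      # $/GiB/month (assumed cold-tier rate)

size_gib = 50 * GIB_PER_TB
base = size_gib * SSD_RATE                  # $5,000/month
after_dedupe = base * (1 - 0.20)            # $4,000/month
after_compress = after_dedupe * (1 - 0.50)  # $2,000/month

# Tiering: of the data left after dedupe + compression,
# 20% stays on SSD and 80% moves to cheaper HDD
effective_gib = size_gib * 0.8 * 0.5
tiered = (effective_gib * 0.2 * SSD_RATE) + (effective_gib * 0.8 * HDD_RATE)
print(f"${base:,.0f} -> ${after_dedupe:,.0f} -> "
      f"${after_compress:,.0f} -> ${tiered:,.0f} per month")
```

The point of the exercise is that the three optimizations multiply: 0.8 × 0.5 on capacity, then a blended rate on what remains.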

Again, not everything is equal out in the cloud. With SoftNAS, obviously, we have dedupe, compression, and tiering. AWS EFS does have tiering – great product. AWS FSx has deduplication but not compression or tiering. Azure Files has none of the three.

Actually, with AWS infrequent-access storage, they charge you to write to and read from that cold storage – a penalty for using data that’s already in there. Well, that’s great.

Reducing the total cost of acquisition: just use the cheapest storage

Now I see a toolset here that I’ve used on-premises. I’ve always used dedupe on-premises. I’ve always used compression on-premises. I might have used tiering on-premises, but that was really with NVMe-type disks, and that’s great.

I see the value in that, but TCA is a whole different ballpark here. It’s self-managed versus managed. It’s different types of disks to choose from. Like I said earlier, it’s just Excel-spreadsheet stuff – what do they charge, what do I pay, and which has the lowest cost.

We take a look at this in two different buckets. We have self-managed storage, like NVMe disks and block storage. And we have managed storage as a service, like EFS, FSx, and Azure Files.

If we drill down a little, there are still things that you need to do and things that your managed storage service will do for you. For instance, if it’s self-managed, you need to migrate the data, mount the data, grow the data, share the data, secure the data, and back up the data. You have to do all those things.



Well, what are you paying for? Because even with a managed storage service, I still have to migrate the data. I have to mount the data. I have to share and secure the data. I have to recover the data, and I have to optimize that data. What am I really getting for that price?

As for price: for block storage, AWS is 10 cents per GiB per month, and Azure is 15 cents per GiB per month. For the things I’m trying to offload – securing, migrating, mounting, sharing, recovery – I’m still going to pay 30 cents for EFS, three times the price of AWS SSD; or 23 cents for FSx; or 24 cents for Azure Files. I’m paying a premium for the storage, but I’m still doing a lot of the management myself.



If we dive a little deeper: EFS is designed for NFS connectivity, so my Linux clients. AWS FSx is designed for Windows clients with CIFS/SMB, and Azure Files likewise uses CIFS/SMB. That’s interesting.

If I have Windows and Linux clients on Amazon, I have to have both an EFS file system and an FSx file system. That’s fine. But wait a second. This is a shared access model. I’m in contention with all the other companies who have signed up for EFS.

Yeah, they are going to secure my data, so company one can’t access company two’s data, but we’re all in line for the contention of that storage. So what do they do to protect me and to give me performance? Yeah, it’s shared access.

They’ll throttle all of us, but then they’ll give us bursting credits and bursting policies. They’ll charge me for extra bursting, or I can just pay for increased performance, or I can just buy more storage and get more performance.

At best, I’ll have an inconsistent experience. Sometimes I’ll have what I expect. Other times, I won’t have what I expect – in a negative way. For sure, I’ll have all of the scalability, all the stability and security with these big players. They run a great ship. They know how to run a data center better than all on-premises data centers combined.

But compare that to self-managed storage. Self-managed, you have a VM out there, whether it’s Linux or Windows, and you attach storage to it. That’s how we attached storage back in the ’80s and ’90s: client-server with direct-attached storage. That wasn’t a great way to manage an environment.

Yeah, I had dedicated access, consistent performance, but it wasn’t very scalable. If I wanted to add more storage, I had to get a screwdriver, pop the lid, add more disks, and that is not the way I want to run a data center. What do we do?

We put a NAS in between all of my storage and my clients. We’re doing the same thing with SoftNAS in the cloud. With SoftNAS, we have an NFS protocol, CIFS protocol, or we use iSCSI to connect just the VMs of my company to the NAS and have the NAS manage the storage out to the VMs. This gives me dedicated access to storage, a consistent and predictable performance.



The performance is dictated by the NAS. The bigger the NAS, the faster the NAS. The more RAM and the more CPU the NAS has, the faster it will deliver that data down to the VMs. I will get that Linux and Windows environment with scalability, stability, and security. Then I can also make that highly available.

I can have duplicate environments that give me data performance, data migration, data cost control, data availability, and data control and security through this complete solution. But you’re looking at this and going, “Yeah, that’s double the storage, that’s double the NAS.” How does that work out in the Excel-spreadsheet numbers?



Alright. We know that EBS storage is 10 cents per GiB per month and EFS storage is 30 cents per GiB per month. The difference grows with every additional terabyte in my solution.

If I add a redundant set of block storage and a redundant set of VMs, and then turn on dedupe and compression, and then turn on my tiering, the price of the SoftNAS solution is small compared to what you pay for storage. It doesn’t affect the storage cost that much. This is how we’re able to save companies huge amounts of money per month on their storage bills.



This could be the single most important thing you do this year because most of the price of a cloud environment is the price of the storage, not the compute, not the RAM, not the throughput. It’s the storage.

If I can reduce and actively manage, compress, optimize that data and tier it, and use cheaper storage, then I’ve done the appropriate work that my company will benefit from. On the one hand, it is all about reducing costs, but there is a cost to performance also.

Reducing the Cost of Performance

No one’s ever come to me and said, “Jeff, will you reduce my performance?” Of course not. Nobody wants that. Some people want to maintain performance and lower costs. We can actually increase performance and lower costs. Let me show you how that works.

We’ve been looking at this model throughout this talk. We have EBS storage at 10 cents with a NAS, a SoftNAS between the storage and the VMs. Then we have this managed storage like EFS with all of the other companies in contention with that storage.

It’s like me working from home, on the left-hand side, and having a consistent experience to my hard drive from my computer. I know how long it takes to boot. I know how long it takes to launch an application. I know how long it takes to do things.

But if my computer is at work in the office, I have to hop on a freeway, and I’m in contention with everybody else who’s also driving to work to use the hard drive in their office computer. Some days the traffic is light and fast, some days it’s slow, some days there’s a wreck and it takes twice as long to get there. It’s inconsistent. I’m not sure what I am paying for.

If we think about what EFS does for performance – and this is based on their website – you get more performance, or throughput, with the more storage you have. I’ve seen ads and blog articles where developers say, “If I need 100 MB/s of throughput for my solution and I only have one terabyte worth of data, I’ll put an extra terabyte of dummy data out on my share so that I can get the performance I want.” That’s another terabyte at 30 cents per GiB per month that I’m not even going to use, just to get the performance I need.
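Under EFS’s bursting model at the time, baseline throughput scaled with stored data at roughly 50 MiB/s per TiB (check current AWS documentation – the model has since changed). This sketch estimates what the dummy-data padding trick costs; the rates are assumptions for illustration:

```python
# Assumed figures: ~50 MiB/s of baseline throughput per TiB stored
# (historical EFS bursting model) and $0.30/GiB/month storage.
BASELINE_MIBS_PER_TIB = 50
RATE_PER_GIB = 0.30


def padding_needed(data_tib, target_mibs):
    """TiB of dummy data needed on top of real data to reach a target
    baseline throughput, and what that padding costs per month."""
    required_tib = target_mibs / BASELINE_MIBS_PER_TIB
    pad_tib = max(0.0, required_tib - data_tib)
    return pad_tib, pad_tib * 1024 * RATE_PER_GIB


pad, cost = padding_needed(data_tib=1, target_mibs=100)
print(f"pad {pad:.1f} TiB of dummy data, ~${cost:,.0f}/month wasted")
```

With 1 TiB of real data and a 100 MiB/s target, that’s a full extra terabyte of padding – roughly $300 a month spent on data nobody will ever read.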

Then there’s bursting, then there’s throttling, and then it gets confusing. We are focused on delivering performance – SoftNAS is a data-performance company. We have levels, or scales, of performance: 200, 400, 800, up to 6,400. Those relate to the throughput and IOPS you can expect from the solution.

We are using storage that’s only 10 cents per GiB on AWS. It’s dedicated performance: you determine the performance you need and then buy that solution. On Azure, it’s a little different. Their denominator for performance is vCPUs: a 200 is 2 vCPUs; a 1,600 is 20 vCPUs. Then we publish the IOPS and throughput that you can expect for your solution.

To reduce the cost of performance: use a NAS to deliver the storage in the cloud. Get predictable performance. Use attached storage with a NAS. Use a RAID configuration. You can tune read and write cache with the different disks you use, or, with a NAS, with the amount of RAM you use.

Pay for performance. Don’t pay more for the capacity to get the performance. We just took a real quick look at three ways to slash your storage cost – optimizing that storage with dedupe, compression, and tiering, making less expensive storage work for you, right, and then reducing the cost of performance. Pay for the performance you need, not for more storage to get the performance you need.

What do you do now? You could start a free trial on AWS or Azure. You can schedule a performance assessment, where you talk with one of our dedicated people who do this 24/7 to look at how to get you the most performance at the lowest price.

We want to do what’s right by you. At Buurst, we are a data-performance company. We don’t charge for storage. We don’t charge for more storage. We don’t charge for less storage. We want to deliver the storage you paid for.



You pay for the storage from Azure or AWS. We don’t care if you attach a terabyte or a petabyte, but we want to give you the performance and availability that you expect from an on-premises solution. Thank you for today. Thank you for your time.

At Buurst, we’re a data-performance company. It’s your time to be this IT hero and save your company money. Reach out to us. Get a performance assessment done at buurst.com/contact. Thank you very much.   

A letter from our CEO and President

Backed by our strong customer success, SoftNAS is now evolving further to smash the rules of cloud storage, dramatically changing the cloud storage business. SoftNAS is now Buurst – and it’s disrupting the cloud storage industry, and your data is going to be amazing in the cloud.

Storage vendors want to sell you more storage, but here at Buurst, our only motivation is to provide your business better application performance, lower cloud storage costs, and the control and availability you need to unlock new opportunities and enable success for your business.

SoftNAS was born from an unmet need for enterprise-grade NAS software available to cloud computing and virtual computing customers. Through our flagship CloudNAS product, SoftNAS has hundreds of happy enterprise customers, who all had the same problem: how do I migrate my data to the cloud and get the best performance at the best possible price?

At Buurst, we are customer-first and truly excited to serve you. These core values have guided everything we have done up to this point and will continue to guide us as we evolve. We will continue to think about your data differently. We are honored to have you join us on this journey and to show you how we can help your company achieve its goals. We are, and will always be, here for you, and we encourage you to reach out with any questions and feedback.

Our executive leadership team, alongside our employees, is here to serve you. Please check out our profiles to get familiar with us:

  • Rick Braddy, Founder & CTO
  • Alex Rublowsky, CMO
  • Krupa Amalani, CFO
  • Vic Mahadevan, Chairman, Board of Directors
  • Marc Palombo, CRO

Together, let’s be amazing and smash the rules of cloud storage!

Garry Olah, President & CEO

The Adventure Continues…

It is a tremendous honor that I join the SoftNAS team as President and CEO. I truly believe SoftNAS is a once-in-a-lifetime opportunity with great customers, excellent products and an incredibly talented team – all of which have driven our success so far.

I’ve known our founder Rick Braddy since we worked together at both BMC and Citrix and was always a fan of his insights and how he thinks ahead of the market. It wasn’t until I started to dig deeply into SoftNAS that my curiosity became excitement. My prior consulting firm was hired to evaluate the business and its partnerships. It was during this time where I learned that SoftNAS has a tremendous opportunity and is already respected by industry leaders. 

I’m also looking forward to working with Vic Mahadevan, our newest board member and Chairman, as he brings us a vast amount of industry insights and experience.

SoftNAS is beginning to see great market traction. The next chapter of SoftNAS enables us to go from Good to GREAT! We’re at a critical “point in time” where billions are being invested to move to the cloud, especially as Microsoft and Amazon are creating a huge bow wake, positioning us to ride this wave and set ourselves up for “Built to Last” success.

I believe in SoftNAS, the people and our unique market opportunity. I’ve always been most excited about companies at a similar stage. As an example, I joined both BMC and Citrix early on their paths to $1B, and we quickly built strategic partnerships which led to both companies becoming global industry standards. I confidently believe that the accelerators which SoftNAS needs are firmly in sight: Building strong, multi-faceted alliances with Microsoft and AWS, improving solution marketing aligned with cloud industry initiatives, creating leverage by partnering with Systems Integrators, and expanding our best practices as a leader in Cloud Marketplace selling.

SoftNAS is uniquely positioned to help bring business-critical applications to the cloud. The public cloud business is now a race to migrate as many apps and as much data as possible before the other guy does. Though I know we are not a migration company, think of us as a way to improve cloud performance and economics so the cloud vendors can accelerate app migration. Much like VMware, which isn’t considered a migration company yet enabled more migration of apps than any company in history. SoftNAS’ early entry establishing the Cloud NAS category places us in a unique position to ride the cloud wave as a thought leader and innovator – ultimately leading to our achieving escape velocity and hyper-growth.

I’m incredibly proud of the SoftNAS team, its product and its culture, and while I’m just in my first few days as part of the team, I’m proud to be associated with that success. Watch us, it will be a fun ride! 

Garry Olah
CEO & President, SoftNAS


About Garry
Garry Olah is an accomplished Senior Executive, Entrepreneur and Thought Leader with more than 20 years of success in the technology industry. His broad areas of expertise include business development, alliance building, communication, enterprise software, startups, B2B, SaaS, storage, cloud computing, analytics, sales & marketing vision, and disruptive tech. Throughout his executive career, Garry has held leadership positions with Prime Foray, Vedify, GoodData, Coho Data, Apprenda, Citrix and BMC. 

Consolidating EMC VNX File Servers in the Cloud

SoftNAS Shares: A Use Case of How we Consolidated EMC VNX File Servers in the Cloud for Easy Access & Sync

Since our early days in 2013, SoftNAS® has seen hundreds of customers move out of their on-premises and colocation datacenters into the cloud. Today, we see an even sharper increase in the number of customers leaving their EMC® VNX and other traditional NAS file servers behind, choosing to consolidate and replace aging hardware storage systems with cloud-based NAS file shares. The business impetus to make the change often begins with an upcoming hardware maintenance refresh cycle or a corporate decision to move some or all of its applications into the cloud.

Of course, the users continue to require access to their file shares – over the LAN/WAN from the office and via VPN connections while traveling and working remotely.

How to consolidate file servers for on-premises users into the cloud – a use case

One of the first issues that comes up is how do we seed tens to hundreds of terabytes of live production data from VNX file shares, where it’s actively used today, into the cloud? And then how do we maintain synchronization of file changes during the migration and transition phase until we’re ready to flip the DNS and/or Active Directory policies to point to the cloud-based shares instead?

In this use case, we’re showcasing the implementation of a solution for a well-known media and entertainment company, with dozens of corporate file shares.

A hybrid cloud solution

The initial Seeding Phase involves synchronizing the data from many VNX-based file shares into the cloud. As shown in Figure 1 below, the customer chose a 1 Gbps Direct Connect from the corporate data center to the AWS® VPC for dedicated bandwidth.

The AWS Direct Connect link was used initially for the migration phase, and now provides the high-speed interconnect from the corporate WAN for site-to-site access to the corporate file shares. Later, it became the primary data path connecting the corporate WAN with AWS and the file shares (and other applications hosted in AWS).

Data Migration Seeding

Figure 1 – Data Migration Seeding

As shown above, a SoftNAS Platinum VM was created from a VMware OVA file and operated locally on VMware in the corporate data center. SoftNAS Platinum supports an Apache NiFi-based facility known as FlexFiles.

First, the CIFS shares on the VNX were mounted from SoftNAS. Another copy of SoftNAS Platinum was then launched on AWS® as the VNX NAS replacement. A storage pool was created, backed by four 5-terabyte EBS disk devices, configured in a RAID array to increase the IOPS and performance, and provide the necessary storage space.

A thin-provisioned SoftNAS® Volume was created with compression enabled. Data compression reduced the 20 TB of VNX data down to 12 TB. This left more headroom for growth and since the volume is thin-provisioned, the storage pool’s space was also available for other volumes and file shares that came later.

A SnapReplicate® relationship was created from the on-premises SoftNAS VM running on VMware® to the SoftNAS running in AWS. SnapReplicate performs snapshot-based block replication from the source node to the target. Once per minute, it gathers all changes made since the last snapshot, then replicates just those block changes to the target system. This is very efficient, and it includes data compression and SSH encryption.
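The idea behind snapshot-based block replication can be sketched in a few lines of Python: compare per-block hashes against the last snapshot and ship only the blocks that changed. This is an illustration of the concept, not SnapReplicate’s actual implementation (which operates on ZFS snapshots and adds compression and SSH encryption):

```python
import hashlib

BLOCK = 4096  # assumed block size for this illustration


def block_hashes(data):
    """Per-block digests that stand in for a snapshot's block map."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]


def delta(prev_hashes, data):
    """Return (index, block) pairs changed since the last snapshot --
    the only payload that needs to cross the wire -- plus new hashes."""
    cur = block_hashes(data)
    changed = [(i, data[i * BLOCK:(i + 1) * BLOCK])
               for i, h in enumerate(cur)
               if i >= len(prev_hashes) or prev_hashes[i] != h]
    return changed, cur


def apply_delta(target: bytearray, changed):
    """Write the changed blocks into the replica."""
    for i, blk in changed:
        end = i * BLOCK + len(blk)
        if end > len(target):
            target.extend(b"\0" * (end - len(target)))
        target[i * BLOCK:end] = blk
```

After the initial full sync, each cycle only transfers the handful of blocks that actually changed, which is why minute-by-minute replication stays cheap.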

Next, the SoftNAS team created several NiFi data flows which continuously replicated and synchronized the VNX CIFS share contents directly onto a local SoftNAS-based ZFS on Linux filesystem running in the same datacenter on VMware. These flows ran continuously, along with SnapReplicate, to actively seed and sync the VNX files share with the new SoftNAS Cloud NAS filer running in AWS.

After this phase was completed, the on-premises SoftNAS node was no longer required, so the SnapReplicate connection was deleted, leaving a copy of the VNX file share data in the cloud. Then the SoftNAS node was removed from VMware.

During the final phase of the migration, various user communities had to be migrated across dozens of file shares. To maintain synchronization during this phase, the FlexFiles/NiFi flow was moved to SoftNAS Platinum running in AWS, as shown in Figure 2 below.

Continuous Sync

Figure 2 – Continuous Sync

During a several week period, different departments’ file shares were cut over to use the new consolidated cloud file server. Throughout that period, any straggling changes still arriving on the VNX were picked up and replicated over to SoftNAS in AWS. After all the file shares were verified to be operating correctly in AWS, the VNX was decommissioned as part of the overall datacenter shutdown project.

Successful consolidation of EMC VNX file servers

This project was on a tight timetable from the start. The entire solution had to be developed, tested, and then used to migrate live corporate file shares from on-premises to AWS within 45 days in order to stay on schedule for the datacenter closedown project. The project was completed without impacting the user community, who saw no difference in their workflow or business from the change in where their file share data is hosted.

Find the right fit

Corporations are increasingly choosing cloud hosting for both data and applications. As maintenance contracts come up for renewal on popular EMC VNX, Isilon®, NetApp®, and other systems, customers are increasingly choosing the cloud over staying in the datacenter and hardware business. The customer faces a fork in the road: stay on the hardware treadmill of endless maintenance, capacity upgrades, and periodic forklift replacement – or move it to the cloud and let someone else worry about it for a change.

SoftNAS Platinum provides multiple avenues for data migration projects like this one, including the strategy used for this project. In addition to FlexFiles/NiFi and SnapReplicate, there’s also an end-to-end Lift and Shift feature that can be used to migrate both NFS and CIFS from virtually anywhere the data sits today into the cloud. SoftNAS also operates in conjunction with Snowball in several configurations for situations involving hundreds of terabytes to petabytes of data.

Request a free consultation with our cloud experts to identify the best way forward for your business to migrate from hardware storage to cloud-based NAS.

High Performance Computing (HPC) in the Cloud – Why We Need It, and How to Make It Work

In 2013, Novartis successfully completed a cancer drug project in AWS. The pharma giant leased 10,000 EC2 instances with about 87,000 compute cores for 9 hours at a disclosed cost of approximately $4,200. They estimated that the cost to purchase the equivalent hardware on-prem and associated expenses required to complete the same tasks would have been approximately $40M.

Clearly, High Performance Computing, or HPC, in the cloud is a game changer. It reduces capex and computing time, and it provides a level playing field for all – you don’t have to make a huge investment in infrastructure. Yet, after all these years, cloud HPC hasn’t taken off as one would expect. The reasons are many, but one big deterrent is storage.

Currently available AWS and Azure services have throughput, capacity, pricing, or cross-platform compatibility issues that make them less than adequate for cloud HPC workloads. For instance, AWS EFS requires a large minimum file system size to offer adequate throughput for HPC workloads. AWS EBS is a raw block device with a 16 TB limit and requires an EC2 instance to front it. AWS FSx for Lustre and Windows has similar issues to those of EBS and EFS.

The Azure Ultra SSD is still in preview. It currently supports only Windows Server and RHEL, and is likely to be expensive too. Azure Premium Files, also still in preview, has a 100 TB share capacity that could be restrictive for some HPC workloads. Still, Microsoft promises 5 GiB/s of throughput per share, IOPS burstable to 100,000 per share, and capacity up to 100 TB per share.


Making Cloud HPC storage work

For effective High Performance Computing in the cloud, predictable performance is essential. All components of the solution (compute, network, storage) have to be the fastest available to optimize the workload and leverage the massive parallel processing power available in the cloud. Burstable storage is not suitable – the withdrawal of resources mid-job can cause the process to fail.

With SoftNAS Cloud, dedicated resources with predictable, reliable performance become available in a single comprehensive solution. There’s no need to purchase, integrate, and configure separate software, which means the solution can be deployed rapidly from the marketplace – you can have SoftNAS up and running in an hour.

The completeness of the solution also makes it easy to scale. As a business, you can select the compute and storage needed for your NAS, and scale up the entire cloud NAS as your needs increase.

You can further customize the solution to your business’s specific needs by choosing the type of drive and choosing between CIFS and NFS sharing, with high availability.


HPC in the cloud – A use case

SoftNAS has worked with clients to implement cloud HPC. In one case, a leading oil and gas corporation commissioned us to identify the fastest throughput performance achievable with a single SoftNAS instance in Azure, in order to facilitate migration of their internal E&P application suite.

The suite was being run on-prem using NetApp SAN and HP Proliant current-gen blade servers, and remote customers connected to Hyper-V clusters running GPU-enabled virtual desktops.

Our team ascertained the required speeds for HPC in the cloud as:

  • Sustained write speeds of 500MBps to single CIFS share
  • Sustained read speeds of 800MBps from a single CIFS share

We started the POC using an Azure E64s_v3 VM with 5 x P30 Premium disks in RAID0 pool configurations. Azure Accelerated Networking was enabled. The initial test workstation was NV6s_v2 (GPU enabled).
Figure – Initial PoC configuration
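A minimal sketch of how sustained sequential write throughput might be measured against a mounted share, assuming the share is mounted at a local path. The real PoC likely used purpose-built tools such as fio; this just shows the shape of the measurement:

```python
import os
import time


def measure_write(path, total_mb=1024, chunk_mb=8):
    """Rough sequential write throughput (MB/s) to `path`, e.g. a file
    on the mounted CIFS share. Use files larger than RAM and repeat
    runs to dodge cache effects; fsync ensures data hit storage."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure data actually reached storage
    elapsed = max(time.monotonic() - start, 1e-6)
    return total_mb / elapsed
```

Read tests work the same way in reverse, with the extra caveat that caches on both the client and the NAS must be cleared (or defeated with large working sets) to get honest numbers.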

The top speeds achieved in this configuration were:
Figure – Top speeds achieved, initial configuration

As we did not achieve the desired write throughput, we began testing faster instance types. The fastest performance we achieved was on an LS64s_v2 Storage Optimized VM:
Figure – LS64s_v2 VM specifications

Test results for the LS64s_v2:
Figure – LS64s_v2 test results


HPC in the Cloud PoC – our learnings

  • While the throughput performance criteria were achieved, the LS64s_v2’s bundled NVMe disks are ephemeral, not persistent. In addition, the pool cannot be expanded with additional NVMe disks, only SSDs. These factors eliminate this instance type from consideration.
  • Enabling Accelerated Networking on any/all VMs within an Azure solution is critical to achieve the fastest performance possible.
  • It appears that Azure Ultra SSDs could be the fastest storage product in any cloud. They are currently available only in beta in a single Azure region/AZ and cannot be tested with Marketplace VMs as of the time of publishing. On Windows 2016 VMs, we achieved 1.4 GBps write throughput on a DS_v3 VM as part of the Ultra SSD preview program.
  • When testing the performance of SoftNAS with client machines, it is important that the test machines have network throughput capacity equal to or greater than that of the SoftNAS VM, and that accelerated networking is enabled.
  • On pools composed of NVMe disks, adding a ZIL or read cache of mirrored Premium SSD drives actually slows performance.


Achieving Cloud HPC Success

SoftNAS is committed to leading the market as a provider of the fastest Cloud storage platform available. To meet this goal, our team has a game plan.

  • Testing/benchmarking the fastest EC2s and Azure VMs (ex. i3.16xlarge, i3.metal etc.) with the fastest disks.
  • Fast adoption of new Cloud storage technologies (ex. Azure Ultra SSD)
  • For every POC, production deployment, or internal test of SoftNAS, measuring the throughput and IOPS, and documenting the instance and pool configurations. This information needs to be accessible to our team so we can match configurations to required performance.

Webinar: Migrating Existing Applications to AWS Without Reengineering

The following is a recording and full transcript from the webinar, “Migrating Existing Applications to AWS Without Reengineering.” You can download the full slide deck on Slideshare.

Full Transcript: Migrating Existing Applications to AWS Without Reengineering

Taran Soodan:             Good afternoon, everyone, and welcome to today's SoftNAS webinar, No App Left Behind – Migrate Your Existing Applications to AWS Without Re-engineering.

Definitely a wordy title, but today’s webinar is going to focus on migrating the existing applications that you have on your on-premises systems to AWS without having to re-write a single line of code.

Our presenter today will be myself, Taran Soodan, and our senior solutions architect, Kevin Brown. Kevin, go ahead and say hey.

Kevin Brown:             Hello! How are you doing? I’m looking forward to presenting today.

Taran:             Thank you, Kevin. Before we begin this webinar, we do want to cover a couple of housekeeping items. Number one, the webinar audio is available through both your mic and speakers on your desktop or laptop, or you can use your telephone as well. The local numbers are available to you in the GoToWebinar control panel.

Kevin, if you just go back. We will also be dedicating some portion of time at the end to answering any questions that you may have during today’s webinar in the questions pane.

If you have any questions that pop up during the webinar, please feel free to post them in the questions pane here and we’ll go and answer them at the end.

Finally, this webinar is being recorded. For those of you who want to share the slides or the audio with your colleagues, you’ll receive a link later on today. The next slide please, Kevin.

To thank everyone for joining today’s webinar, we are offering a $100 AWS credit, and we’ll provide the link for that credit a little bit later on.

For our agenda today, this is going to be a technical discussion. We’ll be talking about security and data concerns for migrating your applications.

We will also talk a bit about the design and architectural considerations around security and access control, performance, backup and data protection, mobility and elasticity, and high-availability.

A spoiler alert – we are going to talk about our SoftNAS product and how it’s able to help you migrate without having to re-engineer your applications.
Finally, we will close off this webinar with the Q&A session. Kevin, I’ll go ahead and turn it over to you.

Kevin:              All right, not a problem. Thank you very much, Taran. Hello. My name is Kevin Brown, and I thank you for joining us today from wherever you’ve logged in. Our goal for this webinar is to discuss the best practices for designing cloud storage for existing apps on AWS.

We’ve collected a series of questions that we get asked regularly as we work with our customers and potential customers as they go through the decision of migrating their existing environments to the cloud. We hope that you’ll find this informative as you go through your decision-making and your design processes.

First, let’s actually talk about SoftNAS a little bit. We’ll give you our story: why we have this information and why we want to share it.

SoftNAS was born in the cloud in 2012. Our founder, Rick Braddy, went out to find a solution that would give him access to cloud storage.

When he went out and looked for one, he couldn't find anything that was out there. He took it upon himself to create the solution that we now know as SoftNAS.

SoftNAS is a cloud NAS. It’s a virtual storage appliance that exists on-premises or within the AWS Cloud, and we have over 2,000 VPC deployments. We focus on "no app left behind."

We give you the ability to migrate your apps into the cloud so that you don’t have to change your code at all. It’s a Lift and Shift bringing your applications to be able to address cloud storage very easily and very quickly.

We work with everyone from Fortune 500 companies to SMBs, and we have thousands of AWS subscribers. SoftNAS also owns several patents, including patents for HA in the cloud and data migration.

The first thing that we want to go through and we want to talk about is cloud strategy. Cloud strategy, what hinders it? What questions do we need to ask? What are we thinking about as we go through the process of moving our data to the cloud?

Every year, IDG runs a survey. IDG is the number one tech media company in the world; you might know them for creating CIO.com, Computerworld, InfoWorld, ITworld, and Network World. Basically, if it has technology and a "world" next to it, it’s probably owned by IDG.

IDG polls its customers every single year with the goal of measuring cloud computing trends among technology decision-makers: figuring out uses and plans across various cloud services, development models, and investments, and identifying business strategies and plans for the rest of the IT world to focus on.

From that, we took the top five concerns. The top five concerns, believe it or not, all have to do with data. The number one concern: where will my data be stored?

Is my data going to be stored safely and reliably? Is it going to be stored in a data center? What type of storage is it going to be stored on? How could I figure that out?

Especially when you’re thinking about moving to a public cloud, these are questions that weigh on people’s minds.

Concern number two: the security of cloud computing solutions. Is there a risk of unauthorized access? What about data integrity protection? These are all concerns that are out there about the security of what I’m going to have in the environment.

Number three: we also have concerns out there about vendor lock-in. What happens if my vendor of choice changes costs, changes offerings, or simply goes away?

Number four: there are concerns surrounding integration with existing infrastructure and policies. How do I make information available outside the cloud while preserving the uniform set of access privileges that I have worked the last 20 years to develop?

Number five: concerns about the ability to maintain enterprise or industry standards, such as ISO, PCI, and SOC. What we plan to do in the upcoming slides is to share some of the questions that our customers have asked early in the design process of moving their data to the cloud, in the hope that they will be beneficial to you.

Question number one, based off of the same IDG poll: how do I maintain or extend security and access control into the cloud? We often think from the outside in when we design for threats.

This is how our current on-prem environment is built. It was built with protection from external access. It then goes down to physical access to the infrastructure. Then it’s access to the files and directories. All of these protections need to be considered and extended to your cloud environment, so that’s where the setup of AWS’s infrastructure plays hand-in-hand with SoftNAS.

First is setting up a firewall. Amazon already gives you the ability to utilize firewalls through your access controls and your security groups.

Setting up stateful protection from your VPCs, setting up network access protection, and then you go into cloud native security. Access to infrastructure. Who has access to your infrastructure?

By setting up your IAM roles or your IAM users, you have the ability to control that through security groups. Then there's being able to encrypt that data: if everything fails and users get the ability to touch my data, how do I make sure that even when they can see it, it’s encrypted and it’s something that they cannot use?

We also talk about the extension of enterprise authentication schemes. How do I make sure that I am tying into Active Directory or tying into LDAP, which already exists in my environment?

The next question we want to ask is structured around backups and data protection: how to safeguard against data loss or corruption. We get asked this question probably about 10-15 times a day.

We get asked, "I’m moving everything to the cloud; do I still need a backup?" Yes, you still need a backup. An effective backup strategy is still needed, even though your data is in the cloud and you have redundancy.

Everything has been enhanced, but you still need a point in time, or an extended period of time, that you can go back to and grab that data from.

Do I still need antivirus and malware protection? Yes, antivirus and malware protection is still needed. You also need the ability to have snapshots and rollbacks and that’s one of the things that you want to design for as you decide to move your storage to the cloud.

How do I protect against user error or compromise? We live in a world of compromise. A couple of weeks ago, we all saw Europe run into the problem of ransomware. Ransomware was basically hitting companies left and right.

From a snapshot and rollback standpoint, you want to be able to have a point in time that you could quickly rollback so that your users will not experience a long period of downtime. You need to design your storage with that in mind.

We also need to talk about replication. Where am I going to store my data? Where am I going to replicate my data to? How can I protect so that data redundancy that I am used to on my on-prem environment, that I have the ability to bring some of that to the cloud and have data resiliency?

I also need to think about how I protect against data corruption. How do I design my environment to ensure that data integrity is going to be there, and that the protocols and the underlying infrastructure protect me from the different scenarios that would cause my data to lose its integrity?

Also, you want to think about data efficiency. How can I minimize the amount of data while designing for cost savings? Am I thinking, in this scenario, about how I dedupe and how I compress that data?

All of these things need to be taken into account as you go through that design process because it’s easier to think about it and ask those questions now than to try to shoehorn or fit it in after you’ve moved your data to the cloud.

The next thing that we need to think about is performance. We get asked this all the time. How do I plan for performance? How do I design for performance, not just for now but for the future too?

Yes, we could design a solution that is ideal for our current situation; but if it doesn’t scale with us for the next two years, the next five years, it’s not beneficial – it becomes archaic very quickly.

How do I structure it so that I am designing not just for now but for two years from now, for five years from now, for potentially ten years from now? There are different concerns that need to be taken at this point.

We need to talk about dedicated versus shared infrastructure. Dedicated instances provide the ability for you to tune performance. That’s a big thing, because as what you do right now and your use case change, you need to make sure that you can actually tune performance for them.

Shared infrastructure: although shared infrastructure can be cost-effective, multi-tenancy means that tuning is programmatic. If I use shared infrastructure and I have to tune for my specific use case, or multiple use cases within that environment, I have to wait for a programmatic change, because the infrastructure is not dedicated solely to me. It is used by many other users.

Those are considerations you need to weigh when it comes to the decision: am I going to use dedicated or shared infrastructure?

We also need to think about bursting and bursting limitations. You always design with the ability to burst beyond peak load. That is rule number one, performance 101: I know that my regular load is going to be X, but I need to make sure that I can burst beyond X.

You also need to understand the pros and cons of burst credits. There are different infrastructures and different solutions out there that introduce burst credits.

If burst credits are used, what do I have the ability to burst to? Once that burst credit is exhausted, what happens? Does it bring me down to subpar or does it bring me back to par?

These are questions that I need to ask, or you need to ask as you go into the process of designing for storage and what the look and feel of your storage is going to be while you’re moving to the public cloud.
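To make the burst-credit tradeoff concrete, here is a sketch using AWS gp2 volumes as the example, where the published model is a baseline of 3 IOPS per GiB (minimum 100), a burst ceiling of 3,000 IOPS, and a starting bucket of 5.4 million I/O credits:

```python
# How long can a gp2-style volume burst? A sketch of AWS's published gp2
# model: baseline = 3 IOPS/GiB (minimum 100), burst up to 3,000 IOPS,
# starting credit bucket of 5.4 million I/O credits.
def burst_minutes(size_gib: int, workload_iops: int = 3000) -> float:
    baseline = max(100, 3 * size_gib)
    if workload_iops <= baseline:
        return float("inf")                    # never drains the bucket
    drain_rate = workload_iops - baseline      # credits spent per second
    return 5_400_000 / drain_rate / 60

# A 100 GiB volume (300 IOPS baseline) pushing 3,000 IOPS:
print(f"{burst_minutes(100):.0f} minutes of burst")   # ~33 minutes
# A 2,000 GiB volume's baseline already exceeds the burst ceiling:
print(burst_minutes(2000))                            # inf
```

After the bucket empties, the volume falls back to its baseline, which is exactly the "subpar or back to par" question above: know which one your workload lands on.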

You also need to consider predictable performance levels. If I am running in production, I need to know that I have a baseline. That baseline should not fluctuate, even as I retain the ability to burst beyond it and use more when I need more.

I need to know that when I am at the base, I am secure in the fact that my baseline is predictable, and my performance levels are going to be just that: something I don’t have to worry about.

You should already be thinking about using benchmark programs to validate the baseline performance of your system.

There are tons of freeware tools out there, and benchmarking is something you definitely need to do while going through the design process for performance within the environment.
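As a minimal stand-in for those tools, the shape of such a benchmark looks like this (dedicated benchmarks such as fio are far more rigorous; this Python sketch only times a buffered sequential write):

```python
import os
import tempfile
import time

def seq_write_mbps(total_mb: int = 64, block_kb: int = 1024) -> float:
    """Time a sequential write and return MB/s (a rough baseline only)."""
    block = b"\0" * (block_kb * 1024)
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(total_mb * 1024 // block_kb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())   # force data to disk before stopping the clock
        return total_mb / (time.perf_counter() - start)
    finally:
        os.remove(path)

print(f"sequential write: {seq_write_mbps():.1f} MB/s")
```

Run it several times and at several block sizes; the point is to establish a repeatable baseline you can compare against after any configuration change.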

Storage, throughput, and IOPS. What storage tier is best suited for my use case? In every environment, you’re going to have multiple use cases. Do I have the ability to design my application or the storage that’s going to support my application to be best-suited for my use-case?

From a performance standpoint, you have multiple options for your storage tiers. You could go with GP2 – these are the general-purpose SSD drives. There’s Provisioned IOPS. There’s Throughput Optimized HDD. There’s Cold HDD. All of these are options you need to make a determination on.

A lot of times, you’ll think, "I need general-purpose IOPS for this application, but Throughput Optimized works well with that application." Do I have the ability, and the elasticity, to use both? What’s the thought behind doing it?

AWS gives you the ability to address multiple storage types. The question you need to ask yourself is: based on my use case, what is most important to my workload? Is it IOPS? Is it throughput?

Based on the answer to that question, you get a larger idea of which storage to choose. This slide breaks it down: do you need anything greater than 65,000 IOPS? Do you need higher throughput? What type of storage should you actually focus on?

These are things we work through with our customers on a regular basis to steer them to the right choice, so that it’s cost-effective and also effective for their applications and users.
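The two metrics are tied together by I/O size, which is why the workload's block size drives the choice; a quick sketch of the arithmetic:

```python
# Throughput (MB/s) = IOPS x I/O size. Small random I/O is IOPS-bound;
# large sequential I/O is throughput-bound. Same arithmetic, different limits.
def throughput_mbps(iops: int, io_size_kb: int) -> float:
    return iops * io_size_kb / 1024

# A database doing 4 KB random reads at 10,000 IOPS:
print(throughput_mbps(10_000, 4))     # ~39 MB/s: IOPS is the constraint
# A media workload streaming 1 MB I/Os at only 500 IOPS:
print(throughput_mbps(500, 1024))     # 500 MB/s: throughput is the constraint
```

The same IOPS number can mean wildly different bandwidth needs, so measure your workload's actual I/O size before picking a tier.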

Then there’s S3. A cloud NAS should have the ability to address object storage, because there are different use cases within your environment that would benefit from object storage.

We were at a meetup a couple of weeks ago that we did with AWS, and there was an ask from the crowd: "If I need to store multiple or hundreds of petabytes of data, what should I use? I need to be able to access those files."

The answer back was: you could definitely use S3, but you’ll need to code against the API to use S3 correctly. With a cloud NAS, you should have the ability to use object storage without having to work with the API yourself.

How do you get to the point where the APIs are already built into the software, so you can use S3 or object storage the way you would use any other file system?

Mobility and elasticity: what are my storage requirements and limitations? You would think this would be the first question that gets asked as we go through the design process.

You'd think so, but as companies come to us and we discuss it, a lot of times it’s not. People are not aware of their capacity limitations, so they decide to use a certain platform or a certain type of storage while unaware of the maximums. What’s my total growth?

What is the largest size that this particular medium will support? What’s the largest file size? What’s the largest folder size? What’s the largest block size? These are all questions that need to be considered as you go through the process of designing your storage.

You also need to think about platform support. What other platforms can I quickly migrate to if need be? What protocols can I utilize?

From a tiered-storage support and migration standpoint: if I start off with Provisioned IOPS disks, am I stuck with Provisioned IOPS disks? If I realize that my utilization really calls for S3, is there any way for me to migrate from Provisioned IOPS storage to S3 storage?

We need to think about that as we’re going with designing storage in the backend. How quickly can we make that data mobile? How quickly can we make it something that could be sitting on a different tier of storage, in this case, from object to block or vice versa, from block back to object?

And automation. Thinking about automation: what can I do quickly? What can I script? Is this something I could spin up using CloudFormation? Are there any tools? Is there an API associated with it, a CLI? What can I do to make a lot of the work I regularly do quick, easy, and scriptable?

We get asked this question also a lot. What strategy should I choose to get the data or application to the cloud? There are two strategies that are out there right now.

There is the application modernization strategy which comes with its pros and cons. Then there is also the Lift and Shift strategy which comes with its own pros and cons.

Associated with modernization are the pros: you build a new application, and you can modify, delete, and update existing applications to take advantage of cloud-native services. That’s definitely a huge pro. It’s born in the cloud. It’s changed in the cloud. You have access to it.

However, the con associated with that is a slower time to production. More than likely, there is going to be significant downtime as you try to migrate that data over. The timeline we’re looking at is months to years.

Then there are also the costs associated with that. It’s ensuring that it’s built correctly, tested correctly, and then implemented correctly.

Then there is the Lift and Shift. The pro of Lift and Shift is a faster time to cloud production. Instead of months to years, we’re looking at the ability to do this in days to weeks or, depending on how aggressive you can be, hours to weeks. It totally depends.

Then there are the costs associated with it. You’re not recreating that app. You’re not going back and rewriting code that is only going to be beneficial for your move. Instead, your developers write features and improvements to your code that actually benefit you and continuously support your customers.

The con associated with the Lift and Shift approach is that your application is not API-optimized, but you can make a decision on whether that’s beneficial or needed for your app. Does your app need to speak the API, or does it just need to address the storage?

The next thing that we want to discuss is high availability. This is key in any design process: how do I plan for failure or disruption of services? If anything happens, if anything goes wrong, how do I cover myself to make sure that my users don’t feel the pain? My failover needs to be non-disruptive.

How can I make sure that if a region fails, if an instance fails, my users come back and don’t even notice that a problem happened? I need to design for that.

How do I ensure that during this process I am maintaining my existing connections? It shouldn’t be the case that failover happens and then I need to go back and recreate all my shares, or repoint my pointers to different APIs and different locations.

How do I make sure that I have the ability to maintain my existing connections with a consistent IP? How do I make sure that what I’ve designed fulfills my RPO needs?

Another question that generally comes in to our team is: is HA per app, or is HA for the infrastructure? When you go through the process of app modernization, you’re actually doing HA per app.

When you are looking at a more holistic solution, you need to think in advance. In your on-premises environment, you’re doing HA for the infrastructure. How do you migrate that infrastructure-level HA over to your cloud environment? That’s where the cloud NAS comes in.

A cloud NAS solves many of the security and design concerns. We have a couple of quotes up here; they are listed on our website from some of our users on the AWS platform.

We have Keith Son, whom we did a webinar with a couple of weeks ago; it might have been last week, I don’t remember offhand. Keith loves the software and is constantly coming to us asking about different ways he could tweak or use our software more.

Keith says, “Selecting SoftNAS has enabled us to quickly provision and manage storage, which has contributed to a hassle-free experience.” That’s what you want to hear when you come to design. It’s hassle-free. I don’t have to worry about it.

We also have a quote there from John Crites from Modus. John says that he’s found that SoftNAS is cost-effective and a secure solution for storing massive amounts of data in the AWS environment.

A cloud NAS addresses the security and access concerns. You need to be able to tie into Active Directory. You need to be able to tie into LDAP.

You need to be able to secure your environment using IAM roles. Why? Because I don’t want my secret keys to be visible to anybody. I want to utilize the security that’s already provided by AWS and have it flow through my app.

VPC security groups: we ensure that, with your VPC and the security groups you set up, only the users you want to have access to your infrastructure have access to it.

From a data protection standpoint, block replication: how do we make sure my data is also somewhere else?

Data redundancy. We’ve been preaching that for the last 20 years. The only way I can make sure my data is fully protected is if it’s redundant. In the cloud, although we get extended redundancy, how do I make sure that my data is faultlessly redundant?

We’re talking about block replication. For data protection, we’re talking about encryption: how can you encrypt that data to make sure that even if someone did get access to it, they wouldn’t know what to do with it? It would be gobbledygook.

You need the ability to do instant snapshots. How can I go in, based on my scenario, and create an instant snapshot of my environment, so that in the worst case, if anything happens, I have a point in time I can quickly come back to?

Writable snap clones: how do I stand up my data quickly? Worst case, if anything happens and I need to revert to a point before I know I was compromised, how can I do that quickly?

High availability on ZFS and Linux: how do I protect the infrastructure underneath? Then performance. A cloud NAS gives you dedicated infrastructure, which means it gives you the ability to tune.

If my workload or use-case increases, I have the ability to tune my appliance to be able to grow as my use-case grows or as my use-case needs growth.

Performance and adaptability: from disk to SSD to networking, how can I make my environment, my storage, adaptable to the performance I need? No burst limitations, dedicated throughput, compression, and deduplication are all things we need to consider as we go through this design process. A cloud NAS gives you the ability to do it.

Then flexibility. What can I grow to? Can I scale from gigabyte to petabyte with the way that I have designed? Can I grow from petabytes to multiple petabytes? How do I make sure that I’ve designed with the thought of flexibility? The cloud NAS gives you the ability to do that.

We are also talking about multi-platform, multi-cloud, block and object storage. Have I designed my environment so that I can switch to new storage options? A cloud NAS gives you the ability to do that.

We also need to get to the point of protocols. What protocols are supported: CIFS, NFS, iSCSI? Can I thin-provision these systems? Yes. A cloud NAS gives you the ability to do that.

With that, I just want to give a very quick overview of SoftNAS and what we do. SoftNAS, as we said, is a cloud NAS. It’s the most downloaded and most utilized cloud NAS in the cloud.

We give you the ability to easily migrate those applications to AWS. You don’t need to change your applications at all. As long as your applications connect via CIFS, NFS, iSCSI, or AFP, we are agnostic. Your applications connect exactly the same way they do on-premises. We give you the ability to address cloud storage, whether in the form of S3, Provisioned IOPS, or gp2.

Anything that Amazon has available as storage, SoftNAS gives you the ability to aggregate into a storage pool and then share out via volumes over CIFS, NFS, or iSCSI, giving you the ability to have your applications move seamlessly.

These are some of our technology partners. We work hand-in-hand with Talon, SanDisk, NetApp, and SwiftStack. All of these companies love our software, and we work hand-in-hand as we deliver solutions together.

We talked about some of the functions that are native to SoftNAS. Cloud-native IAM role integration: we have the ability to do that, and to encrypt your data at rest or in transit.

There’s also firewall security, which we have the ability to utilize too. From a data protection standpoint, it’s a copy-on-write file system, so it gives you the ability to ensure the data integrity of your information, of your storage.

We’re talking about instant storage snapshots, whether manual or scheduled, and rapid snapshot rollback. We support RAID across the board with all EBS storage, and also with S3-backed storage.

From a built-in snapshot standpoint for your end users, this is one of the things our customers love: Windows previous versions support.

IT teams love this, because if they have a user that made a mistake, instead of having to go back in and recover a whole volume to get the data back, they just tell the user to go ahead.

Go back in, open Windows previous versions, right-click, and restore that previous version. It gives your users something they are used to on-premises, immediately, within the cloud.

High performance, scaling up to gigabytes per second of throughput. For performance, we talked about no burst limits. SoftNAS protects against split-brain on HA failover, giving you the ability to migrate those applications without writing or rewriting a single piece of code.

We talked about automation and the ability to utilize our REST APIs: a very robust REST API, plus cloud integration using ARM or CloudFormation templates, available in every AWS and Azure region.

Brands you know trust SoftNAS. With the list of logos on the screen, everywhere from the Fortune 500 all the way down to SMBs, they are using us in multiple different use cases – all the way from production, to DevOps, to UAT – enabling their end users, development teams, and production environments to scale quickly and utilize the redundancy we provide from a SoftNAS standpoint.

You can try SoftNAS for free for 30 days. Very quickly and very easily, our software stands up within 30 minutes, all the way from deploying the instance to creating a share.

Creating your disks, aggregating those disks into a storage pool, creating your volumes, and setting up your shares – 30 minutes. Very quick, very easy, and you can try it for 30 days. Figure out how it fits in your environment. Test it, test the HA, test the ability to use a virtual IP between two systems – very quickly, very easily, very simply.

Taran:              Okay, Kevin, let me do some Q&A really quick. To thank everyone for joining today’s webinar, we are giving out $100 AWS credits for free. All we need is for you to click on the link that I just posted in the chat window.

Just go in, click on that link, and it will take you to a page where all you have to do is provide us your email address; and then within 24 hours, you will receive a free $100 AWS credit.

For those of you who are interested in learning more about SoftNAS on AWS, we welcome you to go visit softnas.com/aws or the Bitly link that’s right here on the screen.

Just go ahead and visit that link to learn more about how SoftNAS works on AWS. If you have any questions or you want to get in contact with us, please feel free to visit our “Contact us” page. Basically, you can submit a request to learn more about how SoftNAS works.

Kevin and our other SAs are more than happy to work through your use case to learn about what you may be using cloud storage for and how we can help you out with that.

Then also, we do have a live chat function on our website as well so if you want to speak to one of our people immediately, you can just go ahead and use that live chat feature and our team will answer your questions pretty quickly.

We’ll go ahead and start the Q&A now. We have a couple of questions here and let’s go ahead and knock them out. It should be about five minutes. The first question that we have here is can we download the slide deck?

You absolutely can download the slide deck. We’re going to upload it to SlideShare shortly after this webinar is over. Later on in the afternoon, we are going to send you an email with a SlideShare link and a YouTube link for the recording of the webinar.

The second question that we have here is can SoftNAS be used in a hybrid deployment? Kevin?

Kevin:              SoftNAS can be used in a hybrid deployment. The same code that exists within the AWS environment also exists for VMware, so you can deploy it on VMware. Each SoftNAS device gives you the ability to address cloud storage, so you would still be able to utilize your regular access within AWS and use EBS or S3 storage on the backend.

Taran:              Fantastic. Thank you, Kevin. The next question that we have here is if I do choose to migrate my applications to AWS, can I do it natively without SoftNAS or do I have to migrate with SoftNAS?

Kevin:              That is an interesting question. It depends. It depends on how much storage you actually have behind that application. If you’re looking at something that is a one-server solution and you’re not concerned directly with HA for that environment, then yes, you could definitely take that one server, bring it up to the cloud, and you’d be able to do that.

However, if you’re looking to recreate an enterprise-like solution within your environment, then it would make sense to consider some type of NAS-like solution to be able to have that redundancy in your data taken care of.

Taran:              Great. Thanks, Kevin. The next question that we have here is, any integration with OpenStack for hybrid cloud, VIO VMware OpenStack?

Kevin:              You guys, I don’t know who’s asking that question, but we do have the ability to integrate with OpenStack. We would love to talk to you about your use case. If you could actually reach out to our sales team, we’ll go ahead and schedule a meeting with an SA so we could talk through that.

Taran:              Awesome. Thank you, Kevin. That was from Paulo, by the way.

Kevin:              OK, thanks a lot.

Taran:              That’s all the questions that we had for today’s webinar. Before we end this webinar, we do want to let everyone know that there is a survey available after this webinar is over. If you could please fill out that survey just so we can get some feedback on today’s webinar and let us know what topics we should definitely work on for our future webinars.

With that, our webinar today is over. Kevin, thanks again for presenting. We want to thank all of you for attending as well. We look forward to seeing you at our future webinars. Everyone have a great rest of the day.