How can you leverage the cloud to manage the costs of your Isilon data while you grow?

Recently, a SoftNAS customer that provides a sales enablement and readiness platform evaluated their current on-premises storage environment. They had growing concerns about whether their existing solution could support the large volumes of data their applications needed. The customer had Isilon storage in their datacenter and a dedicated remote Disaster Recovery (DR) datacenter with another Isilon storing 100 TB of data. With the growth of their data-intensive applications, the company predicted the dedicated DR site would be at full capacity within two years. This pushed them to evaluate cloud-based solutions that wouldn’t require them to continue purchasing and maintaining new hardware.

Buurst SoftNAS quickly became the ideal choice as it could support the petabytes of file storage they needed, allowing them to dynamically tier their storage by moving aging data to slower, less expensive storage. This new capability would solve their need for 100 TB of storage, as well as allow them to pay only for the services they used while also eliminating the need to pay for and maintain physical datacenters, network-attached storage, and storage area network (SAN) appliances.

By moving data to SoftNAS, the customer was able to quickly build a cloud-based DR platform with 100 TB of storage attached to it. Because of the successful disaster recovery platform, the company plans to make SoftNAS their primary storage solution, leveraging enterprise-class features like bottomless scalable storage and highly available clustering, allowing them to take full advantage of what the cloud has to offer.

So, how does this success story relate to you?

Data volumes are increasing at rates that are virtually impossible to keep up with using on-premises storage. If you’re leveraging an Isilon storage solution, you’re likely looking for ways to expand your storage capacity quickly, securely, and with the lowest cost. When considering your data storage strategy, ask yourself a few key questions:

  1. Do I have the physical space, power, and cooling for more storage?
  2. Do I have CapEx to purchase more storage?
  3. Do I want to build more data centers for more storage?

On-premises storage solutions can limit your organization from truly unlocking modern data analytics and AI/ML capabilities. The cost and upkeep required to maintain on-premises solutions prevent your teams from exploring ways to position your business for future growth opportunities. This push for modernization is often a driving factor for organizations to evaluate cloud-based solutions, which comes with its own considerations:

  1. Do I have a reliable and fast connection to the internet?
  2. How can I control the cost of cloud storage?
  3. How can I continuously sync live data?

With SoftNAS running on the cloud you can:

  • Make cloud backups run up to 400% faster at near-block-level performance with object storage pricing, resulting in substantial cost savings. SoftNAS optimizes data transfer to cloud object storage, so it’s as fast as possible without exceeding read/write capabilities.
  • Automate storage tiering policies to reduce the cost of cloud storage by moving aged data from more expensive, high-performance block storage to less expensive, slower storage, reducing cloud storage costs by up to 67%.
  • Continuously keep content up to date when synchronizing data to the cloud of your choice by reliably running bulk data transfer jobs with automatic restart/suspend/resume.

Get started by pointing SoftNAS at your existing Isilon shared storage volumes, selecting your cloud storage destination, and moving your data right away. It’s that easy.

Find out more about how SoftNAS enables you to:

  • Trade CapEx for OpEx
  • Seamlessly and securely migrate live production data
  • Control the cost of cloud storage
  • Continuously sync live data to the cloud

Download our eBook to learn about managing cloud data costs without sacrificing performance.


How to Maintain Control of Your Core in the Cloud

For the past 7 years, Buurst’s SoftNAS has helped customers in 35 countries globally to successfully migrate thousands of applications and petabytes of data out of on-premises data centers into the cloud. Over that time, we’ve witnessed a major shift in the types of applications and the organizations involved.

The move to the cloud started with simpler, low risk apps and data until companies became comfortable with cloud technologies. Today, we see companies deploying their core business and mission-critical applications to the cloud, along with everything else as they evacuate their data centers and colos at a breakneck pace.

At the same time, the mix of players has also shifted from early adopters and DevOps to a blend that includes mainstream IT.

The major cloud platforms make it increasingly easy to leverage cloud services, whether building a new app, modernizing apps or migrating and rehosting the thousands of existing apps large enterprises run today.

Whatever cloud app strategy is taken, one of the critical business and management decisions is where to maintain control of the infrastructure and where to turn control over to the platform vendor or service provider to handle everything, effectively outsourcing those components, apps and data.

So how can we approach these critical decisions to either maintain control or outsource to others when migrating to the cloud? This question is especially important to carefully consider as we move our most precious, strategic, and critical data and application business assets into the cloud.

One approach is to start by determining whether the business, applications, data and underlying infrastructure are “core” vs. “context”, a distinction popularized by Geoffrey Moore in Dealing with Darwin.

He describes Core and Context as a distinction that separates the few activities that a company does that create true differentiation from the customer viewpoint (CORE) from everything else that a company needs to do to stay in business (CONTEXT).

Core elements of a business are the strategic areas and assets that create differentiation and drive value and revenue growth, including innovation initiatives.

Context refers to the necessary aspects of the business that are required to “keep the lights on”, operate smoothly, meet regulatory and security requirements and run the business day-to-day; e.g., email should be outsourced unless you are in the email hosting business (in which case it’s core).

It’s important to maintain direct control of the core elements of the business, focusing employees and our best and brightest talents on these areas. In the cloud, core elements include innovation, business-critical and revenue-generating applications and data, which remain central to the company’s future.

Certain applications and data that comprise business context can and should be outsourced to others to manage. These areas remain important as the business cannot operate without them, but they do not warrant our employees’ constant attention and time in comparison to the core areas.

The core demands the highest performance levels to ensure applications run fast and keep customers and users happy. It also requires the ability to maintain SLAs around high availability, RTO and RPO objectives to meet contractual obligations. Core demands the flexibility and agility to quickly and readily adapt as new business demands, opportunities and competitive threats emerge.

Many of these same characteristics matter for business context areas as well, but they are less critical there, since context can simply be moved from one outsourced vendor to another as needed.

Increasingly, we see the business-critical, core applications and data migrating into the cloud. These customers demand control of their core business apps and data in the cloud, as they did on-premises. They are accustomed to managing key infrastructure components, like the network attached storage (NAS) that hosts the company’s core data assets and powers the core applications. We see customers choose a dedicated Cloud NAS that keeps them in control of their core in the cloud.

Example core apps include revenue-generating e-discovery, healthcare imaging, 3D seismic oil and gas exploration, financial planning, loan processing, video rental and more. The most common theme we see across these apps is that they drive core subscription-based SaaS business revenue. Increasingly, we see both file and database data being migrated and hosted atop of the Cloud NAS, especially SQL Server.

For these core business use cases, maintaining control over the data and the cloud storage is required to meet performance and availability SLAs, security and regulatory requirements, and to achieve the flexibility and agility to quickly adapt and grow revenues. The dedicated Cloud NAS meets the core business requirements in the cloud, as it has done on-premises for years.

We also see a lot of less critical business context data being outsourced and stored in cloud file services such as Azure Files and AWS EFS. In other cases, the Cloud NAS’s ability to handle both core and context use cases is appealing. For example, leveraging both SSD for performance and object storage for lower-cost bulk storage and archival, with unified NFS and CIFS/SMB, makes the Cloud NAS more attractive in certain cases.

There are certainly other factors to consider when choosing where to draw the lines between control and outsourcing applications, infrastructure and data in the cloud.

Ultimately, understanding which applications and data are core vs context to the business can help architects and management frame the choices for each use case and business situation, applying the right set of tools for each of these jobs to be done in the cloud.


Choosing the Right Instance Type and Instance Size for AWS and Azure

In this post, we’re sharing an easy way to determine the best instance type and appropriate instance size to use for specific use cases in the AWS and Azure clouds.

To help you decide, there are some considerations to keep in mind. Let’s go through each of these determining factors in depth.

Decision Point 1 – What use case are you trying to address?

  • A. Migration to the Cloud
    • Migrating existing applications into the cloud should not be complex, expensive, time-consuming, or resource-intensive, and it should not force you to rewrite your applications to run in the public cloud. If your existing applications access storage using the CIFS/SMB, NFS, AFP or iSCSI storage protocols, then you will need a NAS filer solution that allows your applications to access cloud storage (block or object) using the same protocols they already do.
  • B. SaaS-Enabled Applications
    • For revenue-generating SaaS applications, high performance, maximum uptime and strong security with access control are critical business requirements. Running your SaaS apps in a multi-tenant, public cloud environment while simultaneously fulfilling these requirements can be challenging. An enterprise-grade cloud NAS filer can help you cope with these challenges, even in a public cloud environment. A good NAS solution provider will assure no downtime, high availability and high levels of performance, provide strong security with integration to industry-standard access control, and make it easier to SaaS-enable apps.
  • C. File Server Consolidation
    • Over time, end users and business applications create more and more data – usually unstructured – and rather than waiting for the IT department, these users install file servers wherever they can find room to put them, close to their locations. At the same time, businesses either get acquired or acquire other companies, inheriting all their file servers in the process. Ultimately, it’s the IT department that must manage this “server sprawl” when dealing with OS and software patches, hardware upkeep and maintenance, and security. With limited IT staff and resources, the task becomes impossible. The best long-term solution is, of course, using the cloud and a NAS filer to migrate files to the cloud. This strategy allows for scalable storage that users access the same way they have always accessed their files on local file servers.
  • D. Legacy NAS Replacement
    • With a limited IT staff and budget, it’s impractical to keep investing in legacy NAS systems and purchase more and more storage to keep pace with the rapid growth of data. Instead, investment in enterprise-grade cloud NAS can help businesses avoid burdening their IT staff with maintenance, support and upkeep, and pass those responsibilities on to a cloud platform provider. Businesses also gain the advantages of dynamic storage scalability to keep pace with data growth, and the flexibility to map performance and cost to their specific needs.
  • E. Backup/DR/Archive in the Cloud
    • Use tools to replicate and back up your data from your VMware datacenter to the AWS or Azure public clouds. Eliminate physical backup tapes by archiving data in inexpensive S3 storage or in cold storage like AWS Glacier for long-term retention. For stringent Recovery Point Objectives, cloud NAS can also serve as an on-premises backup or primary storage target for local area network (LAN) connected backups. As a business’ data grows, the backup window can become unmanageable and tie up precious network resources during business hours. Cloud NAS with local disk-based caching reduces the backup window by streaming data in the background for better WAN optimization.

Decision Point 2 – What Cloud Platform do you want to use?

No matter which cloud provider is selected, there are some basic infrastructure details to keep in mind. The basic requirements are:

  • Number of CPUs
  • Size of RAM
  • Network performance
  • A. AWS
    • Standard: r5.xlarge is a good starting point in regard to memory and CPU resources. This category is suited to handle processing and caching with minimal requirements for network bandwidth. It comprises 4 vCPU, 16 GiB RAM, 1 GbE network.
    • Medium: r5.2xlarge is a good choice for read-intensive workloads, which benefit from this category’s larger memory-based read cache. The additional CPU also provides better performance when deduplication, encryption, compression and/or RAID is enabled. Composition: 8 vCPU, 32 GiB RAM, 10 GbE network.
    • High-end: r5.24xlarge can be used for workloads that transfer so much data that they require a very high-speed network connection. In addition to the very high-speed network, this instance level gives you a lot more storage, CPU and memory capacity. Composition: 96 vCPU, 768 GiB RAM, 25 GbE network. (A rough sizing sketch follows the Azure list below.)
  • B. Azure
    • Dsv3-series support premium storage and are the latest, hyper-threaded general-purpose generation running on both the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) and the 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell) processor. With the Intel Turbo Boost Technology 2.0, the Dsv3 can achieve up to 3.5 gigahertz (GHz). The Dsv3-series sizes offer a combination of vCPU, memory and temporary storage best suited for most production workloads.
    • Standard: D4s Standard v3 4 vCPU, 16 GiB RAM with moderate network bandwidth
    • Medium: D8s Standard v3 8 vCPU, 32 GiB RAM with high network bandwidth
    • High-end: D64s Standard v3 64 vCPU, 256 GiB RAM with extremely high network bandwidth
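As a starting point, the tiers above can be captured in a few lines of code. The helper below is a hypothetical first-pass chooser, not a SoftNAS tool; the instance names are the ones listed above (shown with their cloud API names), and the selection rules are deliberately simplistic.

```python
# Hypothetical sizing helper; the instance names mirror the tiers listed above.
WORKLOAD_TIERS = {
    "standard": {"aws": "r5.xlarge",   "azure": "Standard_D4s_v3"},   # 4 vCPU, 16 GiB RAM
    "medium":   {"aws": "r5.2xlarge",  "azure": "Standard_D8s_v3"},   # 8 vCPU, 32 GiB RAM
    "high-end": {"aws": "r5.24xlarge", "azure": "Standard_D64s_v3"},  # 96 / 64 vCPU
}

def pick_instance(cloud: str, read_heavy: bool, network_bound: bool) -> str:
    """Very rough first-pass mapping from workload traits to an instance size."""
    if network_bound:
        tier = "high-end"       # large transfers need the fastest network
    elif read_heavy:
        tier = "medium"         # benefits from the larger RAM-based read cache
    else:
        tier = "standard"
    return WORKLOAD_TIERS[tier][cloud]

print(pick_instance("aws", read_heavy=True, network_bound=False))  # r5.2xlarge
```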

Decision Point 3 – What type of storage is needed?

Both AWS and Azure offer block as well as object storage. Block storage is normally used with file systems while object storage addresses the need to store “unstructured” data like music, images, video, backup files, database dumps and log files. Selecting the right type of storage to use also influences how well an AWS Instance or Azure VM will perform.
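To make the distinction concrete, here is a minimal sketch of the two access patterns, assuming the AWS SDK for Python (boto3); the bucket name, object key, and mount path are placeholders, not values from any real deployment.

```python
import boto3

payload = b"example database dump"

# Object storage: data is addressed by bucket + key through an HTTP API.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-backup-bucket",        # placeholder bucket name
              Key="dumps/db-2024.sql.gz",
              Body=payload)

# Block storage: an EBS volume or Azure managed disk is attached to the VM,
# formatted with a file system, and then used through ordinary file paths.
with open("/mnt/block_volume/dumps/db-2024.sql.gz", "wb") as dst:  # placeholder mount
    dst.write(payload)
```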

Other resources:

About AWS S3 object storage

About AWS EBS block storage

Types of Azure storage

Decision Point 4 – Need a “No Storage Downtime Guarantee”?

No matter which cloud platform is used, look for a filer that offers High Availability (HA). A robust set of HA capabilities protects against data center, availability zone, server, network and storage subsystem failures to keep the business running without downtime. HA monitors all critical storage components to ensure they remain operational. If a system component suffers an unrecoverable failure, another storage controller detects the problem and automatically takes over, so there is no downtime or business impact. Companies are thus protected from the lost revenue they would otherwise incur when access to data resources and critical business applications is disrupted.
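For illustration only, the sketch below shows the general idea of heartbeat monitoring and failover in a simplified form. The health endpoint and the promotion step are hypothetical placeholders; this is not how SoftNAS or any particular filer implements HA.

```python
import time
import urllib.request

PRIMARY_HEALTH_URL = "http://10.0.1.10:8080/health"   # hypothetical health endpoint
FAILURE_THRESHOLD = 3                                  # consecutive misses before failover

def primary_is_healthy() -> bool:
    """Return True if the primary controller answers its health check."""
    try:
        with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def promote_standby() -> None:
    # Placeholder: a real controller would take over the virtual IP,
    # import the storage pool, and resume serving NFS/CIFS/iSCSI.
    print("Primary unreachable: promoting standby controller.")

failures = 0
while failures < FAILURE_THRESHOLD:
    failures = 0 if primary_is_healthy() else failures + 1
    time.sleep(10)
promote_standby()
```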

If you’re looking for more personal guidance or have any technical questions, get in touch with our team of cloud experts who have done thousands of VPC and VNet configurations across AWS and Azure. Even if you are not considering SoftNAS, our free professional services reps will be happy to answer your questions.

Learn more about free professional services, environment setup and cloud consultations.

The Secret to Maximizing Returns on Auto-Optimized Tiered Storage

Intelligent tiering in storage can save money, but you can additionally save up to 75% if you opt for dedupe and data compression first

Businesses deal with large volumes of data every day and continue to add to this data at a rate that’s often difficult to keep up with. Data management is a continuous challenge, and data storage is an exponential expense. While the cost of storage/GB may not be significant, it adds up quickly, over days and months, and becomes a significant chunk of ongoing expenses.

More often than not, it is not feasible to delete or erase old data. Data must be stored for various reasons such as legal compliance, building databases, machine learning, or simply because it may be needed later. But a large portion of data often goes untouched for months at a time, with no need for access, yet continues to rack up the storage disk bills.

Cutting costs with Automated Tiered Storage

As a solution to this problem faced by most business owners, many storage providers and NAS filers offer auto tiering for storage. With automated tiering, data is stored across various levels of disks to save on storage costs. This tiering means data that’s accessed less frequently is stored on disks with lower performance and higher latency—disks that are much cheaper, usually 50-60% cheaper than high performance disks.

It is often difficult to identify and isolate data that is less likely to be accessed, so policies are set in place to identify and shift data automatically. For instance, you may set the threshold at 6 weeks. Then, once 6 weeks have passed without accessing a certain block of data, that block is automatically moved down to a lower tier, where it is not as expensive to continue to store it.

So intelligent tiering of data helps reduce storage costs significantly without really impacting day-to-day operations.
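As a rough illustration of such a policy, the sketch below flags files untouched for six weeks as candidates for a lower tier. The mount point is a placeholder, and real tiering engines such as SoftNAS SmartTiers operate at the block level rather than on whole files.

```python
import os
import time

AGE_THRESHOLD_DAYS = 42  # six weeks without access

def tier_for(path: str) -> str:
    """Classify a file as 'hot' or 'cool' by how long since it was last read."""
    days_idle = (time.time() - os.path.getatime(path)) / 86400
    return "cool" if days_idle > AGE_THRESHOLD_DAYS else "hot"

VOLUME = "/mnt/pool/volume1"   # placeholder mount point
for name in os.listdir(VOLUME):
    path = os.path.join(VOLUME, name)
    if os.path.isfile(path) and tier_for(path) == "cool":
        print(f"would migrate {path} to the lower-cost tier")
```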

The Big Guns: Dedupe and Data Compression

While tiered storage helps you save on storage costs by optimizing the storage location, the expense is still proportionate to the amount of data. After all, you do pay for storage/GB. But before you even store data, have you considered, is this data streamlined? Am I saving data that can be pared down?

Deduplication

Unnecessarily bulky data is more common than you’d expect. Every time an old file or project is pulled out of storage for updates, to ramp up, or to make any changes, a new file is saved. So even if changes are made to only 1 or 2 MB of data, a new copy of the entire 4 TB file is made and saved. Now imagine this being done with several files each day. With this replication happening over and over again, the total amount of data quickly multiplies, occupying more storage, spiking storage costs, and even affecting IOPS. This is where inline deduplication helps.

With inline deduplication, files are compared block by block for redundancies, which are then eliminated. Instead, a reference count of the copies is saved. In most cases, data is reduced by 20-30% by making inline dedupe a part of the storage efficiency process delivered by a NAS filer.
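The toy example below shows the core idea: identical blocks are detected by hash, stored once, and tracked with a reference count. It is a drastic simplification of how inline dedupe works inside a file system such as ZFS.

```python
import hashlib
from collections import defaultdict

BLOCK_SIZE = 4096
store = {}                      # block hash -> block data (stored only once)
refcount = defaultdict(int)     # block hash -> number of logical references

def write_file(data: bytes) -> list:
    """Store a file block by block, deduplicating identical blocks."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block       # new unique block
        refcount[digest] += 1           # duplicates only bump the count
        refs.append(digest)
    return refs

# 100 distinct blocks, then a "new copy" of the file with only one block changed.
original = b"".join(i.to_bytes(4, "big") * 1024 for i in range(100))
copy_with_small_edit = b"\xff" * BLOCK_SIZE + original[BLOCK_SIZE:]
write_file(original)
write_file(copy_with_small_edit)
print(f"logical blocks written: 200, unique blocks stored: {len(store)}")  # 101
```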

Data compression

Reducing the number of bits needed to represent the data through compression is a simple process and can be highly effective – data can be reduced by 50-75%. The extent of compression depends on the nature of the data and how compressed it is at the outset. For instance, an mp4 file is already a highly compressed format. But, in our experience, data usually offers good opportunities for reduction through compression.

By compressing data, the amount of storage space needed is reduced, and the costs associated with storage come down too.
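The snippet below gives a feel for the range of outcomes, using Python’s zlib on repetitive, log-like text versus random bytes (a stand-in for already-compressed media such as mp4).

```python
import os
import zlib

def reduction(data: bytes) -> float:
    """Fraction of space saved by zlib compression."""
    return 1 - len(zlib.compress(data, level=6)) / len(data)

log_like = b"2024-01-01 INFO request ok latency=12ms\n" * 25_000   # ~1 MB of text
random_like = os.urandom(1_000_000)                                # incompressible data

print(f"log-like data reduced by {reduction(log_like):.0%}")       # typically well over 90%
print(f"random data reduced by {reduction(random_like):.0%}")      # roughly 0%
```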

When we combine the effect of deduplication and compression, we find that customers reap savings of up to 90%! If this new streamlined data is then stored using automated tiering, the savings are amplified because

1. The amount of data to be stored is reduced, thus saving on storage capacity needed across ‘hot’ and ‘cool’ tiers
2. Input/Output is reduced, leading to better performance

Data deduplication and compression explained in a use case

Let’s say we have 1 TB of actual data to store. On average, cloud SSD storage costs $0.10 per GB/month, so $100 per TB/month.

If the data can be reduced by 80% using deduplication and compression (which is likely), the effective cost per TB is just 20% of the original projection, or $20 per TB/month. Now add in auto-tiering, which cuts the cost in half again by using a combination of SSD and lower-cost HDD, and you have a $10/TB cost basis.

If your storage needs grow to 10 TB over time, you will pay $100 per month – the amount you would have been paying for basic file storage of just 1 TB of data before dedupe and data compression.

The net effect of these combined storage efficiency capabilities delivered by SoftNAS, for example, is to reduce the effective cost from $0.10 per GB/month to roughly $0.02 per GB/month by combining tiering, compression and deduplication – without sacrificing performance. With the rapidly increasing amount of data that must be managed, who doesn’t want to cut their cloud storage bill by 80% or more?
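The same arithmetic in a few lines, using the illustrative prices from the example above (not quotes from any provider):

```python
ssd_price_per_gb = 0.10                       # $/GB-month, illustrative figure
data_tb = 1
raw_cost = data_tb * 1000 * ssd_price_per_gb  # $100 per TB/month

after_reduction = raw_cost * (1 - 0.80)       # dedupe + compression: $20
after_tiering = after_reduction / 2           # SSD/HDD tier mix: $10

print(f"raw: ${raw_cost:.0f}, reduced: ${after_reduction:.0f}, tiered: ${after_tiering:.0f}")
print(f"10 TB would cost about ${after_tiering * 10:.0f} per month")
```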

Auto-Optimized Tiered Storage with SoftNAS

Buurst SoftNAS offers SmartTiers, auto-tiered storage with the added advantage of flexible operations. After deduplication and compression, data is stored in tiers – with tiering that you can configure and control, according to policies to suit your usage patterns, and optimize further as your usage evolves. Our goal is to achieve the price/performance equation that suits your business, so even when data stored away in a low-cost tier is accessed, only the particular block accessed migrates up to the hot tier, not the entire file. With a user-friendly interface, you can continue to manage the policies and thresholds, and the capacity of the tiers, and the rest of the cost-saving optimizations happen automatically and transparently.

Want to know what kind of savings you can achieve with SoftNAS SmartTiers? Try our storage cost-savings calculator.

Cloudberry Backup – Affordable & Recommended Cloud Backup Service on Azure & AWS

Let me tell you about a CIO I knew from my days as a consultant. He was even-keeled most of the time and could handle just about anything that was thrown at him. There was just one time I saw him lose control – when someone told him the data backups had failed the previous night. He got so flustered, it was as if his career were flashing before his eyes and the ship were going down with no lifeboats left.

If You Don’t Backup Data, What Are the Risks?

Backing up one’s data is a problem as old as computing itself. We’ve all experienced data loss at some point, along with the pain, time and costs of recovering from its impacts on our personal or business lives. We back up to avoid these problems; it’s insurance you hope you never have to use, but, as Murphy’s Law goes, if anything can go wrong, it will.

Data storage systems include various forms of redundancy and the cloud is no exception. Though there are multiple levels of data protection within cloud block and object storage subsystems, no amount of protection can cure all potential ills. Sometimes, the only cure is to recover from a backup copy that has not been corrupted, deleted or otherwise tampered with.

SoftNAS provides additional layers of data protection atop cloud block and object storage, including storage snapshots, checksum data integrity verification on each data block, block replication to other nodes for high availability and file replication to a disaster recovery node. But even storage snapshots rely upon the underlying cloud storage block and object storage, which can and does fail occasionally.

These cloud native storage systems tout anywhere from 99.9% up to 11 nines of data durability. What does this really mean? It means there’s a non-zero probability that your data could be lost – it’s never 100%. So, when things do go wrong, you’d do best to have at least one viable backup copy. Otherwise, in addition to recovering from the data loss event, you risk losing your job too.
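To put those durability figures in perspective, here is a quick back-of-the-envelope calculation, treating durability as an annual per-object survival probability; the object count is an arbitrary illustration.

```python
def expected_annual_losses(objects: int, durability: float) -> float:
    """Expected number of objects lost per year at a given durability level."""
    return objects * (1 - durability)

objects_stored = 10_000_000
print(expected_annual_losses(objects_stored, 0.999))          # 3 nines: ~10,000 objects/year
print(expected_annual_losses(objects_stored, 0.99999999999))  # 11 nines: ~0.0001 objects/year
```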

Why Companies Must Have a Data Backup

Let me illustrate this through an in-house experience.

In 2013, when SoftNAS was a fledgling startup, we had to make every dollar count and it was hard to justify paying for backup software or the storage it requires.

Back then, we ran QuickBooks for accounting. We also had a build server running Jenkins (still do), domain controllers and many other development and test VMs running atop of VMware in our R&D lab. However, it was going to cost about $10,000 to license Veeam’s backup software and it just wasn’t a high enough priority to allocate the funds, so we skimped on our backups. Then, over one weekend, we upgraded our VSAN cluster. Unfortunately, something went awry and we lost the entire VSAN cluster along with all our VMs and data. In addition, our makeshift backup strategy had not been working as expected and we hadn’t been paying close attention to it, so, in effect, we had no backup.

I describe the way we felt at the time as the “downtime tunnel”. It’s when your vision narrows and all you can see is the hole that you’re trying to dig yourself out of, and you’re overcome by the dread of having to give hourly updates to your boss, and their boss. It’s not a position you want to be in.

This is how we scrambled out of that hole. Fortunately, our accountant had a copy of the QuickBooks file, albeit one that was about 5 months old. And thankfully we still had an old-fashioned hardware-based Windows domain controller. So we didn’t lose our Windows domain. We had to painstakingly recreate our entire lab environment, along with rebuilding a new QuickBooks file by entering all the past 5 months of transactions, and recreate our Jenkins build server. After many weeks of painstaking recovery, we managed to put Humpty Dumpty back together again.

Lessons from Our Data Loss

We learned the hard way that proper data backups are much less expensive than the alternatives. The week after the data loss occurred, I placed the order for Veeam Backup and Recovery. Our R&D lab has been fully backed up since that day. Our Jenkins build server is now also versioned and safely tucked away in a Git repository so it’s quickly recoverable.

Of course, since then we have also outsourced accounting and no longer require QuickBooks, but with a significantly larger R&D operation now we simply cannot afford another such event with no backups ever again. The backup software is the best $10K we’ve ever invested in our R&D lab. The value of this protection outstrips the cost of data loss any day.

Backup as a Service

Fortunately, there are some great options available today to back up your data to the cloud, too. And they cost less to acquire and operate than you may realize. For example, SoftNAS has tested and certified the CloudBerry Backup product for use with SoftNAS. CloudBerry Backup (CBB) is a cloud backup solution available for both Linux and Windows. We tested the CloudBerry Backup for Linux, Ultimate Edition, which installs and runs directly on SoftNAS. It can run on any SoftNAS Linux-based virtual appliance, atop of AWS, Azure and VMware. We have customers who prefer to run CBB on Windows and perform the backups over a CIFS share. Did I forget to mention this cloud backup solution is affordable at just $150, and not $10K?

Here’s a block diagram of one example configuration below. CBB performs full and incremental file backups from the SoftNAS ZFS filesystems and stores the data into low-cost, highly-durable object storage – S3 on AWS, and Azure blobs on Azure.

CBB supports a broad range of backup repositories, so you can choose to back up to one or more targets, within the same cloud or across different clouds as needed for additional redundancy. It is even possible to back up your SoftNAS pool data deployed in Azure to AWS, and vice versa. Note that we generally recommend creating a VPC-to-S3 or VNET-to-Blob service endpoint in your respective public cloud architecture to optimize network storage traffic and speed up backup timeframes.

[Diagram: example CloudBerry Backup configurations – an AWS region with a single VPC, and an Azure region]
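For reference, the kind of VPC-to-S3 gateway endpoint recommended above can be created with a few lines of boto3; the region, VPC ID, and route table ID below are placeholders for your own environment.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                 # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",      # S3 gateway service for the region
    RouteTableIds=["rtb-0123456789abcdef0"],       # placeholder route table ID
)
```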

To reduce the costs of backup storage even further, you can define lifecycle policies within the Cloudberry UI that move the backups from object storage into archive storage. For example, on AWS, the initial backup is stored on S3, then a lifecycle policy (managed right in CBB) kicks in and moves the data out of S3 and into Glacier archive storage. This reduces the backup data costs to around $4/TB (or less in volume) per month. You can optionally add a Glacier Deep Archive policy and reduce storage costs even further down to $1 per TB/month. There is also an option to use AWS S3 Infrequent Access Storage.
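CBB manages these transitions for you, but for reference, the equivalent S3 lifecycle rule looks roughly like this in boto3; the bucket name, prefix, and day counts are placeholders.

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-softnas-backups",              # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},      # placeholder prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"},        # roughly $4/TB-month
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},  # roughly $1/TB-month
            ],
        }]
    },
)
```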

There are similar capabilities available on Microsoft Azure that can be used to drive your data backup costs down to affordable levels. Bear in mind that the current version of CloudBerry for Linux has no native Azure Blob lifecycle management integration; those functions need to be performed via the Azure Portal.

Personally, I prefer to keep the latest version in S3 or Azure hot blob storage for immediate access and faster recovery, along with several archived copies for posterity. In some industries, you may have regulatory or contractual obligations to keep archive data much longer than with a typical archival policy.

Today, we also use CBB to back up our R&D lab’s Veeam backup repositories into the cloud as an additional DR measure. We use CBB for this because there are no excessive I/O costs when backing up into the cloud (Veeam performs a lot of synthetic merge and other I/O, which drives up I/O costs based upon our testing).

In my book, there’s no excuse for not having file level backups of every piece of important business data, given the costs and business impacts of the alternatives: downtime, lost time, overtime, stressful calls with the bosses, lost productivity, lost revenue, lost customers, brand and reputation impacts, and sometimes, lost jobs, lost promotion opportunities – it’s just too painful to consider what having no backup can devolve into.

To summarize, there are 5 levels of data protection available to secure your SoftNAS deployment:

1. ZFS scheduled snapshots – “point-in-time” meta-data recovery points on a per-volume basis
2. EBS / VM snapshots – snapshots of the Block Disks used in your SoftNAS pool
3. HA replicas – block replicated mirror copies updated once per minute
4. DR replica – file replica kept in a different region, just in case something catastrophic happens in your primary cloud datacenter
5. File System backups – CloudBerry or equivalent file-level backups to Azure Blob or S3.

So, whether you choose to use CloudBerry Backup, Veeam®, native Cloud backup (ex. Azure Backup) or other vendor backup solutions, do yourself a big favor. Use *something* to ensure your data is always fully backed up, at the file level, and always recoverable no matter what shenanigans Murphy comes up with next. Trust me, you’ll be glad you did!

To learn more and get started backing up your SoftNAS data today, download the PDF to get all the details on how to use CBB with SoftNAS.

Disclaimer:

SoftNAS is not affiliated in any way with CloudBerry Lab. As a CloudBerry customer, we trust our business’ data to CloudBerry. We also trust our VMware Lab and its data to Veeam. As a cloud NAS vendor, we have tested with and certify CloudBerry Backup as compatible with SoftNAS products. Your mileage may vary.

This post is authored by Rick Braddy, co-founder and CTO at Buurst SoftNAS. Rick has over 40 years of IT industry experience and contributed directly to the formation of the cloud NAS market.

Webinar: Get your On-Premises NAS in the Azure Cloud

The following is a recording and full transcript from the webinar, “Get your On-Premises NAS in the Azure Cloud”. You can download the full slide deck on SlideShare.

Looking to transition your enterprise applications to the highly-available Azure cloud? No time/budget to re-write your applications to move them to Azure? SoftNAS Cloud NAS extends Azure Blob storage with enterprise-class NAS file services, making it easy to move to Azure. Register for our live webinar and discover how you can quickly and easily:

– Configure Windows servers on Azure with full Active Directory control
– Enable enterprise-class NAS storage in the Azure cloud
– Provide disaster recovery and data replication to the Azure cloud
– Provide highly available shared storage

Full Transcript: Get your On-Premises NAS in the Azure Cloud

David Mitchell: Okay, folks, we’re just on the hour now so let’s get started. I want to click on record. Okay, it’s done. First of all, welcome to today’s webinar. Today we’re going to be talking about getting your on-premise NAS in Azure Cloud. Today’s presenter is going to be Matt Blanchard, a solutions architect with us here in SoftNAS.

My name is David Mitchell. Before I hand you over to Matt, I just have a couple of slides to cover. As I mentioned, Matt is our presenter today and I’ll hand you over to him shortly.

It looks like everyone has safely got into GoToWebinar. Hopefully, you can see and you can hear us. If it’s your first time using GoToWebinar, you do have a couple of options for audio. You can either use the mic and speakers or the telephone.

If you’re using the telephone, we do have a direct dial-in for most countries so make sure you do that and enter in your audio pin. If not, use your mic and speakers. You may need just to configure that if you have a couple of different options there in your local device.

Throughout the session today, we are going to have everyone on mute so the best way to handle a question, we found, is to use the questions pane. As Matt goes through the slides and the demo, if you have any questions please post them there.

We have allocated some time at the very end to go over the questions. I’m sure Matt will remind you as he goes through the webinar.

Lastly, as I mentioned and as you probably heard me saying about recording, we are recording this session. If you do need to leave or if a colleague couldn’t make it or if you know of someone else who’s interested and maybe couldn’t make it, we will be sending out a link to the recording after this and also a link to the slides.

We post our slides on SlideShare so don’t worry about writing down notes or anything like that should get access to all the material. That’s it. I am going to now hand you over to Matt.

Matt, if you want to unmute your line. I’m going to make you presenter. I can see your slide, Matt, but I can’t hear you.

Matt Blanchard: Can you hear me now?

David: Yeah, loud and clear.

Matt: Great! Do you see the slides?

David: I do. You want to put it into presentation mode so we make sure we can.
Matt: I thought we were in presentation mode there. How’s that?

David: No, I can just see them in regular view.

Matt: We’ll try this one. How’s that?

David: No, it’s still the same for me, Matt.

Matt: I am sorry. Are you seeing the car on the background?

David: No, I’m just seeing a picture of the slides.

Matt: Let me do this.

David: I guess everyone is seeing on the webinar. I don’t know if you want to put a comment in the questions pane there if everyone is seeing the same thing.

Matt: How about that? Now, do you see just this?

David: Yeah, that’s perfect now. That’s it.

Matt: We will go from there. I’m sorry.

David: It’s perfect. Over to you, Matt.

Matt: I’m sorry about that David. Starting off once again, my name is Matt Blanchard. I am a principal solutions architect here at SoftNAS. Today, we’re going to talk about some of the advantages of using Microsoft Azure for your cloud storage devices inside the cloud and helping you make plans to move from your on-premise solution today into the cloud of tomorrow.

This is not a new concept. This is what we’ve seen trending for the last several years: the build versus buy aspect, where we’re going to have a great economy of scale whenever we buy assets or we buy into an OpEx partner and we are able to use that type of partnership to advance our IT needs, versus a low economy of scale if I have to invest my own money to build up the information systems and buy large SANs, networking, storage networks, and so forth.

Hosting that and building that all out myself takes a lot of capital investment. This is the paradigm — it’s on-premise versus the cloud architecture. A lot of the things that we have to provide for ourselves on-premise are things that are assumed and given to us in configurations in the cloud, such as with Microsoft Azure giving us the ability to have full-fledged VMs running inside of our Azure repository and accessing our SoftNAS virtual SANs. We are able to give you network access control for all your storage needs within a small, packaged, usable space.

What does this afford us? I don’t have to build my own data center. I can have all my applications running in the cloud on-services versus having them on-premise running physically and having to maintain them physically on datasets.

If you think about rebuilding applications for the next generation of databases or having the next generation of server componentry that we’re going to install that may not have the correct driver sets for our applications and having to rebuild all those things. It makes it quite tedious to help move forward your architecture.

However, when we start to blow those lines and move into let’s say a hosting provider or a cloud services, those dependencies on the actual hardware devices and the physical device drivers start to fade away because we’re running these applications as services and not as physical supported sideload architectures.

This movement towards Azure in the cloud, it makes quite a bit of sense whenever you start looking at the economies of scale, how fast we could grow in capacity, and things like bursting control whenever we have large amounts of data services that we’re going to have to supply on-demand versus things that we have on a constant day-to-day basis.

Say we are a big software company or a big game company that’s releasing the next new Star Wars game. I’ll have to TM that or something in my conversation. You’ll have to see us. It might be some sort of online game that needs extra capacity for the first weekend out just to support all the new users who’re going to be accessing that.

This burst ability and this expandability into the cloud make all the sense in the world, because who wants to spend that money on hardware to build out infrastructure for something that may or may not continue to be that large of an investment in the future? We can scale that down over time or scale it up over time, either way. Maybe we undersized our build. You can think of it in that aspect.

It really makes sense – this paradigm switch into the cloud mantra.

At SoftNAS, we’ve built our architecture to be flexible and adaptable inside of this cloud architecture. We’ve built a Linux virtual machine; it’s built on CentOS. It runs ZFS as our file system on that kernel.

We run all of our systems on open controllable systems. We have staff on-site that contribute into these open-source amalgams to make these systems better into CentOS and ZFS. We contribute a lot of intellectual property to help advance these technologies into the future.

We, of course, run HTML5 as our admin UI, we have PHP, and Apache is our web server. We have all these open systems that allow us to take advantage of a great open-source community out there on the internet.

We integrate with multiple different service users. If you have customers that are currently running in AWS or CenturyLink Cloud and they are looking to migrate into Azure — make a change — it’s very easy for us to come in and help you make that data migration change because inserting a SoftNAS service into both of those service providers and then simply migrating that data is a very simple and easy to do task.

We are actually going to cover that in our demonstration here in just a short few slides. I promised David I would not slide you all to death today. We’re going to go through a few of these slides further, then we’re going to get into a demonstration, then we’ll touch on a few that we’re going to end up with, and then we’ll do a quick Q&A.

As I said, we really do take in responses. We want to be flexible. We want to be open. We want all of our data resources to have multiple use-cases. We are able to provide a full-featured NAS service that does all of these things in the data services tab.

Block replication, we can do inline deduplication, caching, storage pools, thin provisioning, writable snapshots, and snapclones. We can do compression, encryption. All of these different offerings, we are able to give you in a single packaged NAS solution.

Once again, all the things that you think you’ll come back in like, “I’m going to have to implement all of that stuff. I’m going to have to buy all these different componentry and insert them into my hardware,” those are things that are assumed and used and we are able to go ahead and give you directly in our NAS solution.

How does SoftNAS work? To be very forthcoming, it’s basically a gateway technology. We are able to present storage capacity, whether it be a CIFS or SMB access medium for Windows users for some sort of Windows file share, an NFS share for some Linux machines, or even just an iSCSI block device or an Apple File Protocol share for entire machine backups.

If you have end-users or end-devices that need storage repositories of multiple different protocols, we are able then to store that data into say an Azure Blob Storage or even a native Azure storage device.

We are able then to translate those protocols into an object protocol, which is not a native language. We don’t speak in object whenever we’re going through a normal SMB connection, but we do also speak native object directly into Azure Blob. We offer the best of both worlds with this solution.

Just the same as native block devices, we have a native block protocol that we use to talk directly to Azure disks that are attached to these machines. We are able to create flexible containers that make data uniformly accessible.

How does this play out and work in the real world? What we’re basically going to do is we’re going to present a single IP point of access that all of these file systems will land on. All of our CIFS access, all of our NFS exports, all of the AFP shares will all be enumerated out on a single SoftNAS instance and they will be presented to these applications, servers, and end-users.

The storage pools are nothing more than conglomerations of disks that have been offered up by the Microsoft Azure platform. Whether it’s Microsoft Blob or it’s just native disks, if it’s even another type of object device that you’ve imported into these drives, we can support all of those device types and create storage pools of different technologies.

And we can attach volumes and LUNs that have shares of different protocols to those storage pools so it allows us to have multiple different connection points to different storage technologies on the backend.

And we do this as a basic translation and it’s all seamless to the end-user or the end device.

We’re going to go really quick into a demonstration of this. If you don’t mind, just stick with me here. David, please interrupt me if my screen does not show up correctly here. I should be showing my screen now that has my Azure portal on it.

What we’re going to do right now is we’re going show you how easy it is to deploy a SoftNAS virtual machine into my Azure portal. I’ve got both the virtual portal up here as well as Microsoft Azure…
My Azure portal has timed out on me so I’ll just come back to this one here. I’m going to show you how to deploy this VM within the gallery. It’s very simple. All we have to do is come down and click on new.

I’m going to select compute in virtual machine. Once I select that, I’m going to select gallery and it’s going to bring up a selector. I could simply come in here and insert SoftNAS. Once I type soft, it’s actually going to appear here and I can select my instance that I would like to provision.

I’m not going to go all the way through this provisioning system, but you can kind of get the gist. It’s in the interest of time, and to not build up some machines for us.

We’re going to go through and call this MS Blob demo. You would then select your different platforms. If we’re going to have an A2. I think D4 is one of our standard offerings. We can build out these machines in a multitude of different ways according to your data needs.

If you’re going to be doing quite a bit of caching for read/cache, we might want to increase the RAM size because ZFS is very heavy on RAM for caching. We might come in here and add more memory, 28 gigs of memory.

We can come in then and create a user. Let’s call it SoftNAS and give it a password. Create a password. I’m sorry I’m not great at talking while I’m typing. Then we just continue forward.

After we select our password, we can come in and create a new cloud service or select a cloud service that we’ve already created before. Then we’ll come in and add some DNS names for this.

We can come in and add some different information for our network in our subnet if we wanted to select a different network. The last piece that we would need to use is set up SSH access as well as HTTPS. Where is HTTPS? There it is.

Once that is created, we are ready to go. We would be able to come in here and click next, next, next, and it would create this instance. I’m going to go ahead and kill this and show you all what we are going to be presented with.

You are going to be presented with a machine that looks something like this. After this machine has built up and everything is lined out correctly, you’re going to have a SoftNAS machine that you’re going to log into and be presented with this UI.

Now how do I add disk repositories to this? How do I add resources? If I want to add a native Microsoft disk to this or an Azure disk, I can come back into my Azure portal and simply select my system that I would like to add it. I am going to come in and I’m going to click on the dashboard.

Then down here at the bottom…Oops! I’m on the wrong one. I think I need to be on the SoftNAS one here. Yes, this is the one I need to be on.

Down at the bottom here, you’ll see attach and I can attach a new disk. Create a disk, I’ll call this one 10 gigs and attach. It will go through the attachment process of this disk.

Once it finishes, the disk will be available for use. We could move forward with adding a protocol as we chose. In this instance, I’m going to go ahead and show you all how to add a blob device as well.

A brand new option that we’ve just released is adding blob devices for use inside of our SoftNAS storage system. I’m back into my SoftNAS virtual machine – it’s running on my Azure system.

I’m going to come in and I’m going to add a device. I’m going to select Azure Blob. After I select Azure Blob, you’ll notice that I’m given my user name. I can put in MBlanchard is my user name.

I could come in and my access key. I’m not going into the rigamor of typing my access key out or copying and pasting it. I’m sorry. I don’t want to show that off to the whole world.

I’ll add a container base name here. We would want to customize this so I’m going to call this Matt Blob or something like that. You’ll notice that once I select off of that area, the Matt Blob container base name pops itself down here into the container name.

And that’s basically just coming in and creating a custom container. All containers in the world will have to be named something unique so we go ahead and throw in some unique characters here at the end of your base name to make sure that it’s completely randomized and unique.

We can select our disk size as we would with any maximum disk size. This is thin provisioned by default but we’re going to have to set maximum sealing limit. Then we could select if we’d like to encrypt this disk as well and give it a password to encrypt that upon.

Once again, I would have to add my access key in here to create the blob devices. I’m not going to go through that rigmarole. In the interest of time, I have gone ahead and added in some blob devices and gotten us ready for the rest of the demonstration.

The rest of this demonstration is going to be going through and configuring two SoftNAS machines to talk with a synchronized ZFS replication running between them. Right now, what I have set up is two different machines.

You see they are pretty much identical, both machines have disk drives that have already been provisioned, and I have already provisioned these devices for use on a pool on my second machine.

On my second machine, I’ve already configured this pool but I have not added any protocols – basically no files or data on to these pools. I have created this storage pool for interest of time so I don’t have to come and create this twice.

I’m going to replicate this data on my primary instance. This primary instance could be in a datacenter that I am going to be using as my primary datacenter and my primary means of access.

Once that primary datacenter is up and running, which would be this machine, we were going to have storage repositories and protocols attached to this machine and all the data will be asynchronously replicated across the wire to our secondary machine.

This happens on a schedule about every one minute and it’s a ZFS sync replication that goes on. And after the copy happens, it will happen another one minute afterward, and one minute afterward, and so forth.

The two things that have to be configured upon this replication is the name of the pool and the size of the pool. Both those variables need to be the same in order for replication to happen.

Let’s go ahead and set up a pool that is equal to the pool that we’ve set up on our secondary machine. Our secondary machine, I have called Microsoft Blob and it has 10 gig disks. If you look at our details here, you’ll see that it has two disks that are hosted on this SoftNAS instance from Azure.

Let’s go ahead and do that on my primary machine. I come in here and click “create” on the pool creation wizard. I will name it Microsoft Blob just like my other one.

You’ll see that I have several different RAID options to use. I can use a JBOD array. I can use RAID 0 for striping. I can use RAID 10 for mirrors and stripes. And RAID 5, 6, and 7 for parity: I can do single parity, dual parity, and triple parity.

Just for demonstration’s sake, I used RAID 0 to give me the maximum speed possible across these two disks. I can select the two disks I would love to use. At the bottom, you’ll see a couple of different options.

I can force creation which basically says, “Hey, if there was a pool already created on these disks, overwrite it.” If you do have a pool on this disk and you’re trying to create another pool on top of it, we’re going to warn you because ZFS is very resilient and it can recover from a lot of errors.

If you do happen to have an issue where you disconnect a disk and it had a pool on it and now you reconnect it, we don’t want you to lose that data. It’s going to flag it and say, “Hey don’t use this disk. It already has a pool on it.”
LUKS encryption. That’s the Linux encryption system. We are able then to supply the password and a repeated password to enable AES encryption. The last one is sync mode, which is a write checksum. It’s making sure the writes are landing on the disk correctly.

We have three options; standard, which does its best case to check the write on every write. If not, it comes back for it and checks it later. Always, it reserves CPU time to check every write. And disable which we don’t ever [inaudible 26:02] people using. That never checks that write. It just goes on forward and goes along its business. It is the fastest mechanism, but it is also the most careless and worrisome.

I’m going to go ahead and create this pool. Now we will have a pool of equal size and equal name to my carrier pool on my secondary instance.

There are a couple of other options I can do on this tab if I did come in later and I need to extend this for more data volume. I could come in and click “expand –add any disks to this array” and it’s going to add those disks along and make that storage larger.

I can import any ZFS pools that have been brought in orphaned and this in case of a disaster recovery area, we can bring in those disks and attach them directly and import the pools.

We can add a read/cache. If we have high-speed local disks, that would be great usage for read cache to allow us to have a certain percentage size space for read caching.

By default, the ZFS takes half the system RAM for read hot caching and this is going to be layer two hot cache. We automatically have that much resources for caching. However, this is just layering back on top of that to give us even more caching.

The last piece here is write logging. This is ZIL, which is ZFS Intent Log. This is giving us write security for some writes that are under 32K. Anything that we’re writing on disk is going to be enumerated on the ZIL and we will be able to use that to reset where those writes had landed in a previous time.

We can also add a hot spare device in here if we care to, but I’m not going to go into those any further. The next piece after we’ve created our storage pool is we need to create our writable protocols or our volumes or shares.

Let’s go over here to our volumes in LUNs tab and let’s create some volumes. We’re going to call this first one just Vol and make it very simple. We will attach it to Microsoft Blob.

We can say let’s just do CIFS and AFP for this tab. We will thin provision this. Notice we can choose to thick or thin provision. We can choose if we’d like to use compression or deduplication.

A bit of warning: compression uses a little bit more CPU time, and deduplication is intensive on RAM, so we advise you to bump your RAM up by about 1 gig per terabyte of deduped data.

This is inline dedupe, ZFS’s inline file system, so everything is inline when it’s deduped so it’s on the fly and ready to go. Once again, we can set our sync mode directly on the volume versus directly on the pools. Either way, you can set it on volumes or pools.

Also, notice we have a snapshots tab. This allows us to select which type of snapshotting we’d like. If we’d like to have a default schedule which is about every three hours or so; 24/7, which is every single hour for every 24 hours. You can come in and edit that schedule or create schedules as you would like.

We also have a retention policy here that sets the retention time for each type of snapshot. These are ZFS snapshots stored on the volume itself. I’m going to go ahead and create a couple of these volumes for our data, just to demonstrate that when we do our replication, that data is actually replicated across the wire.
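SoftNAS ships its own scheduler for this, but as a rough illustration, a scheduled snapshot with a retention window boils down to something like the following sketch (hypothetical dataset name and retention count):

```python
import datetime
import subprocess

DATASET = "pool1/vol1"  # hypothetical volume
KEEP = 24               # retention: keep the newest 24 snapshots from this schedule

def take_snapshot() -> None:
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
    subprocess.run(["zfs", "snapshot", f"{DATASET}@hourly-{stamp}"], check=True)

def prune_snapshots() -> None:
    # List this dataset's snapshots oldest-first, then destroy anything past the window.
    result = subprocess.run(
        ["zfs", "list", "-t", "snapshot", "-H", "-o", "name",
         "-s", "creation", "-d", "1", DATASET],
        check=True, capture_output=True, text=True)
    hourly = [name for name in result.stdout.split() if "@hourly-" in name]
    for old in hourly[:-KEEP]:
        subprocess.run(["zfs", "destroy", old], check=True)

take_snapshot()
prune_snapshots()
```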

I’m going to select Vol 2, and this time we’ll do maybe NFS and CIFS. I’ll create it. Then we’ll create a block device on this last one.

Vol 3, attached to Microsoft Blob once again. This time I’m going to create an iSCSI LUN. Notice that as soon as we select the iSCSI block device, the thick provisioning button is selected automatically. That’s basically because most of the time, an iSCSI device has a finite LUN size.

I’m going to say 5 gigs. Also notice that we have a LUN Targets tab up here, which means we just need to generate an IQN for devices to hook into. We’ll generate the IQN here so that our iSCSI initiators can connect to those targets, and click “Create.”
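From the client side, hooking an initiator into that target uses the standard open-iscsi tooling; the portal address and IQN below are hypothetical placeholders for whatever the LUN Targets tab generates:

```python
import subprocess

PORTAL = "10.0.0.10:3260"                            # SoftNAS instance, hypothetical address
TARGET_IQN = "iqn.2016-01.com.example.softnas:vol3"  # hypothetical generated IQN

# Discover the targets advertised at the portal.
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)

# Log in to the target so it shows up on the client as a local block device.
subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"], check=True)
```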
Everything now is basically created. We’ve created data repository shares ready for users to start dropping data into. If we wanted somebody to come in and write to Vol 1, we’d say, “Hey, this one’s a CIFS share. Go to \\microsoft_blob\vol1, or use the IP address in place of the name,” and they would have access to these volumes.

If you had an NFS share, you could come in and do the same with Vol 1 and Vol 2. All of these exports are ready to go and ready to be written to.
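To make that concrete, here’s roughly what attaching those exports looks like from a Linux client, with hypothetical IP, share, export-path, and mount-point names:

```python
import subprocess

SOFTNAS_IP = "10.0.0.10"  # hypothetical

# CIFS/SMB share (the \\<host>\vol1 path from above); credentials are hypothetical.
subprocess.run(["mount", "-t", "cifs", f"//{SOFTNAS_IP}/vol1", "/mnt/vol1",
                "-o", "username=demo,vers=3.0"], check=True)

# NFS export for the second volume, pinned to NFSv4; export path assumed to follow pool/volume naming.
subprocess.run(["mount", "-t", "nfs", "-o", "vers=4",
                f"{SOFTNAS_IP}:/microsoft_blob/vol2", "/mnt/vol2"], check=True)
```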

Notice that we can also integrate directly with Active Directory. It’s a simple Active Directory wizard that asks for your domain name, your NetBIOS name, and an administrator account (a user that can add machines into AD), and then it basically joins this machine to AD.

Once that’s all done and this machine is added into Active Directory, you can assign user rights and group rights to all its file shares and so forth from within Windows.

Now we have everything set up. However, if we look at our secondary machine, we don’t have any data here. If we look at our volumes and LUNs tab, there is no data on this secondary machine.

We want to now have a backup, a replicated copy of data on this second machine. Let’s go ahead and set that up through something that we call SnapReplicate.

I’m going to go ahead and add our replication, and we’re going to replicate to this other machine. Its address is 49.121.150.65. I’m going to give it its password. Let’s see, which one is this?

Make sure this is the right password. I’m not sure if it is or not. Wrong password; let’s try again. Next, and finish. Notice that in the background, work is now in progress to set up this replication. Replication is underway, so you can see all the mirrors going: mirror complete, mirror on the way, complete.

Now we’ve basically taken all that data, and if we did have volumes of user data in here, it would all be copied to our secondary machine. If I refresh here, I can now see all of my data repositories.

We can demonstrate how our replication works by simply going to a volume here. In Vol 1, let’s come down and create a snapshot. Oops, we already have snapshots.

If somebody came to me and said, “Hey I’ve got information on Vol 1. I need to recover that and it’s in an NFS share.” I could say, “Okay, let me go ahead and build you a snapclone. Then you could mount that snapclone and grab that data for yourself.”
We already support Microsoft Previous Versions. If this were in a CIFS directory configured for Previous Versions, they would be able to do this all on their own.

However, in this instance, this is someone coming from an NFS background saying, “Hey I need access to this machine.” Notice now, I’ve created a snapclone of this information. Then they would be able to come in and mount that data.
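In ZFS terms, a snapclone is just a writable clone of an existing snapshot, which is why the requester can mount it and pull files out without touching the original volume. A minimal sketch with hypothetical names:

```python
import subprocess

SNAPSHOT = "pool1/vol1@hourly-20160101-1200"  # an existing snapshot, hypothetical name
CLONE = "pool1/vol1_snapclone"                # the writable clone handed to the user

# Create the clone; no data is copied, it shares blocks with the snapshot.
subprocess.run(["zfs", "clone", SNAPSHOT, CLONE], check=True)

# When the user is done, the clone can be destroyed independently:
# subprocess.run(["zfs", "destroy", CLONE], check=True)
```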

Let’s come in over here and make sure that my replication is happening. Oh, I’ve just had a failure on it. Something happened. I’m sorry. I grabbed the wrong snapshot for that.

I would need a full snapshot in order to create a snapclone that gets replicated. But basically, that’s the idea: all of our data is copied from one machine directly over to the other, and every minute we replicate that data.

That’s basically all we have for the demonstration. We’re going to jump back to the slides, where I’ll talk about a couple of use cases. Then we’ll wrap up, get some questions answered, and finish up here.

Let me bring up the slideware one more time. David, can you see my slideware?

David:  Yeah, we can see that, Matt.

Matt: Great. A couple of use cases where SoftNAS and Azure really make sense. I’m going to go through these and talk about the challenge. The challenge would be a company that needs to quickly SaaS-enable a customer-facing application on Azure, but the app doesn’t support blob. They also need AD or LDAP integration for that application.
What would the solution be? One option would be rewriting your application to support blob and AD authentication, but it’s highly unlikely that would ever happen.

What else could you do? Instead of rewriting that application to support blob, continue to do business the way you always have. That machine needs access via NFS? Fine, we’ll just serve it NFS through SoftNAS.

Drop all that data on a Microsoft Azure backend, store it in blob, and let us do the translation. That gives all of our applications, on-premises or in the cloud, direct access to whatever data resources they need, presented over any protocol listed: CIFS, NFS, AFP, or iSCSI.

The next use case is disaster recovery. This is what we did in the demonstration. The challenge: a company needs reliable off-site data protection.

Maybe they have a big EMC array at their location with several years of support left on it. They need to be able to meter its use, but they also need a simple integration solution. What would the solution be?

It would be very easy to spin up a SoftNAS instance on premises, directly access that EMC array, and use it as the data resources for SoftNAS. We can then present those data repositories to their application servers and end users on site, and replicate all that data into Microsoft Azure using SnapReplicate.

We would have our secondary blob storage in Azure, and we’d be replicating all the on-premises data into the cloud.

What’s great about this solution is that it becomes a gateway for when I get to the end of support on that EMC array and say, “We need to go buy a new array, or we need to renew support for this one.”
We’ve got this thing running in Azure already, so why don’t we just cut over? It’s the exact same thing running in Azure. We could simply start directing our application resources to Azure. It’s a great way to get moving into the cloud and get a migration strategy underway.

The last one is hybrid on-premises usage, which I alluded to earlier with the burst-to-cloud scenario. This is a company with performance-sensitive applications that need the local LAN, but they also need off-site protection or extra capacity.

The solution would basically be to set up replication to Azure and use that to expand capacity. Whenever they run out of space on premises, we can burst out into Azure and spin up more and more virtual machines to access that data.

Maybe it’s a web services account with a web portal UI or something like that, needing just a web presence. Then we’re able to run multiple copies of load-balanced web servers, all accessing the same data on top of Microsoft Azure through SoftNAS.

All of these use cases are very possible. They’re all use cases I have seen customers running today.

Last, a SoftNAS overview of where our products land. SoftNAS Cloud is our main offering. It’s available on Azure, AWS, vCloud Air, and CenturyLink Cloud. It’s a public cloud NAS, so any local resources available in that cloud offering are available to SoftNAS, as is any object storage offering throughout the world; we can connect to and access object storage anywhere.

SoftNAS File Gateway is an on-premises NAS. It’s built on a VMware architecture, so it’s basically a SoftNAS VM that has access to your local NAS files as well as local disk storage.

SoftNAS Object Filer is directed at somebody who doesn’t have local data resources but wants to utilize an object resource, either in the cloud or an object device locally. We give them an Object Filer that includes just S3 object access, so they can use object data repositories on that installation.

Last is SoftNAS for Service Providers, which is a multi-tenant NAS solution. It has a REST API so you can integrate billing and tiering into the solution, and it supports iSCSI connections along with object storage, so we can use that type of connection to a multitude of different backend offerings.

Some of our last slides cover technology partners. We’d like to thank all of them: Microsoft, Amazon, VMware. All of these partners help us make our product great. We wouldn’t be here without Microsoft Azure helping us promote our product and move forward with a great solution.

Lastly, here is our brand sheet: names you’ll recognize that are SoftNAS customers today, and we have many hundreds of customers out there that are not listed here.

Here are just some of the customers we work with directly: Netflix, Coca-Cola, Nike, Boeing. We have all sorts of customers from all different verticals using our product in all different ways.

With that, I’m going to give it back to David and take a look at some of the questions. While he finishes up, I’ll go through the questions and then we’ll come back to them.

David: Okay, Matt. Thanks a lot. Again, just a reminder: if you have any questions, please use the questions pane, but I also have a few here that I’ll read out. As for next steps, for most of you this is probably your first time hearing about SoftNAS and our solution.

If you want to learn more, we do have a free 30-day trial version of SoftNAS cloud on Azure that you can try. If you go to softnas.com/azure, you can download that version there and we can help you out.

If you want to learn a bit more, you can go to our website, softnas.com/azure, and if you want to contact us, you can go to the contact page there. If you have any follow-up questions for Matt and the team, reach out there, and also make sure to follow us on Twitter.

Matt, if you want to jump in, there’s one question there, and I have a few more here that I can call out.

Matt: The question is, “For a BDR solution, would you use the Cloud File Gateway on the client side with replication to SoftNAS?” That’s correct: you’d replicate that data from the on-premises File Gateway up into SoftNAS on Microsoft Azure.

David: Another question here, Matt. What version of NFS is supported?
Matt: We support both versions 3 and 4 of NFS. The likely follow-up question is which versions of SMB we support: we support SMB 2 and 3.
David:  What’s the max latency SoftNAS will support for site replication?

Matt: There isn’t really a hard maximum latency for site replication. We’re flexible enough to handle the latency of any reasonable network; there’s no number set in stone saying 200 milliseconds is or isn’t an acceptable range. As long as we have a fairly reliable connection, we can absorb the latency and keep the SoftNAS snap replication going.

David:  Someone has a question here on RAID. What type of RAID is being used under SoftNAS?

Matt: It’s built around RAID, but we don’t tell you what type of RAID you have to use; it depends on your situation. If you’re inside Microsoft Azure and you trust their local disk storage enough that you don’t need to worry about RAID in your solution, or the data isn’t that critical, you can go ahead and use RAID 0 and get the fastest performance out of it.

However, if you’re on premises and don’t have a hardware RAID solution, we give you the ability to use up to RAID 7. If you wanted to use RAID 6 to get really good performance and redundancy at the same time, you’re welcome to do that.
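Since SoftNAS builds on ZFS, those RAID choices map onto pool layouts: RAID 6-style double parity corresponds to raidz2, and triple parity (“RAID 7”) to raidz3. A minimal sketch with hypothetical device and pool names:

```python
import subprocess

# RAID 0-style stripe: fastest, no redundancy (fine when the cloud already protects the disks).
subprocess.run(["zpool", "create", "fastpool", "/dev/xvdf", "/dev/xvdg"], check=True)

# RAID 6-style double parity (raidz2): a good balance of performance and redundancy.
subprocess.run(["zpool", "create", "safepool", "raidz2",
                "/dev/xvdh", "/dev/xvdi", "/dev/xvdj", "/dev/xvdk"], check=True)
```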

David:   I see Travis has another question there on the questions pane. How much would encryption inhibit or prevent deduplication benefits?

Matt:  That’s a tricky question. Deduplication actually happens on the fly, so we’re going to be doing the dedupe inline. Encryption is not going to come into play there. The encryption is going to happen on the actual container itself.

We are going to encrypt the channel itself and then whenever we drop the data in there it’s going to dedupe.

David: A couple more questions here. Is it a good idea to use SoftNAS as a backup target? I think you covered that in one of the use cases, I believe.

Matt: Absolutely, that is one of our biggest use cases. Can you use it as a backup target? I guess I didn’t touch on it as much in the use cases, but I did a previous webinar directly on this subject, where we demonstrated using a Veeam backup solution on a Windows 2012 server with SoftNAS as our target.

It’s a great fit for backup. We’ve used it here locally as the backup target for SoftNAS ourselves. It’s absolutely a perfect solution for that, because we can provide fast access over any protocol your backup software needs.

David: That’s right. You can find that webinar in the webinars archive section on softnas.com if you’re interested in playing it back. Just the last question I have here: does SoftNAS provide performance reports to show hot versus cold data volumes?

Matt: Absolutely. We provide a dashboard that gives you access to all that data, so you can see which data disks are getting hit the hardest and where data is just sitting asleep, basically never touched. The dashboard reports all of that, and we can export it via an SMTP server as well, so you can integrate it with SMTP or SNMP and tools like WhatsUp Gold or a similar product.

David: I think that’s all the questions we have. If you have any further questions, as I mentioned, there are a few places where you can contact SoftNAS. If you want to reach out to learn more or download a demo, please do; I’m sure Matt and the team will be involved in that POC.

Matt, anything to add at the end? Any common things you see, or things people should look out for, or have we covered most of the areas?

Matt: No, I think I got everything covered. But yes, if you do have any questions, please do not hesitate to contact us. My email address is mblanchard@softnas.com. I am more than willing to answer any questions you have about SoftNAS and assist you all in doing a free trial and setting this up and getting it running.

David: Thanks, Matt. Thanks to everyone for attending. As you leave today, there will be a short survey. So if you can provide some feedback there, that also gives us an indication of any topics you’d like for future webinars.

As I mentioned at the start, recording and slides will be sent out very shortly. I hope to see you at the next webinar and have a good day. Thank you.