How to Maintain Control of Your Core in the Cloud

For the past seven years, Buurst's SoftNAS has helped customers in 35 countries successfully migrate thousands of applications and petabytes of data from on-premises data centers into the cloud. Over that time, we've witnessed a major shift in the types of applications and the organizations involved.

The move to the cloud started with simpler, low-risk apps and data until companies became comfortable with cloud technologies. Today, we see companies deploying their core business and mission-critical applications to the cloud, along with everything else, as they evacuate their data centers and colocation facilities at a breakneck pace. At the same time, the mix of players has shifted from early adopters and DevOps to a blend that includes mainstream IT.

The major cloud platforms make it increasingly easy to leverage cloud services, whether building a new app, modernizing apps, or migrating and rehosting the thousands of existing apps large enterprises run today.

Whatever cloud app strategy is taken, one of the critical business and management decisions is where to maintain control of the infrastructure and where to turn control over to the platform vendor or service provider to handle everything, effectively outsourcing those components, apps, and data.

Migrating to the cloud

So how can we approach these critical decisions to either maintain control or outsource to others when migrating to the cloud? This question is especially important to carefully consider as we move our most precious, strategic, and critical data and application business assets into the cloud.

One approach is to start by determining whether the business, applications, data and underlying infrastructure are “core” vs. “context”, a distinction popularized by Geoffrey Moore in Dealing with Darwin.

He describes Core and Context as a distinction that separates the few activities that a company does that create true differentiation from the customer viewpoint (CORE) from everything else that a company needs to do to stay in business (CONTEXT).

Core elements of a business are the strategic areas and assets that create differentiation and drive value and revenue growth, including innovation initiatives.

Context refers to the necessary aspects of the business that are required to “keep the lights on”, operate smoothly, meet regulatory and security requirements and run the business day-to-day; e.g., email should be outsourced unless you are in the email hosting business (in which case it’s core).

It’s important to maintain direct control of the core elements of the business, focusing employees and our best and brightest talents on these areas. In the cloud, core elements include innovation, business-critical, and revenue-generating applications and data, which remain central to the company’s future.

Certain applications and data that comprise business context can and should be outsourced to others to manage. These areas remain important as the business cannot operate without them, but they do not warrant our employees’ constant attention and time in comparison to the core areas.

The core demands the highest performance levels to ensure applications run fast and keep customers and users happy. It also requires the ability to maintain SLAs around high availability and recovery time and recovery point objectives (RTO/RPO) to meet contractual obligations. Core demands the flexibility and agility to quickly and readily adapt as new business demands, opportunities, and competitive threats emerge.

Many of these same characteristics matter for business-context areas as well, but they are less critical there, because context can simply be moved from one outsourced vendor to another as needed.

Increasingly, we see business-critical, core applications and data migrating into the cloud. These customers demand control of their core business apps and data in the cloud, as they did on-premises. They are accustomed to managing key infrastructure components, like the network-attached storage (NAS) that hosts the company’s core data assets and powers the core applications. We see customers choose a dedicated Cloud NAS Filer that keeps them in control of their core in the cloud.

Example core apps include revenue-generating e-discovery, healthcare imaging, 3D seismic oil and gas exploration, financial planning, loan processing, video rental, and more. The most common theme we see across these apps is that they drive core subscription-based SaaS business revenue. Increasingly, we see both file and database data being migrated and hosted atop the Cloud NAS, especially SQL Server.

For these core business use cases, maintaining control over the data and the cloud storage is required to meet performance and availability SLAs, security and regulatory requirements, and to achieve the flexibility and agility to quickly adapt and grow revenues. The dedicated Cloud NAS meets the core business requirements in the cloud, as it has done on-premises for years.

We also see less critical business-context data being outsourced and stored in cloud file services such as Azure Files and AWS EFS. In other cases, the Cloud NAS's ability to handle both core and context use cases is appealing. For example, leveraging both SSD for performance and object storage for lower-cost bulk storage and archival, with unified NFS and CIFS/SMB, makes the Cloud NAS more attractive in certain cases.

There are certainly other factors to consider when choosing where to draw the lines between control and outsourcing applications, infrastructure, and data in the cloud.

Ultimately, understanding which applications and data are core vs context to the business can help architects and management frame the choices for each use case and business situation, applying the right set of tools for each of these jobs to be done in the cloud.


Choosing the Right Instance Type and Instance Size for AWS and Azure

In this post, we’re sharing an easy way to determine the best instance type and appropriate instance size to use for specific use cases in the AWS and Azure cloud. To help you decide, there are some considerations to keep in mind. Let’s go through each of these determining factors in depth.

What use case are you trying to address?

  • A. Migration to the Cloud

    • Migrating existing applications into the cloud should not be complex, expensive, time-consuming, or resource-intensive, nor should it force you to rewrite your application to run in the public cloud. If your existing applications access storage using the CIFS/SMB, NFS, AFP, or iSCSI storage protocols, then you will need to choose a NAS filer solution that allows your applications to access cloud storage (block or object) using the same protocols they already use.
  • B. SaaS-Enabled Applications

    • For revenue-generating SaaS applications, high performance, maximum uptime, and strong security with access control are critical business requirements. Fulfilling these requirements while running your SaaS apps in a multi-tenant public cloud environment can be challenging. An enterprise-grade cloud NAS filer may help you cope with these challenges, even in a public cloud environment. A good NAS solution provider will assure high availability with no downtime, high levels of performance, and strong security with integration into industry-standard access control, and will make it easier to SaaS-enable apps.
  • C. File Server Consolidation

    • Over time, end users and business applications create more and more data – usually unstructured – and rather than waiting for the IT department, these users install file servers wherever they can find room to put them, close to their locations. At the same time, businesses either get acquired or acquire other companies, inheriting all their file servers in the process. Ultimately, it's the IT department that must manage this "server sprawl" when dealing with OS and software patches, hardware upkeep and maintenance, and security. With limited IT staff and resources, the task becomes impossible. The best long-term solution is, of course, using the cloud and a NAS filer to migrate files to the cloud. This strategy allows for scalable storage that users access the same way they have always accessed their files on local file servers.
  • D. Legacy NAS Replacement

    • With a limited IT staff and budget, it’s impractical to keep investing in legacy NAS systems and purchase more and more storage to keep pace with the rapid growth of data. Instead, investment in enterprise-grade cloud NAS can help businesses avoid burdening their IT staff with maintenance, support, and upkeep, and pass those responsibilities on to a cloud platform provider. Businesses also gain the advantages of dynamic storage scalability to keep pace with data growth, and the flexibility to map performance and cost to their specific needs.
  • E. Backup/DR/Archive in the Cloud

    • Use tools to replicate and back up your data from your VMware data center to the AWS and Azure public clouds. Eliminate physical backup tapes by archiving data in inexpensive S3 storage or in cold storage like AWS Glacier for long-term retention. For stringent Recovery Point Objectives, cloud NAS can also serve as an on-premises backup or primary storage target for local area network (LAN) connected backups. As a business' data grows, the backup window can become unmanageable and tie up precious network resources during business hours. Cloud NAS with local disk-based caching reduces the backup window by streaming data in the background for better WAN optimization.

What Cloud Platform do you want to use?

No matter which cloud provider is selected, there are some basic infrastructure details to keep in mind; a small sketch that maps these recommendations to instance choices follows the list below. The basic requirements are:

  • Number of CPUs
  • Size of RAM
  • Network performance
  • A. Amazon Web Services (AWS)

    • Standard: r5.xlarge is a good starting point regarding memory and CPU resources. This category is suited to handle processing and caching with minimal requirements for network bandwidth. Composition: 4 vCPU, 32 GiB RAM, up to 10GbE network.

    • Medium: r5.2xlarge is a good choice for read-intensive workloads, which benefit from the larger memory-based read cache in this category. The additional CPU will also provide better performance when deduplication, encryption, compression, and/or RAID are enabled. Composition: 8 vCPU, 64 GiB RAM, up to 10GbE network.

    • High-end: r5.24xlarge can be used for workloads that transfer large amounts of data and therefore require a very high-speed network connection. In addition to the very high-speed network, this instance level provides far more storage, CPU, and memory capacity. Composition: 96 vCPU, 768 GiB RAM, 25GbE network.
  • B. Microsoft Azure

    • Dsv3-series supports premium storage and is the latest, hyper-threaded general-purpose generation running on both the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) and the 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell) processor. With the Intel Turbo Boost Technology 2.0, the Dsv3 can achieve up to 3.5 gigahertz (GHz). The Dsv3-series sizes offer a combination of vCPU, memory, and temporary storage best suited for most production workloads.
    • Standard: D4s v3 – 4 vCPU, 16 GiB RAM with moderate network bandwidth
    • Medium: D8s v3 – 8 vCPU, 32 GiB RAM with high network bandwidth
    • High-end: D64s v3 – 64 vCPU, 256 GiB RAM with extremely high network bandwidth
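
To make the guidance above easy to apply, here is a small Python sketch that encodes the tier-to-instance recommendations as a lookup table. The tier names and the helper function are our own illustration, not a SoftNAS tool; adjust the table to your own benchmarks and current cloud offerings.

```python
# A small lookup helper that encodes the tier-to-instance guidance above.
# The tier names ("standard", "medium", "high_end") are our own labels.

SIZING_GUIDE = {
    "aws": {
        "standard": {"instance": "r5.xlarge",   "vcpu": 4,  "ram_gib": 32,  "network": "up to 10 GbE"},
        "medium":   {"instance": "r5.2xlarge",  "vcpu": 8,  "ram_gib": 64,  "network": "up to 10 GbE"},
        "high_end": {"instance": "r5.24xlarge", "vcpu": 96, "ram_gib": 768, "network": "25 GbE"},
    },
    "azure": {
        "standard": {"instance": "Standard_D4s_v3",  "vcpu": 4,  "ram_gib": 16,  "network": "moderate"},
        "medium":   {"instance": "Standard_D8s_v3",  "vcpu": 8,  "ram_gib": 32,  "network": "high"},
        "high_end": {"instance": "Standard_D64s_v3", "vcpu": 64, "ram_gib": 256, "network": "extremely high"},
    },
}

def recommend_instance(platform: str, tier: str) -> dict:
    """Return the suggested instance spec for a platform ("aws" or "azure") and workload tier."""
    return SIZING_GUIDE[platform.lower()][tier.lower()]

print(recommend_instance("aws", "medium"))
# {'instance': 'r5.2xlarge', 'vcpu': 8, 'ram_gib': 64, 'network': 'up to 10 GbE'}
```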

What type of storage is needed?

Both AWS and Azure offer block as well as object storage. Block storage is normally used with file systems while object storage addresses the need to store “unstructured” data like music, images, video, backup files, database dumps, and log files. Selecting the right type of storage to use also influences how well an AWS Instance or Azure VM will perform.
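
To make the access-model difference concrete, here is a minimal Python illustration: block storage is consumed as a mounted filesystem through ordinary file I/O, while object storage is reached over an HTTP API. The boto3 call shown is standard; the path, bucket, and key are hypothetical placeholders.

```python
# Block storage is consumed through a filesystem mounted on the instance,
# while object storage is fetched over an HTTP API by bucket and key.

import boto3

def read_from_block_storage(path: str) -> bytes:
    # An EBS volume or Azure managed disk is mounted and formatted, so
    # applications just use ordinary POSIX file I/O.
    with open(path, "rb") as f:
        return f.read()

def read_from_object_storage(bucket: str, key: str) -> bytes:
    # An S3 object is fetched through the S3 API rather than a filesystem.
    s3 = boto3.client("s3")
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()

# Hypothetical usage (path, bucket, and key are placeholders):
# data = read_from_block_storage("/data/reports/q1.csv")
# data = read_from_object_storage("my-bucket", "reports/q1.csv")
```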

Other resources:

About AWS S3 object storage

About AWS EBS block storage

Types of Azure storage

Need a “No Storage Downtime Guarantee”?

No matter which cloud platform is used, look for a NAS filer that offers high availability (HA). A robust set of HA capabilities protects against data center, availability zone, server, network, and storage subsystem failures to keep the business running without downtime. HA monitors all critical storage components, ensuring they remain operational. In case of an unrecoverable failure in a system component, another storage controller detects the problem and automatically takes over, ensuring no downtime or business impact occurs. Companies are thus protected from lost revenue when access to their data resources and critical business applications would otherwise be disrupted.
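
As a rough illustration of the monitoring-and-takeover pattern described above, here is a minimal Python sketch of a standby controller polling a health endpoint and triggering failover after repeated failures. This is not SoftNAS's HA implementation; the endpoint URL, thresholds, and takeover action are hypothetical placeholders, and real HA also involves fencing, replication checks, and virtual IP or route failover.

```python
# A minimal, hypothetical sketch of the control loop behind HA takeover.

import time
import urllib.request

HEALTH_URL = "http://active-nas.example.internal:8080/health"  # placeholder endpoint
FAILURES_BEFORE_TAKEOVER = 3
CHECK_INTERVAL_SECONDS = 10

def node_is_healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, etc.
        return False

def take_over() -> None:
    # In a real deployment this would move the virtual IP or route-table entry
    # to the standby controller and promote its replicated storage.
    print("Takeover triggered: promoting standby controller")

def monitor() -> None:
    failures = 0
    while True:
        failures = 0 if node_is_healthy(HEALTH_URL) else failures + 1
        if failures >= FAILURES_BEFORE_TAKEOVER:
            take_over()
            return
        time.sleep(CHECK_INTERVAL_SECONDS)
```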

If you’re looking for more personal guidance or have any technical questions, get in touch with our team of cloud experts who have done thousands of VPC and VNet configurations across AWS and Azure. Even if you are not considering SoftNAS, our free professional services reps will be happy to answer your questions.

The Secret to Maximizing Returns on Auto-Optimized Tiered Storage

Intelligent tiering in storage can save money, but you can save up to an additional 75% if you opt for dedupe and data compression first.

Businesses deal with large volumes of data every day and continue to add to this data at a rate that's often difficult to keep up with. Data management is a continuous challenge, and data storage is an ever-growing expense. While the cost of storage per GB may not look significant, it adds up quickly over days and months and becomes a significant chunk of ongoing expenses.

More often than not, it is not feasible to delete or erase old data. Data must be stored for various reasons such as legal compliance, building databases, machine learning, or simply because it may be needed later. But a large portion of data often goes untouched for months at a time, with no need for access, yet continues to rack up the storage disk bills.

Cutting costs with Automated Tiered Storage

As a solution to this problem faced by most business owners, many storage providers and NAS filers offer auto tiering for storage. With automated tiering, data is stored across various levels of disks to save on storage costs. This tiering means data that’s accessed less frequently is stored on disks with lower performance and higher latency—disks that are much cheaper, usually 50-60% cheaper than high performance disks.

It is often difficult to identify and isolate data that is less likely to be accessed, so policies are set in place to identify and shift data automatically. For instance, you may set the threshold at 6 weeks. Then, once 6 weeks have passed without accessing a certain block of data, that block is automatically moved down to a lower tier, where it is not as expensive to continue to store it.
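
As a simple illustration of such an age-based policy, the Python sketch below flags any block whose last access is older than the configured threshold as a candidate for the lower-cost tier. It is a conceptual example, not how any particular filer implements tiering internally.

```python
# A conceptual sketch of the age-based policy: any block not accessed within
# the threshold window becomes a candidate for the lower-cost tier.

from datetime import datetime, timedelta

TIER_DOWN_AFTER = timedelta(weeks=6)  # the 6-week threshold from the example above

def blocks_to_tier_down(last_access_by_block, now=None):
    """Return IDs of blocks whose last access is older than the threshold."""
    now = now or datetime.utcnow()
    return [block_id
            for block_id, last_access in last_access_by_block.items()
            if now - last_access > TIER_DOWN_AFTER]

# Example: a recently used block stays on the hot tier, a stale one moves down.
blocks = {
    "block-0001": datetime.utcnow() - timedelta(days=3),
    "block-0002": datetime.utcnow() - timedelta(weeks=10),
}
print(blocks_to_tier_down(blocks))  # ['block-0002']
```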

So intelligent tiering of data helps reduce storage costs significantly without really impacting day-to-day operations.

The Big Guns: Dedupe and Data Compression

While tiered storage helps you save on storage costs by optimizing the storage location, the expense is still proportional to the amount of data. After all, you pay for storage per GB. But before you even store data, ask yourself: is this data streamlined? Am I saving data that could be pared down?

Deduplication

Unnecessarily bulky data is more common than you'd expect. Every time an old file or project is pulled out of storage for updates, to ramp up, or to make any changes, a new file is saved. So even if changes are made to only 1 or 2 MB of data, a new copy of the entire 4 TB file is made and saved. Now imagine this being done with several files each day. With this replication happening over and over again, the total amount of data quickly multiplies, occupying more storage, spiking storage costs, and even affecting IOPS. This is where inline deduplication helps.

With inline deduplication, files are compared block by block for redundancies, which are then eliminated. Instead, a reference count of the copies is saved. In most cases, data is reduced by 20-30% by making inline dedupe a part of the storage efficiency process delivered by a NAS filer.
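
The core idea is easy to sketch: split the data into fixed-size blocks, hash each block, and keep only one copy per unique hash along with a reference count. The short Python example below illustrates this; production inline dedupe (for example in ZFS) is considerably more sophisticated.

```python
# A minimal sketch of block-level dedupe: hash fixed-size blocks, store one
# copy per unique hash, and keep a reference count for the rest.

import hashlib
from collections import Counter

BLOCK_SIZE = 4096  # 4 KiB blocks

def dedupe(data: bytes):
    store = {}             # hash -> unique block contents
    refcounts = Counter()  # hash -> number of references to that block
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        refcounts[digest] += 1
    return store, refcounts

# A file with heavily repeated content dedupes down to a handful of unique blocks.
data = b"A" * BLOCK_SIZE * 100 + b"B" * BLOCK_SIZE * 50
store, refcounts = dedupe(data)
print(len(data) // BLOCK_SIZE, "blocks in,", len(store), "unique blocks stored")
# 150 blocks in, 2 unique blocks stored
```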

Data compression

Reducing the number of bits needed to represent the data through compression is a simple process and can be highly effective – data can be reduced by 50-75%. The extent of compression depends on the nature of the data and how compressed it is at the outset. For instance, an mp4 file is already a highly compressed format. But, in our experience, data usually offers good opportunities for reduction through compression.

By compressing data, the amount of storage space needed is reduced, and the costs associated with storage come down too.
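
A quick way to estimate how much a given data set will benefit is to compress a sample and compare sizes. The sketch below uses Python's zlib as a stand-in for the filer's inline compression; the sample data is made up for illustration.

```python
# Gauge how compressible a sample of your data is, using zlib as a stand-in
# for the filer's inline compression.

import os
import zlib

def compression_ratio(data: bytes, level: int = 6) -> float:
    return len(zlib.compress(data, level)) / len(data)

log_like = b"2024-01-01,web-01,200,OK\n" * 50_000  # repetitive text: compresses well
already_compressed = os.urandom(1_000_000)         # random bytes: barely compresses

print(f"log-like data:       {compression_ratio(log_like):.0%} of original size")
print(f"incompressible data: {compression_ratio(already_compressed):.0%} of original size")
```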

When we combine the effects of deduplication and compression, we find that customers reap savings of up to 90%! If this new streamlined data is then stored using automated tiering, the savings are amplified because:

1. The amount of data to be stored is reduced, thus saving on storage capacity needed across ‘hot’ and ‘cool’ tiers
2. Input/Output is reduced, leading to better performance

Data deduplication and compression explained in a use case

Let's say we have 1 TB of actual data to store. On average, cloud SSD storage costs $0.10 per GB/month, or $100 per TB/month.

If the data can be reduced by 80% using deduplication and compression (which is likely), the effective cost is just 20% of the original projection, or $20 per TB/month. Now add in auto-tiering, which cuts the cost roughly in half again by using a combination of SSD and lower-cost HDD, and you have a cost basis of about $10 per TB/month.

If your storage needs grow to 10 TB over time, you will pay about $100 per month – the amount you would have been paying for basic file storage of just 1 TB of data before dedupe and data compression.

The net effect of these combined storage efficiency capabilities delivered by SoftNAS, for example, is to reduce the effective cost from $0.10 per GB/month to around $0.02 per GB/month by combining tiering, compression, and deduplication – without sacrificing performance. With the rapidly increasing amount of data that must be managed, who doesn't want to cut their cloud storage bill by 80%?
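
The arithmetic behind the example above can be written out directly. The rates in this sketch are the post's illustrative figures (roughly 80% reduction from dedupe plus compression, with tiering halving the remaining cost); substitute your own measured reduction ratios and cloud prices.

```python
# The worked example above, spelled out with the post's illustrative rates.

SSD_PER_GB_MONTH = 0.10   # $/GB-month for cloud SSD storage
DATA_REDUCTION = 0.80     # dedupe + compression (80% reduction)
TIERING_FACTOR = 0.50     # blended SSD/HDD tiering roughly halves the rest

def monthly_cost(logical_tb: float) -> float:
    stored_gb = logical_tb * 1000 * (1 - DATA_REDUCTION)
    return stored_gb * SSD_PER_GB_MONTH * TIERING_FACTOR

print(f"1 TB:  ${monthly_cost(1):.2f}/month")   # $10.00
print(f"10 TB: ${monthly_cost(10):.2f}/month")  # $100.00
```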

Auto-Optimized Tiered Storage with SoftNAS

Buurst SoftNAS offers SmartTiers, auto-tiered storage with the added advantage of flexible operations. After deduplication and compression, data is stored in tiers – tiering that you can configure and control according to policies that suit your usage patterns, and optimize further as your usage evolves. Our goal is to achieve the price/performance equation that suits your business, so even when data stored in a low-cost tier is accessed, only the particular block accessed migrates up to the hot tier, not the entire file. With a user-friendly interface, you can continue to manage the policies, thresholds, and tier capacities, and the rest of the cost-saving optimizations happen automatically and transparently.

Want to know what kind of savings you can achieve with SoftNAS SmartTiers? Try our storage cost-savings calculator.

Cloudberry Backup – Affordable & Recommended Cloud Backup Service on Azure & AWS

Let me tell you about a CIO I knew from my days as a consultant. He was even-keeled most of the time and could handle just about anything that was thrown at him. There was just the one time I saw him lose control – when someone told him the data backups had failed the previous night. He got so flustered, it was as if his career was flashing before him, and the ship might sink and there were no remaining lifeboats.

If You Don’t Backup Data, What Are the Risks?

Backing up one’s data is a problem as old as computing itself. We’ve all experienced data loss at some point, along with the pain, time, and costs associated with recovering from the impacts caused in our personal or business lives. We get a backup to avoid these problems, insurance you hope you never have to use, but, as Murphy’s Law goes, if anything can go wrong, it will.

Data storage systems include various forms of redundancy and the cloud is no exception. Though there are multiple levels of data protection within cloud block and object storage subsystems, no amount of protection can cure all potential ills. Sometimes, the only cure is to recover from a backup copy that has not been corrupted, deleted, or otherwise tampered with.

SoftNAS provides additional layers of data protection atop cloud block and object storage, including storage snapshots, checksum data integrity verification on each data block, block replication to other nodes for high availability, and file replication to a disaster recovery node. But even storage snapshots rely upon the underlying cloud block and object storage, which can and does fail occasionally.

These cloud-native storage systems tout anywhere from 99.9% up to 11 nines of data durability. What does this really mean? It means there’s a non-zero probability that your data could be lost – it’s never 100%. So, when things do go wrong, you’d do best to have at least one viable backup copy. Otherwise, in addition to recovering from the data loss event, you risk losing your job too.

Why Companies Must Have a Data Backup

Let me illustrate this through an in-house experience.

In 2013, when SoftNAS was a fledgling startup, we had to make every dollar count and it was hard to justify paying for backup software or the storage it requires.

Back then, we ran QuickBooks for accounting. We also had a build server running Jenkins (still do), domain controllers, and many other development and test VMs running atop of VMware in our R&D lab. However, it was going to cost about $10,000 to license Veeam’s backup software and it just wasn’t a high enough priority to allocate the funds, so we skimped on our backups. Then, over one weekend, we upgraded our VSAN cluster.

Unfortunately, something went awry and we lost the entire VSAN cluster along with all our VMs and data. In addition, our makeshift backup strategy had not been working as expected and we hadn’t been paying close attention to it, so, in effect, we had no backup.

I describe the way we felt at the time as the “downtime tunnel”. It’s when your vision narrows and all you can see is the hole that you’re trying to dig yourself out of, and you’re overcome by the dread of having to give hourly updates to your boss, and their boss. It’s not a position you want to be in.

This is how we scrambled out of that hole. Fortunately, our accountant had a copy of the QuickBooks file, albeit one that was about 5 months old. And thankfully we still had an old-fashioned hardware-based Windows domain controller. So we didn’t lose our Windows domain. We had to painstakingly recreate our entire lab environment, along with rebuilding a new QuickBooks file by entering all the past 5 months of transactions and recreating our Jenkins build server. After many weeks of painstaking recovery, we managed to put Humpty Dumpty back together again.

Lessons from Our Data Loss

We learned the hard way that proper data backups are much less expensive than the alternatives. The week after the data loss occurred, I placed the order for Veeam Backup and Recovery. Our R&D lab has been fully backed up since that day. Our Jenkins build server is now also versioned and safely tucked away in a Git repository so it’s quickly recoverable.

Of course, since then we have also outsourced accounting and no longer require QuickBooks, but with a significantly larger R&D operation now we simply cannot afford another such event with no backups ever again. The backup software is the best $10K we’ve ever invested in our R&D lab. The value of this protection outstrips the cost of data loss any day.

Cloud Backup as a Service

Fortunately, there are some great options available today to back up your data to the cloud, too. And they cost less to acquire and operate than you may realize. For example, SoftNAS has tested and certified the CloudBerry Backup product for use with SoftNAS. CloudBerry Backup (CBB) is a cloud backup solution available for both Linux and Windows. We tested the CloudBerry Backup for Linux, Ultimate Edition, which installs and runs directly on SoftNAS. It can run on any SoftNAS Linux-based virtual appliance, atop AWS, Azure, and VMware. We have customers who prefer to run CBB on Windows and perform the backups over a CIFS share. Did I forget to mention this cloud backup solution is affordable at just $150, and not $10K?

CBB performs full and incremental file backups from the SoftNAS ZFS filesystems and stores the data in low-cost, highly-durable object storage – S3 on AWS, and Azure blobs on Azure.

CBB supports a broad range of backup repositories, so you can choose to back up to one or more targets, within the same cloud or across different clouds as needed for additional redundancy. It is even possible to back up your SoftNAS pool data deployed in Azure to AWS, and vice versa. Note that we generally recommend creating a VPC-to-S3 or VNET-to-Blob service endpoint in your respective public cloud architecture to optimize network storage traffic and speed up backup timeframes.

To reduce the costs of backup storage even further, you can define lifecycle policies within the Cloudberry UI that move the backups from object storage into archive storage. For example, on AWS, the initial backup is stored on S3, then a lifecycle policy (managed right in CBB) kicks in and moves the data out of S3 and into Glacier archive storage. This reduces the backup data costs to around $4/TB (or less in volume) per month. You can optionally add a Glacier Deep Archive policy and reduce storage costs even further down to $1 per TB/month. There is also an option to use AWS S3 Infrequent Access Storage.
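
CloudBerry manages these transitions from its own UI, but the same tiering can also be expressed directly as an S3 lifecycle rule. Below is a hedged sketch using boto3; the bucket name, prefix, and day counts are placeholders, and your retention requirements may dictate different values.

```python
# A sketch of the same archival tiering expressed directly as an S3 lifecycle
# rule with boto3 (bucket name, prefix, and day counts are placeholders).

import boto3

def archive_backups_lifecycle(bucket_name: str) -> None:
    """Transition objects under backups/ to Glacier at 30 days and Deep Archive at 180 days."""
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket_name,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-backups",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "backups/"},
                    "Transitions": [
                        {"Days": 30, "StorageClass": "GLACIER"},
                        {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
                    ],
                }
            ]
        },
    )

# archive_backups_lifecycle("my-backup-bucket")  # placeholder bucket name
```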

There are similar capabilities available on Microsoft Azure that can be used to drive your data backup costs down to affordable levels. Bear in mind the current version of Cloudberry for Linux has no native Azure Blob lifecycle management integration. Those functions need to be performed via the Azure Portal.

Personally, I prefer to keep the latest version in S3 or Azure hot blob storage for immediate access and faster recovery, along with several archived copies for posterity. In some industries, you may have regulatory or contractual obligations to keep archive data much longer than with a typical archival policy.

Today, we also use CBB to back up our R&D lab’s Veeam backup repositories into the cloud as an additional DR measure. We use CBB for this because there are no excessive I/O costs when backing up into the cloud (Veeam performs a lot of synthetic merge and other I/O, which drives up I/O costs based on our testing).

In my book, there's no excuse for not having file-level backups of every piece of important business data, given the costs and business impacts of the alternatives: downtime, lost time, overtime, stressful calls with the bosses, lost productivity, lost revenue, lost customers, brand and reputation damage, and sometimes lost jobs and lost promotion opportunities. It's just too painful to consider what having no backup can devolve into.

To summarize, there are 5 levels of data protection available to secure your SoftNAS deployment:

1. ZFS scheduled snapshots – "point-in-time" metadata recovery points on a per-volume basis
2. EBS / VM snapshots – snapshots of the block disks used in your SoftNAS pool
3. HA replicas – block-replicated mirror copies updated once per minute
4. DR replica – a file replica kept in a different region, in case something catastrophic happens in your primary cloud datacenter
5. File system backups – CloudBerry or equivalent file-level backups to Azure Blob or S3

So, whether you choose to use CloudBerry Backup, Veeam®, native Cloud backup (ex. Azure Backup) or other vendor backup solutions, do yourself a big favor. Use *something* to ensure your data is always fully backed up, at the file level, and always recoverable no matter what shenanigans Murphy comes up with next. Trust me, you’ll be glad you did!

Disclaimer:

SoftNAS is not affiliated in any way with CloudBerry Lab. As a CloudBerry customer, we trust our business data to CloudBerry. We also trust our VMware Lab and its data to Veeam. As a cloud NAS vendor, we have tested with and certified CloudBerry Backup as compatible with SoftNAS products. Your mileage may vary.

This post is authored by Rick Braddy, co-founder and CTO at Buurst SoftNAS. Rick has over 40 years of IT industry experience and contributed directly to the formation of the cloud NAS market.

Get your On-Premises NAS in the Azure Cloud

 “Get your On-Premises NAS in the Azure Cloud”. Download the full slide deck on Slideshare

Looking to transition your enterprise applications to the highly-available Azure cloud? No time/budget to re-write your applications to move them to Azure? SoftNAS Cloud NAS extends Azure Blob storage with enterprise-class NAS file services, making it easy to move to Azure. In this post, we will discover how you can quickly and easily:

– Configure Windows servers on Azure with full Active Directory control
– Enable enterprise-class NAS storage in the Azure cloud
– Provide disaster recovery and data replication to the Azure cloud
– Provide highly available shared storage

Get your On-Premises NAS in the Azure Cloud

My name is Matt Blanchard. I am a principal solutions architect. We're going to talk about some of the advantages of using Microsoft Azure for your cloud storage, and help you make plans to move from your on-premises solution today into the cloud of tomorrow.

This is not a new concept; it's a trend we've seen for the last several years. The build-versus-buy question is about where we get a great economy of scale: whenever we buy assets or engage an OpEx partner, we are able to use that partnership to advance our IT needs, versus a low economy of scale if I have to invest my own money to build up the information systems and buy large SANs, storage networks, and so forth. Hosting and building all of that out myself takes a lot of capital investment. This is the paradigm.

On-premise vs the cloud architecture.

A lot of the things that we have to provide for ourselves on-premises are assumed and given to us in the cloud, such as Microsoft Azure giving us the ability to have full-fledged VMs running inside of our Azure environment and accessing our SoftNAS virtual SAN. We are able to give you network access control for all your storage needs within a small, usable package.

On-premise vs in the cloud

I don't have to build my own data center. I can have all my applications running as services in the cloud, versus having them on-premises running physically and having to maintain them physically along with their datasets.

Think about rebuilding applications for the next generation of databases, or installing the next generation of server componentry that may not have the correct driver sets for our applications, and having to rebuild all those things. It makes it quite tedious to move forward with your architecture.

However, when we start to blur those lines and move into, let's say, a hosting provider or cloud services, those dependencies on the actual hardware devices and the physical device drivers start to fade away, because we're running these applications as services and not as physically supported, sideloaded architectures.

This movement towards Azure in the cloud makes quite a bit of sense whenever you start looking at the economies of scale, how fast we could grow in capacity, and things like bursting control whenever we have large amounts of data services that we’re going to have to supply on-demand versus things that we have on a constant day-to-day basis.

Say we are a big software company or a big game company that's releasing the next new Star Wars game (I'll have to trademark that in my conversation). It might be some sort of online game that needs extra capacity for the first weekend out, just to support all the new users who are going to be accessing it.

This burst ability and this expandability into the cloud make all the sense in the world, because who wants to spend money on hardware to build out infrastructure for something that may or may not continue to be that large of an investment in the future? We can scale it down or up over time, either way. Maybe we undersized our build. You can think of it in that aspect.

It really makes sense, this paradigm shift into the cloud mantra.

Flexible, Adaptable Architecture

At Buurst, we've built our architecture to be flexible and adaptable inside of this cloud architecture. We've built a Linux virtual machine based on CentOS, running ZFS as the file system on that kernel. We run all of our systems on open, controllable systems. We have staff who contribute to these open-source projects, CentOS and ZFS, to make these systems better. We contribute a lot of intellectual property to help advance these technologies into the future.

We, of course, run HTML5 for our admin UI, we have PHP, and Apache is our web server. We have all these open systems so we can take advantage of the great open-source community out there on the internet.

We integrate with multiple different service providers. If you have customers that are currently running in AWS or CenturyLink Cloud and they are looking to migrate into Azure – to make a change – it's very easy for us to come in and help with that data migration, because inserting a SoftNAS instance into both of those service providers and then simply migrating the data is a simple and easy task.

We really do take feedback to heart. We want to be flexible. We want to be open. We want all of our data resources to support multiple use cases. We offer a full-featured NAS service that does all of these things in the data services tab.

We can do block replication, inline deduplication, caching, storage pools, thin provisioning, writable snapshots, and snap clones. We can do compression and encryption. With all of these different offerings, we are able to give you a single packaged NAS solution.

Once again, all the things that you think you'll have to come back and implement – "I'm going to have to implement all of that stuff. I'm going to have to buy all these different components and insert them into my hardware" – are things that are assumed and included, and we are able to give them to you directly in our NAS solution.

How does SoftNAS work?

To be very forthcoming, it's basically a gateway technology. We are able to present storage capacity as a CIFS or SMB access medium for Windows users for some sort of Windows file share, as an NFS share for Linux machines, as an iSCSI block device, or as an Apple Filing Protocol (AFP) share for entire-machine backups.

If you have end users or end devices that need storage repositories over multiple different protocols, we are able to store that data in, say, Azure Blob Storage or even a native Azure storage device. We then translate those protocols into an object protocol, which is not their native language. We don't speak object when we're going through a normal SMB connection, but we also speak native object directly into Azure Blob. We offer the best of both worlds with this solution.

Just the same as with native block devices, we have a native block protocol that we use to talk directly to Azure disks that attach to these machines. We are able to create flexible containers that make data unified and accessible.

SoftNAS Cloud NAS on Azure

What we're basically going to do is present a single IP point of access that all of these file systems will land on. All of our CIFS access, all of our NFS exports, and all of the AFP shares will be enumerated on a single SoftNAS instance and presented to these applications, servers, and end users.

The storage pools are nothing more than conglomerations of disks that have been offered up by the Microsoft Azure platform. Whether it's Microsoft Blob, native disks, or even another type of object device that you've imported, we can support all of those device types and create storage pools of different technologies.

We can attach volumes and LUNs, with shares of different protocols, to those storage pools, which allows us to have multiple connection points to different storage technologies on the backend. We do this as a basic translation, and it's all seamless to the end user or the end device.

NFS/CIFS/iSCSI Storage on Azure Cloud

Here are a couple of use cases where SoftNAS and Azure really make sense. I'm going to go through these and talk about the challenge. The challenge: a company needs to quickly SaaS-enable a customer-facing application on Azure, but the app doesn't support Blob. They also need LDAP or Active Directory integration for that application. What would the solution be? One option would be rewriting the application to support Blob and AD authentication, but it is highly unlikely that would ever happen.

Instead of rewriting that application to support blob, continue to do business the way you always have. That machine needs access via NFS, fine. We’ll just support that via NFS through SoftNAS.

Drop all that data on the Microsoft Azure backend, store it in Blob, and let us do the translation. Access is very simple: all of our applications, on-premises or in the cloud, get direct access to whatever data resources they need, presented over any of the protocols listed – CIFS, NFS, AFP, or iSCSI.

Disaster Recovery on Azure Cloud

Maybe you have a big EMC array at your location with several years of support left on it. You need to be able to keep getting use out of it, but you also need a simple integration solution. What would the solution be?

It would be very easy to spin up a SoftNAS instance on-premises, directly access that EMC array, and utilize its data resources with SoftNAS. We can then re-present those data repositories to the application servers and end users on site, and replicate all that data using SnapReplicate into Microsoft Azure.

We would have our secondary blob storage in Azure and we’d be replicating all that data that’s on-premise into the cloud.

What's great about this solution is that it becomes a gateway: when I get to the end of support on that EMC array and someone says, "We need to go buy a new array or renew support for that array," we've already got this thing running in Azure, so why don't we just cut over? The exact same thing is running in Azure. We could just start directing our application resources to Azure. It's a great way to get you moving into the cloud and get a migration strategy moving forward.

 

Hybrid On-Premises Storage Gateway to Azure Cloud

The last one is hybrid on-premises usage, and I alluded to this one earlier with the burst-to-cloud scenario. This is a company that has performance-sensitive applications that need a local LAN, and they need off-site protection or capacity. The solution would basically be to set up replication to Azure and then expand capacity there. Whenever they run out of space on-premises, we would be able to burst out into Azure and create more and more virtual machines to access that data.

Maybe it's a web services account that has a web portal UI or something like that, which just needs a web presence. Then we're able to run multiple copies of different web servers, load balanced, all accessing the same data on top of Microsoft Azure through the SoftNAS Azure NAS storage solution.

 

How SoftNAS Solved Isilon DR on AWS and Azure Clouds

Isilon disaster recovery (DR) on AWS & Azure Storage

We have customers continually asking for solutions that will allow them to either migrate or continuously replicate from their existing storage systems to AWS and Azure cloud storage. The ask is generally predicated on a cost-savings analysis, receipt of the latest storage refresh bill, or a recent disaster recovery (DR) event. In other cases, customers have many remote sites, offices, or factories where there's simply not enough space to maintain all the data at the edge. The convenience of the public cloud is an obvious answer. We still don't have elegant solutions to all of our cloud problems, but with the release of SoftNAS Platinum, we have at least solved one. Increasingly, we see a lot of Dell/EMC Isilon customers who want to trade hardware ownership for the cloud subscription model and get out of the hardware management business.

I will focus on one such customer and the Isilon system for which they tasked us with providing a cloud-based storage disaster recovery (DR) solution.

Problem Statement

Can the cloud be used as a Disaster Recovery (DR) location for an on-premises storage array?

The customer's Isilon arrays provide scale-out NFS for their on-premises datacenters. These arrays were aging and due for renewal. The company had been interested in public cloud alternatives, but as much as the business leaders were pushing for a more cost-effective and scalable solution, they were also risk-averse. Thus, their IT staff was caught in the "change it but don't change it" paradox. The project lead and his SoftNAS team were asked to provide an Isilon disaster recovery solution that would meet the immediate cost-savings goal while providing the flexibility to scale and modernize their infrastructure in the future, as the company moves more data and workloads into the public cloud.

Assessing the Environment

One of Isilon’s primary advantages is its scale-out architecture, which aggregates large pools of file storage with an ability to add more disk drives and storage racks as needed to scale what are effectively monolithic storage volumes. The Isilon has been a hugely successful workhorse for many years.

Like Isilon, the public cloud provides its own forms of scale-out, "bottomless" storage. The problem is that this native cloud storage is not designed to be NFS or file-based; instead, it is block- and object-based storage. The cloud providers are aware of this deficit, and they do offer some basic NFS file storage services, which tend to be far too expensive (e.g., $3,600 per TB per year). These cloud file solutions also tend to deliver unpredictable performance due to their shared storage architectures and the multi-tenant access overload conditions that plague them.

How to Replace Isilon DR Storage on AWS and Microsoft Azure Cloud?

Often, Isilon replacement projects begin with the disaster recovery (DR) datacenter because it is the easiest component to justify and move, and thus poses the least risk for risk-averse IT shops that want to prove out their Isilon replacement with public cloud storage and provide a safe place for employees to gain improved cloud skills.

Companies are either tired of paying millions of dollars per year for traditional DR datacenters and associated standby equipment – that rarely if ever have been used – or they do not have a DR facility and are still in need of an affordable DR solution with a preference for leveraging the public cloud instead of financing yet another private datacenter.

SoftNAS solves this common customer need by leveraging the cloud provider’s scale-out block and object storage, providing enhanced data protection and tools to help in the migration and continuous sync of data.

SoftNAS leverages the cloud providers’ block and object storage

Isilon Disaster Recovery (DR) Solution Requirements

  • An active cloud provider subscription with the ability to provision from the marketplace
  • 1 x SoftNAS Platinum v4.x Azure VM (recommend a VM with at least 8 virtual cores, 32 GB memory, and a minimum 1 Gbps network)
  • Network connectivity to an existing AD domain controller (CIFS) from both on-prem and SoftNAS VMs
  • Source data-set hosted via on-prem Isilon & accessible via SMB/CIFS share and NFS exports
  • Internet access for both on-prem and SoftNAS VMs

Isilon Disaster Recovery (DR) Step by Step

  1. Spin up SoftNAS from the marketplace and present a hot file system (share).
  2. Set up another SoftNAS machine as a VMware® VM on-prem using the downloadable SoftNAS OVA.
  3. From the on-prem SoftNAS VM, using the built-in Lift & Shift wizard, create a replication schedule to initially copy the data from the Isilon to the SoftNAS VM in the public cloud. If there's any issue with the network during the transfer process, a suspend/resume capability ensures large transfers do not have to start over (very important with hundreds of terabytes and lengthy initial transfer jobs).
  4. Once the initial synchronization copy completes, Lift & Shift enters its Continuous Sync mode, making sure any production changes to local Isilon volumes are replicated to the cloud immediately.
  5. Now that the data is replicated to the cloud, it is instantly available should the on-prem services cease to function. This affords customers a quick transition to cloud services during DR events, whether the local applications need to be spun up in the cloud temporarily or eventually migrate there fully to improve performance. (A simplified sketch of this copy-then-sync pattern follows below.)
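
The sketch below is not SoftNAS Lift & Shift; it is only a minimal Python illustration of the copy-then-sync pattern the steps describe: an initial full copy of a source share, followed by periodic incremental passes that re-copy only files whose size or modification time changed. The mount paths and interval are placeholders.

```python
# A minimal illustration of the copy-then-sync pattern described in the steps
# above (not the actual Lift & Shift implementation).

import os
import shutil
import time

SOURCE = "/mnt/isilon_export"    # placeholder: on-prem NFS/SMB mount
TARGET = "/mnt/cloud_nas_share"  # placeholder: cloud filer mount

def needs_copy(src: str, dst: str) -> bool:
    if not os.path.exists(dst):
        return True
    s, d = os.stat(src), os.stat(dst)
    return s.st_size != d.st_size or s.st_mtime > d.st_mtime

def sync_once(source: str, target: str) -> int:
    copied = 0
    for root, _dirs, files in os.walk(source):
        dest_dir = os.path.join(target, os.path.relpath(root, source))
        os.makedirs(dest_dir, exist_ok=True)
        for name in files:
            src, dst = os.path.join(root, name), os.path.join(dest_dir, name)
            if needs_copy(src, dst):
                shutil.copy2(src, dst)  # copy2 preserves mtime for the next comparison
                copied += 1
    return copied

if __name__ == "__main__":
    while True:  # the first pass copies everything; later passes copy only deltas
        print(f"synced {sync_once(SOURCE, TARGET)} files")
        time.sleep(300)
```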

 

A detailed description of technologies used in providing the solution

Who is SoftNAS?

The SoftNAS NAS filer has been focused 100% on cloud-based file storage and NAS solutions since 2013. The SoftNAS product has been tried and tested by thousands of customers located in 36 countries across the globe. SoftNAS has brought enterprise features like high availability (HA), replication, snapshot data protection, and AD integration to the two major cloud providers, AWS and Azure. The software exists in the marketplace and can be spun up as an on-demand instance or as a BYOL instance.

How is DR data replication handled by SoftNAS?

After deploying the instances, the next challenge is establishing replication and maintaining DR synchronization from the on-premises Isilon shares to the public-cloud SoftNAS DR system(s). SoftNAS provides an integrated, flexible data synchronization tool along with a fill-in-the-blanks "Lift and Shift" feature that makes replication and sync quick and easy.

Easy to setup

As shown below, the admin just chooses the source file server (and any subdirectories) via the SMB or NFS share, then configures the transfer job. Once the transfer jobs start, it securely synchronizes the files from the Isilon source and transfers them into the SoftNAS filer running in the cloud.

Monitor Progress

A progress dashboard shows the percent complete of the replication job, along with detailed status information as the job progresses. If for any reason the job gets interrupted, it can simply be resumed from where it left off. If there’s a reason to temporarily suspend a large transfer job, there is a Pause/Resume feature.

Less than optimal network links

If remote Isilon file servers (or Windows or other file servers) also need to be replicated to the same cloud-based DR region, e.g., from factories, remote offices, branch offices, edge sites, or other remote locations over limited, high-latency WAN or Internet/VPN connections, SoftNAS addresses this barrier with its built-in UltraFast™ global data accelerator feature.

SoftNAS UltraFast provides end-to-end optimization and data acceleration for network connections where packet loss and high round-trip times commonly limit the ability of the cloud to be used for file replication.

SoftNAS UltraFast allows remote sites to accommodate high-speed, large-file data transfers of 600 Mbps or more, even when facing 300 milliseconds of latency and up to 2% packet loss on truly dirty or congested networks, or across packet-radio, satellite, or other troublesome communications conditions. UltraFast can also be used to throttle or limit network bandwidth consumption, using its built-in bandwidth schedule feature.

Why the Customer Chose SoftNAS vs. Alternative DR Solutions

SoftNAS provides an ideal choice for the customer who is looking to host data in the public cloud in that it delivers the most granular configuration capabilities of any cloud storage solution. It enables customers to custom-build their cloud NAS solution using familiar native cloud compute, storage, networking, and security services as building blocks.

Cost optimization can be achieved by combining the correct instance type with a backend pool configuration that meets throughput, IOPS, and capacity requirements. The inclusion of SoftNAS SmartTiers, SoftNAS UltraFast, and Lift and Shift as built-in features makes SoftNAS the most complete cloud-based NAS solution available as a pre-integrated cloud DR and migration tool. Even though the customer started with the Isilon DR solution, it now has the infrastructure in the cloud for DR replication of all its remote Windows and other file servers as a next step.

The alternative is for customers to pick and choose various point products from different vendors, act as the systems integrator, and develop their own DR and migration systems. SoftNAS's Platinum product provides all these capabilities in a single, cost-effective, ready-to-deploy package that saves time, reduces costs, and minimizes the risks and frustrations typically encountered with the alternative methods.

Summary and Next Steps

Isilon is a popular, premises-based file server that has served customers well for many years. Now that enterprises are moving into the hybrid cloud and public cloud computing era, customers need alternatives to on-prem file servers and dedicated DR file servers. A common first step we see customers taking at SoftNAS® is to start by replacing the Isilon file server in the DR datacenter as part of a bigger objective to eliminate the DR datacenter in its entirety and replace it with a public cloud. In other cases, customers do not have a DR datacenter and are starting out using the public cloud as the DR datacenter, while keeping the Isilon, NetApp, EMC, and Windows file servers on-prem.

In other cases, customers wish to replace both on-prem primary and DR datacenter file servers with a 100% cloud-based solution, then either rehost some or all of the applications in the public cloud or access the cloud-hosted file server via VPN and high-speed cloud network links. In either case, the SoftNAS virtual NAS appliance is combined with native cloud block and object storage to deliver the high-performance, cloud-based file server solutions customers want in the cloud, without compromising on performance or availability, and without overspending on cloud storage.