Recently, a SoftNAS customer that provides a sales enablement and readiness platform evaluated their on-premises storage environment, with growing concerns about whether their existing solution could support the large volumes of data their applications needed. The customer had Isilon storage in their data center and a dedicated remote Disaster Recovery (DR) data center with another Isilon storing 100 TB of data. As their data-intensive applications grew, the company predicted the dedicated DR site would reach full capacity within two years, which pushed them to evaluate cloud-based solutions that wouldn't require continually purchasing and maintaining new hardware.
Buurst SoftNAS quickly became the ideal choice: it could support the petabytes of file storage they needed and let them dynamically tier their storage by moving aging data to slower, less expensive tiers. This would meet their 100 TB storage requirement, let them pay only for the services they used, and eliminate the need to buy and maintain physical data centers, network-attached storage, and storage area network (SAN) appliances.
By moving data to SoftNAS, the customer was able to quickly build a cloud-based DR platform with 100 TB of attached storage. Following the success of that DR platform, the company plans to make SoftNAS their primary storage solution, leveraging enterprise-class features like bottomless scalable storage and highly available clustering to take full advantage of what the cloud has to offer.
So, how does this success story relate to you?
Data volumes are increasing at rates that are virtually impossible to keep up with using on-premises storage. If you're leveraging an Isilon storage solution, you're likely looking for ways to expand your storage capacity quickly, securely, and at the lowest cost. When considering your data storage strategy, ask yourself a few key questions:
- Do I have the physical space, power, and cooling for more storage?
- Do I have CapEx to purchase more storage?
- Do I want to build more data centers for more storage?
On-premises storage solutions can limit your organization from truly unlocking modern data analytics and AI/ML capabilities. The cost and upkeep required to maintain on-premises solutions prevent your teams from exploring ways to position your business for future growth opportunities. This push for modernization is often a driving factor for organizations to evaluate cloud-based solutions, which come with their own considerations:
- Do I have a reliable and fast connection to the internet?
- How can I control the cost of cloud storage?
- How can I continuously sync live data?
With SoftNAS running on the cloud you can:
- Make cloud backups run up to 400% faster at near-block-level performance with object storage pricing, resulting in substantial cost savings. SoftNAS optimizes data transfer to cloud object storage, so it’s as fast as possible without exceeding read/write capabilities.
- Automate storage tiering policies that move aged data from more expensive, high-performance block storage to less expensive, slower storage, cutting cloud storage costs by up to 67% (see the sketch just after this list).
- Continuously keep content up to date when synchronizing data to the cloud of your choice by reliably running bulk data transfer jobs with automatic restart/suspend/resume.
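To make the tiering idea concrete, here is a minimal, hypothetical sketch of an age-based policy in Python. The mount points, the 90-day threshold, and the use of last-access time are illustrative assumptions; SoftNAS applies policies like this through its own management layer, not through a script like this one.

```python
import shutil
import time
from pathlib import Path

# Hypothetical tiering policy: demote files untouched for 90 days from a
# fast block-storage volume to a cheaper object-storage-backed mount.
HOT_TIER = Path("/mnt/block")    # illustrative mount points, not SoftNAS paths
COLD_TIER = Path("/mnt/object")
MAX_AGE_DAYS = 90

def tier_aged_files() -> None:
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for path in HOT_TIER.rglob("*"):
        # Demote only regular files whose last access is older than the cutoff.
        if path.is_file() and path.stat().st_atime < cutoff:
            dest = COLD_TIER / path.relative_to(HOT_TIER)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), dest)

if __name__ == "__main__":
    tier_aged_files()
```

A production tiering engine would typically also leave a pointer behind so demoted files remain transparently readable from their original location.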
To get started, point SoftNAS at your existing Isilon shared storage volumes, select your cloud storage destination, and immediately begin moving your data. It's that easy.
Find out more about how SoftNAS enables you to:
- Trade CapEx for OpEx
- Seamlessly and securely migrate live production data
- Control the cost of cloud storage
- Continuously sync live data to the cloud
Download our eBook to learn about managing cloud data costs without sacrificing performance.
For the past seven years, Buurst's SoftNAS has helped customers in 35 countries successfully migrate thousands of applications and petabytes of data out of on-premises data centers and into the cloud. Over that time, we've witnessed a major shift in the types of applications and the organizations involved.
The move to the cloud started with simpler, low-risk apps and data until companies became comfortable with cloud technologies. Today, we see companies deploying their core business and mission-critical applications to the cloud, along with everything else, as they evacuate their data centers and colos at a breakneck pace.
At the same time, the mix of players has also shifted from early adopters and DevOps to a blend that includes mainstream IT.
The major cloud platforms make it increasingly easy to leverage cloud services, whether building a new app, modernizing apps or migrating and rehosting the thousands of existing apps large enterprises run today.
Whatever cloud app strategy is taken, one of the critical business and management decisions is where to maintain control of the infrastructure and where to turn control over to the platform vendor or service provider to handle everything, effectively outsourcing those components, apps and data.
So how can we approach these critical decisions to either maintain control or outsource to others when migrating to the cloud? This question is especially important to carefully consider as we move our most precious, strategic, and critical data and application business assets into the cloud.
One approach is to start by determining whether the business, applications, data and underlying infrastructure are “core” vs. “context”, a distinction popularized by Geoffrey Moore in Dealing with Darwin.
He describes core and context as a distinction that separates the few activities a company performs that create true differentiation from the customer's viewpoint (CORE) from everything else the company must do to stay in business (CONTEXT).
Core elements of a business are the strategic areas and assets that create differentiation and drive value and revenue growth, including innovation initiatives.
Context refers to the necessary aspects of the business that are required to “keep the lights on”, operate smoothly, meet regulatory and security requirements and run the business day-to-day; e.g., email should be outsourced unless you are in the email hosting business (in which case it’s core).
It’s important to maintain direct control of the core elements of the business, focusing employees and our best and brightest talents on these areas. In the cloud, core elements include innovation, business-critical and revenue-generating applications and data, which remain central to the company’s future.
Certain applications and data that comprise business context can and should be outsourced to others to manage. These areas remain important as the business cannot operate without them, but they do not warrant our employees’ constant attention and time in comparison to the core areas.
The core demands the highest performance levels to ensure applications run fast and keep customers and users happy. It also requires the ability to maintain SLAs around high availability, RTO and RPO objectives to meet contractual obligations. Core demands the flexibility and agility to quickly and readily adapt as new business demands, opportunities and competitive threats emerge.
Many of these same characteristics also matter for business context areas, but they are less critical there, since context can simply be moved from one outsourced vendor to another as needed.
Increasingly, we see the business-critical, core applications and data migrating into the cloud. These customers demand control of their core business apps and data in the cloud, as they did on-premises. They are accustomed to managing key infrastructure components, like the network-attached storage (NAS) that hosts the company's core data assets and powers the core applications. We see customers choose a dedicated Cloud NAS that keeps them in control of their core in the cloud.
Example core apps include revenue-generating e-discovery, healthcare imaging, 3D seismic oil and gas exploration, financial planning, loan processing, video rental and more. The most common theme across these apps is that they drive core subscription-based SaaS business revenue. Increasingly, we see both file and database data, especially SQL Server, being migrated and hosted atop the Cloud NAS.
For these core business use cases, maintaining control over the data and the cloud storage is required to meet performance and availability SLAs, security and regulatory requirements, and to achieve the flexibility and agility to quickly adapt and grow revenues. The dedicated Cloud NAS meets the core business requirements in the cloud, as it has done on-premises for years.
We also see much less-critical business context data being outsourced and stored in cloud file services such as Azure Files and AWS EFS. In other cases, the Cloud NAS's ability to handle both core and context use cases is appealing; for example, combining SSD for performance with object storage for lower-cost bulk storage and archival, behind unified NFS and CIFS/SMB access, makes the Cloud NAS more attractive.
There are certainly other factors to consider when choosing where to draw the lines between control and outsourcing applications, infrastructure and data in the cloud.
Ultimately, understanding which applications and data are core vs context to the business can help architects and management frame the choices for each use case and business situation, applying the right set of tools for each of these jobs to be done in the cloud.
In this post, we’re sharing an easy way to determine the best instance type and appropriate instance size to use for specific use cases in the AWS and Azure clouds.
To help you decide, there are some considerations to keep in mind. Let’s go through each of these determining factors in depth.
Decision Point 1 – What use case are you trying to address?
- A. Migration to the Cloud
- Migrating existing applications into the cloud should be neither complex, expensive, time-consuming, nor resource-intensive, and it shouldn't force you to rewrite your application to run in the public cloud. If your existing applications access storage using the CIFS/SMB, NFS, AFP or iSCSI protocols, choose a NAS filer solution that lets them access cloud storage (block or object) using the same protocols they already use.
- B. SaaS-Enabled Applications
- For revenue-generating SaaS applications, high performance, maximum uptime and strong security with access control are critical business requirements. Fulfilling these requirements while running your SaaS apps in a multi-tenant, public cloud environment can be challenging. An enterprise-grade cloud NAS filer can help you cope with these challenges, even in a public cloud environment: a good NAS solution provider will assure high availability with no downtime, high levels of performance, strong security integrated with industry-standard access control, and an easier path to SaaS-enabling apps.
- C. File Server Consolidation
- Over time, end users and business applications create more and more data – usually unstructured – and rather than waiting for the IT department, these users install file servers wherever they can find room for them, close to their locations. At the same time, businesses acquire or are acquired by other companies, inheriting all their file servers in the process. Ultimately, it's the IT department that must manage this "server sprawl": OS and software patches, hardware upkeep and maintenance, and security. With limited IT staff and resources, the task becomes impossible. The best long-term solution is, of course, the cloud, with a NAS filer to migrate files there. This strategy provides scalable storage that users access exactly the way they have always accessed their files on local file servers.
- D. Legacy NAS Replacement
- With a limited IT staff and budget, it's impractical to keep investing in legacy NAS systems, purchasing more and more storage to keep pace with the rapid growth of data. Instead, investing in an enterprise-grade cloud NAS helps businesses avoid burdening their IT staff with maintenance, support and upkeep, passing those responsibilities on to a cloud platform provider. Businesses also gain dynamic storage scalability to keep pace with data growth, and the flexibility to map performance and cost to their specific needs.
- E. Backup/DR/Archive in the Cloud
- Use tools to replicate and back up your data from your VMware data center to the AWS and Azure public clouds. Eliminate physical backup tapes by archiving data in inexpensive S3 storage, or in cold storage like AWS Glacier for long-term retention. For stringent Recovery Point Objectives, cloud NAS can also serve as an on-premises backup or primary storage target for local area network (LAN)-connected backups. As a business's data grows, the backup window can become unmanageable and tie up precious network resources during business hours. Cloud NAS with local disk-based caching reduces the backup window by streaming data in the background for better WAN optimization.
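To illustrate the archiving step above, here is a minimal boto3 sketch that adds an S3 lifecycle rule transitioning aged backups to Glacier. The bucket name and prefix are assumptions for the example; SoftNAS and native backup tools manage equivalent policies through their own interfaces.

```python
import boto3

# Hypothetical example: transition backups older than 90 days to Glacier.
s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",            # assumed bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},  # only archive this prefix
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)
```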
Decision Point 2 – What Cloud Platform do you want to use?
No matter which cloud provider is selected, there are some basic infrastructure details to keep in mind. The basic requirements are:
- Number of CPUs
- Size of RAM
- Network performance
- A. AWS
- Standard: r5.xlarge is a good starting point with regard to memory and CPU resources. This category is suited to processing and caching with minimal requirements for network bandwidth. Composition: 4 vCPU, 16 GiB RAM, 1 GbE network.
- Medium: r5.2xlarge is a good choice for read-intensive workloads, which benefit from this category's larger memory-based read cache. The additional CPU also provides better performance when deduplication, encryption, compression and/or RAID is enabled. Composition: 8 vCPU, 32 GiB RAM, 10 GbE network.
- High-end: r5.24xlarge can be used for workloads that require a very high-speed network connection due to the amount of data transferred. In addition to the very high-speed network, this instance level gives you far more storage, CPU and memory capacity. Composition: 96 vCPU, 768 GiB RAM, 25 GbE network.
- B. Azure
- The Dsv3-series supports premium storage and is the latest hyper-threaded general-purpose generation, running on both the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) and the 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell) processors. With Intel Turbo Boost Technology 2.0, Dsv3 instances can reach up to 3.5 GHz. The Dsv3-series sizes offer a combination of vCPU, memory and temporary storage best suited for most production workloads.
- Standard: D4s v3, 4 vCPU, 16 GiB RAM, with moderate network bandwidth
- Medium: D8s v3, 8 vCPU, 32 GiB RAM, with high network bandwidth
- High-end: D64s v3, 64 vCPU, 256 GiB RAM, with extremely high network bandwidth
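To summarize Decision Point 2, here is a small, hypothetical Python lookup that mirrors the sizing guidance above; the tier names come from this article, and the instance identifiers are the standard AWS and Azure size names.

```python
# Suggested NAS filer instance sizes per workload tier (from this article).
SIZING = {
    "aws": {
        "standard": "r5.xlarge",
        "medium": "r5.2xlarge",
        "high-end": "r5.24xlarge",
    },
    "azure": {
        "standard": "Standard_D4s_v3",
        "medium": "Standard_D8s_v3",
        "high-end": "Standard_D64s_v3",
    },
}

def pick_instance(cloud: str, tier: str) -> str:
    """Return the suggested instance type for a cloud platform and workload tier."""
    return SIZING[cloud.lower()][tier.lower()]

print(pick_instance("aws", "medium"))      # -> r5.2xlarge
print(pick_instance("azure", "high-end"))  # -> Standard_D64s_v3
```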
Decision Point 3 – What type of storage is needed?
Both AWS and Azure offer block as well as object storage. Block storage is normally used with file systems, while object storage addresses the need to store "unstructured" data like music, images, video, backup files, database dumps and log files. Selecting the right type of storage also influences how well an AWS instance or Azure VM will perform.
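The practical difference shows up in how applications address the two storage types. Below is a short, illustrative Python sketch (the mount path and bucket name are assumptions): block storage looks like a local filesystem, while object storage is reached through an HTTP API such as S3's.

```python
import boto3

# Block storage (EBS, Azure managed disks) is attached to the VM and
# accessed through the filesystem like any local disk:
with open("/mnt/data/report.csv", "w") as f:  # illustrative mount path
    f.write("id,value\n1,42\n")

# Object storage (S3, Azure Blob) is reached over an HTTP API instead;
# here the S3 flavor via boto3, with an assumed bucket name:
s3 = boto3.client("s3")
s3.put_object(Bucket="example-bucket", Key="dumps/db-2024.sql", Body=b"-- dump --")
```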
- About AWS S3 object storage
- About AWS EBS block storage
- Types of Azure storage
Decision Point 4 – Need a “No Storage Downtime Guarantee”?
No matter which cloud platform is used, look for a filer that offers High Availability (HA). A robust set of HA capabilities protects against data center, availability zone, server, network and storage subsystem failures to keep the business running without downtime. HA monitors all critical storage components, ensuring they remain operational. If a system component fails unrecoverably, another storage controller detects the problem and automatically takes over, so no downtime or business impact occurs, and companies are protected from the lost revenue that disrupted access to data and critical business applications would otherwise cause.
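Under the hood, HA monitoring follows a simple pattern: poll the primary controller's health and promote the standby after repeated failures. The sketch below is a deliberately simplified, hypothetical illustration of that loop (the endpoint, thresholds, and takeover hook are all assumptions), not SoftNAS's actual implementation.

```python
import time
import urllib.request

PRIMARY_HEALTH_URL = "http://10.0.1.10:8443/health"  # assumed controller endpoint
CHECK_INTERVAL_S = 5
FAILURES_BEFORE_TAKEOVER = 3  # tolerate transient blips before failing over

def healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the primary controller answers its health check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def promote_secondary() -> None:
    # A real HA pair would reassign the virtual IP and take over storage here.
    print("secondary controller taking over")

failures = 0
while True:
    failures = 0 if healthy(PRIMARY_HEALTH_URL) else failures + 1
    if failures >= FAILURES_BEFORE_TAKEOVER:
        promote_secondary()
        break
    time.sleep(CHECK_INTERVAL_S)
```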
If you’re looking for more personal guidance or have any technical questions, get in touch with our team of cloud experts who have done thousands of VPC and VNet configurations across AWS and Azure. Even if you are not considering SoftNAS, our free professional services reps will be happy to answer your questions.
Learn more about free professional services, environment setup and cloud consultations.