What do some of the most important brands know about SoftNAS?

What do these powerful brands all have in common?

They’re all using SoftNAS.

Many of our customers first tried to build their own cloud-native storage or port on-premises solutions to the cloud, only to quickly realize they were difficult to implement and did not provide the cost or performance they needed. Traditional storage vendors just want to sell more storage, driving companies down a slow, expensive path toward a solution that doesn’t scale easily and can’t take advantage of the capabilities and tools the cloud offers. So why not learn from their mistakes? Skip the failed experiments and jump straight to a solution that helps you manage and control your data in the cloud without the hefty price tag. SoftNAS offers NAS-like tools that support your highest-performing, most data-intensive applications while helping you manage costs in the cloud.

So how does SoftNAS provide high performance without a high cost? 

Storage tiering

Automatic dynamic storage tiering applies data aging and access policies to your data, moving aging data from high-performance block storage to less expensive, lower-performance storage, saving you up to 67% in cloud storage costs. Unlike file-based tiering models, SoftNAS SmartTiers is block-based: the portions of large files that are accessed regularly remain in the high-performance tier, while rarely accessed portions move to lower-performance tiers for greater cost savings.
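
To make the idea of block-level (rather than file-level) tiering concrete, here is a minimal, illustrative Python sketch. It is not SoftNAS’s SmartTiers implementation; the block size, aging threshold and tier names are assumptions chosen for the example.

```python
import time

BLOCK_SIZE = 1024 * 1024          # assumed 1 MiB blocks, for illustration only
COLD_AFTER_SECONDS = 30 * 86400   # assumed aging policy: 30 days without access

class TieredFile:
    """Tracks last-access time per block so each block can be tiered independently."""

    def __init__(self, num_blocks):
        now = time.time()
        self.last_access = [now] * num_blocks
        self.tier = ["hot"] * num_blocks   # "hot" = block storage, "cold" = cheaper tier

    def read(self, offset, length):
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        for b in range(first, last + 1):
            self.last_access[b] = time.time()
            self.tier[b] = "hot"           # any block that is touched again is promoted

    def apply_aging_policy(self):
        """Demote only the blocks that have gone cold; the rest of the file stays hot."""
        cutoff = time.time() - COLD_AFTER_SECONDS
        for b, accessed in enumerate(self.last_access):
            if accessed < cutoff:
                self.tier[b] = "cold"
```

A file-based tier would move the entire file once it aged; the per-block bookkeeping above is what lets the frequently read portions of a large file stay on fast storage.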

Deduplication and compression

SoftNAS doesn’t stop at storage tiering to keep your costs low. Inline data deduplication and compression greatly reduce your storage footprint, and therefore the amount of storage you have to procure. SoftNAS’s data deduplication compares your application data block by block to find and eliminate redundancies. On top of deduplication, data compression typically reduces the size of your data by a further 50-75%. With these drastic drops in storage needs, it’s not difficult to see why so many organizations look to SoftNAS to manage cloud costs for their high-performance application data.
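
At its core, block-level deduplication means hashing fixed-size blocks and storing each unique block only once. The Python sketch below illustrates the idea in its simplest form; it is not the SoftNAS implementation, and the block size and file name are assumptions made for the example.

```python
import hashlib

BLOCK_SIZE = 128 * 1024   # assumed block size, for illustration only

def deduplicate(path, store):
    """Split a file into fixed-size blocks and keep only one copy of each unique block.

    `store` maps block hash -> block bytes; the returned list of hashes is enough
    to reconstruct the file, so duplicate blocks consume no extra storage.
    """
    recipe = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if digest not in store:        # only new, unique blocks are stored
                store[digest] = block
            recipe.append(digest)
    return recipe

# Demo with a synthetic file containing three identical blocks and one unique block.
with open("application.dat", "wb") as f:
    f.write(b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE)

store = {}
recipe = deduplicate("application.dat", store)
savings = 1 - len(store) / len(recipe)
print(f"unique blocks: {len(store)} of {len(recipe)} ({savings:.0%} saved)")
```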

CapEx for OpEx

Perhaps one of the more obvious ways SoftNAS manages your costs is by enabling the shift from CapEx to OpEx. Netflix couldn’t operate at its current scale if it managed its data strictly on-premises: the costs would be astronomical and the maintenance unfathomable. Procuring on-premises storage doesn’t stop at purchasing another piece of hardware. It means buying the hardware, a secondary data center, and the electricity, manpower, and security to manage it all, and the list goes on. SoftNAS eliminates the need to purchase hardware by making the transition to the cloud fast, easy, and practical. If it worked for Netflix, why can’t it work for you?

If you’re ready to unlock the same cost-effective capabilities many powerful companies have already experienced, check out our 30-day free trial:

Learn the new rules of cloud storage

SoftNAS is now Buurst, and we’re about to change the enterprise cloud storage industry as you know it. Watch the recording of our groundbreaking live webinar announcement on 4/15/20 and learn how:
  • You can reduce your cloud storage costs by up to 80% while increasing performance (yes, you read that right!)
  • Applying configuration variables maximizes data performance without storage limitations
  • Companies such as Halliburton, SAP, and Boeing are already applying these rules to effectively manage petabytes of data in the cloud

Who should watch?

  • Cloud Architects, CIO, CTO, VP Infrastructure, Data Center Architects, Platform Architects, Application Developers, Systems Engineers, Network Engineers, VP Technology, VP IT, VP BI/Data Analytics, Solutions Architects
  • Amazon Elastic File System (EFS) customers, Amazon FSx customers, Azure NetApp Files customers, Isilon customers

Choosing the Right Type of AWS Storage for your Use-Case: Object Storage

When choosing data storage, what do you look for? AWS offers several storage types and options, and each is better suited to certain purposes than the others. For instance, if your business only needs to store data for compliance, with little need for access, Amazon S3 is a good bet. For enterprise applications, Amazon EBS SSD-backed volumes offer a Provisioned IOPS option to meet demanding performance requirements.

And then there is cost. Cost savings usually come at the price of performance. However, the array of storage options on the AWS platform means there is usually a type that strikes the balance of performance and cost your business needs.

In this series of posts, we are going to look at the defining features of each AWS storage type. By the end, you should be able to tell which type of AWS storage sounds like the right fit for your business’ storage requirements. This post focuses on the features of AWS Object storage, or S3 storage. You may also read our post that explains all about AWS block storage.

Amazon S3 storage

Amazon object storage is designed to be the most durable storage layer, with all offerings stated to provide 99.999999999% (11 nines) durability of objects over a given year. This durability equates to an average annual expected loss of 0.000000001% of objects, or, put more practically, the loss of a single object once every 10,000 years (a quick back-of-the-envelope calculation follows the list below). Given the advantages object storage brings to the table, why wouldn’t you want to use it for every scenario that involves data? This is a question put to solution architects at SoftNAS almost every day. Object storage excels in durability, but its design makes it unsuitable for some use cases. When thinking about utilizing object storage, the questions you need to answer are:

    1. What is the data life cycle?
    2. What is the data access frequency?
    3. How latency-sensitive is your application?
    4. What are the service limitations?
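
The “once every 10,000 years” figure above follows directly from the durability number. The quick calculation below assumes 10 million stored objects, which is the scale AWS typically uses when quoting that statistic.

```python
# 11 nines of annual durability means an expected loss rate of roughly 1e-11 per object per year.
annual_loss_rate = 1 - 0.99999999999          # ~1e-11
objects_stored = 10_000_000                   # assumption: 10 million objects

expected_objects_lost_per_year = annual_loss_rate * objects_stored   # ~1e-4
years_per_lost_object = 1 / expected_objects_lost_per_year           # ~10,000

print(f"Expected objects lost per year: {expected_objects_lost_per_year:.4f}")
print(f"Average years between single-object losses: {years_per_lost_object:,.0f}")
```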

The hallmark of S3 storage has always been high throughput – with high latency. But AWS has refined its offerings by adding several S3 storage classes that address different needs. These include:

    • AWS S3 Standard
    • S3 Intelligent Tiering
    • S3 Standard IA (Infrequent Access)
    • S3 One Zone IA
    • S3 Glacier
    • S3 Glacier Deep Archive

These types are listed in order of increasing latency and decreasing cost/GB. The access time across all of these S3 storage classes ranges from milliseconds to hours.
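
The storage class is chosen per object when you write it. Here is a minimal boto3 sketch; the bucket name and key are hypothetical, and the storage class can be any of the classes listed above (e.g. STANDARD, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE).

```python
import boto3

s3 = boto3.client("s3")

# Write an infrequently accessed object directly into the Standard-IA class.
s3.put_object(
    Bucket="example-archive-bucket",      # hypothetical bucket
    Key="reports/2020/q1.csv",            # hypothetical key
    Body=b"example report contents",
    StorageClass="STANDARD_IA",
)
```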

How frequently you will need to access your data dictates the type of S3 storage you should choose for your backend. The cost of the S3 tiers is determined not only by the amount of storage but also by how that storage is accessed. The billing incorporates storage used, network data transferred in, network data transferred out, data retrieval, and the number of data requests (PUT, GET, DELETE). Workloads with random read and write operations, low latency and high IOPS requirements are not suitable for S3 storage. Use cases and workloads that are not latency-sensitive and require high throughput are good candidates.
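
Because several dimensions feed the bill, it can help to write the cost model down explicitly. The helper below is illustrative only; the per-unit rates are parameters you would fill in from the current AWS price list for your region and storage class, not real prices.

```python
def monthly_s3_cost(gb_stored, gb_transferred_out, put_requests, get_requests,
                    gb_retrieved, rate_storage_gb, rate_transfer_out_gb,
                    rate_per_1k_puts, rate_per_1k_gets, rate_retrieval_gb):
    """Rough monthly S3 cost: storage + outbound transfer + requests + retrieval.

    All rate_* arguments are USD per unit and must come from the AWS pricing page
    for your region and storage class; data transferred in is free and is omitted.
    """
    return (gb_stored * rate_storage_gb
            + gb_transferred_out * rate_transfer_out_gb
            + put_requests / 1000 * rate_per_1k_puts
            + get_requests / 1000 * rate_per_1k_gets
            + gb_retrieved * rate_retrieval_gb)
```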

AWS S3 is object storage, so remember that all data is stored as objects in its native format, with none of the hierarchy you get from a file system. On the other hand, objects may be stored across several machines and can be accessed from anywhere.
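
Although there is no real directory tree, key prefixes are commonly used to emulate one. A short boto3 sketch (the bucket and prefix are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Keys are flat strings, but a "/" delimiter lets you browse them like folders.
response = s3.list_objects_v2(
    Bucket="example-archive-bucket",   # hypothetical bucket
    Prefix="reports/2020/",            # hypothetical pseudo-folder
    Delimiter="/",
)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```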

Read our post all about block storage here; it also includes tips on designing your AWS storage for optimum performance.

Need More Help or Information?

Even with all the above information, identifying the right data storage type and instance sizes, and setting up custom architectures to suit your business’s performance requirements, can be tricky. SoftNAS has assisted thousands of businesses with their AWS VPC configurations, and our in-house experts are available to answer questions and provide guidance free of charge.

Request a complimentary professional consultation.

Choosing the Right Instance Type and Instance Size for AWS and Azure

In this post, we’re sharing an easy way to determine the best instance type and appropriate instance size to use for specific use cases in the AWS and Azure clouds.

To help you decide, there are some considerations to keep in mind. Let’s go through each of these determining factors in depth.

Decision Point 1 – What use case are you trying to address?

  • A. Migration to the Cloud
    • Migrating existing applications to the cloud shouldn’t be complex, expensive, time-consuming, or resource-intensive, nor should it force you to rewrite your application to run in the public cloud.

      If your existing applications access storage using the CIFS/SMB, NFS, AFP or iSCSI storage protocols, you will need to choose a NAS filer solution that allows your applications to access cloud storage (block or object) using the same protocols they already use.

  • B. SaaS-Enabled Applications
    • For revenue-generating SaaS applications, high performance, maximum uptime and strong security with access control are critical business requirements.

      Running your SaaS apps in a multi-tenant, public cloud environment while simultaneously fulfilling these requirements can be challenging. An enterprise-grade cloud NAS filer can help you meet these challenges, even in a public cloud environment. A good NAS solution provider will deliver high availability with no downtime, high levels of performance, and strong security with integration into industry-standard access control, making it easier to SaaS-enable your apps.

  • C. File Server Consolidation
    • Over time, end users and business applications create more and more data, usually unstructured, and rather than waiting for the IT department, these users install file servers wherever they can find room for them, close to their locations. At the same time, businesses either get acquired or acquire other companies, inheriting all their file servers in the process. Ultimately, it’s the IT department that must manage this “server sprawl,” dealing with OS and software patches, hardware upkeep and maintenance, and security. With limited IT staff and resources, the task becomes impossible. The best long-term solution is, of course, the cloud, with a NAS filer to migrate files to it.

      This strategy allows for scalable storage that users access the same way they have always accessed their files on the local file servers.

  • D. Legacy NAS Replacement
    • With a limited IT staff and budget, it’s impractical to keep investing in legacy NAS systems and purchase more and more storage to keep pace with the rapid growth of data. Instead, investment in enterprise-grade cloud NAS can help businesses avoid burdening their IT staff with maintenance, support and upkeep, and pass those responsibilities on to a cloud platform provider. Businesses also gain the advantages of dynamic storage scalability to keep pace with data growth, and the flexibility to map performance and cost to their specific needs.
  • E. Backup/DR/Archive in the Cloud
    • Use tools to replicate and back up your data from your VMware data center to the AWS and Azure public clouds. Eliminate physical backup tapes by archiving data in inexpensive S3 storage or in cold storage such as AWS Glacier for long-term retention. For stringent Recovery Point Objectives, a cloud NAS can also serve as an on-premises backup or primary storage target for local area network (LAN) connected backups.

      As a business’ data grows, the backup window can become unmanageable and tie up precious network resources during business hours. Cloud NAS with local disk-based caching reduces the backup window by streaming data in the background for better WAN optimization.

Decision Point 2 – What Cloud Platform do you want to use?

No matter which cloud provider is selected, there are some basic infrastructure details to keep in mind. The basic requirements are:

  • Number of CPUs
  • Size of RAM
  • Network performance
  • A. AWS
    • Standard: r5.xlarge is a good starting point in regard to memory and CPU resources. This category is suited to handle processing and caching with minimal requirements for network bandwidth. It comprises 4 vCPU, 16 GiB RAM, 1GbE network.

      Medium: r5.2xlarge is a good choice for workloads that are read-intensive, and will benefit from the larger memory-based read cache for this category. The additional CPU will also provide better performance when deduplication, encryption, compression and/or RAID is enabled. Composition: 8 vCPU, 32 GiB RAM, 10GbE network.

      High-end: r5.24xlarge can be used for workloads that require a very high-speed network connection due to the amount of data transferred over it. In addition to the very high-speed network, this level of instance gives you far more storage, CPU and memory capacity. Composition: 96 vCPU, 768 GiB RAM, 25GbE network. (The suggested sizes for both clouds are summarized in a short sketch after the Azure list below.)

  • B. Azure
    • Dsv3-series support premium storage and are the latest, hyper-threaded general-purpose generation running on both the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) and the 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell) processor. With the Intel Turbo Boost Technology 2.0, the Dsv3 can achieve up to 3.5 gigahertz (GHz). The Dsv3-series sizes offer a combination of vCPU, memory and temporary storage best suited for most production workloads.


    • Standard: D4s Standard v3 4 vCPU, 16 GiB RAM with moderate network bandwidth
    • Medium: D8s Standard v3 8 vCPU, 32 GiB RAM with high network bandwidth
    • High-end: D64s Standard v3 64 vCPU, 256 GiB RAM with extremely high network bandwidth
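
As a recap of the sizing guidance above, here is a minimal Python lookup that maps a cloud platform and performance tier to the suggested starting instance size; the tier labels are simply names chosen for the example.

```python
# Suggested starting points, taken from the sizing guidance above.
INSTANCE_GUIDE = {
    "aws": {
        "standard": "r5.xlarge",     # 4 vCPU, 16 GiB RAM
        "medium":   "r5.2xlarge",    # 8 vCPU, 32 GiB RAM
        "high-end": "r5.24xlarge",   # 96 vCPU, 768 GiB RAM
    },
    "azure": {
        "standard": "D4s_v3",        # 4 vCPU, 16 GiB RAM
        "medium":   "D8s_v3",        # 8 vCPU, 32 GiB RAM
        "high-end": "D64s_v3",       # 64 vCPU, 256 GiB RAM
    },
}

def suggested_instance(platform, tier):
    """Return the suggested starting instance size for a platform and performance tier."""
    return INSTANCE_GUIDE[platform.lower()][tier.lower()]

print(suggested_instance("AWS", "medium"))    # r5.2xlarge
```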

Decision Point 3 – What type of storage is needed?

Both AWS and Azure offer block as well as object storage. Block storage is normally used with file systems, while object storage addresses the need to store “unstructured” data like music, images, video, backup files, database dumps and log files. Selecting the right type of storage also influences how well an AWS instance or Azure VM will perform.

Other resources:

About AWS S3 object storage: https://aws.amazon.com/s3/

About AWS EBS block storage: https://aws.amazon.com/ebs/

Types of Azure storage: https://docs.microsoft.com/en-us/azure/storage/common/storage-introduction

Decision Point 4 – Need a “No Storage Downtime Guarantee”?

No matter which cloud platform is used, look for a filer that offers High Availability (HA). A robust set of HA capabilities protects against data center, availability zone, server, network and storage subsystem failures to keep the business running without downtime. HA monitors all critical storage components to ensure they remain operational. In the event of an unrecoverable failure in a system component, another storage controller detects the problem and automatically takes over, ensuring no downtime or business impact. Companies are thus protected from the lost revenue they would otherwise incur when access to their data resources and critical business applications is disrupted.
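
The takeover described above typically rests on a simple heartbeat between controllers. The Python sketch below is a generic illustration of that pattern, not the SoftNAS HA implementation; the timeout, poll interval, class and method names are all assumptions.

```python
import time

HEARTBEAT_TIMEOUT = 15   # assumed: seconds without a heartbeat before failover
POLL_INTERVAL = 5        # assumed: seconds between health checks

class Controller:
    """Minimal stand-in for a storage controller that exposes a heartbeat."""

    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.time()

    def beat(self):
        self.last_heartbeat = time.time()   # called periodically while healthy

    def take_over(self):
        print(f"{self.name}: taking over the shared IP and storage pool")

def monitor(primary, standby):
    """Promote the standby when the primary's heartbeat goes silent for too long."""
    while True:
        if time.time() - primary.last_heartbeat > HEARTBEAT_TIMEOUT:
            standby.take_over()
            return
        time.sleep(POLL_INTERVAL)
```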

If you’re looking for more personal guidance or have any technical questions, get in touch with our team of cloud experts who have done thousands of VPC and VNet configurations across AWS and Azure. Even if you are not considering SoftNAS, our free professional services reps will be happy to answer your questions.

Learn more about free professional services, environment setup and cloud consultations.

How the SoftNAS QuickStart Wizard Helps You Deploy SoftNAS in the Cloud

In our ongoing quest to improve the user experience, we’ve created a new QuickStart Wizard to assist you on your journey with SoftNAS. The SoftNAS QuickStart Wizard takes the guesswork out of deploying your SoftNAS instance on AWS. Provide some basic information about your desired deployment and your SoftNAS instance will be provisioned for you, giving you or your application ready, accessible storage compatible with key industry protocols, including CIFS, NFS, AFP and iSCSI.

    1.) The first step in the journey of a new SoftNAS deployment is to select the region to which you wish to deploy. This should be the region geographically closest to the majority of users accessing the storage or the application the storage serves.

QuickStart Wizard Step 1

    2.) Next, your AWS credentials must be provided in the form of Access and Secret Keys. These keys are used only to make the API calls necessary to deploy your instance or instances on the AWS platform, and are cleared and discarded shortly thereafter to preserve customer privacy.

QuickStart Wizard Step 2

    3.) Next, you will select the amount of storage you wish to provision for your instance. The Wizard will also create storage volumes that you can connect to using CIFS or NFS. Additional storage using other compatible storage protocols, such as iSCSI or AFP, can be provisioned after deployment if necessary.

QuickStart Wizard Step 3

As you can see, SoftNAS has taken care to ensure that the deployment process is as simple as it can be, while providing enough options to cover a significant range of deployment possibilities.

    4.) Next, you can determine whether a highly available deployment is necessary for your use case, or if a single SoftNAS instance will serve your needs. You can also decide the accessibility of your instance, whether it is only accessible from within your AWS VPC or is open via a Public IP (Single Node deployment only). For testing and demonstration purposes, an open configuration is simpler. For production purposes, a private deployment is recommended.

    5.) Finally, SoftNAS provides three basic performance levels for your deployment. The instance sizes offered are simplified, but they coincide with the guidance provided by our Instance Sizing Guide. If the selected instance size does not match your requirements, you can change it at any time.

QuickStart Wizard Step 5

    6.) All that remains is to click the subscribe link and accept the SoftNAS terms and conditions. Your instance will be provisioned shortly thereafter.

QuickStart Wizard Step 6

SoftNAS will deploy a standard configuration suited to typical user needs, based on the performance selections made (a rough sketch of the equivalent API calls follows the list below). This deployment includes:

  • 1 or 2 AWS EC2 instances, depending on whether High Availability is selected
  • Thin-provisioned AWS Elastic Block Store (EBS)
  • Network connections required to access your SoftNAS instances and cloud storage
    • VPN or other networking method may be required to access Private VPC deployments
  • Security groups and IAM policies configured to SoftNAS best practices
  • AWS VPC and required Subnets
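
For context, the sketch below shows in boto3 the general kind of API calls such a deployment involves: creating a VPC, a subnet, a security group, and an EC2 instance with an EBS volume. It is an illustration only, not the wizard’s actual implementation; the region, AMI ID, CIDR ranges and names are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

# Create a VPC and a subnet for the deployment.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# Security group restricting access to storage clients inside the VPC.
sg = ec2.create_security_group(
    GroupName="nas-demo-sg",              # hypothetical name
    Description="Example NAS access",
    VpcId=vpc_id,
)

# Launch the instance with a thin-provisioned EBS data volume attached.
ec2.run_instances(
    ImageId="ami-xxxxxxxx",               # placeholder AMI ID
    InstanceType="r5.xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
    SecurityGroupIds=[sg["GroupId"]],
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sdf",
        "Ebs": {"VolumeSize": 100, "VolumeType": "gp2"},
    }],
)
```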

The QuickStart Wizard allows you to quickly deploy and evaluate SoftNAS. In a few short steps, your instance will be deployed and you can begin using SoftNAS to manage your storage. Your deployment may still need to be tailored into a complete production solution for your organization’s purposes. If assistance is required, you can avail yourself of SoftNAS’s free Professional Environment Set-up service.

Let us know what you think of our new Quick Start Wizard, and whether our efforts to help our customers are working for you!