SoftNAS Dual Zone High Availability

SoftNAS SNAP HA High Availability delivers a 99.999% uptime guarantee in a low-cost, low-complexity solution that is easy to deploy and manage. A robust set of HA capabilities protects against cloud storage failures to keep business running without downtime.

SNAP HA monitors all storage components for potential failure and automatically takes over when necessary. It overcomes the challenge of protecting data and application availability across a cloud infrastructure, while easily integrating into existing IT deployments. 

Continuous Availability ensures your applications are always available when you need them. Multiple data paths safeguard against hardware, software and human error. 

  • Safeguard critical data against unplanned storage outages, 24x7x365 
  • Feel confident that your business is protected at all times 
  • Protect data across multiple availability zones 
  • Create data protection with a design philosophy of simplicity and ease of use 
  • Replicate data for up-to-the-minute recovery 
  • Prevent incorrect or outdated data from being made available after multiple failures across the storage environment 
  • Ensure your applications and storage remain available 
  • Perform storage maintenance and upgrades with little or no downtime in your production environment 

With SoftNAS you can rely on continuous availability, highly available snapshots, replication, and enterprise-class storage services to drive your business forward. 

Several measures have been taken to ensure the highest possible data integrity for your highly available block storage system. An independent “witness” HA controller function ensures there is never a condition that can result in what is known as “split-brain”, where a controller with outdated data is accidentally brought online. SNAP HA prevents split-brain using several industry-standard best practices, including a third-party witness HA control function that tracks which node contains the latest data; on AWS, shared data stored in highly redundant S3 storage serves this purpose. 

Another HA feature is “fencing”. In the event of a node failure or takeover, the downed controller is shut down and fenced off, preventing it from participating in the cluster until any potential issues can be analyzed and corrected, at which point the controller can be admitted back into the cluster. 

Finally, data synchronization integrity checks prevent accidental failover or manual takeover by a controller which contains data which is out of date. 

The combination of tunable high-integrity features built into SNAP HA ensures data is always protected and safe, even in the face of unexpected types of failures or user error.  

SoftNAS SNAP HA High Availability delivers the availability required by mission-critical applications running in virtual machines and cloud computing environments, independent of the operating system and application running on it. HA provides uniform, cost-effective failover protection against hardware and operating system outages within virtualized IT and cloud computing environments. SNAP HA: 

  • Monitors SoftNAS storage servers to detect hardware and storage system failures 
  • Automatically detects network and storage outages and re-routes NAS services to keep NFS and Windows servers and clients operational 
  • Restarts SoftNAS storage services on other hosts in the cluster without manual intervention when a storage outage is detected 
  • Reduces application and IT infrastructure downtime by quickly switching NAS clients over to another storage server when an outage is detected 
  • Maintains a fully replicated copy of live production data for disaster recovery for block storage 
  • Is quick and easy to install by any IT administrator, with just a few mouse clicks using the automatic setup wizard 

SoftNAS SNAP HA provides NFS, CIFS and iSCSI services via redundant storage controllers. One controller is active, while the other is a passive standby controller. Block replication transmits only the changed data blocks from the source (primary) controller node to the target (secondary) controller. Data is maintained in a consistent state on both controllers by the ZFS copy-on-write filesystem, which ensures data integrity. In effect, this provides a near real-time backup of all production data, kept current within 1 to 2 minutes. 
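As a rough illustration of the mechanism (SoftNAS automates all of this, and the pool and snapshot names here are made up), incremental block replication at the ZFS layer looks conceptually like this:

# zfs snapshot pool1/vol1@replica-2
# zfs send -i pool1/vol1@replica-1 pool1/vol1@replica-2 | ssh secondary-node zfs receive -F pool1/vol1

Only the blocks that changed between the two snapshots cross the network, which is why the secondary copy stays within a minute or two of the primary even for very large datasets.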

A key component of SNAP HA is the HA Monitor. The HA Monitor runs on both nodes that are participating in SNAP HA. On the secondary node, HA Monitor checks network connectivity, as well as the primary controller’s health and its ability to continue serving storage. Faults in network connectivity or storage services are detected within 10 seconds or less, and an automatic failover occurs, enabling the secondary controller to pick up and continue serving NAS storage requests, preventing any downtime. 

Once the failover process is triggered, either due to the HA Monitor (automatic failover) or as a result of a manual takeover action initiated by the admin user, NAS client requests for NFS, CIFS and iSCSI storage are quickly re-routed over the network to the secondary controller, which takes over as the new primary storage controller. Takeover on VMware typically occurs within 20 seconds or less. On AWS, it can take up to 30 seconds, due to the time required for network routing configuration changes to take place. 

In AWS, SNAP HA is applied to SoftNAS storage controllers running in a Virtual Private Cloud (VPC). It is recommended to place each controller into a separate AWS Availability Zone (AZ), which provides the highest degree of underlying hardware infrastructure redundancy and availability. 

SNAP HA has been validated in real-world enterprise customer environments and is proven to handle hundreds of millions of files efficiently and effectively. The use of block replication instead of file replication supports hundreds of millions of files and directories. 

SoftNAS High-Performance Cloud Storage for SaaS

Performance and cost are critical considerations when building a cloud storage solution for SaaS. You need a system that can ingest data, store it, provide access to it, and do all of this cost-effectively. Your goal is a cloud storage solution that is highly scalable, highly available, and efficient. 

Cloud storage needs to be resilient. It needs to handle segmentation, service interruptions and outages, hardware failures, and so on; the end user should never see these issues. Your overall solution needs to be reliable, cost-effective, and highly scalable with high availability. 

The protocols you choose depend on the needs of your application and your user base. Some applications work best with NFS, others with CIFS. Consider the impact each of these choices has on your overall solution. 

When designing cloud storage, different cost models come into play. Managed file services like AWS EFS and Azure Files are expensive and slow. A more practical choice may be to use block storage from a cloud provider and serve it with a Cloud NAS, which provides scalability, performance, and reduced costs. 

You have choices at every level, from your deployment model to upfront data center costs and ongoing operations costs. The less you spend on both, the better off you are. 

A high-performance, low-cost, and highly scalable cloud storage solution is vital to meet the demands of SaaS offerings. Designing and deploying a cloud solution is complicated, but Buurst SoftNAS helps by giving you control of your instance size for CPU, memory, and network. The main advantage is the ability to tune your performance: SoftNAS lets you choose the instance size based on performance needs, and it is flexible, cost-effective, and easy to deploy. 

Buurst SoftNAS is designed with high performance and cost in mind. It connects to your managed service provider's storage so that you can easily harness the capacity of cloud storage without the bottlenecks. The solution uses a SoftNAS cloud instance that has a direct connection to the cloud block storage and provides a private connection to clients owned by your organization. SoftNAS supports native protocols to optimize performance with your SaaS applications. The solution also includes support for L1/L2 cache on NVMe and SSD, delivering better throughput so you can run enterprise applications without sacrificing performance. 

Buurst SoftNAS is protocol-agnostic and relies on low-level block storage and an advanced caching layer for logging. Advanced caching results in outstanding performance and extreme affordability. Additionally, it has backup services that enable you to snapshot and recover your files efficiently, and it offers data deduplication to save storage space. SoftNAS can also replicate data to other regions and availability zones to make your data resilient. 

These managed file services have inconsistent performance characteristics. Both Azure Files and AWS NFS limit the IOPS and throughput you can get out of the filesystem by default unless you go to a premium storage tier; the cloud providers do allow additional throughput, but at a premium charge. 

Here are the results of the SoftNAS performance gains over Azure File and AWS NFS: 

Buurst on AWS: better throughput, better IOPS, better latency.

Buurst on Azure: better throughput, better IOPS, better latency.

SoftNAS gives you the full power of cloud block storage, allowing you to have scalable and predictable performance.  Here are some of the high-level reasons people choose to run SoftNAS: 

    • Scale IOPS to accommodate more high-performance workloads. 
    • Get consistent cloud storage performance as well as raw performance. 
    • Increase data throughput while using all the storage they pay for. 

SoftNAS can be useful to any SaaS company that cares about high-performing SaaS applications.

SoftNAS scales IOPS and throughput with the number and size of instances, so the SoftNAS offering is exceptionally flexible and scalable. High performance is both a business requirement and a storage requirement: performance is key to a successful SaaS application because users expect speed and reliability. 

Useful Links: 

Cloud Storage: Delivering on a performance priority eBook

What do we mean when we say no “Storage Tax”?

SoftNAS Deployment Guide for High-Performance SaaS

Getting Started:

AWS Marketplace

Azure Marketplace

Buurst Solutions

HA NFS For Kubernetes Using SoftNAS

Today I would like to show you how to create highly available NFS mounts for your Kubernetes pods using SoftNAS. Sometimes we require our microservices running on Kubernetes to be able to share common data, and a common NFS share is a natural way to achieve that. Since Kubernetes deployments are all about high availability, why not make our common NFS shares highly available as well? I'm going to show you a way to do that using a SoftNAS appliance to host the NFS mount points.

A Little Background on SoftNAS

SoftNAS is a software-defined storage appliance created by BUURST™ that you can deploy from the AWS Marketplace, the Azure Marketplace, or as a VMware OVA. SoftNAS allows you to create aggregate storage pools from cloud disk devices. These pools are then shared over NFS, SMB, or iSCSI for your applications and users to consume. SoftNAS can also run in HA mode, whereby you deploy two SoftNAS instances and fail over to the secondary node if the primary NFS server fails its health check. This means that my Kubernetes shared storage can fail over to another zone in AWS if my primary zone fails. It's accomplished by using a floating VIP that moves between the primary and secondary nodes, depending on which one has the active file system.

The Basic SoftNAS Architecture

For this blog post, I'm going to use AWS. I'm using Ubuntu 18.04 for my Kubernetes nodes. I have deployed two SoftNAS nodes from the AWS Marketplace: one in US-WEST-2A and the other in US-WEST-2B. I went through the easy HA setup wizard and chose a random VIP of 99.99.99.99. The VIP can be whatever IP address you like, since it's only routed inside your private VPC. My KubeMaster is deployed in US-WEST-2B, and I have two KubeWorkers, one deployed in zone A and the other in zone B. The screenshot below shows what HA looks like when you enable it: my primary node on the left, my secondary node on the right, and the VIP that my Kubernetes nodes are using for this NFS share, 99.99.99.99. The status messages below the nodes show that replication from primary to secondary is working as expected.

Primary Node

Secondary Node

 The Kubernetes Pods

How To Mount The HA NFS Share from Kubernetes

Alright, enough background information; let's mount this VIP on the SoftNAS from our Kubernetes pods. Since SoftNAS exports this pool over NFS, you will mount it just like you would mount any other NFS share. In the example below, we will launch an Alpine Linux Docker image and have it mount the NFS share at the 99.99.99.99 VIP address. I'll name the deployment 'nfs-app'.
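For reference, the same export can be mounted manually from any Linux client with a standard NFS mount (after creating a local /mnt/nfs directory), using the example VIP and share path from this post:

'mount -t nfs 99.99.99.99:/kubepool/kubeshare /mnt/nfs'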

The file nfs-app.yaml will have the contents below. Notice that in the volumes section of the pod spec, we are using the HA VIP of the SoftNAS (99.99.99.99) and the path of our NFS share, '/kubepool/kubeshare':

Copy and paste the below configuration into a file called ‘nfs-app.yaml’ to use this example:

kind: Pod
apiVersion: v1
metadata:
  name: nfs-app
spec:
  containers:
    - name: app
      image: alpine
      volumeMounts:
        - name: nfs-volume
          mountPath: /mnt/nfs
      command: ["/bin/sh"]
      args: ["-c", "sleep 500000"]
  volumes:
    - name: nfs-volume
      nfs:
        server: 99.99.99.99
        path: /kubepool/kubeshare

So let’s launch this deployment using Kubernetes.

From my KubeMaster, I run the below command:

'kubectl create -f nfs-app.yaml'

 Now let’s make sure the node launched:

'kubectl get pods'

The nfs-app pod should appear in the output with a STATUS of Running. 

Let’s verify from the Kubernetes pod that our NFS share is mounted from that pod.

To do that, we will connect to the pod and check the storage mounts.

We should see an NFS mount to SoftNAS at the VIP address 99.99.99.99.

 Connect to the ‘nfs-app’ Kubernetes pod:

'kubectl exec --stdin --tty nfs-app -- /bin/sh'

Check for our mount to 99.99.99.99 like below:

'mount | grep 99.99.99.99'

In the output, you should see that the Kubernetes pod has mounted our SoftNAS NFS share at the VIP address using the NFSv4 protocol. 

Now you have configured highly available NFS mounts for your Kubernetes cluster. If you have any questions or need any more information regarding HA NFS for Kubernetes, please reach out to BUURST for sales or more technical information.

SoftNAS storage solutions for SAP HANA

As we all know, one of the most critical parts of any system is storage. Buurst SoftNAS brings cloud storage performance to SAP HANA. This blog post will make it easier for you to understand the options available with SoftNAS and SAP HANA to improve data performance and reduce your environment's complexity. You will also learn how to choose specific storage options for your SAP HANA environment.

SAP HANA is a platform for data analysis and business analytics. HANA can provide insights from your data faster than traditional RDBMS systems. Performance is essential for SAP HANA because it helps to provide information more quickly. SAP HANA is optimized to work with real-time data, so performance is a significant factor.

All data and metadata for the SAP HANA system are held in shared objects. These objects are copied from data tables to logical tables and accessed by the SAP HANA software. As this information grows, the impact on performance grows as well. SoftNAS can address these performance bottlenecks and accelerate operations that would otherwise be less efficient.

For example, the copy operation will undoubtedly be faster if you deploy storage with a read cache. Read cache is implemented with NVMe or SSD drives and helps copy parameters from source tables to specialized indexes. For tables that are written frequently, such as in ETL operations, storing data and logs on SoftNAS reduces complexity and improves resiliency, which reduces the overall risk of data loss. A Cloud NAS can also improve response times for data requests and other critical factors like resource management and disaster recovery.

Storing your data and log volumes on a NAS certainly improves resilience. Using a NAS with RAID also adds redundancy if something goes wrong with one of your drives. Utilizing RAID will not only help ensure your data is safe, it will also allow you to maintain a predictable level of performance when it comes time to scale up your software.

Partitioning of data volumes allows for efficient data distribution and high availability. Partitioning will also help you scale up your SAP HANA performance, which can be a challenge with only one large storage pool. Partitioning will involve allocating more than one volume to each table and moving the information across volumes over time.

SAP HANA supports persistent memory for database tables. Persistent memory retains data in memory between server reboots; without it, each restart requires time to boot, load the data, and then refresh it. With SAP HANA deployed on SoftNAS storage, loading times are not a problem at all. The more data you consume, the more you benefit from persistent memory, and while reloading records after a restart can take a long time, writes to storage work much better with SoftNAS.

SoftNAS data snapshots enable SAP HANA backups multiple times a day without the overhead of file-based solutions, eliminating lengthy consistency checks when backing up and dramatically reducing the time to restore the data. Schedule multiple backups a day with restore and recovery operations completed in a matter of minutes.

CPU and IO offloading help to support high-performance data processing. Reducing CPU and IO overhead effectively serves to increase database performance. By moving backup processes into the storage network, we can free up server resources to enable higher throughput and a lower Total Cost of Ownership (TCO).

You want to deploy SAP HANA because your business needs access to real-time information that allows you to make timely decisions with maximal value. SoftNAS is a cloud NAS that will enable you to develop real-time business applications, connecting your company with customers, partners, and employees in ways that you have never imagined before.

Designing your storage to meet cost and performance goals

Public cloud platforms like AWS and Azure offer a few different choices for persistent storage. Today I'm going to show you how to leverage a SoftNAS storage appliance with these different storage types to scale your application to meet your specific performance and cost goals in the public cloud. To get started, let's take a quick look at these storage types in both AWS and Azure to understand their characteristics. The high-performance disk types are more expensive, and the cost decreases as the performance level decreases. Refer to the table below for a quick reference.

References below:

Azure storage types

Amazon EBS volume types

Because I'm going to use SoftNAS as my primary storage controller in AWS or Azure, I can take advantage of all of the different disk types available on those platforms and design storage pools that meet each of my applications' performance and cost goals. I can create pools using high-performance devices along with pools that utilize magnetic media and object storage. I can even create tiered pools that utilize both SSD and HDD. Along with the flexibility of using different media types for my storage architecture, I can leverage the extra benefits of caching, snapshots, and file system replication that come along with using SoftNAS. There are tons of additional features that I could mention, but for this blog post, I'm only going to focus on the types of pools I can create and how to leverage the different disk types.

High-Performance Pools

I'll use AWS in this example. For an application that requires low latency and high IOPS, we would think about using SSDs like IO1 or GP2 as the underlying medium. Let's say we need our application to have 9k available IOPS and at least 2 TB of available storage. We can aggregate devices in a pool to get the sum of the throughput and IO of all the devices combined, or we can provision a single IO-optimized volume to achieve the performance target. Let's look at the underlying math and figure out what we should do.

We know that AWS GP2 EBS gives us 3 IOPS per GB of storage. With that in mind, 2 TB would only give us 6k IOPS, which is 3k short of our performance goal. To reach the 9k IOPS requirement, we would either need to provision 3 TB of GP2 EBS disk or provision an IO-optimized (IO1) EBS disk and set the IOPS to 9k for that device.

 

Any of the below configurations would allow you to achieve this benchmark using Buurst™ SoftNAS.
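As a rough sketch (the availability zone is a placeholder, and in practice SoftNAS can provision and add these devices for you from its UI), either option could be created with the AWS CLI. A single 3 TB GP2 volume (3 IOPS/GB x 3,072 GB gives roughly 9,200 IOPS):

# aws ec2 create-volume --volume-type gp2 --size 3072 --availability-zone us-west-2a

Or a single 2 TB IO1 volume with 9,000 provisioned IOPS:

# aws ec2 create-volume --volume-type io1 --size 2048 --iops 9000 --availability-zone us-west-2a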

Throughput Optimized Pools

If your storage IO specification does not require low latency but does require a higher level of throughput, then ST1-type EBS may work well for you. ST1 disk types are less expensive than GP2 or IO1 devices. The same rules apply regarding aggregating the throughput of the devices to achieve your throughput requirements. If we look at the specs for ST1 devices (link above), we are allowed up to 500 IOPS per device and a maximum of 500 MiB/s of throughput per device. If we require 1 TB of storage that achieves 1 GiB/s of throughput and 1,000 IOPS, we can design a pool that aggregates two ST1 devices to meet those requirements. It may look something like the configuration below:
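As a rough sketch of that aggregation (sizes here are placeholders; note that ST1 throughput also scales with volume size, so larger devices may be needed to actually reach the 500 MiB/s per-device ceiling), two ST1 devices added to one SoftNAS pool double the per-device limits to roughly 1 GiB/s and 1,000 IOPS:

# aws ec2 create-volume --volume-type st1 --size 512 --availability-zone us-west-2a
# aws ec2 create-volume --volume-type st1 --size 512 --availability-zone us-west-2a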

Pools for Archive and Less Frequently Accessed Data

If you need to store backups on disk or have a data set that is not frequently accessed, you can save money by storing that data on less expensive storage. Your options are magnetic media or object storage, and SoftNAS can also help you out with that. HDD in Azure or SC1 in AWS are good options for this. You can combine devices to achieve high capacity for this infrequently accessed or archival data. Throughput on HDD-type devices is limited to 250 MiB/s, but the capacity is higher and the cost is much lower compared to SSD-type devices. If we needed 64 TB of cold storage in AWS, it might look like the example below; the largest device in AWS is 16 TB, so we will use four.
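As a hedged illustration (the zone is a placeholder), the four 16 TB SC1 devices could be created with the AWS CLI before being pooled together in SoftNAS:

# for i in 1 2 3 4; do aws ec2 create-volume --volume-type sc1 --size 16384 --availability-zone us-west-2a; done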

Tiered Pools

Finally, I will mention tiered pools. Tiered pools are a feature of BUURST™ SoftNAS whereby you can have different levels of performance within the same pool. When you set up a tiered pool on SoftNAS, you can have a 'hot' tier made of fast SSD devices along with a 'cold' tier made of slower, less expensive HDD devices. You set block-level age policies so that less frequently accessed data migrates down to the cold-tier HDD devices while your frequently accessed data remains in the hot tier on the SSD devices. Let's say we want to provision 20 TB of storage, and we think that about 20% of our data would be active at any time while the other 80% could be on cold storage. An example of what that tiered pool may look like is below.
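As a hedged sizing sketch for that 20 TB example (device types and sizes are illustrative, not a prescription), the hot tier would hold roughly 20% of the data, about 4 TB of SSD:

# aws ec2 create-volume --volume-type gp2 --size 4096 --availability-zone us-west-2a

and the cold tier would hold the remaining 16 TB on HDD:

# aws ec2 create-volume --volume-type sc1 --size 16384 --availability-zone us-west-2a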

The tier migration policy has the following configuration:

  • Maximum block age: the age limit of blocks, in seconds.
  • Reverse migration grace period: if a block is requested from the lower tier within this period, it will be migrated back up.
  • Migration interval: the time in seconds between migration checks.
  • Hot tier storage threshold: if the hot tier fills to this level, data is migrated off.
  • Alternate block age: an additional age used to migrate blocks when the hot tier becomes full.

Summary

If you are looking for a way to tune your storage pools based on your application requirements, then you should try SoftNAS. It gives you the flexibility to leverage and combine different storage media to achieve the cost, performance, and scalability you are looking for. Feel free to reach out to the BUURST™ sales team for more information.

Mounting iSCSI inside SoftNAS to copy data using rsync

In this blog post we are going to discuss how to mount ZFS iSCSI LUNs that have already been formatted with NTFS inside SoftNAS to access the data. This use case mostly applies to VMware environments where you have a single SoftNAS node on failing hardware and want to quickly migrate your data to a newer version of SoftNAS using rsync. However, it can also be applied to other iSCSI data recovery scenarios.

For the purposes of this blog post, the following terminology will be used: Node A is the existing (source) SoftNAS node that currently holds the data, and Node B is the new (target) SoftNAS node that will receive it.

At this point we assume we already have our new SoftNAS (Node B) deployed, with its pool and iSCSI LUN configured and ready to receive the rsync stream from Node A. We won't be discussing that setup, as it is not the main focus of this blog post.

That said, let’s get started!

1. On Node B, do the following:

From the UI, go to Settings -> General System Settings -> Servers -> SSH Server -> Authentication, then change "Allow authentication by password?" to "YES" and "Allow login by root?" to "YES".

Restart the SSH server.
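On the CentOS-based SoftNAS image this can be done from the command line, for example:

# systemctl restart sshd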

NOTE: Please take note of these changes, as you will need to revert them to their defaults afterward for security reasons.

2. From Node A:

Let's set up SSH keys to push to Node B to allow a seamless rsync experience. After this step we should be able to connect to Node B as root@Node-B-IP without requiring a password. However, if a passphrase was set, you will be required to provide the passphrase every time you try to connect via SSH, so in the interest of convenience and time, don't use a passphrase; just leave it blank:

a. Create the RSA Key Pair:
# ssh-keygen -t rsa -b 2048

b. Use the default location /root/.ssh/id_rsa and set a passphrase if required.

c. The public key is now located in /root/.ssh/id_rsa.pub

d. The private key (identification) is now located in /root/.ssh/id_rsa

3. Copy the public key to Node B

Use the ssh-copy-id command, replacing the user and IP address with Node B's credentials.

Example:

# ssh-copy-id root@10.0.2.97

Alternatively, copy the content of /root/.ssh/id_rsa.pub to /root/.ssh/authorized_keys on the second server.
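For example, that manual copy can be done in one line from Node A (using the same example IP as above):

# cat /root/.ssh/id_rsa.pub | ssh root@10.0.2.97 'cat >> /root/.ssh/authorized_keys'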

Now we are ready to mount our iSCSI volume on Node A and Node B respectively. A single volume on each node is used in this blog post, but the steps apply to multiple iSCSI volumes as well.

Before we proceed, please make sure that no ZFS iSCSI LUNs are mounted in Windows before moving on to step 4; otherwise, the NTFS volumes will mount as read-only inside SoftNAS, which is not what we want. This is because the current iSCSI implementation doesn't allow multipath access at the same time.

To unmount, simply head over to "Computer Management" in Windows, right-click on the iSCSI LUN, and click "Offline". Please see the screenshots below for reference.

4. Mount the NTFS LUN inside SoftNAS

We need to install the package below on both Node A and Node B to allow us to mount the NTFS LUN inside SoftNAS/Buurst.

# yum install -y ntfs-3g

5. Login to the iSCSI LUN

Now, from the CLI on Node A, let's log in to the iSCSI LUN. We'll run the commands below in order, substituting the IP address and target name with the correct values from Node A:

# iscsiadm -m discovery -t st --portal 10.10.1.4

# iscsiadm -m node -T iqn.2020-07.com.softnas:storage.target1 -p 10.10.1.4 -l

Successfully executing the commands above will present you with the screenshot below:

6. iSCSI disk on Node A

Now we can run lsblk to see our new iSCSI disk on Node A. In the screenshot below, our new iSCSI disk is /dev/sdd1. You can run this command ahead of time to take note of your current disk mappings before logging into the LUN; this will let you quickly identify the new disk after mounting. Often it is the first disk device in the output.
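For reference, the command is simply:

# lsblk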

7. NTFS Volume

Now we can mount our NTFS volume to expose the data, but first we'll create a mount point called /mnt/ntfs.

# mkdir /mnt/ntfs

# mount -t ntfs /dev/sdd1  /mnt/ntfs

The Configuration on Node A is complete!

8. rsync script

Let's now perform steps 5 to 7 again, this time on Node B.

Now we are ready to run our rsync command to copy our data over from Node A to Node B.

9. Seeding data

We can run the command below on Node A to start seeding the data over from Node A to Node B:

# rsync -Aavh /mnt/ntfs/* root@10.10.1.7:/mnt/ntfs

Once the copy is done, we'll have an exact replica of our iSCSI data on Node B!
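If you want a quick sanity check, a dry run of the same rsync command (the added -n flag means nothing is actually copied) should report no remaining files to transfer:

# rsync -Aavhn /mnt/ntfs/* root@10.10.1.7:/mnt/ntfs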