Consolidating File Servers into the Cloud

Cloud File Server Consolidation Overview

Maybe your business has outgrown its file servers and you’re thinking of replacing them. Or your servers are located throughout the world, so you’re considering shutting them down and moving to the cloud. It might be that you’re starting a new business and wondering if an in-house server is adequate or if you should adopt cloud technology from the start.

Regardless of why you’re debating a physical file server versus a cloud-based file server, it’s a tough decision that will impact your business on a daily basis. We know there’s a lot to think about, and we’re here to show why you should consolidate your physical file servers and move your data to the cloud.

We’ll discuss the state of the file server market and the benefits of cloud file sharing. We’ll also cover some of the challenges, and some of the newest technologies stepping up to those challenges, of unstructured data that no longer sits in one place but is scattered around the world.

Managing Unstructured Data

The image below is how Gartner looks at unstructured data in the enterprise. The biggest footprint of data that you have as an enterprise or a commercial user is your unstructured data. It’s your files.

[Image: Gartner’s view of unstructured data in the enterprise]

The well-understood case is where you buy a single large platform, maybe a petabyte or even larger, to house all of that file data in the data center. What creeps up on us is the data that doesn’t live in the data center, the data that isn’t right under your nose and surrounded by best practices. That’s the data on distributed file servers that live around the world. An enterprise with 50 or 100 locations, be they branch offices, distribution centers, manufacturing facilities, oil rigs, and so on, effectively has 50 or 100 little data centers.

The analyst community (Gartner, Forrester and 451) tells us that almost 80% of the unstructured data you’re dealing with actually sits outside of your well-protected data center. This presents challenges for an enterprise because that data is outside of your control.

It’s been difficult to leverage the cloud for unstructured data. Customers by and large are being fairly successful moving workloads and applications to the cloud, along with the storage those applications use. However, when you’re talking about user data and your users are all around the world, you’re dealing with distance, latency, general network unavailability and multiple hops through routing.

That has led to some significant challenges, such as data islands popping up everywhere. You have massive amounts of corporate data that isn’t subject to the same data management and security you would have in an enterprise data center, including backup, recovery, audit, compliance, secure networks and even physical access.

And that is what has led to a real “bleeding from the neck” problem: how am I going to get this huge amount of data around the world under our control?

Unstructured Data Challenges

These are some of the issues you find: security problems and lost files. Users call in and say, “Oops, I made a mistake. Can you restore this for me?” And the answer quite often is, “No. You people in that location are supposed to be backing up your own file server.”

Bandwidth issues are significant as people try to have everyone in the world work from a single version of the truth, all looking at the same data. But how do you do that when it’s file data?

You have a location in London trying to ship big files to New York. NY then makes some changes and ships the files to India. Yet people are in different time zones. How do you make sure they’re all working off of the same version of information? That has led to the kind of problems driving people to the cloud. Large enterprises are trying to get to the cloud not only with their applications, but with their data.

If you look at what Gartner and IDC say about the move to the cloud, you see that larger enterprises have a cloud-first strategy. We’re seeing SMBs (small and medium businesses) and SMEs (small and medium enterprises) also have a cloud-first strategy. They’re embracing the cloud and moving significant amounts of their workloads to the cloud.

[Image: cloud-first adoption trends in the enterprise]

More companies are going to adopt public cloud IT infrastructure at the expense of private clouds. We see customers all the time saying, “I have a 300,000 sq. ft. data center. My objective is to have a 100,000 sq. ft. data center within the next few months.”

NAS/SAN vs. Hyperconverged vs. The Cloud

And so many customers are now saying, “What am I going to do next? My maintenance renewal is coming up. My capacity is reaching its limit because unstructured data is growing in excess of 30% annually in the enterprise.”

Am I going to add more on-premises storage for my files? Am I going to take all of my branch offices that are currently at 4 terabytes and double them to 8 terabytes?

You have probably seen the emergence of hyperconverged hardware: single-instance infrastructure platforms that handle applications, networking and storage. It’s a newer, different way of running an on-premises infrastructure. With a hyperconverged infrastructure, you still have some forklift upgrade work, both in terms of the hardware platform and in terms of the data.

[Image: NAS/SAN vs. hyperconverged vs. cloud]

Customers that are moving off of traditional NAS and SAN systems onto hyperconverged have to bring in the new hardware, migrate all the data, and get rid of the old hardware, so it’s still a lift and shift in terms of the data center as well as the footprint.

Because of that, a lot of SoftNAS customers are asking, “Is it possible to do a lift and shift to the cloud? I want to get the infrastructure out of my data center and out of my branch offices. I don’t want to be in the file server business. I want to be in the banking, or the retail, or the transportation business.”

I want to let the cloud providers (Azure, AWS, or Google) supply the physical resources, but it’s my data and I want everybody to have access to it. That’s opened the world to a lift and shift into a cloud-based infrastructure, which means you and your peers are going through a pros-and-cons discussion. If you look at on-premises versus hyperconverged versus the cloud, the good news is that all of them have a secure infrastructure available. That runs from the level of physical access, authentication and encryption (in transit, at rest and in use) all the way down to rights management.

[Image: security and infrastructure management across NAS, hyperconverged and cloud]

What you’ll find is that all the layers of security apply across the board, and in that area the cloud has become stronger in the last 24 months. Then there’s infrastructure management, which is becoming a key budget line item for most IT enterprises. For on-premises and hyperconverged, you’re managing all of it: you’re spending time and effort on physical space, power, cooling, upgrade planning, capacity planning, uptime and availability, disaster recovery, audit and compliance.

The good news with the cloud is that you get to offload that to someone else. Probably the biggest benefit we see is scalability. Businesses tell us, “I have a pretty good handle on the growth rate of my structured data, but my unstructured data is an unpredictable beast. It can change overnight. We may acquire another company and find out we have to double the size of our unstructured data share. How do I do that?” Scalability is a complicated task if you’re running an on-premises infrastructure.

With the cloud, someone else is doing it, whether at AWS, Azure, Google, etc. From a disaster recovery perspective, you pretty much get to ride on the backs of established infrastructure. The big cloud providers have vast staff and equipment to ensure that failover, availability, pointing to a second copy, rollback and so on have already been implemented and tested.

Adding more storage becomes easy too. From a financial perspective, the way you pay for an on-premises environment is that you buy your infrastructure up front and then use it. It’s the same with hyperconverged, although it has lower price points than traditional legacy NAS and SAN. The fact is that only the cloud gives you the ability to say, “I’m going to pay for exactly what I need. I’m not buying 2 terabytes because I currently need 1.2 terabytes and I’m growing 30% per annum.” If you’re using 1.2143 terabytes, that’s what you pay for in the cloud.

A Single Set of Data

But just as important, customers have found there is a business use case. There is the ability to do things from a centralized, consolidated cloud viewpoint that you simply cannot do with a traditional distributed storage infrastructure.

If you think about what customers are asking for now, more and more enterprises are saying “I want centralized data.” That’s one of the reasons they’re moving to the cloud. They want security. They want to make sure that it’s using best practices in terms of authentication, encryption, and token management. And whatever they use has to be able to scale up for their business.

[Image: centralized cloud file server requirements]

But how about from a use case perspective? You need to make sure you have data consistency. Meaning, if I have people on my team in California, New York and London, I need to make sure they’re not stepping on each other’s work as they collaborate on projects.

You need to make sure you have flexibility. If you’re getting rid of old infrastructure in 20 or 30 branch offices, then you need to retire it easily and quickly spin up access to centralized data within minutes, not the hours and weeks of waiting for new hardware to come in.

Going back to data consistency: if I’m going to have one copy of the truth that everyone is using, I need to make sure distributed file locking works. Because let’s face it, that’s what file servers do; it has been the foundation of file servers since they were invented. Those are the kinds of benefits realized by people who move their file servers into the cloud. They cut costs and increase flexibility.

Cloud File Server Reference Architecture

Here’s an example. In the image below, a SoftNAS customer needed to build a highly available 100 TB Cloud NAS on AWS. The NAS needs to be accessed in the cloud via the CIFS protocol, and the data needs to be available beyond the primary location, across the region and on different continents.

[Image: cloud file server consolidation reference architecture]

They needed access from remote offices. They also needed Active Directory integration and distributed file locking.

The solution, delivered along with Talon FAST, deployed two SoftNAS instances in the customer’s VPC, in two separate availability zones: controller A in one zone and controller B in the second. We leveraged S3 and EBS for different types of applications according to their SLAs.

We set up replication between the two nodes so the data is available in two different availability zones within the region. We deployed HA on top of that to deliver availability with minimal downtime, giving you the flexibility to migrate data or flip to the other node without manual intervention.
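
As a rough illustration of the client side of this architecture, here is how a branch-office Linux machine might mount a CIFS share served by the Cloud NAS through its virtual IP. This is a hedged sketch: the hostname, share name, domain and username are placeholders, not values from the actual deployment.

    # Mount the Cloud NAS CIFS share from a branch-office Linux client.
    # nas-vip.example.com, "projects" and CORP/svc_files are placeholders.
    sudo mkdir -p /mnt/projects
    sudo mount -t cifs //nas-vip.example.com/projects /mnt/projects \
        -o vers=3.0,domain=CORP,username=svc_files

    # A Windows client would map the same share as a drive letter:
    #   net use P: \\nas-vip.example.com\projects /persistent:yes

Because every office mounts the same share, everyone works against the same copy of the data instead of shipping files between sites.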

Next Steps

You can also try SoftNAS Cloud NAS free for 30 days to start consolidating your file servers in the cloud:


Docker Persistent Storage on AWS

Persistent storage is critical when running applications across containers on AWS. In this article, we cover how to build persistent storage for Docker containers on AWS. Learn best practices to spin-up, spin-down and move containerized applications across AWS environments, whether running Docker or Amazon EC2 Container Services (ECS).

You can jump to different sections of the article by clicking the hyperlinks below:

  1. Resources (Recording video, slides, whitepapers)
  2. What is Docker?
  3. Virtual Machines vs. Containers
  4. Why Does Docker Persistent Storage Matter?
  5. Application Delivery with Persistent Storage
  6. Amazon EC2 Container Service
  7. Docker and SoftNAS Cloud NAS
  8. SoftNAS Cloud NAS Overview
  9. Docker Persistent Storage Q&A

SlideShare: How to Build Docker Persistent Storage on AWS

SoftNAS Cloud NAS on the AWS Marketplace: Visit SoftNAS on the AWS Marketplace

What is Docker?

[Image: What is Docker?]

What is Docker and what are containers?

Containers running on a single machine share the same operating system kernel. They start instantly and make more efficient use of RAM. Images are constructed from layered file systems, so containers share common files, which makes disk usage and image downloads much more efficient. Docker containers are based on open standards, which allows them to run on major Linux distributions and Microsoft operating systems. Containers isolate applications from each other and from the underlying infrastructure, providing an added layer of protection for the application.
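
To make the isolation and shared-layer points concrete, here is a minimal sketch using the stock Docker CLI and a public Alpine image; the image tag and container names are just examples.

    # Two containers started from the same image share its read-only layers;
    # each gets only a thin writable layer of its own.
    docker pull alpine:3.19
    docker run -d --name c1 alpine:3.19 sleep 600
    docker run -d --name c2 alpine:3.19 sleep 600

    # Each container sees an isolated filesystem and process space.
    docker exec c1 touch /tmp/only-in-c1
    docker exec c2 ls /tmp    # empty: c2 cannot see c1's file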

Virtual Machines vs. Containers

[Image: virtual machines vs. containers]

People often ask, “How are virtual machines and containers different?” Containers have resource isolation and allocation benefits similar to virtual machines, but a different architectural approach that allows them to be much more portable and efficient. A virtual machine includes the application and its necessary binaries and libraries, but also carries the overhead of an entire guest operating system, which can take tens of gigabytes. It’s a challenge that virtual desktop people have taken on.

Containers take a very different approach, with many containers running inside a single instance or virtual machine. They isolate processes and user space without being tied to any specific infrastructure, so they’re much more portable and can run virtually anywhere there’s a Docker infrastructure. The benefits of Docker containers over VMs are less overhead, faster instantiation, better isolation, and easier scalability. Another benefit of containers over virtualization is that containers are a great fit for automation.

So why does DevOps care? Again, it’s all about automation: setup, launch and run. Don’t worry about what hardware you’re on. Don’t worry about finding the drivers for your servers. Now you can focus on your lifecycle repeatability and not worry about keeping your infrastructure going.

Why Does Docker Persistent Storage Matter?

[Image: why persistent storage matters for DevOps]

Why does persistent storage matter for Docker? We need to think about what our storage options are. Docker containers come with their own ephemeral storage; you can use the storage inside the container if you want. The huge problem with that is simple: it goes away when the container is gone. Container storage is useful as a scratch pad, but not great if you have data you want to keep. So that’s the storage problem.

Docker containers can also mount directories from the host instance on AWS. In that case, storage can be shared by all containers that run within that host. So what are the issues? You can deploy a cluster of instances to house containers, and your containers move around those different instances and hosts, but the storage stays with each host, and there’s no guarantee you can share it across them.
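
Here is a quick sketch of the two options just described, using plain Docker commands; the paths and names are illustrative.

    # Option 1: write into the container's own writable layer.
    # The data disappears when the container is removed.
    docker run --name scratch alpine:3.19 sh -c 'echo hello > /tmp/data.txt'
    docker rm scratch           # /tmp/data.txt is gone for good

    # Option 2: bind-mount a directory from the host instance.
    # The data outlives the container, but only containers on THIS host see it.
    docker run --rm -v /opt/appdata:/data alpine:3.19 \
        sh -c 'echo hello > /data/data.txt'
    cat /opt/appdata/data.txt   # still there after the container exits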

[Image: Docker container storage options]

Network storage is a much better option because now you can share storage like you used to: you can access it from anywhere. Then there’s cloud storage such as EBS and S3. If you want block storage, block doesn’t share very well. If you want S3, you have to code your containers to work directly with object storage. SoftNAS Cloud NAS gives you that middle-ground option, combining network storage with native cloud storage: put CIFS shares, NFS shares, etc., onto your cloud storage and have a complete solution.
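
As one hedged example of the network storage option, Docker’s built-in local volume driver can mount an NFS export directly; the address and export path below stand in for a NAS virtual IP and an exported volume.

    # Create a named volume backed by an NFS export.
    docker volume create --driver local \
        --opt type=nfs \
        --opt o=addr=10.0.1.10,rw \
        --opt device=:/softnas-pool/vol1 \
        shared-data

    # Containers on any host that defines the same volume
    # read and write the same files.
    docker run --rm -v shared-data:/data alpine:3.19 ls /data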

Application Delivery with Persistent Storage

[Image: application delivery with persistent storage]

Let’s talk about application delivery with persistent storage. If you look at container services, there are really three components in a container service. There’s your front-end service; think of that as what you see, the part that presents information, often on a webpage. Your back-end services provide the APIs behind the front end and the execution part of the application within Docker. Then there are data storage services. If you use SoftNAS Cloud NAS as your data storage service, you get high availability and persistence.

So what does it mean to really use ECS and SoftNAS together? You’re going to use Amazon’s clustering to kick off a cluster of container instances, with auto-scaling.

[Image: Docker persistent storage on AWS ECS]

By doing this, we can have a SoftNAS Cloud NAS instance in one availability zone using our virtual IP address and mount that storage a couple of ways. You can mount it directly into the containers so they use NFS directly, or you can mount SoftNAS Cloud NAS into the container instance itself. The latter lets each container use it as local storage, which lessens the capacity you need on your container instances while still providing the shareability you planned on.

The other thing we stress with ECS and Docker containers is that you really want to stretch them across a couple of availability zones. That way, if an AZ completely goes out, auto-scaling can help by bringing up new container instances. This distributes the load onto new containers, and by continuing to access the SoftNAS virtual IP, you keep your storage and stay online.
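
One hedged way to wire this up is to mount the NAS export on each container instance at boot and let tasks bind-mount it. The sketch below is user data for an Amazon Linux ECS host; the virtual IP and paths are placeholders.

    #!/bin/bash
    # ECS container-instance user data (sketch): mount the NAS export at boot
    # so every task on this host can treat it as local storage.
    # 10.0.1.10 stands in for the SoftNAS virtual IP.
    yum install -y nfs-utils
    mkdir -p /mnt/softnas
    echo '10.0.1.10:/softnas-pool/vol1 /mnt/softnas nfs4 defaults,_netdev 0 0' >> /etc/fstab
    mount -a

A task definition can then map that path into containers with a host volume, for example "volumes": [{"name": "shared", "host": {"sourcePath": "/mnt/softnas"}}].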

Amazon EC2 Container Service

[Image: Amazon EC2 Container Service]

Now let’s go into Amazon EC2 Container Service (ECS). It’s a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster of Amazon EC2 instances. ECS lets you launch and stop container-enabled applications with simple API calls, allows you to get the state of your cluster from a centralized service, and gives you access to many familiar Amazon features.

Amazon ECS schedules the placement of containers across your cluster based on resource needs, isolation policies and availability requirements; having that ability to schedule is important. ECS also eliminates the need for you to operate your own cluster management and configuration management systems, so you don’t have to worry about scaling your management infrastructure.
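
Those simple API calls are also exposed through the AWS CLI. A minimal sketch, where the cluster and task names are placeholders:

    # Create a cluster, register a task definition, run it,
    # and query cluster state from the centralized service.
    aws ecs create-cluster --cluster-name demo-cluster
    aws ecs register-task-definition --cli-input-json file://webapp-task.json
    aws ecs run-task --cluster demo-cluster --task-definition webapp:1 --count 2
    aws ecs describe-clusters --clusters demo-cluster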

One of the benefits of ECS is being able to easily manage clusters at any scale, with flexible container placement. You want containers to flow across availability zones most of the time.

[Image: Amazon ECS benefits]

Docker and SoftNAS Cloud NAS

[Image: Docker and SoftNAS Cloud NAS]

Let’s talk about why SoftNAS matters to Docker. Again, storage for Docker and container services lives in multiple places. There’s temporary storage in the containers, and there’s global storage on the server, the instance the containers are running on. But if you really want your storage to be portable, accessible and highly available, you need the capabilities that SoftNAS Cloud NAS provides. We try to make it simple.

Part of that is making sure we have APIs that plug into the automation systems that go along with ECS. The setup we showed you had high availability configured as part of that CloudFormation template (CFT) deployment. Having that kind of capability in a quick deployment is powerful, and it keeps you agile; when you think about DevOps, they want to be fast and they want to be quick.

We have a feature called SnapClone that lets you take snapshots on demand or on a schedule, for example hourly snapshots during business hours turned on as part of that container cluster deployment. Each of those snapshots can be mounted as what we call a SnapClone, a space-efficient writable snapshot, and then used for something like a DevOps test case where you want to test against real production data without, heaven forbid, damaging your real production data. SnapClones are very useful that way, and they also make continuous deployment easier.

What are the read/write latency, maximum throughput and IO costs for SoftNAS? We don’t use the ephemeral storage except as read cache on an EC2 instance, so the backing storage is EBS General Purpose or EBS Provisioned IOPS. What percentage of that is SoftNAS overhead? It’s very light in terms of overall IO.

The reason you might want to do that is our file system. We do background scanning, so if you want an additional layer of protection, we can recognize that some bit rot occurred or something went wrong in EBS, and with the underlying file system we can fix it. As for overhead: in this case I configured the system for mirroring, which means every write is two writes, while every read still maps to one read, going to the most available storage. If we’re doing something like RAID 5, we do a parity write, but there’s no read-back.

One additional thought: in some scenarios, SoftNAS will actually improve the IO profile simply because of the way we use ZFS on the backend with read caching. If you’re re-reading something that’s been pulled into cache, you’ll typically see an even better IO profile than what the raw storage provides. There are definitely performance considerations tied to the way we’re structured and the way the product is designed.

SoftNAS Cloud NAS Overview

[Image: SoftNAS Cloud NAS architecture]

Let’s talk about SoftNAS Cloud NAS and what it is. SoftNAS is a software-based cloud NAS filer, delivered on Amazon through the AWS Marketplace as an EC2 instance. One of the huge benefits you get from SoftNAS is being able to use cloud-native storage to deliver files, via NFS for Linux and CIFS for Microsoft, and blocks through an iSCSI interface.

You can layer that on the different types of EBS volumes, whether provisioned IOPS, general purpose or any of the other flavors, or on object storage such as Amazon S3. SoftNAS takes S3 and EBS devices and aggregates them into storage pools, then carves those pools into volumes and exports them through the interfaces I mentioned, along with AFP. Being able to take that cloud-native storage and provide it to software that expects to work with shared files is a huge benefit.
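
Since SoftNAS is built on ZFS (see the Q&A below), the pool-and-volume layering described here maps closely onto plain ZFS commands. The sketch below only illustrates the concept, it is not the product’s own interface, and the device names stand in for attached EBS disks.

    # Aggregate block devices into a storage pool, carve out a volume,
    # and export it over NFS (illustrative ZFS, not SoftNAS's own CLI).
    zpool create pool1 /dev/xvdf /dev/xvdg
    zfs create pool1/vol1
    zfs set sharenfs=on pool1/vol1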

Another huge benefit is our ability to replicate data, and through data replication to also provide high availability with a pair of instances, most often in separate availability zones, with the data replicated between them. The secondary monitors the primary’s health through network and storage heartbeats, and can take over and continue to provide uninterrupted service to the servers, the users and the files. That’s just as important with Docker: if you’re going to spin up two, four, five hundred containers that all want shared storage, you want your infrastructure to stay up the entire time so you don’t have any outages. Since we go across availability zones, even if a whole Amazon availability zone within a region goes down, then through auto-provisioning of your containers and SoftNAS takeover you have completely uninterrupted service.

Docker Persistent Storage Q&A

  1. SnapReplicate and public Elastic IPs: if everything is private, how do you serve storage using private IPs, which are specific to a subnet in a single availability zone on AWS?
    • We have two modes of HA, both with a virtual IP. For the longest time we’ve supported HA through Amazon’s Elastic IP, and that does use the public IP, of course. The other mode is our private virtual IP, and in that case everything is completely private. We manage the route tables between availability zones and move that virtual IP between the primary and secondary instance; that’s how we deliver it (see the sketch after this Q&A list).
  2. Can EBS volumes be encrypted with AWS key management service?
    • We have encryption built into our product for data at rest, using a common third-party encryption package called LUKS, so we can encrypt the data on disk. We also have a good application guide on how to do data-in-flight encryption for both NFS and CIFS.
  3. Is there a way to backup the SoftNAS managed storage? What types of recovery can we leverage?
    • We’ve built into our product the ability to back up storage pools through EBS snapshots. If you’re familiar with EBS snapshots, they take a full copy of your volume and then track the changes from there. We’ve built that into the storage pool panel in the UI, so you can take full backups that way and restore the full storage pool as well. But that’s just one avenue for backup. What we’re doing with storage in the public cloud, Amazon in this case, mirrors any other enterprise-class storage product or enterprise-class NAS, and we highly recommend that you have a complete backup and recovery strategy. There are a lot of really good products on the market today, some that we’ve integrated and tested in our lab and a lot of others that I’m sure work just fine, because we’re completely about open standards. With our storage, or anybody’s storage, you want a very comprehensive backup plan that uses those third-party products.
  4. What RAID types are being used under SoftNAS?
    • In the deployment we built for containers, that’s RAID 1, but that was just a choice. On Amazon, the short answer is we support RAID 0, RAID 1, RAID 5 and RAID 6. On already-durable storage like EBS, I would probably use RAID 1. If you deploy us in a data center on raw drives, that’s where you want to look hard at RAID 5 and RAID 6, because as drives have gotten bigger and bigger, rebuilds take a while, and if a drive fails and another one develops some type of bit error during the rebuild, you want to be able to recover from the others. Those kinds of factors go into the choice; we support the whole gamut of RAID levels.
  5. What version of NFS do we support?
  6. What is the underlying file system used by SoftNAS?
    • At SoftNAS we are very much an open standards, open source company. We’ve built the Z File System, commonly referred to as ZFS, into our product.
  7. What is the maximum storage capacity of the SoftNAS instance?
    • We don’t really have a limit that we enforce ourselves. Amazon has certain rules for the number of drive mounts it provides, but if you’re using S3, our capacity range is virtually limitless; we’ll quote up to 6PB. On the AWS Marketplace, we do have editions based on the capacity they’ll manage: our Express edition manages up to 1TB, our Standard edition up to 20TB, and then we have a BYOL edition.
  8. Is it possible to get a trial version of SoftNAS?
    • Yes, it is. Through the AWS Marketplace, we have a 30-day trial as long as you’ve never tried our product before. It works out of the box there, right off the console. If you would like to try it through BYOL, contact our sales team at sales@softnas.com.
  9. Is it a good idea to use SoftNAS as a backup target?
    • Yes. It’s a common use case for us since we enable native cloud storage. Even with on-premises storage, you could have a backup plan of keeping your nightly backups on very fast storage such as EBS, or SSD or spinning disk in your data center, but then also have a storage pool made of S3 object storage up in the cloud and use that for your weekly archival. It’s very common for people using a product like VMware to use SoftNAS as a backup target.
  10. Is it a good idea to replace a physical NAS device with SoftNAS?
    • The considerations when replacing that type of solution come down to where you want to store your backups: whether you want to leverage the cloud or retain them locally. If you want to retain them locally, we have an offering that allows you to connect to local disks, and you have a lot of flexibility in the types of disks you can attach: local attached disks, iSCSI targets, as well as tying into S3. You can have a local instance that’s tied into S3 for all object storage.
    • Additionally, a secondary option would be to have a SoftNAS node deployed within the cloud and use that as a backup target; essentially you’re getting a two-for-one with that strategy. By backing up to that target, you get a backup storage resource that you don’t have to store on premises, and it also offers a disaster recovery strategy, since your backups are stored offsite. Those are two approaches that might make sense for that particular scenario.
  11. Is it advisable to utilize an AWS-based SoftNAS instance for on premise apps?
    • I advise against deploying SoftNAS into an Amazon VPC and accessing it remotely through NFS or CIFS; those protocols are very chatty and degrade over long distances. What is common is to deploy SoftNAS into your VMware cluster in your own virtualized data center, then mount Amazon S3 into a storage pool and back up your applications to storage pools used in your data center. It’s great for backups.
    • You’ll have to be somewhat sensitive to latency. There are applications this would not be great for, because of the IO latency between your data center and the Amazon region where the S3 storage lives. For example, it wouldn’t be a good idea for something like a database with transactional IO. For backup to S3, you put SoftNAS in your data center and back up to the storage pool.
    • A typical customer use case is to segment out the hot data, which is highly active and typically a smaller subset of the overall data set. One approach is to tie into object storage hosted in the cloud for cool data: use on-premises storage, which isn’t affected by latency, to service hot data requirements, then leverage S3 object storage as the larger backend repository for your cool data.
  12. If we use S3 as a storage pool for on premise, does it provide write back caching?
    • With on-premises deployments, we’re able to leverage high-performance local disk as a block cache file to front-end the S3 storage. It functions like a page file for read and write operations, providing higher-performance caching for S3 access and enhancing overall performance.
    • Using that cache for both reads and writes, we also do read-aheads, which makes handling reads and writes easier.
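
For the private virtual IP mode described in question 1 above, the mechanism is a VPC route-table update: a /32 route for the virtual IP is pointed at whichever node is currently primary. A sketch with placeholder IDs:

    # Point the virtual IP's /32 route at the new primary node.
    # The route table ID, address and instance ID are placeholders.
    aws ec2 replace-route \
        --route-table-id rtb-0123456789abcdef0 \
        --destination-cidr-block 10.9.9.9/32 \
        --instance-id i-0fedcba9876543210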

We hope that you found the content useful and that you gained something from it. Hopefully, you don’t feel we marketed SoftNAS Cloud NAS too much; our goal here was just to pass on some information about how to build Docker persistent storage on AWS. As you make the journey to the cloud, hopefully this saves you from tripping over some common issues.

We’d like to invite you to try SoftNAS Cloud NAS on AWS with our SoftNAS 30-day trial.


Storage Reflections by SoftNAS on theCube at #reInvent2014

CRN released its 10 Storage Predictions for 2015 a few days ago, and if there’s anything we can learn from such decisive subheads as “Storage Revenue: 2015 Will See Growth, Or Not,” it’s that the future of the cloud market is really up in the air (with no end to terrible cloud puns in sight).

There are, however, some clear cloud trends that are proving to be more than a passing fad, including the notable rise of Docker containers, as Forrester validates in its own 2015 predictions.

It’s easy to get caught up in all the cloud wars speculation, but with so much ambiguity, you might want to stay grounded with SoftNAS’ interview with theCube, in which SoftNAS CEO Rick Braddy discusses actual use cases (with SoftNAS’ own nod to Docker) and cloud storage trends that were definitively occurring at the end of 2014 during re:Invent 2014.

SoftNAS Releases First Cloud Computing Storage API and CLI for DevOps teams

SoftNAS has released the first cloud computing storage API and CLI for DevOps teams using Amazon’s EC2 and VMware platforms.

SoftNAS adoption on cloud computing platforms, like Amazon Web Services’ EC2, has been steadily accelerating through Q4 2013 and into early 2014. One of the things we’re seeing more and more is DevOps organizations in larger companies adopting SoftNAS. As a result, more development shops are looking to incorporate SoftNAS into their entire DevOps process, from development through QA test and into production, as a common storage platform.

At the same time, DevOps is automating everything possible, including deployment of the IT infrastructure, to create truly “elastic” systems that expand to deal with demand on the fly during peak periods of traffic, then automatically contract down during non-business hours to minimize operating costs. This is the promise of the cloud – get as much capacity as you need, when you need it, and only pay for what you use. At the same time, the ability to quickly spin up a new QA test or development system makes agile development and testing more efficient and less labor-intensive.

Given that shared storage is a key part of the IT infrastructure for cloud computing, a way to include SoftNAS in CloudFormation templates, Auto-scaling groups, and other cloud computing automation systems is critical, especially for Amazon Web Services customers.

To meet these demands of DevOps and cloud computing operators, SoftNAS has introduced the first cloud storage API and CLI for its flagship SoftNAS product line. The API provides access to the same robust storage administration and management functionality provided by the SoftNAS StorageCenter GUI, through REST API calls and a simple command-line interface.

The API and CLI support adding EBS and S3 cloud disk devices to SoftNAS, creating and expanding storage pool capacity on the fly, and adding new volumes and making them available as NFS and CIFS shares, all without direct human intervention. In fact, 95% of everything that can be done via the StorageCenter GUI can now be accomplished using the API or CLI as well. The API and CLI are available across all SoftNAS-supported virtualization platforms, including AWS and VMware.
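
As a sketch of what driving that from a DevOps pipeline might look like, consider a REST call such as the one below. The endpoint and parameter names are hypothetical illustrations, not SoftNAS’s documented API; consult the actual API reference for the real calls.

    # Hypothetical example only: the endpoint and parameters are illustrative.
    curl -s -u admin:"$ADMIN_PASSWORD" \
        "https://softnas-host/api/createvolume?pool=pool1&name=vol1&share=nfs"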

This makes SoftNAS the obvious best choice as the cloud computing storage platform for DevOps teams.

Read more about the API and CLI here