How to Maintain Control of Your Core in the Cloud

For the past 7 years, SoftNAS has helped customers in 35 countries globally to successfully migrate thousands of applications and petabytes of data out of on-premises data centers into the cloud. Over that time, we’ve witnessed a major shift in the types of applications and the organizations involved.

The move to the cloud started with simpler, low risk apps and data until companies became comfortable with cloud technologies. Today, we see companies deploying their core business and mission-critical applications to the cloud, along with everything else as they evacuate their data centers and colos at a breakneck pace.

At the same time, the mix of players has also shifted from early adopters and DevOps to a blend that includes mainstream IT.

The major cloud platforms make it increasingly easy to leverage cloud services, whether building a new app, modernizing apps or migrating and rehosting the thousands of existing apps large enterprises run today.

Whatever cloud app strategy is taken, one of the critical business and management decisions is where to maintain control of the infrastructure and where to turn control over to the platform vendor or service provider to handle everything, effectively outsourcing those components, apps and data.

So how can we approach these critical decisions to either maintain control or outsource to others when migrating to the cloud? This question is especially important to carefully consider as we move our most precious, strategic, and critical data and application business assets into the cloud.

One approach is to start by determining whether the business, applications, data and underlying infrastructure are “core” vs. “context”, a distinction popularized by Geoffrey Moore in Dealing with Darwin.

He describes Core and Context as a distinction that separates the few activities that a company does that create true differentiation from the customer viewpoint (CORE) from everything else that a company needs to do to stay in business (CONTEXT).

Core elements of a business are the strategic areas and assets that create differentiation and drive value and revenue growth, including innovation initiatives.

Context refers to the necessary aspects of the business that are required to “keep the lights on”, operate smoothly, meet regulatory and security requirements and run the business day-to-day; e.g., email should be outsourced unless you are in the email hosting business (in which case it’s core).

It’s important to maintain direct control of the core elements of the business, focusing employees and our best and brightest talents on these areas. In the cloud, core elements include innovation, business-critical and revenue-generating applications and data, which remain central to the company’s future.

Certain applications and data that comprise business context can and should be outsourced to others to manage. These areas remain important as the business cannot operate without them, but they do not warrant our employees’ constant attention and time in comparison to the core areas.

The core demands the highest performance levels to ensure applications run fast and keep customers and users happy. It also requires the ability to maintain SLAs around high availability, RTO and RPO objectives to meet contractual obligations. Core demands the flexibility and agility to quickly and readily adapt as new business demands, opportunities and competitive threats emerge.

Many of these same characteristics are important to business context areas as well, but they are less critical there, since context can simply be moved from one outsourced vendor to another as needed.

Increasingly, we see the business-critical, core applications and data migrating into the cloud. These customers demand control of their core business apps and data in the cloud, as they did on-premises. They are accustomed to managing key infrastructure components, like the network attached storage (NAS) that hosts the company’s core data assets and powers the core applications. We see customers choose a dedicated Cloud NAS that keeps them in control of their core in the cloud.

Example core apps include revenue-generating e-discovery, healthcare imaging, 3D seismic oil and gas exploration, financial planning, loan processing, video rental and more. The most common theme we see across these apps is that they drive core subscription-based SaaS business revenue. Increasingly, we see both file and database data being migrated and hosted atop of the Cloud NAS, especially SQL Server.

For these core business use cases, maintaining control over the data and the cloud storage is required to meet performance and availability SLAs, security and regulatory requirements, and to achieve the flexibility and agility to quickly adapt and grow revenues. The dedicated Cloud NAS meets the core business requirements in the cloud, as it has done on-premises for years.

We also see much of the less critical, business-context data being outsourced and stored in cloud file services such as Azure Files and AWS EFS. In other cases, the Cloud NAS's ability to handle both core and context use cases is appealing. For example, leveraging both SSD for performance and object storage for lower-cost bulk storage and archival, with unified NFS and CIFS/SMB, makes the Cloud NAS more attractive in certain cases.

There are certainly other factors to consider when choosing where to draw the lines between control and outsourcing applications, infrastructure and data in the cloud.

Ultimately, understanding which applications and data are core vs context to the business can help architects and management frame the choices for each use case and business situation, applying the right set of tools for each of these jobs to be done in the cloud.

 

 

SoftNAS Names Garry Olah as Chief Executive Officer and Vic Mahadevan to the Board of Directors

Experienced storage executives add strong leadership background and a deep understanding of the market dynamics and opportunities to grow cloud data platform business

HOUSTON–(BUSINESS WIRE)–SoftNAS®, a leading cloud data platform for business-critical applications, today announced that Garry Olah will join the company as president and chief executive officer. Olah brings more than twenty years of experience in the software industry, having played key roles in both startups and large corporations, as well as in the growth of many innovative technology companies as an advisor. In addition to Olah, the company also today announced that former NetApp® chief strategy officer Vic Mahadevan will join the board of directors as chairman.


Known for his expertise building cloud partner sales models and creating momentum for growth, Olah’s broad areas of expertise include business development, alliance building, sales, marketing and disruptive tech. Prior to SoftNAS, Olah was both a partner at Prime Foray and founder and managing partner of Vedify where he was responsible for advising companies focused on enterprise cloud, SaaS and mobility. His guidance helped several companies position for partnership and expand their market footprint, coordinating the objectives of a venture with small, mid and large cap businesses.

Vedify drove clients to positive results, assisting Quest on strategy through the $2B+ “go private” efforts, ScaleExtreme (Acquired by Citrix), BlueStacks, Apprenda, Lookout and others. Olah held VP of Business Development positions at GoodData, Coho Data, Apprenda and Citrix and has served on various industry boards, including as a six-year member of the Software & Information Industry Association (SIIA), Software Division where he was Chairman.

“Over the last few years, we’ve had the opportunity to create a group of advisors who bring extensive experience in the cloud and storage industries and are fortunate to now add Garry and Vic,” said Randolph Henry, who will retire as Chairman of the Board and remain as a board member. “These are two important positions for us. Their collective expertise leading global companies, driving strategic vision and executing with operational rigor will be invaluable to SoftNAS as we scale and grow. We are excited to add these successful industry veterans to our leadership team.”

Vic Mahadevan is CEO of Dev Solutions Inc., a consultancy to technology startups. Prior to Dev Solutions, Mahadevan served as chief strategy officer for NetApp. In this position, he was responsible for leading NetApp’s business strategy and identifying additional market and product opportunities to fuel future growth. Mahadevan was also responsible for driving strategic planning, managing acquisitions and supporting NetApp’s strategic alliances and partnership teams. Before NetApp, he held a number of executive leadership positions including VP of marketing for LSI Corporation, CEO of NeoEdge Networks, VP/GM of BMC Software’s Storage Management Business Unit and VP/GM of Compaq’s Storage Business Unit. Mahadevan holds an MBA in marketing and MS in engineering from the University of Iowa, as well as a degree in mechanical engineering from the Indian Institute of Technology.

“My goal is to extend SoftNAS’ position as the trusted leader in migrating business-critical applications to the cloud. The combination of sustained app performance, the growing need for better cloud economics and enterprises moving their apps and data to the cloud gives SoftNAS a significant opportunity,” said Olah. “SoftNAS has tremendous growth potential and the ability to become the de facto standard in cloud NAS.”

“I am pleased to join and bring my experience in the storage market to the SoftNAS board,” said Mahadevan. “SoftNAS has a strong customer base, great technology and is well positioned at this pivotal time in our industry as cloud migration heats up.”

About SoftNAS

SoftNAS® leads the cloud storage industry by helping customers overcome native performance and storage limitations when migrating business-critical application data from on-premises to the AWS and Microsoft Azure clouds. By delivering enterprise-class NAS features and flexible architectures, SoftNAS proves customers don’t have to sacrifice sustained high performance to maintain reasonable cloud economics. Some of the largest enterprises are powered by SoftNAS, including Samsung, T-Mobile, Coca-Cola, Boeing, Netflix, L’Oréal and WWE. To learn more about SoftNAS and cloud storage, follow @SoftNAS on Twitter, LinkedIn, YouTube or the SoftNAS blog.

Contacts

Yolande Yip
yolande@softnas.com

What is Cloud NAS (Network Attached Storage)?


What is Cloud NAS?

Cloud NAS is a popular storage choice for people looking to use cloud storage for applications, user file systems or data archive. But we still see a lot of confusion when people hear the terms “Cloud network attached storage”, “Cloud-based NAS”, or cloud NAS service. So what is Cloud NAS?

A cloud NAS works like the legacy, on-premises NAS currently in a lot of data centers. But, unlike traditional NAS or SAN infrastructures, a cloud NAS is not a physical machine. It’s a virtual appliance designed to work with and leverage cloud-based storage to give you all of the functionality you’d expect from a premises-based hardware NAS or SAN.

Cloud NAS is a “NAS in the cloud” that uses cloud computing to simplify infrastructure and provide flexible deployment options while reducing costs. Most cloud NAS service solutions work in cloud environments like Amazon Web Services (AWS) and Microsoft Azure. Cloud-based NAS uses easily expandable cloud storage as a central source for storage while still providing common enterprise NAS features.

Why do you need Cloud NAS?

The way we work has evolved, but data storage hasn’t changed substantially in over two decades with the exception of the increasing amount of data being generated. With the right set of capabilities, cloud-based NAS offerings significantly shorten the amount of time it takes to migrate from an on-premises NAS to the cloud. With the capacity of cloud storage basically being “limitless”, NAS in the cloud allows you to expand or reduce the amount of data you manage without the need for forklifting hardware-based solutions.

Benefits of using Cloud NAS service:

  • Price/Performance Flexibility: With the right cloud NAS, you have numerous options on what type of cloud storage to use. Low performance Cloud Object storage can be used for use cases that don’t require high performance. But for a price, you can also satisfy HPC (high performance computing) level SLAs. With the right combination of Virtual Machines running your Cloud NAS controller head with the right backend cloud storage, you can meet your storage requirements for just about any project.
  • Eliminate Legacy NAS Systems Refresh: How do you predict exactly how much storage and performance from that storage you will need over the next 12 months? Do you ever have an unexpected project come up that requires more storage possibly at a different performance level? With legacy hardware NAS solutions, you usually get locked into a long-term contract, and if something changes, you incur the overhead and costs with a “forklift” upgrade. With a Cloud NAS, you are in control. You can create the storage you need when you need it, for as long as you need it without signing long-term contracts or renewals.
  • Built-in Data Resiliency: Most cloud storage has data resiliency built in by storing multiple copies of data on multiple disks. This resiliency does not replace the need for High Availability, SnapShots and backups, but it is nice to have this level of resiliency built right into the storage used by your Cloud NAS.
  • Pay as You Go and Reduce Costs: You only pay your cloud provider for the storage you need. With cloud storage becoming cheaper, you can instantly scale your cloud instances to best suit your needs. Or, you can even use tiered storage and push legacy data to low-cost storage and store frequently-accessed data in top-tiered storage to maintain performance for your “hot” data.
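
To make the tiering idea above concrete, here is a minimal sketch (Python with boto3) of an S3 lifecycle rule that moves aging objects to cheaper storage classes. The bucket name, prefix and day thresholds are hypothetical, and a Cloud NAS would typically automate this kind of movement for you; the sketch just shows the underlying mechanism a cloud provider exposes.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix, used only for illustration.
BUCKET = "example-archive-bucket"

# Transition "cold" objects to cheaper tiers as they age, keeping
# recent ("hot") data on the default, higher-performance tier.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-legacy-data",
                "Filter": {"Prefix": "legacy/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                    {"Days": 365, "StorageClass": "GLACIER"},     # archive
                ],
            }
        ]
    },
)
```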

Use cases for a Cloud NAS include:

  • SaaS-Enable Applications: When looking to migrate applications from on-prem to Software-as-a-Service (SaaS) in the cloud, a common hurdle is that traditional applications typically do not support the native cloud storage interfaces. Rewriting your applications to support cloud storage requires application development and is usually complex and costly. For legacy applications that use standard file protocols, the right Cloud NAS can offer the expected file services and support for NFS, CIFS/SMB, iSCSI, and AFP protocols along with Active Directory integration support for your existing applications. Choosing a Cloud NAS with the features your application depends on is key.
  • New Apps and Proofs of Concept (POC): A cloud NAS lets developers quickly stand up storage infrastructure for a new application or proof of concept project without any storage hardware. Developers can easily create a storage infrastructure with just a few clicks.
  • High Performance Computing (HPC): HPC requirements for Artificial Intelligence and Machine Learning are becoming standard. The power of the virtual machines you can create in the cloud today matches what you can get on-prem, but you need your storage to meet HPC SLAs also. The level of performance you can get from both compute and storage IOPs and throughput is increasing every day. With the right Cloud-based NAS, you can achieve 10s and even 100s of thousands of IOPS with massive throughput to cloud storage today. And as the performance of the cloud resources increases, you have the option to use this power when ready.
  • File Services for User File Systems: The amount of file data being created daily can be overwhelming. When you add the need to keep copies of the data for compliance reasons for extended periods of time, it gets even worse. The right Cloud NAS can help satisfy the data access needs of your Linux, Windows and Apple users by supporting the necessary file access protocol support. When you include the need for compliance copies of the files for possibly decades, you want a Cloud NAS that will help you automatically move the files to less expensive cloud storage for you. The right Cloud NAS can help you with both your increasing capacity needs and options to keep those compliance files on more cost-effective storage with low overhead on your part.
  • Backup and Archive Data: The data your company collects is one of its most valuable assets in today’s world. You can’t afford to lose it. And when you need to access that data, you need to be able to access it quickly. Using cloud storage to store backup or archive data can be the key. A Cloud NAS with lower cost cloud storage gives you “endless” capacity to store backup and archive data. You have the flexibility to decide what level of SLA you need for the retrieval of archived data and match it to the price/performance requirements you have. If your SLA to retrieve archive data is days, you can use archive level object storage. If you can’t wait days, choose higher performance storage. And you want a Cloud NAS partner that can help get your data onto the archive storage with ease and low overhead.
  • DevOps and Development: With cloud-based NAS, developers and DevOps can quickly stand up the storage infrastructure needed for a new project and then tear it down when done (no long-term storage contract required). The storage needed is always available to be allocated at the performance required for the project.

When considering a Cloud NAS partner, it’s important to understand your current and potential future requirements. Most cloud NAS offerings include “table stakes” NAS functionality, but the devil can be in the details.

  • Some Cloud NAS offerings only support a limited set of cloud storage types.
  • Some limit the capacity of storage that can be supported.
  • Some may be fine for basic performance workloads, but can be quickly overloaded as performance demands increase.

Top Cloud NAS solutions may also offer extensions to help you solve problems beyond just storing data in the cloud, such as extensions that help you:

  • Migrate your data from on-prem to the cloud
  • Move cold data from more expensive, higher-performing storage to more cost-effective, less performant storage
  • Provide enterprise-class high availability
  • Orchestrate and automate the movement and transformation of data

So when considering a Cloud NAS partner, take a look under the hood before you buy. Understand what kind of mileage you will really get with your selection, choose the right tool for the job from the beginning, and save time and money in the long run.

We hope you found this post helpful as you learn more about cloud NAS. Leave a comment and let us know what we should write about next!

Best Practices Learned from 1,000 AWS VPC Configurations


At SoftNAS, we’ve configured over 1,000 Amazon VPCs for companies of all sizes: small businesses, Fortune 100 companies, and everything in between.

When you’ve worked on this many AWS VPC designs and configurations, you learn a lesson or two about getting optimal results.

From this experience, we’re sharing AWS VPC best practices to help you with your AWS VPC deployment.

  1. Organize your AWS environment
  2. Create AWS VPC subnets
  3. Control your access
  4. SoftNAS Cloud NAS and AWS VPCs
  5. Common AWS VPC Mistakes
  6. SoftNAS Cloud NAS Overview
  7. AWS VPC FAQs
  8. Claim my $100 AWS Credit

Organize your AWS environment

The very first Amazon VPC best practice is to organize your AWS environment. We recommend that you use tags. As you continue to add instances, create route tables and subnets, it’s nice to know what connects with what, and the simple use of tags will make life so much easier when it comes to troubleshooting.

Make sure you plan your CIDR block very carefully. Go a little bit bigger than you think you need, not smaller.

Remember that for every VPC subnet you create, AWS reserves five of the IP addresses in that subnet. So when you create a subnet, know that off the top there’s a five-IP overhead.

Avoid using overlapping CIDR blocks. At some point, if not today then maybe down the road, you may want to peer this VPC with another VPC, and if you have overlapping CIDR blocks, VPC peering will not function correctly, and you’re going to find yourself in a configuration nightmare in order to get those VPCs to peer.

Try to avoid using overlapping CIDRs, and always save a little bit of space for future expansion. There’s no cost associated with using a bigger CIDR block, so don’t undersize what you think you may need from an IP perspective; keep it clean and easy.
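
As a hypothetical illustration of the tagging and CIDR-sizing advice above, here is a minimal boto3 sketch that creates a roomy /16 VPC and tags it. The region, CIDR block and tag values are examples only, not recommendations for any particular environment.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# Go a little bigger than you think you need: a /16 leaves room to grow.
vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Tag the VPC so you can tell later what connects with what.
ec2.create_tags(
    Resources=[vpc_id],
    Tags=[
        {"Key": "Name", "Value": "prod-app-vpc"},
        {"Key": "Environment", "Value": "production"},
        {"Key": "Owner", "Value": "storage-team"},
    ],
)
print("Created and tagged VPC:", vpc_id)
```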

Create AWS VPC subnets the right way for success


Designing your subnets well leads to success. What is your AWS VPC subnet design strategy going to be?

One of the best practices for AWS subnets is to align your VPC subnets to as many different tiers as possible, such as a DMZ/proxy layer, an ELB layer if you’re going to be using load balancers, and an application or database layer. Remember, if your subnet is not associated with a specific route table, then by default it uses the main route table. I’ve seen so many cases where people create a route table and they’ve got a subnet, but they haven’t associated the subnet with the route table when they thought they did, so the packets aren’t flowing where they think they’re flowing.

Put everything in a private subnet by default, and place only ELB, filtering and monitoring type services in your public subnet. You can use NAT to gain access to public networks. We highly recommend, and you’ll see this later, that you use a dual NAT configuration for redundancy. There are some great CloudFormation templates available to set up highly available NAT instances; make sure that you size those instances properly for the amount of traffic you’re actually going to push through your network.

You can go ahead and set up VPC peering for access to other VPCs within your environment, or maybe to a customer or partner environment. I also highly suggest leveraging VPC endpoints for access to services like S3 instead of going out over a NAT instance or an internet gateway to reach services that don’t live within the VPC. Endpoints are very easy to configure, and they’re much more efficient and lower latency than going out over a NAT or an internet gateway to reach something like S3 from your instance.
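
Here is a hedged boto3 sketch of the steps described above: creating a private subnet, explicitly associating it with its own route table (so it does not silently fall through to the main route table), and adding an S3 gateway endpoint so traffic to S3 avoids the NAT or internet gateway. All IDs, CIDRs and the region are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"  # placeholder

# Create a private application-tier subnet in a specific AZ.
subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.20.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]

# Create a route table and EXPLICITLY associate the subnet with it;
# otherwise the subnet silently uses the main route table.
rt = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]
ec2.associate_route_table(RouteTableId=rt["RouteTableId"], SubnetId=subnet["SubnetId"])

# Gateway endpoint so instances reach S3 without a NAT or internet gateway.
ec2.create_vpc_endpoint(
    VpcId=vpc_id,
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=[rt["RouteTableId"]],
)
```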

Control your access


Control your access within the AWS VPC. Don’t cut corners and use a default route to the internet gateway. We see a lot of people do this, and it comes back to cause them problems later on. We mentioned using redundant NAT instances; there are some great CloudFormation templates available from Amazon for creating a highly available, redundant NAT instance.

The default NAT instance size is an m1.small, which may or may not suit your needs depending on the amount of traffic you’re going to push. I would also highly recommend that you use IAM for access control, especially by configuring IAM roles on instances. Remember that IAM roles cannot be assigned to running instances; they have to be set at instance creation time. Using IAM roles means you don’t have to keep populating AWS keys within specific products in order to gain access to those API services.
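
To illustrate the IAM-role recommendation, here is a minimal boto3 sketch that launches an instance with an instance profile attached at creation time, so software on the instance can call AWS APIs without access keys stored in config files. The AMI ID, instance profile name and subnet ID are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach an IAM instance profile at launch so credentials are delivered via
# the instance metadata service instead of keys baked into config files.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                 # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",             # placeholder private subnet
    IamInstanceProfile={"Name": "app-server-role"},  # placeholder profile
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "app-server-01"}],
    }],
)
print("Launched:", resp["Instances"][0]["InstanceId"])
```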

SoftNAS Cloud NAS and AWS VPCs


How does SoftNAS Cloud NAS fit into AWS VPCs?

We have a highly available architecture from a storage perspective, leveraging our SNAP HA capability, which allows us to provide high availability across multiple availability zones. We leverage our underlying secure block replication with SnapReplicate, and we highly recommend using SNAP HA in high-availability mode, which gives you a no-downtime guarantee plus five-nines uptime. It’s also important to remember that Amazon provides no SLA unless you run in a multi-zone deployment, so a single-AZ deployment has no SLA within AWS.

We have two methods of actually deploying our cross-zone high availability at SoftNAS. The first is actually to leverage the use of elastic IPs, where you have two separate controllers, each in their own availability zones. They’re in the public subnet and we assign each node an elastic IP address. We use a third elastic IP address as our VIP or virtual interface.

You can configure SnapReplicate between the two instances to provide the underlying block replication. The elastic IP address designated as the VIP is assigned to whichever controller is currently primary, and whatever services you have from an NFS, CIFS or iSCSI perspective mount or map drives to that elastic IP address.

If the primary storage instance fails, or anything else triggers our HA monitor (which looks at things like the health of the file system and the health of the network at multiple levels), that elastic IP address moves from the primary controller to the secondary controller. This is applicable whether you are backing SoftNAS with EBS or with S3.

The second mode is to use a private virtual IP address, where both SoftNAS Cloud NAS instances live within a private subnet and have no outside access. It uses the same underlying SnapReplicate and monitoring technology, but here you pick a virtual IP address that is outside the CIDR block of your AWS VPC and your clients map to it. An entry is automatically placed into the route table, and should a failover occur, we update the route table automatically to route the traffic to whichever controller should be primary at the time. This is probably the more common way of deploying SoftNAS in a highly available architecture.
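
The private-VIP mode described above ultimately relies on a plain AWS route-table update: the VIP, an address outside the VPC CIDR, is routed as a /32 host route at whichever node is currently primary. The following is not SoftNAS’s actual implementation, just a hedged boto3 sketch of that underlying call, with placeholder IDs and addresses.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ROUTE_TABLE_ID = "rtb-0123456789abcdef0"   # placeholder
VIP_CIDR = "50.0.0.1/32"                   # VIP chosen outside the VPC CIDR
NEW_PRIMARY_ENI = "eni-0123456789abcdef0"  # ENI of the node taking over

# On failover, repoint the host route for the VIP at the new primary node.
ec2.replace_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock=VIP_CIDR,
    NetworkInterfaceId=NEW_PRIMARY_ENI,
)
print(f"{VIP_CIDR} now routes to {NEW_PRIMARY_ENI}")
```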

Common AWS VPC Mistakes


The SoftNAS support team sheds some light on common mistakes they see when it comes to Amazon VPC configuration. Read on to understand what you should avoid.

Each of these deployments requires two ENIs, or two NIC interfaces, and both of those NICs need to be in the same subnet. Make sure that you check this when you’re creating your instances or adding the ENIs, and make sure that both NICs are in the same subnet.

Another common error: one of the health checks we perform is a ping between the two instances, and the security group isn’t always open to allow that ICMP health check to happen. This will cause an automatic failover because we can’t reach the other instance. We also leverage an S3 bucket in our HA deployment as a third-party witness, so if you deploy SoftNAS in a private subnet, we do need access to S3, either via NAT or via an S3 endpoint configured within the VPC.

And again, as I mentioned just a few moments ago, for private HA, the virtual IP address must not be within the CIDR of the AWS VPC. So if your CIDR is 10.0.0.0/16, then you need to pick a virtual IP address that doesn’t fall within that range. For example, 50.0.0.1 would work in that particular case, or whatever works best for you, but it cannot fall within the CIDR block of the AWS VPC, or the route failover mechanism that we’re leveraging will not function properly.
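
Two of the pitfalls above are easy to guard against programmatically. The sketch below, with placeholder IDs and addresses, opens ICMP between members of the HA nodes’ shared security group so the ping-based health check can succeed, and verifies that a candidate virtual IP falls outside the VPC’s CIDR block.

```python
import ipaddress
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

HA_SG_ID = "sg-0123456789abcdef0"  # security group both HA nodes share (placeholder)

# Allow ICMP between the two nodes so the ping health check works.
ec2.authorize_security_group_ingress(
    GroupId=HA_SG_ID,
    IpPermissions=[{
        "IpProtocol": "icmp",
        "FromPort": -1,
        "ToPort": -1,
        "UserIdGroupPairs": [{"GroupId": HA_SG_ID}],
    }],
)

# The private-HA virtual IP must fall OUTSIDE the VPC CIDR.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
vip = ipaddress.ip_address("50.0.0.1")
assert vip not in vpc_cidr, "Pick a VIP outside the VPC CIDR block"
```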

SoftNAS Cloud NAS Overview

SoftNAS Cloud NAS is a powerful, enterprise-class virtual storage appliance that works across public, private and hybrid clouds. It’s easy to try, easy to buy, and easy to learn and use. You have freedom from platform lock-in, and it works with the most popular cloud computing platforms, including Amazon EC2, VMware vSphere, CenturyLink and Microsoft Azure. Our mission is to be the data fabric for businesses across all types of cloud, whether private, public or hybrid.

We have a couple of different products that we can leverage. The first is SoftNAS Cloud NAS, our NAS filer for public clouds. We have our cloud file gateway, which is for on-premises use to connect to cloud-based storage. We also have SoftNAS for Service Providers, our multi-tenant NAS replacement for service providers that leverages iSCSI and object storage.

AWS VPC FAQs

We’re sharing the questions that came from the attendees of our webinar on AWS VPC best practices, and our answers to them. You may just find your own questions answered here.

  • We use VLANs in our data centers for isolation purposes today. What VPC construct do you recommend to replace VLANs in AWS?
    • That would be subnets, so you could either leverage the use of subnets or, if you really wanted a different isolation mechanism, create another VPC to isolate those resources further and then peer them together via VPC peering.
  • You said to use IAM for access control, so what do you see in terms of IAM best practices for AWS VPC security?
    • The biggest thing is when you deal with either third-party products or customized software that you made on your web server. Anything that uses AWS API resources needs a secret key and an access key. You can store that secret key and access key in some type of text file and have the software reference it, or, the easier way, set the minimum level of permissions that you need in an IAM role, create that role and attach it to your instance at start time. Now, the role itself can only be assigned at start time. However, the permissions of the role can be modified on the fly, so you can add or subtract permissions should the need arise.
  • So when you’re troubleshooting complex VPC networks, what approaches or tools have you found to be the most effective?
    • We love to use traceroute. I love to use ICMP when it’s available, but I also like to use AWS Flow Logs, which let me see what’s going on at a much more granular level (see the Flow Logs sketch after this FAQ list), and I also leverage tools like CloudTrail to make sure I know what API calls were made by which user, in order to really understand what’s gone on.
  • What do you recommend for VPN intrusion detection?
    • There are a lot of them available. We’ve got some experience with Cisco, Juniper and Fortinet for things like VPN, and as far as IDS goes, Alert Logic is a popular solution. I see a lot of customers that use that particular product. Some people like some of the open-source tools like Snort and things like that as well.
  • Any recommendations around secure jump box configurations within an AWS VPC?
    • If you’re going to deploy a lot of your resources within a private subnet and you’re not actually going to use a VPN, one of the ways a lot of people do this is to configure a quick jump box. What I mean by that is to take a server, whether Windows or Linux depending on your preference, put it in the public subnet, and only allow access from a certain set of IP addresses, over SSH from a Linux perspective or RDP from a Windows perspective. It puts you inside the network and allows you to gain access to the resources within the private subnet.
  • And do jump boxes sometimes also work with VPNs? Are people using VPNs to access the jump box too, for added security?
    • Some people do that. Sometimes they’ll just put a jump box inside the VPC and VPN into that. It’s just a matter of your organization’s security policies.
  • Any performance or further considerations when designing the VPC?
    • It’s important to understand that each instance has its own available amount of resources, not only from a network I/O but also from a storage I/O perspective. It’s also important to understand what a 10 Gb instance means. Take the c3.8xlarge, which is a 10 Gb instance: that’s not 10 Gb of network bandwidth plus 10 Gb of storage bandwidth. That’s 10 Gb for the instance. So if you’re pushing a high amount of I/O from both a network and a storage perspective, that 10 Gb is shared between the network and access to the underlying EBS storage network. This confuses a lot of people; it’s 10 Gb for the instance, not a dedicated 10 Gb network pipe.
  • Why would you use an elastic IP instead of the virtual IP?
    • What if you have people that want to access this from outside of AWS? We do have some customers whose servers are primarily within AWS, but they want access to files from clients that are not inside the AWS VPC, so you could leverage it that way. This was also the first way we created HA, to be honest, because at first it was the only method that allowed us to share an IP address and work around some of the public cloud limitations, like the lack of layer-2 broadcast.
  • Looks like this next question is around AWS VPC tagging. Any best practices, for example?
    • Yeah, I see people take different services, like web, database or application, and tag everything, including the security groups, with that particular tag. For people that are deploying SoftNAS, I would recommend just using the name SoftNAS as the tag. It’s really up to you, but I do suggest that you use tags. It will make your life a lot easier.
  • Is storage level encryption a feature of SoftNAS Cloud NAS or does the customer need to implement that on their own?  
    • So as of our version that’s available today which is 3.3.3, on AWS you can leverage the underlying EBS encryption. We provide encryption for Amazon S3 as well, and coming in our next release which is due out at the end of the month we actually do offer encryption, so you can actually create encrypted storage pools which encrypts the underlying disk devices.
  • Virtual IP for HA: does the subnet the VIP would be part of get added into the AWS VPC routing table?
    • It’s automatic. When you select that VIP address in the private subnet, it will automatically add a host route into the routing table, which allows clients to route that traffic.
  • Can you clarify the requirement on an HA pair with two NICs, that both have to be in the same subnet?
    • Each instance needs two ENIs, and each of those ENIs needs to be in the same subnet.
  • Do you have HA capability across regions? What options are available if you need to replicate data across regions? Is the data encryption at-rest, in-flight, etc.?  
    • We cannot do HA with automatic failover across regions. However, we can do SnapReplicate across regions, and then you can do a manual failover should the need arise. The data you transfer via SnapReplicate is sent over SSH, so it is encrypted in flight across regions. You could replicate across data centers. You could even replicate across different cloud markets.
  • Can AWS VPC peerings span across regions?
    • The answer is no, they cannot.
  • Can we create an HA endpoint to AWS for use with direct connect?
    • Absolutely. You could go ahead and create an HA pair of SoftNAS Cloud NAS, leverage direct connect from your data center and access that highly available storage.
  • When using S3 as a backend and a write cache, is it possible to read the file while it’s still in cache?
    • The answer is, yes, it is. I’m assuming that you’re speaking about the eventual consistency challenges of the AWS standard region; with the manner in which we deal with S3 where we treat each bucket as its own hard drive, we do not have to deal with the S3 consistency challenges.
  • Regarding subnets, the example where a host lives in two subnets, can you clarify both these subnets are in the same AZ?
    • In the examples that I’ve used, each of these subnets is actually within its own availability zone. So, again, each subnet is in its own separate availability zone, and if you want to discuss more, please feel free to reach out and we can discuss that.
  • Is there a white paper on the website dealing with the proper engineering for SoftNAS Cloud NAS for our storage pools, EBS vs. S3, etc.?
    • Click here to access the white paper, which is our SoftNAS architectural paper, co-written by SoftNAS and Amazon Web Services, covering proper configuration settings, options, etc. We also have a pre-sales architectural team that can help you out with best practices, configurations, and those types of things from an AWS perspective. Please contact sales@softnas.com and someone will be in touch.
  • How do you solve the HA and failover problem?
    • We actually do a couple of different things here. When we set up HA, we create an S3 bucket that acts as a third-party witness. Before anything takes over as the master controller, it queries the S3 bucket and makes sure that it’s able to take over. The other thing we do is, after a takeover, the old source node is shut down. You don’t want a situation where a node is flapping up and down, kind of up but kind of not, and keeps trying to take over, so if a takeover occurs, whether manual or automatic, the old source node in that configuration is shut down. That information is logged, and we’re assuming that you’ll go out and investigate why the failover took place. If there are questions about that in a production scenario, support@softnas.com is always available.
  • Can we monitor SoftNAS logs using Splunk or Sumo, and which log files should we monitor?
    • Absolutely, but we also provide some built-in log monitoring. The key logs here are SnapReplicate.log, which covers all of your SnapReplicate and HA functionality, and snserv.log, the SoftNAS server log, which covers everything done via StorageCenter. And because this is a Linux operating system, monitoring the system log messages is a good idea. That’s just a smattering of them.
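
As referenced in the troubleshooting answer above, VPC Flow Logs are one of the most useful tools for seeing what traffic is (or isn’t) flowing. Below is a minimal boto3 sketch of enabling them for a VPC, delivering to CloudWatch Logs; the VPC ID, log group name and IAM role ARN are placeholders, and the log group and role are assumed to already exist.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholders: the log group and role must exist, and the role must allow
# the VPC Flow Logs service to write to CloudWatch Logs.
resp = ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",                 # ACCEPT, REJECT, or ALL
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
print("Flow log IDs:", resp["FlowLogIds"])
```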

Our goal here was to pass on some of the lessons we’ve learned from configuring AWS VPC deployments for our customers. As you make the journey to deploying in the cloud, or if you’re already operational in the cloud, hopefully this saves you from tripping over some of the obstacles that other customers have faced.

We’d like to invite you now to try SoftNAS Cloud NAS on AWS. We do have a 30-day trial. If you click the blue button below, you can try SoftNAS Cloud NAS on the AWS platform with a $100 AWS credit. There are also some links there about how you can contact us further if you have any more questions and would like more information.

Claim my $100 AWS Credit


Webinar: AWS vs. On-Premises NAS Storage – Which is Best for Your Business?

The following is a recording and full transcript from a webinar that aired live on 08/30/16. You can download the full slide deck on Slideshare

Full Transcript: AWS vs. On-Premises NAS Storage

Taran Soodan:             Good afternoon everyone. My name is Taran Soodan, and welcome to our webinar today on on-premises NAS upgrade, paid maintenance or public cloud. Our speaker today is going to be Kevin Brown who is a solutions architect here at SoftNAS. Kevin, do you want to go ahead and give a quick hello?

Kevin Brown:              Hello? How are you guys doing?

Taran:                         Thanks, Kevin. Before we begin the webinar, we just want to cover a couple of housekeeping items with you all. In order to listen to today’s webinar, you’ll click on the telephone icon that you see here on the GoTo Meeting Control Panel.

Any questions that you might have during the webinar can be posted in the questions box, and we’re going to be answering those questions at the end of the webinar so please feel free to ask your questions.

Finally, this webinar is also being recorded. For those of you who are unable to make it or have colleagues that this might be of interest to, you’ll be able to share the webinar according with them later on today. Please keep an eye out for your email and we’ll send that information your way.

Also, as a bonus for attending today’s webinar, we are going to be handing out $100 AWS credits, and we’ll have more information about that later on at the end of the webinar.

Finally, for our agenda today, we’re going to be talking about on-premises to cloud conversation. We’re going to show you the difference between on-premises versus hyperconverged versus AWS.

We are going to demo how to move your on-premises storage to AWS without having to modify any of your applications. We will also tell you here is why you should choose AWS over on-premises and hyperconverged.

We’ll also give you some information about the actual total cost for ownership for AWS, and we’ll show you how to use the AWS TCO Calculator. Then we will tell you a little bit about SoftNAS and how it helps with your cloud migrations.

Finally, we’ll have a Q&A at the end where we answer any and all questions that you might ask. With that being said, I’ll go ahead and hand it over to Kevin to begin the webinar.

Kevin, you are now the presenter. Thanks, Kevin. I think you might also be on.

Kevin:                         Not a problem. Can you guys see my screen?

Taran:                         Yes, we can.

Kevin:                         All right, perfect. Good morning, good afternoon, goodnight, and we appreciate that you’ve logged in today from wherever you are in the world. We thank you for taking the time and joining us for this webinar.

Today, we’re actually going to focus on a storage dilemma that we have happening with IT teams and organizations all across the world. The looming question is what do I do when my maintenance renewal comes up?

Teams are left with three options. Either we stay on-premise and pay the renewal fee for your maintenance bill which is a continuously increasing expense, or you could consider a forklift upgrade where you’re buying a new NAS or SAN and moving to a hyper-converged platform.

The drawback with this option is that you still haven’t solved all your problems because you’re still on-prem, you’re still using hardware, and the next maintenance renewal is only about 12 to 24 months away.

Finally, customers can Lift and Shift their data to AWS – hardware will no longer be required, and data centers could be unplugged. Does this sound familiar to anybody on this call?

There is an increase in maintenance costs for support, for swapping disks, for downtime. There is exorbitant pricing for SSD drives and SSD storage, where you’re paying an arm and a leg just to ensure that your environment works as advertised.

You have a never-ending pressure from business to add more storage capacity – we need it, we need it now, and we need more of it. There is a lack of low-cost high-performance object storage, and you’re pressured by the business owners for agile infrastructure.

The business is growing; data is growing. You need to be ahead of the curve and way ahead of the curve to actually keep up. Let’s take a look. Let’s do a stare and compare of all these three options that are there so On-Premise, Hyper-Converged, and AWS Cloud.

From a security standpoint, all these three options deliver a secure environment with all the rules and policies that you’ve already designed to protect your environment — they travel with you.

From an infrastructure and management scenario, your on-premise and hyper-converged still requires your current staff to maintain and update the underlying infrastructure.

That’s where we’re talking about your disk swaps, your networking, your racking and un-racking. AWS can help you limit this IT burden with its Manage Infrastructure.

From a scalability standpoint, I dare that you call your NAS or SAN provider and tell them that you think you bought too much storage last year and you want to give them back some. Tell me how that plays out.

I say this in jest but, in AWS, you get just that option. You have the ability to scale up or scale down allowing you to grow as needed and not as estimated. We talked a bit, on the last slide, about the infrastructure and how AWS can help lessen some of that IT burden.

For your on-premise and hyper-converged system, you control and manage everything from layer one all the way up to layer seven. In an AWS model, you can remove the need for jobs like maintenance, disk swapping, monitoring the health of your system and hand that over to AWS.

You’re still in control of managing user-accounts access in your application, but you could wave goodbye to hardware, maintenance fees in your forklift upgrades.

With that, I’d like to share eight reasons for you to choose AWS for your storage infrastructure. From a scalability standpoint. We talked about this earlier. Giving you the ability to grow your storage as needed, not as estimated. For the people who are storage gurus, you know exactly what that means.

I’ve definitely been in rooms sitting with people with predictive modelling about how much data we are going to grow by for the next quarter or for the next year. I could tell you for 100% fact, I have never ever been in a room where we’ve come up with an accurate number. It’s always been an idea, a hope, a dream, a guess.

With AWS and the scalability that it provides, we give you the ability to grow your storage and then pay for it as you go and only pay for what you use. That in itself is worth its own weight in gold.

Not only that, you get a chance to end your maintenance renewals and no longer pay for that maintenance ransom where they are holding access to your data until your maintenance ransom is paid.

There are also huge benefits to trading in the CAPEX model for the OPEX model. There is no more long-term commitment. When you’re finished with the resource, you send it back to the provider. You’re done using your S3 disk, you turn it off and you send it back to Amazon.

You also gain freedom from having to make a significant long-term investment for equipment that you and I know will eventually breakdown or become outdated. You also have a reliable infrastructure.

We’re talking about S3 and its eleven nines worth of durability, and EC2, which gives you four nines worth of availability for your multi-AZ deployments. You have functions like disaster recovery, ease of use, ease of management, and you’re utilizing best-in-class security to protect your data.

Taran:                         Thanks for that, Kevin. What we are going to do now is just ask a quick poll question to the audience here. If you’re currently using an on-premises NAS system and it’s coming up for maintenance renewal, what do you intend to do?

Are you going to do an in-place upgrade or you use your existing hardware, but you update the software? Are you going to go a forklift update where you buy new hardware and software?

Or are you going to move to a hyper-converged system? Or are you considering the public cloud whether it’s AWS or other options? I’ll go ahead and give just a few more seconds for everyone to answer.

Looks like about half of you have voted already, so I’ll give that about 10 more seconds and I’ll close the poll. I’ll go ahead and close the poll and share the results with everyone.

As you can see here, we have a good mix of what you all intend to do. Most of you are intending to move to the public cloud, whether it’s AWS or others. Looks like a lot of you are interested in in-place NAS and SAN upgrade, so it’s interesting.

A lot of you are also considering moving to hyper-converged. For those of you who answered other, we would be curious to learn more about what your other plans are. In the questions pane, you’re more than welcome to write what you’re intending to do.

With that, I’ll go ahead and hand it back to Kevin to handle the demo. Kevin.

Kevin:                         Not a problem. Can you guys see my screen?

Taran:                         I can.

Kevin:                         Perfect. For the sake of my feed, I went ahead and did a recording of this just to make sure that no issues or problems would happen with the demo, so I can walk through. I’m going to talk through this video that I’m playing.

We’re talking about lifting and shifting to the cloud and that can be done in multiple ways. We’re talking about, from a petabyte scale, you can incorporate AWS Import using Snowball. You can connect directly using AWS Direct Connect, or you could use some open-source tools or programs like Rsync, Lsync, and Robocopy.

Once your data is in the cloud, it’s how you’re going to maintain that same enterprise-level of experience that you’re used to. With SoftNAS, you have that ability. We give you the opportunity to be able to have no downtime SLA and we’re the only company that gives that guarantee.

We will be able to walk through the demo. Let me show you. One of the things I’m going to show is the ease of use we have in attaching storage and sharing that storage out to your data consumers.

In this demo, we’re going to go ahead and we’re going to add a device. We are going to add some S3 storage. Very simple and easy, we click “add device.” It comes up “Cloud Disk Expander,” and we’re going to choose Amazon Web Services, S3.

We’re going to go ahead and we’re going to click next. Because we’ve defined our IAM Role, you don’t have to share your access keys or security keys. We do give you the ability to go in and select your region.

For this demo, my instances exist in Northern California, so I’m going to select Northern California. If you had an S3 region that’s closer to you, you’d have the ability to do that also.

We give you the ability to choose the bucket name, and we increment the S3 bucket names as they are created. For the sake of this demo, I’m going to go ahead and select a 10GB drive just to make sure that this creates quickly and easily for you to see.

We also give you the ability to encrypt that disk – that would be Amazon’s encryption that we allow you to bleed through. We give you the ability to also encrypt data at a different level.

From this, we see that our S3 disk has been created and is now available to be assigned to a storage group. We are also going to add some EBS disk. We know S3 is not Amazon’s most performing disk. For your data that requires more performance backend, we allow you to add Amazon’s EBS disk.

With that, we already told you IAM there so we don’t have to use our key. For the sake of this, we’re going to do 15GB. We have the ability to encrypt it again. This is disk level encryption.

Then we also give you the ability to choose the storage that is best suited to your purpose. We have General Purpose SSD, you could use Provisioned IOPS disks for your more performance-sensitive data, or you could choose to use standard.

For the sake of this demo, we will choose general purpose and I will select to use three. With that, we’re just going to go and we’re going to create that EBS disk.

As you see, the wizard in the background is creating it. It’s going through the process of creating, attaching, and partitioning those disks to be used by the SoftNAS device.

We give it a couple of seconds and we’ll see that this completes. All right, now we have our system, with both our S3 and our EBS disks ready for us to assign to a storage pool.

Our storage pool is what we use to aggregate our disk. We have then the ability to add some enhancements while we go through that process. We’ll go ahead and we’ll select “Create.”

At this point in time, I am very descriptive with what I name my disks. I am going to call this EBS pool. I’m going to go ahead and select RAID zero, and that’s because we’re already working with highly performing, redundant disks.

At that point, nothing that I add can improve on the system that Amazon has, but we can stripe across those disks, which should add performance going in. We’ll go ahead.

We’ll select that drive, and we’ll go ahead. It comes up saying that we’ve chosen no RAID, and that’s okay because we’re talking about cloud and highly redundant disks. We’re in.

We’re going to do the same thing for our S3 pool. We’re going to go in. We’re going to call it S3_pool if I could spell it correctly. We’re going to go ahead and we’re going to select our S3 storage. We’re going to select RAID zero so we could stripe across.

Remember, striping across disks is similar to what you would have in your environment with a server. The more disks you’re striping across, the more performance you get out of the system.

We’re going to go ahead. We’re going to create this pool. Now we have an EBS pool and we also have an S3 pool set up. We’re going to go in. These are some of the other enhancements that we could add to the system.

In front of my S3 pool, I’m going to add some read cache. With the system that I chose, I have ephemeral drive associated with it. This is m3.xlarge. With that, I’ll be able to put that ephemeral drive in front of my system as a way to make my S3 disk be more performing.

I’m going to go ahead and select it in “Select/Add Cache.” Just as simple as that, I’ve made my S3 disk more performant: I’m utilizing not only striping, but also the read cache that is now in front of that system to make those disks perform better.

Now that we’ve added our disk devices, we’ve aggregated them in a storage pool. The next thing that we have to do is that we have to share this out to our end-users, to our applications, to our servers.

The way that we do that is we create volumes in nodes. The way that this is going to share out, your volume name should adhere to your company policies. Whatever share you’re used to sharing out, you should adhere to those rules.

From a volume name standpoint, we’re going to go with user-share, and I’m going to go ahead and select that storage pool. It’s S3. I’m going to select the storage pool.

I don’t need my user-shares and access to my user-shares to be the most performing. I’m going to share it out via NFS and via CIFS. I also have the ability to make a thin provisioned or a thick provisioned.

We also have the ability to do compression and deduplication at this level. They both come with a resource cost. If you are looking at doing compression, you’re going to add a little bit more CPU. For dedupe, you’re going to want to add some more memory.

We’ll talk about snapshots. We get right out of the box. By default, we enable snapshotting of the system and that gives us the ability to be able to sustain the readable/writable clones which we will show you later.

By default, we snapshot the volume every three hours, but it’s definitely tunable if you need something else that would be better for your environment, your LAN, or you could schedule it for something.

We also have a scheduled snapshot retention policy. We start of ten-hourly, six-daily, one-weekly. We could go ahead, and we could create that volume. Now we have a volume created that allows you to access it within a cloud via CIFS or via NFS and it’s backed by S3 storage.

We now are going to go ahead and create a more performing storage. I’m setting up a website that I’m going to have in the cloud, and I want to be able to have some fast disk that’s sitting behind it.

I probably wouldn’t call this a website in production. However, for the sake of this demo, I’m going to go ahead and call it website. I go ahead. I select my storage pool. It’s an EBS disk.

I’m going to go ahead. I’m going to share this out via NFS, and I’m sharing it out via CIFS. We talked about compression and deduplication and snapshots. We’re going to go ahead, and we’re just going to do create.

Now we have both volumes created; they are backed by AWS disk. We’re going to go in and we’re going to talk about snapshotting.

QA did something that ended up breaking our environment. Because they broke our environment, they want to be able to test that the fix they have created works.

I can’t let them use my production data, so what I can do is take a snapshot of my existing environment and create a readable/writable clone so they can test their fix on that. It’s just that simple.

This is now a point-in-time copy of the data that existed in website, and it’s usable, readable, writable, and shared out in my environment to my users. We also have the ability to integrate with LDAP and Active Directory to ensure the security of your environment stays intact.
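Under the hood, a readable/writable clone is a ZFS snapshot plus a clone of that snapshot. Here is a minimal sketch with the standard commands and placeholder pool and volume names; SoftNAS wraps this in the UI and shares the clone out like any other volume:

```python
# Sketch only: point-in-time snapshot plus a writable clone for QA.
# Pool, volume, and snapshot names are placeholders.
import subprocess

SRC = "ebspool/website"          # placeholder production volume
SNAP = SRC + "@qa-fix-test"      # point-in-time snapshot
CLONE = "ebspool/website-qa"     # writable clone QA can safely break

subprocess.run(["zfs", "snapshot", SNAP], check=True)
subprocess.run(["zfs", "clone", SNAP, CLONE], check=True)
```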

That, in a nutshell, is how we go about giving you access to your data. I’m going to run another video, which should be a little faster. In it, we’re going to set up SnapReplicate between two instances within AWS.

We talked about the SLA within Amazon that gives you five nines of availability for instances that you run in multiple AZs; SoftNAS allows you to take advantage of that.

I have two instances in this environment, both in the west region: one in the 1a availability zone and one in 1b. With that, I am going to set up replication of my data between both instances.

The first requirement is that the storage size be the same on both nodes. It doesn’t have to be the exact same type of storage, but it needs to be the same amount of storage.

We also require that you name the storage pools the same. For the sake of time, I went ahead and added the drives in advance, so I’ll just go ahead and create my storage pools.
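Before kicking off replication, the two requirements above (matching pool names, matching capacities) can be verified with a small hypothetical pre-flight sketch like this; the data structure here is mine, not SoftNAS’s:

```python
# Hypothetical pre-flight check: pool names must match on both nodes and
# each pool's capacity must be equal, though the media type can differ.
def pools_match(source_pools, target_pools):
    """Each argument: dict of pool name -> capacity in GiB."""
    if set(source_pools) != set(target_pools):
        return False
    return all(source_pools[n] == target_pools[n] for n in source_pools)

# An EBS-backed pool on the source and an S3-backed pool of the same name
# and size on the target still passes.
print(pools_match({"pool1": 1000}, {"pool1": 1000}))   # True
```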

Let me create my EBS storage pool, if I can spell it correctly this time. I select RAID 0 and then click “Create.” We’re being warned again.

We are very concerned about your data, and we want to make sure it’s secure. With that, we have the EBS pool. Next, we’re going to create our S3 pool, a pool of S3 disks.

We go ahead and create it. Now both systems are ready for replication between them. Everything that exists in the pools and volumes will be automatically synced from the source node over to the target.

Just verifying one last time that the pool names match. Now we should be able to go to SnapReplicate and set up replication between the source and the target node.

We’re going to go in, click add, and click next. All it’s asking me for is the hostname of the instance on the other side. I’ll go ahead and enter that, and then it prompts me for the password for the other side.

We go ahead, and just with that click, I’ve made my system more resilient. I’m now going through the process of replicating my data into a different availability zone.

That gives me the ability to do a manual failover, if push came to shove, and access my data from a different node. If you notice, during this process we are taking everything that we created, even the readable/writable clones, and moving it over to the secondary instance.

I’ll show you: we have a source node and a target node, and data is being asynchronously replicated between them. This is great. We have our data in two locations, and it took little or no time to set that up.
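At the ZFS level, asynchronous replication between two nodes is typically an incremental snapshot send piped into a receive on the target. The sketch below shows that general mechanism with placeholder names; SnapReplicate’s actual transport, scheduling, and retry handling are managed by SoftNAS:

```python
# Conceptual sketch: incremental ZFS send/receive from source to target.
# Snapshot names and the target hostname are placeholders.
import subprocess

PREV_SNAP = "ebspool/website@repl-0100"   # last snapshot already on the target
NEW_SNAP = "ebspool/website@repl-0200"    # new snapshot to ship
TARGET = "root@target-node"               # placeholder target instance

subprocess.run(["zfs", "snapshot", NEW_SNAP], check=True)
send = subprocess.Popen(["zfs", "send", "-i", PREV_SNAP, NEW_SNAP],
                        stdout=subprocess.PIPE)
subprocess.run(["ssh", TARGET, "zfs", "recv", "-F", "ebspool/website"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```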

What if we could stand up two instances, put a VIP in front of them, and give you the ability to fail over in case of an issue? We have the ability to do that with SnapHA.

We are going to select our virtual IP. You have the option of using a virtual IP or an elastic IP; for the sake of this demo, we’ll go with a virtual IP.

I’m going to choose 2.2.2.2 and select next. Because of the IAM role, we don’t need to share out the access key. We go ahead and click install. While this installs, there is a bunch of heavy lifting being done in the background that SoftNAS handles on its own.

We are updating routing tables, ensuring that S3 is accessible, making sure each instance can actually talk to the other, and putting the heartbeats in place.

With that VIP sitting in front of these instances, we want to make sure that if your source node goes down, your target node stands up and gives you no-downtime access to your data.
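One piece of that heavy lifting, updating the routing tables, can be pictured with a hedged boto3 sketch: point the virtual IP’s route at whichever instance is currently primary. The route table and instance IDs are placeholders; SnapHA automates this along with the heartbeats and health checks:

```python
# Sketch only: repoint the VIP route at the instance taking over as primary.
# Route table ID and instance ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ROUTE_TABLE_ID = "rtb-0123456789abcdef0"   # placeholder VPC route table
VIP_CIDR = "2.2.2.2/32"                    # the virtual IP from the demo
NEW_PRIMARY = "i-0fedcba9876543210"        # placeholder instance taking over

ec2.replace_route(RouteTableId=ROUTE_TABLE_ID,
                  DestinationCidrBlock=VIP_CIDR,
                  InstanceId=NEW_PRIMARY)
```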

The progress bar sits around 30% for a couple of seconds, jumps to 50, and within a second we’re at a hundred. There we go. This was real time, not sped up, just to show how easy it is to set up an HA environment that allows you to lift and shift your applications to the cloud while they keep using the protocols they are familiar with: NFS, CIFS, AFP, and iSCSI if you see fit.
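From a client’s perspective, the HA pair is simply an NFS (or CIFS) server reachable at the virtual IP. A minimal mount sketch, assuming a ZFS-style export path of /&lt;pool&gt;/&lt;volume&gt; and run as root on the client:

```python
# Sketch only: mount the HA pair's NFS export through the virtual IP.
# The export path is an assumption; failover behind the VIP is transparent
# to the client.
import subprocess

VIP = "2.2.2.2"                       # the virtual IP chosen in the demo
EXPORT = VIP + ":/ebspool/website"    # assumed export path for the website volume
MOUNT_POINT = "/mnt/website"

subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
subprocess.run(["mount", "-t", "nfs", EXPORT, MOUNT_POINT], check=True)
```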

With that, I’ll end the video showing part of the demo and I’ll go back to showing more of the PowerPoint slides. I’m sorry. Let me take a sip of water. I apologize.

I know if you’re in a room with a bunch of folks or even if you’re by yourself, you’re saying, “It looks good but how much is this going to cost?”

I’m actually very glad you asked that question. I want to introduce you to the TCO Calculator. You can use this calculator to compare the costs of running your applications today with what they would cost within an AWS environment.

All you have to do is describe a bit of your configuration. Let me go ahead and show it fairly quickly; it’s very easy.

You go in and select your currency. It asks what type of environment it is, the region, your application, how many VMs you’re looking at, and what you need from a storage standpoint.

For the sake of this demo, we did a three-year total cost of ownership for a 40TB environment on AWS. What you can see is that there is a huge cost savings in going from an on-premise system to AWS, and the majority of that savings is in infrastructure.

Getting rid of the need to manage the underlying infrastructure accounts for about 61% of the cost savings.

How do you pay for this? Right after “how much does it cost,” the next question is “how can we pay for it?” The challenge is that you want to move to AWS, but where is your budget for AWS?

We talked earlier about how budgets are not increasing. Budgets are strained, yet you’re being pushed to think about moving to the cloud and how to get there.

What we suggest as an innovative method is to reallocate the existing maintenance line in your budget to fund your shift to the cloud. We have a mock-up of this in the slide.

We’re seeing that for a 50TB on-premise system, the maintenance renewal budget runs around $450,000. For $265,000, you could move to AWS and increase the size of your environment by 20TB. That in itself is cost-effective enough.
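As a quick back-of-the-envelope check using only the figures on the slide (the time basis is whatever the slide assumes):

```python
# Figures taken from the slide; simple arithmetic only.
onprem_maintenance = 450_000   # maintenance renewal for a 50 TB on-premise system
aws_cost = 265_000             # quoted cost of the AWS environment
onprem_tb = 50
aws_tb = onprem_tb + 20        # 20 TB larger in AWS

savings = onprem_maintenance - aws_cost
print(f"Save ${savings:,} while growing from {onprem_tb} TB to {aws_tb} TB")
# Save $185,000 while growing from 50 TB to 70 TB
```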

Once you’ve decided, here are some steps you might think about: pick a tier 2 application to migrate to AWS and test the waters with it. From what you learn, you can then create a work plan to migrate the remaining apps, workloads, and data.

When all the migration is done, it’s time for you to unplug your on-premise hardware and have a party because of all the money that you are going to be saving.

Lift and shift. SoftNAS allows you to lift and shift while maintaining the same enterprise level of service and experience that you are used to. We are the only company that gives a no-downtime guarantee SLA.

We will give you the same enterprise feel in the cloud that you are used to on-premise. Whether that’s serving out your data via NFS for apps that need NFS, or via CIFS or SMB, we have the ability to do that.

We deploy within minutes. We give you the ability, as we demonstrated, to do storage snapshots. The GUI itself is easy to learn and easy to use, with no training required; you don’t need to send your teams off to get trained just to use the SoftNAS software.

It is there, it is intuitive, and it’s easy to use. We talked about disaster recovery and HA, and about being able to move to a monthly or annual subscription.

Help us help you migrate your existing applications to the cloud. We allow you to use the standard protocols, and we leverage AWS’s storage elasticity. SoftNAS enables existing applications to migrate unchanged.

Our software provides all the Enterprise NAS capabilities — whether it’s CIFS, NFS, iSCSI — and it allows you to make the move to the cloud and preserve your budget for the innovation and adaptations that translate to improved business outcomes.

SoftNAS can also run on-premise as a virtual machine, creating a virtual NAS from your storage system and connecting to AWS for cloud-hosted storage.

Taran:                         Thanks for that, Kevin. Going on to our next poll; I’m going to launch that real quick. What storage workloads do you intend to move to the AWS Cloud? For those of you who are interested in moving to AWS, is it going to be NFS, CIFS, iSCSI, AFP, or are you not intending to move to AWS at all?

It looks like about half of you have voted so far, so I’ll give it about 10 more seconds. I’ll go ahead and close the poll and pull up our results. It looks like over 40% of you are intending to move NFS workloads to the AWS Cloud, which is consistent with what we’ve seen.

Interest in CIFS and iSCSI is fairly balanced as well. A couple of you have no interest in moving to the AWS Cloud; for those of you, again, please let us know in the questions pane why you don’t intend to move to AWS. I’ll go ahead and hand it back over to you, Kevin.

Kevin:                         Not a problem. Let’s go over it; I’ll do a very quick review of SoftNAS. SoftNAS, in a nutshell, is an enterprise filer that runs as a Linux appliance with ZFS backing, in the cloud or on-premise.

We have a robust API and CLI, and we integrate with AWS S3, EBS, on-premise storage, and VMware. This allows us to provide data services like block replication and gives you access to cloud disks.

We give you storage enhancements such as compression and in-line deduplication, multi-level caching, the ability to produce writable snap clones, and encryption of your data at rest or in flight.

We continue to deliver best-of-breed services by working with our industry-leading partners. Some of these you might know, such as Amazon, Microsoft, and VMware. We continue to partner with them to enhance both our offerings and theirs.

These are some of the brands that you trust, and they trust us as well, because these companies are using SoftNAS in many different use cases across their environments.

To call out just a couple: Netflix, the Weather Channel, and too many more to name.

Taran:                         Thanks, Kevin. I’m just going to go ahead and take over the screen share really quick. To thank everyone for attending this webinar, we are handing out AWS credits. If you click on this Bitly link right here, you’ll actually be able to register for a $100 AWS credit.

We’re only giving out 100 of these so the first 100 people to register for it will get it. I’m also going to paste the link here in the chat window and you’ll be able to click on that link to register for the credit.

You just have to go to the page, put in your email address, and one of our team members will get back to you later today with your free credit information.

As far as next steps go, for CXOs in our audience, we invite you to try out that AWS TCO Calculator. If you have any questions about it, feel free to contact us. Just visit our “contact us” page here on the bottom and we’ll be happy to answer any questions that you might have about cloud costs and how SoftNAS and AWS result in more cost savings over an on-premise solution.

We also recommend that you have your technical leads try out SoftNAS. Tell them to visit the softnas.com/tryaws page and they’ll be able to go and try out SoftNAS free for 30 days.

For some of the more technical people in our audience, we invite you to go to our AWS page, where you can learn a little bit more about the details of how SoftNAS works on AWS. Any questions that you might have, again, please feel free to reach out to us.

We also have a whitepaper available that covers how the SoftNAS architecture works on AWS; it gets into some technical details that should help you out.

That covers everything that we had to talk about today, so now let’s go on to the Q&A session. We do thank you guys for asking questions throughout the webinar. I’ll go ahead and start having a few questions answered.

Our first question here is: why choose SoftNAS over the AWS storage options?

Kevin:                         I think that’s a good question. Why? From a SoftNAS standpoint, we give you the ability to encrypt your data, and as we spoke about in the webinar, we’re the only appliance that gives a no-downtime SLA.

We stand by that SLA because we have designed our software to be able to address and take care of your storage needs.

Taran:                         Thanks for that, Kevin. On to the next question: is Amazon the only S3 target, or are there other providers as well?

Kevin:                         Amazon is not the only S3 provider. We also have the ability to connect to blob storage, and we are on platforms other than AWS, such as CenturyLink and Azure, among others.

Taran:                         Thanks Kevin. Our next question: do you have any performance data for running on standard, non-provisioned EBS volumes?

Kevin:                         I believe that question came from James. James, feel free to reach out to us; we’ll definitely get on a call with you, and that’s something we can dig into more as we discuss your use case. Depending on your use case and the enhancements we can put in front of those disks, we can definitely come up with a solution that would be beneficial for you.

Taran:                         Thanks Kevin. Then onto the next question. If I move my storage to the cloud, do I have to move SSL too?

Kevin:                         It totally depends on your use case. This is another use-case question; depending on your internet connectivity, that’s what will define it. The last mile of your connectivity into the cloud is always what determines your level of performance.

Each use case is different, and that’s something we could definitely deep-dive into as we dig into your use case.

Taran:                         Thanks Kevin. It looks like we have two more questions. We’ve got quite a bit of time so for those of you who are interested in asking questions, just go ahead and paste them in the questions pane and we’ll be happy to answer them.

As far as our next question goes, what kinds of use cases are best for SoftNAS on AWS?

Kevin:                         That’s a very good question. There are multiple use cases. If you have production data that you need CIFS access to, SoftNAS gives you the ability to do that. If you have a web back end that needs iSCSI or higher-performing disk, we give you the ability to do that as well.

Like we talked about in this webinar, say you have an application that you need to move to the cloud, but rewriting it to support S3 or another kind of cloud storage is going to take you six months to a year.

We give you the ability to migrate that data by setting up an NFS or a CIFS [inaudible 50:29] whatever that application is already used to in your enterprise.

Taran:                         Thanks Kevin. It looks like that’s all the questions you have today. Before we close out this webinar, we want to invite you to try out SoftNAS free for 30 days on AWS; just visit softnas.com/tryaws.

Also, at the end of this webinar, we have a quick survey available. We ask that you please fill it out so that we can serve you better in the future. Knowing what kinds of webinars interest you will help us improve them.

With that, that’s all we had for you today. We hope you guys have a great day.