How Can Cloud Infrastructure Help You Stay Competitive?

We’ve become so accustomed to locking in functionality at the moment we buy a hardware device that we simply plan on upgrading it later. Think about your smartphones, desktops, cars, and all of your data center equipment. Even when what we have still works, just not as well as the latest models, we either live with it or justify replacing our older, less efficient productivity tools and infrastructure to get the newest capabilities.

The speed of innovation in data center infrastructure is a perfect example. New capabilities are always appearing: faster processors, bigger and faster disks or flash media, higher-speed networks… and they may even cost less. Buying a new car because it has new gadgets might sound great, but it isn’t affordable when the car you own still works. The very same thing happens when new data center options appear. They’re the “latest thing” until the next latest thing comes along. You can never affordably keep up, right?


Well, maybe there’s a better alternative. The flexibility of modern public cloud platforms gives customers instant access to a wide variety of compute, storage, and networking ingredients that form almost infinite combinations. The difference is that you don’t have to buy all of the combinations; you pick what’s best for your business. And that choice isn’t a static decision you’re stuck with at the time of purchase, as it is with computer hardware. It stays malleable throughout the lifecycle of your project and throughout the lifecycle of your business.

Have you ever made an architectural error when designing a data center or rolling out a new application? Have you ever received a data storage capacity forecast that stayed accurate over the following three to five years? Did your performance requirements change after you made your last infrastructure purchase? These may be rhetorical questions (or you’ve been extremely fortunate). If this resonates with past frustrations, or you have suffered the budget consequences of over- or under-buying, then you’re not alone.

Enter the Cloud Era

The inventory of options now available from public cloud platform vendors provides an incredible opportunity to dial in exactly what you need. If you need more processing power, an instance (cloud terminology for your server) with the right combination of cores, memory, networking, and local ephemeral cache is a click away. There is a similar variety of choices for your storage requirements, ranging from ultra-high-performance all-flash for latency-sensitive workloads to inexpensive, capacity-oriented storage that holds your secondary or tertiary tier data, or any mix of tiers in between. Where in the physical appliance world can you get all of that flexibility?

It doesn’t end with the initial options you picked. What about the changes mentioned earlier? You start a project with one set of needs and then must adjust based on reality. A perfect example is moving from a proof of concept or pilot instance into actual production. Or what about when your company grows exponentially, or your customer base increases beyond forecast? All of this may be hard to predict, but public cloud options afford you the ability to adjust at any time. Fearful of getting your deployment architecture wrong? That’s understandable, since it may put your company or yourself at risk, but a malleable cloud architecture is one that permits adjustment over time.

At SoftNAS, we provide software-based options for NAS and SAN storage with all of the advanced data services you’d expect from a mature on-premises hardware vendor. The protocols your applications use today let them take advantage of cloud compute, network, and storage options without changing the applications themselves. Because we also provide a non-disruptive way to change the compute running our virtual storage appliance, the performance upgrades provided by higher-speed computing, local cache, RAM, and networking are simply a matter of moving to a different instance size. The same goes for your storage type, where we can provide concurrent access to any combination of media, from flash to inexpensive object-based data stores. All of these combinations accommodate change over time, and at any time.

SoftNAS provides unprecedented flexibility that hardware-based on-premises options can’t match:

The Storage Controller

The host of the storage operating system relies on computing power to handle I/O activity and advanced storage features like deduplication, compression, caching, snapshots, cloning, replication, high-availability protection, storage tiering, encryption, and access controls. The compute instances offered by the cloud platforms provide cost and performance flexibility to match the workload. For smaller, less I/O-intensive applications, an instance with fewer cores, a lower-speed NIC, and smaller amounts of RAM or local SSD cache may work just fine… and it saves you money. If your needs change, or if you have a more demanding workload, a more robust instance configuration instantly bumps up your performance with more computing and caching power. The bigger instances also come with higher-speed NICs, which is crucial for I/O-demanding workloads. Changing the computing power changes the storage controller’s characteristics. The benefit of using a software-based storage system in a public cloud is the ability to change the performance profile of the storage controller at any time.
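
To make the mechanics concrete, here is a minimal boto3 sketch of the kind of instance resize being described, shown for AWS. The instance ID and target instance type are placeholders, and only the raw EC2 calls are shown; SoftNAS’s own non-disruptive upgrade workflow is driven through the product rather than through these bare API calls.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder: the VM hosting the storage controller

# Resizing requires a brief stop/start window on AWS.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Move to a larger instance type for more cores, RAM, and network throughput.
ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "m5.2xlarge"})

ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```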

The Storage Account Type

On-premises storage offerings force you to select performance based on the type of media they use, and then you’re largely stuck with that performance profile, and possibly the capacity too. Cloud storage offers much more variety and multiple storage types. SoftNAS uses block and/or object storage accounts as a storage pool, much as traditional storage systems use disk drives or flash drives, and leverages these combinations to provide flexible cost and performance storage solutions that are often difficult or impossible to obtain using conventional on-premises options. We do this by allowing multiple concurrent capacities sourced from the various storage account types offered on Azure and AWS. That flexibility provides excellent cost, performance, and capacity options that adjust as needed, letting you use any combination of protocols, capacity, and performance levels, and add to or change those choices as your needs change.
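
As a rough illustration of the cloud building blocks being pooled here, the boto3 sketch below provisions a capacity-oriented EBS volume and an S3 bucket. The region, sizes, and names are placeholder assumptions, and attaching such capacity into a SoftNAS storage pool is done through the SoftNAS product itself rather than shown in these raw calls.

```python
import boto3

region = "us-east-1"                    # placeholder region
ec2 = boto3.client("ec2", region_name=region)
s3 = boto3.client("s3", region_name=region)

# Block storage: a throughput-oriented HDD volume for capacity tiers;
# swap VolumeType to "gp3" or "io2" for latency-sensitive workloads.
ec2.create_volume(AvailabilityZone=region + "a", Size=500, VolumeType="st1")

# Object storage: an S3 bucket for inexpensive secondary/tertiary tier data.
s3.create_bucket(Bucket="example-capacity-tier-bucket")   # placeholder name
```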

[Figure: SoftNAS full-spectrum flexibility]

Advanced Storage Features, too!

You don’t have to leave behind the enterprise functionality you expect when you move to the cloud.

One last point: when you want to leverage the latest technology, public cloud platforms make your life much easier. Cloud vendors continually make evolutionary investments to provide the latest data center solutions, and the competitive nature of the public cloud infrastructure space works in your favor because those vendors compete to earn and retain your business. If there’s a new processor available… a new network speed… a lower cost per terabyte… you’ll always benefit from their never-ending upgrade cycle. Add software from companies like SoftNAS, and you get the extra flexibility to make changes as needed.

So, the next time you feel underwhelmed by your hardware infrastructure, consider the options SoftNAS and our public cloud partners offer for more flexibility. It won’t take long to realize the old way isn’t workable in today’s dynamic business environments where things keep changing and the data footprint keeps growing.

Examples of Ideal Use Cases

The use cases for software-based virtual storage appliances hosted on public cloud platforms span many segments. For unstructured data with ever-increasing file sizes, examples of ideal uses include:

  • Back-up, DR, and Archive
  • Moving your business or SaaS applications
  • User file sharing
  • Storage consolidation
  • Video/media storage
  • Source code repository
  • Medical records
  • Legal documents
  • Energy Industry data
  • Big Data
  • Genomics

 

Next Steps

Learn more about cloud-based enterprise architecture options at Buurst. We have years of experience helping customers make this transition quickly and easily. We understand you may have questions and we’d be pleased to help you make a successful transition to the cloud. SoftNAS is used by thousands of organizations around the world to protect their data in the cloud, achieve a 100% up-time SLA for business-critical applications, and move applications, data and workloads into the cloud with confidence.

You can get started with SoftNAS on Azure or AWS in multiple ways:

  1. Take a Free Azure Test Drive: Get started in under two minutes using the Azure Test Drive. This option allows you to quickly try SoftNAS without having to install or configure anything. The SoftNAS instance loads automatically, connects to the Azure storage account, and pre-provisions multiple storage volumes/LUNs using NFS, CIFS/SMB, and iSCSI. No credit card or Azure subscription is required, but the environment is only available for 1 hour from the time you enter the test drive.
  2. Try out SoftNAS for free on Azure.
  3. Try out SoftNAS for free on AWS. 

 

Both trials allow you to install, configure, and use SoftNAS as if you were running in a production environment, and to explore the product for multiple weeks.
  • Purchase: You can purchase SoftNAS on the Azure or AWS Marketplaces. We offer various capacity and performance levels for fast time to production. Discounted pricing for larger deployments, up to many petabytes, is available via BYOL (Bring Your Own License) by contacting the SoftNAS Sales team or an authorized reselling partner.

 

ABOUT THE AUTHOR

Michael Richtberg is the VP of Strategic Alliances and Corporate Development for SoftNAS, Inc. His experience spans 30+ years of computer science transformation. He has been involved in leading business initiatives via leadership roles in technology strategy, marketing and product management with brand leaders like Oracle, Dell, Citrix, NCR and start-ups in the disruptive virtual storage and hyper-converged space via GreenBytes, Pivot3 and now SoftNAS, the #1 best-selling NAS in the cloud.

Why Use SoftNAS Cloud NAS on AWS?

SoftNAS Enterprise and SoftNAS Essentials for AWS extend the native storage capabilities of AWS and provide the POSIX compliance and storage access protocols needed to create a virtual cloud NAS, without having to re-engineer customers’ existing applications.

SoftNAS products allow customers to migrate existing applications and data to AWS with NFS, CIFS/SMB, iSCSI or AFP support. Customers gain the performance, data protection, and flexibility needed to move to the cloud cost-effectively and ensure successful project outcomes (e.g., snapshots, rapid recovery, mirroring, cloning, high availability, deduplication, and compression).

Each customer’s journey to the cloud is unique and SoftNAS solutions are designed to facilitate adopting cloud projects according to what makes the most immediate business sense, resulting in the highest return on invested resources (budget, people, time). Whether your need is to consolidate file servers globally, utilize the cloud for data archival or backup, migrate SaaS and other business applications, or carry out Big Data or IoT projects, SoftNAS products on AWS deliver effective, tangible results.


When to use SoftNAS NAS versus Amazon EFS?

SoftNAS is a best-in-class software-defined, virtual, unified cloud NAS/SAN storage solution for businesses that need control of their data and frictionless, agile access to AWS cloud storage. SoftNAS supports AWS EBS and S3 object storage and is deployed by thousands of organizations worldwide, supporting a wide array of application and workload use cases. Amazon EFS provides basic, scalable file-based NFS access for use with Amazon EC2 instances in the AWS Cloud. As a basic NFS filer, Amazon EFS is easy to use, allowing quick and simple creation and configuration of file systems. The multi-tenant architecture of EFS accommodates elastic growth and scales up and down for AWS customers that require basic cloud file services.

If you need more than a basic cloud filer on AWS, SoftNAS NAS is the right choice:

  1. For mission-critical cloud data requirements that demand low latency and high availability with the highest-performance cloud storage I/O possible.
  2. SoftNAS ObjectBacker™ increases storage I/O performance up to 400% faster, giving customers performance approaching that of EBS at the price of S3 storage.
  3. For environments with multiple cloud storage projects that require an enterprise-class, virtual NAS storage solution that provides flexible and granular pricing and control with performance and instance-selection capabilities.
  4. For customers with low cloud storage requirements (low TBs) that don’t want to overprovision storage to get desired performance.
  5. For requirements that demand broad POSIX compliance and the full NFS 4.1 feature set for storage access.
  6. For enterprises requiring multi-cloud environment (i.e. AWS & Other Clouds) capabilities, flexibility and data risk mitigation.
  7. For organizations that need the industry’s most complete NAS filer feature set, including data protection provided by the patented cross-zone high availability architecture, at-rest and in-flight encryption (360-degree Encryption™) and full replication using SnapReplicate™. Additional features include: High Availability (SNAP HA™* and DCHA**), Snapshots, Rapid Recovery, Deduplication, Compression, Cloning, and Mirroring.

*   = SNAP HA is available only in SoftNAS Enterprise and SoftNAS Platinum

** = DCHA is available as an optional add-on to SoftNAS Essentials via BYOL only


How and Why is SoftNAS Cloud NAS a Better Way to Store & Control Data?

SoftNAS on AWS is helping thousands of public organizations and private enterprises around the world move from the old way of controlling and utilizing data storage to a new and better one. Never-ending backups, storage capacity bottlenecks, file server proliferation, existing applications too complex to move to the cloud, and spiraling data storage costs become a thing of the past.


SoftNAS Cloud NAS on AWS

Put SoftNAS to the test and try it for 30 days. Contact a SoftNAS representative for more details and a demo to learn more about both SoftNAS products on AWS and the SoftNAS Platinum beta, so that you can make the right decision.

 

An AWS Marketplace free trial is available for deployments of 30 days or less and 20 TB or less. Just launch and go in minutes.

Why We’re Here

As the founder of a company, it’s common to be asked “why did you start SoftNAS?”  People are curious as to what drives a person to start a company, take so many risks and do what to many must seem like gambling with your life and career…

Personally, I’m an entrepreneur and inventor.  I love creating things.  And I enjoy solving problems for customers.  Okay – probably the same kind of answers you’d get from most entrepreneurs – we’re passionate about what we do!  But why did I start SoftNAS?

I originally started SoftNAS for two reasons:

  1. I was a disgruntled storage customer who was fed up with traditional on-premise storage.

    The high cost of traditional storage was one of the primary causes of the failure of a startup where I was CTO for a time. It was a painful failure, and one I felt could have been avoided had things been different.

  2. I was at a fork in the road personally. I was coming off of another failed startup – and each one of those takes its toll on you, and I’m not exactly a spring chicken, and so either I could start another one and risk it all or play it safe and go take another executive position with a Fortune whatever company, probably move my family out of Texas again, and play the politics and cash in my chips and retire someday that way.

It wasn’t an easy decision.

I just knew I was onto something because it was clear to me that customers were overpaying for traditional storage, it was too complicated and hard to deploy and use, and the capital intensive hardware model just didn’t fit how a subscription-based business got paid.  So there were technical, usability and business model issues with the existing model.  To me, this looked ripe for disruptive innovation.  Keep in mind this was 2012, before the cloud was totally obvious to anyone other than AWS.

I asked my best friend what he thought.  Should I go for it and take the risks or take the least risky path and just get another job?  He knows me well, and he said “go for it, you can always find another job if and when you need to.”  So that’s what I did.  That night I started SoftNAS and here we are 5 and a half years later.  Boy am I glad I did!

The reasons we are here haven’t really changed that much.

  1. Business model: traditional premise-based storage doesn’t match how SaaS and other subscription-driven companies operate.  That’s one of the main reasons why the cloud has taken off – the cost model scales with the revenue model and every industry is moving toward subscription-based billing models.  It’s not so much about saving money as it is about matching the spend with the income.  A wise man once told me that business is actually really easy – it’s about bringing in more than you send out… and the pay-as-you-go business model does that for SaaS and subscription-oriented businesses.  It’s also helpful for traditional businesses, too, but often for other reasons.
  2. The Cloud:  the cloud is helping customers transition into the new business model, contain costs, increase agility and flexibility. We wake up each day and get to help customers make their journey to the cloud and be a part of that with them, solving interesting technical issues and pushing ourselves to go faster and further than the day before.
  3. Enriching Customers’ Lives:  perhaps it’s because I’ve been on the customer side of IT as the guy who took the calls from irate end users, their bosses and had to deal with the fallout of storage and downtime issues, I have a lot of empathy and understanding for what customers must go through with their vendors.  While it’s impossible to be perfect, I managed to create a culture at SoftNAS that delights in making customers successful and, ideally, delighted with us.  I wish we could say we are 100% successful in doing that, but alas, I doubt any company ever gets to 100% customer delight, but we continue to improve and excel more each day to get there.  Our technical support and solutions architects are among the best in the industry, and we help customers succeed every opportunity we can.  In fact, more than 40% of our technical support calls end up with us helping customers troubleshoot and resolve issues in their own cloud infrastructure, with very few actually being bugs or defects in our product (hasn’t always been that way, but sure glad it is today).

That’s why I started SoftNAS.  To change the world and make it less painful to be a storage customer.  Of course, we are evolving well beyond our original storage roots now, with the advent of the Cloud Data Platform and the push beyond our native cloud roots into the hybrid cloud and exciting new IoT integration worlds.  But I don’t expect our approach to change too much.

We continue to focus on improving the quality of customer experience every day.  We still have “Customer Experience” meetings every week, where we talk about how customers are doing, how we’re doing at serving them, and where we can improve.  And it’s working for us – the proof is in the results.

Today we have thousands of subscribers – from the smallest DevOps teams, individual developers to the largest global enterprises and brands we all use in our lives each day. In fact, I was at lunch yesterday and someone pointed to the TV in the background and said “3 of the last 4 commercials are our customers.”  It’s incredible how fast the cloud is changing how we do business.

This is just a sampling of a few of the customers who have chosen to run their businesses on SoftNAS.

I look back on that day when I started SoftNAS after that tough decision, and think about all the challenges we have faced and overcome along the way.  I’m so very thankful to be here and have the opportunity to continue to serve our customers.  As we grow and try to keep up with the breakneck pace our customers and we put ourselves through, one saying keeps coming to mind.

“The Only Easy Day Was Yesterday”

What this means is every day you will need to work harder than the last. But when you work hard every day and see what you’re now capable of — yesterday seems easy.

I remember meeting with a wealthy former Compaq executive at a former startup and what he said that day. “I have a tremendous amount of respect for what you guys do. You come in here every day and do things no normal human being would do, knowing that only 1 in 10 startups actually make it.  I couldn’t do what you do, but I respect the guts and determination.”  He didn’t invest, but his words stuck with me.

I can tell you firsthand that starting a company and figuring out how to make it successful is the hardest thing I’ve ever done. And I had to fail more times than I can count to learn enough to finally get it right. If you’re reading this and thinking about starting a company, all I can tell you is to focus on what you believe in, don’t listen to the naysayers, work hard and focus on learning what customers want and then give it to them. And never, ever, ever give up! The breakthrough you need is just around the corner if you’re persistent enough. If you can do that, you’ll have a fighting chance to overcome the other hurdles that lie ahead. But having satisfied customers will provide you (and your investors and supporters) with the proof and the emotional fuel that’s required to keep going and winning.

So that’s why we’re here.  We like to win.  And we measure how we’re winning not just by recognized revenue, churn rate and all those other accepted SaaS and financial metrics, but by how many happy customers we create and make successful in the cloud with us.  That’s what keeps me going… along with the hope that this is the last time I have to start another company!

 

The Cloud’s Achilles Heel – The Network

SoftNAS began its life in the cloud and rapidly rose to become the #1 best-selling NAS in the AWS cloud in 2014, a leadership position we have maintained and continue to build upon today. We and our customers have been operating cloud native since 2013, when we originally launched on AWS. Over that time, we have helped thousands of customers move everything from mission-critical applications to entire data centers of applications and infrastructure into the cloud.  In 2015, we expanded support to Microsoft Azure, which has become a tremendous area of growth for our business.

By working closely with so many customers with greatly varying environments over the years, we’ve learned a lot as an organization about the challenges customers face in the cloud, and about getting to the cloud in the first place with big loads of data in the hundreds of terabytes to petabyte scale.

Aside from security, the biggest challenge area tends to be the network – the Internet.  Hybrid cloud uses a mixture of on-premises and public cloud services with data transfers and messaging orchestration between them, so it all relies on the networks.  Cloud migrations must often navigate various corporate networks and the WAN, in addition to the Internet.

The Internet is the data transmission system for the cloud, like power lines distribute power throughout the electrical grid. While the Internet has certainly improved over the years, it’s still the wild west of networking.

The network is the Achilles heel of the cloud.

Developers tend to assume that components of an application operate in close proximity to one another; i.e., a few milliseconds away across reliable networks, and that if there’s an issue, TCP/IP will handle retries and recover from any errors. That’s the context many applications get developed in, so it’s little surprise that the network becomes such a sore spot.

In reality, production cloud applications must hold up to higher, more stringent standards of security and performance than when everything ran wholly contained within our own data centers over leased lines with conditioning and predictable performance. And the business still expects SLAs to be met.

Hybrid clouds increasingly make use of site-to-site VPNs and/or encrypted SSL tunnels through which web services integrate third-party and SaaS sites and interoperate with cloud platform services. Public cloud provider networks tend to be very high quality between their data center regions, particularly when communications remain on the same continent and within the same provider. For those needing low-latency tunnels, AWS DirectConnect and Azure ExpressRoute can provide additional conditioning for a modest fee, if they’re available where you need them.

But what about the corporate WAN, which is often overloaded and plagued by latency and congestion? What about all those remote offices, branch offices, global manufacturing facilities and other remote stations that aren’t operating on pristine networks and remain unreachable by cost-effective network conditioning options?

Latency, congestion and packet loss are the usual culprits

It’s easy to overlook the fact that hybrid cloud applications, bulk data transfers and data integrations take place globally. And globally it’s common to see latencies in the many hundreds of milliseconds, with packet loss in the several percent range or higher.

In the US, we take our excellent networks for granted. The rest of the world’s networks aren’t always up to par with what we have grown accustomed to in pure cloud use cases, especially where many remote facilities are located. It’s common to see latency in the 200 to 300 millisecond range when communicating globally. When dealing with satellite, wireless or radio communications, latency and packet loss are even greater.

Unfortunately, the lingua franca of the Internet is TCP over IP; that is, TCP/IP. Here’s a chart that shows what happens to TCP/IP in the face of latency and packet loss resulting from common congestion.

[Chart: TCP/IP effective throughput versus round-trip latency and packet loss]

The X axis represents round trip latency in milliseconds, with the Y axis showing effective throughput in Kbps up to 1 Gbps, along with network packet loss in percent along the right side.  It’s easy to see how rapidly TCP throughput degrades when facing more than 40 to 60 milliseconds of latency with even a tiny bit of packet loss. And if packet loss is more than a few tenths of a percent, forget about using TCP/IP at all for any significant data transfers – it becomes virtually unusable.

Congestion and packet loss are the real killer for TCP-based communications. And since TCP/IP is used for most everything today, it can affect most modern network services and hybrid cloud operation.

This is because the TCP windowing algorithm was designed to prioritize reliable delivery over throughput and performance. Here’s how it works: each time there’s a lost packet, TCP cuts its “window” buffer size in half, reducing the number of packets being sent and slowing the throughput rate. When operating over less-than-pristine global networks, sporadic packet loss is very common, and it’s problematic when one must transfer large amounts of data to and from the cloud; TCP/IP’s susceptibility to latency and congestion renders it unusable. This well-known problem has been addressed on some networks by deploying specialized “WAN optimizer” appliances, so this isn’t a new problem – it’s one IT managers and architects are all too familiar with and have been combating for many years.
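
The chart itself isn’t reproduced here, but the widely cited Mathis et al. approximation of single-stream TCP throughput, roughly MSS / (RTT × √loss), produces the same shape. The short Python sketch below is an illustrative model under that assumption, not data taken from the original chart.

```python
import math

def tcp_throughput_bps(rtt_ms, loss_rate, mss_bytes=1460):
    """Mathis et al. approximation: single-stream TCP throughput <= MSS / (RTT * sqrt(p))."""
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

# A nominal 1 Gbps link is quickly reduced to a small fraction of its capacity.
for rtt, loss in [(40, 0.001), (200, 0.001), (200, 0.01), (800, 0.05)]:
    mbps = tcp_throughput_bps(rtt, loss) / 1e6
    print(f"RTT {rtt:>3} ms, loss {loss:.1%}: ~{mbps:.2f} Mbps ceiling")
```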

Latency and packet loss turn data transfers from hours into days, and days into weeks and months

So even though we may have paid for a 1 Gbps network pipe, latency and congestion conspire with TCP/IP to limit actual throughput to a fraction of what it would be otherwise; e.g., just a few hundred kilobits per second.  When you are moving gigabytes to terabytes of data to and from the cloud or between remote locations or over the hybrid cloud, what should take minutes takes hours, and days turn into weeks or months.

We regularly see these issues with customers who are migrating large amounts of data from their on-premises datacenters over the WAN and Internet into the public cloud.  A 50TB migration project that should take a few weeks turns into 6 to 8 months, dragging out migration projects, causing elongated content freezes and sending manpower and cost overruns through the roof vs. what was originally planned and budgeted.
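
Some back-of-the-envelope arithmetic shows how that happens. The effective rates below are assumptions for illustration rather than measured figures, but an effective rate of a few tens of megabits per second on a nominal 1 Gbps link is enough to stretch a 50 TB copy into roughly the six-month range described above.

```python
def transfer_days(data_tb, effective_mbps):
    """Days needed to move data_tb terabytes at a sustained effective throughput."""
    bits = data_tb * 1e12 * 8                      # decimal TB -> bits
    return bits / (effective_mbps * 1e6) / 86400   # seconds -> days

for mbps in (1000, 100, 25):                       # nominal 1 Gbps down to a congested WAN
    print(f"50 TB at {mbps:>4} Mbps: ~{transfer_days(50, mbps):.0f} days")
# ~5 days at line rate, ~46 days at 100 Mbps, ~185 days (about six months) at 25 Mbps
```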

As we repeatedly waited for customer data to arrive in the public cloud to complete cloud migration projects involving SoftNAS Cloud NAS, we realized this problem was acute and needed to be addressed. Many customers had approached us and asked whether we had thought about helping in this area – as far back as 2014. Several even suggested we have a look at IBM Aspera, which they said was a great solution.

In late 2014, we kicked off what turned into a several year R&D project to address this problem area. Our original attempts were to use machine learning to automatically adapt and adjust dynamically to latency and congestion conditions.  That approach failed to yield the kind of results we wanted.

Eventually, we ended up inventing a completely new network congestion algorithm (now patent pending) to break through and achieve the kind of results we see below.

We call this technology “UltraFast™.”

[Chart: UltraFast effective throughput versus latency and packet loss]

As can easily be seen here, UltraFast overcomes both latency and packet loss to achieve 90% or higher throughput, even when facing up to 800 milliseconds of latency and several percent packet loss. Even when packet loss is in the 5% to 10% range, UltraFast continues to get the data through these dirty network conditions.

I’ll save the details of how UltraFast does this for another blog post, but suffice it to say here that it uses a “congestion discriminator” that provides the optimization guidance.  The congestion discriminator determines the ideal maximum rate to send packets without causing congestion and packet loss.  And since TCP/IP constantly re-routes packets globally, the algorithm quickly adapts and optimizes for whatever path(s) the data ends up taking over IP networks end-to-end.
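
As a purely hypothetical illustration of the general idea, and explicitly not a description of SoftNAS’s patented UltraFast algorithm, a rate-based sender guided by a congestion discriminator might adjust its send rate on each feedback sample along these lines:

```python
def adjust_rate(rate_mbps, base_rtt_ms, sample_rtt_ms, loss_seen,
                backoff=0.85, probe=1.05, queue_threshold_ms=5.0):
    """Toy rate controller: treat rising RTT (queuing delay) or packet loss as a
    congestion signal and back off; otherwise probe gently toward a higher rate.
    Illustrative only; the actual UltraFast algorithm is not shown here."""
    congested = loss_seen or (sample_rtt_ms - base_rtt_ms) > queue_threshold_ms
    return rate_mbps * (backoff if congested else probe)
```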

What UltraFast means for cloud migrations

We combine UltraFast technology with what we call “Lift and Shift” data replication and orchestration. This combo makes migration of business applications and data into the public cloud from anywhere in the world a faster, easier operation. The user simply answers some questions about the data migration project by filling in some wizard forms, then the Lift and Shift system handles the entire migration, including acceleration using UltraFast.  This makes moving terabytes of data globally a simple job any IT or DevOps person can do.

Additionally, we designed Lift and Shift for “live migration”, so once it replicates a full backup copy of the data from on-premise into the cloud, it then refreshes that data so the copy in the cloud remains synchronized with the live production data still running on-premise.  And if there’s a network burp along the way, everything automatically resumes from where it left off, so the replication job doesn’t have to start over each time there’s a network issue of some kind.

Lift and Shift and UltraFast take a lot of the pain and waiting out of cloud migrations and global data movement.  It took us several years to perfect it, but now it’s finally here.

What UltraFast means for global data movement and hybrid cloud

UltraFast can be combined with FlexFiles™, our flexible file replication capabilities, to move bulk data around to and from anywhere globally. Transfers can be point-to-point, one to many (1-M) and/or many to one (M-1). There is no limitation on the topologies that can be configured and deployed.

Finally, UltraFast can be used with Apache NiFi, so that any kind of data can be transferred and integrated anywhere in the world, over any kind of network conditions.

SUMMARY

The network is the Achilles heel of the cloud. Internet and WAN latency, congestion and packet loss undermine hybrid cloud performance, delay timely and cost-effective cloud migrations, and slow global data integration and bulk data transfers.

SoftNAS’ new UltraFast technology, combined with Lift and Shift migration and Apache NiFi data integration and data flow management capabilities, yields a flexible, powerful set of tools for solving what have historically been expensive and difficult problems, with a purely software solution that runs everywhere; i.e., on VMware or VMware-compatible hypervisors and in the AWS and Azure clouds. This powerful combination puts IT in the driver’s seat and in control of its data, overcoming the cloud’s Achilles heel.

NEXT STEPS

Visit Buurst, Inc to learn more about how SoftNAS is used by thousands of organizations around the world to protect their business data in the cloud, achieve a 100% up-time SLA for business-critical applications and move applications, data and workloads into the cloud with confidence.  Register here to learn more and for early access to UltraFast, Lift and Shift, FlexFiles and NiFi technologies.

ABOUT THE AUTHOR

Rick Braddy is an innovator, leader and visionary with more than 30 years of technology experience and a proven track record of taking on business and technology challenges and making high-stakes decisions. Rick is a serial entrepreneur and former Chief Technology Officer of the CITRIX Systems XenApp and XenDesktop group and former Group Architect with BMC Software. During his 6 years with CITRIX, Rick led the product management, architecture, business and technology strategy teams that helped the company grow from a $425 million, single-product company into a leading, diversified global enterprise software company with more than $1 billion in annual revenues. Rick is also a United States Air Force veteran, with military experience in top-secret cryptographic voice and data systems at NORAD / Cheyenne Mountain Complex. Rick is responsible for SoftNAS business and technology strategy, marketing and R&D.

The IT Archaeological Dig of Technology and the Cloud

I must admit something right up front here – I love technology!  I’m a techno-geek on most every level, in addition to having done a lot of other stuff in my career and personally with technology.  Most people don’t know, but I recently got my ham radio license again after being inactive for 45 years… but that’s another story.  One of my latest radio projects I’ve been working on after hours for almost a year involves what is essentially an IoT device for ham radio antennas.

I find technology relaxing and satisfying, especially electronics, where I can get away from the stresses of business and such and just focus on getting that next surface mount component properly soldered. Given my busy schedule, I make slow but unrelenting progress on these types of background projects, but it beats wasting away in front of the TV.  In fact, I should probably order one of these hats from propellerhats.com and wear it proudly.

As a bit of background on this blog post, when I was CTO at CITRIX Systems, I used the term “archaeological dig of technology” to describe the many layers of technology deposits that I saw our customers had deployed and that we continue to see companies dealing with today. The term caught on internally and even our then CEO, Mark Templeton, picked up on it and used it from time to time, so the term stuck and resonated with other technologists over the years. Mark and I had a friendly competition going to see who could come up with the coolest tech. Mark usually won, being the real chief propeller head. We had a lot of fun winning and growing CITRIX together in those days…

So, what do I mean by the Archaeological Dig of Technology aka “The Dig”?  I would describe it as the sum total of technologies that have been deposited within an enterprise over time.

The Dig is the result of buying and deploying packaged applications, in-house applications, external service integration, mergers and acquisitions and other forms of incrementalism adding technology acquisition and automation to our businesses over time.  It’s surprising how much technology accumulates with time.

The following diagram attempts to depict a typical dig one might find at companies today. It’s by no means complete, but merely attempts to illustrate some of the components and complexities involved.  For most, this is probably an oversimplified perspective highlighting the types of technologies and related issues that have accrued.

[Diagram: a typical archaeological dig of technology]

What we see above is a complex set of application stacks running across virtual machines and physical servers, with data stored across various proprietary vendor storage gear. There are numerous hard-coded data integration paths across applications, technology stacks and SaaS providers, and tentacles to and from remote offices, branch offices and offshore manufacturing facilities spread around the globe. The challenge with global data and the cloud is the latency and congestion that limit us over the WAN and Internet.

File servers have proliferated around the world into most every nook and cranny they could fit.  And now there’s so much data piling up to be protected that for many large enterprises, weekly full backups have turned into monthly backups and the wall of data keeps getting higher.  Customers tell us that they see the day coming when even monthly full backups won’t be feasible.  The costs continue to mount and there’s no end in sight for most growing companies today.

For large enterprises, the term “dig” doesn’t adequately describe the full breadth and depth of the technological sediment involved, which is truly expansive.  For these companies, there are numerous Digs, spread across many data centers, subsidiaries and physical locations – and clouds.

For small to medium size companies, it’s amazing how many types of technologies we have deployed and integrated together to run our businesses.  While much less complex than The Digs of larger enterprises, relative to our size our digs are often just as challenging with our limited resources, and are often outsourced to someone else to deal with.

If we think about the bottommost layer of The Dig as the earliest forms of technology we acquired, for many that is still the mainframe. It’s amazing how many companies still rely on mainframe technology for much of their most critical transaction processing infrastructure. This brings with it various middleware that links mainframe beast computing with everything else.

Unlike minicomputers, which went the way of the dinosaur quickly, client-server and PC era technologies followed and stuck, comprising a large portion of most enterprises’ digs today.  Citrix helped hundreds of thousands to organize and centralize most of the client-server layer, so it’s now contained in the data centers and continuing to serve the business well today.

Web technology layers came next and became a prevalent layer that remains centralized for all users, ranging from B2C, B2B and B2E via SaaS layers that sit outside our data centers in someone else’s digs.

Then we realized we have too many servers and they’re not all busy doing work, yet they take up space and cost money to maintain and power up 24 x 7.  Enter the virtualization and server consolidation era, and the next major new layer reorganized The Dig into a more manageable set of chunks affectionately known as Virtual Machines.  VM’s made life sweet, because we can now see most of The Dig on one console and manage it by pushing a few buttons.  This is very cool!  VMware ushered in this era and owns most of this layer today.

Of course, Apple, Google and others ushered in the mobile computing era, another prevalent and recent layer that’s rapidly evolving and bringing richness to our world.  To make things easier and more convenient, Wi-Fi and 3G then 4G and next 5G wireless networks came to the rescue to tie it all together for us and make our tech world available 24 x 7 everywhere we go.

IoT is now on the horizon and promises to open amazing new frontiers by increasingly melding our physical world with the virtual one we work and live in. We have Big Data, data warehouses and analytics systems to try to make sense of everything, while The Dig becomes overwhelming as it accelerates in size and data growth, and its complexity stresses our abilities to understand, manage and keep it all secure. Speaking of security, there are entire other layers which exist solely so The Dig doesn’t get infiltrated and pilfered endlessly.

Unfortunately for many, as we have seen all too often in recent news, for some The Dig has been penetrated by hackers, exposing some of our most precious personal information to the bad guys.  As if that’s not disturbing enough, we find out that encryption, that layer which insulates us from the bad guys in cyberspace, is compromised at the edge with our Wi-Fi devices!

So, what’s the next layer?  Obviously, the clouds, machine learning and someday real Artificial Intelligence.  And we already hear the pundits telling us that AI will change everything!  Of course, it will.  Maybe it will figure out how to reorganize The Dig for us, too!

Underneath these big animal picture layers, we have the actual underlying technologies.  Now I’m not going to attempt to provide a complete taxonomy or list here, but it’s the entire gamut of devices and appliances deployed today, including mainframes, middleware, client-server, virtual servers, cloud servers, Citrix servers, provisioning and deployment tools, systems management, e-commerce systems, networking, firewalls, plus programming tools and stacks (.NET, JAVA, MFC, C++, PHP, Python, …) and traditional operating systems like Linux, Windows, Mac OS X, iOS, Android, … and the list just keeps going (way too long to list here).

What I find most amusing is how the vendor marketing hype cycle invariably tries to convince us that this latest technology wave will be the “be all, end all” that will take over and replace everything!  Nope – not even close. It’s just the next layer of The Dig being promoted for immediate adoption and installment. It will either replace an earlier layer or (more likely) add a new layer atop the existing Dig and bring with it new tentacles of integration and complexities of its own.

Perhaps an obvious question to ask is “how could this happen?” or “what can be done to keep The Dig from spiraling out of control?” or “who’s responsible for this and making sure it doesn’t happen?”

My guess is we may not like the answers to those kinds of questions.  It happens because companies need to compete, adapt and move quickly to grow. Each technology acquisition decision is usually treated as a discrete event that addresses a current set of priorities and issues evaluated in isolation, but nobody is truly responsible for or capable of managing The Dig strategy overall.  I mean, who has the title “The Dig Director” or “The Chief Dig Officer”?

Ultimately, IT is typically held responsible for keeping The Dig running, updated, patched, secured, performing well, available and operational to meet the business’ needs. IT is sometimes, but certainly not always, consulted about the next set of layers that are about to be deposited. But increasingly, IT inherits the latest layers and admits them into The Dig and becomes the custodian who’s responsible for running and maintaining it all (with something like 2% to 3% annual budget increase).

So where does “the cloud” fit into this picture?  Good question.  I suspect if you ask some, they will tell you “finally, the cloud is the one that will replace them all!”  Right.  Of course, it will. I mean, we’ve been waiting for a long time, surely this must be it!  The Dig will be completely “digitally transformed”, replacing all those other messy, pesky layers that we no longer want or respect like we once did.

Others will probably say the cloud is just one of many IT strategies we have, which is probably closer to reality for most companies, at least over the short haul.

I wish it were really that simple. In reality, “the cloud” isn’t a single thing. There are public clouds, private clouds, hybrid clouds and SaaS clouds – and each one is yet another layer coming to pile onto The Dig and create a new set of interesting technologies for us moving forward.  Most companies can only muster enough budget and resources to rewrite a few apps per year to “digitally transform” pieces of The Dig into the new world order we seek. Alas, rewriting all the apps in Java didn’t work out in the end, so can we really digitally transform everything before the next big thing appears to disrupt our progress?

Multi-cloud is the next reality coming to The Dig near you. The facts are that most companies expect to deploy across many different clouds (up to 10 or more!) and link everything together via various “hybrid cloud” layers… just a friendly heads up – it’s coming soon to The Dig near you. As shown below, industry analysts tell us that 80% of the decision-makers are already committed to adding hybrid clouds and 60% expect to operate multi-cloud environments in 2018.

[Chart: industry analyst data on hybrid and multi-cloud adoption]

This means we know what’s coming next to The Dig near us – more layers.

So, what can be done about The Dig we have today, and the new layers being regularly deposited?  For most companies, little to nothing. Each layer serves a purpose, adding value to our businesses.  Mergers and acquisitions aren’t going to stop.  Business units will continue to hire DevOps and Shadow IT to quickly develop new applications, integrate business processes with new cloud services with multiple vendors and then add it to the corporate technology collective.

For small to medium-sized corporations, there’s hope in that much of their technology can be migrated to one or more public cloud and SaaS platforms.  For others, it’s incrementalism-as-usual – do what we must today, and it’ll be someone else’s problem to deal with in the future – there’s no strategy other than surviving to live another day.

When we step back and consider what seems like a chaotic process full of uncontrolled variables and incremental decisions, I believe there’s hope to eventually unwind from the hairball architectures and reorganize our respective technology digs to make them more manageable.

One of the keys is “virtualization”.  The cloud is really a combination of virtual computing and platform services that’s both backward compatible and forward leaning; meaning, we can migrate our existing VM workloads into the cloud and run them, while we lean forward and create new services and applications by tapping into cloud platform services.  But is that the be all, end all that’s needed?  Probably not.

We need a “data strategy”. I believe there is an elemental piece that’s been missing – the Data Virtualization layer, a data access and control layer that makes business data more portable across storage systems, clouds, SaaS clouds, databases, IoT devices and the many other data islands we have today and will add to The Dig over time.

To achieve the multi-cloud and hybrid cloud diversity and integration that many believe come next, without creating more brittle hairball architectures, there must be a recognition that “data is the foundation” that everything rests upon.

If data remains corralled up and tied down to discrete, platform-specific “storage” devices, applications will never be truly freed up to become portable, multi-cloud or hybrid-cloud. Even clever innovations like modular micro-services and reusable containers will continue to be platform constrained until the data layer is virtualized and made flexible enough to quickly and easily adapt with the evolution to the multi-cloud.

In future posts, I will share details around the SoftNAS “cloud fabric” vision and what we now call the “Cloud Data Platform”, a data control layer that enables rapid construction of hybrid clouds, IoT integration and interconnecting the existing layers of The Dig across multiple clouds.

The Dig will continue to be with us, supporting our businesses as we grow and evolve with technology. It’s clear at this point that the next set of layers will be cloud-based. We will need to integrate the many existing layers globally with the cloud, while we incrementally evolve and settle into our new digs in the cloud era.

NEXT STEPS

Visit Buurst to learn more about how SoftNAS is used by thousands of organizations around the world to protect their business data in the cloud, achieve a 100% up-time SLA for business-critical applications and move applications, data and workloads into the cloud with confidence.  Register here to learn more and for early access to the Cloud Data Platform, the new data access and control plane from SoftNAS.


Consolidating File Servers into the Cloud

Cloud File Server Consolidation Overview

Maybe your business has outgrown its file servers and you’re thinking of replacing them. Or your servers are located throughout the world, so you’re considering shutting them down and moving to the cloud. It might be that you’re starting a new business and wondering if an in-house server is adequate or if you should adopt cloud technology from the start.

Regardless of why you’re debating a physical file server versus a cloud-based file server, it’s a tough decision that will impact your business on a daily basis. We know there’s a lot to think about, and we’re here to show why you should consolidate your physical file servers and move your data to the cloud.

We’ll discuss the state of the file server market and the benefits of cloud file sharing, along with some of the challenges of unstructured data that isn’t sitting in one place but is scattered around the world, and some of the newest technologies stepping up to meet those challenges.

Managing Unstructured Data

The image below is how Gartner looks at unstructured data in the enterprise. The biggest footprint of data that you have as an enterprise or a commercial user is your unstructured data. It’s your files.

[Figure: Gartner’s view of unstructured data in the enterprise]

In the data center, you buy a large single platform, perhaps a petabyte or even larger, to house all of that file data. What creeps up on us is the data that doesn’t live in the data center – the data that isn’t right under your nose and surrounded by best practices. It lives on distributed file servers around the world: an enterprise with 50 or 100 locations, be they branch offices, distribution centers, manufacturing facilities, oil rigs, etc., effectively has 50 or 100 little data centers.

The analyst community (Gartner, Forrester and 451) tells us that almost 80% of the unstructured data you’re dealing with actually sits outside of your well-protected data center. This presents challenges for an enterprise because it’s outside of your control.

It’s been difficult to leverage the cloud for unstructured data. Customers by and large are being fairly successful moving workloads and applications to the cloud, along with the storage those applications use. However, when you’re talking about user data and your users are all around the world, you’re dealing with distance, latency, general network unavailability and multiple hops through routing.

This has led to some significant challenges, such as data islands popping up everywhere. You have massive amounts of corporate data that isn’t subject to the same kind of data management and security that you would have in an enterprise data center, including backup, recovery, audit, compliance, secure networks and even physical access.

And that is what has led to a real “bleeding from the neck” problem: how am I going to get this huge amount of data scattered around the world under our control?

Unstructured Data Challenges

These are some of the issues you find: security problems, lost files, users calling in and saying, “Oops, I made a mistake. Can you restore this for me?” And the answer quite often is, “No. You people in that location are supposed to be backing up your own file server.”

Bandwidth issues are significant as people are trying to have everyone in the world work from a single version of the truth and they’re trying to all look at the same data. But how do you do that when it’s file data?

You have a location in London trying to ship big files to New York. NY then makes some changes and ships the files to India. Yet people are in different time zones. How do you make sure they’re all working off of the same version of information? That has led to the kind of problems driving people to the cloud. Large enterprises are trying to get to the cloud not only with their applications, but with their data.

If you look at what Gartner and IDC say about the move to the cloud, you see that larger enterprises have a cloud-first strategy. We’re seeing SMBs (small and medium businesses) and SMEs (small and medium enterprises) also have a cloud-first strategy. They’re embracing the cloud and moving significant amounts of their workloads to the cloud.

[Figure: cloud-first adoption trends]

More companies are going to install a cloud IT infrastructure at the expense of private clouds. We see customers all the time who say, “I have a 300,000 sq. ft. data center. My objective is to have a 100,000 sq. ft. data center within the next few months.”

NAS/SAN vs. Hyperconverged vs. The Cloud

And so many customers are now saying, “What am I going to do next? My maintenance renewal is coming up. My capacity is reaching its limit because unstructured data is growing in excess of 30% annually in the enterprise. So what is the next thing I am going to do?”

Am I going to add more on-premises storage for my files? Am I going to take all of my branch offices that currently have 4 terabytes and double them to 8 terabytes?

You probably have seen the emergence of hyperconverged hardware — single instance infrastructure platforms that do applications, networking and storage. It’s a newer, different way of having an on-premise infrastructure. With a hyperconverged infrastructure, you still have some forklift upgrade work both in terms of the hardware platform and in terms of the data.

[Figure: NAS/SAN vs. hyperconverged vs. cloud comparison]

Customers that are moving off of traditional NAS and SAN systems onto hyperconverged still have to bring in the new hardware, migrate all the data and get rid of the old hardware, so it remains a lift and shift within the data center, with a hardware footprint to match.

Because of that, a lot of SoftNAS customers are asking, “Is it possible to do a lift and shift to the cloud? I don’t want to get the infrastructure out of my data center and out of my branch offices. I don’t want to be in the file server business. I want to be in the banking, or the retail, or the transportation business.”

I want to let the cloud providers — Azure, AWS, or Google — supply the physical resources, but it’s my data and I want everybody to have access to it. That’s opened the world to a lift and shift into a cloud-based infrastructure, which means you and your peers are going through a pros-and-cons discussion. If you look at on-premises versus hyperconverged versus the cloud, the good news is that all of them have a secure infrastructure available, from the level of physical access, authentication and encryption – in transit, at rest or in use – all the way down to rights management.


What you’ll find is that all the layers of security apply across the board. In that area, cloud has become stronger in the last 24 months. In terms of infrastructure management — which is getting to be a really key budget line item for most IT enterprises — for on-premise and hyperconverged, you’re managing that. You’re spending time and effort on physical space, power, cooling, upgrade planning, capacity planning, uptime and availability, disaster recovery, audit and compliance.

The good news with the cloud is you get to off load that to someone else. Probably the biggest benefit that we see is in terms of scalability. It’s in terms of the businesses that say, “I have a pretty good handle on the growth rates of my structured data but my unstructured data is a real unpredictable beast. It can change overnight. We may acquire another company and find out we have to double the size of our unstructured data share. How do I do that?” Scalability is a complicated task if you’re running an on premise infrastructure.

With the cloud, someone else is doing it — either at AWS, Azure, Google, etc. From a disaster recovery perspective, you pretty much get to ride on the backs of established infrastructure. The big cloud providers have great amounts of staff and equipment to ensure that failover, availability, pointing to a second copy, roll-back etc, has already been implemented and tested.

Adding more storage becomes easy too. From a financial perspective, the way you pay for an on-premises environment is that you buy your infrastructure and then you use it. It’s the same thing with hyperconverged, although it has lower price points than traditional legacy NAS and SAN. But the fact is only the cloud gives you the ability to say, “I’m going to pay for exactly what I need. I’m not buying 2 terabytes because I currently need 1.2 terabytes and I’m growing 30% per annum.” If you’re using 1.2143 terabytes, that’s what you pay for in the cloud.
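
A trivial worked example makes the point, using a hypothetical flat price per terabyte-month purely for illustration (real cloud pricing varies by region, storage type and vendor):

```python
price_per_tb_month = 100.0   # hypothetical unit price, for illustration only

provisioned_tb = 2.0         # capacity bought up front on-premises or hyperconverged
actual_usage_tb = 1.2143     # capacity actually consumed this month in the cloud

print(f"Provisioned up front: ${provisioned_tb * price_per_tb_month:,.2f}/month")
print(f"Pay for what you use: ${actual_usage_tb * price_per_tb_month:,.2f}/month")
```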

A Single Set of Data

But just as important, customers have found that there is a business use case: the ability to do things from a centralized, consolidated cloud viewpoint which you simply cannot do with a traditional distributed storage infrastructure.

If you think about what customers are asking for now, more and more enterprises are saying “I want centralized data.” That’s one of the reasons they’re moving to the cloud. They want security. They want to make sure that it’s using best practices in terms of authentication, encryption, and token management. And whatever they use has to be able to scale up for their business.


But how about from a use case perspective? You need to make sure you have data consistency. Meaning, if I have people on my team in California, New York and London, I need to make sure they’re not stepping on each other’s work as they collaborate on projects.

You need to make sure you have flexibility. If you’re getting rid of old infrastructure in 20 or 30 branch offices, then you need to get rid of them easily and quickly spin up the ability for them to access centralized data within minutes. Not within hours and weeks of waiting for new hardware to come in.

Going back to data consistency, if I’m going to have one copy of the truth that everyone is using, I need to make sure distributed file locking works. Because, face it, that’s what file servers do; it has been the foundation of file servers since they were invented. Those are the kinds of benefits people gain when they move their file servers into the cloud: they cut costs and increase flexibility.

Cloud File Server Reference Architecture

Here’s an example. In the image below, a SoftNAS customer needed to build a highly available 100 TB cloud NAS on AWS. The NAS needed to be accessed in the cloud via the CIFS protocol, and the data needed to be available not only in the primary location but also across regions and different continents.

[Diagram: cloud file server consolidation reference architecture on AWS]

They needed access from their remote offices, along with Active Directory integration and distributed file locking as well.

The solution, provided along with Talon FAST, deployed two SoftNAS instances in two separate availability zones – controller A in one zone and controller B in the second zone. We leveraged S3 and EBS for different types of applications according to their SLAs.

We set up replication between the two nodes so the data is available in two different places within the region. We deployed HA on top of that to provide availability with minimal downtime, giving the flexibility to migrate data or fail over to the other node without manual intervention.
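
For readers who want a feel for the AWS side of such a deployment, here is a hedged boto3 sketch that launches one controller instance in each of two availability zones. The AMI, subnet IDs and instance type are placeholders, and the SoftNAS-specific steps (storage pools, SnapReplicate pairing, SNAP HA) are configured through SoftNAS itself and are not shown.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder IDs: a SoftNAS marketplace AMI and one subnet per availability zone.
ami_id = "ami-0123456789abcdef0"
subnets = {"us-east-1a": "subnet-aaaa1111", "us-east-1b": "subnet-bbbb2222"}

controllers = {}
for az, subnet_id in subnets.items():
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType="m5.2xlarge",          # size to the workload's I/O needs
        MinCount=1,
        MaxCount=1,
        SubnetId=subnet_id,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": f"softnas-controller-{az}"}],
        }],
    )
    controllers[az] = resp["Instances"][0]["InstanceId"]

print(controllers)   # one controller instance ID per availability zone
```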

Next Steps

You can also try SoftNAS Cloud NAS free for 30 days to start consolidating your file servers in the cloud:
