Meeting Cloud File Storage Cost and Performance Goals – Harder than You Think

According to Gartner, by 2025, 80% of enterprises will have shut down their traditional data centers; today, 10% already have. We see this trend firsthand: since 2013, we have helped thousands of these businesses migrate workloads and business-critical data from on-premises data centers into the cloud. Most of those workloads have been running 24 x 7 for 5+ years. Some of them have been digitally transformed (code for “rewritten to run natively in the cloud”).

The biggest challenge in adopting the cloud isn’t the technology shift – it’s finding the right balance of storage cost vs performance and availability that justifies moving data to the cloud. We all have a learning curve as we migrate major workloads into the cloud. That’s to be expected as there are many choices to make – some more critical than others.

Applications in the cloud

Many of our largest customers operate mission-critical, revenue-generating applications in the cloud today. Business relies on these applications and their underlying data for revenue growth, customer satisfaction, and retention. These systems cannot tolerate unplanned downtime. They must perform at expected levels consistently… even under increasingly heavy loads, unpredictable interference from noisy cloud neighbors, occasional cloud hardware failures, sporadic cloud network glitches, and other anomalies that just come with the territory of large-scale data center operations.

In order to meet customer and business SLAs, cloud-based workloads must be carefully designed. At the core of these designs is how data will be handled. Choosing the right file service component is one of the critical decisions a cloud architect must make.

Application performance, costs, and availability

For customers to remain happy, application performance must be maintained. Easier said than done when you no longer control the IT infrastructure in the cloud.

So how does one negotiate these competing objectives around cost, performance, and availability when you no longer control the hardware or virtualization layers as you did in your own data center? And how can these variables be controlled and adapted over time to keep things in balance? In a word – control. You must correctly choose where to give up control and where to maintain control over key aspects of the infrastructure stack supporting each workload.

One allure of the cloud is that it’s (supposedly) going to simplify everything into easily managed services, eliminating the worry about IT infrastructure forever. For non-critical use cases, managed services can, in fact, be a great solution. But what about when you need to control costs, performance, and availability?

Unfortunately, managed services must be designed and delivered for the “masses”, which means tradeoffs and compromises must be made. And to make these managed services profitable, significant margins must be built into the pricing models to ensure the cloud provider can grow and maintain them.

In the case of public cloud shared file services like AWS Elastic File System (EFS) and Azure NetApp Files (ANF), performance throttling is required to prevent thousands of customer tenants from overrunning the limited resources actually available. To get more performance, you must purchase and maintain more storage capacity (whether you actually need that add-on storage or not). As your storage capacity inevitably grows, so do the costs. To make matters worse, much of that data is inactive most of the time, so every month you are paying to store data you rarely, if ever, access. And the cloud vendors have no incentive to help you reduce these excessive storage costs, which keep climbing as your data grows each day.
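
To put numbers on how performance is tied to capacity: under the default EFS bursting model, baseline throughput has scaled with the amount of data stored, roughly 50 MiB/s per TiB per AWS’s published figures at the time of writing (verify against current documentation). A quick back-of-the-envelope sketch:

    # Rough sketch of EFS bursting-mode throughput scaling. The 50 MiB/s
    # per TiB figure is AWS's historically published baseline; verify it
    # against current documentation before relying on it.
    def efs_baseline_throughput_mibps(stored_tib: float) -> float:
        return stored_tib * 50.0

    # To sustain ~500 MiB/s of baseline throughput, you would need ~10 TiB
    # stored, whether or not you actually need that much capacity.
    print(efs_baseline_throughput_mibps(10))  # 500.0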

After watching this movie play out with customers for many years, working closely with the largest to smallest businesses across 39 countries, we at Buurst™ decided to address these issues head-on. Instead of charging customers what is effectively a “storage tax” on their growing cloud storage capacity, we changed everything by offering Unlimited Capacity. That is, with SoftNAS® you can store an unlimited amount of file data in the cloud at no extra cost (aside from the underlying cloud block and object storage itself).

SoftNAS has always offered both data compression and deduplication, which when combined typically reduces cloud storage by 50% or more. Then we added automatic data tiering, which recognizes inactive and stale data, archiving it to less expensive storage transparently, saving up to an additional 67% on monthly cloud storage costs.
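
Here is a hypothetical worked example of how those savings compound (illustrative figures only; actual results depend entirely on your data):

    # Illustrative savings math (hypothetical figures; real savings depend on
    # how compressible and duplicated your data is, and how much of it is cold).
    raw_tb = 100.0
    after_efficiency = raw_tb * 0.5            # ~50% from compression + dedup
    hot_share, cold_share = 0.3, 0.7           # assume 70% of the data is inactive
    tiering_discount = 0.67                    # up to 67% cheaper on the cold tier

    effective_tb = (after_efficiency * hot_share
                    + after_efficiency * cold_share * (1 - tiering_discount))
    print(effective_tb)  # ~26.6 TB-equivalent billed monthly, vs. 100 TB raw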

Just like when you managed your file storage in your own data center, SoftNAS keeps you in control of your data and application performance. Instead of turning control over to the cloud vendors, you maintain total control over the file storage infrastructure. This gives you the flexibility to keep costs and performance in balance over time.

To put this in perspective, without taking data compression and deduplication into account yet, look at how Buurst SoftNAS costs compare:

[Chart: SoftNAS vs. NetApp ONTAP, Azure NetApp Files, and AWS EFS monthly costs]

These monthly savings really add up. And if your data is compressible and/or contains duplicates, you will save up to 50% more on cloud storage because the data is compressed and deduplicated automatically for you.

Fortunately, customers have alternatives to choose from today:

  1. GIVE UP CONTROL – use cloud file services like EFS or ANF, pay for both performance and capacity growth, and give up control over your data and your ability to deliver on SLAs consistently.
  2. KEEP CONTROL – keep control of your data and business with Buurst SoftNAS, and balance storage costs and performance to meet your SLAs and grow more profitably.

Sometimes cloud migration projects are so complex and daunting that it’s advantageous to take shortcuts just to get everything up and running as a first step. We commonly see customers choose cloud file services as an easy first stepping stone in a migration. Then these same customers proceed to the next step – optimizing costs and performance to operate the business profitably in the cloud – and contact Buurst to take back control, reduce costs, and meet SLAs.

As you contemplate how to reduce cloud operating costs while meeting the needs of the business, keep in mind that a pivotal decision lies ahead: keep control or give up control of your data, its costs, and its performance. For some use cases, the simplicity of cloud file services is attractive, the data capacity is small enough, and the performance demands are low enough that the convenience of files-as-a-service is the best choice. As you move business-critical workloads where costs, performance, and control matter, or where datasets are large (tens to hundreds of terabytes or more), keep in mind that Buurst never charges a storage tax on your data and keeps you in control of your business destiny in the cloud.

Next steps:

Learn more about how SoftNAS can help you maintain control and balance cloud storage costs and performance in the cloud.

Tired of Slow AWS EFS Performance? Need Predictable NFS and CIFS with Active Directory?

I’m Rick Braddy, Founder and CTO of Buurst. Like you, I have been in a situation where I was dependent upon Storage-as-a-Service and forced to live with slow performance, with no control or say in how NFS storage was structured or delivered.

In my case, it was on-premises on VMware, using NFS-mounted, multi-user, shared Isilon® storage arrays in a private cloud, where we could not install our own storage appliances. We were stuck and ended up having to switch data centers to resolve the storage performance issue that plagued our business. It’s very frustrating when users complain about slow application performance, the bosses get riled up, and there’s little to nothing you can do about it. So I feel your pain.

Today customers have choices, along with the freedom to choose. So if you’re experiencing slow or inconsistent performance with your current storage provider (AWS EFS or other) and need answers, you’ve come to the right place. On the other hand, if you haven’t yet deployed and are evaluating your options, this is a great time to get the facts before you find yourself stuck and forced to switch horses later, as we see all too often.

We commonly see Amazon customers who have tried AWS EFS (and other options) and hit slow performance, high costs, and other barriers that do not meet their business requirements. But it’s not just throttling from burst limits that slows performance. The very nature of shared, multi-tenant filesystems makes them prone to bottlenecks, as shown in the following animated image.

A shared, multi-tenant filesystem must deal with millions of concurrent filesystem requests from thousands of other companies, who compete for filesystem resources within a Shared Data Service Lane. It’s easy to see why performance becomes slow and unpredictable in a shared, multi-tenant filesystem that must service so many competing requests. In such an environment, bottlenecks and collisions at the network and storage levels are unavoidable at times. 

Contrast that with having a Dedicated Data Express Lane like Buurst SoftNAS, your own exclusive Cloud NAS with its caching and resources focused on your data alone, where your applications’ needs are met consistently at maximum performance.

Below we see the published AWS EFS burst limits. As applications use more throughput, burst credits are consumed. Applications that use too much I/O are penalized: Amazon EFS throttles their performance, one of the most common causes of slow AWS EFS performance.
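
If you suspect throttling, one way to confirm it is to watch the BurstCreditBalance metric that EFS publishes to CloudWatch. A minimal sketch using boto3 (the region and file system ID are placeholders):

    # Minimal sketch: watch an EFS file system's burst credit balance via
    # CloudWatch. The region and file system ID are placeholders.
    from datetime import datetime, timedelta
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EFS",
        MetricName="BurstCreditBalance",
        Dimensions=[{"Name": "FileSystemId", "Value": "fs-12345678"}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Minimum"],
    )
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Minimum"])  # a falling balance means throttling ahead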

Amazon EFS is a shared, multi-tenant filesystem, subject to burst limits and multi-tenant bottlenecks. SoftNAS is a dedicated, high-performance filesystem with no burst limits and access to full AWS EBS performance, regardless of how much storage capacity you use. This is the first core difference between the two – shared vs. dedicated infrastructure. 

There’s a place in the market where each type of filesystem fits and serves its individual purpose. The question is, which approach best meets your application and business needs? 

Performance results can also vary based upon many other factors and each application’s unique behavior. Therefore, we recommend customers run benchmarks and test their applications at scale with simulated user loads before finalizing the components that will be placed into production as part of their cloud stack. This is often best accomplished during a proof-of-concept stage before applications go into production. Unfortunately, customers often discover too late in the process that they have performance issues and are in urgent need of a solution. 

Since 2013, SoftNAS has provided thousands of customers with high-performance, dedicated Cloud NAS solutions. We pioneered the “cloud NAS” category as the #1 best-selling NAS in the cloud. By coupling industry standards like ZFS on Linux with our patented cross-zone high-availability and high-performance AWS EBS and S3-backed storage technologies, SoftNAS has consistently provided customers with the most flexibility, choice, and value for business-critical NFS and CIFS with Active Directory on AWS®.

What do customers say about slow AWS EFS performance?

EFS has been in the marketplace for several years now, so there’s been plenty of time to hear from customers about their experiences. It’s helpful to consider what customers see and report.

Here are some examples of comments our Support Team sees. You can also search for “AWS EFS slow performance” to do your own research.

“We want to migrate about 5 percent of our 150+ Centos based servers from on-prem to AWS and need NFS and CIFS but AWS does not offer EFS in Canada.”
“Hi we were suggested to contact you guys from our AWS rep in helping us deal with the issues EFS has due to the slowness of I/O … our AMI’s currently mount our EFS mount point and due to the nature of the software we are using it tends to use a lot of file_get_contents, is_dir functionality which is very slow with EFS compared to a standalone server with SSD. Is this something you guys can help us with and get it up and running in our VPC?”
“We host multiple Magento based websites for our clients in AWS and looking for a solution that we can use to replace AWS EFS as it is very slow with many small files and it is not yet available in many regions.”
“I am trying the trial of SoftNAS in our AWS cloud for potential use for SMB/CIFS & NFS to be used as a replacement for EFS. We have reached the limit of the capabilities of EFS and so looking into other products.”
“I need a demo by which I can decide the SoftNAS is suitable AMI for my production use case. This is very high priority as I need to implement this for persistence storage for Kubernetes as EFS is not available in Mumbai region.”

“Need ability to use S3 as the backend storage instead of EBS/EFS as they are much too expensive for us.”

The above is just a sample of what we typically hear from customers after they evaluate or deploy EFS. EFS is a solid NFS-as-a-service offering that meets many customers’ cloud file storage needs. Some customers, however, need more than a multi-tenant, shared filesystem can deliver.

What’s needed is a full-featured, enterprise-grade Cloud NAS that delivers: 

  • A tunable level of dedicated, predictable performance 
  • Guaranteed cross-zone high-availability with an up-time SLA 
  • Storage efficiency features (e.g., compression, deduplication) that mitigate the costs of cloud storage 
  • Support for EBS (Provisioned IOPS, SSD, magnetic) and highly durable, low-cost S3 storage 
  • A full-featured POSIX-compliant filesystem with full NFS that supports all the expected features at scale 
  • CIFS with Active Directory integration and full ACL support for Windows workloads 
  • Ability to scale storage into the hundreds of terabytes or petabytes while maintaining performance consistency at a reasonable cost level that doesn’t break the bank or derail the project 
  • Storage auto-tiering that maximizes performance while minimizing storage costs 
  • Data checksumming and other data integrity features that verify retrieved data is accurate and correct 
  • Instant storage snapshots and writable clones that provide rapid recovery and access to previous versions without expensive, time-consuming copies of data
  • Integrated data migration tools that make lifting and shifting production workloads into the cloud faster and easier 

    How can SoftNAS address all these issues better than AWS EFS?

    Simple. That’s what it was designed to do from the beginning – provide cloud-native, enterprise-grade “cloud NAS” capabilities in a way that squeezes maximum performance from EC2, EBS, and S3 storage. Cloud storage and data management are all we have done, every day, since 2013, which makes us among the world’s top cloud storage experts.

    Below we see SoftNAS running as a Linux virtual machine image on EC2. First, EBS and S3 cloud storage is assigned to the SoftNAS image that’s been launched from AWS Marketplace or the AWS Console. Next, that storage is aggregated into ZFS filesystems as “storage pools”. Then, these pools of storage can be thin-provisioned into “volumes” that are shared out as NFS, CIFS, AFP, or iSCSI for use by applications and users.

    SoftNAS leverages arrays of native EBS block devices and S3 buckets as underlying, scale-out cloud storage. Data is striped across arrays of EBS and S3 devices, thereby increasing the available IOPS (I/O operations per second). To add more performance and storage capacity, add more devices at any time.
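
    As a rough illustration of why striping helps (assumed per-volume figures; instance-level EBS limits still apply):

        # Back-of-the-envelope aggregate IOPS when striping across EBS volumes
        # (assumed per-volume figure; instance-level EBS limits still apply).
        per_volume_iops = 3000      # e.g., a gp3 volume's baseline IOPS
        volume_count = 4
        print(per_volume_iops * volume_count)  # 12000 IOPS across the stripe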

    SoftNAS runs on a dedicated instance within your AWS account, so everything is under your control – how many and what types of storage devices are attached, how much RAM is available for caching, how much direct-attached SSD to use (based on instance choice), along with how much CPU to allocate to compression, deduplication, and performance. Incidentally, compression and deduplication can reduce your actual storage costs by up to 80%, depending upon the nature of your data (you don’t get any storage efficiency features with EFS). 

    And because ZFS is a copy-on-write filesystem and SoftNAS automatically creates storage snapshots for you based on policies, you always have previous versions of your data available at your fingertips in case something ever happens. You can quickly go back and recover your data without running a full backup restore (you don’t get storage snapshots or this kind of recovery with EFS).

    ZFS provides complete POSIX-compliant semantics as the SoftNAS filesystem. CentOS Linux provides full NFS v4.1 and NFS v3 filesystem semantics, including native file_get_contents, is_dir functionality. And because SoftNAS is built upon Linux, you get native NFS support with no functional limitations. 

    SoftNAS supports Windows CIFS with SMB 3 protocol, with Active Directory integration for millions of AD objects. All of these details matter when you’re deploying applications that rely on proper locking, filesystem permissions, full NFS semantics, NTFS ACLs, etc.   

    SoftNAS delivers reliable performance with two levels of caching – RAM and direct-attached SSD for massive read caching – which together provide predictable performance that YOU control. And later, if you need even more performance, you can upgrade your EC2 instances to add more IOPS and greater throughput with more network bandwidth.

    SoftNAS keeps you in control instead of turning your data over to the multi-tenant EFS filesystem and living with its various limitations. 
    [Diagram: SoftNAS cost management via storage tiering]
    • Tier 1 – highest performance, most expensive storage, most active data
    • Tier 2 – moderate performance, medium-expense storage, less active data
    • Tier 3 – lowest performance, least expensive storage, least active data (highly durable S3 object storage)

    Using the combination of three data efficiency methods, SoftNAS can reduce cloud storage costs more than any other alternative available today. No other vendor offers these kinds of cloud storage cost savings while maximizing performance and providing the level of data protection and control that SoftNAS delivers. Additionally, by working with our Sales team, you can get volume discounts for larger capacity implementations to save you even more. 

    You may wonder “who else uses SoftNAS?”  Look at who’s chosen SoftNAS for their business-critical cloud application deployments below. This is just a sample of recognizable customers across 36 countries globally who trust their business data to SoftNAS on a 24 x 7 x 365 basis. 


    Customers tell us that Buurst provides the best technical support and cloud expertise available, along with its world-class SoftNAS Cloud NAS product.

    By the way, over the years, we have developed an excellent relationship with AWS. AWS suggested that we create this and other comparison material to help customers understand the differences between EFS and SoftNAS. We are great partners with one common objective – to make our customers successful together. 

    How did this come about?

    I started Buurst in 2012 as a former traditional storage customer, so I know what you’re dealing with and what you need and want in a storage software product and partner. I believed in AWS and the future of the public cloud before it was apparent to everyone that it would be the next big thing. As a result, we were early to market in 2013 and have many years of head start vs. the alternatives. 

    Today, I ensure our team delivers excellence to our customers every day. Our management team ensures we hire, train, and provide the best technical support resources available globally. In addition, we bring years of cloud storage and multi-cloud performance experience to the table to ensure your success. 

    Customers always come first at Buurst. Our cloud experts are here to help you quickly address your most pressing cloud data management needs and ensure your success in the cloud – something we’ve done for thousands of customers since 2013. You can count on Buurst to be there with you. We regularly help customers with networking, security, VPCs, and related areas.

    For example, our networking and storage experts have helped customers deploy thousands of VPCs in just about every imaginable security and network topology.

    Check out our Best Practices Learned from 1,000 VPC Configurations. This is just one of many examples of how Buurst helps its customers make the journey to the cloud faster, easier, and better than going it alone.

    What regions is SoftNAS available in?

    Our products are available in all regions globally. We typically add support for new regions within 60 days or less (as soon as our QA team can test and certify the product in the new region). This means you can count on Buurst to be where you need it to be when you need it there. 

    What are my options to get started?

    There are many ways to get started with Buurst. If you need help fast, I recommend you reach out to our solutions team or call us at 1-832-495-4141 to see how Buurst can address your cloud file storage and data management needs.

    Need help migrating your data from on-premises or EFS?

    SoftNAS now includes automated data migration capabilities that take the pain out of migration. We can also assist you in the planning and actual migration.  Learn more.

    Prefer some help getting started?

    If you’d prefer to see a demo first or get some assistance evaluating your options further, schedule a demo and free consultation with one of our experts now and get your questions answered one-on-one. 

    Need more information before proceeding?

    That’s understandable. Here are some additional helpful links to more information:

    Remember, we are here and ready to help. You don’t have to go it alone anymore – reach out, and let’s schedule a time together to explore how our cloud experts can be of service and quickly address your needs – at no cost and no obligation. Click here to schedule a free consultation now.

    Thank you for visiting Buurst. Here’s to your cloud success! We look forward to being of service. 

    The Cloud’s Achilles Heel – The Network

    SoftNAS began its life in the cloud and rapidly rose to become the #1 best-selling NAS in the AWS cloud in 2014, a leadership position we have maintained and continue to build upon today. We and our customers have been operating cloud native since 2013, when we originally launched on AWS. Over that time, we have helped thousands of customers move everything from mission-critical applications to entire data centers of applications and infrastructure into the cloud.  In 2015, we expanded support to Microsoft Azure, which has become a tremendous area of growth for our business.

    By working closely with so many customers in greatly varying environments over the years, we’ve learned a lot as an organization about the challenges customers face in the cloud – and in getting to the cloud in the first place with big loads of data at the hundreds-of-terabytes to petabyte scale.

    Aside from security, the biggest challenge area tends to be the network – the Internet.  Hybrid cloud uses a mixture of on-premises and public cloud services with data transfers and messaging orchestration between them, so it all relies on the networks.  Cloud migrations must often navigate various corporate networks and the WAN, in addition to the Internet.

    The Internet is the data transmission system for the cloud, like power lines distribute power throughout the electrical grid. While the Internet has certainly improved over the years, it’s still the wild west of networking.

    The network is the Achilles heel of the cloud.

    Developers tend to assume that components of an application are operating in close proximity to one another; i.e., a few milliseconds away across reliable networks, and that if there’s an issue, TCP/IP will handle retries and recover from any errors. That’s the context many applications get developed in, so it’s little surprise that the network becomes such a sore spot.

    In reality, production cloud applications must be held to higher, more stringent standards of security and performance than when everything ran wholly contained within our own data centers, over leased lines with conditioning and predictable performance. And the business still expects SLAs to be met.

    Hybrid clouds increasingly make use of site-to-site VPNs and/or encrypted SSL tunnels through which web services integrate third-party and SaaS sites and interoperate with cloud platform services. Public cloud provider networks tend to be very high quality between their data center regions, particularly when communications remain on the same continent and within the same provider. For those needing low-latency tunnels, AWS Direct Connect and Azure ExpressRoute can provide additional conditioning for a modest fee, if they’re available where you need them.

    But what about the corporate WAN, which is often overloaded and plagued by latency and congestion? What about all those remote offices, branch offices, global manufacturing facilities, and other remote stations that aren’t operating on pristine networks and remain unreachable by cost-effective network conditioning options?

    Latency, congestion and packet loss are the usual culprits

    It’s easy to overlook the fact that hybrid cloud applications, bulk data transfers and data integrations take place globally. And globally it’s common to see latencies in the many hundreds of milliseconds, with packet loss in the several percent range or higher.

    In the US, we take our excellent networks for granted. The rest of the world’s networks aren’t always up to par with what we have grown accustomed to, especially where many remote facilities are located. It’s common to see latency in the 200 to 300 millisecond range when communicating globally. When dealing with satellite, wireless, or radio communications, latency and packet loss are even greater.

    Unfortunately, the lingua franca of the Internet is TCP over IP; that is, TCP/IP. Here’s a chart that shows what happens to TCP/IP in the face of latency and packet loss resulting from common congestion.

    [Chart: TCP throughput vs. round-trip latency and packet loss]

    The X axis represents round trip latency in milliseconds, with the Y axis showing effective throughput in Kbps up to 1 Gbps, along with network packet loss in percent along the right side.  It’s easy to see how rapidly TCP throughput degrades when facing more than 40 to 60 milliseconds of latency with even a tiny bit of packet loss. And if packet loss is more than a few tenths of a percent, forget about using TCP/IP at all for any significant data transfers – it becomes virtually unusable.

    Congestion and packet loss are the real killers for TCP-based communications. And since TCP/IP is used for nearly everything today, this affects most modern network services and hybrid cloud operations.

    This is because the TCP windowing algorithm was designed to prioritize reliable delivery over throughput and performance. Here’s how it works: each time a packet is lost, TCP cuts its “window” buffer size in half, reducing the number of packets in flight and slowing the throughput rate. When operating over less-than-pristine global networks, sporadic packet loss is very common, and it’s problematic when one must transfer large amounts of data to and from the cloud – TCP/IP’s susceptibility to latency and congestion can render it unusable. This well-known problem has been addressed on some networks by deploying specialized “WAN optimizer” appliances, so it isn’t new – it’s one IT managers and architects are all too familiar with and have been combating for many years.
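
    The well-known Mathis et al. approximation captures this behavior: steady-state TCP throughput is bounded by roughly (MSS / RTT) x (C / sqrt(p)), where p is the packet-loss rate and C is about 1.22. A quick sketch (the standard model, not the chart data above) shows how brutal the math gets:

        import math

        # Mathis et al. approximation of steady-state TCP throughput:
        #   throughput <= (MSS / RTT) * (C / sqrt(p)), with C ~ sqrt(3/2).
        # Standard model for illustration, not SoftNAS's measured data.
        def tcp_throughput_bps(mss_bytes: int, rtt_seconds: float, loss_rate: float) -> float:
            c = math.sqrt(1.5)
            return (mss_bytes * 8 / rtt_seconds) * (c / math.sqrt(loss_rate))

        # A 1460-byte MSS at 200 ms RTT with 1% packet loss: ~715 kbps,
        # no matter how fat the underlying pipe is.
        print(tcp_throughput_bps(1460, 0.200, 0.01))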

    Latency and packet loss turn data transfers from hours into days, and days into weeks and months

    So even though we may have paid for a 1 Gbps network pipe, latency and congestion conspire with TCP/IP to limit actual throughput to a fraction of what it would be otherwise; e.g., just a few hundred kilobits per second.  When you are moving gigabytes to terabytes of data to and from the cloud or between remote locations or over the hybrid cloud, what should take minutes takes hours, and days turn into weeks or months.

    We regularly see these issues with customers who are migrating large amounts of data from their on-premises datacenters over the WAN and Internet into the public cloud.  A 50TB migration project that should take a few weeks turns into 6 to 8 months, dragging out migration projects, causing elongated content freezes and sending manpower and cost overruns through the roof vs. what was originally planned and budgeted.

    As we repeatedly waited for customer data to arrive in the public cloud to complete cloud migration projects involving SoftNAS Cloud NAS, we realized this problem was acute and needed to be addressed. Many customers approached us and asked whether we had thought about helping in this area – as far back as 2014. Several even suggested we have a look at IBM Aspera, which they said was a great solution.

    In late 2014, we kicked off what turned into a several year R&D project to address this problem area. Our original attempts were to use machine learning to automatically adapt and adjust dynamically to latency and congestion conditions.  That approach failed to yield the kind of results we wanted.

    Eventually, we ended up inventing a completely new network congestion algorithm (now patent-pending) to break through and achieve the kind of results we see below.

    We call this technology “UltraFast™.”

    [Chart: UltraFast throughput vs. latency and packet loss]

    As can easily be seen here, UltraFast overcomes both latency and packet loss to achieve 90% or higher throughput, even when facing up to 800 milliseconds of latency and several percent packet loss. Even when packet loss is in the 5% to 10% range, UltraFast continues to get the data through these dirty network conditions.

    I’ll save the details of how UltraFast does this for another blog post, but suffice it to say here that it uses a “congestion discriminator” that provides the optimization guidance.  The congestion discriminator determines the ideal maximum rate to send packets without causing congestion and packet loss.  And since TCP/IP constantly re-routes packets globally, the algorithm quickly adapts and optimizes for whatever path(s) the data ends up taking over IP networks end-to-end.

    What UltraFast means for cloud migrations

    We combine UltraFast technology with what we call “Lift and Shift” data replication and orchestration. This combo makes migration of business applications and data into the public cloud from anywhere in the world a faster, easier operation. The user simply answers some questions about the data migration project by filling in some wizard forms, then the Lift and Shift system handles the entire migration, including acceleration using UltraFast.  This makes moving terabytes of data globally a simple job any IT or DevOps person can do.

    Additionally, we designed Lift and Shift for “live migration”: once it replicates a full backup copy of the data from on-premises into the cloud, it then refreshes that data so the copy in the cloud remains synchronized with the live production data still running on-premises. And if there’s a network burp along the way, everything automatically resumes from where it left off, so the replication job doesn’t have to start over each time there’s a network issue of some kind.
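
    The resume-from-where-it-left-off behavior is conceptually simple. A generic checkpointing sketch (illustrative only, not SoftNAS’s actual replication engine) persists a byte offset per file so an interrupted transfer picks up where it stopped:

        # Generic checkpoint-and-resume sketch (illustrative only; not
        # SoftNAS's actual replication engine). A byte offset is persisted
        # per file so an interrupted transfer resumes where it stopped.
        import json, os

        CHECKPOINT = "replication_checkpoint.json"

        def load_offsets() -> dict:
            if os.path.exists(CHECKPOINT):
                with open(CHECKPOINT) as f:
                    return json.load(f)
            return {}

        def replicate(path: str, send_chunk, chunk_size: int = 1 << 20) -> None:
            offsets = load_offsets()
            offset = offsets.get(path, 0)            # resume from the last good offset
            with open(path, "rb") as f:
                f.seek(offset)
                while chunk := f.read(chunk_size):
                    send_chunk(path, offset, chunk)  # hypothetical transport callback
                    offset += len(chunk)
                    offsets[path] = offset
                    with open(CHECKPOINT, "w") as cp:
                        json.dump(offsets, cp)       # durable progress after each chunk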

    Lift and Shift and UltraFast take a lot of the pain and waiting out of cloud migrations and global data movement.  It took us several years to perfect it, but now it’s finally here.

    What UltraFast means for global data movement and hybrid cloud

    UltraFast can be combined with FlexFiles™, our flexible file replication capabilities, to move bulk data around to and from anywhere globally. Transfers can be point-to-point, one to many (1-M) and/or many to one (M-1). There is no limitation on the topologies that can be configured and deployed.

    Finally, UltraFast can be used with Apache NiFi, so that any kind of data can be transferred and integrated anywhere in the world, over any kind of network conditions.

    SUMMARY

    The network is the Achilles heel of the cloud. Internet and WAN latency, congestion, and packet loss undermine hybrid cloud performance, delay timely and cost-effective cloud migrations, and slow global data integration and bulk data transfers.

    SoftNAS’ new UltraFast technology, combined with Lift and Shift migration and Apache NiFi data integration and data flow management capabilities, yields a flexible, powerful set of tools for solving what have historically been expensive and difficult problems, with a purely software solution that runs everywhere; i.e., on VMware or VMware-compatible hypervisors and in the AWS and Azure clouds. This powerful combination puts IT in the driver’s seat and in control of its data, overcoming the cloud’s Achilles heel.

    NEXT STEPS

    Visit Buurst, Inc to learn more about how SoftNAS is used by thousands of organizations around the world to protect their business data in the cloud, achieve a 100% up-time SLA for business-critical applications and move applications, data and workloads into the cloud with confidence.  Register here to learn more and for early access to UltraFast, Lift and Shift, FlexFiles and NiFi technologies.

    ABOUT THE AUTHOR

    Rick Braddy is an innovator, leader and visionary with more than 30 years of technology experience and a proven track record of taking on business and technology challenges and making high-stakes decisions. Rick is a serial entrepreneur and former Chief Technology Officer of the CITRIX Systems XenApp and XenDesktop group and former Group Architect with BMC Software. During his 6 years with CITRIX, Rick led the product management, architecture, business and technology strategy teams that helped the company grow from a $425 million, single-product company into a leading, diversified global enterprise software company with more than $1 billion in annual revenues. Rick is also a United States Air Force veteran, with military experience in top-secret cryptographic voice and data systems at NORAD / Cheyenne Mountain Complex. Rick is responsible for SoftNAS business and technology strategy, marketing and R&D.

    SoftNAS Named Best Cloud NAS Solution in Network World Review



    Network World recently reviewed software-based NAS solutions and concluded:

    If you’re looking for a cloud-based solution, SoftNAS is your best bet.

    SoftNAS 3.3.3, the latest version of our award-winning Cloud NAS solution, differs from other solutions reviewed by offering both cloud-based and on-premises versions. It’s available on Amazon EC2, Microsoft Azure and, most recently, CenturyLink Cloud.

    Eric Geier, freelance tech writer with Network World, provides a thorough review, citing the following about how SoftNAS is typically deployed:

    Commonly, SoftNAS is deployed in AWS VPCs serving files to EC2 based servers within the same VPCs. SoftNAS also supports a hybrid cloud model where one SoftNAS instance is deployed on-premise on a local PC in your office and a second instance in an AWS VPC. In this hybrid model, replication occurs from the local SoftNAS to cloud-based SoftNAS for cloud-based disaster recovery.

    SoftNAS is quick and easy to configure and set up in minutes, something we strive to achieve with all our products. Eric alluded to this in the review:

    Once we configured the EC2 instance of SoftNAS, we could access the web GUI, which they call the SoftNAS StorageCenter, via the Amazon IP or DNS address. After logging in, you’re prompted to register and accept the terms. Then you’re presented with a Getting Started Checklist, which is useful in ensuring you get everything setup.

    Read the full review.

    Better yet, give it a try yourself.

    SoftNAS 30-day Free Trial
    start free trial now

     

    Learn more: https://www.softnas.com
    Twitter: @SoftNAS
    LinkedIn: https://www.linkedin.com/company/softnas

    The Good, the Bad, and the Ugly of EBS Backup to S3

    Developing a comprehensive strategy for backing up and restoring data is not a simple task. In some industries, regulatory requirements for data security, privacy, and records retention can be important factors to consider when developing a backup strategy.

    EBS Advantages

    Amazon Elastic Block Store (EBS) provides many features, such as high durability and reliability, encryption, provisioned IOPS, and point-in-time snapshots, among others. The built-in volume snapshot feature is a good option for backing up data.

    As with all things AWS, Amazon offers many options for creating backups. If you set up your EC2 instance to use EBS, you can simply create a snapshot of your volume from the AWS Console. These EBS snapshots are incremental backups that persist in Amazon S3. Incremental means that only the blocks that have changed since your last snapshot are saved.
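
    The same console action is one API call. A minimal boto3 sketch (the region and volume ID are placeholders):

        # Minimal sketch: create an EBS snapshot via boto3.
        # The region and volume ID are placeholders.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")
        snapshot = ec2.create_snapshot(
            VolumeId="vol-0123456789abcdef0",
            Description="Nightly backup of the data volume",
        )
        print(snapshot["SnapshotId"], snapshot["State"])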

    Problem

    EBS snapshots are really slick, but manually creating snapshots from the AWS Console isn’t a good solution if your goal is to take snapshots daily, hourly, or on a customizable schedule.

    In order for any backup to be useful, the backed-up content has to be consistent, which means write I/Os are quiesced and write caches are flushed before the snapshot is taken. Enterprise backup software solved that problem years ago, but direct use of EBS snapshots for backup does not.
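
    On Linux, one common way to approximate this is to freeze the filesystem around the snapshot call, for example with fsfreeze. A sketch, assuming the volume is mounted at /data and the script runs as root:

        # Sketch: quiesce a Linux filesystem around an EBS snapshot with
        # fsfreeze. Assumes the volume is mounted at /data, the script runs
        # as root, and the region/volume ID placeholders are replaced.
        import subprocess
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")
        subprocess.run(["fsfreeze", "--freeze", "/data"], check=True)   # flush and block writes
        try:
            snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                                       Description="Consistent snapshot of /data")
        finally:
            subprocess.run(["fsfreeze", "--unfreeze", "/data"], check=True)  # always thaw
        print(snap["SnapshotId"])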

    Alternative

    You can create your own services using scripts and command-line tools, but to do this you need experience writing shell scripts and scheduling and executing them. For example, a short script can snapshot each of a server’s volumes and record details about that server in the snapshot.

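
    A minimal sketch of that approach using boto3 (the instance ID, region, and tag names here are illustrative, not the original snippet):

        # Illustrative scheduled-snapshot sketch: snapshot each EBS volume
        # attached to an EC2 instance and tag the snapshot with details
        # about the server it came from. IDs and tags are placeholders.
        import boto3

        REGION = "us-east-1"
        INSTANCE_ID = "i-0123456789abcdef0"

        ec2 = boto3.client("ec2", region_name=REGION)
        instance = ec2.describe_instances(
            InstanceIds=[INSTANCE_ID])["Reservations"][0]["Instances"][0]

        for mapping in instance.get("BlockDeviceMappings", []):
            volume_id = mapping["Ebs"]["VolumeId"]
            snap = ec2.create_snapshot(
                VolumeId=volume_id,
                Description=f"Scheduled backup of {volume_id} on {INSTANCE_ID}",
                TagSpecifications=[{
                    "ResourceType": "snapshot",
                    "Tags": [
                        {"Key": "InstanceId", "Value": INSTANCE_ID},
                        {"Key": "Device", "Value": mapping["DeviceName"]},
                    ],
                }],
            )
            print("created", snap["SnapshotId"], "for", volume_id)

    Run under cron (or an EventBridge schedule), a script like this provides the daily or hourly cadence that manual console snapshots can’t.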

    With ease of use in mind, the next version of SoftNAS will include a number of features that allow for a faster and more convenient backup and archival process.

    Learn more about Buurst SoftNAS.