Introducing the SoftNAS No Downtime Guarantee Program

With “30% of primary data located in some form of cloud storage,” substantial losses may seem inevitable for companies choosing to migrate to the cloud.

A recent study on data protection, conducted by Vanson Bourne (and funded by EMC), revealed that data loss and downtime cost enterprises $1.7 trillion in the last 12 months – equivalent to roughly half of Germany’s $3.6 trillion GDP (2013). Data loss also continues to climb: it is up more than 400% since 2012.

However, not all cloud storage vendors are equal.

At SoftNAS, we believe you don’t have to sacrifice enterprise capabilities to take advantage of cloud convenience, which is why the team is excited to announce the SoftNAS No Storage Downtime Guarantee Program.

SoftNAS will be available and usable without any noticeable disruption in storage connectivity, with a 99.999% uptime SLA, when operated with production workloads under SoftNAS best practices – or we will refund one month of SoftNAS service fees.
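For a sense of what “five nines” actually promises, here’s a quick back-of-the-envelope calculation of the downtime budget different uptime levels allow per year (generic arithmetic, not part of the SLA’s terms):

```python
# Downtime budget implied by an uptime SLA (back-of-the-envelope).
MINUTES_PER_YEAR = 365.25 * 24 * 60

for sla in (0.99, 0.999, 0.9999, 0.99999):
    budget_minutes = MINUTES_PER_YEAR * (1 - sla)
    print(f"{sla * 100:.3f}% uptime -> {budget_minutes:8.1f} min of downtime per year")
```

At 99.999%, that works out to roughly five minutes of unplanned downtime per year.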

We aren’t saying one month’s refund will make up for the lost revenue, poor customer satisfaction, or damaged credibility associated with downtime; we’re simply showing our customers how confident we are in the SoftNAS product line.

Don’t take our word for it. Try SoftNAS now.

 

To Cloud or Not to Cloud – That is the Question for CXOs


It’s interesting being out front where technology paradigm shifts take place. At SoftNAS, we are increasingly seeing customers facing a major fork in the road ahead and a critical decision point where their IT infrastructure is concerned:

Door #1 – Renew legacy storage array maintenance for 3 to 5 years and re-commit to their own data center.
Door #2 – Move away from the legacy storage arrays and onto commodity x86 servers with software-defined storage and virtual storage apps.
Door #3 – Bite the bullet and migrate mission-critical data and applications to IaaS in the cloud.

Increasingly, we are seeing companies choose door #3, especially when faced with maintenance renewals for their EMC®, NetApp® or other legacy storage arrays that run into the hundreds of thousands or millions of dollars. And industry analysts are seeing the same trend we are (as are the financial analysts, who are dealing firsthand with the fallout this market shift is causing for public-company storage vendors).

I read an interesting post today on The Register, covering Forrester analyst Henry Baltazar’s blog, entitled “Forrester says it’s time to give up on physical storage arrays” – the physical/virtual storage tipping point may just have arrived. The article says:

The storage industry knows that the market for physical arrays is in decline. Cheap cloud storage and virtual arrays have emerged as cheaper and often just-as-useful alternatives, making it harder to justify the cost of a dedicated array for many applications.

Forrester knows this, too: one of its analysts, Henry Baltazar, just declared you should “make your next storage array an app”.

Baltazar reckons arrays are basically x86 servers these days, yet are sold in ways that lock their owners to an inelastic resource for years. Arrays also attract premium pricing, which is not a good look in these days of cloud services and pooled everything running on cheap x86 servers.

The time has therefore come to recognize that arrays are expensive and inflexible, Baltazar says, and make the jump to virtual arrays for future storage purchases.

Storage has been confined to hardware appliance form factors for far too long. Over the past two decades, innovation in the storage space has transitioned from proprietary hardware controllers and processors to proprietary software running on commodity x86 hardware….

While this is all true, I think it misses the bigger picture and, increasingly, the more critical decision that IT managers must make: whether to be in the hardware business at all going forward.

Cloud platforms like Amazon’s AWS and Microsoft Azure are becoming the new pivot point for IT when companies face major investment decisions – storage maintenance coming up for renewal, major expansions of existing storage arrays to support new projects, or major new application projects being undertaken by DevOps teams. I expect vCloud Air, Google Cloud, HP Cloud, Rackspace Cloud and myriad other niche cloud players like FireHost to continue attracting customers who are ready to get out of the hardware business altogether and focus instead on the backlog of IT projects at hand, rolling out new applications faster, easier and less expensively.

Of course, for some time many companies will continue with their historically incremental approach to IT, paying whatever they must and moving forward along the path of least resistance and perceived risk. However, in these times of cost containment and pressure for IT budget efficiency, others are now questioning their overall IT infrastructure strategy, realizing there is a fork in the road ahead…

The fundamental question is increasingly whether the company should continue to be in the data center and/or hardware business at all, or start fresh with cloud-based IaaS. For those committed to remaining in the data center and hardware business, as Baltazar correctly points out, customers are now choosing to take more ownership of their data management needs, leveraging virtual storage appliances and software-defined storage on commodity servers and storage gear.

We see all three paths being taken, and acknowledge there is no right or wrong answer – just different ways forward based upon each customer’s overall business and IT strategy, objectives and budget constraints.

Of course, we’re happy to see customers choosing door #2 or door #3, where SoftNAS gives our customers something they have never before enjoyed when it comes to storage – the freedom to choose whichever approach they want and need: pure cloud IaaS, software-defined storage on-premises or in a colo facility, or some hybrid combination that makes the transition easier and more incremental.

—————-
Rick Braddy is the founder of SoftNAS and the inventor of SoftNAS, the #1 Best-Selling NAS in the Cloud.
See us in booth #801 at VMworld 2014, where we will be demonstrating how customers now have freedom of choice where data storage and IT platforms are concerned.

What Can 45% of SMBs Who Experienced Data Loss Do Differently?


Survey says… 45% of SMBs experienced data loss, according to Storage Newsletter.

More than 1,000 SMB IT professionals responded to the survey “Backing up SMBs”, which investigates backup and recovery budgets, technologies, planning, and key considerations for companies with fewer than 1,000 employees.

45% of respondents said their organization had experienced a data loss, costing an average of nearly $9,000 in recovery fees. Of those, 54% said the data loss was due to a hardware failure.

“Data is the lifeblood of any business – big or small,” said Deni Connor, founding analyst of Storage Strategies NOW. “The opportunity to provide SMBs with better and more cost-effective ways to protect and recover data is huge. While these companies may have smaller IT staffs, they collectively account for a significant portion of the total backup and recovery market.”

Additional highlights from the survey include:

SMBs spend an average of $5,700 each year to manage backup and recovery environments. While the majority of respondents (70%) are satisfied with current backup methods, nearly one-third (30%) believe their approaches and technologies are insufficient.

When it comes to DR, an even greater number of SMBs (42%) believe their company’s plans fall short. Furthermore, only 30% think all information would be recoverable in the event of a disaster.

The top technology used by SMBs to back up information is DAS, but cloud-based backup and recovery offerings have gained a foothold: currently, 30% use hosted solutions, and 14% plan to invest in a hosted offering within the next year.

Reliability and security are the top two priorities for SMBs considering hosted backup solutions. Of those currently using or planning to implement a private, hybrid, or public cloud backup platform, 77% prefer a private or hybrid approach while 23% favor a public cloud offering.

So what can be done differently to minimize data loss and outages?

Reliability is something that must be “designed into” the IT solutions that SMBs deploy. Backup solutions are becoming more plentiful and affordable – but affordable, viable disaster recovery solutions for SMBs have remained elusive.

When I was CTO for a cloud-hosted virtual desktop company, we were responsible for dozens of SMBs’ data and IT operations – 24×7. I learned a lot about what works and what doesn’t. In fact, we had several close calls, including a near-miss where we almost lost a company’s data due to a combination of technology failure and human error (with no DR solution in place, due to high costs and limited budgets). If it hadn’t been for storage snapshots being available for recovery, that business would likely have lost most of its data… experiences like that made me a true believer in the importance of storage snapshots.

SMBs need the following to properly protect their precious data and business operations:

1) UPS

Uninterrupted power is the foundation for protecting IT equipment from power failures, spikes, and transients that can destroy equipment and cause catastrophic damage (e.g., taking out multiple disk drives at once and defeating a RAID group’s protection).

2) RAID

Redundant disks with parity provide the ability to recover from one or two simultaneous drive failures, depending on the RAID level.
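To make the capacity tradeoff concrete, here’s a generic parity-RAID sketch (illustrative arithmetic, not tied to any particular vendor or controller):

```python
# Usable capacity for single- and double-parity RAID (RAID-5 / RAID-6 style).
def usable_tb(num_drives: int, drive_tb: float, parity_drives: int) -> float:
    """Capacity equal to `parity_drives` worth of disks is reserved for parity."""
    return (num_drives - parity_drives) * drive_tb

# Eight 4 TB drives:
print(usable_tb(8, 4.0, parity_drives=1))  # 28.0 TB usable, survives 1 drive failure
print(usable_tb(8, 4.0, parity_drives=2))  # 24.0 TB usable, survives 2 drive failures
```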

3) Storage Snapshots

Snapshots provide the ability to recover filesystems to an earlier point in time, for rapid recovery from data corruption, accidental deletion, virus infections, and human error. I am a huge believer in storage snapshots – they have saved the day more times than I can count, because you can quickly restore to a point prior to the failure and get everything back up in a matter of minutes (instead of hours or days).

4) Off-site Redundancy / Replication

It’s critical for data to be backed up and stored off-site, in case something catastrophic occurs at the primary data center. For many SMBs, the “data center” is a 19-inch rack in a (hopefully locked) room or closet somewhere in the business’s building, so having a copy of the business-critical data off-site ensures there’s always a way back from any kind of local failure, or even a disaster.

5) On-site Redundancy / High-Availability

In addition to having an off-site copy, an on-site copy for rapid local recovery is also needed, if you can afford it. An on-site replica of the data allows failover and rapid recovery, whereas an off-site, replicated copy of the data provides protection against data loss and supports emergency recovery.

There are many off-site backup services available today. They work well, ensuring your data is backed up off-site, encrypted, and stored in a secure place. The biggest challenge becomes the time required to restore in the event of a failure: it can take many days (or longer) to download several terabytes of data using these services. How long can a business truly afford to be down? Usually not that long.
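To see why, run the numbers. A rough sketch, assuming an idealized link running at 80% sustained efficiency:

```python
# Rough WAN restore-time estimate for a full off-site recovery.
def restore_hours(data_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    bits_to_move = data_tb * 8e12                       # TB -> bits (decimal TB)
    seconds = bits_to_move / (link_mbps * 1e6 * efficiency)
    return seconds / 3600

# 5 TB over a 100 Mb/s link: about 139 hours, i.e. nearly six days of downtime.
print(f"{restore_hours(5, 100):.0f} hours")
```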

That’s why we came up with “SnapReplicate” – a way to replicate an entire copy of the data to a second system, one that is capable of actually running the business in an emergency, where the primary data center is destroyed or severely impaired and the business needs to be brought back up in a matter of hours – not days or weeks.

Anyway, whatever approach you take to backup, recovery, and DR, make sure it has all of the above components: power protection to prevent catastrophic damage and outages caused by the most common culprit; RAID protection against the next most likely failure point (disk drive mechanical failure); storage snapshots to protect against corruption, infection, accidental deletions, and human error; and off-site redundancy via replication, with the ability to bring the business’s IT systems back up at a secondary data center in case the primary data center is compromised.

How does SoftNAS address SMB Backup, Recovery, and DR needs?

SoftNAS runs on existing, commodity servers from major vendors like IBM, HP, Dell, Super Micro, and others. It operates on standard server operating systems and virtualization platforms, like VMware running Windows Server. Here’s how each of the Big 5 above is addressed:

1) UPS 

It’s best practice to employ a UPS to provide battery-backed power to the servers running SoftNAS and the other servers running workloads like SQL Server, Exchange, etc. If the servers are in a data center, chances are there are multiple layers of power protection. If the servers are in a rack in the local building, investing in UPS systems with at least 20 minutes of battery operating time and an orderly shutdown process for VMware and Windows is highly recommended.

2) Two Layers of RAID protection plus automatic error detection/correction 

SoftNAS supports multiple levels of RAID: a) hardware RAID, which provides direct RAID protection at the disk controller level, and b) software RAID at the SoftNAS level, which detects and recovers from soft errors, bit rot and other errors that aren’t easily detected and corrected by conventional RAID systems.

3) Storage Snapshots and Clones 

SoftNAS provides scheduled snapshots that are automatically maintained, ensuring there are many recovery points reaching as far back in time as your available storage allows. Think of these as instant, automatic “incremental backups” that take no time to create and occupy only as much space as your actual data changes over time. The average SMB creates no more than about 1 GB of new data per day (30 GB per month), so it’s often possible to keep several weeks of snapshots around, especially for user files.
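Using that churn figure, a rough copy-on-write space estimate looks like this (illustrative numbers only; real snapshot overhead depends on your actual change patterns):

```python
# Approximate extra space consumed by copy-on-write snapshots:
# each daily snapshot retains only the blocks that changed that day.
daily_churn_gb = 1.0    # assumed average daily data change for a small business
retention_days = 42     # six weeks of daily recovery points
extra_space_gb = daily_churn_gb * retention_days
print(f"~{extra_space_gb:.0f} GB holds {retention_days} days of recovery points")
```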

A “clone” is a writable copy of a snapshot – an exact image of the files as they were at the point in time the snapshot was originally taken (e.g., last night at 6 p.m.). This cloned copy can be put to immediate use, so the servers can be brought back online in a matter of minutes. Clones can also be used to restore missing or corrupted files. And because snapshots and clones do not actually copy any data, they are instantly available for rapid recovery when the chips are down… they have saved my customers’ data and businesses many times in a pinch.
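In copy-on-write terms, here is a simplified illustration (not SoftNAS internals) of why a clone is instant: it starts out sharing every block with its snapshot and consumes new space only as you write:

```python
# Simplified copy-on-write clone: shares block pointers with the snapshot
# until something is written, so creating it is effectively free.
snapshot = {"fileA": "blocks@6pm", "fileB": "blocks@6pm"}   # read-only, frozen

clone = dict(snapshot)         # instant: references the same block pointers
clone["fileB"] = "new blocks"  # only now is new space consumed, for the changed file

print(snapshot["fileB"])       # still "blocks@6pm" -> the recovery point is intact
```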

4) Off-site Redundancy and Replication 

SoftNAS SnapReplicate™ provides “SyncImage”, an initial full backup of each data volume, followed by once-per-minute “SnapReplicate” actions, which securely copy just the incremental data changes from the source (primary) SoftNAS instance to the target (secondary) SoftNAS instance, typically located off-site at a different data center.
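Conceptually, each replication cycle ships only the delta between the last replicated snapshot and the current state. Here is an illustrative sketch with hypothetical names, not SoftNAS’s actual implementation:

```python
# Conceptual model of snapshot-based incremental replication (illustrative only).
def replicate_cycle(source_blocks: dict, last_sent: dict, target_blocks: dict) -> dict:
    """Ship only blocks that changed since the last cycle; return the new baseline."""
    delta = {addr: data for addr, data in source_blocks.items()
             if last_sent.get(addr) != data}         # changed blocks only
    target_blocks.update(delta)                      # secondary catches up
    return dict(source_blocks)                       # new replication baseline

# One cycle: only block 2 changed, so only block 2 crosses the wire.
primary = {1: "aaa", 2: "bbb-modified", 3: "ccc"}
baseline = {1: "aaa", 2: "bbb", 3: "ccc"}
secondary = dict(baseline)
baseline = replicate_cycle(primary, baseline, secondary)
print(secondary[2])   # "bbb-modified" -> secondary now matches the primary
```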

5) On-site Redundancy / HA 

SoftNAS can provide on-site redundancy using SnapReplicate to a local target system, which provides the ability to recover rapidly on-site. SoftNAS does not yet include high availability with automatic failover (a feature that’s under development). For now, failover is a manual process involving a bit of reconfiguration, such as updating DNS entries or IP addresses on the secondary SoftNAS unit.

With the many levels of redundancy, failure protection, and recovery provided, SoftNAS offers a high degree of protection against data loss and multiple ways to recover from failures without losing data – and because it’s available on a monthly basis, most SMBs can actually afford it today.

Do CFOs Know Big Data’s Dirty Little Secret?


Do CIOs and CFOs Know Big Data’s Dirty Little Secret? By “Big Data,” I’m referring to the storage appliances businesses use to store and manage their data today (not business intelligence, databases, and applications that run atop these storage appliances).

With data management hardware and software making up 60% or more of a typical virtual computing IT project, few stop to ask, “What are we getting for our money?”

As the ones responsible for P&L, CFOs must have asked their CIO this question before. To justify the large expenditures involved, CIOs might say:

– We need to run application XYZ that we’re rolling out to support the business
– We need to protect our corporate data and ensure we never lose any critical data
– We need to ensure our business is always up and running
– It’s part of every IT project – a safe place to manage our data

I’m sure the actual explanations vary wildly… but one must wonder whether most CFOs (or CIOs, for that matter) know the other questions they should be asking, like:

“Okay, I understand WHY we need the Big Data storage appliances now… what are the alternatives to paying $10,000 per Terabyte (TB)? I’m obviously not a computer expert like you guys in IT, so help me understand the price difference between disk drives costing less than $500 per Terabyte on Amazon.com, and those costing $10,000 per TB?”

Good question! The answer to the CFO’s (and CIO’s) question – and Big Data’s dirty little secret – is:

“The price difference is the NAS software inside the Big Data appliances.”

The ‘storage operating system’ inside the Big Data appliances is very specialized and provides the performance, caching and data protection needed to run the business. At least that’s what the Big Data vendors would have you believe.

You’re paying for a combination of proprietary hardware (the appliances) and the software inside. The appliances and software are purpose-built for doing just one thing – managing corporate data. The $8,000 per TB premium is primarily for the software and proprietary hardware. You also get redundant data paths that probably increase the reliability of the system by a few percentage points (cables and controller cards don’t fail very often).

So where does that $8,000 per TB go? It goes to pay for the SG&A (sales, service, advertising, promotion, accounting, big buildings, big salaries and bonuses, etc.), plus some R&D, associated with operating billion-dollar Big Data franchises today.

When I was CTO of a start-up in the hosted virtual desktop marketplace, we discovered just how critical high-performing, reliable storage can be in a virtual computing environment (in our case, VMware). Turns out, it’s especially critical for VDI, where end-users see every little “burp” that storage and networks cause on their desktops.

When you have high-performance storage needs (and who doesn’t need reliability and storage performance these days?), it’s a lot easier to look past the high prices Big Data commands and just pay them. The problem is, unless you’re Exxon Mobil or another huge corporation, you may not have the luxury of overlooking these costs. In our case, the cost of goods associated with using Big Data storage appliances to power our IT needs meant the difference between being profitable and losing money as a cloud service provider.

So whether a CFO and CIO should view Big Data as a “necessity” or a “luxury” depends on whether you’re a large enterprise or a small to medium business (or a government agency), where every dollar counts and saving money can sometimes mean the difference between profitability and survival in today’s tough economic times.

When you use off-the-shelf server hardware (the same servers you use for VMware, SQL Server, Oracle, Microsoft Exchange, etc. today), plus virtual NAS software running on that server and a few hours of labor, you can create a “good enough” storage solution that does the same job for $2,000 per TB or less.

For example, using SoftNAS plus popular HP DL380 or Dell PowerEdge server equipment, the costs are typically in the $2,000 per TB range for a complete, high-performance storage solution… saving $160,000 on a typical 20 TB configuration.
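The savings figure is straightforward arithmetic on the per-TB premium:

```python
# Premium math behind the $160,000 figure above.
appliance_per_tb = 10_000   # legacy Big Data appliance, $/TB
diy_per_tb = 2_000          # commodity server + virtual NAS software, $/TB
capacity_tb = 20

savings = (appliance_per_tb - diy_per_tb) * capacity_tb
print(f"${savings:,}")      # $160,000
```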

Knowing this, perhaps CFOs should re-phrase the question to:

“Why should we pay an $8,000 per TB premium for storage? Is it really necessary for this project?”

Can your entire business run in the cloud?


In 2011, I served as CTO of a hosted virtual desktop company. We used the Remote Desktop Services (RDS) built into Windows Server 2008 R2 to provide hosted business desktops to small businesses with 10 to 200 employees, and it worked well. We marketed the solution as “desktop as a service”, but in reality what we did was move a small business’s entire IT infrastructure into the cloud – the company’s domain controller, Exchange server, file server, SQL Server databases, application servers, etc. – and then erect a dedicated RDS server for the users to access the applications.

What I learned firsthand is that the cloud is quite capable of hosting a small company’s entire IT infrastructure. But the Achilles heel is storage. When storage performs well, all the applications and desktops perform well – and users love the cloud-based solution. When storage is slow for any reason, everything is slow, the user experience goes downhill and users become extremely dissatisfied. And when storage fails or goes down for any reason, the customer’s business gets disrupted and they lose money, time and even people to IT outages – very much like traditional IT!

As a result, I learned firsthand how critical storage is to IT and the business it supports – and it’s no different in the cloud. Storage is the cornerstone of IT, and indeed, most businesses today. And the network is the fabric which connects storage, applications and users.

However, top-quality, high-performing NAS was so expensive that it became a hindrance from a cost-of-goods perspective for a hosted desktop provider.

This backdrop and experience is what led me to explore software-based NAS and develop SoftNAS for Amazon EC2 and VMware.

From my perspective, SoftNAS now makes it possible to implement both cloud-based and premises-based hosted IT solutions – keeping storage costs under control while providing the performance and resilience that’s required. And with SoftNAS, it’s now possible for small to medium businesses to realize something else they have typically had to make undesirable tradeoffs around: disaster recovery. SoftNAS-based systems are affordable and can be readily duplicated at multiple data centers (Amazon calls these “regions”), so it’s possible to fail over in the event of a major outage or disaster scenario.

In moving to a purely cloud-hosted IT solution, another critical link is the company’s Internet connection. Because of the reliance on the Internet to reach cloud-based desktops and applications, it’s critical to have redundant ISPs with good Internet failover in place. I personally like SonicWall firewalls, which have robust WAN failover that supports redundant Internet providers nicely.

So it is certainly possible to run a small to medium business entirely in the cloud, provided the company’s data is properly managed and redundant, high-speed Internet is available at each office location (try to avoid T1 links if you’re using virtual desktops with more than 10 users per location). If Comcast Business Internet or TW Telecom service is available, it’s hard to go wrong with one of those, and there are many other high-speed, reliable options. Be sure the download speed is at least 10 Mb/sec and uploads are 5 Mb/sec or more if you’re planning to use hosted desktops in the cloud.
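The T1 warning is simple division (illustrative numbers; real desktop sessions vary with protocol and screen activity):

```python
# Per-user bandwidth when 10 virtual desktop users share one T1 link.
T1_MBPS = 1.544
users = 10
per_user_kbps = T1_MBPS * 1000 / users
print(f"~{per_user_kbps:.0f} kb/s per user")   # ~154 kb/s: too thin for a smooth session
```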

Finally, if you don’t have experienced IT staff who have managed cloud migrations, be sure to get the expertise you need around the table; e.g., contract with an IT consultant or IT services firm with a proven track record of moving businesses and applications into the cloud.

Check also

What is Cloud NAS (Network Attached Storage)?
Using a Cloud NAS to improve your VDI experience