Buurst Fuusion Now Available in the Microsoft Azure Marketplace

Microsoft Azure customers worldwide gain access to the latest edge to cloud and cloud to edge data transfer solution from Buurst, taking advantage of the scalability and reliability of Azure

Houston, Texas – July 1, 2021 – Buurst, an enterprise-class data performance company, today announced the availability of Buurst Fuusion in the Microsoft Azure Marketplace, an online store providing applications and services for use on Microsoft Azure. Buurst customers can now take advantage of the scalability, high availability, and security of Azure, with streamlined deployment and management.

Buurst Fuusion is a data pipeline connecting the edge and Microsoft Azure for secure bi-directional data transfer while also overcoming network latency issues commonly found at the edge. Fuusion provides Azure customers the ability to securely and quickly migrate data from the edge into Azure for usage in tools such as Microsoft Azure Databricks, Azure Data Factory, Azure Maps, and more.

“Today, the amount of corporate data in the cloud and at the edge makes it more challenging for organizations to ensure data availability and security regardless of location,” said Vic Mahadevan, CEO, Buurst. “We are now in the Microsoft Azure Marketplace with our latest feature-rich product, Fuusion, following the successful addition of SoftNAS in the fall of last year, to address the ongoing data security, network transfer, and storage challenges that customers are facing.”

“Microsoft Azure Marketplace lets customers worldwide discover, try, and deploy software solutions that are certified and optimized to run on Azure,” said Jake Zborowski, General Manager, Microsoft Azure Platform at Microsoft Corp. “Azure Marketplace helps solutions like Fuusion reach more customers and markets.”

The Azure Marketplace is an online market for buying and selling cloud solutions certified to run on Azure. The Azure Marketplace helps connect companies seeking innovative, cloud-based solutions with partners who have developed solutions that are ready to use.

Learn more about Fuusion at its page on the Azure Marketplace.

About Buurst

Buurst, Inc. is an enterprise-class data performance company that provides solutions for enterprises’ data use in the cloud or at the edge. The company provides flexibility, security, and access to corporate data at all touchpoints while reducing storage expenses by eliminating double billing for cloud data management. To learn more, visit www.buurst.com or follow the company on Twitter, LinkedIn, and Vimeo.

For more information, press only contact:

Georgiana Comsa, Silicon Valley PR, georgiana@siliconvalleypr.com

Competition? What Competition? Part I – SoftNAS

Every company believes they do something better (or cheaper) than others competing for the same marketplace. But to be the best at something often means you have to carve out your niche, which isn’t easy to do. Sometimes, carving your niche means redefining competition. To Buurst, competition is opportunity in disguise, and competitors are partners waiting to happen. 

SoftNAS 

Buurst (formerly SoftNAS) fills a marketplace niche that did not genuinely exist at the company’s conception. Everyone knew the cloud was the next big thing, but few thought about how to get there or how to retain the value of current data and data practices once you arrived. At the time, no cloud-based solutions were replicating on-premise capabilities. SoftNAS offered cloud customers full enterprise feature capabilities, deployment flexibility, integration, and the use of standard protocols (CIFS/SMB, NFS, iSCSI).

Flexibility combined with stability is a rare thing, one that allows us to fit in where other solutions might not. We can replace existing systems, but we can also supplement and complement them. We can’t say that Buurst (or SoftNAS, as it was known then) created the cloud storage marketplace. But I believe we redefined it and continue to do so with our enterprise Cloud NAS solution, SoftNAS. If not the first, we were one of the first to offer an effective enterprise solution that consistently delivers:

High Availability

Buurst’s SoftNAS typically matches or exceeds the toughest SLAs, offering cross-zone 99.999% (five-nines) availability and backing the solution with a money-back guarantee.

Performance

The best performance-to-price ratio on the market (for example, look at our performance vs. NetApp ONTAP).

Cost

The lowest price-to-performance point available.

Flexibility

Deploy in any region on the AWS and Azure clouds, or on-premise, and do so without changing your applications or workloads.

Let’s take a look at how we’ve continued to deliver the above core competencies consistently.

High Availability 

SoftNAS’ patented solutions, SNAP HA™ and SnapReplicate, have served our customers reliably for years, delivering five-nines (99.999%) availability. Most of our competitors cannot provide an SLA to match, typically topping out at three or four nines (99.9% or 99.99%), and none that we have seen offer a money-back guarantee to back up their promise. NetApp Cloud Volumes claims five-nines availability, but its implementation is not as integrated as SoftNAS’ design, and as far as we can tell, this appears to be a marketing claim not backed by an SLA.

Performance 

Our competitors often tier their services, bundling both storage and a certain level of performance into a package. Typically, there are three or four tiers, each with defined performance parameters. To achieve higher performance levels, you must purchase the next tier and the increased storage. As you might guess, this interferes significantly with the granularity of their solution and often forces the customer to buy unnecessary storage to reach the desired performance level. 

Native cloud services have gotten better on this front, offering ways of tuning performance to your requirements (IOPS/Bandwidth). Still, such solutions are often themselves only available at certain levels of their performance groups.  SoftNAS allows you to use any Azure virtual machine or AWS instance in any region to marry the right level of performance to any amount of storage you need.  In this way, we can assure you that the solution we provide offers the most bang for your buck. Additionally, flexible deployment options in terms of cloud location boost performance by limiting latency and bandwidth.  

Cost 

Buurst’s SoftNAS charges only for the performance delivered to access the customer’s data. Competitors charge customers both for the performance they deliver and for the storage or capacity of managed data. Some competitors mandate adding more capacity to get more performance, so even when the customer does not need the extra capacity, they end up paying for both.

But it’s not enough to offer the same services cheaper – at some point, that approach runs into the “you get what you pay for” adage. If you don’t innovate on behalf of the customer, someone else will, leaving you to play catch-up. At Buurst, we actively try to bring features into play that drive costs down and allow you to tailor your solution even further. That drive resulted in our SmartTiers feature. SmartTiers allows our customers to save money by automatically tiering data based on usage: frequently accessed data is placed on production-ready, high-performance block storage, and infrequently accessed data is placed on low-cost, lower-performance options.

In an upcoming release, we will save you even more money by offering a cheaper high availability option for workloads that do not require five-nines of availability, making our product even more flexible in terms of cost.

Flexibility 

SoftNAS is designed to bridge systems together, whether for disaster recovery, transition to the cloud, or any other integration you might need. As stated, SoftNAS is available both on-premise and in the cloud. You can deploy SoftNAS as an application on RHEL (Red Hat Enterprise Linux) or as a VM hosted on VMware on-premise. On the cloud side, you can leverage AWS, Azure, or both. SoftNAS supports industry-standard protocols such as CIFS/SMB, NFS, and iSCSI, and it is the only solution (that we are aware of) that can provide volumes accessible by multiple protocols at once.

In the section above, we already mentioned SmartTiers, which offers additional deployment flexibility, allowing you to define and automate the level of performance needed for your workloads. And as stated, we are also working to deliver a less costly HA option in tandem with our current SNAP HA solution. Stay tuned for more information on that in the weeks to come.

Lastly, because we have worked with AWS and Azure for many years, our integration is second to none, meaning we can seamlessly connect with AWS and Azure services and products, such as AWS Outposts or Azure Stack Hub.

Conclusion 

Buurst’s SoftNAS product represents our company values well. It is a product that offers capabilities second to none but is accessible and flexible in a way that allows us to work with others seamlessly. As stated, competition is opportunity in disguise. We fully believe that the addition of SoftNAS can improve nearly any solution, just as we know SoftNAS is an excellent solution on its own. In other words, here at Buurst, we look forward to proving it, one way or another.

In our next blog (Competition – Part 2), you’ll see how Buurst Fuusion extends this philosophy even further.

Competition? What Competition? Part 2 - Fuusion

In our previous blog, we outlined our belief that our SoftNAS product embodies Buurst’s approach to business and hinted that, to us, competition is merely opportunity in disguise. Our Fuusion product extends this even further, full of unique strengths of its own as well as an immense capacity for integration.

If you missed part one…

Fuusion 

Storing your data is only half the battle, and at Buurst, we recognize this. Customers want to drive value from their data, and the explosion of data generation at the edge further complicates the ability to quickly and efficiently process it. Think of data in satellite offices at a university, at oil wells and dig sites in the energy industry, or on cargo ships in the middle of the ocean for a shipping company.

Fuusion stands apart from the competition because of its flexibility and its functionality across the data pipeline. Most of our competitors focus on individual parts of the data pipeline.

The data pipeline consists of three parts:

The Edge

Data generation at locations outside the datacenter (often far outside)  

The Network

Transportation paths for data moving to and from the edge and cloud

The Cloud (or Datacenter)

Data storage and, ideally, processing for analysis

Processing at the Edge 

Fuusion is unique in providing flexible solutions at the edge, over the network, and in the cloud. We can deploy in the datacenter or the field and offer robust data orchestration at any point in the extended network. Future Fuusion support will allow data to be processed onsite, at field offices, wells, satellite offices, or anywhere that produces an excess of unstructured data. In the future, local real-time analysis will run even as data is sent to the cloud or datacenter for longer-term, more thorough analysis.

Every aspect of the data pipeline functionality will be available at the edge as part of a single, integrated pipeline. 

Network Competition – but No Orchestration 

Fuusion also offers assistance in getting data from the edge to the cloud. Fuusion’s UltraFast feature accelerates data traffic across less-than-optimal or intermittent networks, ensuring data integrity at every step along the way. Competitive data pipeline products do not offer network optimization as part of their pipeline; competitors typically require adding a third-party product to the customer’s solution stack, driving up the cost of network acceleration.

Competition does exist for the Fuusion UltraFast feature itself, including WAN optimization point solutions such as IBM Aspera, Bridgeworks, Cisco, and others. However, these are network optimization solutions rather than data pipeline solutions, and they focus only on the network part of the problem. Fuusion, on the other hand, allows for processing, filtering, and routing of the data in the pipeline, ensuring:

    • The data is sent as fast as possible through the data pipeline with optimal data integrity via UltraFast
    • The data is routed and transferred where needed
    • Data that is not required can be filtered out, potentially reducing the amount of data that must be transmitted
    • The data sent is transformed and ready for consumption, reducing the processing workload at the cloud or datacenter

Cloud/Datacenter Only Services 

Data management (DM) and master data management (MDM) solutions often get lumped into the data pipeline market from a marketing perspective. In such solutions, filtering, transforming, processing, and routing happen little, if at all, until the data reaches the central storage hub. These DM and MDM solutions tend to focus on the data and add value only once the data is in the cloud or datacenter.

Many DM/MDM solutions need an edge data pipeline like Fuusion to get the data to the cloud, making them a target for partnership rather than competition. 

Conclusion 

Buurst is aware that we have intense competition in specialty data pipeline solutions such as WANdisco, which specializes in pipelines for Hadoop data, or Model 9, specializing in delivering mainframe data to the cloud. It would be challenging to dislodge such competitors from their specialized niche, even though Fuusion is undoubtedly capable of pipelining Hadoop or mainframe data.

Instead, Buurst realizes that DM/MDM solutions are partnership opportunities due to our future ability to process at the edge and optimize data transmission to a central repository. AWS, for example, despite offering potentially competitive products with similar messaging, such as AWS DataSync and AWS Data Pipeline, has turned to Buurst to bridge gaps in its coverage.

AWS DataSync provides file migration to AWS storage services and does it well. AWS Data Pipeline offers data pipeline services, but its capabilities focus on orchestrating data between different services within AWS (and it does this exceptionally well). While it can pull limited types of data from edge locations, this is not its focus. Recognizing this limitation, AWS has named Buurst its solution of choice for Smart Data Migration.

Buurst could not be prouder to have turned competition into cooperation. So what other opportunities are out there? Reach out to us, and let’s find out together.

Buurst Wins the 2021 Digital Innovator Award from Intellyx

Intellyx, the first and only analyst firm dedicated to digital transformation, today announced that Buurst has joined the inaugural class of their 2021 Digital Innovator Award.

As an industry analyst firm that focuses on enterprise digital transformation and the disruptive vendors that support it, Intellyx interacts with numerous innovators in the enterprise IT marketplace.

To honor these vendors, Intellyx has established the Intellyx Digital Innovator Awards.

Intellyx will bestow this award on any vendor who makes it through Intellyx’s rigorous briefing selection process and delivers a successful briefing.

“At Intellyx, we get dozens of PR pitches each day from a wide range of vendors,” said Jason Bloomberg, President of Intellyx. “We will only set up briefings with the most disruptive and innovative firms in their space. That’s why it made sense for us to call out the companies that made the cut.”

For more details on the award and to see other winners, visit the 2021 Intellyx Digital Innovator awards page here.

Intellyx may announce a new set of Digital Innovator awardees in the future. To be considered for a briefing–and hence a Digital Innovator award–and use the authorized award badge, please contact Intellyx at pr@intellyx.com.

AWS Public Sector Selects Buurst as a Strategic Provider for Smart Data Migration Workloads

HOUSTON – Buurst, a leading enterprise-class data performance company, announced today that Amazon Web Services (AWS) Public Sector has selected the company as a strategic provider for smart data migration. Buurst’s new Fuusion™ technology enables organizations to accelerate the migration of application workloads to AWS and AWS cloud-native services for data orchestration.

As an early adopter of AWS, Buurst has been active in the AWS Marketplace for eight years, supporting joint customers. Buurst was also recently added to the AWS ISV Accelerate program, an invitation-only tier for software companies that have met stringent technical and revenue requirements with AWS. The company’s commitment to AWS includes three AWS Partner Network (APN) competencies in migration, storage, and public sector, qualifying for joint go-to-market initiatives with AWS and their global Partners.

The Smart Data Migration program offered with Buurst Fuusion provides organizations an edge-to-cloud solution into AWS that is orchestrated into other AWS cloud-native services such as EFS, FSx, Redshift, Rekognition, Kinesis, and Athena. Additionally, Fuusion’s patented, built-in UltraFast™ data acceleration feature securely transfers data at speeds up to 100 times the average transmission rates. This capability is critical for organizations with remote sites connected via high-latency network connections such as satellite, 5G, cellular, or DSL.

“We are pleased to see Buurst’s involvement in many of our key Partner initiatives designed to move data workloads into the cloud where improved decision making can take place through visualization or analytics,” said Sandy Carter, Vice President, Worldwide Public Sector Partners and Programs, AWS. “Launching smart data migrations between AWS, Buurst and Carahsoft enables public sector organizations globally to rapidly and securely migrate data to a wide range of AWS cloud-native services for the enhanced protection of legacy data.”

“Working with Buurst and AWS allows our public-sector customers to deploy the latest innovative technology in the cloud and at the edge. Buurst Fuusion enables public sector organizations to aggregate their data quickly and to migrate those workloads to advanced AWS cloud-native services securely,” said Craig P. Abod, Carahsoft President. “We are looking forward to this relationship that will help better support our government customers and reseller partners.”

“We are announcing this new AWS public sector initiative today as we continue to expand our longtime relationship with AWS,” said Vic Mahadevan, Buurst CEO. “As a strategic AWS provider, our goal is to securely accelerate data migration from the edge to AWS for the worldwide public sector market.”

About Buurst

Buurst is a data performance software company that provides solutions for enterprise data in the cloud or at the edge. The company provides high availability, secure access, and performance-based pricing to organizational data with the industry’s only No Downtime Guarantee for cloud storage. To learn more, visit www.buurst.com or follow the company on Twitter, LinkedIn, and Vimeo.

Contacts

Georgiana Comsa
Silicon Valley PR
georgiana@siliconvalleypr.com
650-800-7084

Original press release located here.

How UltraFast Uses Reinforcement Learning to Tackle Tough Network Conditions

Latency and packet loss over wide area networks, the Internet, and RF-based network links (e.g., satellite, cellular, packet radio) have long been a barrier to large-scale data transfers. TCP/IP’s windowing algorithm is infamous for reacting poorly to packet loss: it reduces the amount of data TCP is willing to send per transaction, making transfers reliable but extremely slow. Today, ever larger amounts of data need to be transferred across available networks from where they are created to where they are consumed and used. Sometimes this data is purely for disaster recovery and backup; other times it’s for important analytics and other business processes. Edge computing promises to address some of these issues by moving workloads closer to the point of data creation, but even then, data must often be transferred to centralized locations (data centers, public clouds, SaaS services) to make use of the insights gained across many edge sites.
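To see roughly why latency and loss are so punishing, the widely cited Mathis model bounds steady-state TCP throughput at about MSS / (RTT × √p), where p is the packet-loss rate. The sketch below uses illustrative link figures, not measurements from this article:

```python
import math

def tcp_throughput_mbps(mss_bytes: float, rtt_seconds: float, loss_rate: float) -> float:
    """Mathis-model estimate of steady-state TCP throughput in Mbps:
    throughput ~ MSS / (RTT * sqrt(p))."""
    bits_per_sec = (mss_bytes * 8) / (rtt_seconds * math.sqrt(loss_rate))
    return bits_per_sec / 1e6

# A satellite-like link: 1460-byte segments, 500 ms RTT, 1% packet loss.
print(round(tcp_throughput_mbps(1460, 0.5, 0.01), 2))  # about 0.23 Mbps
```

Even with only 1% loss, the long round-trip time caps plain TCP well below the link’s physical capacity, which is the gap UltraFast is designed to close.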

Buurst Fuusion’s UltraFast® Machine Learning Approach

Over the years, many different types of algorithms have been devised to try and address this network throughput optimization problem. Buurst’s Fuusion product includes a feature called UltraFast®, which overcomes the challenges posed by TCP over highly latent or lossy networks in a unique way. As we will see in this post, UltraFast utilizes a type of AI/ML technology to learn, adapt and optimize data transfers over troublesome network conditions.

The UltraFast Gambler Agent

UltraFast uses a machine learning process built around a set of “gamblers”, data transmission experiments that each place a different “bet” on the ideal transmission rate. No model of the network is available ahead of time; the agent runs its own experiments to learn about the network.

The main goals are to:

  1. Maximize network throughput by sending as much data as possible
  2. Avoid creating packet loss due to sending data too quickly
  3. Detect when external factors, such as other network participants, changing IP routes, and other dynamic conditions, are causing congestion or interfering with packet throughput, and use this information to place improved bets.

The Agent creates a set of “Gambler” processes, each running in an independent thread. Each Gambler is given a “Data Transmission Bet” to place; the bet is its data transmission rate, i.e., the time delay between sending each packet. The data is sent to a connection at the distant end of the network, and one of several responses may occur: an ACK indicating good data receipt, a NAK indicating bad data receipt, or no response at all, indicating a lost packet (timeout).

Each Gambler process sends several data packets and records the overall success rate: how many packets were sent, how many succeeded, and how many failed. Upon completion, each Gambler is assigned an overall score. The more acknowledged, successful data packets sent, the higher the score; the more NAKs or timeouts (packet losses), the lower the score. The Agent then uses these scores to reward successful Gamblers, which are allowed to “breed” and multiply during the next generation, or experiment cycle. Less successful or failed Gamblers are pruned and eliminated. This process is similar to natural selection, where the strong and successful survive and the weak and unsuccessful do not propagate.
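The scoring idea can be illustrated with a toy sketch (the function names and the simulated link below are our own illustration, not Buurst’s implementation): a gambler’s bet is an inter-packet delay, and its score rises with each ACK and falls with each NAK or timeout.

```python
import random

def run_gambler(inter_packet_delay, send_packet, n_packets=100):
    """Score one gambler's bet: +1 per ACK, -1 per NAK or timeout (lost packet)."""
    score = 0
    for _ in range(n_packets):
        outcome = send_packet(inter_packet_delay)  # 'ack', 'nak', or 'timeout'
        score += 1 if outcome == "ack" else -1
    return score

def simulated_link(delay, capacity_delay=0.001):
    """Toy network model: betting faster than the link's capacity causes heavy loss."""
    loss_p = 0.02 if delay >= capacity_delay else 0.5
    return "ack" if random.random() > loss_p else "timeout"

random.seed(0)
cautious = run_gambler(0.002, simulated_link)   # bets slower than the link's capacity
reckless = run_gambler(0.0005, simulated_link)  # bets faster than the link's capacity
print(cautious > reckless)  # the cautious bet scores higher on this toy link
```

In the real system, many such gamblers run concurrently against the live network rather than a simulation, and their relative scores feed the selection step described next.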

UltraFast Reinforcement Learning Process

The chart below depicts the UltraFast learning cycle and each step in the process.

The UltraFast learning loop runs repeatedly, processing these steps:

  1. A Monte Carlo derived genetic algorithm generates random strategies for the initial set of gamblers, then subsequently breeds new gamblers based upon last cycle’s winners’ results.
  2. A new generation of dozens of gamblers is created at the start of each cycle, each with its own rate of sending data packets.
  3. Gamblers send their data packets, measuring ACKs, NAKs, and lost packets.
  4. Each gambler’s win/loss rate is scored – more packets sent equals a higher score, lost packets or data transmission errors (NAKs) penalize the score.
  5. Each gambler’s loss-rate is compared with the current loss-zero (separately established with regularly timed packets).
  6. Winning gamblers showing the best results are rewarded by being bred, resulting in similar successful gamblers for the next cycle. The agent prunes the losers and feeds the learned results forward into the genetic algorithm. In addition to the ‘successful’ newly created gamblers, new random variants are added to further explore the newly defined boundaries, enabling the system to adapt to changing network conditions.
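The selection step in item 6 can be sketched as a simple genetic update (a hypothetical simplification; UltraFast’s actual breeding and mutation rules are not public): keep the top-scoring bets, breed mutated copies of the winners, and inject a few random explorers so the agent keeps probing for changing conditions.

```python
import random

def next_generation(bets, scores, mutation=0.2, n_explorers=4):
    """One selection cycle: rank bets (inter-packet delays) by score, prune the
    losing half, breed mutated copies of winners, and add random variants."""
    pop_size = len(bets)
    ranked = [b for _, b in sorted(zip(scores, bets), reverse=True)]
    winners = ranked[: pop_size // 2]                     # survivors of this cycle
    n_children = pop_size - len(winners) - n_explorers
    children = [
        max(1e-6, random.choice(winners) * random.uniform(1 - mutation, 1 + mutation))
        for _ in range(n_children)                        # mutated copies of winners
    ]
    explorers = [random.uniform(1e-6, 0.01) for _ in range(n_explorers)]
    return winners + children + explorers

random.seed(1)
bets = [0.001 * i for i in range(1, 25)]            # 24 candidate delays (seconds)
scores = [96 if b >= 0.002 else -20 for b in bets]  # pretend the slower bets won
new_bets = next_generation(bets, scores)
print(len(new_bets))  # population size stays at 24
```

The explorer slots are what let the loop re-adapt when routes or competing traffic change: even after the population converges, a few fresh random bets keep testing the boundaries.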

The above 6-step process runs continually, optimizing data throughput while minimizing packet loss and congestion, and adapting to the constantly changing and evolving complex network environment. Reinforcement learning enables UltraFast to adapt to each unique network topology and navigate its evolving traffic and routing conditions.

UltraFast Speed Test

UltraFast includes a speed test feature, which sends “iperf” data through the UltraFast optimizer, first as a download test and then as an upload test. This is analogous to a typical Internet or broadband speed test, except it uses UltraFast technology to compare the throughput results vs. plain TCP/IP. In the following screenshot, we see the TCP results displayed in red (mostly hidden behind the blue UltraFast chart). The link being tested is on AWS between the Ohio region in the USA and the Cape Town region in South Africa. The latency averages around 250 milliseconds round-trip time, with little to no packet loss.

TCP/IP averages just 144 Mbps over this moderate-latency 1 Gbps link. During the initial download test, we can see the blue (cyan) UltraFast chart slowly increase its throughput over time as the gamblers run and the reinforcement learning algorithm learns the particular characteristics of this network. Once UltraFast learns the network, it is eventually able to peg the link at near 1 Gbps at times. Then the upload test starts. Since UltraFast has already learned this network, it immediately optimizes the throughput, averaging 822 Mbps vs. TCP’s 144 Mbps.

As network conditions vary over time, UltraFast’s intelligent learning algorithm continues to observe, adapt, and learn in order to continually optimize network throughput. This is very important for long-running bulk data transfer jobs in the terabyte range or larger. Because these long-running jobs occupy large amounts of network bandwidth over time, they are much more likely to encounter competing traffic at different times of day, e.g., backup jobs running overnight, user downloads during daytime hours, and many other variables, including network routes changing the underlying network characteristics over time.

Summary

Optimizing data throughput over challenging network conditions is an age-old problem – one that now has a new type of solution, using reinforcement learning to intelligently optimize and constantly learn and adapt to changing network conditions. To learn more about the UltraFast feature of Fuusion and how it addresses challenging, high-latency, and lossy network conditions, please visit the Fuusion page. For more detailed insight into UltraFast, its machine learning technology, and its overall architecture, you can download the UltraFast technical white paper.