Sar, Elasticsearch, and Kibana

Kibana is a great visualization tool, and this article shows how to automate building graphs and dashboards using the APIs, with Sar logs as the data source.

Sar is an old, but good, sysadmin tool that helps answer many performance-related questions…

  • Did we have a CPU spike yesterday at 2 pm when the customer complained?
  • Do we have enough RAM?
  • Do we have enough IOPS with our brand new SSD disks?

Sar is a nice little tool that has helped us collect statistics even without CloudWatch, SNMP, or any other monitoring tool configured.

Well, Sar has its issues. By default, it collects statistics only once every 10 minutes, and you end up deciphering output like this:

01:00:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
04:30:01        all      0.25      0.00      0.23     99.52      0.00      0.00
04:40:01        all      0.25      0.00      0.21     99.54      0.00      0.00
04:50:01        all      0.26      0.00      0.22     99.52      0.00      0.00
05:00:01        all      0.24      0.02      0.23     99.51      0.00      0.00
05:10:01        all      0.26      0.00      0.23     99.51      0.00      0.00
05:20:01        all      0.24      0.00      0.20     99.56      0.00      0.00
05:30:01        all      0.26      0.00      0.22     99.52      0.00      0.00
05:40:01        all      0.25      0.00      0.22     99.53      0.00      0.00
05:50:01        all      0.57      0.00      1.01     48.45      0.00     49.97
06:00:01        all      0.32      0.00      0.41     10.32      0.00     88.95
06:10:01        all      0.24      0.00      0.19      0.33      0.00     99.25
06:20:01        all      0.23      0.00      0.18      0.35      0.00     99.24
06:30:01        all      0.24      0.00      0.17      0.32      0.00     99.27
06:40:01        all      0.24      0.00      0.19      0.36      0.00     99.21
06:50:01        all      0.46      0.00      1.00     25.55      0.00     72.99
07:00:01        all      1.26      0.00      3.52     90.35      0.00      4.87
07:10:01        all      1.26      0.00      4.01     90.57      0.00      4.16
07:20:01        all      1.07      0.00      3.56     89.42      0.00      5.95

This is actually a good example of an event that may require further investigation. The server was clearly stuck on the IO subsystem: the %iowait column shows values above 99%. Around 05:50 things suddenly improved, and by 06:10 iowait had dropped to nearly zero while overall CPU usage stayed below 0.5%, only to climb back above 90% towards 07:00. Surely something was going on!
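By the way, the 10-minute default interval comes from the sysstat data-collection cron job, so it can be shortened when finer resolution is needed. The exact paths vary by distribution; the snippet below is only a sketch of the RHEL/CentOS-style layout:

# /etc/cron.d/sysstat – collect a sample every minute instead of every 10 minutes
# (on Debian/Ubuntu the collector is /usr/lib/sysstat/debian-sa1 instead of sa1)
*/1 * * * * root /usr/lib64/sa/sa1 1 1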

Sar, Elasticsearch, and Kibana

Elasticsearch is a much more sophisticated technology: a distributed search and analytics engine. But when we speak of Elasticsearch, we usually mean a whole set of interconnected products commonly known as the Elastic Stack:

Beats – many small agents that ship data to Elasticsearch.

Logstash – accepts data from the Beats and, after potentially complicated processing, uploads the transformed data into Elasticsearch.

Elasticsearch – the search and analytics engine and the heart of the Elastic Stack.

Kibana – a great visualization tool and a graphical interface to Elasticsearch.

Elastic (ELK) Stack architecture: Elasticsearch, Logstash, and Kibana

So, these capital letters comprise what used to be called the ELK stack – E from Elasticsearch, L from Logstash, and K from Kibana. These days we tend to include Beats in the stack and call it the Elastic Stack.

Performing virtual appliance health checks, our team needs to analyze log sets from different customers on a regular basis. The logs contain tons of valuable information, so why not feed them to Elasticsearch and see what happens?

Naturally, the log files we check most often are shipped to Elasticsearch using one of the Beats – such as Filebeat – so we can explore them visually in Kibana almost instantaneously. Keeping logs centrally is good practice, and the ways to do it are countless: Rsyslog, Splunk, Loggly, and CloudWatch Logs are all popular centralized logging solutions, and Elasticsearch fits really well in this family.

Sar logs are a usual part of the log sets to be analyzed, but there is sometimes a tiny inconvenience with them: they are often generated by older Sar versions, and there are two problems with that:

1. The current Sar does not understand logs from old versions, so a matching old Sar version has to be installed just to process them.

2. The graphs can’t be easily produced due to the limitations of the old versions.

The backward compatibility of Sar logs is out of our hands, and with some practice and automation, installing an old Sar version is not much of a problem. At the same time, analyzing Sar logs across many days and many parameters demands some graphical presentation of the data. For example, a current Sar on Ubuntu allows these commands to run:

sadf -g > cpu.svg
sadf -g -- -r > ram.svg
Open the resulting graphs in your favorite browser or image viewer.
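The same idea extends to a quick loop over several metrics; a small sketch (the metric list and output file names here are our own):

# one SVG per metric from the current daily Sar data file
for m in "cpu:-u" "ram:-r" "swap:-S" "io:-b"; do
  name=${m%%:*}; flag=${m#*:}
  sadf -g -- $flag > "${name}.svg"
done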

Sar logs are well structured and Elasticsearch is a powerful tool to process logs

The older Sar versions simply don’t have an option to produce graphics. Still, Sar logs are well structured and Elasticsearch is a powerful tool to process logs in 2 easy steps:

1. Load Sar data into Elasticsearch.

2. Use Kibana to do all the visualizations and dashboards based on the data in Elasticsearch.

So how do we do it automatically?

After all, there are many logs, and we don't want to do this manually once the proof of concept is done!

The answer is APIs and bash. We occasionally thought of writing the API calls in Python or another full-featured language, but bash proved to be more than enough for most cases.

We used two quite different APIs for the task – the Elasticsearch API to load the data, and the Kibana API to create all the graphs and dashboards.

We have found the Kibana API to be less documented, and we feel that more examples would benefit the community. As such, we provide all the API call examples here. Each API call is a curl command referring to a JSON file; we provide both the curl command and an example JSON file for every call.

We have also utilized the Kibana concept of spaces to distinguish between logs from different servers. One space is only for one server. Ten servers mean ten Kibana spaces. Using spaces greatly reduces the risk of processing data for the wrong server.

 

Depending on which metric we process in the loop, we run one of the following commands on the Sar log, referred to as $file below.

for CPU:

sadf -d "$file"

for RAM:

sadf -d "$file" -- -r

for swap:

sadf -d "$file" -- -S

for IO:

sadf -d "$file" -- -b

for disks:

sadf -d $file -- -d -p

for network:

sadf -d $file -- -n DEV
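In our scripts these six variants sit behind a tiny helper, so the rest of the pipeline only deals with a metric name. A minimal sketch (the function name and the metric names are our own convention):

# run the sadf export matching a metric name against the Sar log in $file
sadf_export() {
  case "$1" in
    cpu)     sadf -d "$file" ;;
    ram)     sadf -d "$file" -- -r ;;
    swap)    sadf -d "$file" -- -S ;;
    io)      sadf -d "$file" -- -b ;;
    disks)   sadf -d "$file" -- -d -p ;;
    network) sadf -d "$file" -- -n DEV ;;
  esac
}

sadf_export swap > swap.csv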

Once we have output from one of the above commands (or whatever other command we want to process further and visualize), it's time to create the indexes in Elasticsearch. The indexes are needed as a place to upload the Sar data to.

 

For example, the index for CPU data is created this way:

curl -XPUT -H'Content-Type:application/json' $ELASTIC_HOST:9200/sar.$METRIC.$HOSTNAME?pretty -d @create_index_$METRIC.json
 
$ cat create_index_cpu.json

{
  "mappings": {
    "properties": {
      "hostname":    { "type": "keyword" }, 
      "interval":  { "type": "integer"  },
      "timestamp":   {
        "type": "date",
        "format": "yyyy-MM-dd HH:mm:ss zzz"
      },
      "CPU":    { "type": "integer" }, 
      "%user":  { "type": "float"  },
      "%nice":   { "type": "float"  },
      "%system":    { "type": "float" }, 
      "%iowait":  { "type": "float"  },
      "%steal":   { "type": "float"  },
      "%idle":   { "type": "float"  }
    }
  }
}
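The call is identical for every metric, so index creation is easily looped; a sketch using the same $ELASTIC_HOST, $METRIC, and $HOSTNAME variables (the metric list itself is our own naming):

# create one index per metric, each with its own mapping file
for METRIC in cpu ram swap io disks network; do
  curl -XPUT -H 'Content-Type: application/json' \
    "$ELASTIC_HOST:9200/sar.$METRIC.$HOSTNAME?pretty" \
    -d @create_index_$METRIC.json
done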

Once the indexes for all the metrics are created, it’s time to upload Sar data into Elasticsearch indexes.

Bulk upload is the easiest way and below is an example JSON file for swap Sar data:

curl -H 'Content-Type: application/x-ndjson' -XPOST $ELASTIC_HOST:9200/_bulk?pretty --data-binary @interim.json

$ more interim.json

{"index": {"_index": "sar.swap.server1.example.com "}}
{"hostname":"# hostname","interval":"interval","timestamp":"timestamp","kbswpfree":"kbswpfree"
,"kbswpused":"kbswpused","%swpused":"%swpused","kbswpcad":"kbswpcad","%swpcad":"%swpcad"}
{"index": {"_index": "sar.server1.example.com "}}
{"hostname":"SoftNAS-A83PR","interval":"595","timestamp":"2020-06-01 05:10:01 UTC","kbswpfree"
:"0","kbswpused":"4128764","%swpused":"100.00","kbswpcad":"23324","%swpcad":"0.56"}
{"index": {"_index": "server1.example.com"}}
{"hostname":"SoftNAS-A83PR","interval":"595","timestamp":"2020-06-01 05:20:01 UTC","kbswpfree"
:"0","kbswpused":"4128764","%swpused":"100.00","kbswpcad":"23324","%swpcad":"0.56"}
{"index": {"_index": "server1.example.com"}}
{"hostname":"SoftNAS-A83PR","interval":"595","timestamp":"2020-06-01 05:30:01 UTC","kbswpfree"
:"0","kbswpused":"4128764","%swpused":"100.00","kbswpcad":"23324","%swpcad":"0.56"}

All Elasticsearch work is done now. Data is uploaded to Elasticsearch indexes and we are switching to Kibana to create a few nice graphs.

First, we change the Kibana date format and time zone settings to our liking.

These settings live under Advanced Settings in the Kibana UI, but they are easy to forget on a fresh Kibana installation, so we set them via the API:


curl -X POST -H "Content-Type: application/json" -H "kbn-xsrf: true" -d @change_time_format.json  http://$KIBANA_HOST:5601/s/$SPACE_ID/api/kibana/settings

curl -X POST -H "Content-Type: application/json" -H "kbn-xsrf: true" -d @change_time_zone.json  http://$KIBANA_HOST:5601/s/$SPACE_ID/api/kibana/settings

$ cat change_time_format.json 
{"changes":{"dateFormat:scaled":"[\n  [\"\", \"HH:mm:ss.SSS\"],\n  [\"PT1S\", \"HH:mm:ss\"],\n  [\"PT1M\", \"MM-DD HH:mm\"],\n  [\"PT1H\", \"YYYY-MM-DD HH:mm\"],\n  [\"P1DT\", \"YYYY-MM-DD\"],\n  [\"P1YT\", \"YYYY\"]\n]"}}

$ cat change_time_zone.json 
{
  "changes":{
    "dateFormat:tz":"Etc/GMT+5"
  }
}

Let’s create a Kibana space for each server

The space selector page in Kibana then lets us either keep using the default space or pick one of the server spaces created with the following API call:

curl -X POST -H "Content-Type: application/json" -H "kbn-xsrf: true" -d @interim.json  http://$KIBANA_HOST:5601/api/spaces/space


$ cat interim.json 
{
  "id": "server1.example.com",
  "name": "server1.example.com"
}
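Since there is one space per server, this call is scripted as well; a sketch (the server list below is made up):

# create a Kibana space for every server we received logs from
for SPACE_ID in server1.example.com server2.example.com; do
  printf '{ "id": "%s", "name": "%s" }\n' "$SPACE_ID" "$SPACE_ID" > interim.json
  curl -X POST -H "Content-Type: application/json" -H "kbn-xsrf: true" \
    -d @interim.json http://$KIBANA_HOST:5601/api/spaces/space
done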

Now, the real Kibana work – creating index patterns. The example shows the JSON file for swap data:

curl -X POST -H "Content-Type: application/json" -H "kbn-xsrf: true" -d @interim.json  http://$KIBANA_HOST:5601/s/$SPACE_ID/api/saved_objects/index-pattern

$ cat interim.json 
{
  "attributes":
    {
      "title": "sar.swap.server1.example.com *",
      "fields": "[{\"name\":\"kbswpfree\",\"type\":\"number\",\"esTypes\":[\"float\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"kbswpused\",\"type\":\"number\",\"esTypes\":[\"float\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"%swpused\",\"type\":\"number\",\"esTypes\":[\"float\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"kbswpcad\",\"type\":\"number\",\"esTypes\":[\"float\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"%swpcad\",\"type\":\"number\",\"esTypes\":[\"float\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"swap\",\"type\":\"number\",\"esTypes\":[\"integer\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"_id\",\"type\":\"string\",\"esTypes\":[\"_id\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"_index\",\"type\":\"string\",\"esTypes\":[\"_index\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"_score\",\"type\":\"number\",\"count\":0,\"scripted\":false,\"searchable\":false,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_source\",\"type\":\"_source\",\"esTypes\":[\"_source\"],\"count\":0,\"scripted\":false,\"searchable\":false,\"aggregatable\":false,\"readFromDocValues\":false},{\"name\":\"_type\",\"type\":\"string\",\"esTypes\":[\"_type\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":false},{\"name\":\"hostname\",\"type\":\"string\",\"esTypes\":[\"keyword\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"interval\",\"type\":\"number\",\"esTypes\":[\"integer\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true},{\"name\":\"timestamp\",\"type\":\"date\",\"esTypes\":[\"date\"],\"count\":0,\"scripted\":false,\"searchable\":true,\"aggregatable\":true,\"readFromDocValues\":true}]"
    }
}

Create graphs, which are called visualizations in Kibana. The JSON file below is for one of the CPU graphs:


curl -X POST -H "Content-Type: application/json" -H "kbn-xsrf: true" -d @$METRIC.$HOSTNAME.$i.json http://$KIBANA_HOST:5601/s/$SPACE_ID/api/saved_objects/visualization


$ cat cpu.server1.example.com.%user.json
{
  "attributes":
    {
      "title": "sar-cpu-server1.example.com-%user",
      "visState": "{\"title\":\"%user\",\"type\":\"line\",\"params\":{\"type\":\"line\",\"grid\":{\"categoryLines\":false},\"categoryAxes\":[{\"id\":\"CategoryAxis-1\",\"type\":\"category\",\"position\":\"bottom\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\"},\"labels\":{\"show\":true,\"filter\":true,\"truncate\":100},\"title\":{}}],\"valueAxes\":[{\"id\":\"ValueAxis-1\",\"name\":\"LeftAxis-1\",\"type\":\"value\",\"position\":\"left\",\"show\":true,\"style\":{},\"scale\":{\"type\":\"linear\",\"mode\":\"normal\"},\"labels\":{\"show\":true,\"rotate\":0,\"filter\":false,\"truncate\":100},\"title\":{\"text\":\"Max %user\"}}],\"seriesParams\":[{\"show\":true,\"type\":\"line\",\"mode\":\"normal\",\"data\":{\"label\":\"%user\",\"id\":\"1\"},\"valueAxis\":\"ValueAxis-1\",\"drawLinesBetweenPoints\":true,\"lineWidth\":2,\"interpolate\":\"linear\",\"showCircles\":true}],\"addTooltip\":true,\"addLegend\":false,\"legendPosition\":\"right\",\"times\":[],\"addTimeMarker\":false,\"labels\":{},\"thresholdLine\":{\"show\":false,\"value\":10,\"width\":1,\"style\":\"full\",\"color\":\"#34130C\"},\"dimensions\":{\"x\":null,\"y\":[{\"accessor\":0,\"format\":{\"id\":\"number\"},\"params\":{},\"aggType\":\"count\"}]}},\"aggs\":[{\"id\":\"1\",\"enabled\":true,\"type\":\"max\",\"schema\":\"metric\",\"params\":{\"field\":\"%user\"}},{\"id\":\"2\",\"enabled\":true,\"type\":\"date_histogram\",\"schema\":\"segment\",\"params\":{\"field\":\"timestamp\",\"useNormalizedEsInterval\":true,\"scaleMetricValues\":false,\"interval\":\"10m\",\"drop_partials\":false,\"min_doc_count\":1,\"extended_bounds\":{}}}]}",
      "uiStateJSON": "{}",
      "description": "",
      "version": 1,
      "kibanaSavedObjectMeta": {
        "searchSourceJSON": "{\"query\":{\"query\":\"\",\"language\":\"kuery\"},\"filter\":[],\"indexRefName\":\"kibanaSavedObjectMeta.searchSourceJSON.index\"}"
      }
    },
  "references": [
      {
        "name": "kibanaSavedObjectMeta.searchSourceJSON.index",
        "type": "index-pattern",
        "id": "2a5ed4b0-b451-11ea-a8db-210d095de476"
      }
    ]

}
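We did not hand-edit one such file per column; each visualization JSON is stamped out of a template. A rough sketch of the idea (the template file name and the __FIELD__ / __INDEX_PATTERN_ID__ placeholders are our own, not a Kibana convention):

# generate and post one visualization per CPU column
for i in %user %nice %system %iowait %steal %idle; do
  sed -e "s/__FIELD__/$i/g" -e "s/__INDEX_PATTERN_ID__/$INDEX_PATTERN_ID/g" \
      visualization_template.json > "$METRIC.$HOSTNAME.$i.json"
  curl -X POST -H "Content-Type: application/json" -H "kbn-xsrf: true" \
    -d @"$METRIC.$HOSTNAME.$i.json" \
    http://$KIBANA_HOST:5601/s/$SPACE_ID/api/saved_objects/visualization
done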

We are pretty much done, but we may have generated dozens of graphs by now, so let's make a few dashboards to organize the graphs by metric – one dashboard for CPU, one for RAM, one for each disk, and so on:


curl -X POST -H "Content-Type: application/json" -H "kbn-xsrf: true" -d @$INTERIM_FILE http://$KIBANA_HOST:5601/s/$SPACE_ID/api/saved_objects/dashboard

{
  "attributes":
    {
      "title": "sar-swap-server1.example.com",
      "hits": 0,
      "description": "",
      "panelsJSON": "[{\"version\":\"7.5.1\",\"gridData\":{\"w\":12,\"h\":8,\"x\":0,\"y\":0,\"i\":\"sar-swap-softnas-a83pr-kbswpfree\"},\"panelIndex\":\"sar-swap-softnas-a83pr-kbswpfree\",\"embeddableConfig\":{},\"panelRefName\":\"panel_0\"},{\"version\":\"7.5.1\",\"gridData\":{\"w\":12,\"h\":8,\"x\":12,\"y\":0,\"i\":\"sar-swap-softnas-a83pr-kbswpused\"},\"panelIndex\":\"sar-swap-softnas-a83pr-kbswpused\",\"embeddableConfig\":{},\"panelRefName\":\"panel_1\"},{\"version\":\"7.5.1\",\"gridData\":{\"w\":12,\"h\":8,\"x\":24,\"y\":0,\"i\":\"sar-swap-softnas-a83pr-%swpused\"},\"panelIndex\":\"sar-swap-softnas-a83pr-%swpused\",\"embeddableConfig\":{},\"panelRefName\":\"panel_2\"},{\"version\":\"7.5.1\",\"gridData\":{\"w\":12,\"h\":8,\"x\":36,\"y\":0,\"i\":\"sar-swap-softnas-a83pr-kbswpcad\"},\"panelIndex\":\"sar-swap-softnas-a83pr-kbswpcad\",\"embeddableConfig\":{},\"panelRefName\":\"panel_3\"},{\"version\":\"7.5.1\",\"gridData\":{\"w\":12,\"h\":8,\"x\":48,\"y\":0,\"i\":\"sar-swap-softnas-a83pr-%swpcad\"},\"panelIndex\":\"sar-swap-softnas-a83pr-%swpcad\",\"embeddableConfig\":{},\"panelRefName\":\"panel_4\"}]",
      "optionsJSON": "{\"useMargins\":true,\"hidePanelTitles\":false}",
      "version": 1,
      "timeRestore": false,
      "kibanaSavedObjectMeta": {
        "searchSourceJSON": "{\"query\":{\"query\":\"\",\"language\":\"kuery\"},\"filter\":[]}"
      }


    },
    "references": [

      {
        "name": "panel_0",
        "type": "visualization",
        "id": "56224aa0-b451-11ea-a8db-210d095de476"
      },
      {
        "name": "panel_1",
        "type": "visualization",
        "id": "56b95a80-b451-11ea-a8db-210d095de476"
      },
      {
        "name": "panel_2",
        "type": "visualization",
        "id": "5752db60-b451-11ea-a8db-210d095de476"
      },
      {
        "name": "panel_3",
        "type": "visualization",
        "id": "57ec5c40-b451-11ea-a8db-210d095de476"
      },
      {
        "name": "panel_4",
        "type": "visualization",
        "id": "58865250-b451-11ea-a8db-210d095de476"
      }
    ]

}

JSON files often look scary, but they actually aren't. Once the desired object has been created manually in the Kibana UI, its JSON can be retrieved and reused via copy-and-paste with only minor editing or automated substitution.

Just a few more API calls are required while coding all the visualizations and dashboards:

Get index pattern id:
curl -X GET -H "Content-Type: application/json" -H "kbn-xsrf: true" "http://$KIBANA_HOST:5601/s/$SPACE_ID/api/saved_objects/_find?type=index-pattern&fields=title"

Get visualization id:

curl -X GET -H "Content-Type: application/json" -H "kbn-xsrf: true" "http://$KIBANA_HOST:5601/s/$SPACE_ID/api/saved_objects/_find?type=visualization&per_page=1000"
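The ids returned by these calls are exactly what goes into the "references" sections shown earlier; with jq installed they can be extracted directly (the jq filter below is our own sketch):

# pick the id of the swap index pattern in the current space
INDEX_PATTERN_ID=$(curl -s -H "kbn-xsrf: true" \
  "http://$KIBANA_HOST:5601/s/$SPACE_ID/api/saved_objects/_find?type=index-pattern&fields=title" \
  | jq -r '.saved_objects[] | select(.attributes.title | startswith("sar.swap")) | .id')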

Let's enjoy the newly created dashboards!

 The CPU dashboard shows a spike related to a massive data copy operation:

The RAM dashboard shows the same data copy operation from a memory consumption point of view:

The root disk dashboard:

The data disk dashboard. The server has 4 data disks in RAID 0, and the dashboard shows metrics for one of the data disks:

Buurst Now Available in the Microsoft Azure Marketplace

Microsoft Azure customers worldwide now gain access to SoftNAS to take advantage of the scalability, reliability, and agility of Azure to drive application development and shape business strategies.

BELLEVUE, Wash.–(BUSINESS WIRE)–Buurst, a leading enterprise-class data performance company, today announced the availability of its flagship product, SoftNAS, in the Microsoft Azure Marketplace, an online store providing applications and services for use on Azure. Buurst customers can now take advantage of the productive and trusted Azure cloud platform, with streamlined deployment and management.

“Through our solutions, we strive to provide our customers with better application performance, lower cloud storage costs, and the control they need,” said Garry Olah, CEO of Buurst. “The availability of our SoftNAS product in the Microsoft Azure Marketplace enables us to offer these key benefits to a wider range of organizations.”

Buurst is dedicated to delivering new levels of data performance, control, and availability to position businesses to move, access, and leverage data quickly. The company and its innovative solutions offer impressive levels of performance in the cloud, having reached 1 million input/output operations per second (IOPS), and provide a patented cross-zone, high availability with a 99.999 percent uptime guarantee, giving customers true control over their data in the cloud.

Buurst’s flagship product, SoftNAS, offers customers control by providing the resources required to develop a new environment and enabling businesses to apply the configuration variables they need to get the maximum performance for petabytes of data on Azure. Additionally, businesses can significantly reduce the cost of cloud storage through SoftNAS’ optimization of Azure’s premium and standard managed disk storage, as well as leveraging its deduplication, compression, and tiering capabilities. SoftNAS optimizes data performance while keeping costs in check for businesses.

Sajan Parihar, senior director, Microsoft Azure Platform at Microsoft Corp., said, “We’re pleased to welcome Buurst to the Microsoft Azure Marketplace, which gives our partners great exposure to cloud customers around the globe. Azure Marketplace offers world-class quality experiences from global trusted partners with solutions tested to work seamlessly with Azure.”

The Azure Marketplace is an online market for buying and selling cloud solutions certified to run on Azure. The Azure Marketplace helps connect companies seeking innovative, cloud-based solutions with partners who have developed solutions that are ready to use.

About Buurst

Buurst, Inc. is a leading enterprise-class data performance company that delivers migration, cost management, and control of data in the cloud customers need. Buurst optimizes cloud storage decisions for organizations, from migration to granular monitoring and management to storage tiering for cost performance, across all major cloud platforms, ensuring superior performance and optimization of business-critical data. Buurst has offices in the Seattle and Houston areas and employees located across the globe. Buurst powers some of the largest enterprises, including Samsung, Halliburton, T-Mobile, Boeing, Netflix, L’Oréal, and WWE. For more information, visit www.buurst.com.

Click here for link to original Business Wire Press Release

Expand Azure Storage Efficiency with SoftNAS

 SoftNAS provides enterprise-level cloud NAS featuring data performance, security, high availability (HA), and support for the most extensive set of storage protocols in the industry: NFS, CIFS/SMB-AD, iSCSI.


Cost Management

Save 30-80% by reducing the amount of data to store

  • Enable Block Storage  
  • Data Deduplication & Compression 

Performance

Increase data performance without expanding storage required  

  • Multiple Storage Types 
  • Striping Disk 
  • Private Storage  
Control & Security

Supporting major protocols including iSCSI, CIFS/SMB, and NFS

  • Snapshots and Rollbacks
  • Large-scale Windows Filer & NFS Server

SoftNAS is designed to support a variety of market verticals, use cases, and workload types. Increasingly, SoftNAS is deployed on the Azure platform to enable block and file storage services through Common Internet File System (CIFS), NFS, AFP, and iSCSI.

SoftNAS is a software-defined NAS delivered as a virtual appliance running within the Azure compute service. It provides NAS capabilities suitable for the enterprise, including high availability using Azure availability sets with automatic failover in the Azure cloud. SoftNAS runs within your Microsoft Azure account and offers the business-critical data protection required for non-stop operation of applications, websites, and IT infrastructure.

    New Maintenance Release 4.4.4 Improves Performance with a No Downtime Guarantee

    SoftNAS is now Buurst! Our new name represents our commitment to even higher performance and reliability for your cloud applications and storage. The new Maintenance Release 4.4.4 is our latest step in that commitment. This version is a must-update for NDG compliance. 

    Here are the highlights… 

    • Replication Performance Improvements – replication now uses ZSTD compression.
    • Lift and Shift Improved File Scan Performance – now much faster.
    • SmartTiers Fixes – for migration policies, storage expansion, and UI.
    • SnapReplicate and Snap HA Fixes – for snapshot retention, snap replication, and UI display issues.
    • Updated ZFS to v0.8.3-3
    • Bug Fixes & Security Updates

    Version 4.4.4 for Buurst SoftNAS® is generally available.

    Version 4.4.4 for Buurst SoftNAS® is generally available. As a current subscriber, you will continue to have access to previous versions of SoftNAS products. However, Buurst highly recommends updating to version 4.4.4 as soon as possible to ensure you have the updated ZFS version, security fixes, and dramatic performance enhancements.

     Version 4.4.4 establishes a new minimum version requirement for the Buurst® No Storage Downtime Guarantee™. Please update your instance(s) within 30 days to ensure you remain in compliance with the Buurst No Storage Downtime Guarantee.  

    Read “HOW TO UPGRADE” below for instructions on how to begin your upgrade process. 

    NEW IN VERSION 4.4.4 

    Replication Performance Improvements – Buurst SoftNAS has switched to ZSTD compression for replication transfers, for a performance improvement of up to 40% through faster data replication. More info on ZSTD can be found here: https://facebook.github.io/zstd/

    Lift and Shift Improved File Scan Performance – Scans of Lift and Shift source locations with a high number of file system objects are now much quicker. In lab testing, a source with 1 TiB of data and 1.5 million objects was cut from 92 hours to 4 hours, using 1 GbE networking. 

    ENHANCED IN VERSION 4.4.4

    Updated ZFS to v0.8.3-3 – This ZFS upgrade addresses a potential ZFS panic issue with send/receive VERIFY3 operations.

    Lift and Shift Pool Creation – The pool creation process has now been optimized to provide better feedback if invalid parameters are provided. 

    SmartTiers enhancements – Several fixes to the SmartTiers feature have been implemented:

    • Resolved an issue in which migration policy settings were not retained after a reboot.
    • Resolved an issue in which expansion of SmartTiers storage could potentially result in data corruption.
    • Resolved issues related to data consistency during SmartTiers migration of data between tiers.
    • SmartTiers UI now displays consistent usage status.

    SnapReplicate and SNAP HA – Several fixes to the SnapReplicate and SNAP HA features have been implemented:

    • Unsupported SNAP HA recovery options within the UI for the VMware version are no longer visible to the user.  
    • Previously, the snapshot weekly retention policy defaulted to zero. The default has been changed to 1 to ensure that snapshots are retained at least one week unless otherwise specified. 
    • Resolved an issue in which a mismatch between the mbuffer block size and the SnapRep block size could result in memory fragmentation that impacts performance.

    SECURITY FIXES IN 4.4.4

    • CVE-2018-17192: Missing X-Frame-Options header – indicates to a browser whether a page can be rendered in a frame or other type of embedded object
    • CVE-2018-12302: Missing http only attribute
    • CVE-2018-17192: Clickjacking: X-Frame-Options header missing
    • CVE-2019-19089: Missing X-Content-Type-Options
    • CVE-2016-6884, CVE-2013-0169: TLS v1.1
    • CVE-2013-2566, CVE-2015-280, CVE-2015-4000: Weak Ciphers

    ***Read the Buurst SoftNAS Release Notes for the full list of enhancements in version 4.4.4 and more detailed instructions on how to upgrade versions.

    HOW TO UPGRADE

    Read the Release Notes for specific instructions on how to update from your specific SoftNAS Cloud version number.

    NOTE: Your in-place upgrade can take up to an hour to complete if upgrading from version 3.4.9.4 or previous. If updating from more recent versions, the upgrade could still take up to 45 minutes for operating system updates. Do NOT terminate the upgrade while in progress because errors will occur. If you feel that your upgrade is taking longer than expected, please contact Buurst Support for assistance, and do NOT reboot your instance.

    ARE YOU REGISTERED FOR GOLD SUPPORT?

    All customers can submit a support ticket by emailing support@buurst.com.

    AWS Marketplace subscribers of Buurst® SoftNAS running the 800 performance band or higher [min. Throughput MBps (128k) up to 265 on AWS] are entitled to 24x7x365 Buurst Gold-level phone support. Contact Buurst to register and receive your free Gold-level support.

    10 Ways Enterprise Selling is changing for ISV’s – Evolving from direct, to channel, to the cloud era

    As a student of our industry I have seen enterprise software selling evolve. IBM created this industry and became the gold standard for enterprise direct selling in the 70’s and 80’s. Many other great enterprise sales companies followed like Oracle, EMC, NetApp, Cisco and the like. But even pioneers like IBM learned that leverage was needed and that “controlling the customer” through account management was a myth!

    This gave birth, In the 90’s, to companies like Compaq and Novell who developed channel models from the start and were quickly followed by Microsoft, Citrix, VMware and many other companies. In the early 2000’s the channel model evolved to a hybrid model with channel elements coupled with direct account management. Most enterprise software companies today are hybrid, but almost all “start-up” with direct sales. You can argue the pluses and minuses of channel and direct, but I am a leverage guy, and my favorite quote from my old friend, Mark Templeton, former CEO of Citrix, is “25,000 people wake up every day selling Citrix and none of them work for us”.

    Enter the Cloud Era; So, what’s now different? As Enterprises migrate to the public cloud, the number of “true” public cloud players is small, with Amazon Web Services, Microsoft Azure and Google Cloud Platform being the runaway favorites. Because all cloud services, including the ISV’s are consumed through the customer’s tenant, the Cloud Service Provider “owns” the customer. You must rethink your selling model to take advantage of this additional leverage and align yourselves with partners who think and act in the cloud.

    Below are 10 Ways Enterprise Selling is changing for ISV’s:

    01. 

    Who owns the customer?

    Today the major public cloud vendors “own” the customers. Make no mistake about it, AWS, Microsoft Azure and Google Cloud Platform all “own” their customers and will be focused on programs that enable and retain this ownership. This isn’t a negative, but you must create and manage this as a leverage point.

    02. 

    Marketplaces

    All cloud vendors have marketplaces that promote, resell, demo, transact and compensate ISV’s for selling “through the marketplace”. Customers purchase on their AWS, Azure, GCP accounts and the ISV gets paid by the cloud vendor. You are a 3rd party product being sold by the cloud platform to “their” customers. I do realize that they are also your customers, but you are not as strategic as the cloud vendor. Marketplace selling is a new artform and you must become a cloud marketplace expert if you are going to succeed in the cloud era.

    03. 

    Cloud Sellers

    Cloud vendors compensate their sales teams who “co-sell” with ISV’s and other partners when the solution “lands” on their cloud platform. Most sales reps retire 10% of the Total Contract Value (TCV) against their quota, which can be a big deal. You must now learn how to co-sell with AWS, Microsoft and GCP. This is an evolving model and will be difficult to master, so have patience and get expert help if you don’t have it in-house.

    04. 

    Partner to Partner Engagement

    With the advent of mainstream cloud adoption, partner roles have changed. Cloud vendors have new technical, marketing and sales roles and responsibilities. Global and regional SI’s have stepped up and become cloud platform experts.

    05. 

    Sales, Marketing, and Technology

    Sales, Marketing, and Technology are the trifecta of cloud platform support. Lean on the cloud vendors for technical, marketing and sales assistance. They have teams and infrastructure to assist you with migration, testing, marketing and positioning and selling in the cloud.

    06. 

    Cloud Platform Matters

    The platform matters more now, and cloud vendors are aggressively competing for market share, and you must stay ahead of their development and aligned with their cloud native stack. Attend quarterly roadmap updates and leverage their resources to best align your technology with the evolving cloud platform.

    07. 

    Multi-Cloud Strategy

    Almost every enterprise has a multi-cloud strategy and every ISV must evolve with them. Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) are the market leaders and have enough capital to compete with each other and have their areas of focus but other public clouds like Oracle, IBM, Alibaba, etc. offer products that might be a great fit for your solutions because of technology stack, co-selling requirements and regional access.

    08. 

    Don’t Forget Verticals

    Not all public cloud vendors are created equally when it comes to verticals. For example, retailers may find it difficult to work with AWS because of the competition they see with its parent company. If you are Advertising and Marketing focused, GCP has a direct connection with the CMO in most enterprises because of Google Search. The top five verticals in cloud are Healthcare, Finance, Education, Automotive and Manufacturing, but this is an ever-changing landscape.

    09. 

    Messaging and Positioning

    Messaging and positioning for customers is different than to partners and cloud vendors. ISV’s must have three sets of messaging and positioning:

    • Customer – This is your standard “what’s in it for the customer” messaging, but you must be careful in a multi-cloud world not to bring another cloud vendor into one cloud vendor’s account, particularly if it’s a co-sell.
    • Cloud Vendor – What’s in it for the cloud vendors’ sellers from a customer value and sales commission perspective. How do they make money recommending your solution? How much drag do you have on the underlying cloud infrastructure?
    • Partner – What’s in it for the SI or Consulting Partner from an overall value perspective. How do they make money recommending your solution? How much drag do they receive by way of cloud services and reselling cloud infrastructure?

    10. 

    Think Globally

    The public cloud marketplaces are global; you need to act globally. My company supports customers in 29 countries because of the cloud vendor marketplaces. It’s large-scale international leverage – take advantage of it.

    The cloud era is a new and exciting chapter in our industry. If you aren’t on the cloud train, you will be missing out. The cloud business keeps growing at a faster rate, eating away at most on-premise solutions. With all disruption comes opportunity. Carpe Diem!

    5 Tips for Maximizing Your Cloud Data

    5 tips that every Enterprise technologist and business decision-maker should think about when they are managing and migrating data to the cloud.

    What are you going to do when you can no longer afford your data? The data explosion is upon us! More data is created every two years than in all of history before and there is no sign of this slowing. Even a global pandemic didn’t slow down data creation, it accelerated it!

    Enterprises struggle to balance two normally opposing things:

    • How can I get the maximum performance from my data to meet the demands of my customers, partners, and employees?
    • How can I get the best possible cloud economics, because my data, not my budget, is doubling every two years?

    Below are five tips that every Enterprise technologist and business decision-maker should think about.

    Compress your Data

    Compress your data as much as possible; it will save you money. Compressed data takes up less space and requires less time and network bandwidth to transfer. Efficient compression cuts storage costs, and high-performance compression can improve communication efficiency – providing a better customer experience.

    DeDupe your Data

    Duplicate data doubles and triples your data costs. Deduplication eliminates redundant data, reducing the size of the dataset. Deduplication with cloud storage reduces the storage requirements, along with the amount of data to be transferred over the network, resulting in faster and more efficient data operations.

    Tier your Data

    Not all data is created equal. Put your hot data on expensive, super-fast flash disks, your warm data on medium-performance disks, and your cold data on less expensive cold cloud storage. Tiered storage infrastructures enable enterprises to effectively improve performance, enhance the cost-effectiveness of their cloud storage system, and make the most of their available storage resources.

    Don’t forget about HA – High Availability

    When you absolutely, positively need access to your data, please turn on HA! Today there are several ways to create an HA solution for your data, and enterprises must balance cost against how many 9’s of availability they need. Baseline HA is the most cost-efficient but will only get you 3 9’s of availability, or 8.77 hours of unplanned downtime per year. Cross-zone and cross-region HA gives you the best possible solution at 5 9’s of availability, or 5.2 minutes of downtime per year; it adds cost, but it can guarantee your data stays available when a zone or a region of the public cloud goes down – and they do go down. By applying a high availability strategy, you can serve your customers through thick and thin and send a message that you value their business. A highly available infrastructure also mitigates the negative impact of outages on revenue and productivity, which can cost hundreds of thousands of dollars per hour of downtime.

    Stop Paying a Storage Tax

    Pay the premium for your data performance and don’t pay an additional storage tax.

    The concept of cloud storage pricing was based on traditional storage vendor pricing and ideals: the more data you move to the cloud, the more you pay. This will always be the case for the underlying disks, but it shouldn’t be the case for your file system. If you have a large amount of data and performance isn’t an issue, pay less; if performance is a deciding factor, pay more; and if it depends on the data, then tier your data. Either way, you can save 20-80% on your overall cloud storage costs by following the practices outlined in this blog.

    The I’s have it! – Why Buurst has embedded U’s

    I have been involved with branded or named companies in every decade since the 1980’s and can tell you that naming has become exponentially more difficult over the decades! We now live in a world where people and businesses squat on domains hoping for a payday and an industry has been created selling and reselling these domain names. We also live in a world where SEO is closely tied to owning the dot com version of your company name, so .io, .cloud, .biz just doesn’t cut it. All the good names are taken….or so it sometimes appears!

    When we set out to rename and reposition SoftNAS, we sought a name that best reflected our vision for the company as a data performance company; we wanted a descriptive name that spoke to the explosion of data in the cloud. We looked at hundreds of names, from made-up names to compound words to misspelled names, and we found Buurst, spelled with two U’s. Many people’s first reaction might be: why choose a name misspelled with two U’s?

    Why does this matter? Something I learned very early in life was taught to me by Lee Iacocca, an American automobile executive best known for the development of the Ford Mustang while at the Ford Motor Company in the 1960s, and for reviving the Chrysler Corporation as its CEO during the 1980s. I met Lee at a conference, and he told the story of how he named the Mustang. He didn’t seek a name that everyone liked; he knew that a name half the people loved and half the people didn’t would get everyone talking about it, and the name would stick. I believe the Ford Mustang name, still in use 50 years later, was a genius move!

    My CMO, Alex Rublowsky, and I thought that this was an opportunity for something special. Earlier in my career, I spent 13 years at Citrix, whose name and logo were defined by duplicate letters in the name. The name Citrix worked, in part, because of the two I’s. The two I’s stand out when the name is typed, but they truly punch when you oppose the I’s in the Citrix logo.

    We saw this as the inspiration for the embedded U’s in the Buurst logo. The two U’s make Buurst interesting, but the embedded U’s in the logo make it Buurst!

    There are many things about Buurst that remind me of the exciting earlier days at Citrix… We are betting on the same trajectory! Those in favor say "I".