Choosing the Right Instance Type and Instance Size for AWS and Azure

Buurst Staff

In this post, we’re sharing an easy way to determine the best instance type and appropriate instance size for specific use cases in the AWS and Azure clouds. Several factors should guide your decision; let’s go through each of them in depth.

What use case are you trying to address?

  • A. Migration to the Cloud

    • Migrating existing applications into the cloud should not be complex, expensive, time-consuming, or resource-intensive, nor should it force you to rewrite your application to run in the public cloud. If your existing applications access storage using the CIFS/SMB, NFS, AFP, or iSCSI protocols, choose a NAS filer solution that lets them access cloud storage (block or object) through those same protocols.
  • B. SaaS-Enabled Applications

    • For revenue-generating SaaS applications, high performance, maximum uptime, and strong security with access control are critical business requirements. Fulfilling all of these at once in a multi-tenant public cloud environment can be challenging. An enterprise-grade cloud NAS filer can help you meet these challenges: a good NAS solution provider will assure high availability with no downtime, high levels of performance, strong security with integration to industry-standard access control, and an easier path to SaaS-enabling your applications.
  • C. File Server Consolidation

    • Over time, end users and business applications create more and more data, usually unstructured, and rather than waiting for the IT department, users install file servers wherever they can find room close to their locations. At the same time, businesses acquire or are acquired by other companies, inheriting all of their file servers in the process. Ultimately, it’s the IT department that must manage this “server sprawl”: OS and software patches, hardware upkeep and maintenance, and security. With limited IT staff and resources, the task becomes unmanageable. The best long-term solution is, of course, the cloud, with a NAS filer to migrate files to it. This strategy provides scalable storage that users access exactly as they have always accessed their files on local file servers.
  • D. Legacy NAS Replacement

    • With a limited IT staff and budget, it’s impractical to keep investing in legacy NAS systems and purchase more and more storage to keep pace with the rapid growth of data. Instead, investment in enterprise-grade cloud NAS can help businesses avoid burdening their IT staff with maintenance, support, and upkeep, and pass those responsibilities on to a cloud platform provider. Businesses also gain the advantages of dynamic storage scalability to keep pace with data growth, and the flexibility to map performance and cost to their specific needs.
  • E. Backup/DR/Archive in the Cloud

    • Use tools to replicate and back up your data from your VMware data center to the AWS and Azure public clouds. Eliminate physical backup tapes by archiving data in inexpensive S3 storage or in cold storage like AWS Glacier for long-term retention (see the lifecycle sketch after this list). For stringent recovery point objectives, cloud NAS can also serve as an on-premises backup or primary storage target for local area network (LAN) connected backups. As a business’ data grows, the backup window can become unmanageable and tie up precious network resources during business hours. Cloud NAS with local disk-based caching shrinks the backup window by streaming data to the cloud in the background for better WAN optimization.
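As an illustration of the tape-elimination pattern above, here is a minimal Python sketch, using boto3, of an S3 lifecycle rule that transitions backup objects to Glacier-class storage after 30 days. The bucket name, prefix, and day counts are hypothetical placeholders rather than values from this post, and the call assumes AWS credentials are already configured.

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and prefix -- substitute your own.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-backup-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-backups-to-glacier",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "backups/"},
                    # Move objects to Glacier-class storage after 30 days...
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                    # ...and expire them after roughly seven years.
                    "Expiration": {"Days": 2555},
                }
            ]
        },
    )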

What Cloud Platform do you want to use?

No matter which cloud provider you select, there are some basic infrastructure requirements to keep in mind:

  • Number of CPUs
  • Size of RAM
  • Network performance
  • A. Amazon Web Services (AWS)

    • Standard: r5.xlarge is a good starting point in terms of memory and CPU resources. This category is suited to processing and caching workloads with minimal network bandwidth requirements. Composition: 4 vCPU, 32 GiB RAM, up to 10 Gbps network.

    • Medium: r5.2xlarge is a good choice for read-intensive workloads, which benefit from this category’s larger memory-based read cache. The additional CPU also improves performance when deduplication, encryption, compression, and/or RAID are enabled. Composition: 8 vCPU, 64 GiB RAM, up to 10 Gbps network.

    • High-end: r5.24xlarge suits workloads that transfer very large amounts of data over the network. In addition to the very high-speed network, this instance level provides far more CPU and memory capacity. Composition: 96 vCPU, 768 GiB RAM, 25 Gbps network (see the boto3 sketch after this list).
  • B. Microsoft Azure

    • The Dsv3-series supports premium storage and is the latest hyper-threaded, general-purpose generation, running on both the 2.4 GHz Intel Xeon® E5-2673 v3 (Haswell) and the 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell) processors. With Intel Turbo Boost Technology 2.0, Dsv3 instances can reach clock speeds of up to 3.5 GHz. The Dsv3-series sizes offer a combination of vCPU, memory, and temporary storage best suited for most production workloads.
    • Standard: Standard_D4s_v3 with 4 vCPU, 16 GiB RAM, and moderate network bandwidth
    • Medium: Standard_D8s_v3 with 8 vCPU, 32 GiB RAM, and high network bandwidth
    • High-end: Standard_D64s_v3 with 64 vCPU, 256 GiB RAM, and extremely high network bandwidth (see the Azure SDK sketch after this list)
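Rather than trusting a blog post for instance specifications, you can confirm the AWS numbers above against the live catalog. Below is a minimal sketch using boto3’s describe_instance_types; the region is a placeholder and standard AWS credentials are assumed.

    import boto3

    # Region is a placeholder -- use the one you deploy in.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    resp = ec2.describe_instance_types(
        InstanceTypes=["r5.xlarge", "r5.2xlarge", "r5.24xlarge"]
    )

    # Print vCPU count, memory, and rated network performance per type.
    for it in resp["InstanceTypes"]:
        print(
            it["InstanceType"],
            it["VCpuInfo"]["DefaultVCpus"], "vCPU,",
            it["MemoryInfo"]["SizeInMiB"] // 1024, "GiB RAM,",
            it["NetworkInfo"]["NetworkPerformance"],
        )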
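The Azure sizes can be verified the same way. This sketch uses the azure-mgmt-compute SDK; the subscription ID and region are hypothetical placeholders, and it assumes DefaultAzureCredential can authenticate in your environment.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    # Subscription ID and region are placeholders -- substitute your own.
    client = ComputeManagementClient(
        credential=DefaultAzureCredential(),
        subscription_id="00000000-0000-0000-0000-000000000000",
    )

    # List the region's VM sizes and print the three discussed above.
    wanted = {"Standard_D4s_v3", "Standard_D8s_v3", "Standard_D64s_v3"}
    for size in client.virtual_machine_sizes.list(location="eastus"):
        if size.name in wanted:
            print(size.name, size.number_of_cores, "vCPU,",
                  size.memory_in_mb // 1024, "GiB RAM")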

What type of storage is needed?

Both AWS and Azure offer block as well as object storage. Block storage is normally used with file systems, while object storage addresses the need to store “unstructured” data like music, images, video, backup files, database dumps, and log files. Selecting the right type of storage also influences how well an AWS instance or Azure VM will perform.
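To make the block-versus-object distinction concrete, here is a minimal boto3 sketch contrasting the two models on AWS: put_object writes a whole object to S3, while create_volume provisions a raw EBS device that an instance must attach, format, and mount. The bucket, key, zone, and size values are hypothetical placeholders.

    import boto3

    # Object storage: write a whole object to S3 under a key.
    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="example-media-bucket",
        Key="logs/app.log",
        Body=b"unstructured data: logs, images, backups, dumps",
    )

    # Block storage: provision a raw EBS volume; an instance attaches it,
    # formats it with a file system, and mounts it.
    ec2 = boto3.client("ec2")
    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,  # GiB
        VolumeType="gp3",
    )
    print(volume["VolumeId"])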

Other resources:

About AWS S3 object storage

About AWS EBS block storage

Types of Azure storage

Need a “No Storage Downtime Guarantee”?

No matter which cloud platform is used, look for a NAS filer that offers high availability (HA). A robust set of HA capabilities protects against data center, availability zone, server, network, and storage subsystem failures to keep the business running without downtime. HA monitors all critical storage components to ensure they remain operational. If a system component suffers an unrecoverable failure, another storage controller detects the problem and automatically takes over, so there is no downtime or business impact. This protects companies from the lost revenue they would otherwise incur when access to data resources and critical business applications is disrupted.
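At its core, the takeover behavior described above is a heartbeat-and-failover loop. The Python sketch below is purely illustrative of that pattern: the endpoint name and the promote_standby() helper are hypothetical, and a real NAS controller adds fencing, quorum, and storage-level health checks that a few lines cannot capture.

    import socket
    import time

    PRIMARY = ("nas-primary.example.internal", 2049)  # hypothetical NFS endpoint
    CHECK_INTERVAL = 5   # seconds between heartbeats
    MAX_FAILURES = 3     # consecutive misses before failover

    def is_reachable(addr, timeout=2.0):
        """Return True if a TCP connection to the endpoint succeeds."""
        try:
            with socket.create_connection(addr, timeout=timeout):
                return True
        except OSError:
            return False

    def promote_standby():
        """Hypothetical placeholder: move the virtual IP and storage to standby."""
        print("primary unreachable: promoting standby controller")

    failures = 0
    while True:
        failures = 0 if is_reachable(PRIMARY) else failures + 1
        if failures >= MAX_FAILURES:
            promote_standby()
            break
        time.sleep(CHECK_INTERVAL)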

If you’re looking for more personal guidance or have any technical questions, get in touch with our team of cloud experts, who have done thousands of VPC and VNet configurations across AWS and Azure. Even if you are not considering SoftNAS, our professional services team will be happy to answer your questions, free of charge.
