Best Practices Learned from 1,000 AWS VPC Configurations
Yolande Yip
April 17, 2019

At Buurst SoftNAS, we’ve configured over 1,000 Amazon VPCs for companies of all sizes: small businesses, Fortune 100 companies, and everything in between. When you’ve worked on this many AWS VPC designs and configurations, you learn a lesson or two about getting optimal results.

From this experience, we’re sharing AWS VPC best practices to help you with your AWS VPC deployment.

  1. Organize your AWS environment
  2. Create AWS VPC subnets
  3. Control your access
  4. SoftNAS Cloud NAS and AWS VPCs
  5. Common AWS VPC Mistakes
  6. SoftNAS Cloud NAS Overview
  7. Claim my $100 AWS Credit

Organize your AWS environment

The very first Amazon VPC best practice is to organize your AWS environment, and we recommend that you use tags. As you continue to add instances, create route tables, and carve out subnets, it's nice to know what connects with what, and the simple use of tags will make life much easier when it comes to troubleshooting.

Make sure you plan your CIDR block very carefully. Go a little bit bigger than you think you need, not smaller.

Remember that for every VPC subnet you create, AWS reserves five of its IP addresses. So when you create a subnet, know that there's a five-IP overhead off the top.
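As a quick sanity check for subnet sizing, here is a minimal sketch using Python's standard `ipaddress` module (the subnet CIDRs are hypothetical examples):

```python
import ipaddress

# AWS reserves 5 addresses in every subnet: the network address, the VPC
# router, the DNS server, one reserved for future use, and the broadcast
# address.
AWS_RESERVED_PER_SUBNET = 5

def usable_ips(cidr: str) -> int:
    """Return the number of instance-assignable IPs in a subnet CIDR."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

print(usable_ips("10.0.1.0/24"))  # 256 - 5 = 251
print(usable_ips("10.0.2.0/28"))  # 16 - 5 = 11
```

A /28 looks like 16 addresses on paper but only yields 11 usable ones, which is why undersizing subnets bites faster than people expect.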

Avoid using overlapping CIDR blocks. At some point, if not today then down the road, you may want to peer this VPC with another VPC, and if you have overlapping CIDR blocks, VPC peering will not function correctly, and you're going to find yourself in a configuration nightmare trying to get those VPCs to peer.

Always save a little bit of space for future expansion. There's no cost associated with using a bigger CIDR block, so don't undersize what you think you may need from an IP perspective; keep it clean and easy.
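The overlap check above is easy to automate before you commit to a CIDR plan. A minimal sketch with the standard `ipaddress` module, using hypothetical CIDR blocks:

```python
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    """True if two CIDR blocks share any addresses (a VPC peering blocker)."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Two VPCs carved from different /16s can be peered safely...
print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))    # False
# ...but these two cannot: the second block sits inside the first.
print(cidrs_overlap("10.0.0.0/16", "10.0.128.0/24"))  # True
```

Running a pairwise check like this across every VPC you plan to connect, including partner and customer environments, catches the problem while it is still cheap to fix.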

Create AWS VPC subnets the right way for success


Designing your subnets well leads to success. What is your AWS VPC subnet design strategy going to be?

One of the best practices for AWS subnets is to align your VPC subnets to your application tiers, such as a DMZ/proxy layer, an ELB layer if you're going to use load balancers, and application or database layers. Remember, if a subnet is not associated with a specific route table, then by default it uses the main route table. I've seen so many cases where people create a route table and a subnet, but they haven't actually associated the subnet with the route table when they thought they did, so the packets aren't flowing where they think they're flowing.
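The silent-fallback pitfall above can be sketched as a small audit in Python. This is a simplified model: the subnet and route-table IDs are hypothetical, and the association mapping is a stripped-down version of what the EC2 DescribeRouteTables API actually reports:

```python
def implicitly_on_main(all_subnet_ids, explicit_associations):
    """Subnets with no explicit route-table association fall back to the
    VPC's main route table -- often not what the designer intended.

    explicit_associations: mapping of subnet-id -> route-table-id for every
    explicit association (simplified from the DescribeRouteTables output).
    """
    return sorted(set(all_subnet_ids) - set(explicit_associations))

subnets = ["subnet-dmz", "subnet-app", "subnet-db"]
explicit = {"subnet-dmz": "rtb-public", "subnet-app": "rtb-private"}

# "subnet-db" was never associated, so it silently uses the main route table.
print(implicitly_on_main(subnets, explicit))  # ['subnet-db']
```

An audit like this, fed from the real API output, surfaces exactly the "I thought I associated that subnet" cases before the packets start flowing the wrong way.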

Put everything in a private subnet by default, and use your public subnet for ELBs and filtering and monitoring-type services. You can use NAT to gain access to public networks. We highly recommend, and you'll see this later, that you use a dual-NAT configuration for redundancy. There are some great CloudFormation templates available for setting up highly available NAT instances; make sure you size those instances properly for the amount of traffic you're actually going to push through them.

You can set up VPC peering for access to other VPCs within your environment, or perhaps a customer or partner environment. I also highly suggest leveraging VPC endpoints for access to services like S3, instead of going out over a NAT instance or an internet gateway to reach services that don't live within your VPCs. Endpoints are very easy to configure, and they're much more efficient, with lower latency, than going out over NAT or an internet gateway to reach something like S3 from your instance.

Control your access


Control your access within the AWS VPC. Don't cut corners and use a default route to the internet gateway. We see a lot of people do this, and it comes back to cause them problems later on. As mentioned, use redundant NAT instances; there are some great CloudFormation templates available from Amazon for creating a highly available, redundant NAT instance.

The default NAT instance size is an m1.small, which may or may not suit your needs depending on the amount of traffic you're going to push through it. I would also highly recommend that you use IAM for access control, especially by attaching IAM roles to instances. Remember that IAM roles cannot be assigned to running instances; a role has to be set at instance creation time. Using IAM roles means you don't have to keep populating AWS keys within specific products in order to gain access to those API services.

SoftNAS Cloud NAS and AWS VPCs


How does SoftNAS Cloud NAS fit into AWS VPCs?

We have a highly available architecture from a storage perspective, leveraging our SNAP HA capability, which provides high availability across multiple availability zones. We leverage our underlying secure block replication with SnapReplicate, and we highly recommend using SNAP HA in high-availability mode, which gives you a no-downtime guarantee plus five-nines uptime. It's also important to remember that Amazon provides no SLA unless you run a multi-zone deployment; a single-AZ deployment has no SLA within AWS.

We have two methods of deploying our cross-zone high availability at SoftNAS. The first is to leverage elastic IPs, where you have two separate controllers, each in its own availability zone. They're in the public subnet, and we assign each node an elastic IP address. We use a third elastic IP address as our VIP, or virtual IP.

You configure SnapReplicate between the two instances, which provides the underlying block replication. The elastic IP address serving as the VIP is assigned to whichever controller is currently primary, and whatever services you run, from an NFS, CIFS, or iSCSI perspective, mount or map drives to that elastic IP address.

If anything triggers our HA monitor, which looks at things like the health of the file system and the health of the network at multiple levels, it moves that elastic IP address from the primary controller to the secondary controller. This is applicable whether you're backing EBS with SoftNAS or using S3 with SoftNAS.

The second mode is to use a private virtual IP address, where both SoftNAS Cloud NAS instances live within a private subnet and have no outside access. It uses the same underlying SnapReplicate and monitoring technology; however, here you pick a virtual IP address that is outside the CIDR block of your AWS VPC. Your clients map to it, an entry is automatically placed into the route table, and should a failover occur, we update the route table automatically in order to route the traffic to whichever controller should be primary at the time. This is probably the more common way of deploying SoftNAS in a highly available architecture.
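The route-table mechanism described above can be sketched in a few lines of Python. This is an illustrative model only, not SoftNAS's actual implementation: the route table is a dict of destination CIDR to target, and the instance IDs and addresses are hypothetical:

```python
# A minimal sketch of private-VIP failover via route-table update.
VIP = "172.33.0.10/32"  # host route; deliberately OUTSIDE the VPC CIDR

route_table = {
    "10.0.0.0/16": "local",        # the (hypothetical) VPC's own CIDR
    VIP: "i-primary-controller",   # host route injected for the VIP
}

def fail_over(table, vip, new_target):
    """Repoint the VIP host route at the controller taking over."""
    table[vip] = new_target
    return table

# The HA monitor detects a failure and repoints the VIP at the secondary.
fail_over(route_table, VIP, "i-secondary-controller")
print(route_table[VIP])  # i-secondary-controller
```

Because clients mount the VIP rather than either controller's real address, a single route change is all it takes to redirect every NFS, CIFS, or iSCSI session to the surviving node.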

Common AWS VPC Mistakes


The SoftNAS support team sheds some light on common mistakes they see when it comes to Amazon VPC configuration. Read on to understand what you should avoid.

Each of these deployments requires two ENIs, or NIC interfaces, and both of those NICs need to be in the same subnet. Make sure you check this when you're creating your instances or adding the ENIs.

Another common error: one of the health checks we perform is a ping between the two instances, and the security group isn't always open to allow that ICMP health check. This will cause an automatic failover because we can't reach the other instance. We also leverage an S3 bucket in our HA deployment as a third-party witness, so if you deploy SoftNAS in a private subnet, we need access to S3, either via NAT or via an S3 endpoint configured within the VPC.

And again, as I mentioned just a few moments ago, for private HA, the virtual IP address must not be within the CIDR of the AWS VPC. Pick whatever address works best for you, but it cannot fall within the CIDR block of the AWS VPC, or the route-failover mechanism that we're leveraging will not function properly.
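That constraint is simple to validate in code. A minimal sketch with the standard `ipaddress` module, using a hypothetical VPC CIDR of 10.0.0.0/16:

```python
import ipaddress

def valid_private_vip(vip: str, vpc_cidr: str) -> bool:
    """A private-HA VIP must fall OUTSIDE the VPC's CIDR block, or the
    route-based failover cannot steer traffic to it."""
    return ipaddress.ip_address(vip) not in ipaddress.ip_network(vpc_cidr)

print(valid_private_vip("10.0.5.10", "10.0.0.0/16"))    # False -- inside the VPC
print(valid_private_vip("172.33.0.10", "10.0.0.0/16"))  # True  -- safe to use
```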

SoftNAS Cloud NAS Overview

SoftNAS Cloud NAS is a powerful, enterprise-class virtual storage appliance that works across public, private, and hybrid clouds. It's easy to try, easy to buy, and easy to learn and use. You have freedom from platform lock-in, and it works with the most popular cloud computing platforms, including Amazon EC2, VMware vSphere, CenturyLink, and Microsoft Azure. Our mission is to be the data fabric for businesses across all types of cloud, whether private, public, or hybrid.

We have a few different products that you can leverage. The first is SoftNAS Cloud NAS, a NAS filer that runs on public clouds. We have our cloud file gateway, which is for on-premises use to connect to cloud-based storage. We also have SoftNAS for Service Providers, our multi-tenant NAS replacement for service providers that leverages iSCSI and object storage.


We’re sharing the questions that came from the attendees of our webinar on AWS VPC best practices, and our answers to them. You may just find your own questions answered here.

  • We use VLANs in our data centers for isolation purposes today. What VPC construct do you recommend to replace VLANs in AWS?
    • That would be subnets. You could leverage subnets, or if you really wanted a stronger isolation mechanism, create another VPC to isolate those resources further and then peer them together via VPC peering.
  • You said to use IAM for access control, so what do you see in terms of IAM best practices for AWS VPC security?
    • The biggest thing comes up when you deal with third-party products or customized software on your web server. Anything that uses AWS API resources needs a secret key and an access key. You can store those keys in some type of text file and have the software reference it, or, the easier way, just set the minimum level of permissions you need in an IAM role, create that role, and attach it to your instance at start time. The role itself can only be assigned at start time; however, the permissions of the role can be modified on the fly, so you can add or subtract permissions should the need arise.
  • When you're troubleshooting complex VPC networks, what approaches and tools have you found to be the most effective?
    • We love to use traceroute. I love to use ICMP when it's available, but I also like to use AWS Flow Logs, which allow me to see what's going on at a much more granular level, and tools like CloudTrail to make sure I know which API calls were made by which user in order to really understand what's gone on.
  • What do you recommend for VPN and intrusion detection?
    • There are a lot of options available. We've got some experience with Cisco, Juniper, and Fortinet for things like VPN, and as far as IDS goes, Alert Logic is a popular solution; I see a lot of customers using that particular product. Some people like open-source tools like Snort as well.
  • Any recommendations around secure jump box configurations within AWS VPC?
    • If you're going to deploy most of your resources within a private subnet and you're not going to use a VPN, one common approach is to configure a quick jump box. What I mean by that is to take a server, Windows or Linux depending on your preference, put it in the public subnet, and only allow access from a certain set of IP addresses over SSH (Linux) or RDP (Windows). That puts you inside the network and allows you to gain access to the resources within the private subnet.
  • And are people using VPNs to access the jump box too, for added security?
    • Some people do that. Sometimes they'll put a jump box inside the VPC and VPN into that. It's just a matter of your organization's security policies.
  • Any performance or other considerations when designing the VPC?
    • It's important to understand that each instance has its own available amount of resources, not only from a network I/O but also from a storage I/O perspective. Also understand that a 10 Gb instance, let's say a c3.8xlarge, doesn't get 10 Gb of network bandwidth plus 10 Gb of storage bandwidth; it's 10 Gb for the instance. So if you're pushing a high amount of I/O from both a network and a storage perspective, that 10 Gb is shared between the network and access to the underlying EBS storage network. This confuses a lot of people: it's 10 Gb for the instance, not just a 10 Gb network pipe.
  • Why would you use an elastic IP instead of the virtual IP?
    • What if you have people who want to access this from outside of AWS? We do have some customers whose servers are primarily within AWS but who want access to files from clients that are not inside the AWS VPC, so you could leverage it that way. This was also, to be honest, the first way we created HA, because at first it was the only method that allowed us to share an IP address and work around public-cloud limitations such as the lack of Layer 2 broadcast.
  • It looks like this next question is around AWS VPC tagging. Any best practices, for example?
    • Yeah, I see people take different services, like web, database, or application, and tag everything within the security groups and elsewhere with that particular tag. For people deploying SoftNAS, I would recommend just using the name SoftNAS as the tag. It's really up to you, but I do suggest that you use tags. It will make your life a lot easier.
  • Is storage level encryption a feature of SoftNAS Cloud NAS or does the customer need to implement that on their own?  
    • As of our version available today, which is 3.3.3, on AWS you can leverage the underlying EBS encryption, and we provide encryption for Amazon S3 as well. Coming in our next release, due out at the end of the month, we offer our own encryption, so you can create encrypted storage pools, which encrypt the underlying disk devices.
  • Virtual VIP for HA: does the subnet the VIP would be part of get added to the AWS VPC routing table?
    • It's automatic. When you select the VIP address in the private-subnet configuration, it will automatically add a host route into the routing table, which allows clients to route that traffic.
  • Can you clarify the requirement on an HA pair with two NICs, that both have to be in the same subnet?
    • Each instance needs two ENIs, and each of those ENIs needs to be in the same subnet.
  • Do you have HA capability across regions? What options are available if you need to replicate data across regions? Is the data encryption at-rest, in-flight, etc.?  
    • We cannot do HA with automatic failover across regions. However, we can do SnapReplicate across regions, and then you can do a manual failover should the need arise. The data you transfer via SnapReplicate is sent over SSH across regions. You could replicate across data centers; you could even replicate across different cloud platforms.
  • Can AWS VPC pairings span across regions?
    • The answer is no, it cannot.
  • Can we create an HA endpoint to AWS for use with direct connect?
    • Absolutely. You could go ahead and create an HA pair of SoftNAS Cloud NAS, leverage direct connect from your data center and access that highly available storage.
  • When using S3 as a backend and a write cache, is it possible to read the file while it’s still in cache?
    • The answer is yes, it is. I'm assuming you're speaking about the eventual-consistency challenges of the AWS standard region; with the manner in which we deal with S3, where we treat each bucket as its own hard drive, we do not have to deal with the S3 consistency challenges.
  • Regarding subnets, the example where a host lives in two subnets, can you clarify both these subnets are in the same AZ?
    • In the examples that I've used, each of these subnets is actually within its own availability zone. So, again, each subnet is in its own separate availability zone, and if you want to discuss this further, please feel free to reach out.
  • Is there a white paper on the website dealing with the proper engineering for SoftNAS Cloud NAS for our storage pools, EBS vs. S3, etc.?
    • Click here to access the white paper, our SoftNAS architectural paper, co-written by SoftNAS and Amazon Web Services, which covers proper configuration settings, options, etc. We also have a pre-sales architecture team that can help you with best practices, configurations, and those types of things from an AWS perspective. Please contact us and someone will be in touch.
  • How do you solve the HA and failover problem?
    • We do a couple of different things here. When we set up HA, we create an S3 bucket that acts as a third-party witness. Before anything takes over as the master controller, it queries the S3 bucket and makes sure it's able to take over. The other thing we do is shut down the old source node after a takeover. You don't want a situation where a node is flapping up and down, kind of up but kind of not, and keeps trying to take over; so if a takeover occurs, whether manual or automatic, the old source node in that configuration is shut down. That information is logged, and we assume you'll go investigate why the failover took place. If there are questions about that in a production scenario, our support team is always available.
  • Can we monitor SoftNAS logs using Splunk/Sumo, and can you say which log files we should monitor?
    • Absolutely, but we also provide some built-in log monitoring. The key logs here are SnapReplicate.log, which covers all of your SnapReplicate and HA functionality, and snserv.log, the SoftNAS server log, which covers everything done via StorageCenter. And because this is a Linux operating system, monitoring the standard Linux log messages is a good idea. That's just a smattering of them.

Our goal here was to pass on some of the lessons we've learned from configuring AWS VPC deployments for our customers. As you make the journey to deploying in the cloud, or if you're already operational there, hopefully you'll save some time by avoiding the obstacles other customers have faced.

We'd like to invite you to try SoftNAS Cloud NAS on AWS. We have a 30-day trial; if you click the blue button below, you can try SoftNAS Cloud NAS on the AWS platform with a $100 AWS credit. There are also links there for contacting us if you have any more questions or would like more information.

Claim my $100 AWS Credit


