Learn how the necessary cloud infrastructure resources are deployed within the custom VPC.
What you’ll learn
how the necessary cloud infrastructure resources are deployed within a custom VPC
Webiny Cloud Infrastructure - API - Custom VPC
As mentioned in the introduction section, the API project application’s cloud infrastructure comes in two setups - development and production. The two differ mainly in networking and in how the Amazon ElasticSearch Service is configured. In the production setup, these are set up differently, primarily to improve your project’s security posture and availability.
Unlike the development setup, where your project is deployed into the default VPC, the production setup deploys your project into a custom Virtual Private Cloud (VPC), which is what we cover in this section.
Note that the VPC setup presented here is a solid foundation, but not a one-size-fits-all solution. Depending on your or your organization’s requirements, it may need additional cloud infrastructure resources or different configurations.
Virtual Private Clouds (VPCs) are a topic that requires some general networking knowledge, as well as familiarity with AWS-specific concepts like regions, availability zones, the different network gateways, and so on. Be sure to read up on these before going through this section.
The diagram above gives an overview of the cloud infrastructure resources that are deployed when the Custom VPC option is chosen during the creation of a new Webiny project. Compared to the Default VPC option, the resources still work and communicate with each other in the same way, except that this time there are a couple of additional network-level resources and rules in place. These help improve your project’s overall security posture.
Public and private subnets
The most prominent change, when compared to the Default VPC option, is the inclusion of a VPC that consists of three subnets - one public (C) and two private (D, E) - deployed across multiple availability zones (AZs). With this network structure, you can place mission-critical cloud infrastructure resources into the private subnets (D, E), which makes them more secure, because they are not directly exposed to the public internet. This is especially important when it comes to hosting databases, for example the Amazon ElasticSearch Service (H).
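To make this layout concrete, the following is a minimal Pulumi-style sketch of such a network - one public and two private subnets carved out of a `10.0.0.0/16` VPC. All resource names, CIDR ranges, and availability zones here are illustrative assumptions, not the exact resources Webiny creates.

```typescript
import * as aws from "@pulumi/aws";

// A VPC to contain the API application's resources (illustrative CIDR).
const vpc = new aws.ec2.Vpc("webiny-vpc", {
  cidrBlock: "10.0.0.0/16",
  enableDnsSupport: true,
  enableDnsHostnames: true,
});

// One public subnet (C) - resources here can be reached from the internet.
const publicSubnet = new aws.ec2.Subnet("public-subnet", {
  vpcId: vpc.id,
  cidrBlock: "10.0.0.0/24",
  availabilityZone: "eu-central-1a",
  mapPublicIpOnLaunch: true,
});

// Two private subnets (D, E), in different AZs - not directly exposed
// to the public internet.
const privateSubnet1 = new aws.ec2.Subnet("private-subnet-1", {
  vpcId: vpc.id,
  cidrBlock: "10.0.1.0/24",
  availabilityZone: "eu-central-1a",
});

const privateSubnet2 = new aws.ec2.Subnet("private-subnet-2", {
  vpcId: vpc.id,
  cidrBlock: "10.0.2.0/24",
  availabilityZone: "eu-central-1b",
});
```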
With the Amazon ElasticSearch Service (H) placed inside a private subnet, note that you can’t connect to it directly from your machine. Deploying a jump box (bastion host) in a public subnet resolves this problem.
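For example, assuming you’ve deployed a small EC2 instance as the bastion host in the public subnet, you could open an SSH tunnel and reach the ElasticSearch domain from your machine. The hostnames, key path, and IP below are placeholders:

```shell
# Forward local port 9200, through the bastion host in the public subnet,
# to the ElasticSearch domain's HTTPS endpoint in the private subnet.
ssh -i ~/.ssh/bastion-key.pem -N -L \
  9200:vpc-my-domain-abc123.eu-central-1.es.amazonaws.com:443 \
  ec2-user@<bastion-public-ip>

# In another terminal, query the domain through the tunnel.
curl -k https://localhost:9200/_cluster/health
```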
Multiple Availability Zones
As mentioned, the public and private subnets are deployed across multiple availability zones (AZs). This makes your application more highly available, fault tolerant, and scalable. For example, if one of the AZs in a region goes offline, all network traffic is routed to the AZs that are still online, so your application keeps working.
Note that the number of distinct AZs depends on the region you’re deploying to, as some regions only have two AZs.
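To illustrate how a fixed set of subnets can still be spread across a region with fewer AZs, here is a small hypothetical TypeScript helper (not part of Webiny) that assigns subnets to AZs round-robin, wrapping around when needed:

```typescript
// Assign each subnet to an AZ round-robin, wrapping around when the
// region has fewer AZs than there are subnets.
function assignAzs(subnets: string[], azs: string[]): Record<string, string> {
  const placement: Record<string, string> = {};
  subnets.forEach((name, i) => {
    placement[name] = azs[i % azs.length];
  });
  return placement;
}

// A region with only two AZs still accommodates three subnets:
const placement = assignAzs(
  ["public-subnet", "private-subnet-1", "private-subnet-2"],
  ["eu-central-1a", "eu-central-1b"]
);
console.log(placement["private-subnet-2"]); // → "eu-central-1a"
```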
Have in mind that hosting your application in multiple availability zones may incur additional cost, since some cloud infrastructure resources need to be deployed multiple times. For example, this is the case with the Amazon ElasticSearch Service (H).
The only way resources located in the private subnets (D, E) can talk to the public internet is via the public subnet (C), which includes a NAT gateway (F). The NAT (network address translation) gateway is the middleman that forwards all internet-routable network traffic, received from the private subnets, to the Internet Gateway (G). This makes it possible for Lambda functions located in the private subnets (D, E) to talk to AWS resources that operate in an internet-facing environment, like Amazon DynamoDB (I), Amazon S3 (J), and Amazon Cognito (K).
Note that when private subnet resources communicate with resources operating in an internet-facing environment (I, J, K), data is still sent and received across the public internet (L).
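In Pulumi-style code, this wiring could look roughly as follows. This is an illustrative sketch with assumed names: it presumes a `vpc`, a `publicSubnet`, and private subnets have been defined elsewhere in the program, and is not Webiny’s exact code.

```typescript
import * as aws from "@pulumi/aws";

// Assumed to be defined elsewhere in the program:
declare const vpc: aws.ec2.Vpc;
declare const publicSubnet: aws.ec2.Subnet;
declare const privateSubnet1: aws.ec2.Subnet;

// Internet Gateway (G) - attached to the VPC, gives the public
// subnet (C) a route to the public internet.
const internetGateway = new aws.ec2.InternetGateway("igw", {
  vpcId: vpc.id,
});

// An Elastic IP for the NAT gateway (F), which lives in the public
// subnet (C). Note: newer provider versions use `domain: "vpc"`.
const natEip = new aws.ec2.Eip("nat-eip", { vpc: true });
const natGateway = new aws.ec2.NatGateway("nat", {
  allocationId: natEip.id,
  subnetId: publicSubnet.id,
});

// Private subnets (D, E) route all internet-bound traffic through the
// NAT gateway, so they are never directly reachable from outside.
const privateRouteTable = new aws.ec2.RouteTable("private-rt", {
  vpcId: vpc.id,
  routes: [{ cidrBlock: "0.0.0.0/0", natGatewayId: natGateway.id }],
});

new aws.ec2.RouteTableAssociation("private-rt-assoc-1", {
  routeTableId: privateRouteTable.id,
  subnetId: privateSubnet1.id,
});
```

The same route table would be associated with each private subnet; the public subnet’s route table would instead point `0.0.0.0/0` at the Internet Gateway (G).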
How is API Gateway communicating with Lambda functions if it's outside of the VPC?
To our knowledge, there is no official documentation on how this actually works. But since no additional configuration was needed to establish the API Gateway (B) - Lambda functions connection, it must be handled automatically by AWS’s internal infrastructure and mechanisms. This Stack Overflow question briefly discusses it, but again, no concrete evidence or answers are provided.