MetroAE supports deployment of the following components in AWS.
- VSD
- VSC (as KVM running on an AWS bare-metal instance)
- VSTAT (ElasticSearch)
- VNS Utils
- NSGv
Deployment in AWS consists of the following main steps.

1. Install Libraries
2. Upload or Import AMIs
3. Set Up Virtual Private Cloud
4. Set Up Bare-Metal Host (for VSC Only)
5. Configure Components
6. Deploy Components
MetroAE uses the CloudFormation Ansible module to deploy components in AWS. This module requires the boto and boto3 Python libraries. Use one of the following three methods to install the required libraries on the MetroAE host.
Using pip:

```
pip install boto
pip install boto3
```

Using yum:

```
yum install python-boto
yum install python-boto3
```

Using apt-get:

```
apt-get install python-boto
apt-get install python-boto3
```
Amazon Machine Images (AMIs) are used to run instances in EC2. For each Nuage Networks component that you want to deploy (except VSC), you'll need to upload or import an AMI to AWS. The AMI identifiers are provided to MetroAE for deployment. VSC is not supported as an AMI. It must be deployed as KVM running on an AWS bare-metal instance.
Before installing Nuage Networks components, you must define and deploy a virtual private cloud (VPC) in AWS. An example file (aws-example-vpc.yml) of a basic VPC is provided in the examples directory. The VPC must define the network interfaces that will be used by each component and should provide connectivity between the components as well as Internet access (either direct or outgoing-only through NAT). We strongly recommend that you also define security policies, IP addressing and DNS. The recommended subnets for each component are listed in the table below, followed by a minimal configuration sketch. Note that the access subnet is expected to have direct Internet access and the management subnet to have outgoing-only access.
Component | Subnet1 | Subnet2 |
---|---|---|
VSD | Mgmt | |
VSC | Mgmt | Data |
VSTAT | Mgmt | |
VNS Util | Mgmt | Data |
NSGv | Access | Data |
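For orientation, here is a minimal CloudFormation sketch of a VPC with management, data and access subnets and one network interface. This is not the provided aws-example-vpc.yml; all resource names and CIDRs are placeholders, and the Internet gateway, NAT gateway, route tables and security groups needed to satisfy the connectivity requirements above are omitted for brevity.

```yaml
# Minimal sketch of a VPC for Nuage components; placeholder names and CIDRs.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  NuageVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16          # placeholder VPC CIDR
  MgmtSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref NuageVpc
      CidrBlock: 10.0.1.0/24          # management subnet (outgoing-only access)
  DataSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref NuageVpc
      CidrBlock: 10.0.2.0/24          # data subnet
  AccessSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref NuageVpc
      CidrBlock: 10.0.3.0/24          # access subnet (direct Internet access)
  VsdMgmtEni:
    Type: AWS::EC2::NetworkInterface  # example ENI; its identifier is later
    Properties:                       # passed to MetroAE as aws_mgmt_eni
      SubnetId: !Ref MgmtSubnet
```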
Deploying VSC as a standard AWS component is not supported. Because it relies on the VxWorks operating system, the VSC image cannot be converted to an AMI. Instead, you can run VSC as a KVM instance within an AWS bare-metal server. Follow the steps below to set up the bare-metal host.
The AWS bare-metal server does not support bridge interfaces, PCI passthrough, or macvtap. To make connections, use the routed network option. The routed networks must be defined in libvirt on the host. Multiple addresses can be supported on a single bare-metal interface by adding secondary IP addresses via the EC2 console and using SNAT and DNAT iptables rules, as sketched below.
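The following sketch shows one way this can be wired up for the VSC management interface: a routed libvirt network on the host, plus DNAT/SNAT rules mapping a secondary EC2 address to the VSC's internal address. The interface name (eth0) and all addresses are assumptions; substitute the values from your VPC and your vscs.yml, and repeat the pattern for the data network.

```
# Assumptions (placeholders):
#   eth0            = host interface carrying the mgmt subnet
#   10.0.1.20       = secondary IP added to the host via the EC2 console
#   192.168.100.10  = internal_mgmt_ip assigned to the VSC

# Define and start a routed libvirt network for the VSC management interface.
cat > /tmp/mgmt-routed.xml <<'EOF'
<network>
  <name>mgmt-routed</name>
  <forward mode='route' dev='eth0'/>
  <bridge name='virbr-mgmt' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'/>
</network>
EOF
virsh net-define /tmp/mgmt-routed.xml
virsh net-start mgmt-routed
virsh net-autostart mgmt-routed

# DNAT traffic arriving on the secondary EC2 address to the VSC's internal address,
# and SNAT the VSC's outbound traffic so it leaves with the secondary address.
iptables -t nat -A PREROUTING  -d 10.0.1.20 -j DNAT --to-destination 192.168.100.10
iptables -t nat -A POSTROUTING -s 192.168.100.10 -j SNAT --to-source 10.0.1.20
```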
Configuring components for AWS is similar to configuring for other server types. See CUSTOMIZE.md for details on standard deployments. The configuration files for AWS deployments require a few additional specifications.
The AWS access key can be specified as `aws_access_key` and the secret key as `aws_secret_key`. If these are not specified, the values are taken from the environment variables `AWS_ACCESS_KEY` and `AWS_SECRET_KEY`.
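For example, the environment variables can be set on the MetroAE host as follows (bash syntax, placeholder values):

```
export AWS_ACCESS_KEY=<your-access-key-id>
export AWS_SECRET_KEY=<your-secret-access-key>
```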
For components other than VSC, set `target_server_type` to "aws". AWS also requires that the following fields be specified for every component except VSC (a hypothetical example entry follows the list).
- `aws_region`: The AWS region (e.g. us-east-1)
- `aws_ami_id`: Identifier of the AMI to be used for the component
- `aws_instance_type`: The AWS instance type for the image (e.g. t2.medium)
- `aws_key_name`: The name of the key pair used for access to the component
- `aws_mgmt_eni`/`aws_data_eni`/`aws_access_eni`: The elastic network interface identifiers from the deployed VPC, one for each subnet the component requires
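For illustration, a hypothetical entry in vsds.yml might look like the sketch below. Only the AWS-specific field names come from this section; the hostname, the values and the overall file layout are assumptions, so consult the examples shipped with MetroAE (see CUSTOMIZE.md) for the authoritative format.

```yaml
# Hypothetical vsds.yml entry; all values are placeholders.
- hostname: vsd1.example.com            # assumed name field and value
  target_server_type: aws
  aws_region: us-east-1
  aws_ami_id: ami-0123456789abcdef0     # AMI uploaded/imported earlier
  aws_instance_type: t2.xlarge          # placeholder instance size
  aws_key_name: my-keypair              # placeholder key pair name
  aws_mgmt_eni: eni-0123456789abcdef0   # ENI from the VPC's mgmt subnet
```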
VSC is not supported as a direct AWS component, but it can be deployed as a KVM instance on a bare-metal server by specifying several fields in `vscs.yml` as shown below. Set `target_server_type` to "kvm" and `target_server` to the address(es) of the bare-metal host(s).
To support routed network connectivity, specify the following fields.
- `internal_mgmt_ip`: The IP address assigned to the management interface on the VSC itself. This internal address can be NATed to the real address of the bare-metal host using iptables rules.
- `internal_ctrl_ip`: The IP address assigned to the data (control) interface on the VSC itself. This internal address can be NATed to the real address of the bare-metal host using iptables rules.
- `internal_data_gateway_ip`: The IP address of the data network gateway for the VSC. The VSC reaches the NSGs and other components through this gateway via static routes added on the VSC.
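Putting these fields together, a hypothetical vscs.yml entry might look like the following sketch. The hostname and all addresses are placeholders, and the overall file layout is assumed; consult the MetroAE examples for the authoritative format.

```yaml
# Hypothetical vscs.yml entry for a VSC running as KVM on an AWS bare-metal host.
- hostname: vsc1.example.com             # assumed name field and value
  target_server_type: kvm
  target_server: 10.0.1.5                # address of the bare-metal host
  internal_mgmt_ip: 192.168.100.10       # NATed to the host's mgmt address via iptables
  internal_ctrl_ip: 192.168.101.10       # NATed to the host's data address via iptables
  internal_data_gateway_ip: 192.168.101.1
```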
Bootstrapping of NSGvs deployed to AWS is supported through the normal bootstrapping process. See NSGV_BOOTSTRAP.md for details.
If you'd like to deploy only NSGv (no other components), MetroAE can optionally provision a suitable VPC for you. You will need to configure `nsgvs.yml` in your deployments subdirectory. For the automatic creation of a test VPC on AWS to host your NSGv, the following parameters must be provided in `nsgvs.yml` for each NSGv:
- `provision_vpc_cidr`
- `provision_vpc_nsg_wan_subnet_cidr`
- `provision_vpc_nsg_lan_subnet_cidr`
- `provision_vpc_private_subnet_cidr`
These CIDRs cover the VPC itself, the WAN interface, the LAN interface and the private subnet. When provisioning a VPC in this way, the elastic network interface identifiers `aws_data_eni` and `aws_access_eni` for the NSGv do not need to be specified; they are discovered from the created VPC. A hypothetical example is sketched below.
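In this sketch of an `nsgvs.yml` fragment, all values are placeholders and the surrounding layout (name field, additional NSGv settings) is assumed; the ENI identifiers are intentionally omitted because they are discovered from the provisioned VPC.

```yaml
# Hypothetical nsgvs.yml fragment for automatic test-VPC provisioning.
- target_server_type: aws
  aws_region: us-east-1                       # placeholder region
  aws_ami_id: ami-0123456789abcdef0           # placeholder NSGv AMI
  aws_instance_type: t2.medium                # placeholder instance type
  aws_key_name: my-keypair                    # placeholder key pair
  provision_vpc_cidr: 10.20.0.0/16            # placeholder CIDRs below
  provision_vpc_nsg_wan_subnet_cidr: 10.20.1.0/24
  provision_vpc_nsg_lan_subnet_cidr: 10.20.2.0/24
  provision_vpc_private_subnet_cidr: 10.20.3.0/24
```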
After you have set up the environment and configured your components, you can use MetroAE to deploy your components with a single command.
```
metroae-container install everything
```
Alternatively, you can deploy individual components or perform individual tasks such as predeploy, deploy and postdeploy. See DEPLOY.md for details.
Get support via the forum on the MetroAE site.
Ask questions and contact us directly at [email protected].
Report bugs you find and suggest new features and enhancements via the GitHub Issues feature.
You may also contribute to MetroAE by submitting your own code to the project.