Terraform
1) Basic Information about Terraform
- a tool to provision infrastructure
- written in Go (Golang) and distributed as a compiled binary
- Infrastructure as Code
- automation of infrastructure
- keeps your infrastructure in a certain (compliant) state, e.g. 2 web instances with 2 volumes and 1 load balancer
- makes your infrastructure auditable
* you can keep your infrastructure change history in a version control system like git
- Ansible, Chef, Puppet, and SaltStack focus on automating the installation and configuration of software, keeping the machines in compliance, in a certain state
- Terraform can automate provisioning of the infrastructure itself, e.g. using the AWS, DigitalOcean, or Azure APIs. It works well with automation software like Ansible to install software after the infrastructure is provisioned.
Terraform Use Cases
a) Infrastructure as Code
- use infrastructure as code to safely and efficiently provision and manage infrastructure at any scale
b) Multi-Cloud Compliance & Management
- provision and manage public cloud, private infrastructure, and external services holistically while still preserving the uniqueness of each
c) Self-Service Infrastructure
- provide a library of approved infrastructure that developers can use to safely and efficiently provision infrastructure on demand
- there are 2 ways to provision software on your instances
- build your own custom AMI and bundle your software with the image -> Packer is a great tool to do this
- another way is to boot standardized AMIs, and then install the software you need on them
* using file uploads
* using remote-exec
* using automation tools like Chef, Puppet, Ansible
- Chef is integrated with Terraform; you can add Chef statements
- you can run the Puppet agent using remote-exec
- for Ansible, you can first run Terraform and output the IP addresses, then run ansible-playbook against those hosts
* this can be automated in a workflow script
* there are third-party initiatives integrating Ansible with Terraform
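The file-upload plus remote-exec approach can be sketched as follows (the AMI lookup, the script path, and the script name are illustrative assumptions, not from the course material):

```hcl
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"

  # upload a local install script to the instance (hypothetical script name)
  provisioner "file" {
    source      = "scripts/install.sh"
    destination = "/tmp/install.sh"
  }

  # then execute it over SSH
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/install.sh",
      "sudo /tmp/install.sh",
    ]
  }
}
```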
Other providers:
- some other examples of cloud providers supported by Terraform are:
* Google Cloud
* Azure
* Heroku
* DigitalOcean
- and on-premise/private cloud:
* VMware vCloud / vSphere / OpenStack
- it is not limited to cloud providers
* Datadog - monitoring
* GitHub - version control
* Mailgun - emailing (SMTP)
* DNSimple / DNSMadeEasy / UltraDNS - DNS hosting
There is still no good tool to import your non-Terraform-maintained infrastructure and create the definitions for you.
There is an external tool called terraforming that you can use for now, but it will take you quite some time to convert your current infrastructure to Terraform-managed infrastructure (https://github.com/dtan4/terraforming)
2) Installing
3) Commands:
terraform get // download and update the modules used by the project
terraform init // initialize the project (providers, backend, modules)
terraform validate // validate your Terraform syntax
terraform plan // show what changes the configuration would make
terraform plan -out out.terraform // run terraform plan and write the plan to a file
terraform apply // apply changes
terraform apply out.terraform // apply a previously saved plan file
terraform show // show the current state
terraform destroy // destroy the managed infrastructure
terraform destroy -lock=false // destroy without holding the state lock
terraform fmt // rewrite Terraform configuration files to a canonical format and style
terraform graph // create a visual representation of a configuration or execution plan
terraform output [options] [NAME] // print the output values; using NAME will only print a specific output
terraform import [options] ADDRESS ID // find the infrastructure resource identified by ID and import its state into terraform.tfstate under ADDRESS
terraform push // push changes to Atlas, HashiCorp's enterprise tool that can automatically run Terraform from a centralized server
terraform refresh // refresh the state against the real infrastructure; can identify differences between the state file and the remote state
terraform remote // configure remote state storage
terraform state // advanced state management, e.g. rename a resource with terraform state mv aws_instance.example aws_instance.production
terraform taint // manually mark a resource as tainted, meaning it will be destroyed and recreated at the next apply
terraform untaint // undo a taint
https://linuxacademy.com/howtoguides/posts/show/topic/13922-a-complete-aws-environment-with-terraform
4) Variables
- everything in one file is not great
- use variables to hide secrets
* you do not want the AWS credentials in your git repository
- use variables for elements that might change
* AMIs are different per region
- use variables to make it easier to reuse your Terraform files
To take advantage of that, you should divide your project into files:
//provider.tf
provider "aws" {
  access_key = "${var.AWS_ACCESS_KEY}"
  secret_key = "${var.AWS_SECRET_KEY}"
  region     = "${var.AWS_REGION}"
}
//vars.tf
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
variable "AWS_REGION" {
  default = "eu-west-1"
}
variable "AMIS" {
  type = "map"
  default = {
    us-east-1 = "ami-13be557e"
    us-west-2 = "ami-06b94666"
    eu-west-1 = "ami-0d729a60"
  }
}
//instance.tf
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
}
//terraform.tfvars
AWS_ACCESS_KEY = ""
AWS_SECRET_KEY = ""
AWS_REGION = ""
5) Provisioning on Linux/Windows
6) Output
- Terraform keeps attributes of all the resources you create, e.g. the aws_instance resource has the attribute public_ip
- those attributes can be queried and output
- this can be useful to display valuable information or to feed information to external software
- use "output" to display the public IP address of an AWS resource:
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
}
output "ip" {
  value = "${aws_instance.example.public_ip}"
}
- you can also use the attributes in a script:
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  provisioner "local-exec" {
    command = "echo ${aws_instance.example.private_ip} >> private_ips.txt"
  }
}
- useful, for instance, to start an automation script after infrastructure provisioning
- you can populate the IP addresses in an Ansible host file
- or another possibility: execute a script (with attributes as arguments) which will take care of mapping resource names to IP addresses
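Feeding the IPs to Ansible can be sketched with a splat-expression output; the "web" resources and the count of 2 are illustrative assumptions:

```hcl
# two web instances whose addresses a wrapper script can write
# into an Ansible inventory file (names are illustrative)
resource "aws_instance" "web" {
  count         = 2
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
}

# list of all public IPs, printed after terraform apply
output "web_ips" {
  value = ["${aws_instance.web.*.public_ip}"]
}
```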
7) State
- Terraform keeps the state of the infrastructure
- it stores it in a file called terraform.tfstate
- there is also a backup of the previous state in terraform.tfstate.backup
- when you execute terraform apply, a new terraform.tfstate and backup are written
- this is how Terraform keeps track of the remote state
* if the remote state changes and you run terraform apply again, Terraform will make changes to reach the correct state again
* e.g. if you terminate an instance that is managed by Terraform, it will be started again at the next apply
- you can keep terraform.tfstate in version control, e.g. git
- it gives you a history of your terraform.tfstate file (which is just a big JSON file)
- it allows you to collaborate with other team members - unfortunately you can get conflicts when 2 people work at the same time
- local state works well in the beginning, but when your project becomes bigger, you may want to store your state remotely
- the Terraform state can be saved remotely, using the backend functionality in Terraform
- the default is a local backend (the local Terraform state file)
- other backends include:
* S3 (with a locking mechanism using DynamoDB)
* Consul (with locking)
* Terraform Enterprise (the commercial solution)
- using the backend functionality has definite benefits:
* working in a team: it allows for collaboration; the remote state will always be available for the whole team
* the state file is not stored locally; possibly sensitive information is only stored in the remote state
* some backends enable remote operations: terraform apply will then run completely remotely. These are called the enhanced backends (https://www.terraform.io/docs/backends/types/index.html)
- there are 2 steps to configure a remote state:
* add the backend code to a .tf file
* run the initialization process (terraform init)
- to configure a Consul remote state, you can add a file backend.tf with the following contents:
terraform {
  backend "consul" {
    address = "demo.consul.io" # hostname of consul cluster
    path    = "terraform/myproject"
  }
}
- you can also store your state in S3:
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "terraform/myproject"
    region = "eu-west-1"
  }
}
- when using an S3 remote state, it is best to configure the AWS credentials
- using a remote store for the Terraform state will ensure that you always have the latest version of the state
- it avoids having to commit and push terraform.tfstate to version control
- Terraform remote stores do not always support locking
* the documentation always mentions if locking is available for a remote store
* S3 and Consul support it
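For S3, locking is enabled through a DynamoDB table; a minimal sketch, assuming the bucket and a DynamoDB table with a LockID primary key already exist (both names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "mybucket"            # placeholder bucket name
    key            = "terraform/myproject"
    region         = "eu-west-1"
    dynamodb_table = "terraform-locks"     # placeholder table used for state locking
  }
}
```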
- you can also specify a (read-only) remote store directly in a .tf file:
data "terraform_remote_state" "aws-state" {
  backend = "s3"
  config {
    bucket     = "terraform-state"
    key        = "terraform.tfstate"
    access_key = "${var.AWS_ACCESS_KEY}"
    secret_key = "${var.AWS_SECRET_KEY}"
    region     = "${var.AWS_REGION}"
  }
}
- this is only useful as a read-only feed from your remote state file
- it is a data source
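Outputs defined in the remote project can then be referenced through this data source; "db_address" here is a hypothetical output name defined in the remote project, not one from the course:

```hcl
# surface a value from the remote project's state
# ("db_address" is a hypothetical remote output)
output "remote-db-address" {
  value = "${data.terraform_remote_state.aws-state.db_address}"
}
```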
8) Data Sources
- for certain providers (like AWS), Terraform provides data sources
- data sources provide you with dynamic information
* a lot of data is made available by AWS in a structured format using their API
* Terraform also exposes this information using data sources
- examples:
* list of AMIs
* list of Availability Zones
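As a sketch, the Availability Zones data source can feed other resources; the subnet resource below is illustrative, not part of the demo project:

```hcl
# look up the AZs available in the configured region
data "aws_availability_zones" "available" {}

# place an illustrative subnet in the first available AZ
resource "aws_subnet" "example" {
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "10.0.10.0/24"
  availability_zone = "${element(data.aws_availability_zones.available.names, 0)}"
}
```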
- another great example is the data source that gives you all IP addresses in use by AWS
- this is great if you want to filter traffic based on an AWS region, e.g. allow all traffic from Amazon instances in Europe
- filtering traffic in AWS can be done using security groups
* incoming and outgoing traffic can be filtered by protocol, IP range, and port
* similar to iptables (Linux) or a firewall appliance
data "aws_ip_ranges" "european_ec2" {
  regions  = ["eu-west-1", "eu-central-1"]
  services = ["ec2"]
}
resource "aws_security_group" "from_europe" {
  name = "from_europe"
  ingress {
    from_port   = "443"
    to_port     = "443"
    protocol    = "tcp"
    cidr_blocks = ["${data.aws_ip_ranges.european_ec2.cidr_blocks}"]
  }
  tags {
    CreateDate = "${data.aws_ip_ranges.european_ec2.create_date}"
    SyncToken  = "${data.aws_ip_ranges.european_ec2.sync_token}"
  }
}
9) Template Provider
- the template provider can help create customized configuration files
- you can build templates based on variables from Terraform resource attributes (e.g. a public IP address)
- the result is a string that can be used as a variable in Terraform
* the string contains the rendered template
* e.g. a configuration file
- can be used to create generic templates or cloud-init configs
- in AWS, you can pass commands that need to be executed when the instance starts for the first time
- in AWS this is called "user-data"
- if you want to pass user-data that depends on other information in Terraform (e.g. IP addresses), you can use the template provider
- first create a template file:
#!/bin/bash
echo "database-ip = ${myip}" >> /etc/myapp.config
- then you create a template_file data source that will read the template file and replace ${myip} with the IP address of an AWS instance created by Terraform:
data "template_file" "my-template" {
  template = "${file("templates/init.tpl")}"
  vars {
    myip = "${aws_instance.database1.private_ip}"
  }
}
- then you can use the my-template data source when creating a new instance:
#create a web server
resource "aws_instance" "web" {
  # ...
  user_data = "${data.template_file.my-template.rendered}"
}
- when Terraform runs, it will see that it first needs to spin up the database1 instance, then generate the template, and only then spin up the web instance
- the web instance will have the template injected in the user_data, and when it launches, the user-data will create a file /etc/myapp.config with the IP address of the database
10) Modules
-you can use modules to make your terraform more organized
-use third party modules - modules from github
-Reuse parts of your code e.g. to set up network in AWs - the Virtual Private Network (VPC)
-Use a module from git
-you can either use external modules, or write modules yourself
-External modules can help you setting up infrastracture without much effort
*when modules are managed by the community, you will get updates and fixes for free
*https://github.com/terraform-aws-modules/ lists terraform modules for AWS, maintained by the community
-https://github/terraform-aws-modules/terraform-aws-vpc - a module to create VPC resources
-https://github/terraform-aws-modules/terraform-aws-alb - a module to create an Application LoadBalancer
-https://github.com/terraform-aws-modules/terraform-aws-eks - a module to create a Kubernetes cluster
-Writing modules yourself gives you full flexibility
-If you maintian the module in a git repository, you can even reuse the module over multiple projects
-In the next demo I will show you how to build a module for ECS with an Application LoadBalancer (ALB)
*I will first give you a high level overview, then I will do a deep dive of the module
module "module-example" {
  source = "github.com/wardviaene/terraform-module-example"
}
- use a module from a local folder:
module "module-example" {
  source = "./module-example"
}
- pass arguments to the module:
module "module-example" {
  source       = "./module-example"
  region       = "us-west-2"
  ip-range     = "10.0.0.0/8"
  cluster-size = "3"
}
- inside the module folder, you again just have Terraform files:
#module-example/vars.tf
variable "region" {} #the input parameters
variable "ip-range" {}
variable "cluster-size" {}
#module-example/cluster.tf
resource "aws_instance" "instance-1" {...}
resource "aws_instance" "instance-2" {...}
resource "aws_instance" "instance-3" {...}
#module-example/output.tf
output "aws-cluster" {
  value = "${aws_instance.instance-1.public_ip}, ${aws_instance.instance-2.public_ip}, ${aws_instance.instance-3.public_ip}"
}
- use the output from the module in the main part of your code:
output "some-output" {
  value = "${module.module-example.aws-cluster}"
}
//main.tf
#configure the aws provider
provider "aws" {
  region = "us-east-1"
}
#create an ec2 instance
resource "aws_instance" "example" {
  ami           = "ami-08111162"
  instance_type = "t2.micro"
  tags {
    Name = "${var.instance_name}"
  }
}
variable "instance_name" {}
//module file
module "foo" {
  source        = "../terraform_example"
  instance_name = "foo"
}
Terraform Module Registry: https://registry.terraform.io/
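Modules from the registry can be referenced by their registry path instead of a git URL; a minimal sketch, where the version pin and the name/cidr values are illustrative:

```hcl
# use the community VPC module from the Terraform Module Registry
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "1.0.0" # example pin; check the registry for current releases

  name = "my-vpc"      # illustrative values
  cidr = "10.0.0.0/16"
}
```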
11) Other providers
12) Terraform on AWS - VPC
- on Amazon AWS, you have a default VPC (Virtual Private Cloud) created for you by AWS to launch instances in
- up until now we used this default VPC
- a VPC isolates instances on the network level
- best practice is to always launch your instances in a VPC
* the default VPC
* or one you create yourself (managed by Terraform)
- there is also EC2-Classic, which is basically one big network where all AWS customers could launch their instances
- for small to medium setups, one VPC (per region) will be suitable for your needs
- an instance launched in one VPC can never communicate with an instance in another VPC using their private IP addresses
* they could still communicate, but using their public IPs (not recommended)
* you could also link 2 VPCs, which is called peering
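Peering can be sketched with the aws_vpc_peering_connection resource; the second VPC ("other") is a hypothetical resource assumed to exist in the same account:

```hcl
# peer two VPCs managed in the same project (illustrative names)
resource "aws_vpc_peering_connection" "main-to-other" {
  vpc_id      = "${aws_vpc.main.id}"  # requester VPC
  peer_vpc_id = "${aws_vpc.other.id}" # accepter VPC (hypothetical second VPC)
  auto_accept = true                  # both VPCs are in the same account
}
```

Routes and security groups still need to allow the cross-VPC traffic; peering alone only creates the link.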
a) creating the VPC
- on Amazon AWS, you start by creating your own Virtual Private Cloud to deploy your instances (servers) / databases in
- this VPC uses the 10.0.0.0/16 addressing space, allowing you to use the IP addresses that start with "10.0.", like this: 10.0.x.x
The private address ranges are:
10.0.0.0/8 (10.0.0.0 -> 10.255.255.255)
172.16.0.0/12 (172.16.0.0 -> 172.31.255.255)
192.168.0.0/16 (192.168.0.0 -> 192.168.255.255)
Range        Network mask     Total addresses  Description
10.0.0.0/8   255.0.0.0        16,777,216       full range 10.x.x.x
10.0.0.0/16  255.255.0.0      65,536           full range 10.0.x.x
10.1.0.0/16  255.255.0.0      65,536           full range 10.1.x.x
10.0.0.0/24  255.255.255.0    256              full range 10.0.0.0-10.0.0.255
10.0.1.0/24  255.255.255.0    256              full range 10.0.1.0-10.0.1.255
10.0.0.5/32  255.255.255.255  1                just 10.0.0.5
Private Subnets
- these subnets / IP addresses cannot be used on the internet
- they are only to be used privately within a VPC, in a home network, or in an office network
- all the public subnets are connected to an Internet Gateway; instances in them will also have a public IP address, allowing them to be reachable from the internet
- instances launched in the private subnets do not get a public IP address, so they will not be reachable from the internet; you can still have a mechanism called a NAT gateway that allows the private instances to initiate connections to the outside, but not from outside to inside
- instances in the public subnets can reach instances in the private subnets, because they are all in the same VPC - provided the security groups allow traffic from one to the other
https://github.com/wardviaene/terraform-course/tree/0ade7786cd711ac5112f397b83ac58d26ed34f4c/demo-7
vpc.tf
# Internet VPC
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default"
  enable_dns_support   = "true"
  enable_dns_hostnames = "true"
  enable_classiclink   = "false"
  tags {
    Name = "main"
  }
}
# Subnets
resource "aws_subnet" "main-public-1" {
  vpc_id                  = "${aws_vpc.main.id}"
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "eu-west-1a"
  tags {
    Name = "main-public-1"
  }
}
resource "aws_subnet" "main-public-2" {
  vpc_id                  = "${aws_vpc.main.id}"
  cidr_block              = "10.0.2.0/24"
  map_public_ip_on_launch = "true" # the difference between public and private subnets: we map a public IP on launch
  availability_zone       = "eu-west-1b"
  tags {
    Name = "main-public-2"
  }
}
resource "aws_subnet" "main-public-3" {
  vpc_id                  = "${aws_vpc.main.id}"
  cidr_block              = "10.0.3.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "eu-west-1c"
  tags {
    Name = "main-public-3"
  }
}
resource "aws_subnet" "main-private-1" {
  vpc_id                  = "${aws_vpc.main.id}"
  cidr_block              = "10.0.4.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "eu-west-1a"
  tags {
    Name = "main-private-1"
  }
}
resource "aws_subnet" "main-private-2" {
  vpc_id                  = "${aws_vpc.main.id}"
  cidr_block              = "10.0.5.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "eu-west-1b"
  tags {
    Name = "main-private-2"
  }
}
resource "aws_subnet" "main-private-3" {
  vpc_id                  = "${aws_vpc.main.id}"
  cidr_block              = "10.0.6.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "eu-west-1c"
  tags {
    Name = "main-private-3"
  }
}
# Internet GW
resource "aws_internet_gateway" "main-gw" {
  vpc_id = "${aws_vpc.main.id}"
  tags {
    Name = "main"
  }
}
# route tables
resource "aws_route_table" "main-public" {
  vpc_id = "${aws_vpc.main.id}"
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.main-gw.id}"
  }
  tags {
    Name = "main-public-1"
  }
}
# route associations public
resource "aws_route_table_association" "main-public-1-a" {
  subnet_id      = "${aws_subnet.main-public-1.id}"
  route_table_id = "${aws_route_table.main-public.id}"
}
resource "aws_route_table_association" "main-public-2-a" {
  subnet_id      = "${aws_subnet.main-public-2.id}"
  route_table_id = "${aws_route_table.main-public.id}"
}
resource "aws_route_table_association" "main-public-3-a" {
  subnet_id      = "${aws_subnet.main-public-3.id}"
  route_table_id = "${aws_route_table.main-public.id}"
}
nat.tf
# nat gw
resource "aws_eip" "nat" {
  vpc = true
}
resource "aws_nat_gateway" "nat-gw" {
  allocation_id = "${aws_eip.nat.id}"
  subnet_id     = "${aws_subnet.main-public-1.id}"
  depends_on    = ["aws_internet_gateway.main-gw"]
}
# VPC setup for NAT
resource "aws_route_table" "main-private" {
  vpc_id = "${aws_vpc.main.id}"
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = "${aws_nat_gateway.nat-gw.id}"
  }
  tags {
    Name = "main-private-1"
  }
}
# route associations private
resource "aws_route_table_association" "main-private-1-a" {
  subnet_id      = "${aws_subnet.main-private-1.id}"
  route_table_id = "${aws_route_table.main-private.id}"
}
resource "aws_route_table_association" "main-private-2-a" {
  subnet_id      = "${aws_subnet.main-private-2.id}"
  route_table_id = "${aws_route_table.main-private.id}"
}
resource "aws_route_table_association" "main-private-3-a" {
  subnet_id      = "${aws_subnet.main-private-3.id}"
  route_table_id = "${aws_route_table.main-private.id}"
}
13) Terraform on AWS - EC2 instances and EBS
- spinning up an instance on AWS:
* open an AWS account
* create an IAM admin user
* create a Terraform file to spin up a t2.micro instance
* run terraform apply
variable "aws_access_key" {}
variable "aws_secret_key" {}
provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region     = "us-east-1"
}
Search for an AMI in a particular region -> http://cloud-images.ubuntu.com/locator/
File uploads
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  provisioner "file" {
    source      = "app.conf"
    destination = "/etc/myapp.conf"
  }
}
- file uploads are an easy way to upload a file or a script
- can be used in conjunction with remote-exec to execute a script
- the provisioner may use SSH (Linux hosts) or WinRM (Windows hosts)
- to override the SSH defaults, you can use "connection":
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  provisioner "file" {
    source      = "app.conf"
    destination = "/etc/myapp.conf"
    connection {
      user     = "${var.instance_username}"
      password = "${var.instance_password}"
    }
  }
}
- when spinning up instances on AWS, ec2-user is the default user for Amazon Linux and ubuntu for Ubuntu Linux
- typically on AWS, you will use SSH keypairs:
resource "aws_key_pair" "mykey" {
  key_name   = "mykey"
  public_key = "ssh-rsa my-public-key"
}
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  key_name      = "${aws_key_pair.mykey.key_name}"
  provisioner "file" {
    source      = "app.conf"
    destination = "/etc/myapp.conf"
    connection {
      user        = "${var.instance_username}"
      private_key = "${file("${var.path_to_private_key}")}"
    }
  }
}
- this solution needs a security group with port 22 open
aws ec2 get-password-data --instance-id i-03... --priv-launch-key mykey # retrieves the password for a Windows instance
Adding an EC2 instance to the VPC with a security group, using a keypair that will be uploaded by Terraform:
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  # the public SSH key
  key_name = "${aws_key_pair.mykey.key_name}"
  # the VPC subnet
  subnet_id = "${aws_subnet.main-public-1.id}"
  # the security group
  vpc_security_group_ids = ["${aws_security_group.allow-ssh.id}"]
}
14) Terraform on AWS - Security Groups
- we need a new security group for this EC2 instance
- a security group is just like a firewall, managed by AWS
- you specify ingress (incoming) and egress (outgoing) traffic rules
- if you only want SSH access (port 22), then you could create a security group that:
* allows ingress on port 22 from the IP address range 0.0.0.0/0 (all IPs) - it is best practice to only allow your work/home/office IP
* allows all outgoing traffic from the instance to 0.0.0.0/0 (all IPs, so everywhere)
resource "aws_security_group" "allow-ssh" {
  vpc_id      = "${aws_vpc.main.id}"
  name        = "allow-ssh"
  description = "security group that allows ssh and all egress traffic"
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags {
    Name = "allow-ssh"
  }
}
- to be able to log in, the next step is to make sure AWS installs our public key on the instance
- our EC2 instance already refers to aws_key_pair.mykeypair.key_name; you just need to declare it in Terraform
keypairs.tf
- keys/mykeypair.pub will be uploaded to AWS and will allow an instance to be launched with this public key installed on it
- you never upload your private key; you use your private key to log in to the instance
resource "aws_key_pair" "mykeypair" {
  key_name   = "mykeypair"
  public_key = "${file("${var.PATH_TO_PUBLIC_KEY}")}"
}
EBS
- the t2.micro instance with this particular AMI automatically gets 8 GB of storage (Elastic Block Storage)
- some instance types have local storage on the instance itself
* this is called ephemeral storage
* this type of storage is always lost when the instance terminates
- the 8 GB EBS root volume that comes with the instance is also set to be automatically removed when the instance is terminated
* you could still instruct AWS not to do so, but that would be counter-intuitive (an anti-pattern)
- in most cases the 8 GB for the OS (root block device) suffices
- extra EBS storage volumes:
* extra volumes can be used for log files and any real data that is put on the instance
* that data will be persisted until you instruct AWS to remove it
- EBS storage can be added using a Terraform resource and then attached to our instance
instance.tf
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  # the VPC subnet
  subnet_id = "${aws_subnet.main-public-1.id}"
  # the security group
  vpc_security_group_ids = ["${aws_security_group.allow-ssh.id}"]
  # the public SSH key
  key_name = "${aws_key_pair.mykeypair.key_name}"
}
resource "aws_ebs_volume" "ebs-volume-1" {
  availability_zone = "eu-west-1a"
  size              = 20
  type              = "gp2" # General Purpose storage, can also be standard, io1, or st1
  tags {
    Name = "extra volume data"
  }
}
resource "aws_volume_attachment" "ebs-volume-1-attachment" {
  device_name = "/dev/xvdh"
  volume_id   = "${aws_ebs_volume.ebs-volume-1.id}"
  instance_id = "${aws_instance.example.id}"
}
- to increase the size or change the type of the root volume, you can use root_block_device within the aws_instance resource:
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  # the VPC subnet
  subnet_id = "${aws_subnet.main-public-1.id}"
  # the security group
  vpc_security_group_ids = ["${aws_security_group.allow-ssh.id}"]
  # the public SSH key
  key_name = "${aws_key_pair.mykeypair.key_name}"
  root_block_device {
    volume_size           = 16
    volume_type           = "gp2"
    delete_on_termination = true
  }
}
15) Terraform on AWS - Userdata
- userdata in AWS can be used to do any customization at launch:
* install extra software
* prepare the instance to join a cluster, e.g. a Consul cluster or an ECS cluster (Docker orchestration)
* execute commands / scripts
* mount volumes
- userdata is only executed at the creation of the instance, not when the instance reboots
- Terraform allows you to add userdata to the aws_instance resource
* just as a string (for simple commands)
* using templates (for more complex instructions)
- userdata as a string - installing an OpenVPN Access Server at boot time:
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  # the public SSH key
  key_name = "${aws_key_pair.mykey.key_name}"
  # the VPC subnet
  subnet_id = "${aws_subnet.main-public-1.id}"
  # the security group
  vpc_security_group_ids = ["${aws_security_group.allow-ssh.id}"]
  # userdata
  user_data = "#!/bin/bash\nwget http://swupdate.openvpn.org/as/openvpn-as-2.1.2-Ubuntu14.amd_64.deb\ndpkg -i openvpn-as-2.1.2-Ubuntu14.amd_64.deb"
}
- a better approach is to use the Terraform template system:
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  # the public SSH key
  key_name = "${aws_key_pair.mykey.key_name}"
  # the VPC subnet
  subnet_id = "${aws_subnet.main-public-1.id}"
  # the security group
  vpc_security_group_ids = ["${aws_security_group.allow-ssh.id}"]
  # userdata
  user_data = "${data.template_cloudinit_config.cloudinit-example.rendered}"
}
- the cloudinit config referenced in the user_data is built from a template:
cloudinit.tf
data "template_file" "init-script" {
  template = "${file("scripts/init.cfg")}"
  vars {
    REGION = "${var.AWS_REGION}"
  }
}
data "template_cloudinit_config" "cloudinit-example" {
  gzip          = false
  base64_encode = false
  part {
    filename     = "init.cfg"
    content_type = "text/cloud-config"
    content      = "${data.template_file.init-script.rendered}"
  }
}
scripts/init.cfg
#cloud-config
repo_update: true
repo_upgrade: all
packages:
  - lvm2
output:
  all: '| tee -a /var/log/cloud-init-output.log'
16) Terraform on AWS - Static IPs and DNS
- private IP addresses are auto-assigned to EC2 instances
- every subnet within the VPC has its own range (e.g. 10.0.1.0 - 10.0.1.255)
- by specifying the private IP, you can make sure the EC2 instance always uses the same IP address:
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  subnet_id     = "${aws_subnet.main-public-1.id}"
  private_ip    = "10.0.1.4" # within the range of subnet main-public-1
}
EIP
- to use a public IP address, you can use EIPs (Elastic IP addresses)
- this is a public, static IP address that you can attach to your instance:
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  subnet_id     = "${aws_subnet.main-public-1.id}"
  private_ip    = "10.0.1.4" # within the range of subnet main-public-1
}
resource "aws_eip" "example-eip" {
  instance = "${aws_instance.example.id}"
  vpc      = true
}
- tip: you can use the aws_eip.example-eip.public_ip attribute with an output to show the IP address after terraform apply
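The tip above comes down to a minimal output block:

```hcl
# print the Elastic IP after terraform apply
output "eip" {
  value = "${aws_eip.example-eip.public_ip}"
}
```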
Route53
- typically, you will not use IP addresses but hostnames
- this is where Route53 comes in
- you can host a domain name on AWS using Route53
- you first need to register a domain name using AWS or any accredited registrar
- you can then create a zone in Route53 (e.g. example.com) and add DNS records (e.g. server1.example.com)
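A sketch of a zone with an A record; example.com is a placeholder domain, and the EIP reference assumes the aws_eip resource from the section above:

```hcl
# hosted zone for the domain (placeholder name)
resource "aws_route53_zone" "example" {
  name = "example.com"
}

# A record pointing server1.example.com at the Elastic IP
resource "aws_route53_record" "server1" {
  zone_id = "${aws_route53_zone.example.zone_id}"
  name    = "server1.example.com"
  type    = "A"
  ttl     = "300"
  records = ["${aws_eip.example-eip.public_ip}"]
}
```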
17) Terraform on AWS - RDS
- RDS stands for Relational Database Service
- it is a managed database solution:
* you can easily set up replication (high availability)
* automated snapshots (for backups)
* automated security updates
* easy instance replacement (for vertical scaling)
- supported databases are:
* MySQL
* MariaDB
* PostgreSQL
* Microsoft SQL Server
* Oracle
- steps to create an RDS instance:
* create a subnet group - specifies which subnets the database will be in (e.g. eu-west-1a and eu-west-1b)
* create a parameter group - lets you specify parameters to change settings in the database
* create a security group that allows incoming traffic to the RDS instance
* create the RDS instance(s) itself
Parameter group
resource "aws_db_parameter_group" "mariadb-parameters" {
name = "mariadb-parameters"
family = "mariadb10.1"
description = "MariaDB parameter group"
parameter {
name = "max_allowed_packet"
value = "16777216"
}
}
Subnet
resource "aws_db_subnet_group" "mariadb-subnet" {
name = "mariadb-subnet"
description = "RDS subnet group"
subnet_ids = ["${aws_subnet.main-private-1.id}", "${aws_subnet.main-private-2.id}"]
}
Security group
resource "aws_security_group" "allow-mariadb" {
vpc_id = "${aws_vpc.main.id}"
name = "allow-mariadb"
description = "allow-mariadb"
ingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
security_groups = ["${aws_security_group.example.id}"] # allowing access from our example instance
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
self = true
}
tags {
Name = "allow-mariadb"
}
}
RDS resource
resource "aws_db_instance" "mariadb" {
allocated_storage = 100 # 100GB of storage gives us more IOPS than a lower number
engine = "mariadb"
engine_version = "10.1.14"
instance_class = "db.t2.small" # use db.t2.micro if you want to use the free tier
identifier = "mariadb"
name = "mariadb"
username = "root" # username
password = "a1cksd"
db_subnet_group_name = "${aws_db_subnet_group.mariadb-subnet.name}"
parameter_group_name = "mariadb-parameters"
multi_az = "false" # set to true for high availability: 2 instances synchronized with each other
vpc_security_group_ids = ["${aws_security_group.allow-mariadb.id}"]
storage_type = "gp2"
backup_retention_period = 30 # how long you are going to keep your backups
availability_zone = "${aws_subnet.main-private-1.availability_zone}" # preferred AZ
tags {
Name = "mariadb-instance"
}
}
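To find out where to connect, the RDS endpoint can be exposed with an output - a sketch using the aws_db_instance.mariadb resource above:

```hcl
output "rds-endpoint" {
value = "${aws_db_instance.mariadb.endpoint}"
}
```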
18) Terraform on AWS - IAM Users and Groups
- IAM is AWS Identity & Access Management
-It is a service that helps you control access to your AWS resources
-In AWS you can create:
*Groups
*Users
*Roles
-Users can belong to groups - for instance an "Administrators" group can give admin privileges to its users
-Users can authenticate:
*using a login/password - optionally with a token: Multi-Factor Authentication (MFA) using Google Authenticator compatible software
* using an access key and secret key (the API keys)
- To create an IAM administrators group in AWS, you can create the group and attach the AWS-managed AdministratorAccess policy to it
resource "aws_iam_group" "administrators"{
name = "administrators"
}
resource "aws_iam_policy_attachment" "administrators-attach"{
name = "administrators-attach"
groups = ["${aws_iam_group.administrators.name}"]
policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
}
-You can also create your own custom policy. This one does the same (the policy document needs its own aws_iam_policy resource, which is then attached by ARN):
resource "aws_iam_group" "administrators" {
name = "administrators"
}
resource "aws_iam_policy" "admin-policy" {
name = "admin-policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_policy_attachment" "administrators-attach" {
name = "administrators-attach"
groups = ["${aws_iam_group.administrators.name}"]
policy_arn = "${aws_iam_policy.admin-policy.arn}"
}
- create users and attach them to a group
resource "aws_iam_user" "admin1"{
name = "admin1"
}
resource "aws_iam_user" "admin2"{
name = "admin2"
}
resource "aws_iam_group_membership" "administrators-users" {
name = "administrators-users"
users = [
"${aws_iam_user.admin1.name}",
"${aws_iam_user.admin2.name}",
]
group = "${aws_iam_group.administrators.name}"
}
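To give such a user API access, you could also create an access key pair - a sketch; note that outputting the secret like this stores it in the state file, so treat it carefully:

```hcl
resource "aws_iam_access_key" "admin1-key" {
user = "${aws_iam_user.admin1.name}"
}
output "admin1-access-key" {
value = "${aws_iam_access_key.admin1-key.id}"
}
output "admin1-secret-key" {
value = "${aws_iam_access_key.admin1-key.secret}"
}
```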
19) Terraform on AWS - IAM Roles
- Roles can give users/services (temporary) access that they normally would not have
- Roles can, for instance, be attached to EC2 instances
*from that instance, a user or service can obtain access credentials
*using those access credentials, the user or service can assume the role, which gives them permission to do something
-example:
*You create a role "mybucket-access" and assign it to an EC2 instance at boot time. You give the role permissions to read and write items in "mybucket". When you log in, you can now assume this mybucket-access role without using your own credentials - you will be given temporary access credentials which just look like normal user credentials. You can now read and write items in "mybucket"
-Instead of a user using the aws-cli, a service can also assume a role
-The service needs to implement the AWS SDK
-When trying to access the S3 bucket, an API call will occur
-If roles are configured for this EC2 instance, the AWS API will give temporary access keys which can be used to assume this role
-After that, the SDK can be used just like when you would have normal credentials
-This really happens in the background and you do not see much of it
-The temporary access credentials also need to be renewed; they are only valid for a predefined amount of time - this is also something the AWS SDK will take care of
resource "aws_iam_role" "s3-mybucket-role" {
name = "s3-mybucket-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_instance_profile" "s3-mybucket-role-instanceprofile" {
name = "s3-mybucket-role"
roles = ["${aws_iam_role.s3-mybucket-role.name}"]
}
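The instance profile can then be attached to an EC2 instance at launch - a minimal sketch (the AMI lookup and instance type are placeholders reused from earlier examples):

```hcl
resource "aws_instance" "app-instance" {
ami = "${lookup(var.AMIS, var.AWS_REGION)}"
instance_type = "t2.micro"
iam_instance_profile = "${aws_iam_instance_profile.s3-mybucket-role-instanceprofile.name}"
}
```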
- creating the bucket is just another resource:
resource "aws_s3_bucket" "b" {
bucket = "mybucket-c29df1"
acl = "private"
tags {
Name = "mybucket-c29df1"
}
}
-adding some permissions using a policy document (the second ARN, with /*, covers the objects in the bucket):
resource "aws_iam_role_policy" "s3-mybucket-role-policy" {
name = "s3-mybucket-role-policy"
role = "${aws_iam_role.s3-mybucket-role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::mybucket-c29df1",
"arn:aws:s3:::mybucket-c29df1/*"
]
}
]
}
EOF
}
20) Terraform on AWS - Autoscaling
- In AWS, autoscaling groups can be created to automatically add/remove instances when certain thresholds are reached - e.g. your application layer can be scaled out when you have more visitors
-To set up autoscaling in AWS you need to set up at least 2 resources:
*An AWS launch configuration
**specifies the properties of the instance to be launched (AMI ID, security group, etc.)
*An autoscaling group
** specifies the scaling properties (min instances, max instances, health checks)
-Once the autoscaling group is set up, you can create autoscaling policies
* A policy is triggered based on a threshold (CloudWatch Alarm)
* An adjustment will be executed
** e.g. if the average CPU utilization is more than 20%, then scale up by +1 instance
-First, the launch configuration and the autoscaling group need to be created:
resource "aws_launch_configuration" "example-launchconfig" {
name_prefix = "example-launchconfig"
image_id = "${lookup(var.AMIS, var.AWS_REGION)}"
instance_type = "t2.micro"
key_name = "${aws_key_pair.mykeypair.key_name}"
security_groups = ["${aws_security_group.allow-ssh.id}"]
}
resource "aws_autoscaling_group" "example-autoscaling" {
name = "example-autoscaling"
vpc_zone_identifier = ["${aws_subnet.main-public-1.id}", "${aws_subnet.main-public-2.id}"]
launch_configuration = "${aws_launch_configuration.example-launchconfig.name}"
min_size = 1
max_size = 2
health_check_grace_period = 300
health_check_type = "EC2"
force_delete = true
tag {
key = "Name"
value = "ec2.instance"
propagate_at_launch = true
}
}
- to create a policy, you need a aws_autoscaling_policy:
resource "aws_autoscaling_policy" "example-cpu-policy" {
name = "example-cpu-policy"
autoscaling_group_name = "${aws_autoscaling_group.example-autoscaling.name}"
adjustment_type = "ChangeInCapacity"
scaling_adjustment = "1"
cooldown = "300"
policy_type = "SimpleScaling"
}
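To scale back in, a symmetric scale-down policy can be defined the same way - a sketch with a negative scaling_adjustment; it would be triggered by a second, low-CPU CloudWatch alarm analogous to the one in this section:

```hcl
resource "aws_autoscaling_policy" "example-cpu-policy-scaledown" {
name = "example-cpu-policy-scaledown"
autoscaling_group_name = "${aws_autoscaling_group.example-autoscaling.name}"
adjustment_type = "ChangeInCapacity"
scaling_adjustment = "-1"
cooldown = "300"
policy_type = "SimpleScaling"
}
```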
-then, you can create a CloudWatch alarm which will trigger the autoscaling policy
resource "aws_cloudwatch_metric_alarm" "example-cpu-alarm" {
alarm_name = "example-cpu-alarm"
alarm_description = "example-cpu-alarm"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = "2"
metric_name = "CPUUtilization"
namespace = "AWS/EC2"
period = "120"
statistic = "Average"
threshold = "30"
dimensions = {
"AutoScalingGroupName" = "${aws_autoscaling_group.example-autoscaling.name}"
}
actions_enabled = true
alarm_actions = ["${aws_autoscaling_policy.example-cpu-policy.arn}"]
}
- If you want to receive an alert (e.g. an email) when autoscaling is invoked, you need to create an SNS topic (Simple Notification Service):
resource "aws_sns_topic" "example-cpu-sns" {
name = "sg-cpu-sns"
display_name = "example ASG SNS topic"
} # email subscription is currently unsupported in terraform and can be done using the AWS Web Console.
-That SNS topic needs to be attached to the autoscaling group:
resource "aws_autoscaling_notification" "example-notify" {
group_names = ["${aws_autoscaling_group.example-autoscaling.name}"]
topic_arn = "${aws_sns_topic.example-cpu-sns.arn}"
notifications = [
"autoscaling.EC2_INSTANCE_LAUNCH",
"autoscaling.EC2_INSTANCE_TERMINATE",
"autoscaling.EC2_INSTANCE_LAUNCH_ERROR"
]
}
21) Terraform on AWS - Load Balancers
- The AWS Elastic Load Balancer (ELB) automatically distributes incoming traffic across multiple EC2 instances
* the ELB itself scales when you receive more traffic
* the ELB will healthcheck your instances
*if an instance fails its healthcheck, no traffic will be sent to it
*if a new instances is added by the autoscaling group, the ELB will automatically add the new instances and will start healthchecking it
-The ELB can also be used as SSL terminator
*It can offload the encryption away from the EC2 instances
*AWS can even manage the SSL certificates for you
-ELBs can be spread over multiple Availability Zones for higher fault tolerance
-You will in general achieve higher levels of fault tolerance with an ELB routing the traffic for your application
-ELB is comparable to a nginx/haproxy, but then provided as a service
-AWS provides 2 different types of load balancers:
*The Classic Load Balancer (ELB) - routes traffic based on network information - e.g. forwards all traffic from port 80 (HTTP) to port 8080 (application)
*The Application Load Balancer (ALB) - routes traffic based on application-level information - e.g. can route /api and /website to different EC2 instances
a) Classic Load Balancer
resource "aws_elb" "my-elb" {
name = "my-elb"
subnets = ["${aws_subnet.main-public-1.id}", "${aws_subnet.main-public-2.id}"]
security_groups = ["${aws_security_group.elb-securitygroup.id}"]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTP:80/"
interval = 30
}
instances = ["${aws_instance.example-instance.id}"] # optional, you can also attach an ELB to an autoscaling group
cross_zone_load_balancing = true
connection_draining = true
connection_draining_timeout = 400
tags {
Name = "my-elb"
}
}
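Users reach the ELB via its DNS name rather than an IP; you can expose it with an output - a sketch using the aws_elb.my-elb resource above:

```hcl
output "elb-dns-name" {
value = "${aws_elb.my-elb.dns_name}"
}
```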
-Attach the ELB to an autoscaling group
resource "aws_launch_configuration" "example-launchconfig" {
name_prefix = "example-launchconfig"
image_id = "${lookup(var.AMIS, var.AWS_REGION)}"
instance_type = "t2.micro"
key_name = "${aws_key_pair.mykeypair.key_name}"
security_groups = ["${aws_security_group.allow-ssh.id}"]
}
resource "aws_autoscaling_group" "example-autoscaling" {
name = "example-autoscaling"
vpc_zone_identifier = ["${aws_subnet.main-public-1.id}", "${aws_subnet.main-public-2.id}"]
launch_configuration = "${aws_launch_configuration.example-launchconfig.name}"
min_size = 1
max_size = 2
health_check_grace_period = 300
health_check_type = "ELB"
force_delete = true
load_balancers = ["${aws_elb.my-elb.name}"]
tag {
key = "Name"
value = "ec2 instance"
propagate_at_launch = true
}
}
b) Application Load Balancer
- for an application load balancer, you first define the general settings
resource "aws_alb" "my-alb" {
name = "my-alb"
subnets = ["${aws_subnet.main-public-1.id}", "${aws_subnet.main-public-2.id}"]
security_groups = ["${aws_security_group.elb-securitygroup.id}"]
tags {
Name = "my-alb"
}
}
-specify a target group
resource "aws_alb_target_group" "frontend-target-group" {
name = "alb-target-group"
port = 80
protocol = "HTTP"
vpc_id = "${aws_vpc.main.id}"
}
-attach instances to target
resource "aws_alb_target_group_attachment" "frontend-attachment-1" {
target_group_arn = "${aws_alb_target_group.frontend-target-group.arn}"
target_id = "${aws_instance.example-instance.id}"
port = 80
}
resource "aws_alb_target_group_attachment" "frontend-attachment-2" {
[...]
}
- specify the listeners separately
resource "aws_alb_listener" "frontend-listeners" {
load_balancer_arn = "${aws_alb.my-alb.arn}"
port = "80"
default_action {
target_group_arn = "${aws_alb_target_group.frontend-target-group.arn}"
type = "forward"
}
}
-the default action always matches if you have not specified any other rules
-with ALBs, you can specify multiple rules to send traffic to another target:
resource "aws_alb_listener_rule" "alb-rule" {
listener_arn = "${aws_alb_listener.frontend-listeners.arn}"
priority = 100
action {
type = "forward"
target_group_arn = "${aws_alb_target_group.new-target-group.arn}"
}
condition {
field = "path-pattern"
values = ["/static/*"]
}
}
resource "aws_alb_target_group" "new-target-group" {
[...]
}
resource "aws_alb_target_group_attachment" "new-target-group-attachment" {
[...]
target_id = "${aws_instance.other-instances-than-the-first-one.id}"
[...]
}
22) Terraform - Elastic Beanstalk
- Elastic Beanstalk is AWS's Platform as a Service (PaaS) solution
-It is a platform where you launch your app without having to maintain the underlying infrastructure
*you are still responsible for the EC2 instances, but AWS will provide you with updates you can apply
** updates can be applied manually or automatically
** the EC2 instances run Amazon Linux
-Elastic Beanstalk can handle application scaling for you
* underneath it uses a Load Balancer and an Autoscaling group to achieve this
*You can schedule scaling events or enable autoscaling based on a metric
-It is similar to Heroku (another PaaS solution)
-You can have an application running in just a few clicks using the AWS Console
*or using the elasticbeanstalk resources in Terraform
-the supported Platforms are:
*PHP
*Java SE, Java with Tomcat
*.NET on Windows with IIS
*Node.js
*Python
*Ruby
*Go
*Docker (single container + multi-container, using ECS)
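A minimal sketch of the Terraform resources involved - the solution stack name is an example and changes over time, so check the currently available stacks before using it:

```hcl
resource "aws_elastic_beanstalk_application" "myapp" {
name = "myapp"
description = "example application"
}
resource "aws_elastic_beanstalk_environment" "myapp-prod" {
name = "myapp-prod"
application = "${aws_elastic_beanstalk_application.myapp.name}"
solution_stack_name = "64bit Amazon Linux 2017.09 v4.4.0 running Node.js"
}
```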
23) Terraform with Packer - Building AMIs
-Packer is a commandline tool that can build AWS AMIs based on templates
-Instead of installing the software after booting up an instance, you can create an AMI with all the necessary software on
-This can speed up boot times of instances
-it is a common approach when you run a horizontally scaled app layer or a cluster of something
{
"variables": {
"aws_access_key": "",
"aws_secret_key": ""
},
"builders": [{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"region": "us-east-1",
"source_ami": "ami-fce3c696",
"instance_type": "t2.micro",
"ssh_username": "ubuntu",
"ami_name": "packer-example {{timestamp}}"
}],
"provisioners": [{
"type": "shell",
"scripts": [ "scripts/install_software.sh" ],
"execute_command": "{{.Vars}} sudo -E sh '{{ .Path }}'",
"pause_before": "10s"
}]
}
24) Terraform with Packer - A Jenkins workflow
1. git clone app repo - from github to jenkins
2. packer build - on jenkins, to build an AMI containing node + the app on AWS
ARTIFACT=$(packer build -machine-readable packer-demo.json | awk -F, '$0 ~/artifact,0,id/ {print $6}')
AMI_ID=$(echo $ARTIFACT | cut -d ':' -f2)
echo 'variable "APP_INSTANCE_AMI" { default = "'${AMI_ID}'" }' > amivar.tf
3. git clone terraform repo - on jenkins, the terraform files from a github repository
4. terraform apply (aws_instance.app-demo) on jenkins to create:
-app instance launched from the AMI created by Packer
-S3 bucket with the terraform state
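The generated amivar.tf can then be consumed by the terraform repo - a sketch of what the app instance definition could look like (resource name is illustrative):

```hcl
# APP_INSTANCE_AMI is declared (with its default) in the generated amivar.tf
resource "aws_instance" "app" {
ami = "${var.APP_INSTANCE_AMI}"
instance_type = "t2.micro"
}
```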
25) Terraform with Docker - Introduction
Building Docker images
-Just like packer builds AMIs, you can use docker to build docker images
-Those images can then be run on any Linux host with Docker Engine installed
Dockerfile -> Build -> Push -> Run -> App
Jenkins -> Amazon EC2 Container Registry -> Amazon EC2 Container Service (Prod)
Local build -> Local development environment with Docker Compose (Development)
Dockerfile
FROM node:4.6
WORKDIR /app
ADD . /app
RUN npm install
EXPOSE 3000
CMD npm start
index.js
var express = require('express');
var app = express();
app.get('/', function (req, res) {
res.send('Hello World!');
});
var server = app.listen(3000, function() {
var host = server.address().address;
var port = server.address().port;
console.log('Example app listening at http://%s:%s', host, port);
});
package.json
{
"name": "myapp",
"version": "0.0.1",
"private": true,
"scripts" : {
"start": "node index.js"
},
"engines": {
"node": "^4.6.1"
},
"dependencies": {
"express": "^4.14.0"
}
}
26) Terraform with Docker - Docker Repository
-Creation of the ECR repository can be done using terraform
ecr.tf
resource "aws_ecr_repository" "myapp" {
name = "myapp"
}
output.tf
output "myapp-repository-URL" {
value = "${aws_ecr_repository.myapp.repository_url}"
}
Docker build & push commands
docker build -t myapp-repository-url/myapp .
`aws ecr get-login`
docker push myapp-repository-url/myapp
27) Terraform with Docker - Docker Orchestration
28) Terraform with Docker - Terraform with ECR
29) Terraform with Docker - Terraform with ECS
ECS
-Now that your app is dockerized and uploaded to ECR, you can start the ECS cluster
-ECS (EC2 Container Service) will manage your docker containers
-You just need to start an autoscaling group with a custom AMI - the custom AMI contains the ECS agent
-Once the ECS cluster is online, tasks and services can be started on the cluster
-Defining the ECS cluster:
resource "aws_ecs_cluster" "example-cluster" {
name = "example-cluster"
}
-An autoscaling group launches EC2 instances that will join this cluster:
resource "aws_launch_configuration" "ecs-example-launchconfig" {
name_prefix = "ecs-launchconfig"
image_id = "${lookup(var.ECS_AMIS, var.AWS_REGION)}"
instance_type = "${var.ECS_INSTANCE_TYPE}"
key_name = "${aws_key_pair.mykeypair.key_name}"
iam_instance_profile = "${aws_iam_instance_profile.ecs-ec2-role.id}"
security_groups = ["${aws_security_group.ecs-securitygroup.id}"]
user_data = "#!/bin/bash\necho 'ECS_CLUSTER=example-cluster'>/etc/ecs/ecs.config\nstart ecs"
lifecycle {create_before_destroy = true }
}
resource "aws_autoscaling_group" "ecs-example-autoscaling" {
name = "ecs-example-autoscaling"
vpc_zone_identifier = ["${aws_subnet.main-public-1.id}", "${aws_subnet.main-public-2.id}"]
launch_configuration = "${aws_launch_configuration.ecs-example-launchconfig.name}"
min_size = 1
max_size = 1
tag {
key = "Name"
value = "ecs-ec2-container"
propagate_at_launch = true
}
}
- the IAM role policy (aws_iam_role_policy.ecs-ec2-role-policy) - note the ECR and CloudWatch Logs actions use the ecr: and logs: prefixes:
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:CreateCluster",
"ecs:DeregisterContainerInstance",
"ecs:DiscoverPollEndpoint",
"ecs:Poll",
"ecs:RegisterContainerInstance",
"ecs:StartTelemetrySession",
"ecs:Submit*",
"ecs:StartTask",
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
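Once the cluster instances are up, a task definition and service could be sketched like this - the container definition file and names are illustrative:

```hcl
resource "aws_ecs_task_definition" "myapp-task" {
family = "myapp"
container_definitions = "${file("templates/myapp.json")}"
}
resource "aws_ecs_service" "myapp-service" {
name = "myapp"
cluster = "${aws_ecs_cluster.example-cluster.id}"
task_definition = "${aws_ecs_task_definition.myapp-task.arn}"
desired_count = 1
}
```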
30) Terraform with Docker - A Jenkins workflow with terraform, ECR, ECS
31) Interpolation
-In Terraform, you can interpolate other values using ${...}
-You can use simple math functions, refer to other variables, or use conditionals (if-else)
-I have been using them already throughout the course, without naming them
* variables: ${var.VARIABLE_NAME} refers to a variable
* resources: ${aws_instance.name.id} (type.resource-name.attr)
* data sources: ${data.template_file.name.rendered} (data.type.resource-name.attr)
Interpolation: variables
-String variables: var.name - e.g. ${var.SOMETHING}
-Map variables: var.MAP["key"] - e.g. ${var.AMIS["us-east-1"]} or ${lookup(var.AMIS, var.AWS_REGION)}
-List variables: var.LIST, var.LIST[i] - e.g. ${var.subnets[i]} or ${join(",", var.subnets)}
Interpolation: various
-Outputs of a module: module.NAME.output - e.g. ${module.aws_vpc.vpcid}
-Count information: count.FIELD - when using the attribute count = number in a resource, you can use ${count.index}
-Path information: path.TYPE - path.cwd (current directory), path.module (module path), path.root (root module path)
-Meta information: terraform.FIELD - e.g. terraform.env shows the active workspace
-Math
*Add (+), Subtract (-), Multiply (*), and Divide (/) for float types
*Add (+), Subtract (-), Multiply (*), Divide (/), and Modulo (%) for integer types
*For example: ${2 + 3 * 4} results in 14
32) Conditionals
-Interpolations may contain conditionals (if-else)
-the syntax is:
CONDITION ? TRUEVAL : FALSEVAL
-for example:
resource "aws_instance" "myinstance" {
[...]
count = "${var.env == "prod" ? 2 : 1 }"
}
-The supported operators are:
*equality: == and !=
*numerical comparison: >, <, >=, <=
*boolean logic: &&, ||, unary !
33) Built-in Functions
-You can use built-in functions in your terraform resources
-the functions are called with the syntax name(arg1, arg2, ...) and wrapped with ${...}
*for example ${file("mykey.pub")} would read the contents of the public key file
-I will go over some of the commonly used functions to give you an idea of what is available
*it is best to use the reference documentation when you need to use a function: https://www.terraform.io/docs/configuration/interpolation.html
Terraform functions
-basename(path): returns the filename (last element) of a path - basename("/home/edward/file.txt") returns file.txt
-coalesce(string1, string2, ...) / coalescelist(list1, list2, ...): returns the first non-empty value / the first non-empty list - coalesce("", "", "hello") returns hello
-element(list, index): returns a single element from a list - element(module.vpc.public_subnets, count.index)
-format(format, args, ...) / formatlist(format, args, ...): formats a string/list according to the given format - format("server-%03d", count.index + 1) returns server-001, server-002, ...
-index(list, elem): finds the index of a given element in a list - index(aws_instance.foo.*.tags.Env, "prod")
-join(delim, list): joins a list together with a delimiter - join(",", var.AMIS) returns "ami-123,ami-456,ami-789"
-list(item1, item2, ...): creates a new list - join(":", list("a","b","c")) returns a:b:c
-lookup(map, key, [default]): performs a lookup on a map using "key"; returns the value representing "key" in the map - lookup(map("k", "v"), "k", "not found") returns "v"
-lower(string): returns the lowercase value of "string" - lower("HELLO") returns hello
-map(key, value, ...): returns a new map - map("k", "v", "k2", "v2") returns { "k" = "v", "k2" = "v2" }
-merge(map1, map2, ...): merges maps (union) - merge(map("k", "v"), map("k2", "v2")) returns { "k" = "v", "k2" = "v2" }
-replace(string, search, replace): performs a search and replace on a string - replace("aaab", "a", "b") returns bbbb
-split(delim, string): splits a string into a list - split(",", "a,b,c,d") returns [ "a", "b", "c", "d" ]
-substr(string, offset, length): extracts a substring from a string - substr("abcde", -3, 3) returns cde
-timestamp(): returns an RFC 3339 timestamp - "Server started at ${timestamp()}" returns Server started at <timestamp>
-upper(string): returns the uppercased string - upper("string") returns STRING
-uuid(): returns a UUID string in RFC 4122 v4 format
-values(map): returns the values of a map - values(map("k","v","k2","v2")) returns [ "v", "v2" ]
-you can try these functions interactively using terraform console
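For example, a hypothetical terraform console session, reusing two of the examples above (the prompt output may differ slightly per Terraform version):

```
$ terraform console
> lookup(map("k", "v"), "k", "not found")
v
> join(":", list("a", "b", "c"))
a:b:c
```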
34) Terraform Project Structure
-when starting with terraform on production environments, you quickly realize that you need a decent project structure
-ideally, you want to separate your development and production environments completely
*that way, if you always test terraform changes in development first, mistakes will be caught before they can have a production impact
*for complete isolation, it is best to create multiple AWS accounts, and use one account for dev, another for prod, and a third one for billing
*splitting terraform into multiple projects will also reduce the resources that you need to manage during one terraform apply
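One common way to lay this out on disk, as a sketch (directory names are illustrative; each environment keeps its own state):

```
terraform-project/
  dev/
    main.tf
    terraform.tfvars
  prod/
    main.tf
    terraform.tfvars
  modules/
    vpc/
    instances/
```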
35) Workspaces
36) Terraform - AWS EKS
EKS
-AWS EKS provides managed Kubernetes master nodes
*there are no master nodes for you to manage
*the master nodes are multi-AZ to provide redundancy
*the master nodes will scale automatically when necessary - if you ran your own Kubernetes cluster, you would have to scale it yourself when adding more worker nodes
AWS EKS vs ECS
-AWS charges money to run an EKS cluster (in us-east-1 $0.20 per hour) - for smaller setups, ECS is cheaper
-Kubernetes is much more popular than ECS, so if you are planning to deploy on more cloud providers / on-prem, it is a more natural choice
-Kubernetes has more features, but is also much more complicated than ECS - to deploy simpler apps/solutions, I'd prefer ECS
-ECS has very tight integration with other AWS Services, but it is expected that EKS will also be tightly integrated over time
a) Provision an EKS Cluster
-EKS Cluster
-IAM Roles
-Security Groups
-VPC
b) Deploy worker nodes
-Launch Configuration
-Autoscaling Group
-IAM Roles
-Security Groups
c) Connect to EKS
-~/.kube/config
-ConfigMap
-kubectl
-Heptio auth
Source:
1) https://www.youtube.com/watch?v=LVgP63BkhKQ
2)https://github.com/brikis98/terraform-up-and-running-code/blob/2b64b2e1cd7a3405af97642bcd7a83f4267d881c/code/terraform/05-tips-and-tricks/loops-and-if-statements/live/global/three-iam-users-unique-names/vars.tf#L4
3) https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa
4) https://github.com/terraform-providers
5) https://github.com/wardviaene/terraform-course