Terraform

Terraform is an "infrastructure as code" deployment tool. In a nutshell, Terraform lets you define virtual machine configurations in HashiCorp Configuration Language (HCL) and then does all the work of setting up these VMs for you automatically. The benefit of this approach is that your configuration is declarative, can be version controlled, and is repeatable (applying it is idempotent) and predictable.
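
As a taste of the declarative style, a configuration is just a set of blocks describing the desired end state. A minimal sketch (the resource type and attributes mirror the Proxmox example later on this page and are illustrative only):

# Illustrative only: declare the desired end state; Terraform works out how to reach it
resource "proxmox_vm_qemu" "web" {
  name   = "web-1"
  cores  = 2
  memory = 2048
}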

Quick usage guide

Installation

Refer to the documentation at: https://www.terraform.io/downloads

Ubuntu

# curl -fsSL https://apt.releases.hashicorp.com/gpg | apt-key add -
# apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
# apt-get update && apt-get install terraform

Red Hat / Fedora / Rocky Linux

# dnf install -y dnf-plugins-core
## Fedora
# dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
## Red Hat / Rocky Linux / CentOS
# dnf config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
# dnf -y install terraform
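
Once installed, verify that the terraform binary is on your path:

$ terraform version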

Build from source

See also: CloudStack#Terraform

We can build the CloudStack Terraform provider using Docker and the golang image. I'll also modify the go.mod file to override the cloudstack-go library with a specific version.

# git clone https://github.com/apache/cloudstack-terraform-provider.git
# cd cloudstack-terraform-provider
# git clone https://github.com/tetra12/cloudstack-go.git
# cat <<EOF >> go.mod
replace github.com/apache/cloudstack-go/v2 => ./cloudstack-go
exclude github.com/apache/cloudstack-go/v2 v2.11.0
EOF
# docker run --rm -ti -v /home/me/cloudstack-terraform-provider/:/build golang bash 
> cd /build
> go build

Copy the resulting binary to your Terraform plugins path. Because I had already run terraform init, the downloaded provider sits in my Terraform project directory under .terraform/providers/registry.terraform.io/cloudstack/cloudstack/0.4.0/linux_amd64/terraform-provider-cloudstack_v0.4.0, so that is where the new binary goes. Edit the metadata file in the same directory as the provider executable and remove the file hash so that Terraform will run the replacement provider.
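
For example, copying the freshly built binary over the one terraform init downloaded might look like this when run from the Terraform project directory (the destination path is from my setup above, and the built binary's name depends on the module, so adjust both to match your build):

$ cp /home/me/cloudstack-terraform-provider/cloudstack-terraform-provider \
    .terraform/providers/registry.terraform.io/cloudstack/cloudstack/0.4.0/linux_amd64/terraform-provider-cloudstack_v0.4.0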

Terraform workflow

When working with Terraform, there are four main commands that you will use.

First, you need to initialize your providers -- the plugins that you will need to interact with your underlying infrastructure. This is accomplished by creating a main.tf file defining your providers and then running terraform init.

Next, define your resources in the same main.tf file. Your infrastructure's networking, virtual machines, etc. will be defined in this file. Once you're ready, run terraform plan to preview the changes Terraform will make. If everything looks good, run terraform apply to create your resources.

Finally, you can tear down everything using terraform destroy.

In summary:

Command             Description
terraform init      Initialize and set up any providers defined in your configuration.
terraform plan      Show any changes Terraform will make.
terraform apply     Apply the changes (create/modify/destroy) to VMs as required.
terraform destroy   Tear down everything.
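
A typical cycle looks like this; writing the plan to a file is optional, but it guarantees that apply executes exactly what was reviewed:

$ terraform init
$ terraform plan -out=tfplan
$ terraform apply tfplan
$ terraform destroy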

Provider-specific notes

Below are some quick notes on specific IaaS platforms to help you get started with Terraform. There is a plethora of other providers, which you can read more about in Terraform's documentation: https://runebook.dev/en/docs/terraform/-index-#Providers

Proxmox

There are a number of Proxmox providers for Terraform. The most popular is Telmate/proxmox, which is the one we'll use here. For more information on this provider and to see all the available input parameters, see: https://registry.terraform.io/providers/Telmate/proxmox/latest/docs/resources/vm_qemu.

Create a main.tf file with the following:

# Use Telmate proxmox provider
terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
      version = "2.7.4"
    }
  }
}

# Configure the proxmox provider
provider "proxmox" {
  pm_api_url = "https://proxmox-server:8006/api2/json"
  pm_api_token_id = "terraform@pam!terraform_token_id"
  pm_api_token_secret = "e458e7bc-d8e6-4028-885b-d0896f4becfa"
  pm_tls_insecure = true
}

Set up an account with an API token on Proxmox. This account should have permissions on '/' as well as on any storage volumes it needs to create VMs on, such as '/storage/data'. (TBD)
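
The exact commands depend on your environment, but creating the API token referenced above could look roughly like the following. This sketch assumes the terraform user lives in the PAM realm (as in the token ID above) and grants the broad Administrator role for simplicity; a narrower custom role is preferable in practice:

# useradd terraform                     ## PAM realm users must also exist as Linux users
# pveum user add terraform@pam
# pveum aclmod / -user terraform@pam -role Administrator
# pveum user token add terraform@pam terraform_token_id --privsep=0
## The last command prints the token secret, which maps to pm_api_token_secret above.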

Run terraform init to set up the provider.

Deploy a VM

After setting up the Proxmox provider, add the following to your main.tf file.

# Define our first VM
resource "proxmox_vm_qemu" "test-vm" {
  count = 1                            # 0 will destroy the VM
  name = "test-vm-${count.index + 1}"  # count.index starts at 0. We want the VM to be named test-vm-1
  target_node = var.proxmox_host       # target proxmox host
  clone = var.template_name            # VM template to clone from
  
  os_type = "cloud-init"
  agent = 1
  cores = 2
  sockets = 1
  cpu = "host"
  memory = 2048
  scsihw = "virtio-scsi-pci"
  bootdisk = "sata0"

  disk {
    slot = 0
    size = "10G"
    type = "sata"
    storage = "data"
    iothread = 1
  }
  
  network {
    model = "virtio"
    bridge = "vmbr0"
  }
  
  lifecycle {
    ignore_changes = [
      network,
    ]
  }
  
  ipconfig0 = "ip=10.1.1.10${count.index + 1}/22,gw=10.1.1.1"
  
  # sshkeys set using variables. the variable contains the text of the key.
  sshkeys = <<EOF
  ${var.ssh_key}
  EOF
}

vars.tf:

variable "ssh_key" {
  default = "ssh-rsa  ... "
}

# This should be the exact same name as your proxmox node name
variable "proxmox_host" {
  default = "proxmox-server"
}

variable "template_name" {
  default = "rocky85-template"
}
  • Run terraform plan to plan the tasks
  • Run terraform apply to create the VMs.
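
After the apply, you can surface values from the configuration with an output block. A hypothetical example that prints each VM's name and IP configuration (view it with terraform output):

# Hypothetical output block; names/ipconfigs come from the resource arguments above
output "vm_info" {
  value = {
    names     = proxmox_vm_qemu.test-vm[*].name
    ipconfigs = proxmox_vm_qemu.test-vm[*].ipconfig0
  }
}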

CloudStack

Create a main.tf file with the following:

terraform {
  required_providers {
    cloudstack = {
      source = "cloudstack/cloudstack"
      version = "0.4.0"
    }
  }
}

provider "cloudstack" {                                                                                                                          
  api_url    = "${var.cloudstack_api_url}" 
  api_key    = "${var.cloudstack_api_key}" 
  secret_key = "${var.cloudstack_secret_key}" 
}

Create a vars.tf with the following:

variable "cloudstack_api_url" { 
   default = "http://cloudstack-management:8080/client/api" 
} 
variable "cloudstack_api_key" { 
   default = " 
} 
variable "cloudstack_api_secret_key" {                                                                                                           
   default = "" 
}

Ensure that your API URL ends with /client/api. If you don't already have an API key and secret key, log in to the CloudStack console and navigate to your profile. Click on 'Generate Keys' and copy down the keys listed under your profile. Populate the values into vars.tf.
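
Rather than hard-coding secrets as variable defaults, you can also put the values in a terraform.tfvars file (which Terraform loads automatically) and keep that file out of version control. A sketch, using placeholder values:

cloudstack_api_url    = "http://cloudstack-management:8080/client/api"
cloudstack_api_key    = "<your API key>"
cloudstack_secret_key = "<your secret key>"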

Run terraform init to set up the provider.

Create your first network and VM

Here's a Terraform file that sets up a VPC, a guest network, a network ACL, and a few VMs using a custom cloud-init payload.

# Create a new VPC
resource "cloudstack_vpc" "default" {
  name = "rcs-vpc"
  display_text = "rcs-vpc"
  cidr = "100.64.0.0/20"
  vpc_offering = "Default VPC offering"
  zone = "zone1"
}

# Create a new ACL
resource "cloudstack_network_acl" "default" {
    name  = "vpc-acl"
    vpc_id = "${cloudstack_vpc.default.id}"
}

# One ingress and one egress rule for the ACL
resource "cloudstack_network_acl_rule" "ingress" {
    acl_id = "${cloudstack_network_acl.default.id}"

    rule {
        action       = "allow"
        cidr_list    = ["10.0.0.0/8"]
        protocol     = "tcp"
        ports        = ["22", "80", "443"]
        traffic_type = "ingress"
    }
}

resource "cloudstack_network_acl_rule" "egress" {
    acl_id = "${cloudstack_network_acl.default.id}"

    rule {
        action       = "allow"
        cidr_list    = ["0.0.0.0/0"]
        protocol     = "all"
        traffic_type = "egress"
    }
}


# Create a new network in the VPC
resource "cloudstack_network" "leosnet" {
    name = "leosnet"
    display_text = "leosnet"
    cidr = "100.64.1.0/24"
    network_offering = "DefaultIsolatedNetworkOfferingForVpcNetworks"
    acl_id = "${cloudstack_network_acl.default.id}"
    vpc_id = "${cloudstack_vpc.default.id}"
    zone = "zone1"
}

# Create a new public IP address for this network
resource "cloudstack_ipaddress" "public_ip" {
    vpc_id = "${cloudstack_vpc.default.id}"
    network_id = "${cloudstack_network.leosnet.id}"
}

# Create a port forwarding for SSH to the first VM we create
resource "cloudstack_port_forward" "ssh" {
    ip_address_id = "${cloudstack_ipaddress.public_ip.id}"

    forward {
        protocol  = "tcp"
        private_port = 22
        public_port = 22
        virtual_machine_id = "${cloudstack_instance.leo[0].id}"
    }
}

# Create VMs. We can create multiples by specifying count=
resource "cloudstack_instance" "leo" {
  count = 3
  name = "leo${count.index+1}"
  zone = "zone1"
  service_offering = "rcs.c4"
  # This template was created by Packer with CloudInit support
  template = "RockyLinux 8.5"
  network_id = "${cloudstack_network.leosnet.id}"

  # Warning: Enabling this option will result in the VMs' disks being deleted when the VMs are destroyed.
  # This option only works if 'allow.user.expunge.recover.vm' is set to true in global settings
  expunge = true

  user_data = <<EOF
#cloud-config
disable_root: false
chpasswd:
  list: |
    root:password
  expire: false

EOF
}
  • Run terraform plan to plan the tasks
  • Run terraform apply to create the VMs.
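
To see which public IP was acquired for the port forward, an output block along these lines can help (ip_address is, as far as I can tell, the attribute the cloudstack_ipaddress resource exports):

# Hypothetical output; prints the acquired public IP after terraform apply
output "public_ip" {
  value = cloudstack_ipaddress.public_ip.ip_address
}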

Troubleshooting

Issues with the apply step

I kept getting a "400 Parameter verification failed" error:

╷
│ Error: 400 Parameter verification failed.
│
│   with proxmox_vm_qemu.test-vm[0],
│   on main.tf line 20, in resource "proxmox_vm_qemu" "test-vm":
│   20: resource "proxmox_vm_qemu" "test-vm" {
│
╵

The issues I encountered that triggered this error were:

  • Incorrect host name for the Proxmox node
  • Invalid values for the disk type: I changed type = "scsi" to type = "sata" while leaving iothread = 1 defined (see the sketch below).
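
For example, with a sata disk, a disk block along these lines (iothread dropped, since io threads generally apply to virtio/scsi disks rather than sata) may get past the error, but check the provider documentation for the combinations your version accepts:

  disk {
    slot    = 0
    size    = "10G"
    type    = "sata"
    storage = "data"
  }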

To help diagnose issues, review the provider's documentation and ensure the values you're using are appropriate. You may also want to enable additional logging by running with the TF_LOG=TRACE environment variable:

$ TF_LOG=TRACE terraform apply
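
The trace output is very verbose, so it can be easier to write it to a file with TF_LOG_PATH and search through it afterwards:

$ TF_LOG=TRACE TF_LOG_PATH=./terraform-trace.log terraform apply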