Provisioning your first VPS with Terraform


Throughout this tutorial series we will need multiple servers onto which we can install the various pieces that make a Rancher 2 / Kubernetes HA cluster come together.

The aim of this tutorial in particular is to create our virtual server(s), or "cloud" infrastructure, in a reliable, repeatable manner. For this, we will use HashiCorp's Terraform.

In short, Terraform allows you to define what your server infrastructure looks like using code.

To complete this tutorial, you will need:

Note: I am using Digital Ocean in this tutorial series because you get $100 worth of credit when you sign up there using that link. That's more than enough to complete this tutorial series.

By the end of this tutorial you will have:

  • Created a Terraform configuration-as-code file describing the infrastructure you want to provision
  • Validated your new Terraform execution plan
  • Applied your plan, creating your first VPS using Terraform
  • Destroyed your plan, cleaning up after yourself

Note: Terraform is great for working with cloud providers and repeatable infrastructure. It is not a good fit if you use dedicated / custom hardware. One of the larger goals of this tutorial series is to empower you to use cheaper, more powerful hardware than can be found at cloud VPS providers. If you go that route, you can skip this video and the next.

Starting Point

A short example tells you most of what you initially need to know:

# main.tf

variable "digitalocean_token" {}

# Configure the Digital Ocean Provider
provider "digitalocean" {
  token = "${var.digitalocean_token}"
}

#  Resources
## Create a new ssh key
resource "digitalocean_ssh_key" "default" {
  name       = "my ssh key"
  public_key = "${file("~/.ssh/id_rsa.pub")}"
}

## Create a new Digital Ocean Droplet using the SSH key
resource "digitalocean_droplet" "node1" {
  name     = "node1"
  image    = "ubuntu-16-04-x64"
  size     = "s-1vcpu-1gb"
  region   = "lon1"
  ssh_keys = ["${digitalocean_ssh_key.default.fingerprint}"]
}

The naming of the file (main.tf) is a point for debate elsewhere. Here's a good starting point for bigger projects.

Configure the Digital Ocean Provider

Is decoupling our important bits of data from the plan of execution a good idea?

Is it overkill?

Sure, for this size of infrastructure it's probably not the best use of your time to be learning Terraform if all you will ever do is set up a VPS every 6 months or so.

If that's you, and you have no desire to learn Terraform, then crank out your servers by hand and rock on. I'm not going to stop you.

If you provision a good number of servers with any regularity, and you have the desire to improve your working practices, then let's Terraform.

Terraform can work with a whole slew of Cloud Providers. And who doesn't like a good buzzword?

Cloud Providers, to you and me, are the big ones like Amazon AWS, Azure, and Google Cloud Platform, or, by comparison, lesser known ones like Digital Ocean and Hetzner (depending on your part of the world), both of which I really like and heartily recommend.

Terraform provides the abstraction between one set of commands and many providers.

All providers behave similarly, but there are specifics to each. It's quite lovely.

By declaring provider "digitalocean", we tell Terraform that we will be working with Digital Ocean. When we initialise this directory under Terraform (terraform init), it will go away and download the required provider plugin, which handles the conversion between our config and that provider's API; Digital Ocean, in our case.

In order to work with a specific provider, we will need an API token or similar credentials. How you get those depends on your cloud provider.

Resources

What do we want?

A brand new shiny VPS server.

When do we want it?

Now!

In order to get access to it, however, we will need a way to authenticate ourselves as a valid user.

Access Via SSH Key

resource "digitalocean_ssh_key" "default" {
  name       = "my ssh key"
  public_key = "${file("~/.ssh/id_rsa.pub")}"
}

You don't need to provide an SSH key, but if you don't, accessing your new VPS will be more difficult, and less secure than it needs to be, for this tutorial. Without a key, you should get emailed a password for your root login. This isn't an approach I have ever taken in reality, and again, every provider may be different.

By providing one or more SSH keys we will gain immediate ssh root@{new_server_ip_addr} powers. That will be very handy for the next part of this tutorial series where we will be using Ansible to turn the clean and fresh image into a fully installed Kubernetes HA cluster node.

Here we are asking Terraform to look at a local file on our machine: ~/.ssh/id_rsa.pub.

This is fine, for our small demonstration, but not so great if you're in a team of > 1. This will add the local, currently logged in user's SSH key to the remote machine's root user's authorized_keys. You can define multiple resource entries to have more than one SSH key, if you need that.
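For example, a second key for a teammate could be added as another resource. The resource name and file path here are hypothetical; adapt them to your own setup:

```hcl
## Hypothetical: a second SSH key for a teammate
resource "digitalocean_ssh_key" "teammate" {
  name       = "teammate ssh key"
  public_key = "${file("keys/teammate_id_rsa.pub")}"
}
```

The droplet's ssh_keys argument would then list both fingerprints: ssh_keys = ["${digitalocean_ssh_key.default.fingerprint}", "${digitalocean_ssh_key.teammate.fingerprint}"].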

Creating A New Digital Ocean Droplet

resource "digitalocean_droplet" "node1" {
  name     = "node1"
  image    = "ubuntu-16-04-x64"
  size     = "s-1vcpu-1gb"
  region   = "lon1"
  ssh_keys = ["${digitalocean_ssh_key.default.fingerprint}"]
}

The syntax for our first digitalocean_droplet resource is the same as we used to set up the SSH key resource. Only the available arguments are different.

There are various "top level" sections, such as resource, provider, variable, data, and output.

These tell Terraform what type of thing it is we are configuring.

The strings after the section name: "digitalocean_droplet" "node1" are in the format of TYPE and NAME. The combination must be unique. In other words you cannot define two or more resource "digitalocean_droplet" "node1" { entries.
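To illustrate, a second droplet would reuse the same TYPE but must take a different NAME. The node2 resource below is purely illustrative at this point:

```hcl
## Same TYPE, different NAME: this is allowed
resource "digitalocean_droplet" "node2" {
  name     = "node2"
  image    = "ubuntu-16-04-x64"
  size     = "s-1vcpu-1gb"
  region   = "lon1"
  ssh_keys = ["${digitalocean_ssh_key.default.fingerprint}"]
}
```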

The digitalocean_droplet TYPE is only available because we defined provider "digitalocean".

The NAME is anything we want, and is something that makes sense to us. I'm after building a Kubernetes HA cluster, and that typically means we will need cluster nodes, of which node1 will be the first.

After that, all the attributes inside the resource can be found in the docs for the specific type of resource you are configuring.

Make sure you provide all the required attributes.

Perhaps the trickiest part is knowing what the values of each of the attributes can be. Some providers come with decent example documentation, so figuring out what regions are available is not that hard. Others, such as Hetzner, have much sparser documentation, so finding the relevant info is less intuitive. My current solution is to manually set up resources, then use Terraform to import them so I can figure out what the options might be. Maybe there's a better way.
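As a sketch, that import workflow looks something like this. The droplet ID is hypothetical (yours comes from your provider's control panel), and the Docker wrapper is omitted for brevity:

```shell
# Import an existing, manually created droplet into Terraform's state
terraform import digitalocean_droplet.node1 134089814

# Then inspect the recorded attributes to see what values are in play
terraform state show digitalocean_droplet.node1
```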

Providing an API Key

By way of the digitalocean plugin, Terraform will convert our infrastructure as code (the main.tf above) into a full, working VPS.

In order to do that, Terraform will interact with the Digital Ocean API.

And in order to do that, we need to provide Terraform with a valid API key.

Every provider has a different way of generating API keys. With Digital Ocean it's (currently) a case of going to Manage > API > Generate New Token.

Your token needs both read and write access.

When you create your token, you will only see it one time, and then it will never be shown again. It's not the end of the world if you mess it up somehow at this stage: just delete it and re-create it.

Once you have it, create a new file in the same directory as your main.tf file, which we will call terraform.tfvars. Unsurprisingly, this is where we shall store variables related to Terraform.
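The contents of terraform.tfvars are simple key = value assignments; the token value below is obviously a placeholder for your own:

```hcl
# terraform.tfvars
digitalocean_token = "your-digital-ocean-api-token-here"
```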

In main.tf we had a variable definition:

variable "digitalocean_token" {}

The syntax is slightly different here. We provided the 'section' of variable, and this particular variable's name. In our case this is digitalocean_token, but this can be anything we like. We've also got an empty definition block {}.

You can define multiple variables; just duplicate the line and change the NAME.

What we've done here is tell Terraform that when we run the terraform command, we will pass in the value for the digitalocean_token. This can be done via a CLI argument, via an environment variable, or in our case, by using the default naming convention.
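For completeness, the other two options might look like this (the token value is a placeholder; we'll stick with the terraform.tfvars file):

```shell
# Option 1: environment variable - Terraform maps TF_VAR_<name> to var.<name>
export TF_VAR_digitalocean_token="your-token-here"

# Option 2: pass the value directly as a CLI argument
terraform plan -var "digitalocean_token=your-token-here"
```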

When we execute the terraform command, Terraform will look inside the current directory for a file called terraform.tfvars, and automatically use the values defined there.

Be sure to read the docs to learn all about Terraform Variables.

Running Terraform

Whilst you are completely free to install Terraform locally on your machine, for this tutorial and throughout the rest of this tutorial series, we will be using Docker to run all our commands. This means the only installation requirement is Docker, and everything else will run inside a Docker container.

Again, for clarity / simplicity, feel free to install and run Terraform locally.

Here's the command we will run:

docker run --rm \
    -v $(pwd):/go/src/github.com/hashicorp/terraform \
    -w /go/src/github.com/hashicorp/terraform \
    hashicorp/terraform:0.11.11 \
    --version

# expected output:
Terraform v0.11.11

At the time of writing / recording, 0.11.11 is the latest version of Terraform available via Docker Hub. Adjust accordingly, but if you hit any issues, use that specific version for maximum compatibility with this tutorial.

This Docker command uses a volume that maps our current working directory contents to the path of /go/src/github.com/hashicorp/terraform inside the resulting container.

We also set the working directory (-w) to that path so that when the terraform command executes inside the container, it knows about the files we have locally created.

And because we are using the default naming convention, we don't need to pass in any extra arguments to the command to get things to work.

Initialising the Project

As covered above, in order for Terraform to know how to interact with Digital Ocean, we need the relevant provider plugin. Because Digital Ocean is a common cloud provider, Terraform has a plugin readily available for us to download, and use. It will do so as soon as we initialise our project. Let's do that now.

docker run --rm \
    -v $(pwd):/go/src/github.com/hashicorp/terraform \
    -w /go/src/github.com/hashicorp/terraform \
    hashicorp/terraform:0.11.11 \
    init

# expected output:

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "digitalocean" (1.1.0)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.digitalocean: version = "~> 1.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

After doing this, a new, hidden .terraform directory should have been created in your local directory.
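The init output also recommends adding a version constraint, and that's worth doing. Updating the provider block in main.tf might look like this:

```hcl
# Pin the provider version to avoid surprise breaking changes on re-init
provider "digitalocean" {
  token   = "${var.digitalocean_token}"
  version = "~> 1.1"
}
```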

Plan The Work, Work The Plan

Before we build our new infrastructure, it would be wise to check that the plan of execution is as expected.

Terraform provides us with the plan command which will show us the intended schedule of work (the execution plan) that will be carried out if we were to run terraform apply right now.

Let's see what we get:

docker run --rm \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
    -v $(pwd):/go/src/github.com/hashicorp/terraform \
    -w /go/src/github.com/hashicorp/terraform \
    hashicorp/terraform:0.11.11 \
      plan

# expected output:

Error: digitalocean_ssh_key.default: 1 error(s) occurred:

* digitalocean_ssh_key.default: file: open /root/.ssh/id_rsa.pub: no such file or directory in:

${file("~/.ssh/id_rsa.pub")}

Right, so we told Terraform that we will provide a local file - our ~/.ssh/id_rsa.pub key - as the SSH key used to connect as the root user.

The problem is that we're using Docker, and there is no configured /root/.ssh/id_rsa.pub file inside the resulting container.

This isn't a major issue. We can, again, use a volume to map our local user's public SSH key into the resulting container at that very path. We need to update the docker run command with that new volume mapping:

# note: the new volume mapping for our public key comes first
docker run --rm \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
    -v $(pwd):/go/src/github.com/hashicorp/terraform \
    -w /go/src/github.com/hashicorp/terraform \
    hashicorp/terraform:0.11.11 \
    plan

# expected output:        
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + digitalocean_droplet.node1
      id:                   <computed>
      backups:              "false"
      disk:                 <computed>
      image:                "ubuntu-16-04-x64"
      ipv4_address:         <computed>
      ipv4_address_private: <computed>
      ipv6:                 "false"
      ipv6_address:         <computed>
      ipv6_address_private: <computed>
      locked:               <computed>
      memory:               <computed>
      monitoring:           "false"
      name:                 "node1"
      price_hourly:         <computed>
      price_monthly:        <computed>
      private_networking:   "false"
      region:               "lon1"
      resize_disk:          "true"
      size:                 "s-1vcpu-1gb"
      ssh_keys.#:           "1"
      ssh_keys.1335814790:  "my ssh key"
      status:               <computed>
      vcpus:                <computed>
      volume_ids.#:         <computed>

  + digitalocean_ssh_key.default
      id:                   <computed>
      fingerprint:          <computed>
      name:                 "my ssh key"
      public_key:           "ssh-rsa {your key here}"

Plan: 2 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Looks about right to me. We expected to create two new resources: a 1GB droplet running Ubuntu 16.04 that we're calling node1, and that new droplet should have the SSH key we are providing.

Cool. Let's build this bad boy.

Building Digital Ocean Droplets With Terraform

All the hard work has now been done. All that is left to do is apply the execution plan, which will go ahead and create our VPS / Digital Ocean Droplet as specified in our config.

Because we're using Docker we will hit upon an issue whereby the apply command expects input from us, and we can't provide any. For this there are two options.

The first is to -auto-approve whatever prompts we are given:

docker run --rm \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
    -v $(pwd):/go/src/github.com/hashicorp/terraform \
    -w /go/src/github.com/hashicorp/terraform \
    hashicorp/terraform:0.11.11 \
    apply -auto-approve

The second, and the one I'm going for, is to tell Docker that we want an interactive terminal session:

# added -it
docker run -it --rm \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
    -v $(pwd):/go/src/github.com/hashicorp/terraform \
    -w /go/src/github.com/hashicorp/terraform \
    hashicorp/terraform:0.11.11 \
    apply

# expected output

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + digitalocean_droplet.node1
      id:                   <computed>
      backups:              "false"
      disk:                 <computed>
      image:                "ubuntu-16-04-x64"
      ipv4_address:         <computed>
      ipv4_address_private: <computed>
      ipv6:                 "false"
      ipv6_address:         <computed>
      ipv6_address_private: <computed>
      locked:               <computed>
      memory:               <computed>
      monitoring:           "false"
      name:                 "node1"
      price_hourly:         <computed>
      price_monthly:        <computed>
      private_networking:   "false"
      region:               "lon1"
      resize_disk:          "true"
      size:                 "s-1vcpu-1gb"
      ssh_keys.#:           <computed>
      status:               <computed>
      vcpus:                <computed>
      volume_ids.#:         <computed>

  + digitalocean_ssh_key.default
      id:                   <computed>
      fingerprint:          <computed>
      name:                 "my ssh key"
      public_key:           "{your ssh key here}"

Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

digitalocean_ssh_key.default: Creating...
  fingerprint: "" => "<computed>"
  name:        "" => "my ssh key"
  public_key:  "" => "{your ssh key here}"
digitalocean_ssh_key.default: Creation complete after 1s (ID: 24092024)
digitalocean_droplet.node1: Creating...
  backups:              "" => "false"
  disk:                 "" => "<computed>"
  image:                "" => "ubuntu-16-04-x64"
  ipv4_address:         "" => "<computed>"
  ipv4_address_private: "" => "<computed>"
  ipv6:                 "" => "false"
  ipv6_address:         "" => "<computed>"
  ipv6_address_private: "" => "<computed>"
  locked:               "" => "<computed>"
  memory:               "" => "<computed>"
  monitoring:           "" => "false"
  name:                 "" => "node1"
  price_hourly:         "" => "<computed>"
  price_monthly:        "" => "<computed>"
  private_networking:   "" => "false"
  region:               "" => "lon1"
  resize_disk:          "" => "true"
  size:                 "" => "s-1vcpu-1gb"
  ssh_keys.#:           "" => "1"
  ssh_keys.3339039315:  "" => "81:93:65:c0:3e:d5:93:09:c6:b0:a3:19:f7:cc:a1:c4"
  status:               "" => "<computed>"
  vcpus:                "" => "<computed>"
  volume_ids.#:         "" => "<computed>"
digitalocean_droplet.node1: Still creating... (10s elapsed)
digitalocean_droplet.node1: Still creating... (20s elapsed)
digitalocean_droplet.node1: Creation complete after 23s (ID: 134089814)

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

And within 23 seconds we have a brand new, shiny Ubuntu 16.04 Digital Ocean Droplet up and running.

That said, it's not told us what the public IP address is, so connecting is a chore.

We could (and probably should) log in to Digital Ocean, verify our Droplet is up and running, and grab the new IP from there. That's cool, but not ideal with lots of servers.

Another option is to look inside the newly created terraform.tfstate file, which will tell us exactly what each of those computed values actually resolved to:

cat terraform.tfstate
{
    "version": 3,
    "terraform_version": "0.11.11",
    "serial": 7,
    "lineage": "bdebcaee-e1ba-947f-6cd5-ecb69282ddd5",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {
                "digitalocean_droplet.node1": {
                    "type": "digitalocean_droplet",
                    "depends_on": [
                        "digitalocean_ssh_key.default"
                    ],
                    "primary": {
                        "id": "134089814",
                        "attributes": {
                            "backups": "false",
                            "disk": "25",
                            "id": "134089814",
                            "image": "ubuntu-16-04-x64",
                            "ipv4_address": "178.62.92.213",
                            "ipv4_address_private": "",
                            "ipv6": "false",
                            "ipv6_address": "",
                            ... etc

A third option is to ask Terraform to drop that info on to the terminal after running by way of an output. Add the following to your main.tf file:

# main.tf

variable "digitalocean_token" {}

# Configure the Digital Ocean Provider
provider "digitalocean" {
  token = "${var.digitalocean_token}"
}

#  Resources
## Create a new ssh key
resource "digitalocean_ssh_key" "default" {
  name       = "my ssh key"
  public_key = "${file("~/.ssh/id_rsa.pub")}"
}

## Create a new Digital Ocean Droplet using the SSH key
resource "digitalocean_droplet" "node1" {
  name     = "node1"
  image    = "ubuntu-16-04-x64"
  size     = "s-1vcpu-1gb"
  region   = "lon1"
  ssh_keys = ["${digitalocean_ssh_key.default.fingerprint}"]
}

# Display the IP address
output "ipv4_address" {
  value = "${digitalocean_droplet.node1.ipv4_address}"
}

At this point I should say that you can run the apply command as many times as you need. Resources that already exist won't be re-created each time.

docker run -it --rm \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
    -v $(pwd):/go/src/github.com/hashicorp/terraform \
    -w /go/src/github.com/hashicorp/terraform \
    hashicorp/terraform:0.11.11 \
    apply

# expected output:

digitalocean_ssh_key.default: Refreshing state... (ID: 24092024)
digitalocean_droplet.node1: Refreshing state... (ID: 134089814)

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

ipv4_address = 178.62.92.213
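As a handy aside, once the state file exists, the output command should re-print these values at any time without another apply:

```shell
docker run --rm \
    -v $(pwd):/go/src/github.com/hashicorp/terraform \
    -w /go/src/github.com/hashicorp/terraform \
    hashicorp/terraform:0.11.11 \
    output ipv4_address
```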

Whichever way you choose, you should now be able to ssh in as root:

ssh root@178.62.92.213

The authenticity of host '178.62.92.213 (178.62.92.213)' can't be established.
ECDSA key fingerprint is SHA256:SquqgVztvmX/5oDvtyCIAfbVz+awt5aYBYmrP4XX7Kw.
Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '178.62.92.213' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-142-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@node1:~# 

Nice.

As this is just an example node, we better clean up after ourselves.

Cleaning Up / Destroying Infrastructure

We can clean up, or destroy, all the resources that Terraform has under management with a single command. Again, this command will expect us to provide a confirmation at the terminal, so either -auto-approve or, preferably at this stage, manually accept:

docker run -it --rm \
    -v ~/.ssh/id_rsa.pub:/root/.ssh/id_rsa.pub \
    -v $(pwd):/go/src/github.com/hashicorp/terraform \
    -w /go/src/github.com/hashicorp/terraform \
    hashicorp/terraform:0.11.11 \
    destroy

# expected output:
digitalocean_ssh_key.default: Refreshing state... (ID: 24092024)
digitalocean_droplet.node1: Refreshing state... (ID: 134089814)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - digitalocean_droplet.node1

  - digitalocean_ssh_key.default

Plan: 0 to add, 0 to change, 2 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

digitalocean_droplet.node1: Destroying... (ID: 134089814)
digitalocean_droplet.node1: Still destroying... (ID: 134089814, 10s elapsed)
digitalocean_droplet.node1: Destruction complete after 11s
digitalocean_ssh_key.default: Destroying... (ID: 24092024)
digitalocean_ssh_key.default: Destruction complete after 0s

It's a good idea to log in to DO at this point, if you haven't already done so, and visually confirm that the infrastructure has definitely been deleted. I trust a computer up to a point, but like to double check I'm not going to receive an unexpected bill at some future date. +1 for paranoia.

Ok, so that's it.

In this video we have learned how to use Terraform to create a Digital Ocean droplet, and how we can provide additional data such as SSH keys in order to connect to that new droplet.

We've learned how to validate our execution plan, and apply that plan to create new infrastructure. We then destroyed that infrastructure, with a single command.

In the next video we will map out the resources we need for our Rancher 2 & Kubernetes HA cluster, and provision them for the rest of the tutorial.

Episodes

#  Title                                                         Duration
1  Provisioning your first VPS with Terraform                    18:11
2  Provisioning lots of VPSs at the same time with Terraform     05:59
3  Installing software on your new VPS with Docker and Ansible   14:41