Cloud Native Blog - Container Solutions

Docker Swarm with Azure Container Services

Written by Carlos Leon | Apr 20, 2017 4:10:55 PM


While setting up a Docker Swarm cluster for one of our customers on their public cloud provider, Microsoft Azure, we learned how easy it is to do with Terraform.
In this blog post we're going to show you how, and provide you with the tools for doing it yourself.


Background

Setting up an HA Docker Swarm cluster in Azure is much easier than one might think. By abstracting away the underlying architecture and the way its components interact, Microsoft has created a concept called Container Service. An instance of a Container Service is simply a system that orchestrates your containers using Kubernetes, DC/OS, or Swarm as the engine. Azure takes care of creating all the underlying infrastructure for you (VMs, public IP addresses, DNS names, etc.). All you need to specify are simple parameters such as the profile of the worker nodes (VM size), DNS name prefixes, the (geographical) location, and a few other minor details.

Getting Started

Setting up an instance of Docker Swarm in Azure using Terraform is, then, pretty straightforward:

Getting Ready

First, we're going to define the variables that matter most in this scenario, along with some sane defaults. Feel free to adjust them to your needs:

 
# - vars.tf - #
variable "prefix" {
  default = "cscloud"
  description = "prefix to be used across the different resources to be created"
}
 
variable "location" {
  default = "West Europe"
  description = "the location where all your resources will be created"
}
 
variable "masters" {
  default = 1
  description = "amount of master nodes in the cluster"
}
 
variable "workers" {
  default = 3
  description = "amount of worker nodes in the cluster"
}
 
variable "vm_size" {
  default = "Standard_A2"
  description = "the VM size of the worker nodes"
}
 
variable "vm_diagnostics_enabled" {
  default = false
  description = "enable diagnostics for the VMs"
}
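If you'd rather override these defaults without editing vars.tf, Terraform automatically picks up a terraform.tfvars file in the working directory. A hypothetical example (all values are illustrative, not recommendations):

```hcl
# - terraform.tfvars - #
# Hypothetical overrides; adjust to your environment.
prefix   = "mycloud"
location = "North Europe"
workers  = 5
vm_size  = "Standard_A3"
```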

The Juice

Then it's just a matter of defining the Resource Group and the Container Service that we want to create.

 
# - main.tf - #
resource "azurerm_resource_group" "cscloud" {
  name     = "${var.prefix}"
  location = "${var.location}"
}

resource "azurerm_container_service" "swarm" {
  name                   = "${var.prefix}-swarm"
  location               = "${azurerm_resource_group.cscloud.location}"
  resource_group_name    = "${azurerm_resource_group.cscloud.name}"
  orchestration_platform = "Swarm"

  master_profile {
    count      = "${var.masters}"
    dns_prefix = "${var.prefix}-swarm-master"
  }

  linux_profile {
    admin_username = "deploy"

    ssh_key {
      key_data = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC..." # contents of your public SSH key here
    }
  }

  agent_pool_profile {
    name       = "agentpools"
    count      = "${var.workers}"
    dns_prefix = "${var.prefix}-swarm-pool"
    vm_size    = "${var.vm_size}"
  }

  diagnostics_profile {
    enabled = "${var.vm_diagnostics_enabled}"
  }
}

output "swarm-master_url" {
  value = "${azurerm_container_service.swarm.master_profile.fqdn}"
}

output "swarm-pool_url" {
  value = "${azurerm_container_service.swarm.agent_pool_profile.fqdn}"
}

If you already have a resource group in your Azure account, I suggest you define it in Terraform and then import it with terraform import. See the Terraform documentation on importing existing resources for more details.
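As a sketch, importing an existing resource group named cscloud into the state might look like this (the subscription ID is a placeholder you'd replace with your own):

```shell
# Import an existing resource group into the Terraform state.
# <subscription-id> is a placeholder -- substitute your own.
terraform import azurerm_resource_group.cscloud \
  /subscriptions/<subscription-id>/resourceGroups/cscloud
```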

Important:
There is some preparation you must do before you can actually apply the Terraform plan.
See the Creating Credentials section of the Microsoft Azure Terraform provider documentation for up-to-date instructions.
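Once you have created a service principal, one common way to hand the credentials to the azurerm provider is via environment variables. A sketch, with all values as placeholders:

```shell
# Placeholders -- replace with the values from your service principal.
export ARM_SUBSCRIPTION_ID="<subscription-id>"
export ARM_CLIENT_ID="<client-id>"
export ARM_CLIENT_SECRET="<client-secret>"
export ARM_TENANT_ID="<tenant-id>"
```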

Creating The Cluster

After your credentials are all set, it's time to apply the Terraform plan as usual:

 
$ terraform apply

Sit tight, because this process might take a little while (around 15 minutes in our experience).

Accessing The Cluster

When Terraform is done applying the changes to the infrastructure it'll print out:

  1. The Master Node URL: this is the endpoint you can use to talk to your Swarm manager. SSH into it and deploy your services as you normally would.
    docker-compose is also available on this master node.
  2. The Agent Pool URL: this is the DNS name you will use to reach the services you deploy to your Swarm cluster.
    For example, if your Agent Pool URL is cscloud-swarm-pool.westeurope.cloudapp.azure.com and a service in your Swarm cluster is listening on port 8080, you can reach it at http://cscloud-swarm-pool.westeurope.cloudapp.azure.com:8080
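If you'd rather drive the cluster from your own machine instead of working on the master, one approach is to tunnel the manager's Docker endpoint over SSH. This is a sketch: it assumes the default "cscloud" prefix and West Europe location from the variables above, and the port 2200 SSH endpoint that Azure Container Service exposed on Swarm masters at the time of writing:

```shell
# Forward the Swarm manager's Docker endpoint (port 2375) to localhost.
ssh -fNL 2375:localhost:2375 -p 2200 \
  deploy@cscloud-swarm-master.westeurope.cloudapp.azure.com

# Point your local Docker client at the tunnel.
export DOCKER_HOST=tcp://localhost:2375
docker info
```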

Caveats

It's important to mention that even though the machines created by the Azure Container Service run a fairly up-to-date version of Docker (17.04.0-ce), the cluster that runs inside the Container Service does not use the newer Swarm Mode, but the legacy standalone Swarm (swarm/1.1.0).
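In practice this means the Swarm Mode commands (docker service create, docker stack deploy) are not available; you schedule containers with plain docker run or docker-compose against the manager endpoint instead. A sketch, with the image and names purely illustrative:

```shell
# Legacy standalone Swarm: use docker run / docker-compose against the
# manager endpoint; "docker service" commands are Swarm Mode only.
export DOCKER_HOST=tcp://localhost:2375  # e.g. via an SSH tunnel to the master
docker run -d -p 8080:80 --name web nginx
```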

Useful resources