
Docker Swarm with Azure Container Services and Azure Resource Manager

Written by Carlos Leon | Apr 26, 2017 10:30:05 AM

In an earlier post we explained how easy it is to set up a Docker Swarm cluster in Azure using Azure Container Services and Terraform.

We understand that not everybody is ready to adopt Terraform in their company and, even though we can help you get there, we have written this guide to achieve exactly the same results as the previous exercise, this time using the good ol' Azure Resource Manager, driven by the Azure CLI 2.0 tool and Azure Resource Manager Templates. This way you can set up a Swarm cluster in Azure without having to learn, install, or adopt Terraform within your company (although you should).

Background

Azure Container Services makes it easy for DevOps engineers and operators in general to create and manage clusters that run an orchestrator for shipping your containers at scale. The orchestration engines on offer at the time of writing are:

  - Docker Swarm
  - Kubernetes
  - Mesosphere DC/OS

By abstracting the concept of orchestration tool into what is known in Azure as a Container Service, you can forget about the underlying infrastructure and focus on actually trying the tool out.

What this means is that a Kubernetes cluster is, in essence, a single instance of an Azure Container Service. The same goes for a Docker Swarm cluster and a Mesosphere DC/OS cluster. In the end, all of them deploy containers; the only difference is how they work under the hood. Azure has abstracted away the setup and bootstrapping of the cluster so that you don't have to worry about the operational challenges that come with adopting such technologies.
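
Once a Container Service instance exists, it behaves like any other Azure resource, so you can inspect it from the CLI as well as from the Portal. As a quick illustration (the instance name below assumes the cscloud prefix used later in this guide, and YOUR_RESOURCE_GROUP is a placeholder):

az acs list --output table
az acs show --name cscloud-containerservice --resource-group YOUR_RESOURCE_GROUP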

Requirements

To follow along you will need:

  - An Azure account and a subscription you can deploy resources into
  - The Azure CLI 2.0 tool installed
  - An SSH key pair (the public key is passed to the template below)

Special note about the Azure CLI tool

Make sure that you have logged in with the CLI tool before you attempt to create any resource group or deploy the Container Service explained later in this guide.
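
A minimal login sequence looks like this; the subscription ID is a placeholder, and the az account set step is only needed if you have access to more than one subscription:

az login
az account set --subscription "YOUR_SUBSCRIPTION_ID"
az account show --output table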

Getting Started

As with Terraform, setting up an Azure Container Service instance powered by Docker Swarm is very easy, although if you are not familiar with Azure Resource Manager Templates (Microsoft's alternative to AWS CloudFormation) it may take you a while to get used to the format; prior CloudFormation experience certainly helps.

The Template

Below you will find the Azure Resource Manager Template with the definition of your Azure Container Service instance. Name the file acs.json.

 
{
 "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
 "contentVersion": "0.1.0",
 "parameters": {
   "prefix": {
     "type": "string",
     "metadata": {
       "description": "prefix to be used across the different resources to be created"
     }
   },
   "masters": {
     "type": "int",
     "allowedValues": [1, 3, 5],
     "metadata": {
       "description": "The number of Swarm managers for the cluster."
     }
   },
   "workers": {
     "type": "int",
     "metadata": {
       "description": "amount of worker nodes in the cluster"
     },
     "minValue":1,
     "maxValue":100
   },
   "vmSize": {
     "type": "string",
     "allowedValues": [
       "Basic_A0", "Basic_A1", "Basic_A2", "Basic_A3",
       "Basic_A4", "Standard_A0", "Standard_A1", "Standard_A10",
       "Standard_A11", "Standard_A1_v2", "Standard_A2", "Standard_A2_v2",
       "Standard_A2m_v2", "Standard_A3", "Standard_A4", "Standard_A4_v2",
       "Standard_A4m_v2", "Standard_A5", "Standard_A6", "Standard_A7",
       "Standard_A8", "Standard_A8_v2", "Standard_A8m_v2", "Standard_A9",
       "Standard_D1", "Standard_D11", "Standard_D11_v2", "Standard_D11_v2_Promo",
       "Standard_D12", "Standard_D12_v2", "Standard_D12_v2_Promo", "Standard_D13",
       "Standard_D13_v2", "Standard_D13_v2_Promo", "Standard_D14", "Standard_D14_v2",
       "Standard_D14_v2_Promo", "Standard_D15_v2", "Standard_D1_v2", "Standard_D2",
       "Standard_D2_v2", "Standard_D2_v2_Promo", "Standard_D3", "Standard_D3_v2",
       "Standard_D3_v2_Promo", "Standard_D4", "Standard_D4_v2", "Standard_D4_v2_Promo",
       "Standard_D5_v2", "Standard_D5_v2_Promo", "Standard_DS1", "Standard_DS11",
       "Standard_DS11_v2", "Standard_DS11_v2_Promo", "Standard_DS12", "Standard_DS12_v2",
       "Standard_DS12_v2_Promo", "Standard_DS13", "Standard_DS13_v2", "Standard_DS13_v2_Promo",
       "Standard_DS14", "Standard_DS14_v2", "Standard_DS14_v2_Promo", "Standard_DS15_v2",
       "Standard_DS1_v2", "Standard_DS2", "Standard_DS2_v2", "Standard_DS2_v2_Promo",
       "Standard_DS3", "Standard_DS3_v2", "Standard_DS3_v2_Promo", "Standard_DS4",
       "Standard_DS4_v2", "Standard_DS4_v2_Promo", "Standard_DS5_v2", "Standard_DS5_v2_Promo",
       "Standard_F1", "Standard_F16", "Standard_F16s", "Standard_F1s",
       "Standard_F2", "Standard_F2s", "Standard_F4", "Standard_F4s",
       "Standard_F8", "Standard_F8s", "Standard_G1", "Standard_G2",
       "Standard_G3", "Standard_G4", "Standard_G5", "Standard_GS1",
       "Standard_GS2", "Standard_GS3", "Standard_GS4", "Standard_GS5",
       "Standard_H16", "Standard_H16m", "Standard_H16mr", "Standard_H16r",
       "Standard_H8", "Standard_H8m", "Standard_NV12", "Standard_NV24", "Standard_NV6"
     ],
     "metadata": {
       "description": "the VM size of the worker nodes"
     }
   },
   "sshKey": {
     "type": "string",
     "metadata": {
       "description": "the contents of your public SSH key"
     }
   },
   "vmDiagnosticsEnabled": {
     "type": "bool",
     "defaultValue": false,
     "metadata": {
       "description": "enable diagnostics for the VMs"
     }
   }
 },
 "variables": {
   "adminUsername":"deploy",
   "workers":"[parameters('workers')]",
   "agentsEndpointDNSNamePrefix":"[concat(parameters('prefix'), '-',  'swarm-pool')]",
   "vmSize":"[parameters('vmSize')]",
   "masters":"[parameters('masters')]",
   "mastersEndpointDNSNamePrefix":"[concat(parameters('prefix'), '-', 'swarm-master')]",
   "orchestratorType":"Swarm",
   "sshKey":"[parameters('sshKey')]",
   "vmDiagnosticsEnabled":"[parameters('vmDiagnosticsEnabled')]",
   "containerServiceName": "[concat(parameters('prefix'), '-', 'containerservice')]"
 },
 "resources": [
   {
     "type": "Microsoft.ContainerService/containerServices",
     "name":"[variables('containerServiceName')]",
     "apiVersion": "2016-09-30",
     "location": "[resourceGroup().location]",
     "properties": {
       "orchestratorProfile": {
         "orchestratorType": "[variables('orchestratorType')]"
       },
       "masterProfile": {
         "count": "[variables('masters')]",
         "dnsPrefix": "[variables('mastersEndpointDNSNamePrefix')]"
       },
       "agentPoolProfiles": [
         {
           "name": "agentpools",
           "count": "[variables('workers')]",
           "vmSize": "[variables('vmSize')]",
           "dnsPrefix": "[variables('agentsEndpointDNSNamePrefix')]"
         }
       ],
       "diagnosticsProfile": {
         "vmDiagnostics" : {
           "enabled": "[variables('vmDiagnosticsEnabled')]"
         }
       },
       "linuxProfile": {
         "adminUsername": "[variables('adminUsername')]",
         "ssh": {
           "publicKeys": [
             {
               "keyData": "[variables('sshKey')]"
             }
           ]
         }
       }
     }
   }
 ],
 "outputs": {
   "masterFQDN": {
     "type": "string",
     "value": "[reference(concat('Microsoft.ContainerService/containerServices/', variables('containerServiceName'))).masterProfile.fqdn]"
   },
   "sshMaster0": {
     "type": "string",
     "value": "[concat('ssh ', variables('adminUsername'), '@', reference(concat('Microsoft.ContainerService/containerServices/', variables('containerServiceName'))).masterProfile.fqdn, ' -A -p 2200')]"
   },
   "agentFQDN": {
     "type": "string",
     "value": "[reference(concat('Microsoft.ContainerService/containerServices/', variables('containerServiceName'))).agentPoolProfiles[0].fqdn]"
   }
 }
}
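
The long allowedValues list for vmSize simply mirrors the VM sizes that were on offer at the time of writing. If you want to double-check which sizes are actually available to you before picking one, the CLI can list them per region (westeurope is just an example location):

az vm list-sizes --location westeurope --output table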

The Parameters File

There are some parameters that you can (and should) adjust before you deploy the cluster to your Azure account; note that the parameter names must match the ones declared in the template above. Name the file acs.params.json.

 
{
 "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
 "contentVersion": "0.1.0",
 "parameters": {
   "prefix": {
     "value": "cscloud"
   },
   "masters": {
     "value": 1
   },
   "workers": {
     "value": 3
   },
   "agentVMSize" : {
     "value": "Standard_A2"
   },
   "sshRSAPublicKey": {
     "value": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC..."
   }
 }
}

(Optional) Creating The Resource Group

If you do not yet have a resource group, or if you wish to create one just for the sake of this exercise, you can do so with the following command:

  
az group create --name FOO --location "The Location"

Where FOO is the name of the resource group you want to create (keep it simple: lowercase and no special characters) and "The Location" is the Azure region where you want to place this resource group. You will find a link to the official list of Azure regions in the useful resources at the end of this post.
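
If you are not sure which location names are valid, you can also list them straight from the CLI:

az account list-locations --output table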

Creating The Cluster

After you have adjusted the parameters file according to your needs, create the cluster with the Azure CLI tool like this:

  
az group deployment create --name acs-swarm --resource-group YOUR_RESOURCE_GROUP --template-file acs.json --parameters @acs.params.json

Make sure that you replace YOUR_RESOURCE_GROUP with the actual name of your resource group.
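
If you would rather catch template or parameter mistakes before anything gets provisioned, you can validate the deployment first using the same files (again, YOUR_RESOURCE_GROUP is a placeholder):

az group deployment validate --resource-group YOUR_RESOURCE_GROUP --template-file acs.json --parameters @acs.params.json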

Accessing The Cluster

When the Azure CLI tool is done applying the changes to your infrastructure, you can go to the Azure Portal, open the Container Service blade and find the instance you just created. In its Overview blade you will see the following information (the same values can also be retrieved from the CLI, as sketched after the list):

  1. Master FQDN: this is the endpoint you can use to talk to your Swarm manager. SSH into it and deploy your services as you would regularly do; docker-compose is also available on the master node.
  2. Agent Pool FQDN: this is the DNS name you will want to use to reach the services that you deploy to your Swarm cluster. For example, if your Agent Pool URL is cscloud-swarm-pool.westeurope.cloudapp.azure.com and you are running a service on port 8080, you can reach it at http://cscloud-swarm-pool.westeurope.cloudapp.azure.com:8080.
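
If you prefer the command line over the Portal, the same information is exposed as deployment outputs; a quick sketch, reusing the deployment name from the previous step:

az group deployment show --name acs-swarm --resource-group YOUR_RESOURCE_GROUP --query properties.outputs

The sshMaster0 output already contains a ready-made SSH command of the form ssh deploy@<master FQDN> -A -p 2200, matching the adminUsername defined in the template.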

Caveats

It's important to mention that even though the machines created by the Azure Container Service run a fairly up-to-date version of Docker (17.04.0-ce), the Swarm cluster inside the Container Service does not use the newer Swarm Mode; it runs the legacy standalone Swarm, version swarm/1.1.0.
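
You can verify this yourself after SSHing into the master. A rough sketch, assuming the usual ACS Swarm setup where the legacy Swarm manager endpoint listens on port 2375 on the master (treat that port as an assumption and adjust if your cluster differs):

# The local engine reports the Docker release, e.g. 17.04.0-ce
docker version

# Pointing the client at the Swarm manager endpoint reports swarm/1.1.0 as the server version
docker -H tcp://localhost:2375 version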

Useful resources