In this post I'll describe the steps to get a simple Kubernetes setup running. A single-node Kubernetes setup is convenient for kicking the tires, testing and local development.
Prerequisites
You will need some background knowledge of the concepts behind Kubernetes, so you'll know what I'm referring to when I write about nodes, pods, replication controllers and services.
I am running OS X, so these instructions are geared towards that operating system. With some minor adjustments they'll work on Linux and Windows, too.
You will need Docker, Docker Compose, Docker Machine and kubectl installed. These packages are all available in Homebrew; the "kubernetes-cli" package contains kubectl.
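If you still need to install them, something along these lines should work; the formula names below are the ones Homebrew used on my machine, so adjust them if yours differ:
brew install docker docker-machine docker-compose   # client tools for building and running the containers
brew install kubernetes-cli                         # contains the kubectl binary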
Setting up Kubernetes
A single node Kubernetes "cluster" is remarkably easy to set up using hyperkube, which is a Docker image with the Kubernetes binary inside.
See this repository for the Kubernetes demo code that I'm describing here.
The Docker Compose file holds all required containers and stitches them together. Issue the
docker-compose up -d
command and presto, your very own Kubernetes node!
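To give you an idea of what the Compose file stitches together, here is a rough sketch of its structure. The service names match the containers you'll see in a moment, but the images, flags and volumes are trimmed, so look at the repository for the real file:
# Rough shape of the docker-compose.yml; images, flags and volumes are omitted here
etcd:
  command: /usr/local/bin/etcd ...            # key-value store holding the cluster state
apiserver:
  command: /hyperkube apiserver ...           # the Kubernetes API
controller:
  command: /hyperkube controller-manager ...  # keeps replication controllers at the desired size
scheduler:
  command: /hyperkube scheduler ...           # assigns pods to nodes
kubelet:
  command: /hyperkube kubelet ...             # runs the pods on this node
proxy:
  command: /hyperkube proxy ...               # routes service traffic to pods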
Check that everything is running using the following docker-compose commands.
docker-compose ps
docker-compose logs
The ps command should show a number of running containers.
docker-compose ps
(out) Name Command State Ports
(out) -------------------------------------------------------------------------
(out) kubernetesdemo_apiserver_1 /hyperkube apiserver --ser ... Up
(out) kubernetesdemo_controller_1 /hyperkube controller-mana ... Up
(out) kubernetesdemo_etcd_1 /usr/local/bin/etcd --bind ... Up
(out) kubernetesdemo_kubelet_1 /hyperkube kubelet --api_s ... Up
(out) kubernetesdemo_proxy_1 /hyperkube proxy --master= ... Up
(out) kubernetesdemo_scheduler_1 /hyperkube scheduler --mas ... Up
Now we're ready to use the kubectl command. Since the API server runs inside the Docker machine, we can either point kubectl at it with the --server flag, or forward port 8080 to localhost by creating a tunnel to the Docker machine using
machine=YOUR_DOCKER_MACHINE_NAME; ssh -i ~/.docker/machine/machines/$machine/id_rsa docker@$(docker-machine ip $machine) -NL 8080:localhost:8080
After that, kubectl can connect to localhost:8080, which is the default for earlier versions of Kubernetes. For current versions you need to provide the --server flag explicitly, which is most easily done by defining a context:
kubectl config set-context demo --server=http://localhost:8080
kubectl config use-context demo
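To make sure the tunnel and context are set up correctly, you can ask the API server a couple of simple questions; on a single-node setup the output will be short, but any connection error shows up immediately:
kubectl cluster-info   # prints the address of the API server kubectl is talking to
kubectl get nodes      # should list the one node of our "cluster"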
Now let's run a demo application on it. Create a replication controller and start the first pod:
kubectl run kube-demo --image=containersol/kubernetes-demo --port=8080
Verify the replication controller was created:
kubectl get replicationcontrollers
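By the way, kubectl accepts rc as a shorthand for replicationcontrollers, which saves some typing:
kubectl get rc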
Check the pod was started:
kubectl get pods
Create the service; using type NodePort makes the application reachable on a port of the node:
kubectl expose rc kube-demo --target-port=8080 --type=NodePort
The services can be displayed by running:
kubectl get services
Find the port Kubernetes assigned to the service:
kubectl get svc kube-demo -o yaml | grep nodePort
Now you can use curl to make requests to the pod:
curl -s http://$(docker-machine ip YOUR_DOCKER_MACHINE_NAME):NODE_PORT
The returned value will be the hostname of the container as assigned by Kubernetes.
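To save yourself from copy-pasting the IP address and port every time, you can put them in shell variables first; YOUR_DOCKER_MACHINE_NAME is again a placeholder for your own machine name:
MACHINE=YOUR_DOCKER_MACHINE_NAME
NODE_IP=$(docker-machine ip $MACHINE)                                                # IP of the Docker machine
NODE_PORT=$(kubectl get svc kube-demo -o yaml | grep nodePort | awk '{print $2}')    # port assigned by Kubernetes
curl -s http://$NODE_IP:$NODE_PORT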
Now let's make things a little more exciting by scaling the service up. We'll scale up to three pods:
kubectl scale rc kube-demo --replicas=3
After a while (check with kubectl get pods) the curl command will start returning the other pods' hostnames as well.
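With the NODE_IP and NODE_PORT variables from before, a small loop makes it easy to see the requests being spread over the pods:
for i in $(seq 1 10); do curl -s http://$NODE_IP:$NODE_PORT; echo; done   # the hostnames in the responses should start to vary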
The replication controller will keep the correct number of replicas running. See what happens when you run this command:
kubectl delete pod $(kubectl get -o yaml po -l run=kube-demo | egrep -o "kube-demo-[a-z0-9]{5}" | head -1)
Check the list of running pods (kubectl get pods) and see that there are still three available. One of them is probably still starting up, because the replication controller has just created it to replace the one you deleted.
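If you want to see the replacement happen as it occurs, kubectl get has a --watch flag that keeps streaming changes until you interrupt it:
kubectl get pods --watch   # shows the deleted pod disappearing and its replacement starting (Ctrl-C to stop)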
As you can see, it's easy to start playing around with Kubernetes on your local machine.