Docker is everywhere

I noticed the other day that Docker is now a first-class citizen among the big three cloud providers.

Windows Server 2016 and Windows 10 both support Windows Nano Server containers too!

We use Vagrant a lot at work for our development virtual machines, and I love its tool set. We have also started using Docker for more complex development environments, and I’ve been using Docker for personal projects for a good year now.

Docker has some great features that set it apart from other virtualization technologies, which I’ll try to explain.

The following is a brain dump of learnings I’ve picked up over the last 12 months of casual use.

Hopefully this article will help someone else navigate this area of knowledge. If you spot something that I have wrong, or that is out of date, please let me know!


New Terms

There are some terms that didn’t immediately make sense to me when I got started, and I’ve found that getting others over the terminology hump is required when introducing people to the tech.

Docker has a great glossary available if you are interested in the formal definitions.

Not everyone likes reading glossaries… so here is my unofficial guide to Docker ‘things’…

Docker image registries

Essentially, a registry holds images from many different publishers, and each image can come in many different ‘flavours’, known as tags.

A tag is a version of an image that you’ll use to create a running container.

Form: [publisher]/[image-name]:[tag]

E.g. microsoft/dotnet:1.0.1-runtime

Note: if you omit the publisher, Docker will assume you mean its official set of images. If you omit the tag, it will assume you mean the latest. E.g. the image name ‘rabbitmq’ refers to the official, latest rabbitmq image.
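To make that concrete: official images live under an implicit ‘library’ publisher, and ‘latest’ is the default tag, so the following commands all fetch the same image.

# these all refer to the same official rabbitmq image
docker pull rabbitmq
docker pull rabbitmq:latest
docker pull library/rabbitmq:latest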

Back to Contents

Containers are not virtual machines

Containers are the running instances of an image/tag. They are different to virtual machines in some ways, most importantly in terms of security.

Unlike a virtual machine, an elevated (root) application running in a Docker container can potentially access the underlying host.

So treat your containers as you would a server application: use least privilege as much as possible, and drop privileges when they’re no longer required.
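One way to apply that - a minimal Dockerfile sketch, assuming a Debian-based image and a made-up ‘appuser’ account - is to switch away from root with the USER instruction:

# create an unprivileged user, then run everything else (including
# the container's main process) as that user instead of root
FROM debian:jessie
RUN useradd --create-home appuser
USER appuser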

Back to Contents

Containers are onions

[Image: proof of edible containers - probably not as edible as this container]

Something that I think makes Docker easy to learn is that it treats the problem of machine configuration as a series of layers, each adding a small amount of functionality required by its consumers and no more. Like an onion. Or a cake.
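You can see those layers for yourself with the docker history command, which lists each layer of an image alongside the instruction that created it:

# list the layers that make up an image, newest first
docker history rabbitmq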

Docker Hub is effectively a meeting place where people all over the world are collaboratively building ‘fit for purpose’ environments, layer by layer, rating their quality and following their progress. The built-in integration with GitHub gives consumers an avenue for contributing and lodging issues.

[Image: Microsoft’s Docker Hub account]

This means:

  • We don’t need to store a whole repository full of bash or PowerShell script files to reproduce an environment
  • Resultant images only have what is required for your software to run
  • Images are typically small enough to proliferate over the web. E.g. Microsoft’s dotnet image is ~250MB compressed. The official Debian base images start at around ~50MB. No, that is not a typo. Megabytes.
  • We can build from commonly shared, ‘known good’ images, making the ecosystem accessible, social and self-correcting

Back to Contents

Talk is cheap, show me the code

For example, if I want to build a .NET application and package it into a useful image, I can use Microsoft’s latest ‘dotnet’ image as a base to build my own:

First, I publish my app

cd /path/to/project   # the directory containing project.json (soon to be csproj)
dotnet publish -c Release

Then I’ll create an image definition file - or ‘Dockerfile’

# Use Microsoft's latest dotnet image as a base
FROM microsoft/dotnet:latest

# bring my published binaries into this image
# (the source path must be relative to the build context)
COPY path/to/published/binaries .

# optional - expose port 5000 to linked containers
# (publish it to the host with -p or a compose ports mapping)
EXPOSE 5000

# Tell this container to run my dotnet app on boot as PID 1
# ('myapp.dll' is a placeholder - use your app's published entry point)
CMD ["dotnet", "myapp.dll"]

That’s it.

Microsoft’s Dockerfile has already done the dotnet installation for us, and is in turn based on another image for a popular Linux distribution - Debian version 8.x, aka ‘Jessie’.


Common commands

We can now perform a bunch of operations on our new image; here are some of the more common ones.

To build this image, so that I can use it later, I invoke:

docker build /path/to/app --tag myimage:latest

If I have access to a Docker repository on a registry somewhere I can share it with:

# a registry wants a repository-qualified name, so tag the image first
# ('mypublisher' stands in for your Docker Hub username or registry namespace)
docker tag myimage:latest mypublisher/myimage:latest
docker push mypublisher/myimage:latest

If I want to get an image from a registry, I can pull it down:

docker pull microsoft/dotnet:latest

Of course, I can stamp out as many containers from this image as I like, and because a container doesn’t need to boot a whole operating system, the startup cost can often be in the milliseconds.

docker run myimage:latest
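A few other day-to-day commands are worth knowing (substitute a real ID or name from the docker ps output):

# list running containers
docker ps

# stop, then remove, a container
docker stop <container-id>
docker rm <container-id>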


Networking is configuration too

Networking in Docker is equally declarative: its suite of CLI tools lets us define things like port numbers and volume mappings, and declare dependencies between the containers themselves.

For a [really oversimplified] example, consider a website where notifications to users have been decoupled from the web application, and the two components need to communicate with each other via a queue.

[Diagram: system design - the web application and notification service communicating via a queue]

While we can expose ports in Dockerfiles, we could also use docker-compose (part of the docker toolbox) and feed it a configuration template in YAML:

a docker-compose.yml configuration

# use the version 2 compose file format
version: '2'

# define some services
services:
  # our messaging backbone - called 'backbone'
  backbone:
    # use the standard rabbitmq image from docker hub
    image: rabbitmq
    hostname: localhost
    # set some resilience rules
    restart: always
    # and some environment variables from a file
    env_file:
      - backbone.env
  # an aspnet core website
  web:
    # ensure port connectivity to our backbone
    links:
      - backbone
    # my own custom image, defined in that Dockerfile we made earlier
    image: myimage
    # maps port 5000 inside the container to port 5000 on the host
    ports:
      - "5000:5000"
    env_file:
      - web.env
    # bring this up after the backbone is started
    depends_on:
      - backbone
  # a notification service
  notifications:
    links:
      - backbone
    image: partytime-notification
    env_file:
      - notifications.env
    depends_on:
      - backbone

This whole environment is brought up with a single call to compose like this:

cd /path/to/project   # the directory containing docker-compose.yml
docker-compose up -d
# -d starts these containers as background services; omit it if you want console log output
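Once it’s up, a couple of companion commands are handy for checking on things (‘backbone’ being a service name from the compose file above):

# check the state of the composed services
docker-compose ps

# show the log output of a single service
docker-compose logs backbone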

A few things to note:

  • If our backbone dies, it is configured to automatically restart
  • I named my services (‘backbone’, ‘web’, ‘notifications’); I’ll use those names as references in scaling commands later
  • Dependencies determine startup order, but services won’t wait for a dependency to finish loading.

A word on the subject of controlling startup order

TFM states…

‘The problem of waiting for a database (for example) to be ready is really just a subset of a larger problem of distributed systems. In production, your database could become unavailable or move hosts at any time. Your application needs to be resilient to these types of failures.’

^^ My tip: wherever it says ‘database’, substitute anything you consider to be ‘central’
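If you need a stopgap while you build in that resilience, a common pattern is a tiny entrypoint script that polls the dependency before starting the app. A minimal sketch, assuming netcat is available in the image and that the backbone’s AMQP port (5672) is what we’re waiting on:

#!/bin/sh
# wait-for-backbone.sh - poll the backbone until its port answers,
# then hand over to the real application process
until nc -z backbone 5672; do
  echo "waiting for backbone..."
  sleep 1
done
exec dotnet myapp.dll   # 'myapp.dll' is a placeholder entry point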


Scaling services sideways

Among other features, docker-compose allows you to create more instances of your named services:

Let’s imagine that marketing have just run a successful campaign generating loads of interest, and you are expecting a rush of registration notifications to be sent.

# scale out to 100 instances of our notifications service in total
docker-compose scale notifications=100

[Image: too much scale]

TIP: Depending on memory footprint, 100 might be too many containers for a single host. In that case, you might want to consider using swarm mode!
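Swarm mode is a topic of its own, but (as of Docker 1.12) getting a single-node swarm started is a one-liner:

# turn this Docker engine into a single-node swarm manager
docker swarm init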

Trap for new players: scaling a service that uses a fixed host port mapping (e.g. 5000 -> 5000) will fail, because each new copy of the service will try to bind a host port that is already in use!

Unfortunately, docker-compose is not quite smart enough to handle this (yet).
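One workaround is to publish only the container port and let Docker assign a free host port to each instance - a sketch of the relevant compose snippet:

# no fixed host port, so every scaled copy
# gets its own ephemeral port on the host
ports:
  - "5000"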


Scaling teamwork and delivery

As a parting thought, there is scope not only for scaling services out and up, but for changing the structure of your team to enable gains in delivery.

So much well-researched content exists on the subject of team structure and its impact on your architecture.

Docker could easily be used as part of a distributed architecture to achieve some of these goals. I’ve enjoyed reading these articles and have had similar experiences; I hope they inspire you to enact change in your team!

In Summary

There is a huge body of knowledge forming around containerization and its applications. I’m very new to the ecosystem, even after a year of trying things out and making use of the more obvious parts. I’m keen to hear from others using Docker or other container technologies, so please share your experiences.