What is Docker?

To understand what Docker is, first we must understand the computer.

The computer has three main resources: a Central Processing Unit (more commonly known as the CPU), Random Access Memory (more commonly known as RAM) and Storage (known by a few names now: CD-ROM, Hard Disk Drive (HDD), Solid State Drive (SSD) or NVMe (Non-Volatile Memory Express)).

We use some software to guide the usage of these resources: the Operating System. It comes in many, many flavours, but boils down to three main ones: Windows, Linux and macOS. There are plenty of others, including countless distributions, but we won't get bogged down in those.

The Operating System's (OS's) job is quite simple to state, but in both theory and practice it is incredibly hard: it has to manage these resources and run programs.

It does this by reading configuration, loading data, optimising processing and so on. Those are the three main parts we want to take note of.

Here comes the problem: changing the OS's configuration can have a profound effect on all running applications. For developers this is bad, very very bad, and it is where the phrase "It works on my machine!" comes from - by some fluke the configuration was correct enough on the developer's machine, but it is lacking on the production server...

So how can we isolate these applications so that they're unaffected by changes to the OS's configuration, but still have access to the OS's resources?

We could virtualise. What does that mean? Simply, we install a layer on top of the OS which gives us a bedrock for spinning up new Virtual Machines (real computers, running on top of yours).

There is a problem with virtualisation: it comes at great cost. It requires you to install a whole new operating system (something which is hard to automate), these operating systems are large (sometimes up to 15GB), and the layer that makes it all possible is inefficient and resource hungry itself.

This all sounds very complicated and quite intensive - and you would be right to think that. So, as you may have guessed, Docker comes to the rescue.

We can now Containerise. What does that mean?

Earlier we talked about how heavy the virtualisation layer is; that's largely because it isn't very good at sharing resources.

Containers are just slices of the current operating system (still giving you isolation) that allow you to install very very very very (15x more verys) slimmed-down OSes, with Linux and Windows both offering sub-100MB container OSes. Why can they be so small compared to the whopping 15GB mentioned earlier (150x larger)? Because the container interface is far, far better at sharing resources with the host machine and can do it at a much lower level (closer to the core kernel functions and services).

By installing Docker Desktop, developers can now deploy isolated apps to consistent OSes running in containers on the host machine. As you can imagine, this quickly gets rid of the "works on my machine" mantra, because it is now far easier to guarantee the same execution on any compatible machine.

How do we Docker?

To start creating Docker containers, we must first create a Docker image, which is described by a Dockerfile; a typical one is shown below.
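
Something along these lines - here assuming a hypothetical Node.js service, so the base image, port and filenames are illustrative rather than prescriptive:

```dockerfile
# Hypothetical Node.js service - base image, port and entry point are illustrative
FROM node:18-alpine

# Work inside /app in the image
WORKDIR /app

# Copy dependency manifests first so the install layer can be cached
COPY package*.json ./
RUN npm install --production

# Copy the rest of the application source
COPY . .

# Document the port the service listens on
EXPOSE 3000

# The command the container runs when it starts
CMD ["node", "server.js"]
```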

This Dockerfile is turned into what we call a Docker image by running a `docker build` command.
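
Assuming that Dockerfile sits in the current directory, the build step is a single command (the `my-app:1.0` tag is made up):

```bash
# Build an image from the Dockerfile in the current directory
# and tag it so we can refer to it later (the tag is made up)
docker build -t my-app:1.0 .

# List the image we just built
docker images my-app
```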

Once we have built an image, we can then run it - and running an image is what creates the container! I won't get into the nitty gritty here, but there are plenty of resources describing in detail how this all works.
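
Sticking with the made-up image from above, that looks like:

```bash
# Create and start a container from the image we just built;
# --rm cleans the container up again when it exits
docker run --rm -p 3000:3000 my-app:1.0
```

The image is the template; the container is the running instance created from it.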

Let's have a look at Docker holistically

Milestones

  • Mar 2013 - Great Reception at PyCon, with Docker image format and container runtime quickly becoming the de facto standard
  • Jun 25 2013 - Docker joins Linux Foundation
  • Mar 2016 - Private betas for Mac and Windows

Since these landmarks, they've shipped a single-node Kubernetes cluster along with Docker Desktop. Docker was really a catalyst for the creation of Kubernetes and container orchestration in general.

They've held DockerCon, a massive convention for Docker enthusiasts and everyone else alike.

The Big Figures

  • 130B Total Pulls on Hub
  • 6M Total Repositories on Hub
  • 5M Hub Users
  • 2M Desktop Installations

These are some impressive stats, showing a huge usage of the Docker platform.

So how has this changed Development?

There is this buzzword floating around - you may have heard of it - "DevOps"?

This term recognises the huge effort it takes to get source code to production and maintained for the masses to use.

Now, applications aren't just single binaries (exes, dlls or anything else); they are several services talking to several different databases, serving up to several different user interfaces (be it on the web, the desktop or mobile).

All of these components play a big role in serving up a fully functional application, and thus introduce many different moving parts. These parts need to be as consistent as possible to enable fast, effective delivery of applications.

The first development change that containerisation has improved is how we test changes.

Testing Changes

Because we can now express all the different components as containers, it's far easier to deploy those containers on a local machine and give developers a like-for-like production environment (minus all the horsepower of the Intel Xeon servers in the cloud).

There are several deployment mechanisms they can choose from: a simple `docker run`, Docker Compose or a Kubernetes deployment - but we'll talk more on this later.

Now that the different component parts are deployed on the developers local machine, they can start to get a **quicker feedback loop** - those that know me will hear me say this a lot, it is all about the quicker feedback loop.

The quicker a developer can understand, fix and deploy bug fixes, the quicker the software can get to production - and Docker has improved this workflow no end.

Deploying Changes

Earlier I spoke about three deployment mechanisms for a Docker Desktop instance: a simple `docker run`, Docker Compose and a Kubernetes deployment. That is also a progressive order; as the application matures, there is no doubt you will start going up the ranks of deployment.

`docker run`: this command quite simply takes a Docker image and turns it into a running container.
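
As a sketch, using the illustrative image from earlier:

```bash
# Start a container from the illustrative image in the background,
# give it a name and publish its port on the host
docker run -d --name my-app -p 3000:3000 my-app:1.0

# Confirm it is running
docker ps
```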

Docker Compose: as we start to create more containers that all need to communicate with each other in simple ways, we need a better mechanism than running tonnes of `docker run` commands - so enter Docker Compose. This is a really nice way of specifying all the services that make up an application and deploying them with a single `docker compose up` command.
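
A minimal sketch of a Compose file, assuming a hypothetical web service plus a Postgres database (the service names, image tags and credentials are all illustrative):

```yaml
# docker-compose.yml - the services that make up one (hypothetical) application
services:
  web:
    build: .              # build from the Dockerfile in this directory
    ports:
      - "3000:3000"       # publish the web service on the host
    depends_on:
      - db
  db:
    image: postgres:15    # off-the-shelf database container
    environment:
      POSTGRES_PASSWORD: example   # illustrative only - use secrets for real projects
```

Running `docker compose up` then builds and starts the lot in one go.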

Kubernetes: this is one of the most meteoric rises of cluster technology that I have ever seen. Kubernetes is an orchestrator of containers; more specifically, it handles the creation, deployment, wiring up and resilience of running containers. Developers will approach it with caution, but will ultimately choose it when Docker Compose doesn't provide everything necessary for running an application.
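
As a rough sketch, the same hypothetical web service expressed as a Kubernetes Deployment might look like this (the names, labels and replica count are illustrative):

```yaml
# deployment.yaml - asks Kubernetes to keep three copies of the container running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0        # illustrative image tag
          ports:
            - containerPort: 3000
```

Applying it with `kubectl apply -f deployment.yaml` hands responsibility for keeping those containers alive over to Kubernetes.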

All of these can be driven with simple commands. The configuration needed does increase from stage to stage, but ultimately once it is written, it is done - it rarely changes.

This is what has really affected developers the most: being able to reproducibly deploy, time and time again, a full production application containing many different services and databases - without each developer needing to know the details of every part.

DevOps

Thanks to containers, DevOps has been greatly simplified. It still comes with its complexities, mainly due to custom project requirements, but as our clients have recognised, it is a vastly more simplified process.

Builds on build servers are now more consistent, with higher pass rates; tests are more consistent because they interface with more consistently running code; and deployments can now be done with a single command. On top of that there are tonnes of benefits from Kubernetes - rolling updates, blue-green deployments, configuration updates, Role Based Access Control, load balancing, and the list goes on.
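
To give a flavour of one of those benefits, a rolling update of the illustrative Deployment from earlier is just a couple of commands (the image tag is, again, made up):

```bash
# Point the Deployment at a new image version; Kubernetes swaps pods out gradually
kubectl set image deployment/my-app my-app=my-app:1.1

# Watch the rollout complete (kubectl rollout undo reverses it if things go wrong)
kubectl rollout status deployment/my-app
```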

This really has taken the weight off DevOps teams' shoulders, leaving more time for monitoring and security - which is what really matters about production deployments!

So what's Next?

In conclusion, Docker has helped developers produce more consistent code easing up the "build > continuous integration > deploy > operate" parts of the DevOps process. Developers are now more engaged in the running of their services and other services that they may not have written.

In my mind Docker has enabled a more collaborative and a more reassuring environment for innovating new ideas.

The sky is the limit with this technology, and the uptake is massive. It has brought back a bunch of old "what if" questions - but now we can answer them, because we are far less limited by the technology.

One such new technology is the service mesh: the idea of being able to join up services from any type of cluster, anywhere, as if the two services were right next to each other.

There are also changing views on how we develop: monorepo, or one repository per service?

Do we Build -> Deploy -> Test -> Release, or do we stick with Build -> Test -> Deploy -> Release, or should we augment slightly to Build -> Test -> Deploy -> Test -> Release?

Can we have one environment per change, so that we can test a single change against the entire production application to ensure success?

The answer is yes - to everything. Docker solves so many problems and helps solve so many others that the sky is really the limit.