I have only minimal sys-admin experience, but was able to get a cluster of Docker containers running without too much trouble.
The Dockerfile is an awesome, unified abstraction that exposes the definition of a container.
What the derp is Docker?
Docker is a platform for running apps on any computer. It’s a binary that you install wherever you want to run your apps, and once it’s installed, you can run “containers” built from Dockerfiles.
These containers can interact in a number of ways, depending on what you want to do.
You can do all these things without Docker, right? Sure, you can lean on Heroku and plugins, or you could script out and install all these things yourself on your own machine somewhere. And it’d probably work great.
The advantage of Docker is the Dockerfile – in one place, you define the exact spec for one of your containers.
That file can be used to build the container from scratch, and it’ll run the same everywhere, whether your app is in Dev, Staging, or Production.
This is great for aligning the environment your app runs in, but it also implies something powerful: container throw-away and rebuild is super cheap and super clean. Should something go awry on your system, you can easily remove and rebuild any and all of it.
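To make that concrete, here’s a sketch of a minimal Dockerfile for a Node app — not the exact file from my system; the base image tag, port, and entry point are assumptions:

```dockerfile
# Start from an official Node base image (tag is an assumption)
FROM node:0.12

# Copy the app in and install its dependencies
COPY . /app
WORKDIR /app
RUN npm install

# The port the app listens on (assumed here)
EXPOSE 3000
CMD ["node", "server.js"]
```

Because everything the container needs is declared here, `docker build` from this file produces the same environment every time — which is exactly why throwing containers away and rebuilding them is so cheap.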
This is pretty useful if you’re like me, trying to figure out how to get this black-box cloud computer to listen to my commands.
Also, coming from a sheltered Heroku upbringing, being able to read Dockerfiles is an awesome way to learn how these systems are composed.
So, Dockerfiles for my system:
Much thanks to the Dockerfile Project for pulling together an awesome set of resources.
Some tools of the Docker trade
If you’d like to dive into Docker, I highly recommend going through the entire User Guide.
Docker provides a CLI, and recently launched a few tools that make it even easier to work with.
Docker Machine is a CLI for interacting with machines you are running Docker on, be it virtual machines on your own computer or machines off in the clouds.
Creating a new, Docker-running machine is as easy as:
```shell
docker-machine create --driver virtualbox dev
```
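Once the machine exists, you point your local Docker client at it — a sketch, assuming the machine name `dev` from the command above:

```shell
# Load the environment variables that point the docker CLI at the new machine
eval "$(docker-machine env dev)"

# Verify the client can see and talk to it
docker-machine ls
docker info
```

From there, every `docker` command you run targets that machine, whether it’s a local VirtualBox VM or a box in the cloud.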
Docker Compose is a nice, version-controlled way to handle your `docker run` commands and improve your logging/debugging workflow. A single `docker-compose.yml` file lets you control and log a slew of containers at once with commands like `docker-compose up`, which is much more convenient than the drawn-out flags and vars of long `docker run` commands.
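For example, a long `docker run` line full of flags collapses into a few declarative lines of `docker-compose.yml` — the service name, image tag, and values here are illustrative:

```yaml
web:
  image: node:0.12        # image tag is an assumption
  command: node server.js
  ports:
    - "3000:3000"
  environment:
    NODE_ENV: production
```

Now `docker-compose up` starts it (and any other services in the file) with all of those settings applied, and the file itself lives in version control alongside your code.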
Consul + Registrator
Being new to this kind of development, there are still concepts that I’m wrapping my head around. Service Discovery is one of these, and Consul (plus a tool called Registrator) provides a nice solution for it.
The gist of Service Discovery in this context is that the containers we’re running on our machine(s) need to know how best to communicate. Our Node container needs to know what host and port to reach the RethinkDB container on.
If you’re running on one machine, something like Consul + Registrator may be overkill. But if you’re not - Consul does some very cool things.
For example, Failure Detection. If we’re running a cluster of RethinkDB containers, and one of these containers kicks the bucket, Failure Detection allows Consul to no longer direct requests to that container, with minimal work on our part (essentially just defining a ‘Failure’ for that container).
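Defining a ‘Failure’ boils down to registering a health check with Consul. Here’s a sketch of a service definition with a simple check — the service name, port, and check command are assumptions based on my RethinkDB setup:

```json
{
  "service": {
    "name": "rethinkdb",
    "port": 28015,
    "check": {
      "script": "nc -z localhost 28015",
      "interval": "10s"
    }
  }
}
```

If the check starts failing, Consul marks that instance unhealthy and stops returning it from DNS/service queries — no extra work on our part.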
Registrator is a service registry bridge - it automatically publishes/unpublishes the services your Docker containers expose.
If you’re only running on one machine and don’t want to get into this complexity, you’ll want to look into container linking.
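As a sketch of that simpler route, linking in `docker-compose.yml` looks like this (service names are illustrative) — the `web` container can then reach the database at the hostname `rethinkdb`:

```yaml
rethinkdb:
  image: rethinkdb

web:
  build: .
  links:
    - rethinkdb
  ports:
    - "3000:3000"
```

Linking only works for containers on the same machine, which is exactly why multi-host setups push you toward something like Consul.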
Otherwise - my Consul + Registrator setup led to this current docker-compose file:
```yaml
consul:
  image: progrium/consul
  command: -server -bootstrap -advertise 192.168.99.101
  ports:
    - "8300:8300"
    - "8301:8301"
    - "8301:8301/udp"
    - "8302:8302"
    - "8302:8302/udp"
    - "8400:8400"
    - "8500:8500"
    - "172.17.42.1:53:53/udp"
  dns:
    - 172.17.42.1
    - 220.127.116.11

registrator:
  image: gliderlabs/registrator
  command: consul://192.168.99.101:8500
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
```
I ran into a number of issues getting all of this running, but predominantly I needed to learn how all these things work together. I’m in a better place now, and will spare the post the details.
Feel free to reach out if you’re having trouble getting going, and I’ll try to help.
An aside: a WordPress success story
Just a quick aside about the convenience of Docker.
A new contract came along, and I had to run WordPress locally to do the work.
I’ve run WordPress before, but it’s been a while, and I thought I’d have to wade through the details of MAMP/PHP/MySQL/whatever. All of which is fine, but it’s a headache that isn’t really related to doing the work.
Docker to the rescue!
Here’s an awesome WordPress Dockerfile. That plus Docker Machine lets you go from 0 to a locally running WordPress in just a few minutes, and lets you push your own WordPress instance to any machine in the cloud in the same amount of time.
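If you’d rather skip writing a Dockerfile entirely, the official images on Docker Hub get you most of the way. A sketch — container names, the password, and the host port are placeholders:

```shell
# A MySQL container for WordPress to use (password is a placeholder)
docker run --name wp-db -e MYSQL_ROOT_PASSWORD=secret -d mysql:5.6

# The official WordPress image, linked to the database
docker run --name wp --link wp-db:mysql -p 8080:80 -d wordpress
```

Point a browser at port 8080 on your machine’s IP (`docker-machine ip dev` will tell you what that is) and you’re in the WordPress installer.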
Going forward, I’d like to take on Docker Swarm. It’s a tool that makes it easy to control big swaths of containers all at once.
Aka, Beast mode.