Docker Documentation

NOTE: This is a work-in-progress (WIP) documentation page for Docker. For feedback about this page, please leave a comment, or send me an e-mail.

The goal of this documentation page is to simplify the understanding of Docker, by making conceptual things very explicit. As a member of the Docker Captains team, I am passionate about sharing knowledge about Docker. You might also want to check out the Docker training content from Art of Shell.

What is Docker?

You’ve probably heard all about containers being the big, new thing. Docker, Docker, Docker. Well, hardware virtualization still plays a massive role in the IT space, but containers certainly make it easier to deploy certain applications. Docker is a containerization solution that enables you to rapidly deploy and manage containerized applications and services.

Unfortunately, Docker suffers from some misinformation. You might hear that you can run any container on any operating system. That’s not entirely true: you can’t run containers natively on Mac OS X, for example. Windows Containers are coming with Windows Server 2016, and will be manageable by Docker, but you won’t be able to run a Linux-built container on Windows automatically. Docker is also sometimes seen from a “run any app inside a container” perspective, when truly, it’s geared towards scalable enterprise applications: middleware, web services, databases, caching services, and so on. While you might be able to hack it to run GUI applications, such as games or your favorite text editor, it isn’t really designed for that purpose.

Docker Architecture

Docker containers are not terribly different from traditional application architecture; however, they do give you more flexibility in how you deploy applications. Traditionally, if you wanted to set up services like web, database, queues, or caching, you’d have to install the binaries for those application packages onto an infrastructure VM. When the infrastructure’s capacity was exceeded by sheer user demand, you’d have to build more infrastructure servers, install the software again, and configure them as part of the cluster. With Docker, you can simply build an array of “Docker Hosts” and then orchestrate the deployment of containers to them. Because containers can easily be moved between Docker Hosts to balance hardware resource consumption, you have more flexibility in your deployment model. No longer do you have front-end web servers sitting almost completely idle and wasting valuable resources; you can leverage some of those free resources to run other application services side-by-side!

High-Level Docker Architecture Diagram

How is Docker “lightweight?”

Docker Image Layers - Python 3.5

The term “lightweight” is often thrown around without any kind of supporting facts. Docker isn’t necessarily lightweight, but it can be. One of the interesting things about Docker is that you can run Linux operating systems inside these containers. In fact, some Docker images are based on a core Linux operating system, and then build on top of that with additional layers. The reality is that many official Docker images are actually quite sizable. For example, the Python 3.5 Docker images are hundreds of megabytes in size, even the “slim” version. The Python 3.5 download itself, in pre-compiled archive format, is only about 25MB.

Another manner in which Docker containers could be considered “lightweight” is that they support resource governance. When you “run” a new or existing Docker container, you’ll see options pertaining to the Completely Fair Scheduler (CFS), which is part of the Linux kernel, as well as customizable memory constraints.
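For example, you might cap a container’s CPU weight and memory like this (a sketch; the container name and limit values are illustrative):

```shell
# --cpu-shares sets a relative CFS weight (the default is 1024),
# and --memory sets a hard memory ceiling for the container.
docker run -d --name limited-redis --cpu-shares 512 --memory 256m redis
```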

Docker Ecosystem

There are a lot of confusing terms being tossed around these days. Traditionally speaking, Docker containers were a Linux-exclusive concept. Microsoft has adopted Docker on the Windows platform, further adding to the confusion. We’ll use this section to explore and demystify some of the myths surrounding Docker, to help ensure that you’re familiar with all the components, and can navigate the waters easily.

Windows Containers

In 2016, Docker and Microsoft announced a commercial partnership to bring software containers to the Windows platform. Beginning with Microsoft Windows Server 2016, Microsoft is supporting the notion of Windows Containers. There are actually two different container formats supported by Windows Server 2016:

  • Hyper-V Containers – a lightweight wrapper around a Hyper-V virtual machine, enabling VMs to be deployed using similar tools as application containers. Each Hyper-V container runs its own kernel.
  • Windows Containers – application-level containers, implemented through kernel namespaces and process isolation. Windows Containers share the host operating system’s kernel.

It’s very important to understand that the Windows / Hyper-V container formats, and the Docker Linux container, are very different. Docker Linux containers cannot run on Windows Docker Hosts, and Windows containers cannot run on Linux Docker Hosts. You can read more about the Windows container formats here.

What’s interesting though, is that Microsoft has committed to supporting a shared API between Linux and Windows containers. What this means is that, while you can’t run Windows containers on Linux, and vice versa, the actual commands and underlying API that you invoke, to manage those containers, is relatively consistent. You can use the very common docker run command to create and run containers on Linux, and you can use the same nomenclature to create and run containers on the Microsoft Windows platform.

Another important point to understand about the Windows container ecosystem is that Windows 10 Professional and Enterprise both support Hyper-V containers, but do not support Windows [application] containers. In order to run Windows application containers, you must use Windows Server 2016 or later.

Docker for Windows

To simplify the software development experience, Docker created Docker for Windows. This installer dramatically simplifies the process of:

  • Creating a Hyper-V Linux virtual machine
  • Installing the Docker Engine
  • Setting up your shell with docker, docker-machine, and docker-compose
  • Self-updating Docker to the latest version

If you’re a developer using Windows as your primary operating system, you’ll definitely want to take advantage of Docker for Windows. By far, it’s the easiest and quickest way to get started using Docker.

Docker for Mac

TBD

Docker Toolbox

Before Docker for Mac and Docker for Windows existed, Docker produced something called the Docker Toolbox. This was a simple installer that included:

  • Docker Machine – the command line utility used to create and manage Docker Hosts
  • Docker Compose – the command line utility used to “compose” multiple container services as a single application
  • Docker CLI – the command line utility used to interact with containers
  • Docker Engine – the Docker daemon itself, which enables container management via a remote API
  • Oracle VirtualBox – a simple to use, open source, cross-platform hardware virtualization solution
  • Kitematic – a graphical application that enables new Docker users to easily run containers, and familiarize themselves with the Docker platform

Docker Concepts

Dockerfile

The Dockerfile is a file that controls the creation of a Docker image. Typically, you’d build your own Docker Image if you wanted to deploy your own, custom application code inside of a container. You can also build a custom Docker Image that contains application services, such as Redis Cache, MySQL, and many others. The Dockerfile can be checked into source control alongside your project’s other source code. After creating the Dockerfile, you use the docker build command to build the Docker image. What’s interesting is that you aren’t required to use a Dockerfile. You can actually just run the commands to build a container manually, without orchestrating them in a Dockerfile. However, using a Dockerfile simplifies the build process, by keeping all the commands in a convenient, single file.
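As a minimal sketch, a Dockerfile for a hypothetical Python application might look like the following (the file names app.py and requirements.txt are assumptions for this example, not part of any official image):

```dockerfile
# Start from the official Python 3.5 image on Docker Hub
FROM python:3.5

# Install the application's dependencies first, so this layer is cached
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt

# Copy the rest of the application code into the image
COPY . /app
WORKDIR /app

# The command to run when a container starts from this image
CMD ["python", "app.py"]
```

You would then build the image with docker build -t myapp . from the directory containing the Dockerfile.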

There’s a very helpful Dockerfile reference, which contains an authoritative list of all the supported commands to build a custom Docker image.

Docker Images

Docker images are essentially templates, or blueprints, for Docker containers. For example, there are pre-built images for application services like Redis Cache, WordPress, MySQL, PostgreSQL, and many others. If you don’t want to actually run your own container based on an image, but want to explore how an image was built, there’s a cool service called Image Layers that lets you explore the layers that make up a Docker image. Using layers, Docker shares unchanged files between images, while keeping containers isolated from one another. A technique called “copy-on-write” (CoW) means a copy of a file is made only if the file is changed. That way, files that are relatively static in nature are not copied many times, which would greatly increase the size of the image.

Docker Hub

If you don’t want to build your own, custom Docker image, you can simply download pre-built ones from the Docker Hub. The Docker Hub is essentially a large, searchable repository of familiar (and sometimes unfamiliar) software, which is served up as Docker images. Some common software packages you’ll find there include: MySQL, PostgreSQL, Redis Cache, PHP, Python, Ruby, WordPress, and so on. If you’re developing and deploying a custom application however, you’ll need to build your own Docker image, using a Dockerfile.

  • Apache (Web) – The Apache HTTP Server Project (aka. httpd)
  • Python (Language) – Python is an interpreted, interactive, object-oriented, open-source programming language.
  • Mongo (Database) – MongoDB document databases provide high availability and easy scalability.
  • Redis (Cache) – Redis is an open source key-value store that functions as a data structure server.
  • Postgres (Database) – The PostgreSQL object-relational database system provides reliability and data integrity.
  • WordPress (Application) – The WordPress rich content management system can utilize plugins, widgets, and themes.
  • MySQL (Database) – MySQL is a widely used, open-source relational database management system (RDBMS).
  • NGINX (Web) – Nginx (pronounced "engine-x") is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server).
  • CentOS (Operating System) – CentOS Linux is a community-supported distribution derived from sources freely provided to the public by Red Hat for Red Hat Enterprise Linux (RHEL).
  • BusyBox (Tool) – BusyBox combines tiny versions of many common UNIX utilities into a single small executable. It provides replacements for most of the utilities you usually find in GNU fileutils, shellutils, etc.
  • Ubuntu (Operating System) – Ubuntu is a Debian-based Linux operating system, with Unity as its default desktop environment. It is based on free software and named after the Southern African philosophy of ubuntu (literally, "human-ness"), which often is translated as "humanity towards others" or "the belief in a universal bond of sharing that connects all humanity".
  • Node (Language) – Node.js is a software platform for scalable server-side and networking applications. Node.js applications are written in JavaScript and can be run within the Node.js runtime on Mac OS X, Windows, and Linux without changes.

Docker Host

A Docker Host is simply an operating system instance, running on bare metal or as a virtual machine, that Docker containers can be deployed and executed on. The Docker Engine (or “daemon” background service) runs on the Docker Host, and enables interaction from the docker command line tool. The docker command line tool can connect to a local or remote Docker Host, and start new containers, stop containers, list running containers, and perform other related operations. However, if you want to actually manage the Docker Host itself, then you would use the docker-machine command.

Several Linux distributions have popped up that are specifically optimized (read: minimalist) for running Docker containers, such as CoreOS, RancherOS, and Project Atomic.

This blog post from Docker contains more information about minimalist [Linux] operating systems that are designed to support Docker containers efficiently.

Where can my Docker Host(s) run?

As mentioned earlier, Docker Hosts can run on bare metal, or as a virtual machine. This includes virtual machines on-premises, but also virtual machines running in cloud providers like Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform, as well as Virtual Private Server (VPS) providers like DigitalOcean’s Droplet service or OVH.

Docker Compose

It’s important to understand Docker fundamental concepts individually. However, practically speaking, an application (or, “solution”) is going to be made up of more than one container. That’s where Docker Compose comes into play. Docker Compose helps you build applications that are made up of multiple “services.” For example, your application might have several services:

  • A web server, such as NGINX or Apache with application code, such as WordPress
  • A Database, such as PostgreSQL or MySQL
  • A NOSQL Database, such as MongoDB or CouchDB
  • A Cache mechanism, such as Redis Cache or memcached

To orchestrate an array of containers that work together as a single enterprise application, you build a Docker Compose YAML file that describes the service by referencing containers. If you’re new to Docker, and are deploying a “canned” application, such as WordPress + MySQL, rather than designing individual containers, you’ll want to start out by using Docker Compose.
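For instance, a minimal docker-compose.yml for the WordPress + MySQL scenario might look like the sketch below (the service names, password, and port mapping are illustrative):

```yaml
version: '2'
services:
  db:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: example   # illustrative password only
  wordpress:
    image: wordpress:latest
    links:
      - db:mysql                     # the WordPress image looks for a linked "mysql" host
    ports:
      - "8080:80"                    # browse to port 8080 on the Docker Host
```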

Docker SwarmKit

Docker SwarmKit is a new feature in Docker 1.12 that enables the deployment of container stacks, which are made up of individual services. In some ways, this is a production-level manifestation of Docker Compose, which is intended for non-production usage. Each of the services defined in a stack can be scaled independently.

Docker Registry

When you’re building Docker images with your custom application code, you probably won’t be pushing these to the publicly-accessible Docker Hub. Instead, Docker provides an image called registry that enables you to host your own private, internal version of the Docker Hub. That way, you can publish container images to the internal registry, and ensure that other individuals and companies cannot access your code.
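As a sketch, running your own registry and pushing an image to it might look like this (the registry container name, port, and image name are illustrative):

```shell
# Run a private registry container, exposing it on port 5000
docker run -d -p 5000:5000 --name myregistry registry

# Tag a local image with the registry's address, then push it
docker tag myapp localhost:5000/myapp
docker push localhost:5000/myapp
```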

Common Operations

Under this section, we’ll talk about some of the common operations that you perform with Docker containers. Think of this like a task-oriented approach to utilizing Docker. We’ll break down this section into various tools, and then under each tool, we’ll list out some of the common tasks that you’d perform with that tool.

Docker Machine

The docker-machine command is what enables you to perform operations concerning Docker Hosts, such as:

  • Provision a new Docker Host
  • List Docker Hosts that have been provisioned
  • Connect to Docker Hosts (over SSH or using other drivers)
  • Start / stop Docker Hosts

Create a Docker Host

When you’re brand new to Docker, one of the first things you’ll do is provision a new Docker Host. A Docker Host is simply a Linux operating system instance that also has the Docker Engine installed. While the “manual” process of installing the Docker Engine on a barebones Linux VM is actually very straightforward, Docker Machine standardizes the deployment process of Docker Hosts across multiple cloud providers.

The docker-machine create sub-command is used to provision a new Docker Host. There are quite a few parameters / arguments available for this sub-command, so make sure you understand which ones are necessary, and which ones are optional.

$ docker-machine create # Run without parameters to get help
$ docker-machine create --driver digitalocean # Get help for the Digital Ocean provider parameters / arguments
$ docker-machine create --driver digitalocean --digitalocean-access-token {TOKEN-FROM-WEB-PORTAL} {NAME-OF-DOCKER-HOST}

Docker Machine doesn’t have a command that lists out all of the supported drivers, however there’s documentation available here.

Connect to Docker Host

To connect to a Docker Host, you use the docker-machine env <DockerHostName> command. This command emits the bash (or, on the Windows platform, PowerShell) commands that you would use to configure an array of environment variables. Once you set these environment variables, you can begin using the Docker CLI to issue container-related commands.

$ docker-machine env dockerhost
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/trevorsullivan/.docker/machine/machines/dockerhost"
export DOCKER_MACHINE_NAME="dockerhost"
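Rather than copying and pasting those export statements, you can evaluate the command’s output directly in your shell (bash syntax; “dockerhost” is the machine name from the example above):

```shell
# Apply the emitted environment variables in one step
eval "$(docker-machine env dockerhost)"
docker ps   # the Docker CLI now talks to the selected Docker Host
```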

Docker Containers

The docker command line tool is how you interact with Docker containers that are running on Docker Hosts. It offers a wide variety of sub-commands that enable you to perform common container-related tasks, such as:

  • Run a new container instance from an image
  • Start or stop container instances
  • List out images in the local cache

Run a new Container Instance

One of the most common operations you’ll perform is to simply run a new container instance from an image. To do this, you use the docker run command. You can optionally provide a custom name for the container, for future reference. To do this, tack on the --name mycoolcontainername parameter.

The return value from the command will be the full unique ID of the newly created container instance. You can use this identifier to manage the container with other docker sub-commands.
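A minimal example, using the official Redis image (the container name is illustrative):

```shell
# Run a Redis container in the background (-d for "detached");
# the command prints the new container's full unique ID.
docker run -d --name mycoolcontainername redis:latest
```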

Download Prebuilt Images

While the docker run and create commands will automatically download an image for you, sometimes you might just want to stage a Docker image onto your Docker Host, without actually creating or running a new container from that image. You can do this using the docker pull command. For example, to download the official WordPress image from Docker Hub, just run docker pull wordpress.

Build a Custom Container Image

Most software developers will want to build their own Docker container image, in order to deploy their application code. You’ll first create a Dockerfile, and then issue the docker build command. After the build completes, the container image will be added to your Docker Host’s local image repository. To validate the existence of the new image, use the docker images command to enumerate the local image cache on the Docker Host.

Although the convention is to use the file name Dockerfile, you can actually name this file whatever you’d like. When you call the build command, you can add the --file parameter to specify the path and name of the “Dockerfile.”

Another neat parameter is the --quiet parameter. Despite its name, this parameter doesn’t suppress all output from the command. Instead, it prevents the typical build output from appearing; after the image has been successfully built, it prints out the unique ID of the newly created image. You can then use this image ID to provision new containers from the image.
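For example (the file paths and image tag below are illustrative):

```shell
# Build using a Dockerfile with a non-default name and location
docker build --file ./docker/Dockerfile.dev --tag myapp:dev .

# Build quietly; only the resulting image ID is printed on success
docker build --quiet --tag myapp:dev .
```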

List Images in the Local Cache

When you deploy an image from the Docker Hub, the image is downloaded into a local cache. That way, if you run multiple container instances, from the same image, the image doesn’t need to be repeatedly downloaded. When you build a custom image, it is also stored in the local image cache. You can view images in the local cache by using the docker images sub-command.

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
trevor              FlaskCompose        57472748f314        3 days ago          683 MB
trevor              wptest              a241ebb139a5        8 days ago          501.7 MB
python              2.7                 dbcaac341861        2 weeks ago         676.2 MB
wordpress           latest              55f2580b9cc9        2 weeks ago         516.5 MB
redis               latest              fb46ec1d66e0        2 weeks ago         151.3 MB
php                 5.6-apache          4239932f5198        2 weeks ago         480.8 MB
mysql               latest              e13b20a4f248        2 weeks ago         361.3 MB
hello-world         latest              690ed74de00f        4 months ago        960 B

Delete Images from the Local Cache

If you find that images are taking up a lot of disk space, you can delete them to free up disk space. Unless you have containers that are actively using them, it doesn’t make sense to keep a large cache of images locally. To remove images from the local cache, use the docker rmi command, and then pass in the image ID or repository:tag combination. You’ll get an error if the images are associated to container instances, or if you try to explicitly delete an intermediate image that’s referenced by another image.

$ docker rmi trevor:wptest
Untagged: trevor:wptest
Deleted: sha256:a241ebb139a5109f05eaaea6708d9f25c60916dfa6bcf269b32d427272a1f848
Deleted: sha256:b82f69f5036bbc6d4e7caed53a923bcf6f31aa30bef406e08c5697e4cf175878

List / Start Existing Containers

Another common operation you’ll encounter is starting an existing container instance. Typically, you’ll “run” a new container instance, but if that container is stopped and needs to be restarted, you’ll use the docker start command. You can pass in either the friendly name or unique ID of a container as a parameter to the start command. If you need to find out which container instances are available, use docker ps --all; if you omit the --all parameter, you’ll only see container instances in the running state.

$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6b0531faf89f redis "/entrypoint.sh --nam" 27 minutes ago Exited (2) 26 minutes ago sleepy_booth
a439a8edbd1e redis "/entrypoint.sh redis" 40 hours ago Exited (0) 40 hours ago reverent_einstein
83ae225881de trevor:FlaskCompose "/bin/sh -c 'python a" 3 days ago Exited (137) 27 hours ago flaskcompose_web_1
c9e38eb8efd5 redis "/entrypoint.sh redis" 3 days ago Exited (0) 27 hours ago flaskcompose_redis_1
243d1a82f196 wordpress "/entrypoint.sh --lin" 8 days ago Exited (2) 8 days ago naughty_fermat
29e524a756a9 redis "/entrypoint.sh redis" 8 days ago Exited (0) 3 days ago kickass_bhabha
8e7ff75c2550 mysql:latest "/entrypoint.sh mysql" 9 days ago Exited (0) 8 days ago trevormysql

$ docker start sleepy_booth
sleepy_booth

Stopping / Killing Containers

Sometimes you’ll need to stop a container, or, if the root process inside the container is being stubborn, you might even need to kill it. The Docker command line has a couple of useful commands for this purpose, aptly named stop and kill. When you run docker stop containerid, the default behavior is for Docker to wait 10 seconds and then kill the container. You can override this by using the --time parameter. For example, to wait 30 seconds for the container to exit, use: docker stop --time 30 containerid.

If you know for sure that the container isn’t going to stop gracefully, then you can issue a docker kill command. For example: docker kill containerid. The default signal that’s sent to the container’s root process is SIGKILL. You can issue a specific process signal to the container’s root process by adding the --signal parameter. For example, you can send a SIGINT to the container by using docker kill --signal INT containerid.

Delete Containers

A Docker Host will retain the state of a container after it’s stopped or killed, so that it can easily be restarted. The container will “pick up where it left off,” so to speak. However, this may not be your intention. You can delete containers that are in the running or stopped state by issuing the docker rm containername command. In fact, you can specify additional container names or IDs at the end of the command, such as: docker rm registry web cache.

If the container(s) you attempt to delete are in a running state, then you’ll receive an error. To work around this, you can add the --force parameter. For example, run: docker rm --force registry web cache.

Docker Compose

Docker Compose is the utility that enables you to deploy multiple containers (aka. services) as part of an overarching application. Each of these Docker Compose services can be independently scaled for the purposes of load testing.

Docker Compose File

The Docker Compose file is called docker-compose.yml or docker-compose.yaml. You create this file when you want to deploy an array of containers into your development environment. Docker Compose is not intended for production usage. As of Docker 1.12, you’ll want to look to SwarmKit, which builds off of Docker Compose concepts, for production deployments of Dockerized applications.

Building a Docker Compose file is fairly straightforward. At the very top of your file, you’ll want to declare the Docker Compose version. I recommend that you use version 2. Keep in mind that version 1 of the Docker Compose file format does not support volume mappings and container networks. After specifying the version, you’ll create a root property called services. Under services, you’ll create one or more named services, where you are able to choose the service name. For example, if you want to deploy a Redis service, you might name it nosql.

version: '2'
services:
  nosql:
    image: redis:latest

Beneath each service, you define things like:

  • The image that you want to deploy the service from or the Dockerfile that you want to build a new image from
  • An alternative name for the Dockerfile (if building a new image)
  • Environment variables to associate with the service
  • Linked services (enables cross-service communication and service dependency management)
  • Which networks the service should be associated with
  • Container aliases on each network
  • DNS servers and DNS search suffixes
  • The entrypoint for a service (overrides the Dockerfile)
  • TCP/UDP ports to expose from the service, enabling linked services to communicate

The list of Docker Compose configuration options are too many to list here, so I recommend that you visit the Docker Compose reference documentation for details.

Docker Errors

Docker

Conflicting Container Name

If you are giving your containers custom names, using docker run --name artofshell, you might accidentally try to run a new container over the top of an existing container with the same name. If you do this, you’ll receive an error from Docker, similar to the one below.

Error response from daemon: Conflict. The name “/aosregistry” is already in use by container f3c5cdb2bf81f162dc79059df91c1d7da05118a0b68f877fbab2bc444442650a. You have to remove (or rename) that container to be able to reuse that name.

To resolve this, before you run the new container, you can either:

  • Rename the existing container, using docker rename artofshell artofshell2
  • Delete the existing container, using docker rm --force artofshell

After using one of these commands, you can go back and run your new container.

Delete a Running Container

Docker Containers are not deleted if they’re in a running state, by default. If you use docker rm containerName to remove a container, and the container is in a running state, then you’ll receive an error message similar to the following:

Error response from daemon: You cannot remove a running container 57059cf067d42a762b132f783d89fe1e042194b72974ee445040ee1eb9a06779. Stop the container before
attempting removal or use -f

To handle this scenario, you can either:

  • Stop the container, using docker stop containerName, and then re-run the docker rm command
  • Force deletion of the container, using docker rm --force containerName

Using the --force parameter will suppress the error message and delete the container, even if it is in a running state.

Docker Machine

TBD

Docker Compose

Version Should be a String

In your docker-compose.yml file, the version identifier at the top of the file should have a string value, not an integer. The correct way of specifying a version 2 Docker Compose file would be:

version: '2'

If you specify an integer value — without the single quotes around the value — rather than a string, then you’ll most likely receive an error message similar to the following:

ERROR: Version in “.\docker-compose.yml” is invalid – it should be a string.


Resources

If you’re looking for some other supporting resources for Docker containerization, check out the following items:

  • Image Layers (Service) – The Image Layers service enables you to search for a Docker image in the Docker Hub repository, and it will break it down and show you the individual layers, and their corresponding sizes. This will help you optimize your Docker images, and images provided by other vendors.
  • Docker-ize a Complete Application (Blog) – This blog post describes how to build a complete application, using Docker containers, based on Node.js, NGINX, and .
  • Joyent (Service) – The Joyent service enables you to easily deploy containers.
  • Azure Container Service (Service) – The Microsoft Azure Container Service enables the simple deployment, scaling, and orchestration of Docker containers on Azure infrastructure Virtual Machines. Industry-standard tools, such as Apache Mesos or Docker Swarm, can be used to manage containers directly on Azure.
  • 10 Things to Avoid in Docker Containers (Blog) – This blog post covers some best practices for building Docker images. There are some good guidelines in here that help you to make the most of Docker, and truly Docker-ize your application code.
  • Add Existing Docker Hosts to Docker-Machine (Blog) – This blog post talks about connecting to existing Docker Hosts (Virtual Machines that are running the Docker Engine) from the docker-machine command, using the "generic" driver, for hosts that were not provisioned from that client. For example, maybe you have a VM / VPS instance running on a cloud provider, but it was provisioned as a Docker Host from another client, by one of your team members.
  • Docker Cloud (Service) – Docker Cloud is a service that was announced in early 2016, after Docker absorbed a company called Tutum a few months prior. In essence, this service enables you to manage Docker containers in your cloud environment.
  • Building Multi-Container Applications with Docker Compose (Blog) – Build multi-container applications with the Docker Compose utility, and a simple YAML file format! With multi-container apps, you can deploy and scale application services independently from the others.
  • Docker Storage Patterns for Persistence (Blog) – Do your Docker containers require data persistence, such as a relational or NOSQL database? If so, then you'll want to take a look at this article, which helps you understand a variety of containerized storage approaches.
  • Google Container Engine (Service) – Google's Container Engine service, part of the Google Cloud Platform (GCP), allows you to easily deploy and schedule Docker containers on Google Compute instances. It supports declarative resource scheduling of containers, using a simple JSON format.
  • Kubernetes (Software) – Kubernetes is an open source software package that enables the deployment and scheduling orchestration of Docker containers across multiple Docker Hosts, which may run on a variety of cloud platforms. Kubernetes was developed by Google, which has many years of experience running containers at massive scale.
  • Deis (Software) – Deis (pronounced DAY-iss) is an open source PaaS that makes it easy to deploy and manage applications on your own servers. Deis builds upon Docker and CoreOS to provide a lightweight PaaS with a Heroku-inspired workflow.
  • Evaluating Container Platforms at Scale (Blog) – This article addresses three questions about scaling Docker Swarm and Kubernetes: What is their performance at scale? Can they operate at scale? What does it take to support them at scale?
  • RedHat OpenShift (Service) – RedHat OpenShift is a service built on top of Docker and Kubernetes that reduces software development time to market. OpenShift offers both on-premises and public cloud-based editions of the service, and includes a free tier for those who are interested in testing the service.
  • Creating a data-only Docker container (Blog) – If you need to store data to support one of your other Docker containers, you can use the technique described in this blog post!
  • Flocker (Software) – Flocker is an open-source Container Data Volume Manager for your Dockerized applications. By providing tools for data migrations, Flocker gives ops teams the tools they need to run containerized stateful services like databases in production. Unlike a Docker data volume, which is tied to a single server, a Flocker data volume, called a dataset, is portable and can be used with any container, no matter where that container is running.