decking

Create, manage and run clusters of Docker containers


Decking aims to simplify the creation, organisation and running of clusters of Docker containers in a way which is familiar to developers: by reading information from a decking.json package file on a project-by-project basis.

decking in action

Installation

Decking is written in Node.js. Although still under development, the more recent releases on npm are stable:

$ [sudo] npm install -g decking

Alternatively, just clone the repo and run ./bin/decking.

Once installed decking can be run (without arguments) from anywhere:

$ decking

Usage: decking COMMAND [arg...]

Commands:
  build    build an image or pass 'all' to build all
  create   create a cluster of containers
  destroy  destroy a cluster of containers
  start    start a cluster
  stop     stop a cluster
  restart  restart a cluster
  status   check the status of a cluster's containers
  attach   attach to all running containers in a cluster

These commands only make sense with some context—provided by a local decking.json file.

Why decking?

While Docker is fantastic, it lacks any way of associating valuable metadata with images or containers. A Dockerfile can’t enforce that it be built as a certain image name. More importantly, an image can’t enforce that it be invoked with certain runtime values when it is used to create a container. Docker provides flexible, abstract building blocks. Decking transforms these building blocks into concrete implementations of services—and clusters of services—in a robust, repeatable, configuration-driven manner. It allows simple but powerful dependency modelling and takes all the hassle (and error) out of starting clusters of containers in the correct dependency order. It allows optional overrides to fine-tune cluster configuration on a per-environment basis.

It simplifies the building of images based on local Dockerfiles, which can ordinarily be a time-consuming and error-prone process (building a Dockerfile as the wrong image, or having to move the Dockerfile to the root of a project in order to make the ADD directive work smoothly).

It simplifies the creation of containers by treating docker run parameters as part of the definition of each container, leaving less room for error: each developer no longer has to remember the correct runtime parameters to use when creating each container.

It simplifies the orchestration of containers by allowing dependencies to be specified, ensuring that all containers forming part of a cluster are started in the correct order such that --link parameters work as expected. Entire clusters of containers can be started, stopped or attached to with a single command, without having to worry about (re)starting them in dependency order.

Why not Fig / Docker Compose?

Decking actually predates Fig (recently renamed to ‘Docker Compose’) by about a month and for quite some time offered more functionality; to this day the projects still have their own idiosyncrasies and unique features. Decking will continue to exist as long as people want to use it!

The decking.json file format

The decking.json format aims to be clear, concise and simple. Note that the file must be present in the current working directory; decking will not recurse up parent directories looking for a valid definition file. The top-level keys are:

"images":     { /* the images which power your containers */ },
"containers": { /* the container templates which make up your clusters */ },
"clusters":   { /* lists of containers which combine to form a cluster */ },
"groups":     { /* optional overrides for different environments */ }

images

  • required: no

Images define the templates from which containers are built; decking gives them a boost by building them in batches and by resolving commonly experienced issues with Docker’s ‘context’ restrictions.

Each key is the name of the image you want to build. Each value is the location of the local Dockerfile relative to the project root. Only local images can be built at the moment, although eventually you’ll be able to specify tag names as values in order to build an image from the Docker Index instead.

"images": {
  "makeusabrew/nodeflakes": "./docker/base",
  "makeusabrew/nodeflakes-server": "./docker/server",
  "makeusabrew/nodeflakes-consumer": "./docker/consumer",
  "makeusabrew/nodeflakes-processor": "./docker/processor"
}

build context

By default when building an image the root of your project (the directory containing your decking.json file) will be provided as Docker’s ‘build context’ such that any ADD or COPY directives will be relative to this top-level directory. However, this context is configurable on an image-by-image basis:

"images": {
  "makeusabrew/nodeflakes": {
    "path": "./docker/base",
    "context": "/path/to/context"
  },
  "makeusabrew/nodeflakes-server": {
    "path": "./docker/server",
    "context": "."
  },
  "makeusabrew/nodeflakes-consumer": "./docker/consumer",
  "makeusabrew/nodeflakes-processor": "./docker/processor"
}

Note the use of the dot character in the second example; this is simply a shorthand indicating that the context should be the same as the path parameter (i.e. relative to wherever the Dockerfile lives). Use this setting if you always want decking build to work the same way as docker build.
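
Either way, building is done with the same command; only the context handed to Docker differs. For example, using the definitions above:

$ decking build makeusabrew/nodeflakes          # context: /path/to/context
$ decking build makeusabrew/nodeflakes-server   # context: ./docker/server (the '.' shorthand)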

containers

  • required: yes

Containers define the runtime configuration for a given image. They let you model services—and crucially, their dependencies—in a clear, clean and repeatable manner, meaning you don’t have to remember all those arguments when creating a new container.

Each key is the name you want to assign to the container (i.e. docker run --name <key> ...). Values are either a string, in which case they are assumed to refer to an image, or an object. A definition of two containers demonstrating both approaches might look a bit like this:

"containers": {
  "nfprocessor": {
    "image": "makeusabrew/nodeflakes-processor",
    "port" : ["1234:1234"],
    "env"  : ["MY_ENV_VAR=value", "ANOTHER_VAR=foo"],
    "dependencies": [
      "nfconsumer:consumer"
    ],
    "mount": ["/path/to/host-dir:/path/to/container-dir"],
    "mount-from": ["nfconsumer"]
  },
  "nfconsumer": "makeusabrew/nodeflakes-consumer"
}

Each key in the definition of nfprocessor maps loosely onto an argument which will be passed to docker run. The currently supported options are:

port         → -p
cpu          → -c
env          → -e
dependencies → --link
mount        → -v
mount-from   → --volumes-from
memory       → -m
privileged   → --privileged
image        → used as-is
extra        → extra runtime arguments, used as-is after the 'image' argument
data         → see explanation below
ready        → see explanation below
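
To make the mapping concrete, the nfprocessor definition above corresponds loosely to the following docker run invocation (a sketch of the roughly equivalent command, not necessarily decking’s exact output):

docker run --name nfprocessor \
  -p 1234:1234 \
  -e MY_ENV_VAR=value -e ANOTHER_VAR=foo \
  --link nfconsumer:consumer \
  -v /path/to/host-dir:/path/to/container-dir \
  --volumes-from nfconsumer \
  makeusabrew/nodeflakes-processor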

data

If this is supplied as a boolean (e.g. "data": true) the container is assumed to be a data-only container. As such it can be modelled as a dependency of other containers using mount-from but decking won’t ever attempt to start or stop it.
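
For example, a data-only container shared by a service container might be modelled like this (the names and paths are illustrative):

"containers": {
  "appdata": {
    "image": "makeusabrew/nodeflakes",   /* any image containing the volumes you need */
    "data": true,
    "mount": ["/data/on/host:/var/lib/app"]
  },
  "app": {
    "image": "makeusabrew/nodeflakes-server",
    "mount-from": ["appdata"]
  }
}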

ready

If this is supplied as an integer (e.g. "ready": 8080) it is assumed to refer to a TCP port which, when listening, determines that the container is ‘ready’. This is particularly useful when some dependencies start quickly but don’t listen quickly; without this parameter decking will start any dependents as soon as the container is running, which may cause them to try to connect to the service before it is actually listening.
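
For example, with the sketch below (image names are illustrative), decking will not start worker until something inside store is actually listening on port 6379:

"containers": {
  "store": {
    "image": "myorg/redis",    /* hypothetical image */
    "ready": 6379
  },
  "worker": {
    "image": "myorg/worker",   /* hypothetical image */
    "dependencies": ["store"]
  }
}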

clusters

  • required: yes

Clusters define sets of related containers which—presumably—combine in some useful way (e.g. a master & slave database, a couple of web servers and a load balancer). This is where it all happens!

Each key is an arbitrary name to refer to the cluster by when using the cluster-related commands. Values are usually just arrays of names found in the containers object. These definitions are simple, as most of the configuration has already been done elsewhere:

"clusters": {
  "main": ["nfprocessor", "nfconsumer"]
}

The order we list our containers as part of each cluster definition doesn’t matter—decking will resolve the dependencies based on each container’s definition and make sure they start in the correct order. Similarly, dependencies don’t have to be explicitly listed as part of a cluster; if you want to run containers A and B as a cluster, and B happens to depend on C, then C will be started too—even if you aren’t really interested in it when mentally modelling your cluster.
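
For example, if nfconsumer itself depended on a hypothetical queue container, the main cluster would still only need to list two names; queue would be created and started first automatically:

"containers": {
  "queue": "myorg/rabbitmq",   /* hypothetical broker container */
  "nfconsumer": {
    "image": "makeusabrew/nodeflakes-consumer",
    "dependencies": ["queue"]
  }
},
"clusters": {
  "main": ["nfprocessor", "nfconsumer"]
}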

groups

  • required: no

Groups allow clusters of containers to be run with different parameters, usually to support small configuration variations across different environments (e.g. build, test, production) without having to define entirely new containers.

Each key is an arbitrary name to associate with each group. Values are cluster-wide and container-specific overrides:

"groups": {
    "build": {
        "options": {
            "env":   ["NODE_ENV=build"],
            "mount": [".:/path/to/src"]
        },
        "containers": {
            "nfprocessor": {
                "port": ["4321:1234"]
            }
        }
    }

}

The above would create a new group called build which, when used, would apply the relevant options when creating a cluster of containers. Per-container overrides can also be set, though these are optional. Opting in to a group simply requires a slightly different cluster definition:

"clusters": {
  "main": ["nfprocessor", "nfconsumer"],
  "dev": {
      "group": "build",
      "containers": ["nfprocessor", "nfconsumer"]
  }
}

This would let us run two clusters based on the same containers, albeit one very clearly in ‘build’ mode. Of course we can’t have two containers with different configurations sharing the same --name, so decking namespaces containers based on the group name. In the above example, a call to decking create dev would look for containers named nfprocessor.build and nfconsumer.build. This namespacing is transparent to the user, meaning containers can always be thought of and referred to (i.e. as dependencies) by their original name.

Note that for now, group-wide options completely overwrite any previous values for matching keys rather than merge them with existing ones. Likewise, a container-level override overwrites any previous values (even those set at group level). This will be changed in future such that options are merged properly in a predictable manner.
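
A sketch of the current behaviour, using the build group above and a hypothetical PROCESSOR_DEBUG variable. Adding this container-level override:

"containers": {
  "nfprocessor": {
    "env": ["PROCESSOR_DEBUG=1"]   /* hypothetical variable */
  }
}

leaves nfprocessor with an env of just PROCESSOR_DEBUG=1; the group-level NODE_ENV=build is overwritten for that container, not merged with the new value.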

Commands

Every command relates to the orchestration of containers into clusters with one exception—build—which relates to the base images powering each container.

build <image> [--no-cache] [--context <dir>] [--tag <name>]

When provided with a valid image name found in the decking definition file, this builds an image from the referenced Dockerfile, using the current folder (or any configured context) as Docker’s build context.

When provided with the literal string ‘all’, this simply iterates through each key of the images object and builds each in turn.

Use --no-cache to prevent Docker using cached layers during the build.

Use --context <dir> to specify a custom build context to use. Note that this is probably best configured as part of your image definition instead.

Use --tag <name> to specify a custom tag to build instead of the default of latest. This is particularly useful for continuous integration builds.
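
Putting those flags together (the tag name here is just an example):

$ decking build all
$ decking build makeusabrew/nodeflakes --no-cache
$ decking build makeusabrew/nodeflakes --tag ci-42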

create [cluster] [--as <alias>]

Creates containers for a given cluster, as well as any implicit or explicit dependencies. Optionally uses group-level overrides if specified by the cluster definition. This method is safe to run multiple times; it will only create the containers in the cluster which don’t already exist.

destroy [cluster] [--as <alias>] [--include-data] [--include-persistent] [--include-volumes]

Destroys all containers for a given cluster, but by default will not destroy any data containers, nor those marked as persistent (e.g. those where persistent: true). Using any or all of the optional --include-xxx parameters makes this method progressively more destructive.
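
For example, assuming the main cluster defined earlier:

$ decking destroy main                # spares data and persistent containers
$ decking destroy main --include-data --include-persistent --include-volumes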

start [cluster] [--as <alias>]

Starts all the containers in a given cluster. Safe to run multiple times; it will only start containers which aren’t already running. Ensures that any dependencies are always started before their dependent services.

stop [cluster] [--as <alias>]

Stops all the containers in a given cluster. Safe to run multiple times; it will only stop containers which are currently running.

restart [cluster] [--as <alias>]

Restarts the containers in a given cluster. As with start, all dependencies are restarted in the correct order.

status [cluster] [--as <alias>]

Provides a quick overview of the status of each container in a cluster. Also displays each container’s IP and port mapping information if it is currently running.

attach [cluster] [--as <alias>]

Attaches to the stdout and stderr streams of each container in a cluster. This is incredibly useful for gaining insight into the overall cohesion of a cluster and provides a coordinated output log. It survives brief outages in container availability, meaning it does not have to be re-run each time a container is restarted.

Aliases

All cluster-related commands take an optional --as <alias> argument. This is particularly useful when wanting to create multiple clusters based on the same group definition but running as physically separate sets of containers. The most common use-case is during continuous integration builds where the --as parameter can be supplied as the build number to spin up and tear down a completely new cluster per build:

decking create ci --as $BUILD_NUMBER
decking start ci --as $BUILD_NUMBER
<run integration tests>
decking destroy ci --as $BUILD_NUMBER

Advanced Usage

While the basic decking configuration is simple yet powerful, occasionally you may need to use some more advanced configuration techniques:

Dynamic environment variables

Environment variables are often used for information you wouldn’t want committed to source control, either because it is sensitive (e.g. a secret API token) or unpredictable (e.g. something which varies per host, such as the host’s IP address). Decking allows you to specify a dash value (literally, a -) for environment variables, which it will try to resolve when creating a container. Let’s look at a very simple example first:

"containers": {
  "api_consumer": {
    "image": "makeusabrew/twitter-consumer",
    "env": : ["NODE_ENV=build", "CONSUMER_KEY=-"]
  }
}

When a cluster containing this container is first created, decking will notice the dash value and check the host machine’s $CONSUMER_KEY environment variable. If that yields an empty value the user will be prompted for it; the user’s input will be masked in case the data is sensitive. As Docker stores environment variables as part of a container’s configuration, this information will only be prompted for once, upon container creation. The value stored by the container is a snapshot; it does not change when the container is stopped or started.
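
For example, assuming a cluster named main which includes api_consumer, supplying the value via the host environment avoids the interactive prompt entirely (the token value is obviously just a placeholder):

$ CONSUMER_KEY=abc123 decking create main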

Multiple instances of the same container in a cluster

It is not uncommon to have multiple instances of a service working together as part of a cluster (e.g. a group of workers which listen out for data before processing it). Since these cases are effectively just multiple instances of the same container, there is no need to provide a separate container definition for each node. Let’s take a look at another example based on our previous API consumer:

"containers": {
  "api_consumer": {
    "image": "makeusabrew/twitter-consumer",
    "env": : ["NODE_ENV=build", "CONSUMER_KEY=-"]
  },
  "data_processor": {
    "image": "makeusabrew/twitter-processor",
    "dependencies": ["redis"]
  }
}

(We’ve snuck in a dependency on a ‘redis’ container here since our processors are probably going to pump data into it, but it’s not relevant to the example!)

Let’s say in build mode we only want one data processor; we’d just write our cluster definition as usual:

"clusters": {
  "build": ["api_consumer", "data_processor"]
}

But in production we want more data processors to take full advantage of our server:

"clusters": {
  "build": ["api_consumer", "data_processor"],
  "prod": ["api_consumer", "data_processor(8)"]
}

Voilà: a call to decking start prod will now spawn eight instances of our data processor.

Note that multi-node containers will be named accordingly. Our single data processor would simply be called data_processor, whereas our production containers would be named data_processor.1 through data_processor.8 to avoid clashing with each other.

There is one outstanding issue with multi-node containers: since they all use the same container definition, there is a problem if that definition specifies something which should vary per container instance. A concrete example of this would be a web node which binds to a specific port on the host; Docker will error when trying to bind another container to that port (in this scenario, don’t bind to a specific host port; use a load balancer and links instead!). This will be addressed in a future release.

Miscellaneous Notes

Some useful information which doesn't quite fit elsewhere…

Container naming conventions

In general, containers will be named exactly as per their keys in the containers object. There are however some exceptions to this rule:

  1. If a cluster specifies a group override
  2. If a cluster specifies a multi-instance container
  3. If a cluster specifies both 1 & 2.

A base container named redis would therefore have the following namespacing should it meet the relevant rules:

  1. redis.[group] (where group = the key name of the group)
  2. redis.[n] (where n = an index between 1 and the number of desired instances)
  3. redis.[group].[n]

These rules indicate that a cluster name by itself will not namespace a container; that is, a container named in two clusters will use the same instance. This is because clusters are not tied one-to-one with ‘modes’ (where different containers would be desirable in different clusters); it is quite possible that different clusters will want to use the same instance of a container. If this isn’t what you want, be sure to opt in to a group override in each cluster (and see the following note).

Automatic group opt-in

Since the majority of use-cases (thus far) usually associate a cluster with a different environment, decking will automatically opt in to a group with the same name as a given cluster, if one exists. For example, a cluster named dev will look for a group named dev and use its overrides (and namespacing) if it exists. This eliminates needless configuration like this:

"clusters": {
  "test": {
      "group": "test",
      "containers": ["nfprocessor", "nfconsumer"]
  },
  "dev": {
      "group": "dev",
      "containers": ["nfprocessor", "nfconsumer"]
  }
}

Which, assuming groups named dev and test exist, can simply be written as:

"clusters": {
  "test": ["nfprocessor", "nfconsumer"],
  "dev": ["nfprocessor", "nfconsumer"]
}

Thus if you find your clusters accidentally opting in to group overrides, simply ensure they have different names.
