Using Docker containers for quick prototyping

Docker containers, alongside cloud computing, are big enablers of innovation. One use case we constantly face as developers is the need to quickly install and run software packages we are not familiar with, in order to explore them, create proofs of concept, or work on a spike to validate a design decision. We often need to create spike solutions to figure out answers to tough technical or design problems. That may require a working copy of a database, a messaging system, or a cluster coordinator, to name just a few. How can you quickly and correctly install those packages and start working on your project?

Another use case is running systems that are essential to your development infrastructure but are not part of the shippable product: build servers like Jenkins, quality control software like SonarQube, monitoring servers like Graphite, and so on. You don't want to spend time installing and configuring those servers; all you need is a process to install them correctly, in a repeatable and reliable fashion. This is another area where Docker containers shine.
But before you start working with Docker, you have to answer some critical questions:

  • Where do I get a Docker image for the software I am looking for?
  • Is this Docker image secure and reliable?
  • Can I trust this image?
  • How do I manage ports on the host server and the container?
  • How do I manage disk space in the Docker container?

Answering these questions is critical to a successful Docker deployment in the scenarios mentioned above.

Identifying software

Docker maintains a public repository of images called Docker Hub. Docker Hub is a registry and index with a website run by Docker Inc. It is the default registry and index used by Docker when you issue docker pull or docker push. To push your own images, you have to register on the website. You can search the registry from the command-line interface as well as from the website. Let's say you need MongoDB for your project. A quick search gives us the following results:

[Screenshot: Docker Hub search results for "mongo"]
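The same search can be run from the command line with docker search. The output below is illustrative only; exact repositories, descriptions, and star counts change over time:

```shell
# Search Docker Hub for MongoDB images from the CLI.
docker search mongo

# Illustrative output -- columns shown by the Docker CLI:
# NAME            DESCRIPTION                          STARS   OFFICIAL   AUTOMATED
# mongo           MongoDB document database...         1200    [OK]
# tutum/mongodb   MongoDB image with replica sets...   100                [OK]
```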

As you can see, there are multiple images for MongoDB. Which one should you choose? The search results include details like the number of times each repository has been starred, a flag indicating whether a repository is official (the OFFICIAL column), and a flag indicating whether the repository was built automatically (the AUTOMATED column). The Docker Hub website allows registered users to star a repository, much like other community development sites such as GitHub. A repository's star count can act as a proxy metric for image quality and for its popularity or the community's trust in it. Docker Hub also provides a set of official repositories maintained by Docker Inc. or the current software maintainers; these are often called libraries. Based on these results, I would go with the official image. For unofficial images, another column to look at is AUTOMATED. Containers are a great place for an attacker to plant malicious software: if an attacker controls how an image is built, they can cause serious harm. For this reason, images built from publicly available scripts are considered more secure. In the results of docker search, you can tell that an image was built from a public script by the [OK] in the AUTOMATED column. For detailed information on how to run an image, your best bet is to read the image description on the Docker Hub website.

[Screenshot: description page for the official MongoDB repository on Docker Hub]

Here the image maintainer explains how to run the image in different scenarios.
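As a minimal sketch (assuming the official mongo image with its default settings; the container name is my choice), getting a throwaway MongoDB instance running looks like this:

```shell
# Pull the official MongoDB image from Docker Hub.
docker pull mongo

# Start a container named "some-mongo" in the background.
docker run --name some-mongo -d mongo

# Confirm that the container is running.
docker ps
```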

Where is the storage?

The next question to ask is how you manage storage. MongoDB, for example, is a database, and you may want to keep its data on a particular drive, put its logs somewhere else, and so on, all outside the container. You want to separate the running container from the underlying storage it uses. That way, if you need to restart or update the container, your data is safe. It also makes backups easier.

Docker provides this capability with the concept of volumes. Volumes are files or folders that exist outside of a container but are available to it through mount points. A container can mount zero or many volumes, and volumes can be shared between containers and with the host itself. There are two types of volumes: Docker-managed and bind-mounted. The first are host directories created by the Docker daemon, in space controlled by the daemon. The second are file system locations on the host that are mounted into the container.

Docker-managed volumes are good when you don't care where the storage resides. You simply tell Docker to provide some storage for the application.
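A quick sketch of a Docker-managed volume: only the container path is given to -v, so the daemon picks the host location itself (the container name here is my choice):

```shell
# Give -v only a container path; the Docker daemon manages the backing storage.
docker run --name managed-mongo -d -v /data/db mongo

# Inspect the container and look for the volume/mount information in the
# output to see where the daemon actually placed the data on the host.
docker inspect managed-mongo
```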

When you pass two file system locations to the -v option, what follows is a map between them. The map separates the key and the value with a single colon, as is common with Linux-style command-line tools. The key is the location on the host file system, and the value is the location where it should be mounted inside the container. It is important to note that both locations must be specified as absolute paths. For example, MongoDB stores its data files in the /data/db directory. To keep that directory outside of the container, managed independently of it, we can use the following command:

docker run --name some-mongo -d -v /data/mongo:/data/db mongo

Here /data/mongo refers to a directory on the host file system, and /data/db is where it is mounted inside the container. As soon as you start working with the container, you will see the data stored outside of it. This gives you great flexibility in how and where you store your data.
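To confirm that the bind mount behaves as described (a sketch; the host path /data/mongo is an assumption carried over from the command above), you can watch the host directory while the container runs:

```shell
# Start MongoDB with the host directory /data/mongo mounted at /data/db.
docker run --name some-mongo -d -v /data/mongo:/data/db mongo

# The database files appear on the host, outside the container.
ls /data/mongo

# Stop and remove the container; the data in /data/mongo survives.
docker stop some-mongo && docker rm some-mongo
ls /data/mongo
```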

Another interesting use of Docker comes from Netflix OSS. The problem with NetflixOSS and other similar systems is that the platform and its related ecosystem services are extensive, which is a very large challenge for anyone trying to understand the individual parts of the platform. Another part of the challenge, according to a Netflix blog post, relates to how NetflixOSS was designed for scale: most services are intended to be set up as a multi-node, auto-recoverable cluster. While this is great once you are ready for production, it is prohibitively complex for new users who want to try out a smaller-scale NetflixOSS environment. To solve those problems, the Netflix team packaged the services together as Docker images in a project called ZeroToDocker. I think this is one of the more creative ways to use Docker: packaging related services together for evaluation purposes.

On a side note, O'Reilly just released a new book, the Docker Cookbook, which may be a good resource for putting Docker concepts into practice.


Another interesting book is Docker in Action.

