Containerization and How It Can Be Useful for IT Automation

Containerization accelerates the software delivery process. It is one of the key tools in the toolbox of high-velocity software development, changing how modern software is developed, tested, deployed, and scaled to meet demand. In this article we explore:

  • How does containerization compare to traditional virtual machine technology?
  • How can containerization be utilized to automate development tasks and address some of the common issues in high-velocity software development?
  • What are some popular choices available for containerization technology?

What is Containerization?

Software containerization is a specialized form of virtualization. In essence, container technology sits between traditional chroot isolation (a Unix mechanism for restricting the portion of the file system a process can see) and full-fledged virtual machines (VMs), which run a complete guest operating system, with its own kernel, user space, and file system, on top of the host environment.

Another way to think about a container is as a single unit of software. Each container is a completely isolated, stand-alone unit containing all the code, dependencies, runtimes, system libraries, and settings it needs to run and complete tasks – and only these things.
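As a sketch of that idea, consider a minimal Dockerfile for a hypothetical Python web service (the file names here are assumptions for illustration). It declares exactly what the container ships with, and nothing more:

```dockerfile
# Minimal base image: just the language runtime on a slim OS layer
FROM python:3.12-slim

WORKDIR /app

# Install only the dependencies the application declares
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself
COPY . .

# The single command this container exists to run
CMD ["python", "app.py"]
```

Everything the service needs is listed in this one file; anything not listed simply is not in the container.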

Traditional Virtual Machines

Containers encapsulate many of the benefits of virtual machines and virtual hosted environments. They provide isolation, hardware independence, and resource management comparable to traditional VMs without the usual overhead, though because containers share the host kernel, the security boundary they provide is not quite as strong as a hypervisor's.

The key difference between containerization software and traditional VMs is the use of a single operating system to orchestrate the software rather than relying on multiple isolated, full-fledged operating systems. In traditional virtual machine applications, an entire guest operating system is utilized on top of an existing host operating system. However, with container technology, the single host operating system is leveraged to manage the lighter-weight virtualized environments.

Containers operate at a different level than traditional virtual machines. While VMs virtualize the hardware available to them, containers virtualize the operating system, a step removed from the hardware. Containers are operating-system-level virtualization, whereas traditional VMs are hardware virtualization.

Many different application containers can run on the same underlying operating system, using fewer resources than if each had to virtualize everything from the operating system up. As a result, the same set of resources can host more containerized applications than traditional VMs.

Efficiency and Reusability

As described in the previous section, containers lack the overhead associated with traditional virtualization environments. They do not require the same resources as fully dedicated machines.

Containers also use a layered file system, such as OverlayFS or AuFS, which stacks read-only image layers beneath a thin writable layer. This makes it easy to reuse common components between different containers and saves space, since shared layers are stored once and used by all containers, whereas with traditional VMs that space would be claimed by each individual instance.
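Assuming a local Docker installation, the layer sharing is easy to observe with standard commands:

```shell
# List the read-only layers an image is built from
docker pull python:3.12-slim
docker history python:3.12-slim

# A second image built FROM python:3.12-slim reuses those same
# layers on disk; only its own top layers take additional space.
```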

Containers also allow components to be reused quickly in different contexts with little to no change required. For instance, with Docker Compose, an entire application stack can be stood up with a single command on a development machine or in the cloud.
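For illustration, a hypothetical docker-compose.yml for a two-service stack (the service names, ports, and password are assumptions for the example) might look like:

```yaml
services:
  web:
    build: .             # build the app image from the local Dockerfile
    ports:
      - "8000:8000"      # publish the app's port to the host
    depends_on:
      - db
  db:
    image: postgres:16   # reuse the community PostgreSQL image as-is
    environment:
      POSTGRES_PASSWORD: example
```

With this file in place, `docker compose up` brings up the entire stack with one command, on a laptop or in the cloud.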


Containers allow for snapshots to be taken. An image is used to create a container initially, but once running, a container's current filesystem state can be captured as a new image, which can then be layered and used to create other containers. These images can be shared between developers easily, allowing for quick hand-offs of work in progress.
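A typical hand-off might look like the following, assuming Docker is installed and `myteam/work-in-progress` is a hypothetical repository the team can push to:

```shell
# Start a container from a base image and work inside it
docker run -it --name scratchpad ubuntu:24.04 bash
# ... install packages, edit files, exit ...

# Snapshot the container's filesystem state as a new image
docker commit scratchpad myteam/work-in-progress:latest

# Share it; a colleague can continue from the exact same state
docker push myteam/work-in-progress:latest
```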


Speed

Containers are faster than traditional software delivery methods not only because of the reduced overhead but also because modern operating systems support them directly. For instance, the Linux kernel provides namespaces and control groups (cgroups), the low-level primitives on which container tooling such as LXC/LXD and Docker is built, allowing containerized processes to run at near-native speed.

Being lighter in general, containers cold start in a matter of seconds whereas traditional VMs can take on the order of minutes to start.
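On a machine with Docker installed, this is simple to see for yourself:

```shell
# Once the image is cached locally, starting and tearing down a
# container typically takes well under a second
time docker run --rm alpine true
```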


Consistency

Containers provide a way to write once and run anywhere. They provide the same consistent runtime environment regardless of where they are deployed. When developers create software and test it on their local machines, they get the same behavior when that software runs in production, test, staging, or any other environment.

Isolation and Sandboxing

Containers are fully isolated environments. They only have access to resources internal to the container itself, plus any resources explicitly granted to them, such as ports published through to the host machine's network.
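For example, assuming Docker is available, a containerized web server is unreachable from the host until its port is explicitly published:

```shell
# Publish container port 80 on host port 8080
docker run -d --name web -p 8080:80 nginx

# Requests to the host's port 8080 are now forwarded into the container
curl http://localhost:8080

# Without the -p flag, nothing outside the container could reach it
```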


Portability

Containers are supported on a variety of host systems. They can be run in the cloud or on a local machine, and even inside other virtualized systems. For instance, Docker is supported on Windows, Mac, and Linux, as well as on AWS, Azure, and other cloud platforms.


Community and Sharing

The reusable nature of containerization technology has naturally formed its own sharing community. With Docker, for example, there is a wealth of packages available for reuse on Docker Hub. These are developed by the community and shared for anyone to use. It is simple to pull an image from the service and reuse it in an application or as part of an application's dependency stack. For example, database images are available that can be set up and running with a few commands, meaning it is no longer necessary to install a database on the host system, which can be troublesome for developers working in different environments.
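As an illustration, assuming Docker is installed, a disposable PostgreSQL instance can be running in moments (the password here is a placeholder for the example):

```shell
# Pull and start a ready-made PostgreSQL server from Docker Hub
docker run -d --name dev-db \
  -e POSTGRES_PASSWORD=devpassword \
  -p 5432:5432 \
  postgres:16

# The database is now reachable on localhost:5432 with no local
# PostgreSQL installation on the host
```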

It is also possible to push images back to the community at large. This give-and-take leads to a reusable infrastructure that makes it simple for anyone to pull components directly off the shelf.


Popular Containerization Technologies

There are many container technologies currently available from which to choose. A short list of popular options follows:

  • Docker (container platform and runtime)
  • Kubernetes (container orchestration)
  • OpenShift (Kubernetes-based container platform)
  • runC (low-level container runtime)
  • rkt


Conclusion

Regardless of the application being developed, containers can be utilized to speed up many parts of the development process: they make it easy to set up a development environment, simple to share components with others, and possible to run many more concurrent virtualized components on a system than traditional virtualization techniques and technologies would allow.

Container technology is consistent and repeatable, removing the common problems caused by subtle differences between deployment environments and eliminating the classic "works on my machine" excuse. The popularity of these technologies also means it is easy to find and reuse work done by others in the community, reducing the overall effort required to get your product out the door.

Whatever your application, you stand to benefit greatly from the use of container technology. Containerization opens the door to a faster and more productive software application life-cycle and should not be ignored.
