Containers are key to the modern datacenter.
There’s a lot of buzz around containers. At their core, containers provide a way for you to deploy your app with all of its dependencies. Your container runs on premises the same way it runs in the cloud.
An application running in its container has no knowledge of any other applications or processes that exist outside of its box. Everything the application depends on to run successfully also lives inside this container. Wherever the box may move, the application will always be satisfied because it is bundled up with everything it needs to run.
For developers, it means that you no longer have to say, “Well, it ran on my machine.” And it means that when you have larger apps, you can deploy in smaller chunks of code, where the dependencies do not need to cascade between teams. For IT Pros, it means that you can use your virtual machines more effectively. Instead of dedicating a virtual machine to each app, you can run multiple apps on the same VM. And when demand spikes, you can quickly scale to meet it.
The Docker team describes it this way:
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.
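To make that definition concrete, a container image is typically described in a Dockerfile. Here is a minimal, hedged sketch for a hypothetical Python app (the file names and entry point are illustrative, not from any real project):

```dockerfile
# Hypothetical example: package a small Python app and everything it needs.
FROM python:3.6-slim                 # runtime and system libraries come from the base image
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt  # app dependencies are baked into the image
COPY . .
CMD ["python", "app.py"]             # the entry point travels with the image
```

Everything the Docker description lists — code, runtime, system tools, system libraries, settings — ends up inside the image these instructions build.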
Comparing Containers and Virtual Machines
One question I have for developers and IT Pros is, “How many copies of Windows do you really need in the cloud?” Sometimes I get a snarky answer: “I use Linux.” Awesome. How many copies of Linux do you need?
Virtual machines virtualize the (uhm) machine. Containers virtualize the application. Containers run on a layer that sits on top of a host operating system. That OS is effectively virtualized. So, provided your code doesn’t reach too deep into the OS, the same application can run on Mac or Windows or Linux.
Containers: you package code and dependencies together. You can run multiple containers on the same machine, and that machine can be virtual.
VMs: virtual machines abstract the physical hardware. You can run many virtual machines on a single machine, and each VM has its own copy of an operating system plus one or more apps.
(Images from Docker)
A container is a complete file system that encompasses everything: the code, the runtime, system tools, and system libraries. It just doesn’t include the operating system itself. Each container has all the dependencies that the app needs.
Build, Deployment, and DevOps
You no longer have the challenge of working on your development machine and moving to staging and then to production only to discover it no longer works because of some dependency.
Imagine you have a specific version of Python (or .NET Core). And your app works great. Then you go to deploy your app. And you deploy it and it works great. Now there’s an update to Python. And you are working away on your next app. And now you need to go back and either rev your older code or use the older version of Python (or .NET). Your build team needs to update the entire build chain for the entire application.
Instead, you can use containers and the dependencies are all in the container.
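Containers make that pinning concrete: each app’s image names its own runtime version, so an old and a new app can run side by side on the same host without any build-chain conflict. A hedged sketch using the official Python base images (the image and container names here are hypothetical):

```shell
# The old app stays on the Python version it was built against...
docker run -d --name legacy-app  old-app:1.0     # image built FROM python:3.6
# ...while the new app uses the newer runtime, on the same host.
docker run -d --name current-app new-app:1.0     # image built FROM python:3.11
```

Neither container sees the other’s runtime, so updating Python for one app never forces a rebuild of the other.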
Docker containers offer a lot of portability. Build an application once, put it in a container image, and run it on any host environment that supports Docker, as long as the host runs the same family of operating system. This means you can host an application inside containers on Ubuntu Linux 16.04, then move that same application to a Red Hat Enterprise Linux 7.4 server without having to recompile anything. You simply move the container images over.
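“Moving the container images over” can be as simple as exporting the image to a tarball and loading it on the new host. A sketch with a hypothetical image name (in practice you would more often push to and pull from a registry):

```shell
# On the Ubuntu 16.04 host: export the image to a tarball.
docker save -o myapp.tar myapp:1.0
# Copy myapp.tar to the RHEL 7.4 host (scp, shared storage, etc.), then:
docker load -i myapp.tar
docker run -d myapp:1.0    # same image, no recompilation
```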
Docker doesn’t support porting across different families of operating systems. You can’t take a Dockerized app designed for Linux and run it on Windows, or vice versa.
You also can’t take a Docker container that was created for an x86 system and run it on an ARM machine, even though Docker does support ARM.
That said, if you have all of the supporting dependencies, such as in .NET Core, you can develop your application on your Windows 10 client and then deploy to an Ubuntu VM in the cloud.
It’s about the dependencies. More on that in a subsequent section.
First of all, each app does not really need its own operating system. Instead, each app needs what it depends on.
If you wanted to run five servers in a cluster, you can now run each of the servers with the apps and their dependencies without duplicating a full operating system for each one.
Docker containers are ideal for creating dense environments where the host server’s resources are fully utilized but not overutilized. With Docker containers you are not required to duplicate the functionality of the host operating system.
Docker doesn’t force you to allocate a given amount of resources to a container (although you can set resource quotas for individual containers if you want). This means Docker containers are able to make more efficient and dynamic use of the resources from the host. When the demand placed on one container or service decreases, the resources that it was consuming are freed to be used by other services.
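If you do want quotas, they are opt-in flags at run time. A minimal sketch, assuming a hypothetical image named `myapp:1.0`:

```shell
# Cap this container at half a GiB of RAM and 1.5 CPUs; other containers
# share whatever the host has left.
docker run -d --memory 512m --cpus 1.5 myapp:1.0
```

Without these flags, the container simply competes for host resources like any other process.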
It takes many minutes to copy a virtual machine and then start it. It takes seconds to spin up a container. This makes containers a great choice for on-demand computing.
Any service that handles additional load by increasing the number of containers of the service is considered “horizontally scalable”.
There are two deployment modes when scaling a service:
- Parallel mode (default): all containers of a service are deployed at the same time without any links between them. This is the fastest way to deploy.
- Sequential mode: each new container is deployed in the service one at a time. Each container is linked to all previous containers using service links. This makes complex configuration possible within the containers’ startup logic. This mode is explained in detail in the following sections.
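With the plain Docker tooling, one common way to scale a service horizontally is Compose’s `--scale` flag, which behaves like the parallel mode described above. A hedged sketch, assuming a Compose file that defines a hypothetical service named `web`:

```shell
# Start three identical containers of the "web" service at once.
docker compose up -d --scale web=3
```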
Application containers provide several security benefits according to Lenny Zeltser:
- Containers make it easier to segregate applications that would traditionally run directly on the same host. For instance, an application running in one container only has access to the ports and files explicitly exposed by the other container.
- Containers encourage treating application environments as transient, rather than as static systems that exist for years and accumulate risk-inducing artifacts.
- Containers make it easier to control what data and software components are installed through the use of repeatable, scripted instructions in setup files.
- Containers offer the potential of more frequent security patching by making it easier to update the environment as part of an application update. They also minimize the effort of validating compatibility between the app and patches.
Docker containers are, by default, quite secure, especially if you take care of running your processes inside the containers as non-privileged users (i.e., non-root).
You can add an extra layer of safety by enabling AppArmor, SELinux, GRSEC, or your favorite hardening solution.
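Running as a non-privileged user is a one-line change in the image itself. A minimal sketch, where the account name `appuser` and the app layout are hypothetical:

```dockerfile
FROM python:3.6-slim
# Create an unprivileged account; the name "appuser" is arbitrary.
RUN useradd --create-home appuser
USER appuser                      # processes in the container now run as non-root
WORKDIR /home/appuser/app
COPY . .
CMD ["python", "app.py"]
```

Everything after the `USER` instruction, including the running app, executes without root privileges inside the container.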
There are four major areas to consider when reviewing Docker security:
- The security of the kernel and its support for namespaces and cgroups.
- The attack surface of the Docker daemon itself.
- Loopholes in the container configuration profile, either by default, or when customized by users.
- “Hardening” security features of the kernel and how they interact with containers.
Adrian Mouat describes five security concerns when using Docker.
- Kernel exploits.
- Denial of service attacks.
- Container breakouts.
- Poisoned images.
- Compromising secrets.
He writes, “While you certainly need to be aware of issues related to using containers safely, containers, if used properly, can provide a more secure and efficient system than using virtual machines (VMs) or bare metal alone.”
Matthew Setter adds some additional considerations for how you can set up Docker images in Docker Security Best Practices:
- Image Authority.
- Excess Privileges.
- System Security.
- Limit Available Resource Consumption.
- Large Attack Surfaces.
The thought is that you need to be as mindful of security in containers as you are in virtual machines.
In addition to Docker, there are many other container solutions out there.
Containers on Microsoft
Microsoft provides two types of container solutions.
Docker on Windows
You can get started using Docker Containers on Windows. The Docker Engine on your laptop lets you run containers for either Linux or Windows.
Docker for Windows is a native Windows app integrated with Hyper-V, networking, and the file system. The integrated Docker platform and tools include the Docker command line, Docker Compose, and the Docker Notary command line.
Microsoft provides two types of container solutions: Windows Server Containers and Hyper-V Containers. The main difference between the two is that Windows Server Containers, just like Docker containers on Linux, share the kernel with the container host and the other containers, while Hyper-V Containers do not.
Windows Containers and Hyper-V Containers work the same way. But Hyper-V Containers are more isolated than Windows Containers because they run in a very lightweight virtual machine that provides kernel isolation, not just process isolation.
Hyper-V containers use Windows containers within the VM.
The only difference is the Windows container is now running inside a Hyper-V VM which provides kernel isolation and separation of the host patch/version level from that used by the application. The application is containerized using Windows containers and then at deployment time you pick the level of isolation required by choosing a Windows or Hyper-V container.
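On a Windows container host, that deploy-time choice is a single flag on `docker run`. A hedged sketch with a hypothetical Windows image name:

```shell
# Same Windows container image, two isolation levels chosen at run time:
docker run -d --isolation=process mywindowsapp:1.0   # Windows Server Container
docker run -d --isolation=hyperv  mywindowsapp:1.0   # Hyper-V Container
```

The image is built once; only the isolation level changes between the two commands.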
For more information about Hyper-V containers, see the official documentation on MSDN.