Raise your hand if, at some point during the last few months, a conversation about cloud services has turned into a conversation about containers. Groups from all kinds of trades and markets want to know more about them: how they work, what they mean for other cloud services, and how best to use them. To make you wiser, we wrote this blog post to help you get a better picture of the concept.
Everyone in the IT world has heard of virtualization and, more specifically, virtual machines. However, virtual machines are not the only way to virtualize hardware. Virtual memory is used by almost every operating system (OS), and containers are yet another application of virtualization.
Modern containers present the application inside them with what appears to be a fully functional, isolated OS. By isolating processes and resources to a unique container, combined with boot times far quicker than virtualizing a full OS, containers create a perfect environment for development and testing.
Since the conditions and resource handling are exactly the same no matter which environment hosts the container, any application running inside it behaves the same everywhere. It doesn't matter if it's in development, testing, QA, production or anywhere else.
Why containers over virtual machines
So why should you use containers instead of regular virtual machines? Well, fully isolating a virtual machine requires its own OS files, libraries and source code – not to mention a full in-memory instance of the OS itself. Every machine you add also adds an OS instance and its memory allocation, so each new machine consumes resources just to be able to run, putting an ever bigger strain on the system.
If you instead use containers, they share the host OS, including the kernel and libraries. This means you don't have to boot a new OS or reserve memory for the files associated with one. The only thing needed to add more containers is the memory and disk space required by the application inside. The application, however, still believes it's running on its own dedicated OS. Because of this, you can run many more instances on the same host than you could with virtual machines.
The ideas and technologies behind containers are not new. One major driver of the newfound interest is Docker, an open source technology launched by the Docker project in 2013. With it, the market found a new standard for tooling, packaging and deployment. In addition, Docker has made distributing applications inside containers easy using what are called Docker images. These images run in exactly the same way on every kind of device and environment, as long as it supports Docker. For example, an image runs the same way on a private laptop as on production servers.
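To make this concrete, here is a minimal sketch of the Dockerfile such an image could be built from. The base image, file names and start command are illustrative assumptions, not taken from any particular application:

```dockerfile
# Start from an official, minimal base image (illustrative choice)
FROM python:3.12-slim

# Copy the application into the image
WORKDIR /app
COPY . .

# Bake the dependencies into the image, so every environment is identical
RUN pip install --no-cache-dir -r requirements.txt

# The command the container runs on start
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` produces an image that carries the application and everything it needs, which is why it behaves the same on a laptop as on a production server.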
With fast deployment and integration, Docker and DevOps fit together like peanut butter and jelly. As a result, more and more teams publish their applications on Docker Hub, the open registry for Docker images. To ensure that the format stays universal, Docker, Microsoft and a number of other companies started the Open Container Initiative (OCI), which strives to create open standards for container formats.
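Publishing an image to Docker Hub follows a short, standard workflow with the Docker CLI. The image name `myapp` and the account name `youruser` below are placeholders:

```
# Build the image from a Dockerfile in the current directory
docker build -t myapp .

# Tag it with your Docker Hub account name
docker tag myapp youruser/myapp:1.0

# Log in and push the image to the registry
docker login
docker push youruser/myapp:1.0

# Anyone can now pull and run it on any machine with Docker installed
docker run youruser/myapp:1.0
```

Because the tag encodes both the account and a version, consumers can pin to `youruser/myapp:1.0` and get the exact same image everywhere.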
Microsoft and Azure Container Service
Docker is not alone on the market, and Microsoft has worked hard to make container services available to everyone. The source code for Azure Container Service is open to the public, and the benefits are being embraced across the entire company. As Corey Sanders, Microsoft Director of Compute for Azure, puts it:
"Containers are the next evolution in virtualization, enabling organizations to be more agile than ever before."
Have you already started using containers? What would you like to know more about? Let us know in the comments or send us an email at [email protected]!