Containers are revolutionizing the way we develop software by allowing more to be packed onto a single machine and making it easier to build portable software that can be continuously redeployed. They've become an essential technology for microservices and auto-scaling applications and are now a staple in many continuous integration/continuous delivery (CI/CD) pipelines. While containers bring unique security benefits compared with traditional infrastructure, they also introduce risks that software teams need to understand and mitigate.

Containers isolate software from its environment and ensure that it works uniformly despite differences between development and staging environments, regardless of the underlying infrastructure. A container is built from a snapshot of a filesystem containing the application code, its dependencies, and some metadata about the application. When the application in the container is executed, the filesystem snapshot is mounted as an overlay and set as a virtual root filesystem. The application metadata is then used to launch the specified command inside this virtual root filesystem overlay.

The writable layer of the overlay is ephemeral, meaning that changes made to this virtual filesystem are discarded once the container is removed rather than persisting after the application finishes execution.
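This ephemeral behavior is easy to observe with the Docker CLI. The sketch below assumes Docker is installed locally; "alpine" is simply a small example image:

```shell
# Write a file inside a container, then let the container exit.
docker run --rm alpine sh -c 'echo "temporary data" > /scratch.txt'

# Start a fresh container from the same image: /scratch.txt is gone,
# because each run gets a new writable layer on top of the read-only
# image snapshot.
docker run --rm alpine ls /
```

The `--rm` flag removes the container (and its writable layer) as soon as the command finishes, which is what discards the write.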

The concept of a container is similar to a virtual machine in that both mount filesystem snapshots as root filesystems and then execute some code within those snapshots.

However, a container does not require an entire operating system within it - only the application's dependencies are packaged in the container. Also, unlike a virtual machine, a container does not require memory to be pre-allocated; it uses only the memory the application itself consumes. A container also avoids the performance penalty associated with virtualization because no hardware virtualization is necessary. The result is that containers are much more efficient than virtual machines. This efficiency translates directly into reduced operating costs, especially for large-scale applications.

What is Docker? Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in containers.

With Docker, you can package your microservice into a container and then deploy it on any host that has Docker installed, or on any of the multitude of services that can run Docker containers for you. In practice, there may be some compatibility issues between hosts running different operating systems, but these tend to diminish as platform support matures. When creating a container, all the dependencies are automatically packaged together with your microservice. The resulting workflow is quicker, more intuitive, and more scalable than installing microservices on individual hosts.
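As a concrete illustration, packaging a microservice typically means writing a Dockerfile. This is a minimal sketch assuming a Node.js service; the base image, file names, and port are illustrative, not prescriptive:

```dockerfile
# Base image supplies only the runtime, not a full operating system.
FROM node:20-alpine
WORKDIR /app

# Dependencies are installed at build time and baked into the image snapshot.
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Run as an unprivileged user rather than root.
USER node

# Metadata: the port and the command to start inside the container.
EXPOSE 8080
CMD ["node", "server.js"]
```

Building this with `docker build` produces an image that can be run unchanged on any Docker host.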

From a security perspective, using containers reduces the attack surface, since each application no longer requires a separate host, and containers themselves are generally read-only. A container only runs while the application is running, which prevents attackers from gaining persistence if the application is compromised. In other words, even if an attacker can compromise a microservice inside a container, as soon as the microservice finishes running, the container's filesystem will be unmounted, and any backdoors the attacker tried to install will typically disappear.

When using containers, it's crucial to configure the application metadata correctly. Configure any volumes or other data storage used by the microservice so that they cannot hide malicious code that the application will automatically load. By default, the application cannot permanently change code within the container at run time, and teams should preserve this property by not allowing changes to the container itself. If you are hosting your containers, you must also configure the container system correctly: use strong authentication credentials and access controls, prevent administrative access from untrusted networks, and regularly install updates for the container system.
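Several of these protections can be enforced at launch time with standard Docker run options. A hedged sketch, where "myservice" is a hypothetical image name:

```shell
# --read-only                          make the container filesystem immutable
# --tmpfs /tmp                         give the app a scratch area discarded on exit
# --cap-drop ALL                       drop all Linux capabilities the app does not need
# --security-opt no-new-privileges     block privilege escalation via setuid binaries
# --user 1000:1000                     run as an unprivileged user instead of root
docker run --rm --read-only --tmpfs /tmp --cap-drop ALL \
  --security-opt no-new-privileges --user 1000:1000 myservice
```

With `--read-only` in place, even a compromised application cannot modify its own code inside the container.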

If you are hosting microservices yourself, you should harden the infrastructure used to host them. The exact procedures for setting up a host depend on the system type, the specifics of your infrastructure, your organization's policies and requirements, and other factors, but some general guidelines apply. Microservices are often deployed to run within containers. If your microservices use containers, it is recommended to use an operating system designed for hosting containers to improve container security. There are many mainstream Linux distributions intended for this purpose.

To reduce the attack surface:

  • Remove unused software and disable unnecessary features.

  • Configure the settings on each host according to the defined configuration guides.

  • Disable all unused user accounts.

  • Reduce the privileges of each user account to the minimum required for its intended use.

  • Configure all access control lists to provide no more than the minimum access needed for the system to function correctly.

  • Configure logging so that logs are forwarded to a Security Information and Event Management (SIEM) system.

  • Configure backups for all your data and be sure to comply with any relevant policies and data retention and disposal requirements.

  • Install security updates regularly (and ideally automatically), including the operating system and any installed applications.

  • Encrypt data at rest, use robust algorithms implemented by trusted code, and manage keys according to your organization's policies.
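A few of the steps above can be sketched as commands on a Debian/Ubuntu host. These are illustrative assumptions - the account name and packages shown are examples, not a checklist, and the commands require root privileges:

```shell
# Disable an unused account and remove its shell access.
sudo usermod --lock --shell /usr/sbin/nologin olduser

# Remove software that should not be present on a container host.
sudo apt-get purge -y telnetd rsh-server

# Install security updates automatically.
sudo apt-get install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
```

Equivalent mechanisms exist on other distributions (for example, dnf-automatic on Fedora/RHEL-family systems).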


About Lisa Parcella, Vice President of Product Management and Marketing

With a background in security awareness, product management, marketing communications, and academia, Lisa leverages her vast experience to design and deliver comprehensive security-focused products and educational solutions for the company’s diverse client base. Before joining Security Innovation, Lisa served as Vice President of Educational Services at Safelight Security. She holds a B.A. from the University of Vermont and an M.A. from Boston College. Connect with her on LinkedIn.