Docker images are not just artifacts, they’re part of your software supply chain. They can be copied, scanned, and exploited, putting the whole system at risk.
Once a compromised image reaches production, the damage goes far beyond a single CVE: stolen credentials, privilege escalation on the host, or data exfiltration through mounted volumes. These attacks often start from something as trivial as an outdated base image or a misconfigured Dockerfile.
In this article, we’ll go through 11 proven Docker image security practices you can immediately apply in your Dockerfiles, CI/CD pipeline, and Kubernetes nodes: from minimizing attack surface and dropping privileges to generating SBOMs, verifying image provenance, and hardening the host.
Table of Contents
- Keep your Container Images Lean
- No Package Manager if Possible
- Run as Non-root and with Minimum Privileges
- Don’t Tweak Containers in Prod
- Strive Towards Deterministic Container Images
- Implement an SBOM and Provenance
- No Do-It-Yourself Approach for Base Images
- Use LTS Versions and Update Base Image Regularly
- No Secrets in Containers
- Use Security Scanners
- Implement Host Hardening
- Conclusion
- FAQ: Docker Image Security
Keep your Container Images Lean
The smaller the attack surface in your production image, the better. However, the attack surface is not the sum of all components in your image; rather, it is the sum of attack vectors, that is, potentially exploitable paths. Therefore, keep the number of components sitting in the direct execution path to a minimum.
Depending on your app, consider using images based on a minimal Linux distribution, distroless images, or scratch. Use multi-stage builds to transfer the app into a final base image without unnecessary components.
Here’s an example of a multi-stage Dockerfile for Java. It uses Liberica Runtime Container with JDK to build the application; then, we copy the JAR file into a final image based on Liberica Runtime Container with JRE and run it:
FROM bellsoft/liberica-runtime-container:jdk-25-musl AS builder
WORKDIR /app
COPY my-java-app /app/my-java-app
RUN cd my-java-app && ./mvnw package
FROM bellsoft/liberica-runtime-container:jre-25-musl
WORKDIR /app
COPY --from=builder /app/my-java-app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
And here’s an example for Go that uses distroless as a final base image:
FROM golang:1.24.4 AS builder
WORKDIR /app
COPY go.mod main.go ./
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -o /hello
FROM gcr.io/distroless/static-debian12
WORKDIR /
COPY --from=builder /hello /hello
ENTRYPOINT ["/hello"]
No Package Manager if Possible
A package manager in production kills immutability and widens the attack surface.
If someone installs packages in production, the container no longer matches the image you built, scanned, and signed. As a result, an SBOM, signature, and provenance lose their power. Plus, you can’t reproduce or roll back cleanly.
Attackers can use the package manager to install specific tools for malicious activities, especially if you run your container as root or with additional capabilities. In addition, it is easier to install packages with CVEs.
Therefore, opt for a slim final base image without a package manager if possible.
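As a quick sanity check (a sketch; `my-app:latest` is a placeholder for your image), you can try to invoke common package managers inside the final image. In a distroless or scratch-based image, the command fails because neither a shell nor the package manager binaries are present:

```shell
# Attempt to locate common package managers inside the final image.
# In a distroless/scratch image this fails: there is no shell to run
# and no apk/apt-get/dnf binaries to find.
docker run --rm --entrypoint sh my-app:latest -c 'command -v apk apt-get dnf' \
  || echo "No shell or package manager found"
```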
Run as Non-root and with Minimum Privileges
Running the container as root or giving it a full set of privileges gives intruders host-level access to container resources and the ability to mount kernel attacks.
Therefore, Kubernetes and Docker security standards strongly advise running containers as non-root or rootless to limit the blast radius if the container is compromised.
So, set an unprivileged user and group in the Dockerfile to run the container; if you don’t set the user, Docker runs the container as root by default. You can specify a numeric UID and GID directly:
USER 1234:1234
Or create a named user and group:
RUN groupadd -r myuser && useradd -r -g myuser myuser
USER myuser
As for privileges: even if you configure your containers to run as non-root or rootless, do not run them with the --privileged flag unless you truly need it, because it grants ALL Linux kernel capabilities to the container.
Limit the granted capabilities to only those the container needs. Drop everything first with --cap-drop all just to be sure, then add back the required capabilities with --cap-add:
docker run --cap-drop all --cap-add <required-capability>
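For example (the image name is a placeholder), a web server that binds to port 80 as a non-root user typically needs only the NET_BIND_SERVICE capability:

```shell
# Drop all capabilities, then grant only the ability to bind to
# privileged ports (below 1024).
docker run --cap-drop all --cap-add NET_BIND_SERVICE my-web-app:latest
```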
Don’t Tweak Containers in Prod
The container must correspond to the container image. It means no patches, config tweaks, or quick code fixes in production. If something changes, you rebuild and redeploy the new artifact.
But what does it have to do with security?
- Runtime integrity: the SBOMs and signatures actually describe what’s running in prod.
- Easier rollback: if something suspicious happens, you simply kill the process and redeploy.
- Fewer opportunities for tampering: the absence of a package manager and restricted privileges give attackers fewer mutation paths.
There are several approaches that can help you build more secure images, including:
- Multi-stage builds, where the toolchain is used only in the builder.
- Slim final base images without the package manager.
- The runtime settings that prevent privilege escalation.
In addition, store stateful data like databases separately and don’t persist data in the container. You can use the --mount or the --tmpfs flag to keep writable files out of the container’s filesystem:
--tmpfs <mount-path>
or
--mount type=tmpfs,dst=<mount-path>
To prevent privilege escalation at runtime, you can use the --security-opt no-new-privileges option:
--security-opt no-new-privileges
You can also run the container with the read-only filesystem if your setup allows you to do so:
--read-only
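Putting these runtime settings together, a hardened invocation might look like this (the image name, user IDs, and the tmpfs path are placeholders):

```shell
# Read-only root filesystem, an in-memory /tmp for scratch data,
# no privilege escalation, non-root user, and no capabilities.
docker run \
  --read-only \
  --tmpfs /tmp \
  --security-opt no-new-privileges \
  --cap-drop all \
  --user 1234:1234 \
  my-app:latest
```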
Strive Towards Deterministic Container Images
Determinism should go hand in hand with the aforementioned approach. A deterministic image means that, given the same input, the build produces the exact same bytes. This way, you can detect tampering or a dependency change between build runs. It also helps to eliminate Time-of-Check to Time-of-Use (TOCTOU) vulnerabilities, where an attacker manipulates a resource in the interval between the moment the system checks its state and the moment it uses it.
How do we get deterministic images?
Pin everything: toolchain, build system, and OS version. Importantly, pin the base image by digest, not by tag, because tags can drift, but the digest always points to the image you used in the first place.
For example, instead of pinning Liberica Runtime Container by version:
bellsoft/liberica-runtime-container:jre-25_37-slim-musl
You should pin it by digest and specify the version in a comment:
#base image version: bellsoft/liberica-runtime-container:jre-25_37-slim-musl
bellsoft/liberica-runtime-container@sha256:5646cf896dafe95def30420defa8077fc8ee71ef5578e2c018c2572aae0541e2
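To find the digest behind a tag in the first place, you can ask the registry, for example:

```shell
# Print the manifest digest the registry currently serves for the tag.
docker buildx imagetools inspect bellsoft/liberica-runtime-container:jre-25_37-slim-musl

# Alternatively, after pulling the image locally:
docker inspect --format '{{index .RepoDigests 0}}' \
  bellsoft/liberica-runtime-container:jre-25_37-slim-musl
```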
Implement an SBOM and Provenance
One more powerful approach to increasing the security of container images is integrating a Software Bill of Materials, or an SBOM. It is a document in machine- and human-readable format that provides data on the components, libraries, and modules used to build a given piece of software. Basically, an SBOM says what’s inside your container image.
An SBOM includes direct and indirect dependencies on open source and proprietary software. The data on each component includes, but is not limited to, its name, version, licence, and unique identifiers, if any.
An SBOM prevents hidden vulnerabilities from nesting in your project and accelerates CVE remediation because you can easily map a new CVE to the affected image. An SBOM is the first line of defence against software supply chain attacks. No wonder that many legislations demand an SBOM adoption.
There are several open source tools for generating SBOMs: Syft, CycloneDX Generator, for instance. Optionally, you can generate an SBOM at the pre-build stage to check the dependency state. But generating an SBOM at the build stage is a must. In this case, you will get the list of exactly what you ship and run.
Here’s an example of generating an SBOM for a container image:
syft $IMG --output cyclonedx-json=oci-sbom-syft.json
If you ship a Java application, this article describes generating an SBOM for Java apps using various approaches and tools in more detail.
However, producing an SBOM is not enough. You also need verifiable proof of software provenance and proof that the SBOM indeed reflects the contents of the image.
Provenance is a set of processes aimed at proving where the container image came from by providing metadata about where, when, and how the artifact was built. This data includes builder identity, source repo commit, build steps, and so on.
In other words, provenance together with an immutable digest lets you verify: “This image was built by our CI from commit X with builder Y.” This helps to avoid tag-swap and time-of-check to time-of-use (TOCTOU) vulnerabilities.
Provenance processes and tools are defined by the Supply-chain Levels for Software Artifacts (SLSA) framework.
So, how do we combine SBOMs and provenance? In comes the attestation! It is a cryptographically signed statement about some property of an artifact, tied to its immutable digest. In other words, it is metadata that consumers can verify independently of the software producer.
The workflow can look as follows:
- Generate an SBOM and implement provenance for every image.
- Store an SBOM and provenance data as attestations tied to the image digest.
- Sign images.
- At deploy, verify signature and provenance.
You can use Cosign for attestation. For instance, let’s attest the SBOM we generated earlier:
cosign attest --predicate oci-sbom-syft.json --type cyclonedx $IMG
The software consumer then verifies the signature of the artifact and the required attestations (provenance, SBOM) in their CI/CD. Depending on the verification results, you can:
- Accept the image and deploy it;
- Quarantine it for manual review if, for instance, you detect an unverifiable signer, a stale SBOM, or medium vulnerabilities with compensating controls;
- Reject the image if it is unsigned, or you detect a wrong identity/issuer, a missing or forged attestation, a wrong source/ref, or critical vulnerabilities for which patches are available but not applied in the image.
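With Cosign, the consumer can verify the SBOM attestation before deploying (the identity and issuer variables are placeholders for your CI’s expected signing identity):

```shell
# Verify the CycloneDX SBOM attestation tied to the image digest.
cosign verify-attestation \
  --type cyclonedx \
  --certificate-identity "$EXPECTED_IDENTITY" \
  --certificate-oidc-issuer "$EXPECTED_ISSUER" \
  "$IMG"
```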
No Do-It-Yourself Approach for Base Images
Quite a few teams build their images on some random base. Unfortunately, even if such base images don’t cause any performance regressions in your project, they introduce severe security risks:
- CVE blind spots, late or no patches for various community-driven components;
- No provenance and weak trust;
- No vendor-backed SLA;
- No compliance with the legal requirements to use trusted base images.
Therefore, opt for a well-maintained, regularly-updated base image from a trusted vendor. At BellSoft, we provide container images for Java applications based on the products we develop and support: minimalistic Alpaquita Linux and Liberica JDK. The images come with an SBOM and can be easily verified by the checksum.
Pin the base image by digest and verify signatures and provenance before building the application, for instance, using the Cosign tool mentioned in the section above:
cosign verify \
--certificate-identity "$EXPECTED_IDENTITY" \
--certificate-oidc-issuer "$EXPECTED_ISSUER" \
<chosen-image>@sha256:<PINNED_DIGEST>
Use LTS Versions and Update Base Image Regularly
Consider using base images with LTS software versions. Long-Term Support (LTS) software releases are supported for several years and have clearly defined security update policies, with security patches and critical fixes backported from the newest versions. Furthermore, fewer vulnerabilities appear in LTS versions than in non-LTS ones.
Below you can find the support cycles for several operating systems and programming languages:
- Node.js even-numbered releases come out every April, enter LTS later the same year, and receive security patches and critical fixes for about 2.5 years.
- Java LTS versions are released every two years, the support period depends on the JDK vendor.
- Ubuntu LTS versions are released every two years and receive patches for 5 years.
- Alpine doesn’t have a fixed release cycle, but usually, the edge release is snapshotted every 6 months and receives security patches for two years.
- Alpaquita LTS versions come out every two years and are supported for four years.
Regardless of whether you use LTS or non-LTS software versions, if you don’t update the base image, you accumulate known vulnerabilities and extend the attack surface. For example, the OS layer gets new CVEs constantly.
The good news is that reliable vendors also update their images constantly. The not-so-good news: when a vendor releases an update for your base image, it won’t get updated in your builds auto-magically.
Therefore, set up automated update monitoring with tools such as Dependabot or Renovate. After that, you must rebuild, rescan, and resign the image so your SBOM and signatures reflect the patched contents. Otherwise, your trusted artifact can no longer be considered trusted.
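A minimal staleness check in CI might look like this (a sketch; the image tag is an example, and it assumes the base image digest is pinned in the Dockerfile):

```shell
# Compare the digest pinned in the Dockerfile with the digest the
# registry currently serves for the same tag; fail if they differ.
PINNED=$(grep -oE 'sha256:[a-f0-9]{64}' Dockerfile | head -n1)
LATEST=$(docker buildx imagetools inspect \
  bellsoft/liberica-runtime-container:jre-25_37-slim-musl \
  --format '{{.Manifest.Digest}}')
if [ "$PINNED" != "$LATEST" ]; then
  echo "Base image update available: rebuild, rescan, and resign" >&2
  exit 1
fi
```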
No Secrets in Containers
A container image is a distribution artifact. Anything you store there can be cached, copied, and extracted, so secrets baked into the image are a serious security risk. Use dedicated secret management tools like HashiCorp Vault, Google Secret Manager, or Kubernetes Secrets to store secrets, and fetch them at runtime.
Also, prefer files over environment variables, because the latter easily leak to child processes, crash logs, and so on. Files, in contrast, support the principle of least privilege: they can be mounted read-only to a non-root process with locked egress.
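As a sketch of the files-over-env approach (the paths and image name are placeholders), a secret can be mounted into the container as a read-only file instead of being passed with -e:

```shell
# Mount the secret as a read-only file; the application reads
# /run/secrets/db_password at startup instead of an env variable.
docker run \
  --mount type=bind,src=/etc/myapp/db_password,dst=/run/secrets/db_password,readonly \
  --user 1234:1234 \
  my-app:latest
```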
Use Security Scanners
Scanners are automated tools that can analyze the application code, a container image, configs, SBOMs, etc. to detect security flaws. If a Software Bill of Materials answers the question “What’s in the image?” the scanners answer “How risky is it?”
New CVEs appear daily, and scanners map them to your artifacts automatically using an SBOM inventory.
But some risks aren’t related to CVEs. They include writable file systems, dangerous capabilities, bad permissions, or exposed secrets in env/files. Scanners or lint tools can catch such risks as well.
Scanners vary in purpose, so you can use them at different stages of the software lifecycle.
The table below summarizes the deployment stages where scanners can be useful and provides several scanning tools fit for the purpose:
| Stage | Process | Scanners |
| --- | --- | --- |
| Pre-commit / pre-PR | Scan for secrets to stop credentials from landing in Git | Gitleaks, TruffleHog |
| CI build | Analyze the dependencies by scanning the image and/or an SBOM | OSV-Scanner, Trivy, Grype, Clair |
| Post-publish | Perform continuous rescans of pushed images and/or SBOMs as advisories update | |
| Runtime | Monitor for malicious activities; scan nodes/clusters to flag running workloads when new CVEs land | Trivy Operator, Falco |
Scanners supply evidence; policies pull the trigger. You should define policies for acting upon the scanning results: for instance, block or allow the build/deploy, produce an additional artifact, perform a manual review, issue notifications of suspicious activity, etc.
Implement Host Hardening
Containers share the host kernel. If the kernel is vulnerable, everything on it is vulnerable. So, host hardening is also an indispensable part of container security enhancement.
You can use a dedicated OS for Kubernetes nodes, such as Bottlerocket, or a minimal OS such as Alpaquita LTS. The key is that the system should be immutable and free of spare components. Update the kernel on a regular basis.
Also, enable Linux Security Modules (LSMs) such as AppArmor or SELinux to set the required security policies. When a risky kernel operation is attempted, the LSM decides whether to allow or deny it according to the defined policies. Combine it with seccomp (a syscall filter) and capabilities to enforce least privilege.
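For example (the profile name and path are placeholders for your own hardened profiles), Docker lets you apply both per container:

```shell
# Apply an AppArmor profile and a custom seccomp profile to a container.
docker run \
  --security-opt apparmor=my-restricted-profile \
  --security-opt seccomp=/path/to/seccomp-profile.json \
  my-app:latest
```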
Conclusion
In this article we looked into the key principles of securing Docker container images. Here’s what we covered:
- Keep container images lean and deterministic,
- Use a trusted base image,
- Prove the contents with an SBOM and provenance,
- Implement security scanners to monitor for CVEs and other security flaws,
- Harden the Linux Host.
Each of the practices described here can be expanded into a separate article. Therefore, we will continue to explore the topic in the following articles. Subscribe to our newsletter so as not to miss them!
FAQ: Docker Image Security
How do I generate an SBOM for a Docker image?
You can generate an SBOM for a Docker image using open source tools such as Syft or CycloneDX Generator. You can use a Maven or Gradle plugin to generate an SBOM at build time or use the tool to generate an SBOM for a ready container image. Here’s an example using Syft:
syft $IMG --output cyclonedx-json=oci-sbom-syft.json
Should containers run as root?
Docker and Kubernetes security standards do not recommend running containers as root because it can give malicious actors host-level access to container resources and increase the severity of attacks.
How often should I rebuild my base image?
The base image should be rebuilt every time an updated version is released. You can automatically track updates with open source tools such as Dependabot or Renovate.
What is image provenance and why should I care?
Provenance is a set of processes that enable the software consumers to verify that the container image is trustworthy and wasn’t tampered with by providing metadata about where, when, and how the container image was built. Provenance increases the transparency of software and helps to ensure its integrity, which is critical for protecting the software supply chain.