Hardened Container Images 101: What, Why, and How for DevSecOps [2025]


Transcript:

Most teams use containers, but not all of them know or control how many vulnerabilities or unnecessary packages their base image contains. Hardened container images help to solve these issues: a minimized attack surface, low to zero CVEs, an immutable component set, and continuous patching. Let's explore these and other features of hardened container images and see how to integrate them into your workflow.

Many container images ship with an arbitrary base that includes a ton of unnecessary tools such as a package manager, a debugger, a compiler, and assorted utilities. What is worse, the base image often lacks provenance, so it is hard to determine what it actually contains. The result is more than 600 vulnerabilities in a typical container image, most of them in the base image rather than in the application code.

The attack surface grows with the size of the container image. Every scan report contains hundreds of CVEs, and you either accept the risk or drown trying to patch them all. Few teams actually attempt the latter, and frankly, it is not your job to fix issues in code that you did not write.

Hardened container images are there to solve these issues. One can say that they set a new standard for container image security. The concept of hardened container images rests on four key pillars.

First, a minimalistic base. The image contains only the components your application needs to function in production. No unnecessary packages, hence a reduced attack surface.

Second, low to zero CVEs. The vendor of hardened base images performs continuous patching so the base image stays free of known vulnerabilities.

Third, an immutable component set. Hardened images cannot be modified: there is no package manager and no other way to alter them. The risk of attackers introducing malicious packages or tampering with a container at runtime is minimized.

Fourth, provenance data. The images come with a software bill of materials and a digital signature. You can always verify their origin and the components they contain.
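To make the provenance pillar concrete, here is a minimal sketch of inspecting what an image contains with Syft and scanning the result with Grype. These tools are just one option, vendors typically also publish an SBOM alongside the image, and the registry path below is a placeholder rather than an actual vendor image.

    # Generate an SBOM from the pulled image with Syft (placeholder image reference).
    syft registry.example.com/hardened/jre:21 -o spdx-json > sbom.spdx.json

    # Scan the generated SBOM for known CVEs with Grype.
    grype sbom:./sbom.spdx.json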

You might have heard of or even used distroless images. What is the difference between them and hardened base images? Distroless images are about stripping down almost all Linux components, including the shell. Image hardening, on the other hand, is about a clear CVE management process: tracking components, monitoring CVEs, and patching them.

Hardened container images integrate perfectly into DevSecOps practices, and here is why. You get reduced noise from security scanners: fewer CVEs mean fewer false-positive alerts, so the security team can focus on real issues. With hardened images standardized across the organization, you can shift security left without slowing down deployments, and developers do not have to reinvent Dockerfiles every time.

Thanks to SBOMs and digital signatures, hardened container images facilitate compliance and audits. You can say that you know exactly what you are running in production and easily prove it. The potential attack blast radius is reduced. For attackers, it is harder to introduce malicious packages or exploit known vulnerabilities because there is basically nothing in the container image they can leverage.

To sum up, hardened container images are a security baseline. You standardize deployment processes once and solve several problems right away: compliance, CVE monitoring, and patching.

Now let's look at how we can introduce hardened container images into the workflow. There are several vendors that provide hardened container images. I will use BellSoft's hardened images as an example. They are available for free on various registries and for various runtimes.

To try them out, pull the required container image. You can use Cosign to verify the image signature. Then you can run it with your application.
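As an illustration, the flow could look like this. The registry path and public key are placeholders, and the exact verification flags depend on how the vendor signs its images, so check the vendor's documentation for the real values.

    # Pull the hardened runtime image (placeholder registry path).
    docker pull registry.example.com/hardened/jre:21

    # Verify the signature with Cosign (key-based verification shown here;
    # keyless verification would use --certificate-identity instead).
    cosign verify --key vendor.pub registry.example.com/hardened/jre:21

    # Quick sanity check, assuming the image ships the java launcher on the PATH.
    docker run --rm registry.example.com/hardened/jre:21 java -version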

Suppose you have a Dockerfile and you want to migrate to a hardened base. On the screen, you can see a Dockerfile for a Java project. It uses OpenJDK as a base image, compiles, and runs the application. To use a hardened base, we need to rewrite this Dockerfile to use multi-stage builds. We build the application in the first stage and then transfer it into a fresh base image without any packages or tools used during the build. This way, the image will be smaller and free of additional utilities.
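Here is a sketch of what such a multi-stage Dockerfile could look like for a Maven-based project with the Maven wrapper checked in. The image references and the jar path are placeholders: the build stage can be any full JDK image, and the run stage is assumed to be a hardened runtime that exposes the java launcher.

    # Build stage: a full JDK image with a shell and build tooling.
    # Nothing from this stage ends up in the final image except the jar.
    FROM registry.example.com/jdk:21 AS build
    WORKDIR /build
    COPY . .
    RUN ./mvnw -q package -DskipTests

    # Run stage: the hardened runtime base with no package manager or compiler.
    FROM registry.example.com/hardened/jre:21
    WORKDIR /app
    # Adjust the jar name to your build's actual output.
    COPY --from=build /build/target/app.jar app.jar
    ENTRYPOINT ["java", "-jar", "app.jar"]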

If you already use multi-stage builds, migrating to a hardened base image is just a matter of substituting the FROM command. Remember that hardened base images are immutable, so you cannot add additional packages. It is also recommended to pin the image by digest for reproducibility.
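In that case only the FROM line of the final stage changes, for example as below; the repository and digest are placeholders.

    # Pinning the hardened base by digest makes the build reproducible:
    # the same digest always resolves to exactly the same image content.
    FROM registry.example.com/hardened/jre:21@sha256:<digest-from-the-registry>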

As far as signature verification is concerned, Cosign can also be introduced into the CI/CD pipeline, for example with GitHub Actions.
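A minimal sketch of such a step is shown below, assuming keyless signing. The registry, identity, and issuer values are placeholders that depend on how the vendor signs its images; with key-based signing, a --key flag would be used instead.

    # Hypothetical GitHub Actions job that verifies the hardened base image
    # signature before the build proceeds.
    jobs:
      verify-base-image:
        runs-on: ubuntu-latest
        steps:
          - name: Install Cosign
            uses: sigstore/cosign-installer@v3
          - name: Verify base image signature
            run: |
              cosign verify \
                --certificate-identity "https://example.com/image-signer" \
                --certificate-oidc-issuer "https://oidc.example.com" \
                registry.example.com/hardened/jre:21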

That was hardened container images 101. Leave a comment if you want me to discuss other topics related to hardened images. And as usual, do not forget to like and subscribe. Until next time.

Summary

Hardened container images address the security risks of traditional container bases by minimizing the attack surface, reducing CVEs, and providing immutable, well-defined components. They rely on continuous patching, strict CVE management, and provenance data such as SBOMs and digital signatures to ensure transparency and trust. Compared to distroless images, hardened images focus not only on minimalism but also on long-term vulnerability tracking and maintenance. They integrate naturally into DevSecOps workflows by reducing scanner noise, simplifying compliance, and standardizing images across teams. Adopting hardened images improves security posture while keeping builds reproducible and deployment workflows efficient.

About Catherine

Java developer passionate about Spring Boot. Writer. Developer Advocate at BellSoft.

Videos
Dec 12, 2025
Will AI Replace Developers? A Vibe Coding Reality Check 2025

Can AI replace software engineers? ChatGPT, Copilot, and LLM-powered vibe coding tools promise to automate development—but after testing them against 17 years of production experience, the answer is more nuanced than the hype suggests. Full project generation produces over-engineered code that's hard to refactor. AI assistants excel at boilerplate but fail at business logic. MCP servers solve hallucination problems but create context overload. Meanwhile, DevOps automation actually works. This breakdown separates AI capabilities from marketing promises—essential for teams integrating LLMs and copilots without compromising code quality or architectural decisions.

Dec 12, 2025
JRush | Container Essentials: Fast Builds, Secure Images, Zero Vulnerabilities

Web-conference for Java developers focused on hands-on strategies for building high-performance containers, eliminating CVEs, and detecting security issues before production.

Further watching

Dec 30, 2025
Java in 2025: LTS Release, AI on JVM, Framework Modernization

Java in 2025 isn't about headline features; it's about how production systems changed under the hood. While release notes focus on individual JEPs, the real story is how the platform, frameworks, and tooling evolved to improve stability, performance, and long-term maintainability. In this video, we look at Java from a production perspective. What does Java 25 LTS mean for teams planning to upgrade? How are memory efficiency, startup time, and observability getting better? Why do changes like Scoped Values and AOT optimizations matter beyond benchmarks? We also cover the broader ecosystem: Spring Boot 4 and Framework 7, AI on the JVM with Spring AI and LangChain4j, Kotlin's growing role in backend systems, and tooling updates that make upgrades easier. Finally, we touch on container hardening and why runtime and supply-chain decisions matter just as much as language features.

Dec 24, 2025
I Solved Advent of Code 2025 in Kotlin: Here's How It Went

Every year, Advent of Code spawns thousands of solutions — but few engineers step back to see the bigger picture. This is a complete walkthrough of all 12 days from 2025, focused on engineering patterns rather than puzzle statements. We cover scalable techniques: interval math without brute force, dynamic programming, graph algorithms (JGraphT), geometry with Java AWT Polygon, and optimization problems that need constraint solvers like ojAlgo. You'll see how Java and Kotlin handle real constraints, how visualizations validate assumptions, and when to reach for libraries instead of writing everything from scratch. If you love puzzles, programming—or both—and maybe want to learn how to solve them on the JVM, this is for you.

Dec 18, 2025
Java 26 Preview: New JEPs and What They Mean for You

Java 26 is the next feature release, bringing enhancements for performance, security, and developer experience. This video discusses the upcoming JDK 26 release, highlighting ten JEPs including JEP 500. JEP 500 focuses on preparing developers for future restrictions on mutating final fields in Java, emphasizing their role in maintaining immutable state. This is crucial for robust programming and understanding the nuances of mutable vs immutable data, especially concerning immutable classes in Java. We also touch upon the broader implications for functional programming in Java.