Hardened Container Images 101: What, Why, and How for DevSecOps [2025]


Transcript:

Most teams use containers, but not all of them know or control how many vulnerabilities or unnecessary packages their base image contains. Hardened container images help solve these issues: a minimized attack surface, low to zero CVEs, an immutable component set, and continuous patching. Let's explore these and other features of hardened container images and see how to integrate them into your workflow.

Many container images ship with a random base that includes a ton of unnecessary tools: a package manager, a debugger, a compiler, and assorted utilities. What is worse, the base image often lacks provenance data, so it is hard to determine what is in it at all. The result can be hundreds of vulnerabilities, sometimes more than 600, in a typical container image. Most of them live in the base image, not in the application code.

The attack surface grows with the size of a given container image. Every scan report contains hundreds of CVEs, and you either accept the risk or drown trying to patch them all. Few teams actually attempt the latter, and frankly, it is not your job to fix issues in code that you did not write.

Hardened container images are there to solve these issues. One can say that they set a new standard for container image security. The concept of hardened container images rests on four key pillars.

First, a minimalistic base. The image contains only the components your application needs to function in production. No unnecessary packages, hence a reduced attack surface.

Second, low to zero CVEs. The vendor of hardened base images performs continuous patching so the base image stays free of known vulnerabilities.

Third, an immutable component set. Hardened images cannot be modified. There is no package manager and no other ability to change them. The risk of attackers introducing malicious packages or tampering with a container at runtime is minimized.

Fourth, provenance data. The images come with a software bill of materials and a digital signature. You can always verify their origin and the components they contain.

You might have heard of or even used distroless images. What is the difference between them and hardened base images? Distroless images are about stripping down almost all Linux components, including the shell. Image hardening, on the other hand, is about a clear CVE management process: tracking components, monitoring CVEs, and patching them.

Hardened container images integrate well into DevSecOps practices, and here is why. You get reduced noise from security scanners: fewer CVEs mean fewer false-positive alerts, so the security team can focus on real issues. With hardened images standardized across the organization, you can shift security left without slowing down deployments, and developers do not have to reinvent Dockerfiles every time.

Thanks to SBOMs and digital signatures, hardened container images facilitate compliance and audits. You can say that you know exactly what you are running in production and easily prove it. The potential attack blast radius is reduced. For attackers, it is harder to introduce malicious packages or exploit known vulnerabilities because there is basically nothing in the container image they can leverage.

To sum up, hardened container images are a security baseline. You standardize deployment processes once and solve several problems right away: compliance, CVE monitoring, and patching.

Now let's look at how we can introduce hardened container images into the workflow. There are several vendors who provide hardened container images. I will use BellSoft's hardened images as an example. They are available for free on various registries and for various runtimes.

To try them out, pull the required container image. You can use Cosign to verify the image signature. Then you can run it with your application.
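The three steps above might look like this on the command line. This is a sketch under assumptions: the image tag and the public-key location are illustrative placeholders, not exact names from the video, so check the vendor's registry and documentation for the real ones.

```shell
# Pull a hardened base image (tag is illustrative; check the vendor's registry)
docker pull bellsoft/liberica-runtime-container:jre-21-slim-musl

# Verify the image signature with Cosign
# (the key file name is an assumption; the vendor's docs specify
# the actual verification material to use)
cosign verify --key bellsoft.pub \
  bellsoft/liberica-runtime-container:jre-21-slim-musl

# Run your application on top of the verified image
docker run --rm -v "$(pwd)/app.jar:/app.jar" \
  bellsoft/liberica-runtime-container:jre-21-slim-musl -jar /app.jar
```

These commands require Docker, Cosign, and network access to the registry, so they are meant as a template rather than a copy-paste recipe.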

Suppose you have a Dockerfile and you want to migrate to a hardened base. On the screen, you can see a Dockerfile for a Java project. It uses OpenJDK as a base image, compiles, and runs the application. To use a hardened base, we need to rewrite this Dockerfile to use multi-stage builds. We build the application in the first stage and then transfer it into a fresh base image without any packages or tools used during the build. This way, the image will be smaller and free of additional utilities.
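A multi-stage Dockerfile along the lines described here could look roughly like this. The base image tags and the Maven wrapper build command are illustrative assumptions, not the exact Dockerfile shown on screen:

```dockerfile
# Stage 1: build the application with a full JDK and build tooling
# (builder image tag is illustrative)
FROM bellsoft/liberica-openjdk-alpine:21 AS builder
WORKDIR /build
COPY . .
RUN ./mvnw -q package -DskipTests

# Stage 2: copy only the built artifact into a minimal hardened runtime
# (image name is illustrative; substitute your vendor's hardened JRE image)
FROM bellsoft/liberica-runtime-container:jre-21-slim-musl
WORKDIR /app
COPY --from=builder /build/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The final image carries only the runtime and the application JAR; the JDK, Maven, and source code stay behind in the discarded builder stage.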

If you already use multi-stage builds, migrating to a hardened base image is just a matter of substituting the FROM command. Remember that hardened base images are immutable, so you cannot add additional packages. It is also recommended to pin the image by digest for reproducibility.
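With multi-stage builds already in place, the migration plus digest pinning amounts to a one-line change. The digest below is deliberately a placeholder; in practice you would take the real value from `docker images --digests` or from the registry:

```dockerfile
# Before: a mutable tag that can silently change underneath you
# FROM openjdk:21

# After: a hardened base pinned by digest for reproducible builds
# (the digest value is a placeholder, not a real hash)
FROM bellsoft/liberica-runtime-container@sha256:<digest>
```

Pinning by digest means the build always resolves to the exact image that was scanned and verified, at the cost of updating the digest explicitly when a patched base is released.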

Signature verification with Cosign can also be introduced into the CI/CD pipeline, for example with GitHub Actions.
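A minimal sketch of such a pipeline step is below. It assumes the vendor's public key is stored as a repository variable named COSIGN_PUB_KEY; the workflow name, image tag, and variable name are all illustrative, while sigstore/cosign-installer is the official action for installing Cosign in a workflow:

```yaml
# .github/workflows/verify-image.yml (illustrative workflow)
name: verify-base-image
on: [push]

jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - name: Install Cosign
        uses: sigstore/cosign-installer@v3

      - name: Verify hardened base image signature
        # Image name and key variable are assumptions; adapt to your setup.
        # Cosign can read a public key from an environment variable
        # via the env:// reference scheme.
        env:
          COSIGN_PUBLIC_KEY: ${{ vars.COSIGN_PUB_KEY }}
        run: |
          cosign verify --key env://COSIGN_PUBLIC_KEY \
            bellsoft/liberica-runtime-container:jre-21-slim-musl
```

Failing this step blocks the pipeline, so an unsigned or tampered base image never reaches a deployment.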

That was hardened container images 101. Leave a comment if you want me to discuss other topics related to hardened images. And as usual, do not forget to like, subscribe, and until next time.

Summary

Hardened container images address the security risks of traditional container bases by minimizing the attack surface, reducing CVEs, and providing immutable, well-defined components. They rely on continuous patching, strict CVE management, and provenance data such as SBOMs and digital signatures to ensure transparency and trust. Compared to distroless images, hardened images focus not only on minimalism but also on long-term vulnerability tracking and maintenance. They integrate naturally into DevSecOps workflows by reducing scanner noise, simplifying compliance, and standardizing images across teams. Adopting hardened images improves security posture while keeping builds reproducible and deployment workflows efficient.

About Catherine

Java developer passionate about Spring Boot. Writer. Developer Advocate at BellSoft.
