7 May 2026


By 2026, container images of 800 to 1,200 MB are no longer an architectural question but a cost-of-goods issue. Three DACH cloud teams (a bank subsidiary with 230 microservices, an insurance platform provider, and an industrial machinery manufacturer with an edge fleet) have replaced their standard base distributions with Distroless, Wolfi, and Chainguard over the past twelve months. The result: build time, CVE surface, and egress costs each dropped by 60 to 80 percent, with no architectural friction in the service layer.


Key Takeaways

  • Egress is the overlooked expense: With 230 services and 50 deployments per day, 4 to 7 terabytes of image layer pulls flow through the cloud each month. AWS cross-region and cross-AZ egress costs are five- to six-digit figures that vanish as pure data transfer in the cloud bill.
  • Distroless is the foundation, Wolfi and Chainguard provide the rest: Distroless minimizes the runtime layer to the essential, Wolfi adds a reproducible build distribution with signed repositories, and Chainguard combines both in a commercial SLA with ongoing CVE management. These three components are not alternatives but a chain.
  • CVE reduction is the side effect: Those who approach the CFO with “fewer vulnerabilities” will lose the conversation. The compelling story is halved build time and reduced egress costs. The typical CVE surface reduction from 50 to 100 findings per image to 0 to 3 is the icing on the cake, not the main argument.

Related: BSI-KRITIS and Cloud Usage 2026 / State of FinOps 2026

What’s Truly New in 2026

The reduction is less about an architectural shift and more about a toolchain in build, registry, and security scanning. Those seeking leverage will not find it in the service mesh or hyperscaler pricing but in the image definition.

What is a container image diet? A systematic reduction of image size through the choice of base distribution, multi-stage builds, and reproducible build layers. The goal is a smaller transferred data volume per build, per pull, and per cluster node. If a standard Java image shrinks from 1,180 MB to 92 MB, the effect multiplies with each deployment. In a microservice landscape with 200 services, five environments, and 50 deployments per day, the switch alone in build pulls results in several terabytes of transfer reduction per month.
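The multiplication effect above can be sketched in a few lines. This is a back-of-the-envelope estimate using the article's figures (1,180 MB before, 92 MB after, 50 fleet-wide deployments per day); the number of pulls per deployment (4, i.e. how many nodes fetch the new layers per rollout) is an assumption for illustration, not a measured value.

```python
MB_PER_TB = 1_000_000  # decimal units, matching cloud billing

def monthly_pull_volume_tb(image_mb, deploys_per_day, pulls_per_deploy, days=30):
    """Total image bytes pulled per month across the fleet, in TB."""
    return image_mb * deploys_per_day * pulls_per_deploy * days / MB_PER_TB

# Before/after the base-image switch described in the text
before = monthly_pull_volume_tb(1180, 50, 4)
after = monthly_pull_volume_tb(92, 50, 4)
print(f"before: {before:.2f} TB/month, after: {after:.2f} TB/month, "
      f"saved: {before - after:.2f} TB/month")
```

With these assumptions the pull volume lands at roughly 7 TB per month before and well under 1 TB after, which matches the 4-to-7-terabyte range quoted above.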

What’s new in 2026 is not the concept. Distroless has been around since 2017, and Alpine since 2014. What’s new is the toolchain: Wolfi has been delivering a reproducible build distribution with signed repositories since 2023, and Chainguard combines this with a commercial SLA and ongoing CVE management. Those who combine Wolfi and Distroless get both: a smaller runtime and reproducible build provenance. Compliance is the icing on the cake because software bills of materials and Sigstore signatures pass through cleanly.

How Container Diets Impact Performance

The key factors in reducing container size and improving efficiency are threefold: image size, build time, and egress. Image size can be halved with Distroless alone, with an additional 60 to 70 percent reduction achievable through multi-stage builds and static linking. Build time is significantly reduced as fewer packages need to be installed, patched, and scanned. Egress costs decrease in parallel with image size, with an added benefit when pulling images across regions.
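A minimal multi-stage sketch of the pattern the text describes: a Wolfi-based toolchain image for the build stage and a Distroless image as the final runtime layer. This assumes a Maven-built Spring Boot jar; the image tags shown are illustrative and should be checked against the current Chainguard and Distroless registries.

```dockerfile
# Build stage: full toolchain on a Wolfi-based image (tag is an assumption)
FROM cgr.dev/chainguard/maven AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -q package -DskipTests

# Runtime stage: Distroless Java, no shell, no package manager
FROM gcr.io/distroless/java17-debian12
COPY --from=build /app/target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```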

The figures presented below are based on three DACH region setups that have been optimized over the past twelve months. These metrics are representative of typical performance improvements, though individual values may vary depending on the specific service profile.

Before / After: Three Real-World DACH Setups

| Metric | Bank Subsidiary (230 Services) | Insurance Platform (95 Services) | Mechanical Engineering Edge (140 Devices) |
| --- | --- | --- | --- |
| Java service image size | 1,180 MB → 92 MB | 920 MB → 78 MB | 680 MB → 145 MB |
| Build time per service | 8:40 min → 3:10 min | 6:50 min → 2:40 min | 11:20 min → 4:30 min |
| CVEs per image (Critical + High) | 82 → 1 | 64 → 0 | 47 → 2 |
| Egress per month | 6.8 TB → 0.9 TB | 3.2 TB → 0.4 TB | 1.4 TB → 0.3 TB |
| Cloud bill savings (data transfer) | approx. €7,400/month | approx. €3,100/month | approx. €1,250/month |

Bank subsidiary: AWS Frankfurt, multi-AZ, Spring Boot Java. Insurance platform: GCP europe-west3, Quarkus plus Go sidecar. Machinery manufacturer: hybrid edge fleet, k3s on-prem with AWS registry mirroring. Egress rates: AWS cross-AZ $0.01/GB, cross-region $0.02/GB; GCP rates are similar. Values are rounded; data was collected in March 2026.

“Container images ranging from 800 MB to 1.2 GB will no longer be an architecture question by 2026 but a cost-of-goods issue.”

Comparing Three DACH Cases

The bank subsidiary felt the most pain: a Java platform with 230 Spring Boot services, default OpenJDK on Debian Slim, each image around 1.2 GB. Three FinOps reviews had flagged egress as a black box, without anyone looking at the image layer pulls between ECR and the EKS nodes. After switching to a Distroless Java base with Wolfi as the build layer, image pull time per pod start dropped from 38 to 4 seconds. The effect on the cloud bill was measurable within three months.

The insurance platform didn’t start for cost reasons but for audit reasons. NIS2 and the BaFin circular on supply chain security require reproducible builds and provenance, both of which Wolfi and Sigstore provide by default. The egress effect was a surprise for the team, not a target. Today, compliance is the driver and the cost savings are the argument.

The machinery manufacturer has the thinnest margin because the edge fleet runs on restricted bandwidth. Here the cloud bill hardly matters, but the update time per site does. Rolling updates across 140 k3s nodes used to need a three-hour window; with slim images it is 35 to 45 minutes. That shifts the patch cadence from monthly to weekly.

What a 90-Day Program Looks Like

Those who want to pull this lever themselves need neither a platform migration nor a mesh reconfiguration. Four steps over three months are enough, provided the build pipeline is reasonably structured.

  1. Weeks 1-2 – Inventory: Measure image sizes, pull frequency, and egress per service. AWS Cost Explorer, GCP Billing Export, or your own cost allocation tagging provide the numbers. Pull frequency from container registry logs.
  2. Weeks 3-6 – Pilot: Take three services from the top 10 egress cluster: one Java, one Go, one Python. Use multi-stage builds with Distroless as the final layer and Wolfi as the build layer. Perform pull tests from staging and production.
  3. Weeks 7-10 – Rollout: Migrate the top 50 services by egress volume. Enable registry layer sharing if it is not already the default, and generate SBOMs with Syft and signatures with Cosign in parallel.
  4. Weeks 11-13 – Hardening: Use a CVE service from Chainguard or equivalent source, set up an automated rebuild pipeline for all migrated services upon base image update. Define an SLA model and escalation path.
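The inventory step in week 1 boils down to ranking services by pulled bytes. A minimal sketch of that aggregation; the log format used here (service, image_mb, pulls as CSV columns) is a hypothetical export, not a real ECR or GCR schema, so the parsing would need adapting to your registry's actual log shape.

```python
import csv
from collections import defaultdict
from io import StringIO

def top_egress_services(log_csv, n=10):
    """Rank services by total MB pulled (image size x pull count)."""
    pulled = defaultdict(float)
    for row in csv.DictReader(StringIO(log_csv)):
        pulled[row["service"]] += float(row["image_mb"]) * int(row["pulls"])
    return sorted(pulled.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Hypothetical sample export covering one week of registry pulls
sample = """service,image_mb,pulls
checkout,1180,420
search,920,310
billing,680,95
"""
print(top_egress_services(sample, n=2))
```

The resulting list is the pilot shortlist for weeks 3 to 6: the top entries are where the migration pays back fastest.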

What’s at Stake: Honest Trade-Offs

Distroless is not a universal tool; three issues come up regularly. First, the debugging experience: no shell, no curl, no strace in the image. Anyone who execs into a production pod finds nothing to work with. The fix is a sidecar with debug tools on demand, or ephemeral containers in Kubernetes 1.25+. Second, the library gap: some native bindings (Oracle JDBC, older SAP connectors) expect a full distro layout. Here Wolfi helps more than Distroless, because Wolfi is APK-compatible and missing packages can be added and maintained. Third, the skill gap: multi-stage builds and layer caching are not standard practice in mid-sized teams; four weeks of training plus pair programming is a realistic estimate.
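For the debugging gap, the ephemeral-container route looks roughly like this. The pod and container names are placeholders, and the command assumes a Kubernetes 1.25+ cluster, so this is an operational sketch rather than something the verifier can run:

```shell
# Attach a throwaway debug container to a running Distroless pod.
# "checkout-7d4f9" and "checkout" are placeholder pod/container names.
kubectl debug -it checkout-7d4f9 \
  --image=busybox:1.36 \
  --target=checkout \
  -- sh
```

The debug container shares the target container's process namespace, so tools like ps and netstat see the Distroless workload without anything being baked into the production image.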

The biggest silent friction: Chainguard licensing costs. The free version covers hobby projects. For production setups with SLA and CVE service, the pricing tier is between $50 and $250 per image family per month. For 30 to 60 image families, that’s $18,000 to $180,000 per year, not trivial. The alternative is a DIY setup on pure Distroless plus Wolfi repos, with your own CVE pipeline. It works, but it costs engineering hours in the SRE team.
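The annualized range quoted above follows directly from the per-family pricing; a quick check of the arithmetic:

```python
def annual_license_cost(per_family_month, families, months=12):
    """Annualized licensing cost for a set of image families."""
    return per_family_month * families * months

# Lower and upper bounds from the pricing tier quoted in the text
low = annual_license_cost(50, 30)    # $50/family/month, 30 families -> 18,000
high = annual_license_cost(250, 60)  # $250/family/month, 60 families -> 180,000
print(low, high)
```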

What It Truly Pays Off For

It clearly pays off for three profiles: platforms with high deployment frequency and many small services, because the egress effect scales there; setups in regulated environments, because SBOM and provenance are mandatory anyway; and edge topologies with limited bandwidth, because patch speed becomes operational leverage.

Those who do not pull this lever will still be running containers of over 800 MB in 2026 and subsidizing their cloud provider's data transfer. The architecture debate may be settled by 2026; the cost of goods remains.

Frequently Asked Questions

How do you reliably measure the egress effect of a container diet?

By using cost allocation tags per service plus container registry pull logs. AWS Cost Explorer and GCP Billing Export provide the data transfer per day, while the registry logs give the pull count. The difference before and after migration is the net savings. Important: Do not measure over two weeks, but over three months, otherwise deployment bursts will overshadow the trend.
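The warning about deployment bursts is easy to demonstrate with synthetic numbers: a single burst week dominates a two-week average but barely moves a three-month mean. The daily egress values here are invented for illustration only.

```python
from statistics import mean

# Synthetic post-migration egress in GB/day: steady 30 GB/day,
# with one week-long deployment burst at 120 GB/day (days 5-11)
baseline = [30.0] * 90
baseline[5:12] = [120.0] * 7

two_weeks = mean(baseline[:14])      # burst dominates the short window
three_months = mean(baseline)        # burst is smoothed out
print(f"{two_weeks:.1f} GB/day vs {three_months:.1f} GB/day")
```

In this toy series the two-week window reports more than double the true steady-state rate, which is exactly the distortion a three-month measurement avoids.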

Is Chainguard commercially viable or is Wolfi plus Distroless sufficient for in-house development?

In-house development is viable with a dedicated SRE team of three or more people that regularly maintains the CVE pipeline. For smaller teams, or regulated environments with audit pressure, the commercial variant offers a faster ROI because the SLA and CVE service can be cited directly in the audit report. Rule of thumb: weigh the licensing costs against roughly 1.5 full-time equivalents at around 40 production image families.

Does Distroless work with JDBC drivers and older SAP connectors?

With pure Distroless bases, only to a limited extent, as some native bindings expect a complete distro layout. Using Wolfi as the build distro and Distroless as the runtime final layer in a multi-stage build allows Oracle JDBC, IBM MQ clients, and older SAP RFC libraries to run reliably. For very old proprietary connectors (before 2018), a hybrid strategy with Wolfi as the runtime layer also helps.

How does the container diet relate to pure build cache optimizations?

Build cache optimization affects build time, while the container diet additionally impacts egress and CVE surface. Both factors combine well: Layer caching in Buildkit or Kaniko brings a 30 to 50 percent reduction in build time without image shrinkage, while switching to Distroless plus Wolfi adds the second half. Those who implement both in parallel achieve halved pipeline runtimes and simultaneously the egress effect.
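Treating the two effects as independent multiplicative cuts is a simplifying assumption, but it shows how caching and the image diet combine to roughly halve pipeline time. The 40 and 35 percent cut values below are illustrative picks from the ranges in the text, not measured figures.

```python
def combined_build_time(minutes, cache_cut=0.40, diet_cut=0.35):
    """Apply two independent percentage cuts to a build time (assumption:
    the effects multiply rather than overlap)."""
    return minutes * (1 - cache_cut) * (1 - diet_cut)

# A 10-minute build drops to 3.9 minutes under these assumptions
print(round(combined_build_time(10.0), 2))
```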

About the Author

Alec Chizhik is the Chief Digital Officer at Evernine. His focus is on cloud operations, security engineering, and the uncomfortable question of what architecture really costs in production.

Source Title Image: Pexels / Tom Fisk (px:1427107)


A magazine by Evernine Media GmbH