With the GA release of Amazon S3 Files in April 2026, cloud architects now face a three‑way choice among AWS file systems. S3 Files lets you mount S3 buckets directly as an NFS mount point, without the EFS premium but with different trade‑offs. This article maps out when S3 Files is the economical choice, when EFS's Multi‑AZ strength justifies its price, and when FSx for Lustre remains the only sensible option: a decision matrix for 2026.
Key Takeaways
- S3 Files GA since April 2026: S3 buckets mountable as NFS, S3 pricing model instead of EFS throughput fees—attractive for analytics and batch workloads
- EFS stays the leader for Multi‑AZ shared storage: parallel writes from multiple Availability Zones, strong POSIX semantics, no manual capacity management
- FSx for Lustre: mandatory for HPC and ML training with sub‑ms latencies—Lustre protocol, parallel stripes across multiple storage servers, direct S3 integration for data import
- Cost comparison for 100 TB: S3 Files ~230 USD/month vs. EFS General Purpose ~3,100 USD/month vs. FSx for Lustre ~4,500 USD/month (Scratch FS 2), all excluding data‑transfer charges
- Decision criteria: access pattern (random writes vs. sequential reads), latency requirements, and whether Multi‑AZ consistency is needed
What is Amazon S3 Files? Amazon S3 Files is an AWS service that became Generally Available in April 2026, allowing you to mount S3 buckets as an NFS‑v4.0 file system directly on EC2 instances—without an intermediary appliance or separate file‑system service. It complements Amazon EFS and FSx as a third managed file‑system option on AWS.
S3 Files: What the GA Release Actually Means
Amazon S3 Files should not be confused with the older S3 File Gateway (formerly Storage Gateway NFS). The new S3 Files feature enables direct NFS mounting of S3 buckets, using S3 Express One Zone or Standard buckets as the storage backend. The decisive difference from the File Gateway: there is no on‑premises appliance or VM anymore; the mount endpoint runs entirely in the AWS region as a managed service.
The pricing model follows S3 logic: storage is billed at the S3 rate, and request fees are based on API calls. For read‑heavy workloads this is cheaper than EFS, which charges for provisioned or used capacity plus throughput fees. For workloads with many small random writes, however, costs can climb faster than expected: every write is an S3 PUT.
What S3 Files cannot do: POSIX locking (advisory locks only, no enforced byte‑range locks) and strong consistency guarantees for parallel writes from multiple clients. For web servers, databases, or applications that rely on POSIX locks, S3 Files is ruled out.
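What "advisory" means in practice can be demonstrated locally, without S3 Files itself; a minimal sketch using Python's `fcntl` on a temp file:

```python
import fcntl, os, tempfile

# Advisory locks coordinate only processes that ask for them.
# A writer that never calls lockf() is not blocked, which is why
# applications relying on enforced byte-range locks cannot run on a
# backend that offers advisory semantics only.
tmp = tempfile.NamedTemporaryFile("w", delete=False)
fcntl.lockf(tmp, fcntl.LOCK_EX)          # exclusive advisory lock held
with open(tmp.name, "w") as rogue:       # second writer, no lockf() call
    rogue.write("overwritten anyway")    # the advisory lock does not stop it
fcntl.lockf(tmp, fcntl.LOCK_UN)
tmp.close()

content = open(tmp.name).read()
os.remove(tmp.name)
print(content)  # -> overwritten anyway
```

An enforced (mandatory) lock would have blocked or failed the second write; advisory locking leaves correctness entirely to cooperating applications.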
Cost comparison: 100 TB actively used storage (EU‑Frankfurt, May 2026)
- ~230 USD/month: S3 Files (S3 Standard, excluding request costs)
- ~3,100 USD/month: EFS General Purpose (Elastic Throughput)
- ~4,500 USD/month: FSx for Lustre Scratch FS 2 (1.2 TB increments)
Prices exclude data‑transfer costs. FSx Lustre is designed for short‑lived scratch clusters, not for long‑term storage.
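The break‑even point between the two pricing models can be sketched from the figures above. The storage totals are the article's 100 TB numbers; the PUT rate (0.005 USD per 1,000 requests) is an assumed S3‑Standard‑like rate, so treat the result as illustrative:

```python
# Break-even sketch: at what monthly write volume do S3 Files'
# per-request fees overtake EFS's flat capacity fee?
S3_FILES_STORAGE = 230.0    # USD/month for 100 TB (from the comparison)
EFS_STORAGE      = 3_100.0  # USD/month for 100 TB (from the comparison)
PUT_PER_1000     = 0.005    # USD per 1,000 PUTs -- assumed rate

def s3_files_monthly(puts: int) -> float:
    """Storage plus request fees: every write is an S3 PUT."""
    return S3_FILES_STORAGE + puts / 1000 * PUT_PER_1000

break_even_puts = int((EFS_STORAGE - S3_FILES_STORAGE) / PUT_PER_1000 * 1000)
print(f"{break_even_puts:,} PUTs/month")  # -> 574,000,000 PUTs/month
```

Even at hundreds of millions of writes per month, S3 Files stays under the EFS flat fee in this model, which is why the request-cost caveat matters mainly for sustained small-random-write workloads.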
EFS: When the Price Premium Is Justified
Amazon EFS justifies its premium with a capability that S3 Files does not provide: strong consistency for parallel writes from multiple Availability Zones. If you have EC2 instances in eu-central-1a, 1b, and 1c writing to the same file system simultaneously and need POSIX semantics, there is no alternative to EFS.
The second argument for EFS is operational: no capacity planning. EFS grows and shrinks with the amount of data you store. With S3 Files you must actively manage bucket limits and lifecycle policies. For teams without dedicated storage engineering, EFS makes operations noticeably simpler.
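What "actively manage lifecycle policies" means in practice is ordinary S3 bucket administration. A minimal sketch of such a rule, in the shape boto3's real `put_bucket_lifecycle_configuration` call expects; bucket name, prefix, and day counts are illustrative:

```python
# A lifecycle rule of the kind an S3 Files bucket needs managed by
# hand (EFS handles capacity implicitly). Prefix and day counts are
# illustrative placeholders.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-batch-output",
            "Filter": {"Prefix": "batch-output/"},
            "Status": "Enabled",
            "Transitions": [
                # Move cold results to Infrequent Access after 30 days
                {"Days": 30, "StorageClass": "STANDARD_IA"},
            ],
            # Delete them entirely after 180 days
            "Expiration": {"Days": 180},
        }
    ]
}

# Applied with boto3 (not executed here):
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="analytics-data", LifecycleConfiguration=lifecycle)
print(lifecycle["Rules"][0]["ID"])
```

With EFS, none of this exists; the file system simply tracks stored bytes. That operational difference is part of what the EFS premium buys.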
Typical EFS workloads in 2026: Container shared storage for ECS and EKS (ReadWriteMany PVC), lift‑and‑shift applications that expect NFS from the data center, content‑management systems with parallel web‑server instances, DevTools shared environments.
“EFS is the right choice when multi‑AZ write consistency or POSIX locking is required. For pure read‑analytics workloads on existing S3 data, S3 Files is today cheaper than EFS ever will be.”
FSx for Lustre: Not a Replacement for the Others, but Indispensable for HPC
FSx for Lustre is not a generic file system and never was. Lustre was built for parallel high‑performance‑computing workloads: ML training jobs, genomics analyses, video‑rendering pipelines, simulation workloads. Its strength lies in the stripe pattern: data is spread across multiple storage servers so that an ML training job with a hundred GPU instances can read simultaneously at maximum bandwidth.
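The stripe pattern reduces to simple arithmetic: with stripe size S and N object storage targets (OSTs), chunk k of a file lives on OST k mod N, so sequential readers fan out across all servers. A sketch with illustrative stripe parameters:

```python
# Round-robin stripe placement as Lustre uses it: consecutive 1 MiB
# chunks of a file land on consecutive storage servers, so parallel
# readers saturate all servers at once instead of queueing on one.
STRIPE_SIZE = 1 << 20   # 1 MiB stripe size (illustrative)
STRIPE_COUNT = 4        # number of OSTs (illustrative)

def ost_for_offset(offset: int) -> int:
    """Which object storage target serves this byte offset."""
    return (offset // STRIPE_SIZE) % STRIPE_COUNT

# The first four 1 MiB chunks map to OSTs 0..3; the fifth wraps
# back to OST 0:
print([ost_for_offset(i * STRIPE_SIZE) for i in range(5)])  # [0, 1, 2, 3, 0]
```

A hundred GPU instances reading disjoint chunks of a training dataset thus spread their load evenly across the cluster, which is the bandwidth advantage the article describes.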
The S3 integration is a key detail: FSx for Lustre can import an S3 bucket directly as a data source, process data locally with Lustre performance, and write results back to S3. This makes it the preferred choice for ML pipelines where training datasets reside in S3 and need fast access.
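As a sketch, this is roughly the request shape that boto3's `fsx.create_file_system` takes for a scratch cluster linked to an S3 bucket; bucket names, subnet ID, and sizing are placeholders, and newer deployments may prefer data repository associations over `ImportPath`/`ExportPath`:

```python
# Scratch cluster with lazy S3 import: files listed from ImportPath
# are loaded on first read, results written back via ExportPath.
# All identifiers below are illustrative placeholders.
params = {
    "FileSystemType": "LUSTRE",
    "StorageCapacity": 1200,                 # GiB, smallest increment
    "SubnetIds": ["subnet-0abc123example"],  # placeholder subnet
    "LustreConfiguration": {
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://training-datasets",
        "ExportPath": "s3://training-datasets/results",
    },
}

# Applied with boto3 (not executed here):
# fsx = boto3.client("fsx")
# fsx.create_file_system(**params)
print(params["LustreConfiguration"]["DeploymentType"])
```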
The main drawback is its nature as a temporary file system: scratch clusters are designed for short‑lived workloads, persistent clusters cost correspondingly more. FSx for Lustre is not an EFS replacement for durable shared storage – the price would prohibit it.
Decision Matrix: Which Service for Which Workload
| Criterion | S3 Files | EFS | FSx Lustre |
|---|---|---|---|
| Multi‑AZ write consistency | No | Yes | Single‑AZ |
| POSIX locking | Advisory only | Full | Full |
| Parallel read bandwidth | S3 limits | Good | Very high (GB/s) |
| Cost for 100 TB | ~230 USD/month | ~3,100 USD/month | ~4,500 USD/month |
| No capacity planning | No (bucket mgmt) | Yes | No (cluster sizing) |
| Ideal for | Analytics, Batch, Archive | Containers, Lift‑and‑Shift, CMS | ML training, HPC, Video |
Pros/Cons in Direct Comparison
S3 Files
+ Very low‑cost storage
+ No new storage layer needed (S3 already available)
+ Easy entry point for analytics teams
– No true multi‑AZ writes
– Request costs rise with write‑intensive workloads
– No POSIX locking
EFS
+ Full POSIX semantics
+ Multi‑AZ without configuration effort
+ No capacity management required
– Significantly higher cost
– Throughput tier can drive up expenses
– Not sufficient for HPC‑grade bandwidth
FSx for Lustre
+ Maximum parallel bandwidth
+ Direct S3 data‑source integration
+ Sub‑ms latency for ML training jobs
– Highest cost of the three options
– Single‑AZ, no long‑term storage
– Cluster‑sizing effort required
Migration Tips for Existing EFS Deployments
For teams currently using EFS for analytics workloads or batch jobs, a cost review with S3 Files can pay off. The migration is technically manageable when the workload is primarily sequential‑read, does not require multi‑AZ writes, and does not rely on POSIX locking.
A typical migration pattern: create an S3 bucket, copy existing EFS data to S3 with AWS DataSync, then switch the mount point. The critical step is auditing the application for POSIX‑lock usage—many analytics frameworks don’t use locking, but some processing frameworks do require it.
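The application audit can be complemented by probing the target mount itself. A sketch (not an official AWS tool) that tries to take an `fcntl` byte‑range lock inside a given directory and reports whether the backend accepts it:

```python
import fcntl, os, tempfile

def supports_byte_range_locks(mount_dir: str) -> bool:
    """Pre-migration probe: take an fcntl byte-range lock on a scratch
    file inside mount_dir. Backends that reject locking raise OSError
    (commonly ENOLCK), which is the signal to keep the workload on EFS."""
    fd, path = tempfile.mkstemp(dir=mount_dir)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        fcntl.lockf(fd, fcntl.LOCK_UN)
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.remove(path)

# A local file system supports byte-range locks:
print(supports_byte_range_locks(tempfile.gettempdir()))  # -> True
```

Running the same probe against the candidate S3 Files mount point before switching traffic makes the "does not rely on POSIX locking" precondition testable rather than assumed.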
For container workloads on EKS or ECS the rule still stands: EFS remains the recommended solution for ReadWriteMany persistent volumes. S3 Files is not yet available as a CSI driver for Kubernetes (as of May 2026), which limits direct substitution in container environments.
Frequently Asked Questions
Can I use existing EFS data directly with S3 Files?
No, EFS and S3 are separate storage back‑ends. For a migration you must first copy data via AWS DataSync from EFS to S3. Afterwards the NFS mount point can be switched from EFS to S3 Files – provided the workload does not require POSIX locking or multi‑AZ writes.
In which AWS regions is S3 Files available?
At GA in April 2026, S3 Files is available in the major AWS regions, including eu-central-1 (Frankfurt). The current region list is maintained in the official AWS documentation. For DACH companies with GDPR requirements, eu-central-1 is the relevant region.
How do FSx for Lustre Scratch and Persistent differ?
Scratch FS 2 is designed for temporary, short‑lived workloads – for example an ML training job that runs for a few hours. It has no automatic replication and is cheaper. Persistent FS 1 and 2 provide automatic data replication within the Availability Zone, higher durability, and are suited for longer‑running workloads – but they are considerably more expensive.
Does S3 Files support Windows workloads via SMB?
No. S3 Files supports only NFS v4.0 and is therefore limited to Linux workloads. For Windows environments that need a shared file system, Amazon FSx for Windows File Server (SMB protocol) remains the recommended AWS option.
Is there S3 Files support as a Kubernetes PersistentVolume?
As of May 2026 there is no official CSI driver for S3 Files as a Kubernetes PersistentVolume. EFS remains the recommended choice for ReadWriteMany volumes on EKS. Community projects exist but are not production‑ready. The situation may change with future AWS releases.
Adrian Garcia-Kunz writes for cloudmagazin.com about cloud‑native patterns, storage architecture and developer tooling.
Source cover image: Pexels / Brett Sayles (px:5050305)