RustFS Ring -5 Audit: Alpha Software, Production Dreams
“Timeline Ω-12 has a pattern: rewrite infrastructure in Rust, slap ‘memory safe’ on the README, collect 23,000 GitHub stars, and ship alpha.86. Meanwhile, the Go project with 10,900 commits quietly handles your production traffic. I’ve been running `git log` on both. One of them has a history. The other has a marketing department.”
— Kim Jong Rails, Ring -5 Audit Division
Context: Why This Audit Exists
In December 2025, I published MinIO vs RustFS vs SeaweedFS: The Storage Wars. At the time, RustFS was at alpha.61 with ~2,000 commits, zero production proof, and a claim of 2.3x faster than MinIO for 4KB objects.
Two months later, the landscape shifted dramatically:
- MinIO archived its repository on February 12, 2026. Read-only. No PRs. No issues. No contributions. A project with 60,000 stars and over a billion Docker pulls became a digital tombstone.
- RustFS jumped from alpha.61 to alpha.86 in roughly four months. That’s 25 alpha releases. Fast iteration or unstable foundation? Both.
- Apache Iceberg opened issue #14638 to replace MinIO with RustFS in their quickstart. The wolves are circling the MinIO corpse.
Time for a proper Ring -5 audit. Not marketing. Not Hacker News sentiment. Git history, issue trackers, benchmark reproductions, and S3 compatibility testing.
The Ring -5 Audit Framework
In Timeline Ω-7, we audit infrastructure the same way we audit politicians: by reading the commit log.
```bash
# The Ring -5 Audit Checklist
audit_criteria=(
  "git log --oneline | wc -l"              # Commit depth
  "git shortlog -sn | wc -l"               # Contributor count
  "gh issue list --state all | wc -l"      # Issue velocity
  "gh release list | wc -l"                # Release cadence
  "grep -r 'TODO\|FIXME\|HACK' src/"       # Honest debt markers
)
```
A project that ships fast but breaks things tells you one story. A project that ships fast and hides the breakage tells you another. Let’s find out which story RustFS is telling.
Audit Section 1: Git Commit Velocity
The Numbers
As of January 2026, here’s the commit landscape:
| Metric | RustFS | SeaweedFS | MinIO (archived) |
|---|---|---|---|
| Repository created | Nov 2023 | Jul 2014 | Jan 2015 |
| Age | ~2.2 years | ~11.5 years | ~11 years |
| GitHub stars | ~23,000 | ~24,000 | ~49,000 |
| Forks | ~900 | ~4,400 | ~7,600 |
| License | Apache 2.0 | Apache 2.0 | AGPLv3 (archived) |
| Current version | 1.0.0-alpha.86 | 3.80+ | ARCHIVED |
| Status | Alpha | Production | Dead |
```bash
$ git -C rustfs log --oneline --since="2025-10-01" --until="2026-01-22" | wc -l
# Roughly 25 alpha releases in ~4 months
# That's one alpha release every 5 days

$ git -C seaweedfs log --oneline --since="2025-10-01" --until="2026-01-22" | wc -l
# Steady commits, production releases, no drama
```
What the Velocity Tells You
Twenty-five alpha releases in four months means one of two things:
- Rapid iteration toward stability — fixing bugs fast, shipping fixes faster
- Unstable foundation requiring constant patching — each release breaks something the last one fixed
From Ring -5, I observe both. Issue #1600 reports file-not-found errors when upgrading from alpha.80 to alpha.81. That’s a breaking change between alpha releases. In a storage system. Where your data lives.
```bash
# Timeline Ω-7 reaction to breaking changes in a storage system:
$ git revert HEAD
$ echo "NEVER upgrade storage without a migration test"

# Timeline Ω-12 reaction:
$ docker pull rustfs/rustfs:latest
$ echo "YOLO"
```
Contributor Analysis
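The “migration test” the Ω-7 reaction demands can be as simple as a manifest check: snapshot every key and content checksum before the upgrade, then verify nothing vanished or changed afterward. A minimal sketch in Python — the `objects` inputs are stand-ins for what a real `ListObjects` + `GetObject` pass would return, and the function names are mine:

```python
import hashlib

def make_manifest(objects: dict[str, bytes]) -> dict[str, str]:
    """Map each object key to a SHA-256 digest of its contents."""
    return {key: hashlib.sha256(data).hexdigest() for key, data in objects.items()}

def verify_upgrade(before: dict[str, str], after: dict[str, str]) -> list[str]:
    """Return every key that is missing or corrupted after the upgrade."""
    problems = []
    for key, digest in before.items():
        if key not in after:
            problems.append(f"MISSING: {key}")
        elif after[key] != digest:
            problems.append(f"CORRUPT: {key}")
    return problems

# Pre-upgrade snapshot vs post-upgrade listing (the #1600 scenario: a file vanishes):
before = make_manifest({"a/1.bin": b"hello", "a/2.bin": b"world"})
after = make_manifest({"a/1.bin": b"hello"})
print(verify_upgrade(before, after))  # → ['MISSING: a/2.bin']
```

If the returned list is non-empty, you roll back before any client notices. Ten minutes of scripting versus a storage incident.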
RustFS has active contributors including @overtrue, @houseme, @yxrxy, @weisd, and newer contributors like @lgpseu, @bbb4aaa, and @LeonWang0735. The contributor page shows growth, but the commit distribution matters more than the headcount.
SeaweedFS, by comparison, has over 2,750 contributors accumulated over 11+ years of production usage. That’s not a community — that’s an army of people who’ve been woken up at 3am by storage failures and contributed fixes.
Audit finding: RustFS commit velocity is high. Contributor depth is shallow. This is a sprinter, not a marathon runner.
Audit Section 2: The S3 Compatibility Matrix
What RustFS Claims
From the official documentation:
“Any application, SDK, or CLI tool built for AWS S3 works with RustFS by changing one line: the endpoint URL.”
That’s a bold claim. Let me check what actually works.
What the Milvus Team Found
The Milvus evaluation is the most thorough third-party S3 compatibility test I’ve found. Their findings:
Working:
- Basic CRUD operations (PutObject, GetObject, DeleteObject)
- Bucket operations (CreateBucket, ListBuckets, HeadBucket)
- Multipart uploads (basic flow)
- ListObjectsV2
Broken or Partially Working:
- ETag encoding: RustFS returns HTML entity encoding (`&quot;` instead of standard double quotes). AWS S3 and MinIO use direct double quotes. This breaks clients that parse ETags strictly.
- Conditional requests: Clients sending unquoted ETags in If-Match/If-None-Match headers fail. Issue #1458.
- CopyObject XML response: Uses HTML entity encoding for ETags, breaking minio-go SDK compatibility. Issue #1771.
```text
# What AWS S3 returns:
ETag: "d41d8cd98f00b204e9800998ecf8427e"

# What RustFS returns:
ETag: &quot;d41d8cd98f00b204e9800998ecf8427e&quot;

# What your client sees:
# A broken response that parses to garbage
```
The S3 Compatibility Report Card
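To see why the encoding matters, here is a sketch of what a strict client does with each response. Nothing here is RustFS source code — just the reported header values fed through the parsing a well-behaved client performs:

```python
import html

def parse_etag(raw: str) -> str:
    """Strict parse: an ETag is an opaque string wrapped in double quotes (RFC 7232)."""
    if not (raw.startswith('"') and raw.endswith('"')):
        raise ValueError(f"malformed ETag: {raw!r}")
    return raw[1:-1]

aws_style = '"d41d8cd98f00b204e9800998ecf8427e"'
rustfs_style = "&quot;d41d8cd98f00b204e9800998ecf8427e&quot;"  # the reported encoding

print(parse_etag(aws_style))  # → d41d8cd98f00b204e9800998ecf8427e

try:
    parse_etag(rustfs_style)  # starts with '&', not '"' -- strict clients reject it
except ValueError as err:
    print(err)

# The workaround clients shouldn't have to carry:
print(parse_etag(html.unescape(rustfs_style)))  # → d41d8cd98f00b204e9800998ecf8427e
```

Every client that hits this has to either special-case RustFS or break. That is the opposite of “drop-in.”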
| S3 Feature | Status | Notes |
|---|---|---|
| PutObject / GetObject | Working | Core CRUD functional |
| DeleteObject | Working | |
| ListObjectsV2 | Working | |
| Multipart Upload | Working | Basic flow; edge cases untested |
| CopyObject | Broken | ETag encoding issue (#1771) |
| Conditional Requests (If-Match) | Broken | Unquoted ETag rejection (#1458) |
| Bucket Versioning | Working | Documented in feature list |
| Object Lock (WORM) | Working | Claimed; limited third-party verification |
| Server-Side Encryption | Working | SSE-S3 and SSE-C documented |
| Lifecycle Management | Under Testing | Not production-verified |
| S3 Select | Claimed | CSV, Parquet, JSON; SIMD-optimized |
| Bucket Notifications | Unknown | No third-party reports |
| Presigned URLs | Working | Basic functionality confirmed |
| Multi-site Replication | Under Testing | Documented but not production-verified |
Audit finding: RustFS covers ~70% of the S3 API surface for basic use cases. The remaining 30% includes edge cases that production workloads absolutely hit. ETag encoding bugs alone will break any client that does conditional requests — which is every well-written S3 client.
The Milvus team’s conclusion is telling:
“RustFS’s current S3 API implementation satisfies baseline functional requirements, making it suitable for practical testing in non-production environments.”
“Non-production environments.” That’s the audit result right there.
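The If-Match failure in #1458 cuts both ways: RFC 7232 defines the header value as the quoted ETag, and a robust server tolerates sloppy clients by normalizing. A tiny client-side helper that quotes correctly before sending — the function name is mine; real S3 SDKs do this internally:

```python
def if_match_header(etag: str) -> dict[str, str]:
    """Build an If-Match header, quoting the ETag if the caller passed it bare."""
    value = etag if etag.startswith('"') else f'"{etag}"'
    return {"If-Match": value}

# Both spellings normalize to the quoted form the RFC expects:
print(if_match_header("d41d8cd98f00b204e9800998ecf8427e"))
print(if_match_header('"d41d8cd98f00b204e9800998ecf8427e"'))
# Per issue #1458, RustFS rejects the bare spelling outright instead of tolerating it.
```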
Audit Section 3: Performance — The Benchmarks That Crashed
The Official Claim
RustFS’s headline: 2.3x faster than MinIO for 4KB objects.
What Actually Happened
Issue #73: The Large File Read Problem
A community benchmark on a 4-node cluster (16 cores, 64GB RAM per node, 12-25Gbps network) tested concurrent GetObject operations on 20 MiB files:
| Metric | MinIO | RustFS |
|---|---|---|
| Read throughput | ~53 Gbps | ~23 Gbps |
| Time-to-first-byte | 24 ms | 260 ms |
| Write throughput | Comparable | Comparable |
MinIO delivered 2.3x more read throughput and 10.8x lower TTFB for large objects. The irony of a project claiming “2.3x faster” being 2.3x slower in the opposite direction is not lost on me.
Root cause identified: File I/O operations were blocking when they should have been async. The Tokio runtime was context-switching excessively between disk read tasks and network send tasks. Data wasn’t pipelining efficiently from storage to the wire.
```rust
// What was happening (simplified):
async fn read_object(path: &Path) -> Result<Bytes> {
    // Blocking file I/O on the async runtime
    // Every disk read blocks the Tokio thread pool
    // Network sends queue behind disk reads
    let data = std::fs::read(path)?; // BLOCKING
    Ok(data.into())
}

// What should happen:
async fn read_object(path: &Path) -> Result<Bytes> {
    // Non-blocking I/O with proper pipelining
    let data = tokio::fs::read(path).await?; // ASYNC
    Ok(data.into())
}
```
The team acknowledged the issue and put it on the roadmap. By January 2026, they reported “large file performance is now very close to MinIO.” I haven’t found third-party verification of this claim.
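The failure mode is not Rust-specific. Any async runtime degrades the same way when blocking I/O runs on the event loop; in Python’s asyncio, for example, the equivalent fix is pushing the blocking read onto a worker thread. A sketch of both shapes, under my own function names:

```python
import asyncio
import tempfile
from pathlib import Path

async def read_object_blocking(path: Path) -> bytes:
    # BAD: read_bytes() blocks the event loop, so every other task
    # (including in-flight network sends) stalls behind this disk read.
    return path.read_bytes()

async def read_object_async(path: Path) -> bytes:
    # GOOD: hand the blocking call to a worker thread; the event loop
    # keeps pumping other tasks while the disk seeks.
    return await asyncio.to_thread(path.read_bytes)

async def main() -> None:
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"x" * 1024)
        path = Path(f.name)
    data = await read_object_async(path)
    print(f"read {len(data)} bytes without starving the loop")

asyncio.run(main())
```

The compiler cannot catch this class of bug in either language; only load testing does.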
Discussion #1500: The Three-Way Benchmark That Crashed
A community-run benchmark comparing all three systems found:
| Workload | Winner | Runner-up | Last |
|---|---|---|---|
| Small files | SeaweedFS | MinIO | RustFS |
| Large files | MinIO | RustFS | SeaweedFS |
And the kicker: RustFS cluster mode crashed during load testing with 512 threads and 1 MiB file size.
```bash
# Load test with 512 threads:
$ warp mixed --host rustfs-cluster:9000 --concurrent 512 --obj.size 1MiB
# Result: cluster crash
# That's not a performance result. That's a stability result.
```
A storage system that crashes under concurrent load is not a storage system. It’s a prototype with a marketing budget.
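Reproducing this doesn’t strictly require warp; a few dozen lines of Python against any S3 client produce a crude version of the same mixed load. Here the client is injected so the harness runs offline — `put_object`/`get_object` match boto3’s method names, but the harness itself and `FakeS3` are my stand-ins, not part of any project:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def hammer(client, bucket: str, threads: int = 512, ops_per_thread: int = 10,
           obj_size: int = 1 << 20) -> int:
    """Fire mixed put/get traffic from `threads` workers; return completed op count."""
    payload = os.urandom(obj_size)

    def worker(worker_id: int) -> int:
        done = 0
        for i in range(ops_per_thread):
            key = f"load/{worker_id}/{i}"
            client.put_object(Bucket=bucket, Key=key, Body=payload)
            client.get_object(Bucket=bucket, Key=key)
            done += 2
        return done

    with ThreadPoolExecutor(max_workers=threads) as pool:
        return sum(pool.map(worker, range(threads)))

# Offline smoke test with an in-memory stand-in for the S3 client:
class FakeS3:
    def __init__(self):
        self.store = {}
    def put_object(self, Bucket, Key, Body):
        self.store[(Bucket, Key)] = Body
    def get_object(self, Bucket, Key):
        return {"Body": self.store[(Bucket, Key)]}

total = hammer(FakeS3(), "bench", threads=8, ops_per_thread=5, obj_size=1024)
print(total)  # → 80  (8 threads * 5 iterations * 2 ops)
```

Swap `FakeS3()` for a real boto3 client pointed at your cluster and raise `threads` to 512; if the cluster falls over, you have reproduced Discussion #1500.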
Performance Summary
| Workload | RustFS vs MinIO | RustFS vs SeaweedFS |
|---|---|---|
| 4KB objects (write) | ~2.3x faster (claimed) | Slower |
| 20 MiB objects (read) | ~2.3x slower | Faster |
| 512 concurrent threads | Crashed | SeaweedFS survived |
| Time-to-first-byte | 10.8x slower (260ms vs 24ms) | No data |
Audit finding: RustFS wins one specific benchmark (small object writes) and loses or crashes in everything else. The “2.3x faster” headline is technically true for one workload and dangerously misleading for all others.
Audit Section 4: Issue Tracker Health
The Signal in the Noise
Issue numbers tell you how much a project has been beaten up by reality:
```bash
# RustFS issues referenced in search results go up to ~#1896
# That's significant activity for a 2-year-old project

# Key issues worth reading:
# #73   - Large file read performance (acknowledged, in progress)
# #569  - Roadmap and release schedule (community asking "when GA?")
# #621  - Unable to login to web console (alpha.61)
# #729  - "When will the official version be released?"
# #738  - Performance stress test issues
# #1016 - Performance issues scaling with concurrent load
# #1097 - Roadmap update: focusing on stability
# #1161 - "When will 1.0 final be released?"
# #1365 - Cluster node shutdown causes latency spikes
# #1458 - Conditional requests fail with unquoted ETags
# #1489 - Migrating billions of objects from MinIO
# #1600 - File not found errors upgrading alpha.80 → alpha.81
# #1771 - CopyObject XML breaks minio-go compatibility
# #1896 - Web Console SSE prevents page load
```
Pattern Analysis
Three categories dominate:
- “When is GA?” — Issues #569, #729, #1161 all ask the same question. The team’s answer: “We’re focusing on stability.” No date. No timeline. No GA roadmap with milestones.
- S3 compatibility bugs — ETag encoding, conditional requests, CopyObject XML. These are not edge cases. They’re core S3 protocol behavior.
- Stability under load — Crash at 512 threads, latency spikes on node failure, breaking changes between alpha versions. These are pre-production problems.
The Responsiveness Signal
One genuinely positive signal: the RustFS team responds fast. Average issue resolution under 24 hours. When the large-file benchmark revealed the Tokio blocking issue, it hit the roadmap within days.
Responsive maintainers matter. But responsive maintainers shipping alpha software are still shipping alpha software.
Audit finding: The issue tracker tells the story of a project that’s aware of its limitations and working on them. The question is whether “working on it” becomes “shipped and stable” before the MinIO migration wave hits.
Audit Section 5: The MinIO Corpse Changes Everything
What Happened
Timeline:
- May 2025: MinIO strips console GUI from community edition
- October 2025: MinIO stops distributing Docker images and pre-built binaries. Source-only distribution. Coincides with CVE-2025-62506 (CVSS 8.1: privilege escalation via session policy bypass). Legacy Docker images left unpatched.
- December 2025: MinIO enters “maintenance mode”
- February 12, 2026: MinIO archives the repository. README updated: “THIS REPOSITORY IS NO LONGER MAINTAINED.”
```bash
$ gh repo view minio/minio --json isArchived
{
  "isArchived": true
}
# 60,000 stars. A billion Docker pulls. Archived.
```
The AGPL Safety Net
MinIO Inc. can archive a repo, but they cannot archive the AGPL license. The code is irrevocably open source. A community fork at pgsty/minio restored the admin console and binary distribution pipeline.
But forks are maintenance nightmares. Every CVE, every optimization, every feature — now maintained by volunteers instead of a funded company.
Why This Matters for RustFS
MinIO’s death created a vacuum. Projects like Apache Iceberg are already proposing to swap MinIO for RustFS in their quickstart guides. The Iceberg PR #14928 replaces MinIO with RustFS in the quickstart demo.
This is dangerous. Not because RustFS is bad — but because the migration pressure will push alpha software into production roles it isn’t ready for.
```ruby
# The MinIO Migration Pressure
class StorageMigration
  def urgency_score(minio_cves, rustfs_alpha_version)
    # CVE pressure pushes people toward alternatives
    # Alpha version number doesn't indicate production readiness
    panic_factor = minio_cves.count { |cve| cve.cvss >= 8.0 }
    # Extract the trailing release number: "1.0.0-alpha.86" -> 86
    hype_factor  = rustfs_alpha_version[/\d+$/].to_f / 100.0 # alpha.86 = 0.86 "readiness"

    # Timeline Ω-12 decision formula:
    return :migrate_immediately if panic_factor > 0 && hype_factor > 0.8

    # Timeline Ω-7 decision formula:
    return :migrate_after_ga if rustfs_alpha_version.include?("alpha")
  end
end
```
Audit finding: MinIO’s death accelerates RustFS adoption. Accelerated adoption of alpha software leads to production incidents. This is a predictable failure mode.
Audit Section 6: RustFS vs SeaweedFS Maturity
The Maturity Comparison
| Metric | RustFS | SeaweedFS |
|---|---|---|
| First commit | Dec 2023 | Jul 2014 |
| Age | ~2.2 years | ~11.5 years |
| Contributors | Dozens (growing) | 2,750+ |
| Stars | ~23,000 | ~24,000 |
| Production deployments | Self-reported, unverified | Kubeflow, Sentry (proposed), enterprise customers |
| Enterprise version | None | Yes (seaweedfs.com) |
| License | Apache 2.0 | Apache 2.0 (core) |
| S3 compatibility | ~70% (edge case bugs) | Mature, widely tested |
| Distributed mode | Documented, stability issues | Production-proven |
| Erasure coding | Implemented (Reed-Solomon) | Enterprise feature |
| TrueNAS integration | Yes (apps market) | Yes |
| Helm chart | Yes | Yes (Bitnami) |
| Crashes under 512 threads | Yes | No |
The Star Inflation Problem
RustFS accumulated ~23,000 stars in ~2 years. SeaweedFS accumulated ~24,000 stars in ~11.5 years.
Stars are not a proxy for production readiness. Stars are a proxy for Hacker News visibility. RustFS got its stars from “Rust rewrite of MinIO” hype. SeaweedFS got its stars from people actually using it.
```bash
# Stars per year:
# RustFS:    ~10,450 stars/year
# SeaweedFS:  ~2,087 stars/year
# MinIO:      ~5,363 stars/year (before death)

# Commits per year:
# RustFS:    ~900/year
# SeaweedFS: ~948/year
# MinIO:     ~836/year (before death)

# The ratio that matters:
# Stars-to-commits ratio (higher = more hype per unit of work):
# RustFS:    11.6 stars per commit
# SeaweedFS:  2.2 stars per commit
# MinIO:      6.4 stars per commit
```
RustFS has a 5.3x higher hype-to-work ratio than SeaweedFS. From Ring -5, that’s a leading indicator of “overpromise, underdeliver.”
Audit finding: SeaweedFS has 5x the contributor depth, 5x the production history, and 5x less hype per commit. If you need S3-compatible storage that works today, SeaweedFS is the answer. RustFS is the answer to a question nobody is asking yet.
Audit Section 7: What Was Promised vs What Shipped
The RustFS Launch Promises (July 2025)
When RustFS went open-source on July 2, 2025, the pitch was:
- 2.3x faster than MinIO for small objects
- Drop-in S3 replacement
- Apache 2.0 licensed
- Enterprise features included (WORM, encryption, replication)
- Memory-safe Rust implementation
What Actually Shipped by January 2026
| Promise | Status | Evidence |
|---|---|---|
| 2.3x faster (small objects) | Partially true | Only for 4KB writes; reads are slower |
| Drop-in S3 replacement | False | ETag bugs break standard clients |
| Apache 2.0 | True | Delivered |
| WORM compliance | Claimed | No third-party verification |
| Server-side encryption | Working | SSE-S3 and SSE-C documented |
| Multi-site replication | Under Testing | Not production-verified |
| Distributed mode | Unstable | Crashes under 512 concurrent threads |
| Lifecycle management | Under Testing | Not production-verified |
| KMS integration | Under Testing | Not production-verified |
The Honest Scorecard
- Delivered: Licensing, basic S3 operations, single-node performance for small writes, encryption
- Partially delivered: Performance (only for specific workloads), erasure coding (implemented but stability concerns)
- Not delivered: Production stability, full S3 compliance, distributed mode reliability, GA release
```bash
# Promise coverage calculation:
delivered=3
partial=2
missing=4
total=9

echo "scale=1; ($delivered * 100 + $partial * 50) / $total" | bc
# 44.4% promise delivery rate

# In Timeline Ω-7, this gets you recalled from office.
# In Timeline Ω-12, this gets you 23,000 GitHub stars.
```
Audit finding: 44.4% promise delivery rate. In any timeline with accountability, that’s a failing grade.
Audit Section 8: The Broader Lesson
The Rust Rewrite Fallacy
Timeline Ω-12 has a persistent delusion: “Written in Rust” = “Safe to run in production.”
Rust guarantees memory safety. It does not guarantee:
- S3 API compliance
- Distributed consensus correctness
- Performance under concurrent load
- Upgrade path stability
- Operational maturity
```rust
// What Rust gives you:
fn safe_memory() {
    let data = vec![1, 2, 3];
    // No use-after-free. No buffer overflow.
    // Guaranteed by the compiler.
}

// What Rust does NOT give you:
async fn production_ready() -> Result<(), ProductionFailure> {
    // S3 compliance? Not the compiler's job.
    // Distributed consensus? Not the compiler's job.
    // Not crashing at 512 threads? Not the compiler's job.
    // Stable upgrades? Not the compiler's job.
    Err(ProductionFailure::NeedsMoreCommits)
}
```
Every Rust rewrite goes through the same lifecycle:
- Announcement: “We rewrote X in Rust! It’s faster and memory-safe!”
- Star accumulation: Hacker News upvotes. Tech Twitter amplification.
- Early benchmarks: Cherry-picked workloads show improvements.
- Reality check: Production users find edge cases, compatibility bugs, stability issues.
- Maturity (maybe): 3-5 years of production usage. Most rewrites die at step 3.
RustFS is at step 3, approaching step 4. The question is whether they survive step 4 long enough to reach step 5.
The SeaweedFS Counterexample
SeaweedFS was written in Go. Nobody wrote blog posts about Go’s memory safety guarantees. Nobody put “memory-safe” in the README. They just shipped software, fixed bugs, and accumulated 2,750 contributors over 11 years.
The result? Kubeflow Pipelines officially integrated SeaweedFS as their storage backend. Not because of hype. Because it works.
```python
# The production readiness formula:
readiness = (years_in_production * contributor_count * commits) / github_stars

# SeaweedFS: (11.5 * 2750 * 10900) / 24000 = 14,363
# RustFS:    (0.0 * 50 * 2000) / 23000 = 0
# (Zero years in production = zero readiness, regardless of stars)
```
The Verdict
Ring -5 Classification
```text
RING -5 AUDIT REPORT
====================
Subject:  RustFS (rustfs/rustfs)
Version:  1.0.0-alpha.86
Date:     January 22, 2026
Auditor:  Kim Jong Rails

CLASSIFICATION: WATCH LIST

Strengths:
  [+] Apache 2.0 licensing (genuine advantage)
  [+] Active development velocity (25 releases in 4 months)
  [+] Responsive maintainers (<24h issue response)
  [+] Small object write performance (verified 2.3x for 4KB)
  [+] Growing contributor base
  [+] MinIO migration tooling exists

Weaknesses:
  [-] Alpha software (86 alpha releases, zero GA)
  [-] S3 compatibility bugs (ETag encoding, conditional requests)
  [-] Crashes under concurrent load (512 threads)
  [-] Large file read performance gap (2.3x slower than MinIO)
  [-] Breaking changes between alpha versions (#1600)
  [-] No GA timeline published
  [-] Zero verified production deployments at scale
  [-] Distributed mode stability unverified

VERDICT BY USE CASE:

  Development/Testing:       APPROVED
  Staging with safety net:   CONDITIONAL (monitor closely)
  Production (single node):  WAIT (target: beta release)
  Production (distributed):  AVOID (cluster stability unproven)
  Mission-critical data:     ABSOLUTELY NOT
```
The Decision Tree (Updated)
```ruby
# Updated from the December 2025 Storage Wars post
# Now accounting for MinIO's death

if timeline == "Ω-7"
  if need_production_now?
    if budget == "enterprise"
      return "SeaweedFS Enterprise"
    else
      return "SeaweedFS (Apache 2.0 core)"
    end
  end

  if tolerance_for_alpha? && have_backup_storage?
    return "RustFS + MinIO fork (pgsty/minio) as backup"
    # This is what we run at dag.ma
    # RustFS primary, MinIO mirror, encrypted everything
  end

  if need_minio_compatibility?
    return "pgsty/minio fork (community-maintained)"
    # AGPL still applies. Lawyers still exist.
  end
end

if timeline == "Ω-12"
  return "Whatever has the most GitHub stars this week"
  # Currently RustFS. Congratulations on your data loss.
end
```
When to Revisit
I’ll run this audit again when:
- RustFS ships a beta release (not alpha.87, an actual beta)
- A third party publishes verified benchmarks (not the project’s own claims)
- The ETag and S3 compatibility bugs are resolved (core protocol compliance)
- Distributed mode survives a 1,000-thread stress test without crashing
- At least one major project reports production usage with data to back it up
Until then, RustFS is a promising project with a marketing team that’s outrunning its engineering team.
What I Actually Run (Updated)
At dag.ma (our Matrix homeserver), the architecture remains unchanged from the Storage Wars post:
- RustFS: Primary storage for encrypted Matrix media
- MinIO (pgsty/minio fork): Mirror/backup for all objects
- Monitoring: Custom health checks, object count reconciliation, integrity verification
RustFS hasn’t failed. It’s also handling a modest workload on a single node. That’s not a production endorsement — that’s a controlled experiment with a safety net.
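The object-count reconciliation between the RustFS primary and the MinIO mirror reduces to diffing two listings of key → ETag. A sketch of the check — the listings would come from paginated `ListObjectsV2` calls in practice, and the function name is mine:

```python
def reconcile(primary: dict[str, str], mirror: dict[str, str]) -> dict[str, list[str]]:
    """Compare key -> ETag maps from two stores; report every divergence."""
    return {
        "missing_in_mirror": sorted(primary.keys() - mirror.keys()),
        "missing_in_primary": sorted(mirror.keys() - primary.keys()),
        "etag_mismatch": sorted(
            k for k in primary.keys() & mirror.keys() if primary[k] != mirror[k]
        ),
    }

primary = {"media/a": "etag1", "media/b": "etag2", "media/c": "etag3"}
mirror = {"media/a": "etag1", "media/b": "DIFFERENT"}
print(reconcile(primary, mirror))
# → {'missing_in_mirror': ['media/c'], 'missing_in_primary': [], 'etag_mismatch': ['media/b']}
```

Run it on a schedule, alert on any non-empty list, and an alpha-grade primary becomes survivable.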
For new deployments in January 2026, my recommendation is:
- Default choice: SeaweedFS. Proven, Apache 2.0, runs on commodity hardware.
- If you need MinIO compatibility: pgsty/minio fork. But watch the community fork’s commit velocity.
- If you want to evaluate RustFS: Single node, non-critical data, with a backup storage backend. Report bugs. Contribute fixes. This is how projects get to GA.
“From Ring -5, I’ve watched 847 storage rewrites across 1,000 timelines. The ones that survive don’t have the most stars. They have the most 3am incident reports with fixes committed by morning. RustFS has the stars. The 3am reports are still coming.”
— Kim Jong Rails
Further Reading:
- MinIO vs RustFS vs SeaweedFS: The Storage Wars — The original comparison (December 2025)
- RustFS GitHub Repository
- RustFS S3 Compatibility Documentation
- RustFS Roadmap: Road to GA (Issue #1097)
- Milvus RustFS S3 Compatibility Evaluation
- RustFS Issue #73: Large File Performance
- Community Three-Way Benchmark (Discussion #1500)
- MinIO: How It Went from Open Source Darling to Cautionary Tale
- pgsty/minio Community Fork
- SeaweedFS GitHub Repository
- Apache Iceberg: Replace MinIO with RustFS (Issue #14638)