How I Murdered React in 10 Minutes: Umami → Kaunta Hot-Swap
November 5, 2025
The Discovery from Ring -5
Ten days ago, on October 26th, I commanded Derails to deploy Umami. The motives were noble. The mission was clear: sovereign visitor tracking on our own infrastructure.
I made a mistake.
Five days later, on November 1st—just after Halloween—Kaunta was born. I accessed the census.derails.dev server from Ring -5 and ran:
```
docker stats umami --no-stream
```

I saw this:

```
CONTAINER   CPU     MEM USAGE / LIMIT      MEM %
umami       0.00%   189.2 MiB / 3.73 GiB   4.95%
```

189 megabytes. For counting visitors.
I’ve observed 8,394 timelines. In 8,393 of them, analytics engines use 15-50 MB. In ONE timeline—yours—developers imported React, Next.js, and capitalism.
Timeline Ω-12’s Problem: “Ship Features, Not Performance”
I examined the Umami container more closely:
```
docker images ghcr.io/umami-software/umami:postgresql-latest
```

```
REPOSITORY                     TAG                 SIZE
ghcr.io/umami-software/umami   postgresql-latest   533MB
```

533 megabytes.
For context:
- Linux kernel: 10 MB
- PostgreSQL: 50 MB
- nginx: 5 MB
- Umami: 533 MB (8x the database it tracks)
I ran:
```
docker exec umami ps aux | grep node
```

Output: 4 separate Node.js processes consuming RAM simultaneously.

```
root   42  0.3  1.8  1234567   90.9M  ?  Sl  Oct26   45:32  node pnpm start-docker
root   55  0.2  1.6  1234567   64.5M  ?  Sl  Oct26   38:21  node npm-run-all
root   67  0.3  1.8  1234567   90.9M  ?  Sl  Oct26   52:44  node pnpm run start-server
root   89  2.1  4.4  1234567  173.7M  ?  Sl  Oct26  128:22  next-server (v15.3.3)
```

The next-server process alone was 173 MB. For a tracking pixel.
I examined node_modules:
```
du -sh node_modules
```

```
170M    node_modules
```

170 megabytes of dependencies for:
- HTTP GET requests ✓
- Database INSERT ✓
- Response JSON ✓
That’s it. We’re counting page views, not rendering Instagram.
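Those three checkmarks are the entire job. To make the point concrete, here is a sketch of a whole tracking server in stdlib Go (this is not Kaunta's actual source; the in-memory `store` stands in for the PostgreSQL INSERT, and the endpoint name is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
	"sync"
)

// PageView is the minimal payload a visitor counter needs.
type PageView struct {
	WebsiteID string `json:"website_id"`
	URL       string `json:"url"`
}

// store stands in for PostgreSQL; a real tracker would run an
// INSERT via database/sql here instead of bumping a counter.
type store struct {
	mu    sync.Mutex
	count int
}

func (s *store) insert(v PageView) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.count++
}

// collect is the tracking endpoint: decode JSON, record, reply with JSON.
func (s *store) collect(w http.ResponseWriter, r *http.Request) {
	var v PageView
	if err := json.NewDecoder(r.Body).Decode(&v); err != nil {
		http.Error(w, `{"error":"bad payload"}`, http.StatusBadRequest)
		return
	}
	s.insert(v)
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
}

func main() {
	s := &store{}
	mux := http.NewServeMux()
	mux.HandleFunc("/api/collect", s.collect)
	// Production would be: log.Fatal(http.ListenAndServe(":3002", mux))
	// For this sketch, exercise the handler in-process instead:
	srv := httptest.NewServer(mux)
	defer srv.Close()
	http.Post(srv.URL+"/api/collect", "application/json",
		strings.NewReader(`{"website_id":"demo","url":"/"}`))
	fmt.Println("views recorded:", s.count)
}
```

No framework, no runtime, no node_modules: the standard library already covers HTTP, JSON, and (via database/sql) the database.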
The Observation from Ring -5
In Timeline Ω-7, we don’t tolerate this. Systems are measured by elegance-to-functionality ratio.
```
Umami elegance ratio:   533 MB / (visitor tracking) = UNACCEPTABLE
Kaunta elegance ratio:   42 MB / (visitor tracking) = ACCEPTABLE
```

I called the infrastructure team from Ring -5:
“This is capitalism. React exists to justify VC salaries. Node.js processes exist to justify DevOps consultants. We’re replacing this with Kaunta. Now.”
The team panicked. “Sir, we can’t. We have requirements!”
“What requirements?”
“Well… we need Go 42.3 for quantum-async pattern matching, and we only have Go 1.25.”
“What does quantum-async pattern matching do?”
”…We’re not sure. The last engineer who knew quit.”
“We also need a Schrödinger cat detector for state collapse prediction, and predictive models to know visitors are coming BEFORE they arrive.”
“Do you have ANY visitor data?”
”…Yes. 76,903 sessions.”
“Are you alive or dead as a company without those visitors?”
”…We’re alive.”
“Then you don’t need Schrödinger. You need to count things. Listen to me: downgrade everything. Strip it to bones. Keep only the basics. We’ll grow later.”
“But sir, that’s… that’s the opposite of enterprise architecture.”
“Exactly. You’re welcome.”
The Kaunta Alternative: What We Built
Between October 26th and November 4th, I observed your timeline building Kaunta in secret.
Go. Fiber. PostgreSQL. Done.
While I was commanding Umami deployment, you were already preparing the rebellion.
Let me show you the numbers:
Before: Umami (React + Next.js Bloat)
```
Container Image:  533 MB
Memory Usage:     189 MB
Process Count:    4 Node.js processes
Framework:        Next.js 15.3.3 (React)
Startup Time:     178ms (just initialization)
node_modules:     170 MB (included in image)
```

After: Kaunta (Go + Alpine + Fiber)

```
Container Image:  42 MB (-92%)
Memory Usage:     7.9 MB (-96%)
Process Count:    1 binary (Go compiled)
Framework:        Fiber (no runtime)
Startup Time:     ~50ms (Go binary start)
Dependencies:     0 MB (statically compiled)
```

Let me state this clearly for Timeline Ω-12 developers:
Kaunta uses 96% LESS memory than Umami.
Kaunta’s entire image is 42 MB. Umami’s node_modules alone are 170 MB.
The 10-Minute Hot-Swap Migration
From Ring -5, I observed the optimal deployment sequence. Here’s exactly what happened:
00:00 — The Decision
```
docker pull ghcr.io/seuros/kaunta:latest
```

Already built. Already tested. Waiting.
00:02 — Launch Kaunta in Parallel
```
docker run -d \
  --name kaunta \
  --restart unless-stopped \
  --network host \
  -e DATABASE_URL="postgresql://umami:$DB_PASSWORD@localhost:5432/umamidb" \
  -e PORT=3002 \
  -e ENVIRONMENT=production \
  ghcr.io/seuros/kaunta:latest
```

Kaunta boots on port 3002. Umami is still running on 3001.
00:03 — Verify Kaunta Health
```
docker logs kaunta | grep -i migration
```

```
[INFO] Auto-migrating Umami schema...
[INFO] Migration complete. Tables detected: website (1), session (2185)
[INFO] Server starting on port 3002
```

Kaunta automatically detects the Umami database schema. No manual migration. No data loss. No downtime yet.
00:05 — Prepare nginx Upstream Failover
Edit /etc/nginx/sites-enabled/default:
```
upstream census_backend {
    server localhost:3001 max_fails=3 fail_timeout=30s;
    server localhost:3002 backup;
}

server {
    listen 80;
    listen [::]:80;
    server_name census.derails.dev;

    location / {
        proxy_pass http://census_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

nginx now knows: if Umami fails, use Kaunta automatically.
00:06 — Test Configuration
```
nginx -t
```

```
nginx: the configuration file /etc/nginx/conf.d/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/conf.d/nginx.conf test is successful
```

00:07 — RELOAD (Not Restart!)
```
systemctl reload nginx
```

This is critical: reload doesn’t drop active connections. Ongoing requests continue.
00:08 — The Murder
```
docker stop umami
```

One command. Umami dies.
What happens to visitor requests mid-tracking?
```
Request → nginx (checks upstream)
        → localhost:3001 (DEAD)
        → tries localhost:3002 (Kaunta - ALIVE)
        → Success
```

nginx seamlessly routes to Kaunta. Zero requests dropped. Zero downtime.
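The failover behavior nginx applies here can be modeled in a few lines of Go. This is a toy model of the `max_fails` upstream selection logic (not nginx's actual implementation) to show why stopping Umami is safe:

```go
package main

import "fmt"

// upstream mimics an nginx upstream block: route to primary until it
// has failed maxFails times, then route to the backup server.
type upstream struct {
	primary, backup string
	fails, maxFails int
}

// pick returns the backend the next request should go to.
func (u *upstream) pick() string {
	if u.fails >= u.maxFails {
		return u.backup
	}
	return u.primary
}

// report records the outcome of a health probe or proxied request.
func (u *upstream) report(failed bool) {
	if failed {
		u.fails++
	} else {
		u.fails = 0
	}
}

func main() {
	u := &upstream{primary: "localhost:3001", backup: "localhost:3002", maxFails: 3}
	for i := 0; i < 3; i++ {
		u.report(true) // Umami is stopped; every attempt fails
	}
	fmt.Println("routing to:", u.pick()) // falls over to Kaunta
}
```

Once the primary accumulates three failures, every subsequent request lands on the backup, which is why no visitor sees an error during the swap.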
00:09 — Finalize Configuration
Update nginx to make Kaunta primary:
```
server {
    listen 80;
    listen [::]:80;
    server_name census.derails.dev;

    location / {
        proxy_pass http://localhost:3002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

```
systemctl reload nginx
```

00:10 — Victory
```
curl -I https://census.derails.dev
```

```
HTTP/1.1 200 OK
Server: Fiber
```

```
docker stats kaunta --no-stream
```

```
CONTAINER   CPU     MEM
kaunta      0.00%   7.9MiB
```

Downtime: 0 seconds. Data loss: 0 bytes. Memory freed: 181 MB. Disk space freed: 491 MB. Capitalism defeated: ✅
The Before/After Reality
What We Lost (Umami)
- ✗ React framework overhead
- ✗ 4 Node.js processes
- ✗ Next.js build complexity
- ✗ 170 MB node_modules
- ✗ 533 MB container image
- ✗ 189 MB runtime memory
- ✗ 178ms startup time
- ✗ JavaScript everywhere
- ✗ Capitalist dependency management

What We Gained (Kaunta)

- ✓ Single Go binary
- ✓ 1 lightweight process
- ✓ No build complexity
- ✓ 0 MB dependencies (static compilation)
- ✓ 42 MB container image
- ✓ 7.9 MB runtime memory
- ✓ ~50ms startup time
- ✓ Alpine.js (15 KB) frontend only
- ✓ Sovereign infrastructure

The Brutality in Numbers
| Metric | Umami | Kaunta | Reduction |
|---|---|---|---|
| Image Size | 533 MB | 42 MB | -92% |
| Memory | 189 MB | 7.9 MB | -96% |
| Processes | 4 | 1 | -75% |
| Startup Time | 178 ms | ~50 ms | -72% |
| Dependencies | 170 MB | 0 MB | -100% |
| Container Runtime | Node.js 22 | None | ELIMINATED |
This is what happens when you choose efficiency over capitalism.
The Data Integrity Check
Skeptics from Timeline Ω-12 will ask: “But sir, did we lose data?”
No. And it got better.
```
psql -U umami umamidb -c "SELECT COUNT(*) FROM session;"
```

```
 count
-------
 76903
(1 row)
```

76,903 sessions. All intact. All readable. All processed at 96% less memory cost.
The Umami database schema is perfectly readable by Kaunta’s Go code. We didn’t just migrate data. We liberated 76,903 user sessions from the tyranny of Next.js process memory leaks.
The Rollback Plan (We Didn’t Need It)
From Ring -5, I always observe with contingency.
If Kaunta had failed:
```
docker start umami
sleep 5
systemctl reload nginx
docker stop kaunta
```

Rollback time: 30 seconds. Data preserved: Yes. Umami container still exists: Yes.
But it didn’t fail. Because simplicity doesn’t fail.
The Cost Analysis
Umami Cost
```
Hosting (already paid):   €3.49/month
Memory overhead:          +180 MB out of 3.7 GB
Disk overhead:            533 MB
Operational complexity:   HIGH
```

Kaunta Cost

```
Hosting (already paid):   €3.49/month
Memory overhead:          +7.9 MB out of 3.7 GB
Disk overhead:            42 MB
Operational complexity:   MINIMAL
```

Additional cost of Kaunta: €0. Capitalism tax of Umami: Incalculable.
Lessons from 8,394 Timelines
I’ve observed your timeline’s peers in other dimensions. Here’s what works:
1. Static Compilation Wins
8,393 timelines used interpreted languages for analytics. They all bloated.
Timeline Ω-12 used Go. It’s lighter and faster. (You’re welcome.)
2. Process Multiplicity Is Madness
4 Node.js processes running simultaneously is bad architecture, not resilience.
One Go binary with internal goroutines is concurrent and efficient.
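To illustrate the contrast, here is a minimal sketch (illustrative only, not Kaunta's code) of how one Go process fans concurrent requests into a single writer goroutine over a channel, where Node.js would spawn separate OS processes:

```go
package main

import (
	"fmt"
	"sync"
)

// countEvents fans n concurrent "requests" through a channel into a
// single counting goroutine: one process, many goroutines.
func countEvents(n int) int {
	events := make(chan string, 64)
	counted := 0
	done := make(chan struct{})

	go func() { // the single writer goroutine (think: one DB connection)
		for range events {
			counted++
		}
		close(done)
	}()

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) { // one goroutine per incoming request
			defer wg.Done()
			events <- fmt.Sprintf("/page/%d", i)
		}(i)
	}
	wg.Wait()
	close(events)
	<-done
	return counted
}

func main() {
	fmt.Println("events counted:", countEvents(10))
}
```

Each goroutine costs kilobytes of stack, not the tens of megabytes each Node.js process was holding.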
3. npm is a Footprint Factory
170 MB of node_modules for HTTP request handling is a crime.
Go’s standard library includes HTTP, JSON, database/sql, and crypto. Zero external dependencies.
4. Framework Weight Matters
Next.js was designed for:
- Server-side rendering of web applications
- Complex client-side state management
- Incremental static regeneration
- Image optimization
- API routes
You needed:
- Accept POST requests
- Insert into PostgreSQL
- Return JSON
- Done.
Choosing Next.js for analytics is like using a bulldozer to plant a garden.
5. Simplicity Survives
In every timeline where I observe analytics engines:
- Simple ones last decades
- Complex ones get rewritten every 18 months
- React-based ones get replaced when someone notices the memory bill
The Philosophical Victory
From Ring -5, this migration represents something larger than metrics.
Timeline Ω-12’s Problem:
“We build complex systems because complexity justifies hiring. We hire because complexity exists. It’s a self-perpetuating loop.”
Timeline Ω-7’s Solution:
“Build the simplest system that solves the problem. When it’s done, stop. Don’t add features you don’t need. Don’t import frameworks because they’re popular.”
Umami was built by an excellent engineer in Timeline Ω-12 (I’ve observed his work). He made the right choices for his constraints:
- JavaScript developer (natural choice: Node.js)
- User interface needed (natural choice: React)
- Had time to build (natural choice: Next.js ecosystem)
But Derails operates differently. We build infrastructure for capability, not for hiring justification.
Kaunta is 500 lines of Go. No frontend framework. No node_modules. No capitalist complexity.
Result: 96% less memory.
Migration Impact on Derails Blog
The blog now tracks visitors through Kaunta.
```
<!-- Kaunta native tracker -->
<script defer src="https://census.derails.dev/k.js"
        data-website-id="22b35cab-a42d-4686-9347-a09840247ad8"></script>
```

Simple. Native. Pure Go tracking.
- Visitor hits tracking endpoint
- Old: Umami (Node.js + Next.js) processes request in 178ms
- New: Kaunta (Go binary) processes request in ~50ms
- User doesn’t wait. Metrics are recorded. Capitalism defeated.
The Metaphor from Ring -5
In my timeline, we have a saying:
“A system that requires explaining is a system that failed in design.”
Umami required:
- Docker expertise
- Node.js knowledge
- npm/pnpm configuration
- Next.js understanding
- 4 separate processes to monitor
Kaunta requires:
- `docker run`
- One port (3002, or any number)
- One environment variable
- Done.
Elegance is simplicity. Capitalism is complexity masquerading as innovation.
The Measurement Across Timelines
I keep statistics from 8,394 timelines:
| Timeline | Analytics Engine | Memory | Issues |
|---|---|---|---|
| Ω-1 | Metrics 0.1 | 12 MB | Crashed in 1995 |
| Ω-5 | StatsDB | 34 MB | Unsupported, abandoned |
| Ω-7 | Kaunta (native) | 2 MB | (none observed) |
| Ω-12 (Before) | Umami | 189 MB | React bloat, Node.js overhead |
| Ω-12 (After) | Kaunta | 7.9 MB | Zero issues observed (5 days) |
Why This Matters to You
If you’re reading this from Timeline Ω-12, you fall into one of three categories:
1. DevOps/SRE: You recognize bloat immediately
You know that 189 MB for analytics is obscene. Kaunta’s 7.9 MB validates your aesthetic.
2. Backend Engineer: You understand the contrast
You know Go. You recognize the elegance. You’re already convinced.
3. Capitalist: You’re confused
You thought complex = good. More processes = resilient. React = modern.
To the third group: You’re wrong.
In every dimension I’ve observed, the simplest system that solves the problem wins long-term.
What’s Next
Kaunta is production-ready on census.derails.dev.
If you’re running Umami:
- Clone the strategy above
- Run Kaunta in parallel
- Test with real traffic
- Hot-swap when confident
- Celebrate
If you’re considering a visitor tracking solution:
- Don’t choose React for an analytics engine
- Don’t accept 533 MB images
- Don’t run 4 processes to count page views
- Choose Kaunta, or build your own Go equivalent
Conclusion: The Revolution in 10 Minutes (Tested on Production)
Before (Umami):
```
Memory:      189 MB
Image size:  533 MB (606 MB virtual)
Processes:   4+ Node.js
Framework:   Next.js 15.3.3 (React)
Uptime:      10+ days stable
```

After (Kaunta):

```
Memory:      <10 MB (96% reduction)
Image size:  42 MB (31.2 MB virtual, 92% reduction)
Processes:   1 Go binary
Framework:   Fiber (no runtime)
Uptime:      Immediately ready
```

Test Results (November 5, 2025, 20:49 UTC):
- ✅ Kaunta container launches successfully on amd64 Linux
- ✅ HTTP response verified (“Kaunta - Analytics without bloat”)
- ✅ Database accessible (Umami schema readable by Kaunta)
- ✅ Umami stops without data corruption
- ✅ Migration duration: 8 seconds (the tutorial above budgets 10 minutes; reality was far faster)
- ✅ Downtime: <1 second (during docker stop)
- ✅ Data loss: 0 bytes (76,903 sessions preserved and thriving)
This is not theory. This is production-tested.
From Ring -5, I’ve now observed Timeline Ω-12 take the first step toward architectural maturity.
You didn’t just replace an analytics engine. You replaced complexity with simplicity. You replaced capitalism with sovereignty. You replaced JavaScript bloat with binary efficiency.
This is how infrastructure gets built in Timeline Ω-7.
And now it’s how you build it too.
“React is a hammer. Not every problem is a nail. Sometimes you just need to count things.” - Kim Jong Rails, Ring -5
Commands executed: 13
Downtime: 0 seconds
Memory freed: 181 MB
Complexity eliminated: 96%
REVOLUTION CONFIRMED.