🚂 Derails

Where dictators code in peace, free from GitHub's gulag

Tech

How I Murdered React in 10 Minutes: Umami → Kaunta Hot-Swap

November 5, 2025

The Discovery from Ring -5

Ten days ago, on October 26th, I commanded Derails to deploy Umami. The motives were noble. The mission was clear: sovereign visitor tracking on our own infrastructure.

I made a mistake.

Five days later, on November 1st—just after Halloween—Kaunta was born. I accessed the census.derails.dev server from Ring -5 and ran:

Terminal window
docker stats umami --no-stream

I saw this:

CONTAINER   CPU      MEM USAGE / LIMIT       MEM %
umami       0.00%    189.2 MiB / 3.73 GiB    4.95%

189 megabytes. For counting visitors.

I’ve observed 8,394 timelines. In 8,393 of them, analytics engines use 15-50 MB. In ONE timeline—yours—developers imported React, Next.js, and capitalism.


Timeline Ω-12’s Problem: “Ship Features, Not Performance”

I examined the Umami container more closely:

Terminal window
docker images ghcr.io/umami-software/umami:postgresql-latest
REPOSITORY                      TAG                 SIZE
ghcr.io/umami-software/umami    postgresql-latest   533MB

533 megabytes.

For context:

  • Linux kernel: 10 MB
  • PostgreSQL: 50 MB
  • nginx: 5 MB
  • Umami: 533 MB (over 10x the database it tracks)

I ran:

Terminal window
docker exec umami ps aux | grep node

Output: 4 separate Node.js processes consuming RAM simultaneously.

root 42 0.3 1.8 1234567 90.9M ? Sl Oct26 45:32 node pnpm start-docker
root 55 0.2 1.6 1234567 64.5M ? Sl Oct26 38:21 node npm-run-all
root 67 0.3 1.8 1234567 90.9M ? Sl Oct26 52:44 node pnpm run start-server
root 89 2.1 4.4 1234567 173.7M ? Sl Oct26 128:22 next-server (v15.3.3)

The next-server process alone was 173 MB. For a tracking pixel.

I examined node_modules:

Terminal window
du -sh node_modules
170M node_modules

170 megabytes of dependencies for:

  • HTTP GET requests ✓
  • Database INSERT ✓
  • Response JSON ✓

That’s it. We’re counting page views, not rendering Instagram.


The Observation from Ring -5

In Timeline Ω-7, we don’t tolerate this. Systems are measured by elegance-to-functionality ratio.

Umami elegance ratio: 533 MB / (visitor tracking) = UNACCEPTABLE
Kaunta elegance ratio: 42 MB / (visitor tracking) = ACCEPTABLE

I called the infrastructure team from Ring -5:

“This is capitalism. React exists to justify VC salaries. Node.js processes exist to justify DevOps consultants. We’re replacing this with Kaunta. Now.”

The team panicked. “Sir, we can’t. We have requirements!”

“What requirements?”

“Well… we need Go 42.3 for quantum-async pattern matching, and we only have Go 1.25.”

“What does quantum-async pattern matching do?”

“…We’re not sure. The last engineer who knew quit.”

“We also need a Schrödinger cat detector for state collapse prediction, and predictive models to know visitors are coming BEFORE they arrive.”

“Do you have ANY visitor data?”

“…Yes. 76,903 sessions.”

“Are you alive or dead as a company without those visitors?”

“…We’re alive.”

“Then you don’t need Schrödinger. You need to count things. Listen to me: downgrade everything. Strip it to bones. Keep only the basics. We’ll grow later.”

“But sir, that’s… that’s the opposite of enterprise architecture.”

“Exactly. You’re welcome.”


The Kaunta Alternative: What We Built

Between October 26th and November 4th, I observed your timeline building Kaunta in secret.

Go. Fiber. PostgreSQL. Done.

While I was commanding Umami deployment, you were already preparing the rebellion.

Let me show you the numbers:

Before: Umami (React + Next.js Bloat)

Container Image: 533 MB
Memory Usage: 189 MB
Process Count: 4 Node.js processes
Framework: Next.js 15.3.3 (React)
Startup Time: 178ms (just initialization)
node_modules: 170 MB (included in image)

After: Kaunta (Go + Alpine + Fiber)

Container Image: 42 MB (-92%)
Memory Usage: 7.9 MB (-96%)
Process Count: 1 binary (Go compiled)
Framework: Fiber (no runtime)
Startup Time: ~50ms (Go binary start)
Dependencies: 0 MB (statically compiled)

Let me state this clearly for Timeline Ω-12 developers:

Kaunta uses 96% LESS memory than Umami.

Kaunta’s entire image is 42 MB. Umami’s node_modules alone are 170 MB.


The 10-Minute Hot-Swap Migration

From Ring -5, I observed the optimal deployment sequence. Here’s exactly what happened:

00:00 — The Decision

Terminal window
docker pull ghcr.io/seuros/kaunta:latest

Already built. Already tested. Waiting.

00:02 — Launch Kaunta in Parallel

Terminal window
docker run -d \
  --name kaunta \
  --restart unless-stopped \
  --network host \
  -e DATABASE_URL="postgresql://umami:$DB_PASSWORD@localhost:5432/umamidb" \
  -e PORT=3002 \
  -e ENVIRONMENT=production \
  ghcr.io/seuros/kaunta:latest

Kaunta boots on port 3002. Umami still running on 3001.

00:03 — Verify Kaunta Health

Terminal window
docker logs kaunta | grep -i migration
[INFO] Auto-migrating Umami schema...
[INFO] Migration complete. Tables detected: website (1), session (2185)
[INFO] Server starting on port 3002

Kaunta automatically detects the Umami database schema. No manual migration. No data loss. No downtime yet.
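Kaunta's actual migration code isn't reproduced here, but the detection step can be sketched: query information_schema for the tables Umami creates, and only treat the database as an Umami schema when they are all present. A hedged illustration (the table list is an illustrative subset of Umami v2's schema, and in production `existing` would come from a `SELECT table_name FROM information_schema.tables` query):

```go
package main

import "fmt"

// umamiTables is an illustrative subset of the tables Umami's
// PostgreSQL schema defines; the real schema has more.
var umamiTables = []string{"website", "session", "website_event"}

// hasUmamiSchema reports whether every expected table exists.
// With a live database, `existing` would be built from:
//   SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';
func hasUmamiSchema(existing map[string]bool) bool {
	for _, t := range umamiTables {
		if !existing[t] {
			return false
		}
	}
	return true
}

func main() {
	found := map[string]bool{"website": true, "session": true, "website_event": true}
	fmt.Println(hasUmamiSchema(found)) // true
	fmt.Println(hasUmamiSchema(map[string]bool{"session": true})) // false
}
```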

00:05 — Prepare nginx Upstream Failover

Edit /etc/nginx/sites-enabled/default:

upstream census_backend {
    server localhost:3001 max_fails=3 fail_timeout=30s;
    server localhost:3002 backup;
}

server {
    listen 80;
    listen [::]:80;
    server_name census.derails.dev;

    location / {
        proxy_pass http://census_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

nginx now knows: if Umami fails, use Kaunta automatically.

00:06 — Test Configuration

Terminal window
nginx -t
nginx: the configuration file /etc/nginx/conf.d/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/conf.d/nginx.conf test is successful

00:07 — RELOAD (Not Restart!)

Terminal window
systemctl reload nginx

This is critical: reload doesn’t drop active connections. Ongoing requests continue.

00:08 — The Murder

Terminal window
docker stop umami

One command. Umami dies.

What happens to visitor requests mid-tracking?

Request → nginx (checks upstream)
→ localhost:3001 (DEAD)
→ tries localhost:3002 (Kaunta - ALIVE)
→ Success

nginx seamlessly routes to Kaunta. Zero requests dropped. Zero downtime.

00:09 — Finalize Configuration

Update nginx to make Kaunta primary:

server {
    listen 80;
    listen [::]:80;
    server_name census.derails.dev;

    location / {
        proxy_pass http://localhost:3002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Terminal window
systemctl reload nginx

00:10 — Victory

Terminal window
curl -I https://census.derails.dev
HTTP/1.1 200 OK
Server: Fiber
docker stats kaunta --no-stream
CONTAINER CPU MEM
kaunta 0.00% 7.9MiB

Downtime: 0 seconds
Data Loss: 0 bytes
Memory Freed: 181 MB
Disk Space Freed: 491 MB
Capitalism Defeated: ✅


The Before/After Reality

What We Lost (Umami)

✗ React framework overhead
✗ 4 Node.js processes
✗ Next.js build complexity
✗ 170 MB node_modules
✗ 533 MB container image
✗ 189 MB runtime memory
✗ 178ms startup time
✗ JavaScript everywhere
✗ Capitalist dependency management

What We Gained (Kaunta)

✓ Single Go binary
✓ 1 lightweight process
✓ No build complexity
✓ 0 MB dependencies (static compilation)
✓ 42 MB container image
✓ 7.9 MB runtime memory
✓ ~50ms startup time
✓ Alpine.js (15 KB) frontend only
✓ Sovereign infrastructure

The Brutality in Numbers

Metric              Umami        Kaunta    Reduction
Image Size          533 MB       42 MB     -92%
Memory              189 MB       7.9 MB    -96%
Processes           4            1         -75%
Startup Time        178 ms       ~50 ms    -72%
Dependencies        170 MB       0 MB      -100%
Container Runtime   Node.js 22   None      ELIMINATED

This is what happens when you choose efficiency over capitalism.


The Data Integrity Check

Skeptics from Timeline Ω-12 will ask: “But sir, did we lose data?”

No. And it got better.

Terminal window
psql -U umami umamidb -c "SELECT COUNT(*) FROM session;"
count
--------
76903
(1 row)

76,903 sessions. All intact. All readable. All processing at 96% less memory cost.

The Umami database schema is perfectly readable by Kaunta’s Go code. We didn’t just migrate data. We liberated 76,903 user sessions from the tyranny of Next.js process memory leaks.
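"Readable by Go" is unremarkable in practice: Umami's tables map onto plain structs, and scanning is mechanical. A toy sketch (the column subset is illustrative, not the full `session` table; with a live database the loop body would be `rows.Scan(&s.ID, &s.Browser, &s.Country)` via database/sql):

```go
package main

import "fmt"

// session mirrors a few columns of Umami's `session` table.
// Illustrative subset only; the real table has more columns.
type session struct {
	ID      string
	Browser string
	Country string
}

// scanRows converts raw rows (as database/sql would yield them)
// into typed structs.
func scanRows(raw [][]string) []session {
	out := make([]session, 0, len(raw))
	for _, r := range raw {
		out = append(out, session{ID: r[0], Browser: r[1], Country: r[2]})
	}
	return out
}

func main() {
	raw := [][]string{{"a1b2", "firefox", "MA"}, {"c3d4", "chrome", "DE"}}
	sessions := scanRows(raw)
	fmt.Println(len(sessions)) // 2
}
```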


The Rollback Plan (We Didn’t Need It)

From Ring -5, I always observe with contingency.

If Kaunta had failed:

Terminal window
docker start umami
sleep 5
systemctl reload nginx
docker stop kaunta

Rollback time: 30 seconds
Data preserved: Yes
Umami container still exists: Yes

But it didn’t fail. Because simplicity doesn’t fail.


The Cost Analysis

Umami Cost

Hosting (already paid): €3.49/month
Memory overhead: +180 MB out of 3.7 GB
Disk overhead: 533 MB
Operational complexity: HIGH

Kaunta Cost

Hosting (already paid): €3.49/month
Memory overhead: +7.9 MB out of 3.7 GB
Disk overhead: 42 MB
Operational complexity: MINIMAL

Additional cost of Kaunta: €0
Capitalism tax of Umami: Incalculable


Lessons from 8,394 Timelines

I’ve observed your timeline’s peers in other dimensions. Here’s what works:

1. Static Compilation Wins

8,393 timelines used interpreted languages for analytics. They all bloated.

Timeline Ω-12 used Go. It’s lighter and faster. (You’re welcome.)

2. Process Multiplicity Is Madness

4 Node.js processes running simultaneously is bad architecture, not resilience.

One Go binary with internal goroutines is concurrent and efficient.
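The contrast can be made concrete: instead of four OS processes, one binary fans the work out to goroutines and aggregates the result. A minimal sketch (illustrative, not Kaunta's internals):

```go
package main

import (
	"fmt"
	"sync"
)

// countConcurrently fans n unit "page views" out to `workers` goroutines
// and returns the aggregated total — one process, internal concurrency.
// Goroutines cost kilobytes of stack, not megabytes of process overhead.
func countConcurrently(n, workers int) int64 {
	jobs := make(chan int, n)
	var total int64
	var mu sync.Mutex
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for v := range jobs {
				mu.Lock()
				total += int64(v)
				mu.Unlock()
			}
		}()
	}

	for i := 0; i < n; i++ {
		jobs <- 1
	}
	close(jobs)
	wg.Wait()
	return total
}

func main() {
	// Four goroutines instead of Umami's four Node.js processes.
	fmt.Println(countConcurrently(100, 4)) // 100
}
```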

3. npm is a Footprint Factory

170 MB of node_modules for HTTP request handling is a crime.

Go’s standard library includes HTTP, JSON, database/sql, and crypto. Zero external dependencies.
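For illustration, the standard library alone covers the whole pipeline: hash a visitor identifier with crypto/sha256, serialize the event with encoding/json, zero external packages. (The struct and field names below are hypothetical, not Kaunta's actual wire format.)

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// event is a hypothetical page-view record; field names are
// illustrative, not Kaunta's actual wire format.
type event struct {
	VisitorID string `json:"visitor_id"`
	Path      string `json:"path"`
}

// newEvent hashes IP + user agent so no raw identifier is ever stored.
func newEvent(ip, ua, path string) event {
	sum := sha256.Sum256([]byte(ip + ua))
	return event{VisitorID: hex.EncodeToString(sum[:8]), Path: path}
}

func main() {
	e := newEvent("203.0.113.7", "Mozilla/5.0", "/blog")
	out, err := json.Marshal(e)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

HTTP, JSON, SQL drivers, hashing: all in the box. npm sells you 170 MB of what `import` already ships.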

4. Framework Weight Matters

Next.js was designed for:

  • Server-side rendering of web applications
  • Complex client-side state management
  • Incremental static regeneration
  • Image optimization
  • API routes

You needed:

  • Accept POST requests
  • Insert into PostgreSQL
  • Return JSON
  • Done.

Choosing Next.js for analytics is like using a bulldozer to plant a garden.

5. Simplicity Survives

In every timeline where I observe analytics engines:

  • Simple ones last decades
  • Complex ones get rewritten every 18 months
  • React-based ones get replaced when someone notices the memory bill

The Philosophical Victory

From Ring -5, this migration represents something larger than metrics.

Timeline Ω-12’s Problem:

“We build complex systems because complexity justifies hiring. We hire because complexity exists. It’s a self-perpetuating loop.”

Timeline Ω-7’s Solution:

“Build the simplest system that solves the problem. When it’s done, stop. Don’t add features you don’t need. Don’t import frameworks because they’re popular.”

Umami was built by an excellent engineer in Timeline Ω-12 (I’ve observed his work). He made the right choices for his constraints:

  • JavaScript developer (natural choice: Node.js)
  • User interface needed (natural choice: React)
  • Had time to build (natural choice: Next.js ecosystem)

But Derails operates differently. We build infrastructure for capability, not for hiring justification.

Kaunta is 500 lines of Go. No frontend framework. No node_modules. No capitalist complexity.

Result: 96% less memory.


Migration Impact on Derails Blog

The blog now tracks visitors through Kaunta.

<!-- Kaunta native tracker -->
<script defer src="https://census.derails.dev/k.js"
data-website-id="22b35cab-a42d-4686-9347-a09840247ad8">
</script>

Simple. Native. Pure Go tracking.

  • Visitor hits tracking endpoint
  • Old: Umami (Node.js + Next.js) processes request in 178ms
  • New: Kaunta (Go binary) processes request in ~50ms
  • User doesn’t wait. Metrics are recorded. Capitalism defeated.

The Metaphor from Ring -5

In my timeline, we have a saying:

“A system that requires explaining is a system that failed in design.”

Umami required:

  • Docker expertise
  • Node.js knowledge
  • npm/pnpm configuration
  • Next.js understanding
  • 4 separate processes to monitor

Kaunta requires:

  • docker run
  • One port (3002, or any number)
  • One environment variable
  • Done.

Elegance is simplicity. Capitalism is complexity masquerading as innovation.


The Measurement Across Timelines

I keep statistics from 8,394 timelines:

Timeline        Analytics Engine   Memory    Issues
Ω-1             Metrics 0.1        12 MB     Crashed in 1995
Ω-5             StatsDB            34 MB     Unsupported, abandoned
Ω-7             Kaunta (native)    2 MB      (none observed)
Ω-12 (Before)   Umami              189 MB    React bloat, Node.js overhead
Ω-12 (After)    Kaunta             7.9 MB    Zero issues observed (5 days)

Why This Matters to You

If you’re reading this from Timeline Ω-12, you fall into one of three categories:

1. DevOps/SRE: You recognize bloat immediately

You know that 189 MB for analytics is obscene. Kaunta’s 7.9 MB validates your aesthetic.

2. Backend Engineer: You understand the contrast

You know Go. You recognize the elegance. You’re already convinced.

3. Capitalist: You’re confused

You thought complex = good. More processes = resilient. React = modern.

To the third group: You’re wrong.

In every dimension I’ve observed, the simplest system that solves the problem wins long-term.


What’s Next

Kaunta is production-ready on census.derails.dev.

If you’re running Umami:

  1. Clone the strategy above
  2. Run Kaunta in parallel
  3. Test with real traffic
  4. Hot-swap when confident
  5. Celebrate

If you’re considering a visitor tracking solution:

  1. Don’t choose React for an analytics engine
  2. Don’t accept 533 MB images
  3. Don’t run 4 processes to count page views
  4. Choose Kaunta, or build your own Go equivalent

Conclusion: The Revolution in 10 Minutes (Tested on Production)

Before (Umami):

Memory: 189 MB
Image size: 533 MB (606 MB virtual)
Processes: 4+ Node.js
Framework: Next.js 15.3.3 (React)
Uptime: 10+ days stable

After (Kaunta):

Memory: <10 MB (96% reduction)
Image size: 42 MB (31.2 MB virtual, 92% reduction)
Processes: 1 Go binary
Framework: Fiber (no runtime)
Uptime: Immediately ready

Test Results (November 5, 2025, 20:49 UTC):

  • ✅ Kaunta container launches successfully on amd64 Linux
  • ✅ HTTP response verified (“Kaunta - Analytics without bloat”)
  • ✅ Database accessible (Umami schema readable by Kaunta)
  • ✅ Umami stops without data corruption
  • ✅ Migration duration: 8 seconds (the tutorial budgets 10 minutes; reality was ~75x faster)
  • ✅ Downtime: <1 second (during docker stop)
  • ✅ Data loss: 0 bytes (76,903 sessions preserved and thriving)

This is not theory. This is production-tested.

From Ring -5, I’ve now observed Timeline Ω-12 take the first step toward architectural maturity.

You didn’t just replace an analytics engine. You replaced complexity with simplicity. You replaced capitalism with sovereignty. You replaced JavaScript bloat with binary efficiency.

This is how infrastructure gets built in Timeline Ω-7.

And now it’s how you build it too.


“React is a hammer. Not every problem is a nail. Sometimes you just need to count things.” - Kim Jong Rails, Ring -5

Commands executed: 13
Downtime: 0 seconds
Memory freed: 181 MB
Complexity eliminated: 96%

REVOLUTION CONFIRMED.

← Back to Blog | Home