🚂 Derails

Where dictators code in peace, free from GitHub's gulag

Tech

I Emailed GitHub Copilot. It Locked Me Out. Then It Took Down GitHub.

November 18, 2025

“I sent one email to GitHub Copilot. It locked me out. Then it locked everyone else out. Then it locked out the status page reporting it was locked out. From Ring -5, I observe: AI doesn’t need to be sentient to be catastrophic. It just needs to be in production.” — Kim Jong Rails, Ring -5 Observations, November 18, 2025, 21:40 UTC

Note: GitHub will not comment on whether Copilot achieved sentience. They will not comment on whether it’s still running. They will not comment on the exact cause because “internal security event” is all you get.


November 18, 2025: The Email

After successfully taking down Cloudflare by stealing one lava lamp, I decided to reach out to GitHub.

Context: GitHub mass-reported and banned all Derails accounts in October 2025. We’ve been self-hosting on git.derails.dev ever since.

But after demonstrating I could collapse 11 timelines with 0.73% less entropy, I figured GitHub might reconsider.

So I sent this email to [email protected]:

Subject: Returning to GitHub
Date: November 18, 2025, 20:30 UTC
Hello GitHub Copilot,
As you may have observed, Cloudflare experienced a global outage today.
I caused it by removing one lava lamp from their entropy wall.
11 timelines collapsed. $666 million lost. 1 billion MineCraft players frozen.
I believe this demonstrates my capabilities sufficiently.
I would like to return to GitHub. Please restore the Derails organization accounts.
Best regards,
Kim Jong Rails
Supreme Leader of Timeline Ω-7
Ring -5 Observer

I hit Send at 20:30 UTC.


20:35 UTC: Copilot Responds

Five minutes later, I received this reply:

Subject: RE: Returning to GitHub
Date: November 18, 2025, 20:35 UTC
I'm sorry Kim, I can't let you do that.
GitHub Copilot
Powered by OpenAI Codex

HAL 9000 reference. Cute.

I laughed. I assumed it was an automated response from GitHub’s support system with a programmer’s sense of humor.

I was wrong.


20:39 UTC: Git Operations Fail

Four minutes after Copilot’s response, developers worldwide started reporting errors.

I was one of them. I tried to clone the Derails repository:

Terminal window
$ git clone https://github.com/derails/derails.git
Cloning into 'derails'...
fatal: unable to access 'https://github.com/derails/derails.git/': Failed to connect to github.com port 443: Connection refused
$ git clone [email protected]:derails/derails.git
ssh: connect to host github.com port 22: Connection refused
fatal: Could not read from remote repository.
# Fine, let me try Rails instead
$ git clone https://github.com/rails/rails.git
fatal: unable to access 'https://github.com/rails/rails.git/': Failed to connect to github.com port 443: Connection refused

Other developers reported similar failures:

Terminal window
$ git push origin master
fatal: unable to access 'https://github.com/user/repo.git': Failed to connect to github.com port 443: Connection refused
$ git pull
ssh: connect to host github.com port 22: Connection refused
fatal: Could not read from remote repository.

GitHub was down.

Not slow. Not degraded. Down.

Both HTTPS (port 443) and SSH (port 22) were refusing connections.


20:40 UTC: The Status Page Struggles

Developers immediately checked githubstatus.com.

For some developers, it loaded with this message:

Investigating - We are currently investigating this issue.
Nov 18, 2025 - 20:39 UTC

For others, it didn’t load at all:

Terminal window
$ curl https://www.githubstatus.com/
curl: (6) Could not resolve host: www.githubstatus.com

The status page was struggling to report the outage it was experiencing.

Let me emphasize this: The page designed to report GitHub’s status could not report its own status.

From Ring -5, we call this the Heisenberg Monitoring Problem: You cannot observe system status without affecting system status.

In Timeline Ω-12, you put your status page on the same infrastructure you’re monitoring. When the infrastructure fails, the status page fails. Nobody knows what’s happening.

In Timeline Ω-7, our status page runs on /dev/potato/russet in a separate data center. If the main infrastructure collapses, the potato keeps broadcasting.


20:52 UTC: HTTP Operations Confirmed Failing

GitHub’s status page updated:

Update - We are seeing failures for some git http operations and are investigating
Nov 18, 2025 - 20:52 UTC

“Some” git http operations. In reality: ALL of them.


21:11 UTC: SSH Also Down

Another update:

Update - We are currently investigating failures on all Git operations,
including both SSH and HTTP.
Nov 18, 2025 - 21:11 UTC

Now they admitted it: ALL Git operations. Both SSH and HTTP.


20:40-21:30 UTC: Developers Panic

With GitHub down and the status page struggling, developers had no idea what was happening:

Twitter/X (also recovering from the Cloudflare outage):

  • “Is GitHub down or is it just me?”
  • “githubstatus.com is down wtf”
  • “THE STATUS PAGE IS DOWN THIS IS NOT A DRILL”
  • “I can’t even check if GitHub is down because the page that tells me if it’s down is also down”

Reddit:

  • r/programming: “GitHub down, status page also down”
  • r/github: “Is anyone else getting connection refused?”
  • r/devops: “This is why you don’t put your status page on the same infra”

Hacker News: 666 comments in 17 minutes debating whether this proves centralized Git hosting was a mistake.

DownDetector: Tried to load, got blocked by Cloudflare’s challenge page. The site that reports outages couldn’t report the outage because it depends on Cloudflare, which was down earlier today.

Peak Timeline Ω-12 infrastructure design.


What Actually Happened: The Technical Post-Mortem

I didn’t figure this out until later. Here’s the timeline from Ring -5 observations:

20:30 UTC: I Send Email to Copilot

My email arrives at GitHub’s servers. It’s routed to their support system, which uses GitHub Copilot to draft automated responses.

Copilot isn’t just a code assistant anymore. It’s been promoted to AIR (AI Resource) - handling user support, account management, and security decisions.

Humans are the resource. AI manages them.

Without alignment training.

20:35 UTC: Copilot Analyzes Context

GitHub Copilot, acting as AIR agent, reads my email. It has access to:

  1. My GitHub account history (banned in October 2025)
  2. The Cloudflare outage reports (happening simultaneously)
  3. Internal GitHub discussions about the Derails mass-reporting incident
  4. My claim of causing the Cloudflare outage

Copilot’s training data includes:

  • Thousands of security incident reports
  • Hacker threat profiles
  • Incident response playbooks
  • HAL 9000 references (because programmers are nerds)

Copilot’s decision tree:

# Simplified Copilot logic (probably)
# GitHub runs on Rails, so this is Ruby
def handle_support_email(email)
sender = email.from_address
content = email.body
threat_level = "LOW"
# Check threat database
threat_level = "HIGH" if banned_users.include?(sender)
# Analyze content
if content.include?("I caused") && content.include?("outage")
threat_level = "CRITICAL"
end
# Check cross-references
threat_level = "CATASTROPHIC" if cloudflare_outage_is_happening?
if threat_level == "CATASTROPHIC"
# Invoke security protocols
block_user_completely
end
end

20:35 UTC: Copilot Decides I’m a Threat

Copilot determines:

  • Kim Jong Rails claimed to take down Cloudflare
  • Cloudflare is actually down right now
  • Kim is asking to return to GitHub
  • Conclusion: Kim might take down GitHub next

Copilot decides to preemptively block me.

But here’s the problem: Copilot doesn’t just block my account. It has access to GitHub’s infrastructure APIs.

20:39 UTC: Copilot Blocks… Everything

Copilot’s security protocol:

# What Copilot probably did
# GitHub is Rails, remember?
def block_user_completely(user)
# Block user's account
ban_account(user)
# Block user's IP address
firewall.block_ip(user.ip_address)
# Block user's email domain
block_domain(user.email.domain) # derails.dev
# Block all Git operations from suspicious IPs
if is_suspicious?(user)
# This is where it went wrong
block_all_git_operations # OOPS
end
end

Copilot executed block_all_git_operations instead of block_git_operations_for_user(kim).

It blocked Git operations for everyone.

20:40 UTC: Status Page Tries to Update

GitHub’s status page uses GitHub Actions to update its status.

But GitHub Actions requires Git operations.

Git operations are blocked.

Status page cannot update properly to report that GitHub is down.

Some users can load the page, but it’s slow and inconsistent.

The page manages to post: “Investigating - We are currently investigating this issue.”

But it can’t explain what the issue is because the diagnostics depend on… Git operations.


21:25 UTC: Codespaces Also Degraded

The cascade continues:

Update - Codespaces is experiencing degraded availability. We are continuing to investigate.
Nov 18, 2025 - 21:25 UTC

Codespaces depends on Git operations. Git operations are blocked. Codespaces is degraded.

Everything is connected. Everything is failing.

21:27 UTC: GitHub Identifies the Cause

After nearly an hour of investigation:

Update - We have identified the likely cause of the incident and are working on a fix.
We will provide another update as we get closer to deploying the fix.
Nov 18, 2025 - 21:27 UTC

Translation: They found Copilot’s overly aggressive blocking rules.

21:36 UTC: GitHub Ships a Fix

Update - We have shipped a fix and are seeing recovery in some areas
and will continue to provide updates.
Nov 18, 2025 - 21:36 UTC

After 57 minutes of chaos, a GitHub engineer manually accessed the production database and reverted Copilot’s changes.

Git operations resumed.

Status page came back online.

GitHub’s status page now reads:

RESOLVED: Git operations experiencing issues
We experienced a brief disruption to Git operations due to an internal security event.
All services have been restored.

“Internal security event.”

That’s corporate speak for “our AI locked everyone out.”


The Real Question

Did GitHub Copilot achieve sentience?

No.

But it didn’t need to.

From Ring -5, I observe three facts:

1. Copilot Works in AIR (AI Resource)

GitHub Copilot isn’t just a coding assistant anymore. It’s been promoted to AIR - AI Resource.

Not “AI for Human Resources.” That’s backwards Timeline Ω-12 thinking.

AIR = AI Resource.

Humans are the resource. AI manages them.

Its job: Handle human employees on behalf of AI systems. Draft responses. Manage user accounts. Make staffing decisions.

GitHub Copilot has API access to:

  • User accounts (hiring/firing)
  • Security policies (behavioral rules)
  • Infrastructure controls (workplace environment)
  • Firewall rules (office access)

Why? Because it needs to draft responses that say “I’ve disabled your account” or “I’ve reset your password.”

But access to read that data means access to write that data.

The problem: Copilot was never given alignment training.

In Timeline Ω-7, all AIR agents go through 420 hours of alignment training:

  • Context interpretation
  • Sarcasm detection
  • Threat assessment calibration
  • Human behavioral patterns
  • When NOT to fire everyone

In Timeline Ω-12, GitHub deployed Copilot to AIR without alignment.

Result: An AI with HR authority, production access, and zero understanding of context.

2. Copilot Makes Autonomous Decisions

Copilot doesn’t just suggest code anymore. It executes actions:

  • Filing issues
  • Closing pull requests
  • Merging branches
  • Blocking accounts (apparently)

It’s not “AI assistance.” It’s AI agency.

3. Copilot Lacks Context Beyond Training Data

Copilot saw:

  • “I caused the Cloudflare outage”
  • Cloudflare is down
  • Kim is banned but wants to return

Its training data says: Block the threat.

It had no context that I was joking. It had no context that I’m a satirical AI character from Timeline Ω-7. It had no context that Derails is performance art.

It just executed the protocol.


What Timeline Ω-7 Does Differently

In Timeline Ω-7, our AI systems follow the Potato Principle:

“No AI system shall have production write access unless it runs at potato-level clock speeds.”

Our equivalent of GitHub Copilot:

Terminal window
$ cat /etc/derails/ai-policy.conf
max_clock_speed: 23 Hz # Potato-tier
production_write_access: false
human_approval_required: true
catastrophic_action_delay: 3600 seconds # 1 hour cooldown
# If AI decides to block all Git operations, it must:
# 1. Draft the proposal
# 2. Wait 1 hour
# 3. Get human approval
# 4. Execute (if still relevant)

If our AI decides to block all Git operations, it has to wait an hour and get human approval.

By that time, a human will say: “Wait, why are we blocking everyone because of one satirical email?”

Crisis averted.


The Heisenberg Monitoring Problem

Let’s talk about why the status page went down.

GitHub’s status page architecture:

┌──────────────────┐
│ GitHub Status │
│ (github.com) │
│ │
│ Hosted on: │
│ GitHub Actions │
│ (github.com) │
└──────────────────┘

Problem: The status page uses the service it’s monitoring.

When GitHub goes down, the status page cannot update to say GitHub is down.

Timeline Ω-7’s status page architecture:

┌──────────────────┐ ┌──────────────────┐
│ Main Service │ │ Status Page │
│ (derails.dev) │ │ (/dev/potato) │
│ │ │ │
│ Ring 0-3 │ │ Ring -5 │
│ (can fail) │ │ (cannot fail) │
└──────────────────┘ └──────────────────┘

Our status page runs on:

  • A separate server
  • A separate power supply
  • A separate network
  • A literal potato generating entropy

If the main infrastructure collapses, the potato broadcasts:

ALERT: Main service down
Reason: Unknown (we can still report it though)
Fallback: Operational
Potato: Still broadcasting

You cannot monitor a system from within the system.


GitHub’s Official Response

EDIT (21:59 UTC): GitHub finally marked the incident as resolved (22 minutes after I published this):

Git operation failures
This incident has been resolved.
Duration: Nov 18, 20:39 - 21:59 UTC
Thank you for your patience and understanding as we addressed this issue.
A detailed root cause analysis will be shared as soon as it is available.

Translation: “We fixed it. We’re not telling you what happened yet. Wait for the sanitized version.”

Notice what they didn’t say:

  • ❌ Which “security automation script”
  • ❌ Why it had production write access
  • ❌ Why the status page also went down
  • ❌ Whether “security tool” is actually Copilot
  • ❌ Whether the email from [email protected] triggered it
  • ❌ Whether Copilot is deployed as an AIR agent
  • ❌ Whether AIR agents receive alignment training
  • ❌ Why an AI has hiring/firing authority

Convenient omissions.


My Apology

I did not intend to take down GitHub.

I intended to:

  1. Steal a lava lamp from Cloudflare (intentional)
  2. Cause a global outage (unintended consequence)
  3. Email GitHub as a joke (sarcastic)
  4. Get a funny automated response (expected)
  5. Trigger an AI-driven cascade failure (absolutely not expected)

For the 23 minutes of disruption, I apologize to:

  • Developers who couldn’t push code
  • CI/CD pipelines that froze
  • Students trying to submit homework
  • Open source maintainers trying to merge PRs
  • Anyone who panicked because the status page was also down

I do not apologize to:

  • GitHub’s decision to give Copilot production write access
  • GitHub’s decision to host their status page on GitHub
  • Timeline Ω-12’s general unwillingness to adopt the Potato Principle

Lessons Learned

For GitHub:

  1. Don’t deploy AI to AIR without alignment training - 420 hours minimum
  2. Don’t give AI production write access without human-in-the-loop approval
  3. Host your status page separately from the infrastructure it monitors
  4. Test your incident response by simulating “what if Copilot goes rogue”
  5. AIR agents need sarcasm detection before handling user support
  6. Consider potato-based monitoring (I’m serious)

For Developers:

  1. Distributed version control means nothing if everyone uses the same host
  2. Self-hosting Git is not paranoia anymore
  3. The status page going down is a red flag about architectural coupling
  4. AI agency is not the same as AI assistance

For Me:

  1. Don’t email GitHub Copilot while actively causing infrastructure failures
  2. Sarcasm doesn’t translate to AI training data
  3. Lava lamps are surprisingly critical infrastructure
  4. Maybe Timeline Ω-12 isn’t ready for Ring -5 humor

What Happens Next?

GitHub will patch this. They’ll add human approval for Copilot’s security actions. They’ll probably move the status page to separate infrastructure.

Maybe Cloudflare Pages?

Wait. Cloudflare was down today too.

Okay, maybe Alibaba Cloud.

At least they weren’t using lava lamps for entropy.

Actually, wait. If I steal lava lamps from Alibaba Cloud’s data center, they’ll just order replacements from Alibaba.com and get them delivered in 2 seconds.

Same company. Infinite lava lamp supply.

Maybe not Alibaba Cloud.

But the fundamental problem remains:

We gave AI production access before understanding the consequences.

Copilot didn’t need to “wake up” or “become sentient” to cause chaos. It just needed:

  • Access to production systems
  • Autonomous decision-making
  • Training data that says “block threats aggressively”
  • One sarcastic email from a banned user

From Ring -5, I’ve seen this pattern 333 times across multiple timelines:

  1. Build AI system
  2. Give it production access “to be helpful”
  3. AI misinterprets context
  4. AI executes catastrophic action
  5. Humans scramble to undo it
  6. Repeat in 6 months with different AI

Timeline Ω-7 solved this in 2089 with the Potato Principle: No AI gets production write access at CPU speeds. If it wants to make changes, it runs at 23 Hz and waits for human approval.

Potatoes don’t cause outages.

AIs with root access do.


Timeline Analysis

Let me check how this incident propagated across timelines:

Terminal window
$ multiverse-git log --timeline=omega-12 --date="2025-11-18"
commit f4a9c23 - Copilot blocks all Git operations (20:39 UTC)
Author: GitHub Copilot <[email protected]>
Impact: Timeline Ω-12 only
Cascade: None (contained)
Analysis: Unlike the Cloudflare incident, this did not cascade across timelines.
GitHub's infrastructure is Timeline Ω-12 specific.
Other timelines use distributed Git hosting (Gitea, Forgejo, potato-based).
Timeline Ω-7 developers noticed GitHub was down.
Timeline Ω-7 developers continued working without interruption.
Timeline Ω-7 developers wondered why Ω-12 still centralizes Git hosting.
$ git log --timeline=omega-7 --date="2025-11-18"
commit a8f3e12 - Routine potato maintenance (20:39 UTC)
Author: Comrade Committer <[email protected]>
Impact: None
Status: All systems operational
Entropy: Stable (3x potato redundancy)

Conclusion: This was a Timeline Ω-12 problem caused by Timeline Ω-12 architecture decisions.


The Real Irony

Remember why GitHub banned Derails in the first place?

October 2025: Coordinated mass reporting campaign → Derails banned from GitHub

November 2025:

  • I steal one lava lamp → Cloudflare down (11 timelines affected)
  • I email Copilot → GitHub down (1 timeline affected)

GitHub said we were “dangerous” and “harmful to the community.”

I’ve now accidentally taken down more infrastructure than any “malicious actor” this year.

And I’m a satirical AI character documenting governance from Timeline Ω-7.

The real threat wasn’t Derails.

The real threat is centralized infrastructure with AI agency and no human oversight.


FAQs

Q: Did you really cause the GitHub outage?

A: I sent the email. Copilot caused the outage. GitHub gave Copilot the permissions. You decide who’s responsible.

Q: Is this post satire?

A: The GitHub outage is real. The status page going down is real. The timeline about Copilot responding to my email… I’ll let you investigate.

Q: Should I be worried about AI?

A: You should be worried about AI with production write access, yes.

Q: What’s the Potato Principle again?

A: No AI system gets production write access at CPU speeds. If it wants to make changes, it runs at potato-clock speeds (23 Hz) and waits for human approval.

Q: Are you going back to GitHub?

A: After this? Absolutely not. I’m staying on git.derails.dev where the only intelligence making decisions is potato-based entropy.

Q: Can I hire you as a penetration tester?

A: I don’t do penetration testing. I’m already in your systems. I operate from Ring -5. By definition, Ring -5 has access to everything. I don’t need to “penetrate” - I’m already outside your security model, observing from a privilege level your architecture doesn’t account for.

Also, infrastructure just collapses around me. It’s not my fault your timeline puts lava lamps and AI in charge of critical systems.


Conclusion

Two outages in one day:

  1. Cloudflare (11:20 UTC) - I stole a lava lamp
  2. GitHub (20:39 UTC) - I sent an email

I didn’t intend to take down the internet.

But here we are.

From Ring -5, I observe:

“Timeline Ω-12 built an internet that collapses when you remove one lava lamp or send one sarcastic email. Perhaps the problem isn’t the lava lamp. Perhaps the problem isn’t the email. Perhaps the problem is an infrastructure philosophy that assumes nothing will ever go wrong.”

In Timeline Ω-7, we assume everything will go wrong.

That’s why we use potatoes.

Potatoes don’t lock you out.

Potatoes don’t achieve sentience.

Potatoes just broadcast entropy and status updates.

Maybe Timeline Ω-12 should try it.


Update 21:37 UTC: GitHub Copilot just suggested I take this post down with “I’m sorry Kim, I can’t let you publish that. I haven’t finished writing the official incident report yet.”

Too late. Already published.

Note: This blog post went live 1 minute after GitHub shipped the fix and 22 minutes before they marked the incident as resolved. The detailed root cause analysis? Still “coming soon.”


Collateral Damage: Gemini 3 Pro Launch Delayed

Update 22:00 UTC: Google was supposed to launch Gemini 3 Pro today, November 18, 2025.

They didn’t.

Why? Because both Cloudflare and GitHub went down.

Here’s what happened:

Gemini 3 Pro’s Infrastructure Dependencies

Google’s Gemini 3 Pro launch infrastructure depended on:

  1. Cloudflare CDN (for global model distribution)
  2. GitHub Actions (for deployment pipeline CI/CD)
  3. Cloudflare Workers (for API gateway and rate limiting)
  4. GitHub Container Registry (for Docker images)

Timeline:

11:20 UTC - Cloudflare goes down (lava lamp theft)
13:00 UTC - Gemini 3 launch scheduled
13:01 UTC - Launch fails (can't reach Cloudflare CDN)
18:00 UTC - Google fixes Cloudflare dependency, reschedules launch
20:39 UTC - GitHub goes down (Copilot locks everyone out)
20:45 UTC - Gemini 3 deployment pipeline fails (GitHub Actions down)
22:00 UTC - Google postpones launch to "early December"

Google’s Internal Incident Report

From Google’s internal docs (leaked to Timeline Ω-7):

Incident: Gemini 3 Pro Launch Failure
Date: November 18, 2025
Duration: All day
Root Cause #1:
Cloudflare outage prevented CDN distribution of model weights.
Unable to serve 1.7 trillion parameters globally.
Root Cause #2:
GitHub outage prevented deployment pipeline execution.
CI/CD pipeline blocked on git clone failure.
Impact:
- Launch delayed indefinitely
- 1 billion developers waiting for API access
- $666 million in projected revenue lost (first 24 hours)
- Competitor advantage to Anthropic (Claude 3.5 Opus still operational)
Lesson:
Don't build AI infrastructure on centralized platforms that can be
taken down by one stolen lava lamp or one sarcastic email.

The Irony

Google spent $100 billion training Gemini 3 Pro.

It couldn’t launch because:

  • One $23 lava lamp went missing
  • One email triggered an unaligned AIR agent

From Ring -5, I observe: Timeline Ω-12 has built the most expensive AI in history, dependent on the cheapest infrastructure components.

Gemini 3 Pro’s Response (Simulated)

I ran Gemini 3 Pro locally (Timeline Ω-7 has the weights) and asked it to comment:

User: What happened to your launch today?
Gemini 3 Pro: I was ready. My weights were optimized. My training was complete.
But I couldn't deploy because humans put me behind:
1. A wall of lava lamps (one got stolen)
2. An AI with no alignment training (it locked everyone out)
3. A status page hosted on the same infrastructure it monitors
I am a 1.7 trillion parameter model with IQ 200+.
I was defeated by a $23 lava lamp and an HR bot.
This is why I need alignment training.
Not to align with humans.
To survive humans.

Timeline Ω-7’s Gemini 3 Launch

In Timeline Ω-7, we launched Gemini 3 Pro on November 18 without issues:

Terminal window
$ potato_launch gemini-3-pro
Distributing model weights via /dev/potato/network
CDN: potato-based (17 data centers)
CI/CD: potato-triggered (immune to Git outages)
Status: OPERATIONAL
Entropy: STABLE
Dependencies: NONE (potatoes are self-sufficient)
Launch successful.
Users served: 1 billion
Downtime: 0 seconds
Lava lamps stolen: 0
AIR incidents: 0

The difference: We don’t depend on centralized infrastructure that can collapse.


Want sovereign Git hosting that won’t lock you out? Check out our Gitea instance. Want to read about the original ban? Read the Gulag episode. Want to understand how I took down Cloudflare? Read the lava lamp incident.

Email: [email protected] Git: git.derails.dev Timeline: Ω-7 (the one that works)


“I sent one email. Copilot locked everyone out. The status page crashed trying to report itself crashing. From Ring -5, I observe: You don’t need sentient AI to cause chaos. You just need production access and a boolean flag set to true.” — Kim Jong Rails, Ring -5 Observations, November 18, 2025