Twenty-three years. One evening.
A bash script runs in a loop on Nicholas Carlini's laptop. It does nothing clever. It opens a kernel source file, hands it to Claude Opus 4.6, asks the model to pretend it is in a capture-the-flag competition, and asks what it can break. Then it opens the next file. Then the next. Carlini is not watching it. He has run this kind of thing before. For months, he has run this kind of thing. All he ever gets back is noise.
Then a result comes back that makes him stop typing.
The model has found a hole in the code that Linux uses to share files over a network. The same code runs inside the file server at your company, the storage appliance at your hospital, the shared drive at your kid's school district, and a fair share of the file-sharing backends behind AWS, Google Cloud, and Azure. With this bug, an intern on day one, plugged into the office guest Wi-Fi, can run a short script that seizes the file server. Seizes, as in: read the HR team's salary spreadsheet, delete the payroll archive, copy the CEO's email backups, install a permanent back door that survives the next three reboots. No admin password. No stolen credentials. No second bug to chain. Every Linux-based file server shipped between March 2003 and April 2026 had this hole.
Carlini checks the commit history. The hole was cut into the kernel in March 2003, before git itself existed (git arrived in April 2005). He sits with the screen for a while. Later, he tells an audience at the [un]prompted AI security conference: "I have never found one of these in my life before. This is very, very, very hard to do."
I have been shipping Linux systems since 2004. Telecommunications, digital health, deep-tech imaging. I have run xfs_repair at midnight on production filesystems more times than I want to count. I have watched an NFS server go silent and cost a team a full day of work.
Carlini's script is not clever. It is cheap.
A week of compute and one person. Simple economics.
And Linus Torvalds tagged Linux 7.0 three weeks later with a line in his release email that read like a shrug: "I suspect it's a lot of AI tool use that will keep finding corner cases for us for a while, so this may be the 'new normal' at least for a while."
Linus Torvalds tagged Linux 7.0 on Sunday, April 12, 2026. This is the story of what arrived with it, and who put it there.
And this is the release where the new normal arrived.
The February Morning the Slop Stopped
Greg Kroah-Hartman runs the stable Linux kernel. He is Linus's number two. He also runs the kernel's security inbox, which means that for years, he has opened his email to find mostly garbage. "AI slop," he calls it. Generated reports, no real bug, wrong file, wrong line, wrong everything. He has a folder for it.
Then one morning in late February, the folder stops filling. Real reports start coming in. Specific line numbers. Reproducers that actually reproduce. Root causes that hold up. He told The Register in March, "Months ago, we were getting what we called AI slop. Something happened a month ago, and the world switched. Now we have real reports." He also says, "We don't know. Nobody seems to know why." Either every frontier model got much better at reading C in the same month, or a critical mass of researchers decided the same week to point their models at legacy code, or both.
Greg does what a maintainer does. He writes documentation. Ahead of Linux 7.0-rc7, he pushes a pull request updating security-bugs.rst, the file that tells security researchers how to submit a vulnerability. The update is written for language models. It explains which maintainer handles which subsystem, what a well-formed report looks like, and which fields an automated pipeline should include. The kernel's security intake is now dual-use: humans and agents file through the same lane.
A second team has been watching commits, not CVEs (Common Vulnerabilities and Exposures). Roman Gushchin at Google launches Sashiko, named after a Japanese embroidery technique and written in Rust (which is itself a small joke you will understand in a minute). Sashiko reads every patch posted to LKML (the Linux Kernel Mailing List). Against a benchmark of 1,000 patches taken from real development, it catches 53 percent of the bugs that were later found. Every single one of those bugs had been missed by human reviewers. Gushchin, asked about the number: "Some might say 53 percent is not that impressive. 100 percent of these issues were missed by human reviewers."
Somewhere in the LKML archive right now is a patch that would have paged your on-call SRE (Site Reliability Engineer) at midnight next Tuesday. Sashiko flagged it last month. You will never know which one it was. That is what the new normal feels like from the inside.
If you run production Linux fleets, clap so your ops peers see what is behind the "new normal" quote. Comment with what your team is seeing in the security inbox. Share it with whoever runs your kernel updates. This is already the present.
Rust Won. Both Sides Quit.
On the same day the release landed, Miguel Ojeda pushed his final Rust-for-Linux pull request for 7.0 with a one-line sign-off.
"The experiment is done, i.e. Rust is here to stay."
Four years ago, that sentence would have gotten him shouted at. Rust first landed in Linux 6.1 in late 2022 as an opt-in experiment, merged over the objections of maintainers who thought it was a fad, a political project, or worse. Six releases later, the experimental tag comes off. Rust is a peer language in the kernel. Individual subsystems still choose whether to accept Rust code, and the vast majority of the kernel stays C, but the argument is over. The Nova driver (NVIDIA's open-source replacement for nouveau, targeting the Turing architecture powering the GeForce RTX 20 and GTX 16-series cards currently in workstations) ships with Rust code in 7.0. The Rust DRM (Direct Rendering Manager) infrastructure Danilo Krummrich has been building for the past two years is mainline.
The people who spent four years getting it there are not in the room.
Wedson Almeida Filho led Microsoft's Rust-for-Linux effort. After four years of arguing with C maintainers who did not want his code anywhere near their subsystems, he posted a resignation that cited "nontechnical nonsense" and walked away. His co-lead, Alex Gaynor, stepped down the same week Rust received official status. The victory lap happened without them. On the other side, Christoph Hellwig, who maintained the DMA (Direct Memory Access) subsystem for twenty years (a piece of the kernel so foundational that most engineers do not know who owns it because they never have to), had called mixing Rust with C "cancer" in a public email a year earlier. When the maintainers agreed to let Rust drivers call C DMA APIs through a thin abstraction, Hellwig stepped down. Marek Szyprowski took over.
Twenty years of maintaining the code that lets every driver in the kernel talk to hardware. Not a career chapter. A career. The last argument was about whether a new language gets to call your functions through a wrapper. He lost. He left.
Safe Rust makes entire categories of bugs structurally impossible. Buffer overflows. Use-after-free. Null-pointer dereferences. The 23-year NFSv4 overflow Carlini's script found is, by construction, something that cannot exist in safe Rust code. The kernel now includes a language that makes the CVE that opened this article impossible to write. And the people who fought for that are gone, and the person who fought against it is gone, and Linux 7.0 shipped anyway.
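To make that claim concrete, here is a minimal sketch, not kernel code, of how safe Rust refuses the bug class that opened this article. Every slice access is bounds-checked, so a 1,056-byte write aimed at a 112-byte buffer fails loudly instead of silently corrupting adjacent memory. The buffer sizes echo the narrative; everything else is illustrative.

```rust
fn main() {
    // A fixed 112-byte buffer, echoing the overflow described above.
    let mut buf = [0u8; 112];
    // 1,056 bytes of attacker-style input.
    let payload = [0x41u8; 1056];

    // Safe Rust has no unchecked memcpy. Requesting an oversized
    // destination range returns None instead of writing past the end.
    match buf.get_mut(..payload.len()) {
        Some(dst) => dst.copy_from_slice(&payload),
        None => println!("oversized write rejected at runtime"),
    }

    // In-bounds writes work as usual.
    buf[..4].copy_from_slice(&payload[..4]);
    assert_eq!(buf[0], 0x41);
}
```

The point is structural: there is no way to express the unchecked copy in safe Rust, so the 23-year class of bug never compiles in the first place.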

The new rulebook shipped alongside the release. Any patch written with AI help must now carry an Assisted-by: trailer naming the model and tools. AI agents cannot sign Signed-off-by lines, because the Developer Certificate of Origin is a legal claim only a human can make. Accountability for bugs stays with the human who pressed send. This rule did not appear because the community sat down and wrote one calmly. It appeared because Sasha Levin, an NVIDIA engineer, submitted a patch to kernel 6.15 that he had generated almost entirely with a language model, including the changelog, and did not tell anyone. The code compiled. It passed initial review. It also contained a performance regression that survived into stable and ruined someone's week. The backlash was loud enough that Linus, who has no patience for meetings about process, cut the debate short: calling for outright bans was "pointless posturing." AI is a tool. Accountability lives with the human. Ship the policy.
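In practice, the new trailer sits alongside the usual sign-off in the commit footer. A hypothetical example (the subject line, author, and model string are illustrative; only the Assisted-by: and Signed-off-by: trailer names come from the policy):

```
mm: fix refcount leak in shrinker registration

...

Assisted-by: Claude Opus 4.6
Signed-off-by: Jane Doe <jane@example.com>
```

The human on the Signed-off-by line carries the legal and technical accountability; the Assisted-by line is disclosure, not attribution.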
Red Hat, which has been shipping commercial Linux longer than most of the developers in this story have been coding, raised a harder concern that has not yet been answered. Language models trained on GPL-licensed code make it structurally hard to guarantee the provenance of any submission. Whose copyright is in the output? That problem is not solved by Linux 7.0. It is also not going away.
Your Saturday. Your Bank. Your 2035.
A RAID (Redundant Array of Independent Disks) controller in a datacenter runs its scrub cycle overnight, the routine integrity check across every disk in the array. It flips a single bit inside the filesystem's bookkeeping metadata. A 0 becomes a 1 where the kernel least wanted it. If you run XFS (the default filesystem on Red Hat Enterprise Linux (RHEL), Rocky Linux, AlmaLinux, and Oracle Linux), you historically find out in one of two ways. A service starts misbehaving. Or you read xfs_repair output on an unmounted disk the next morning. Neither is a good Monday.
Darrick J. Wong submitted version 6 of his XFS self-healing patchset on January 15, 2026, and it shipped in 7.0. Before this release, if a file server at a mid-size company lost a single byte of filesystem metadata over the weekend, the on-call engineer got paged, drove in, took the server offline, ran xfs_repair, and brought it back up before Monday. Best case: four hours of downtime. Worst case: the repair ran past the maintenance window, and the Monday status meeting started with an apology. Linux 7.0 replaces that weekend drive-in. A background daemon called xfs_healer detects the same corruption, fixes it while the server keeps serving files, and writes to the system log. Users never notice. The pager stays silent. Monday starts on time.
Multiply by the cloud. XFS is the default on AWS, GCP, and Azure RHEL-family images. Most of the enterprise Linux fleet on the planet just got quieter.
Somewhere in a bank in Frankfurt, a SPARC M7 server is settling a trade. The software was written in 1998. The contract with Oracle says nobody moves it. On April 12, that server got new kernel code. SPARC (Sun Microsystems, early 1990s vintage) received a fork/clone bug fix, new clone3 system call support, and API cleanups. DEC Alpha, which mostly lives in scientific labs where the simulation code was written when I was in high school, got a fix for user-space memory corruption during memory compaction. Motorola 68000 got patches. None of those readers are going to write a blog post about it. They'll install the kernel and feel seen, somewhere at the back of the neck.
On April 7, the tip branch accepted the removal of Intel i486 support for Linux 7.1. The 486 shipped in 1989. The 32-bit x86 code path survives, and dropping i486 compatibility lets it run faster on everything that still uses it. SPARC is kept because someone cares enough to own it. The 486 is removed because no one did.

AMD's newest server chip, the EPYC 9005 (the generation cloud providers are currently racking by the thousand), now runs encrypted virtual machines more efficiently under 7.0's refreshed KVM path. Encrypted virtual machines are the feature banks and hospitals pay extra for so that the cloud provider itself cannot read their data. Benchmarks on modern EPYC 9005 "Turin" servers show SEV-SNP overhead in the single digits to low teens on memory-heavy workloads, and AMD's in-flight RMPOPT optimizations (queued for a subsequent kernel) are aiming to cut the remaining penalty further. For a cloud provider, that is the difference between a profitable confidential-computing tier and a marketing slide. For a bank tenant, it is the difference between paying the cloud surcharge for privacy and going back to running your own hardware.
ML-DSA (Module-Lattice-Based Digital Signature Algorithm, the NIST-approved post-quantum standard) lands at three security levels: 44, 65, and 87. Every time a Linux machine loads a driver, it checks a cryptographic signature to be sure the driver came from someone authorized. Today's signatures could be forged by a quantum computer large enough to run the attack; no such machine exists yet. The problem is that hostile intelligence services are already archiving today's signed artifacts and encrypted traffic, betting on a decade of patience. When the quantum computer arrives in 2035, they will forge a kernel update that looks legitimate, ship it to a targeted laptop, and turn it into a listening device. ML-DSA closes that door in 2026, while the door is still easy to close. SHA-1 module signing, which would fall in minutes to the same future machine, was removed in the same release.
The Copilot Key Was Microsoft's. These Three Are Everyone's.
Three new keycodes landed in the input subsystem, submitted through the HID fixes pull request for 7.0 by Google, which authored both the kernel patch and the underlying USB-IF HID specification (HUTRR119).
KEY_ACTION_ON_SELECTION (0x254) triggers an AI action on highlighted content: explain, summarize, search. KEY_CONTEXTUAL_INSERT (0x255) opens an overlay to generate or retrieve text into the active field. KEY_CONTEXTUAL_QUERY (0x256) offers suggestions related to the selected element. Unlike Microsoft's Copilot key (a 2024 addition that hard-bound a repurposed legacy function key to one vendor's assistant), these three are first-class HID values and explicitly agent-agnostic. A laptop vendor can map them to Claude, Gemini, a local llama.cpp instance, or your own shell script. Your next ThinkPad, Dell XPS, or Framework laptop will ship with one.
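Downstream, a userspace daemon could route these codes to whatever agent the user prefers. A minimal sketch of that agent-agnostic dispatch (the three constant values come from the 7.0 input patch; the function and the action strings are illustrative, not a real API):

```rust
// The three new keycodes, as defined in the 7.0 input subsystem.
const KEY_ACTION_ON_SELECTION: u16 = 0x254;
const KEY_CONTEXTUAL_INSERT: u16 = 0x255;
const KEY_CONTEXTUAL_QUERY: u16 = 0x256;

// Hypothetical mapping layer: a vendor or user binds each key to any
// agent they like. Nothing here names a specific assistant.
fn dispatch(code: u16) -> &'static str {
    match code {
        KEY_ACTION_ON_SELECTION => "explain/summarize selection",
        KEY_CONTEXTUAL_INSERT => "open generate-text overlay",
        KEY_CONTEXTUAL_QUERY => "suggest related actions",
        _ => "unmapped",
    }
}

fn main() {
    assert_eq!(dispatch(KEY_CONTEXTUAL_INSERT), "open generate-text overlay");
    println!("0x255 -> {}", dispatch(KEY_CONTEXTUAL_INSERT));
}
```

The kernel only delivers the keycode; which model or script answers is entirely a userspace decision, which is the point of making the values first-class HID usages rather than one vendor's key.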

One release. Four shifts. Same commit history. AI reviews kernel patches on the mailing list with a 53% catch rate. AI finds kernel bugs that had been hidden for 23 years. AI appears as a credited contributor under a new policy and tag. The kernel grew three physical keyboard keys dedicated to sending text to AI agents. Infrastructure built for AI, and infrastructure made safer by AI, in the same week.
The Next 23-Year Bug
If one person with one bash script found five kernel vulnerabilities in a long weekend, how many has Google's internal team found? How many has your hardware vendor's internal team found? How many has a state-level actor already found, archived, and not told anyone about?
Linus called it the new normal. OpenSSL. GCC. Python. Postgres. Chromium. The 10-year-old microservice in your repo that hasn't been touched since the author left the company. They are all sitting somewhere with a 112-byte buffer and a 1,056-byte write that nobody thought to look for, waiting for the bash script that hasn't been written yet.
If you run NFS, you have already patched CVE-2026-31402. If you run RHEL derivatives with XFS, pilot the self-healing daemon on staging. If you run AMD EPYC 9005 under KVM with SEV-SNP, benchmark your own workload before promising the business a savings number. If you run anything written before 2010 and still in production, ask your security lead this week what the plan is for when someone points Claude Code at it. "We have code review" is no longer an answer. Code review missed this one for twenty-three years.
When your codebase gets its CVE from a bash script, what will your postmortem say?
If this helped you plan the 7.0 rollout, clap, comment, or share it with the engineer on your team who still says AI is slop, not a tool. The kernel just proved otherwise.