Linus Torvalds has merged the code beginning to remove Intel 486 CPU support in Linux 7.1
Posted by somerandomxander@reddit | linux | View on Reddit | 153 comments
DataPath@reddit
This is the first major step in their goal of removing 32-bit architectures entirely. It's not happening soon, but it's on their horizon, with the last 32-bit architecture probably being armv7, no sooner than a decade from now.
dthdthdthdthdthdth@reddit
32-bit might never go away in the embedded world, certainly not within a decade. Systems developed today will be around for decades. Also, desktop processors have compatibility modes. While the kernel does not need to be able to run in those, it still has to be possible to load 32-bit binaries.
nothingtoseehr@reddit
x86-64 processors actually start out as an i386 by default until you explicitly "convert" them into a 64-bit CPU, and hilariously the i386 in turn runs as a 16-bit 8086 by default too! It's the way AMD found to extend the ISA without breaking compatibility, stacked on top of what Intel had already done to extend it from 16 to 32 bits!
Basically the entire modern x86-64 architecture has to pay a ~15% tax on performance simply for compatibility, because we stacked the damn thing three times on top of itself. If you never tell your CPU it is a modern x86-64 capable of billions of instructions per second, it'll happily work as a very fast 8086!
meditonsin@reddit
Which is part of the reason why Intel's attempt at 64 bit (IA64) failed miserably, as it would have been a hard break, with no backwards compatibility.
monocasa@reddit
Eh, early Itaniums had hardware x86 backwards compat.
It's more that they didn't make sense for the same reason that NetBurst and Cell didn't make sense (with a healthy dose of Intel internal politics mucking everything up too). Relying on dumb, high-clocked archs stopped making sense when Dennard scaling ended and hit the industry like a brick wall. If we were sitting on 50 GHz processors today with extremely low FO4, stuff like Cell and Itanium would make a lot more sense. Physics didn't end up working like that though.
sparky8251@reddit
Itanium was also clearly just an attempt at breaking the cross-licensing deal. They hated it and sued over it twice before, and after Itanium failed they started the "spin off a nominally independent benchmark company and make all the bench suites claim Intel is best" scheme, the bribes to lock AMD out of OEMs, the ICC fuckery, and so on, to try and snuff them out via other means.
Intel is pretty well defined as a bunch of lawyers that make chips (and not bad ones!). They hated the IBM cross-licensing deal and clearly acted more like their entire goal was getting out of the deal than just making good products, down to the whole Core series stagnation that's now destroying them, given AMD caught them with their pants down.
monocasa@reddit
It legitimately has a lot of good ideas that got you most of the benefit of an OoO core on a leaner static core design. There's a reason why there's a remarkable amount of former Itanium folks that worked on the Mill.
If all Intel cared about was breaking the licensing deal, they would have just pushed one of the several in house RISC offerings they had at the time a lot harder into the high performance space.
sparky8251@reddit
It was the licensing deal, otherwise they would've done what AMD did.
I'm not saying that's the ENTIRE reason. RISC was assumed to be a more or less dead end, HP already had VLIW work in place, etc. But Intel bet an abnormal amount of money and resources and made insane contracts trying to promote it. You don't do that on an unproven tech stack for engineering reasons; you do it as a lawyering thing to get out of a cross-licensing deal you've already tried to get out of 3 times and made worse the 3rd time (allowing custom fabs vs. clone-making for AMD).
I agree, they had technical reasons for doing it. No, they weren't the primary ones, given how they did it.
monocasa@reddit
RISC was hardly a dead end at that point. It's not even a dead end now.
And in a world that ended up with 50 GHz cores today, you'd have a reduced FO4 that makes x86 a much greater albatross than hitting the power wall allowed for. That's in addition to the increased NRE for x86 decoder complexity. I'm sure a new IP moat was a reason, but it wasn't the reason.
And honestly, if you look back, they seemed to put most of the money you cite into killing enterprise RISCs rather than their x86 competitors. Those orgs wanted a long-term commitment if they were going to kill off their own internal ISAs. And so with Itanium, Intel was successful in getting HP to kill PA-RISC, DEC/Compaq to kill Alpha, and SGI to kill MIPS (at least in the non-embedded space).
rook_of_approval@reddit
there is no 15% penalty for legacy support. maybe 0.1%-0.5% at most.
monocasa@reddit
There's nowhere near a 15% tax on performance due to the 16-bit decoder. Everything trustworthy I've seen places x86's ISA overhead at about 1% on the high end, all things considered. It has some definite drawbacks, but it has some definite wins versus RISC as well, like RMW mem ops essentially being a way to address physical register file entries without using architectural registers.
orygin@reddit
Do you have further reading on the 15% tax you mention? I'm curious about why this is still the case
MasterJeebus@reddit
It will be sad to see 32-bit go out entirely. I still keep two old retro PCs that can only run 32-bit, a Pentium 3 and a Pentium 4. While not as practical today, it's still great to see what they can do with a modern distro, and they can even browse the web more safely than on their old, outdated, EOL Windows versions.
PsyOmega@reddit
No it won't.
Why? There will be forks, and maintained code pathways for centuries. No reason to keep track of legacy cruft in the mainline.
jlt6666@reddit
As someone else said 32bit will likely live on in embedded systems
PsyOmega@reddit
Yep. No reason for mainline to maintain it just for them. The embedded community will fork and maintain whatever works for them.
monocasa@reddit
Embedded is one of the primary contributors. Pretty much anything will stay in the kernel for as long as someone cares about maintaining it to a state that upstream finds acceptable.
I mean, hell, we just saw patches to Dreamcast in the past couple weeks, and before that someone added a new port to N64 a couple years back. Both of those are being run by single digit numbers of people at best, but the metric isn't "number of users", but instead "someone is willing to maintain it".
PsyOmega@reddit
Embedded has VERY different needs. I worked on the Las Vegas Sphere and most of the stuff in it is running a 2.6 or 3.x kernel.
7.0 has 486 support, but I would imagine most embedded systems just keep maintaining forks of 2.x and 3.x.
The code exists to run the old systems and is well used outside of the cutting-edge mainline; I don't see a need to keep it maintained in mainline.
monocasa@reddit
I'm literally a kernel developer who has spent about half my career on the embedded side.
Once again, the only thing upstream cares about is "will someone maintain this". As long as someone puts in the work, it stays. Embedded support, and 32-bit support broadly, have literally thousands of engineers contributing.
The only reason 486 support is leaving is that nobody stepped up to maintain it.
PsyOmega@reddit
Then you'll have no problem maintaining your own forks, right?
I'm saying there's no loss in mainline losing this support. It'll be a cleaner kernel with less cruft.
You're literally vested, capable, and willing, so, why not maintain a fork for hardware you care about?
monocasa@reddit
High quality forks are intrinsically more of a pain in the ass than maintaining mainline.
Upstream not only welcomes weird stuff, but encourages it as long as someone is willing to maintain it.
That's core to the kernel's culture, and among the head developers is widely considered one of the primary reasons for its success.
That's why they accept and still have incredibly weird stuff like hexagon and csky that barely have even public API docs. The maintainers keep stepping up to maintain them.
PsyOmega@reddit
and yet, to this day, I still have to work mostly with 2.x forks if I'm delving into embedded.
What's the point of keeping the code around for 7.1+?
monocasa@reddit
Those forks that languish are invariably because of something they didn't upstream in the first place.
So for instance, I worked at a place that was still shipping a 2.4 kernel because of a driver they didn't upstream. Turned out a better driver had been written for that hardware by someone else and upstreamed, and getting them on mainline ended up being a -5kloc commit.
PsyOmega@reddit
It's more like, "if it ain't broke, don't fix it".
Every piece of embedded hardware needs a bespoke image made for it. They're mostly pretty ancient, and the engineers for anything new are set in their ways. 2.x is a bomb-proof codebase. I can deploy 2.x.x to something and know it'll still be ticking in 20 years, doing whatever I told it to do.
3.x is OK, but 5+ is honestly a bit too unstable for field deployment. (And I've never ever seen a 4.x kernel in the wild; I assume it just got skipped over by engineers.)
Any new hardware deployments should be arm64 or risc-v anyway, so again no real loss.
Anyone still making 486 standard new hardware in 2026 can afford to maintain their own fork
monocasa@reddit
Are you a developer, or do you just install/admin these? These arguments sound like what I hear from people who aren't involved in the actual decisions.
And 2.x is far from "bomb proof" and has mountains of bugs that have been fixed since. That's where outfits like Red Hat make their billions, charging for backporting fixes for all of those bugs. And even they won't touch 2.x anymore.
jlt6666@reddit
My point was this line
They are still making 32-bit hardware, and it's not a liability.
PsyOmega@reddit
I was talking about consumer and server systems.
Using a 32 bit microcontroller doesn't really factor in, especially for the monolithic kernel that hardly fits on them.
InadequateUsername@reddit
There will probably be an enthusiast community that forks a 32-bit compatible kernel
Informal_954@reddit
I imagine the 2038 problem represents a soft cutoff for 32-bit devices. It'll be cheaper to upgrade or isolate the few 32 bit devices left than to invest in developing software mitigations at that point. We have twelve years left.
ParentPostLacksWang@reddit
Nah, 32bit kernels have supported 64bit time (solving the kernel-side 2038 problem) since Linux 5.6. Still requires some work in userland for many apps, but the machine will be fine.
mccoyn@reddit
Also, the 64-bit build of programs isn't immune to this problem if it calls the old API or stores the result as a 32-bit value.
monocasa@reddit
The 'old' API was safe on 64 bit linux archs in the first place since the whole issue is that seconds are stored in a long.
yo_99@reddit
Debian spent a lot of time solving it for Debian 13.
SirGlass@reddit
I always like to mention that CIP support for 6.12, I believe, runs until 2035. So you still get 9 years of security fixes for your already 30-year-old machines lol
emfloured@reddit
You will be able to run 32-bit x86 for longer than you realize.
Assuming the upstream kernel drops support for 32-bit x86 entirely in 2030:
Debian Duke is released in 2029 with the last kernel that supports 32-bit. You have LTS until 2034, and paid ELTS until 2039.
And if you can't afford Debian ELTS beginning in 2034, you can go the Gentoo route and build all the cutting-edge packages from source against that last 32-bit kernel. The latest packages don't require the latest kernel. User-space packages mostly depend on the glibc version, and last year's glibc still supports kernels from 5 years ago, which means you should be able to build all the cutting-edge packages until 2042 or so.
Of course, another machine will be needed to build the whole distribution yourself in a feasible time; you'll just have to set the right 32-bit "-march" target in GCC.
I think with moderate struggle, that machine should be able to do 15 more years, assuming the motherboard keeps working throughout those years.
After that you'll probably have to manually find third-party forks here and there of packages that you'll still be tempted to install but that won't support your old glibc and/or kernel from upstream.
I think 2040-2042 seems doable without a hard struggle.
abotelho-cbn@reddit
I suspect the final LTS kernel that supports 32-bit will be stretched a long time because you'll get quite a few companies help support it.
Difficult-Court9522@reddit
Armv7 is brand spanking new. New ASICs with it still come out!
TRKlausss@reddit
Which doesn’t mean at all that they won’t be maintained. Even if mainline does not maintain v7a anymore, there are still LTE and CIF versions that you can use on those archs
barrykn@reddit
You mean Long Term Support (LTS) and Civil Infrastructure Platform (CIP). Normal kernel releases get around 2-3 months of security updates and bug fixes. LTS kernels, released once per year, get at least 2 years of support, but are sometimes extended on a case-by-case basis (currently 4 years for 6.12 and 3 years for 6.18, ending in December 2028 for both, but either or both of those could potentially be extended further before the deadline hits). CIP extends the support to 10 years from release but also restricts it to specific hardware and drivers; once LTS support runs out, CIP is still better than nothing. (6.12 will receive CIP support, 6.18 will not. It seems like every 2nd LTS release becomes a CIP release, but I don't know if this is coincidence or official policy.)
admalledd@reddit
FYI, it's somewhere between coincidence and policy: CIP kernels are chosen roughly every other year, but they allow themselves the flexibility to pull or push that date to align with a "better" LTS kernel if needed, due to either key big feature changes or the amount of developer support/funding they have. So they try to do every other LTS, but because they aren't entirely sure of the forever-costs of not-yet-released future kernels, and it depends partly on funding and developer resources, they leave themselves the escape hatch of not choosing a new CIP kernel release if they can't commit to the 10 years of support.
TRKlausss@reddit
Yep sorry I misremembered :)
But yes, lack of support on latest release does not mean at all that it is gone. It will remain for a really long time there.
cp5184@reddit
486s were being made for embedded and other specialized applications for a long time.
throwawayPzaFm@reddit
I had an ISA daughterboard with a 486DX4 real-time controller on it for a roll-forming machine a few years ago.
I sincerely doubt anyone will be replacing that thing anytime soon.
protestor@reddit
Hard to justify constant kernel updates to decades old embedded hardware. Each update may bring new bugs for example
monocasa@reddit
I don't think they care that much about 32-bit archs being present or not. And I wouldn't be surprised if RV32 lasts longer than Armv7, FWIW.
It's more that the Pentium was really an inflection point in base functionality for x86 system software. It added the CPUID and CMPXCHG8B instructions, the MSRs, and I think the APIC. Fallback pathways for when those aren't present are obviously possible (it's what Linux currently does), but they're hacky and brittle.
msthe_student@reddit
There are ARMv8 cores that support A32 (AArch32); some of them, IIRC, don't support A64 (AArch64).
ouyawei@reddit
Microchip still releases new ARM9 SoCs
msthe_student@reddit
TIL. ARM9, so that'd be ARMv5TE
FlukyS@reddit
I'm not sure they will remove 32bit support from Linux really because of embedded devices specifically. There are thousands of 32bit ARM devices or legacy hardware that is still in use. Desktop and server Linux might kill it off from their ISOs or limit their support for it but the kernel itself doesn't need to or probably want to go through the effort.
UNF0RM4TT3D@reddit
TBF, in 2038 most 32-bit computers won't be able to keep time properly anymore, at least without some major hacks at the OS level interpreting an earlier date, which the RTC can store, as the current one.
I suspect that armv7 will last the longest, also because a lot of use cases for them don't even have an RTC and won't be affected as much.
vinciblechunk@reddit
sizeof(time_t) has been 8 on 32-bit Linux since 2019. (On NetBSD since 2012.)
Serena_Hellborn@reddit
32 bit addressing has little to do with 32 bit arithmetic
UNF0RM4TT3D@reddit
Not even what I mean. In my statement I don't really care about the architectural limitations of the chips, rather I'm talking about the hardware and firmware implementations not being able to handle the dates.
psi-@reddit
Please explain how 16bit machines handle dates to this date?
Booty_Bumping@reddit
But that's not true? x86 and BIOS never actually dealt in Unix time. The RTC in a typical x86 computer is limited to December 31, 9999. Or 1999 if it's not Y2K compliant.
MorallyDeplorable@reddit
using epochs for time is well tested and hardly a hack
SoBFiggis@reddit
The time_t issue was fixed and pushed a decade ago. It will be as much of an issue as Y2K was.
TwiKing@reddit
Commander Keen ran so wonderfully on my 486!
HaplessIdiot@reddit
EXACTLY and people are making FPGA versions of the 486 you can OVERCLOCK LIKE CRAZY! why why why did they do this on vanilla?????
sunkenrocks@reddit
Because nobody is around to maintain or test it. It will also likely be forked and maintained by someone. If you are so passionate about the 486, why don't you fork and maintain it? You also have some pretty dang modern versions of Linux before this becomes a problem anyway, and I haven't seen any projects packaging distros for these FPGA implementations you speak of using a modern kernel version. What kernel version are you running on your FPGA?
teo-tsirpanis@reddit
Writing SIMD code for a modern processor would run faster than an overclocked 486 on an FPGA. And it would be more accessible to everyone.
Narishma@reddit
It would be surprising if it didn't, since it was designed for 8088s and 286s.
MatchingTurret@reddit
Commander Keen didn't need a 486. But Wing Commander III did...
aitorbk@reddit
Yeah, and looked amazing on glide.
yrro@reddit
I used to play it on an 8086... :'(
skinnybuddha@reddit
Hey, I was using that!
Sataniel98@reddit
Fair. Most modern distros have dropped x86-32 altogether anyway, and if they haven't, they usually compile against much newer targets like i686 (Pentium Pro, Pentium II, AMD K7, but not the original Pentium, Pentium MMX, or AMD K6) - this goes for Alpine Linux and Debian 12 - or they even require SSE2 (Pentium 4, Pentium M, no non-64-bit AMD) - such as Void Linux and Debian 13. The only distros that still support actual i486 processors are Gentoo, because it's source-based and you can compile it for whatever you want, and Slackware, because it never updates.
AliceCode@reddit
I refuse to write software for x86-32 these days. It's too much of a pain. Just recently I was writing a short-string implementation that takes advantage of the fact that pointers are 8 bytes, so that it can store 15-byte inline strings without any allocation. It would only have been 7 bytes on 32-bit, and that wouldn't have been worth the trouble, really. 15 bytes? Good. 7 bytes? Not so good.
rlaptop7@reddit
What code are you writing where you would know what architecture you are on? It would have to be very low level.
AliceCode@reddit
Sorry, it's not specifically x86-32, it's any 32-bit architecture. But I typically don't write code for ARM either, so typically I'm writing for x86_64 since that's the most common architecture.
And yes, it is low level code. Rust/C. But it's not often so low-level that I'm targeting a specific flavor of 64-bit ISA. Just 64-bit in general. But occasionally I'll use SIMD or something else that's x86 specific, such as a raytracer I wrote last year.
chmod_7d20@reddit
That is cursed.
AliceCode@reddit
What's even more cursed is that it can store and differentiate between static strings, heap allocated strings, or reference counted strings, and handles them all with ease.
psi-@reddit
Sounds like Symbian with its 16 different string types (at a glance, probably more). The ease was nowhere in sight...
AliceCode@reddit
Like the other person said, it's a simple type, and handling all the different representations is as simple as a switch/match statement.
turdas@reddit
From what I gather this is just one type. The implementation details of it shouldn't really matter to the user.
yrro@reddit
Inside STL: The string
ukezi@reddit
That is the usual short string optimisation in about every C++ std lib out there.
Zeikos@reddit
Gotta love using byte encoded string as a pointer like some sort of deranged hash table
AliceCode@reddit
I have no idea what you're talking about, to be honest.
msthe_student@reddit
Even "i686" isn't really i686 anymore, a lot of it requires extensions not found on the Pentium Pro
turdas@reddit
When I first really got into Linux in the mid 2000s with Arch, I'm pretty sure they were already compiling exclusively against i686 and x86_64. That was 20 years ago.
ohhowcanthatbe@reddit
You have to pick a line and move on. 64-bit seems legit. 2legit2quit. Omfgitislate
ComprehensiveHawk5@reddit
It's over... The Linux kernel has fallen. Trillions must migrate to NetBSD
Flashy_Pollution_996@reddit
Can’t even install that bsd shit on my laptop what a joke
ouyawei@reddit
but you can install it on a VAX!
jeroen-79@reddit
Good luck carrying your VAX everywhere you go.
MrGeekman@reddit
i486 was discontinued in 2007.
Socializator@reddit
Well, it still goes strong in the Hubble Space Telescope!
wafflingzebra@reddit
But what if someone finds an exploit and threatens space agencies with ransomware!!?!?
ignorantpisswalker@reddit
You meant dozens.
Journeyj012@reddit
Less than a dozen.
A_Canadian_boi@reddit
Around two and a half.
jeroen-79@reddit
2027 will be the year of the BSD desktop.
SirGlass@reddit
I always like to comment when some old architecture is removed from the kernel: that's not really when support is dropped. Official support will be dropped in 2035 (maybe later), and no one will even notice then.
6.12 is an extended-support kernel (CIP, or whatever it's called) and will get security fixes and minor bug fixes until 2035.
So even if you have some old 486 machine, you can get bug and security fixes for 9 more years, and let's face it, you won't see any benefit from running the newest kernel on 30-year-old machines anyway.
I also 100% guarantee that in 2035, the actual date 486 kernels stop receiving support, no one will notice.
dnabre@reddit
It doesn't get the attention it really should, especially in places like this where X support is being dropped from the mainline; a reminder about the other supported branches (trees?) would help. At the moment kernel.org denotes 6.19.12 as the "stable" branch and 6.18.22 as the "longterm" branch, with longterm EOL being Dec 2028 for 6.18.
The mainline is where all the big stuff is happening and where all the news focuses, but 95+% of users don't (and probably shouldn't) run the mainline.
SirGlass@reddit
I was referencing this
https://wiki.linuxfoundation.org/civilinfrastructureplatform/start
CIP or "Super long term release " kernels that are supported for 10 years; SLTS v6.12 will receive updates all the way through 2035-06.
That is probably officially when 486 will no longer be "supported"
dnabre@reddit
I wasn't aware of this, definitely grateful to learn about it.
The main kernel development team is always the focus of news about changes to the kernel, and projects like this never get brought up. This news story being about mainstream development removing 486 support, with near- and long-term support falling mostly to CIP (or something along those lines; that's probably not the most accurate description), would come off as a much different situation.
As is, it sounds like "sorry, if you are using a 486, current Linux isn't going to work for you anymore" compared to "development and new features are no longer going to be actively developed for 486, and support may disappear in 2035". Reporting is really manipulating (intentions aside) the message here.
SirGlass@reddit
Yea but that is the more clickbait headline so I am not surprised
So like you said, "Linux" isn't actually dropping support for the 486. It will be supported until at least 2035-06, so if you have some old hardware or old router still running a 486 chip, you can still be on a fully supported kernel for another 9 years.
And your old hardware won't see any improvements anyway from being on the latest kernel branch.
Tired8281@reddit
It's all about the Pentiums!
i860@reddit
My 486dx50 will never forgive you for this skullduggery, Linus.
arf20__@reddit
i am very sad now, thanks
UBSPort@reddit
Brian Lunduke felt the patch drop like a disturbance in the force, ran out to his front lawn, and screamed at the sky for 10 minutes - I guarantee it!
red_sky33@reddit
Tanenbaum was right! Designing for x86 was a mistake, this would be so much easier with a micro kernel
dnabre@reddit
It's interesting to see Linux as the only major operating system that hasn't moved to a micro or hybrid/micro kernel design.
Personally, I don't think Linux would have happened and become the big thing it is if it had started out with the overhead of microkernels and looked so different from classical UNIX operating systems. It's hard to imagine Linux's incremental development ever changing that overall architecture to the point of it becoming a microkernel, though.
dnabre@reddit
I get the practicality of this. It's a shame that they can't isolate the platforms they don't want to deal with anymore (regardless of reason) in a manner that doesn't make them disappear. Whatever mainline features have requirements that the 486 code doesn't provide wouldn't be supported, but the code would still be there, and the interfaces for it would be maintained. The feature-dependency model isn't how the Linux kernel is designed (to my minimal understanding of it). I just wish this older hardware could be set aside, isolated, no longer a burden on the developers, without having to be removed.
People can still grab the older releases for support, but it's sad to see these things go. A development model that permits things to be somewhat abandoned or isolated, but still there, certainly isn't a priority for Linux. It just seems bad that the only way to relieve themselves of the burden of an old platform is by removing/breaking it.
MrTenDollarMan-@reddit
I don't understand any of that. All I know is that I'm a Linux user since October and it's great!
IllustriousBed1949@reddit
The kernel supports a lot of computer architectures, and for a long time. The 486 is an ancestor of our modern CPUs. The last 486 was released in 1995 (it was already obsolete at that time).
Supporting these architectures means specific code for them that at some point you have to maintain, and that maintenance has a cost. So from time to time, the kernel team decides to remove support for old hardware (it happens very rarely...).
msthe_student@reddit
IIRC there have been embedded cores released later
shadedmagus@reddit
Right, the discrete PC CPUs of the 80486 line were what stopped being sold before this century.
Embedded systems are a different beast entirely and the groups supporting them can still pull in the 32-bit code they need.
backyard_tractorbeam@reddit
And the first Linux platform was the Intel 386, the computer Torvalds first developed Linux on. Support for that has already been removed, I assume ceremoniously.
MrTenDollarMan-@reddit
I see, thanks!
deanrihpee@reddit
end of an era
yo_99@reddit
To be fair, if you are actually using 486 you are better off using something like ELKS.
Narishma@reddit
No, you're not. ELKS is for MMU-less CPUs like the 8086 or 286.
AppleCherryWater@reddit
Can anyone explain why support is removed?
SirGlass@reddit
From my basic understanding, when some feature or change is added, they still need to test and make sure it works with the 486 architecture.
That takes time and testing, or it has to be written in a way that still works with the old architecture. Removing it just makes development easier, as you do not need to worry about breaking some old 486 code.
And why spend time and effort making sure the 486 architecture still works if no one is running it? Or at least, no one is running it on a modern kernel.
Also, the 6.12 release will be supported with security fixes until 2035, so you still have 9-ish years of life if you are running some 486 machine.
Sataniel98@reddit
Because new features that are implemented in hardware on new CPUs, where software just needs to call some instructions, have to be tediously emulated in software on old CPUs. The i486 was a CPU introduced in 1989 and popular until the late 90s (including AMD and Cyrix clones). Even if systems that use them still exist, they're unlikely to use or benefit much from new Linux kernels.
jeconti@reddit
I remember upgrading to the 486 when our 386 couldn't handle LucasArts Rebel Assault.
kombiwombi@reddit
It has not been removed yet. This is to stop building kernels for the 486, a sort of "turn it off and see who screams" decision.
The issue of 486 support gets reviewed every now and then, as it was for the 386 before it. This time it was felt that there really are no users outside of retrocomputing and non-internet-connected systems. This is not surprising: 486 systems aren't a good fit for the modern world. For example, the IDE disk interface is three generations of disk I/O bus behind current storage, so even spares from second-hand suppliers are scarce.
There are a few wins for kernel developers, but mostly in removing features which would need to be maintained but are difficult to test. To be honest, inability to test is how most old architectures get removed: someone makes a change that accidentally breaks the kernel on that processor, and no one notices for a few years.
Note that if there was a big group of users, the kernel would be glad to support them. So there is no thought of retiring 32-bit at the moment, beyond "wouldn't a world with no 32-bit code be wonderful" daydreaming.
minus_minus@reddit
New CF cards are still being made and sold as far as I know. They are direct pin compatible with IDE.
DheeradjS@reddit
It's moved out of the main line. If you really want it you can pull it in.
cp5184@reddit
I hate it, but some of it may be about 486s that don't have a hardware FPU: getting rid of the code that handles FPU exceptions, or at least I think I saw that mentioned.
That's a better reason than ditching CPU support for no reason at all.
The 386 removal had a worse rationalization, IMO. I don't remember the specific details, but there was an instruction the 386 didn't support, and it didn't seem like that was actually the liability that people supporting 386 removal claimed it was. So it felt like it was done dishonestly, on a false premise.
yrro@reddit
Lack of CMPXCHG? That is a pain for lock-free programming, which becomes more important as core counts increase. Dropping 386 meant that maintainers no longer needed to write alternative implementations of data structure operations that would only be used on totally obsolete hardware.
cp5184@reddit
It's been a long time, but I don't think it was that simple.
paskapersepaviaani@reddit
There are still plenty of kernel forks and projects that enable people to run older hardware. It's not a problem really.
TheReelSlimShady2@reddit
Yeah, that's what happened when they dropped Itanium support from the kernel. It's just maintained out of tree now.
00raiser01@reddit
Not worth the man power to keep alive.
voxadam@reddit
Source: https://arstechnica.com/gadgets/2026/04/linux-kernel-maintainers-are-following-through-on-removing-intel-486-support/
rook_of_approval@reddit
less maintenance burden
Rob_W_@reddit
The very first Linux install I did (Slackware 1.0!) was on a 486/SX (later upgraded to a DX4) at my community college. Ran a web server for the CompSci & Engineering departments and they taught C/C++ programming classes on it. Got a lot of mileage until a power blip took out the hard drive.
TheFumingatzor@reddit
🥲
caceomorphism@reddit
And here I am still waiting for MCA support for the 486...
sulix@reddit
Fortunately, it's still easy to revert: https://davidgow.net/linux/i486.html
SharktasticA@reddit
Thank you for doing this and sharing, this is great! I switched to 7.0 for my project (SHORK 486) and noticed the e820 regression caused panics with EXTLINUX on my IBM ThinkPad 365ED and some PS/ValuePoints. I didn't precisely locate the issue; just porting over the e820 code from 6.14.11 (the kernel I was using before) was easy enough for now.
ggppjj@reddit
I knew it lmao. I knew you'd be here, hahahaha.
HaplessIdiot@reddit
thats what im saying dawg thanks for sharing the troof! people are such corpo bootlickers the kernel is for ANY CPU... this is so dumb along many other things linus is doing he has dementia.
MrKapla@reddit
Amazing username!
minus_minus@reddit
I’m curious if anyone is aware of 486 compatible cores still in production that run anything like vanilla Linux. I could see them maybe as useful for embedded systems with a legacy code base but I’m not aware of any actual instances.
Affectionate-Sea8976@reddit
nice, 32-bit it's peace of sh*t
HaplessIdiot@reddit
16 bit actually and there is no performance advantages to removing this users here are uneducated.
Affectionate-Sea8976@reddit
why would anyone want this legacy sh*t?
you said it yourself that users are 'uneducated', so why would anyone keep this mammoth piece of sh*t if people aren't going to use it or be interested in it?
HaplessIdiot@reddit
https://www.reddit.com/r/linux/comments/1slwxfh/comment/oga7au9/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Affectionate-Sea8976@reddit
b u l l s h I t, foooooo
ArdiMaster@reddit
Certain YouTubers in shambles right now
HaplessIdiot@reddit
This is why we have kernel forks like zen and cachyos so we don't have to deal with BS like this
Booty_Bumping@reddit
...what? The CachyOS kernel doesn't run on any hardware before 2014. It is the opposite of a solution to this problem. And Zen doesn't touch hardware support at all.
Dalemaunder@reddit
???
The 486 was launched in 1989. I don't think most people really care.
HaplessIdiot@reddit
if i wanna use a retro cpu why delete code that works for no reason quit defending corposlop
kcat__@reddit
He's just living up to his name, let him be
DramaticProtogen@reddit
Even most BSDs don't support 486.
TyrionBean@reddit
Well I, for one, am outraged! It's JUST like when car manufacturers removed support for cart wheels with spokes! A LOT of us were miffed at that outrageous and uncaring display, I can tell you! And I've NEVER forgiven Henry Ford for it either!
chmod_7d20@reddit
Shame
IBNash@reddit
My first PC was a 486-DX2, way back in 1995. I'm glad this cruft is being removed.