
Apple has discontinued the Mac Pro – but it’s just the first of the tower computers to go. The rest will follow soon.
Fruit-sniffers extraordinaire 9to5Mac got the news yesterday, complete with official confirmation from Apple itself. It’s official and it’s happened, but there have been warning signs for months – in November 2025, Bloomberg’s Mark Gurman said “The Mac Pro is on the back burner.”
The phantom fruit-flingers of Silicon Valley launched the seven-thousand-buck Apple Silicon-based Mac Pro in June 2023, with an M2 Ultra SoC. It sported seven PCIe slots – but the problem was that cash-rich customers couldn’t add the sorts of expansion cards that normally go into a PCIe slot… to the extent that Apple publishes a page about which PCIe cards you can install in your Mac Pro (2023). Notably, the machine did not support add-on GPUs: only the GPU that’s integrated into the CPU complex, along with the machine’s RAM and primary flash storage. The machine also had no RAM expansion whatsoever.
Presumably, this limited its appeal for many traditional buyers, and the machine never saw an M3 or M4 model, let alone the M5 SoC that The Register covered shortly before Bloomberg called the Arm64 cheesegrater’s fate.
Thus ended a line of distinctive machines, from the original G5-lookalike Xeon-based Mac Pro that 20 years ago was the “fastest PC in the UK”, followed in 2014 by the polarizing “Darth Vader’s dustbin”, and then in 2019 by the Intel-based “cheesegrater”.
Tracing the integration trend
This machine is a high-profile example, but the trend is inexorable, and this is how the rest of the industry is going to go: the path to performance is ever-greater integration. The original 1981 IBM PC had very little on the motherboard: a 16-bit CPU on an eight-bit bus, 16 kB of RAM, a keyboard port, and a cassette interface. Everything else was on expansion cards: graphics, serial and parallel ports, an optional-extra floppy disk controller. Over the 45 years since then, most of the PC’s expansions, peripherals, and add-ons gradually migrated onto the motherboard, then into the chipset, then into the processor. Processors went from 8-bit to 16-bit, then to 32-bit, bringing the memory management unit onto the CPU die. The next generation absorbed the math co-processor and a tiny amount of static RAM as a cache, so the cache on the motherboard was demoted to “level 2” cache… then that migrated onto the CPU die as well. This was not just some Intel thing: for instance, Motorola’s 680×0 family went through much the same evolution.
Bringing a whole second CPU core on board followed: AMD launched the Athlon 64 X2 in early May 2005, and Intel the Pentium D mere weeks later. The gap between the two rivals was narrowing: AMD had launched the 64-bit Opteron back in April 2003, while Intel’s 64-bit Xeon didn’t follow for over a year.
Graphics followed: by the end of the 1990s, the Intel 810 chipset included a GPU, and to this day, the Linux kernel driver for Intel integrated GPUs is named after the Intel 915 chipset of 2004. In 2006, AMD bought ATI, and by 2008 it was talking about on-chip GPUs, although it took a while to happen: it announced the “Llano” APU chips in 2010, and they launched the next year – the same time as Intel’s GPUs moved onto the CPU die, with the second generation of Core i-series chips, codenamed Sandy Bridge. The x86 market was finally catching up with where Arm had been with the ARM250 SoC in 1992 – nearly 20 years later.
In 2020, Apple raised the bar for desktop and laptop processors with the M1 generation of Apple Silicon, integrating the computer’s RAM into the SoC package and putting its nonvolatile storage directly under the SoC’s control. For laptops, this wasn’t such a huge shift – ever since the “Retina” MacBook Pro in 2012, Apple’s laptops have had soldered-in, non-upgradable RAM, just like every MacBook Air since the first one in 2008.
In August last year, we mentioned the new Reg FOSS desk testbed, a Dell XPS 13 made in 2018. It has no DIMM slots: the RAM it came with is all it will ever have.
Who’s next? Everyone
The trend is inexorable. Thanks to Moore’s Law, for 60 years buyers and users have expected computers to keep getting faster. The end of Dennard scaling started to put the brakes on that, shifting attention to its successor, Koomey’s Law, which fewer people remember: that computers need ever less electricity to do the same work. Fewer still know Moore’s Second Law: that as chips get ever more integrated, the fabs to make them cost more and more.
The writing on the wall is large and clear. You can still have high-end kit, but you don’t get to put it together from discrete bits. The fastest parts – the CPU, GPU, volatile and non-volatile storage – all get assembled as a single, highly integrated, non-upgradable component.
The fabrication failure rates will be horrendous at first, but that’s OK: so long as the duff region can be turned off, you can sell the working remainder as a lower-end part. This is how Sinclair made the original ZX Spectrum so affordable: it bought known faulty RAM chips cheaply, and turned off the bad half of each chip.
Apple offered a Mac Studio with 512 GB of RAM, although, one year on and thanks to spiraling RAM prices, that model quietly disappeared earlier this month.
If you want faster x86 kit, it is heading in the same direction: huge, highly integrated SoCs with the whole core of the system in one package. AMD is well placed for this: it already has very capable on-chip GPUs and the lead in chiplet-based manufacturing. The FOSS folks favor it, too, as AMD’s GPU drivers are all open source. They’re good enough for gaming, as Valve’s Steam hardware shows.
Nvidia didn’t get to buy Arm, so it can’t offer a combined package. Meanwhile, Apple’s respectable graphics performance demonstrates that a smaller, simpler integrated GPU, on the same die as the CPU and sharing the same RAM, can rival a more capable GPU that is bigger and hotter, but further away and with its own local RAM.
This, we reckon, is what’s behind the “AI” boom. Nvidia is so gung ho for vast LLM clusters that it’s taking its enormous market capitalization and investing it in its own customers. Its GPGPU line – graphics chips that don’t even have graphics outputs – is the last gasp of the discrete GPU market. When this bubble pops, Nvidia will have nowhere else to go.
Aside from them, discrete graphics cards are history, just as disk controllers were a few decades earlier. DIMM slots are going too. The primary storage will be built in. (The industry missed a great deal there.)
What’s the point of a tower Mac Pro which, despite lots of slots, can’t take more memory, or newer GPUs, or even a bigger primary SSD? Well, not much, and so it’s gone. But as was the case with GUI desktops, and laptops with built-in pointing devices, and USB ports replacing everything else, and indeed with fondleslabs in general, the rest of the computer industry is going to follow where Apple goes first. There’s no point in tower or big desktop cases any more, when the board has no need of expansion slots. You may as well build it all into a neat little closed box at the factory – you get better cooling that way, and it’s quieter as well as cuter.
The first microcomputer expansion bus was the Altair 8800’s S-100 bus, although DEC’s UNIBUS predated it, just as minicomputers predated micros. The late great Gordon Bell invented UNIBUS in 1969, but 57 years later, the idea of the expansion bus has reached the end of its route. We predict much resistance to the idea, but the expandable desktop (and laptop, and server) computer is obsolete. ®