Does anyone find the use of the word "fork" outside of open-source projects a bit annoying? BMW forked their 3-series coupe and called it 4-series. Ugh...
Will you also complain about Yogi Berra's "When You Come to a Fork in the Road, Take It! Inspiration and Wisdom from One of Baseball's Greatest Heroes", published in 2001?
The word fork has been used for various things in computing long before GitHub was around. I agree it's a bit overused in this article, but you have the Unix fork() system call, which creates a copy of the current process and effectively forks the execution. Older Macs of the '80s and '90s used a filesystem that allowed files to have both a data fork and a resource fork; they worked like independent streams to separate program data from embedded resources like images, sounds, etc.
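For anyone who hasn't run into it, here's a minimal sketch of the Unix fork() call in C; it's the canonical "copy the current process" primitive the word comes from (error handling kept to the bare minimum):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* duplicate the current process */
        if (pid < 0) {
            perror("fork");              /* fork failed */
            return 1;
        } else if (pid == 0) {
            printf("child: pid %d\n", getpid());   /* runs in the copy */
        } else {
            waitpid(pid, NULL, 0);       /* parent waits for the child */
            printf("parent: child %d finished\n", pid);
        }
        return 0;
    }

Both processes continue from the same point after the call, which is exactly the "one path becomes two" sense of the word.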
I don’t think the argument holds very well. Apple has made three processor architectural changes including one on the current OS. “Too complicated” doesn’t really align with the execution history Apple has. The fat binary support Apple has is a huge tool and the ability to add instructions and optimizations to their chips to help with x86 emulation is a big deal.
I don’t know if Apple will ever actually do this, but it seems odd to suggest it’s not feasible given past performance and their current technology holdings.
The past cases are qualitatively different - an architecture switch to much more powerful processors with some degree of software backward compatibility provided. That's not yet a practical option.
Furthermore I predict that Apple won’t treat it like a transition, rather it will be a long term dual platform strategy. They’ll encourage developers to build fat binaries and have good x86 translation support in the interim.
But it’ll be long term, at least five years. Standard iMacs and MacBooks will be moved over to ARM quickly, higher-end iMacs and possibly some MBP SKUs a few years later, pro devices will stay Intel indefinitely.
The transition only needs to end if and when Apple can beat Intel/AMD for uncompromising high performance. Maybe that will happen, but until then, fat binaries are a perfect solution.
There are definitely some common threads between this and Apple's push to have apps use bitcode. It would make this duality a lot easier from an app distribution perspective without the need to have an emulation layer. My only guess as to why Apple hasn't required bitcode by now is the sheer number of 3rd party libraries out there that aren't built with bitcode enabled.
They don't need to force anything: Apple will simply release the hardware and make it straightforward for developers to build fat binaries. And why wouldn't developers eagerly comply?
— The vast majority of programs will probably recompile with no changes.
— Many developers who will need to make ARM-specific optimisations will have already done that work for iOS releases. Ditto for graphics optimisations for Apple's custom graphics cores.
— For the remaining developers, the skill of optimising for Apple's ARM CPU and GPU is already mature in the marketplace.
— Most apps which don't fall under the aforementioned categories are probably high performance apps that won't be important for buyers of the smaller iMac and MacBook Air.
For a platform with a history of smooth transitions, this would be the easiest "transition" in Apple's history.
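To illustrate the "recompile with no changes" point above: a fat (universal) binary is mostly a build-system concern. Apple's clang accepts several -arch flags and emits one Mach-O with a slice per architecture, and source only needs to diverge where it genuinely relies on arch-specific features. A minimal sketch (the tuning comments are placeholders, not real optimisations):

    #include <stdio.h>

    /* Each slice of a fat binary is compiled once per architecture; most code
       needs no changes at all, and the rest can branch on predefined macros. */
    #if defined(__aarch64__) || defined(__arm64__)
    #define ARCH "arm64"        /* NEON/arm64-specific tuning would live here */
    #elif defined(__x86_64__)
    #define ARCH "x86_64"       /* SSE/AVX-specific tuning would live here */
    #else
    #define ARCH "other"
    #endif

    int main(void) {
        /* Built with something like: clang -arch x86_64 -arch arm64 fat.c
           (assuming a toolchain and SDK that ship both slices). */
        printf("running the %s slice\n", ARCH);
        return 0;
    }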
He’s said since that while that was true when he left Apple, they have clearly made strides in the direction of making bitcode platform-neutral. Notably the shift from 32-bit to 64-bit ARM on the Watch was totally transparent to developers; they didn’t even need to resubmit their apps. And that’s not a small architectural change.
I think the argument against building custom cpu for Mac Pro is pretty good, as this puts them in a situation that is different from past architecture changes.
The future of Pro machines is largely about VMs. Heck, the recent past of Pro machines is largely about VMs — that’s the reason I so often hear people complain about 16GB ram limits, for example.
In that world, I wouldn’t be surprised to see a Mac Pro running a (few?) very high core-count CPU(s) underneath an Apple/ARM based software stack, even if the CPU comes from Intel or AMD. That would let a putative Mac Pro run the same software as the battery-friendly laptops.
You don't need a VM to run Photoshop. The whole point of the desktop Pro machines is high-performance computing for creative applications. VMs have no place in that. RAM is an issue because it's a limiting factor for many rendering and editing applications.
Laptop Pro is arguably different, but I suspect content creators still outnumber developers by a significant ratio.
Apple has made two transitions while retaining compatibility, in one case with emulation of the old architecture being around the speed of the previous (68k to PPC) and in the other with emulation being faster than the previous architecture (PPC to x86). There's no ARM that's fast enough to emulate an x86 in the same power envelope, so right now any transition would either require all performance sensitive apps to be ported or would result in machines that were slower for many tasks.
Bear in mind that both previous transitions were due to the processor line Apple was using being effectively EOLed (explicitly in the case of 68k, more implicitly in PPC - nobody was interested in making CPUs that had the appropriate performance/power ratio for consumer machines). Apple is doing great things with ARM, but they're not /that/ far ahead of the rest of the industry that they can pull off a seamless transition in the near future.
Do the ARM companies besides Apple even care about desktop long-term? There seem to be enough billions to be made in mobile and IoT that they can keep focusing on that indefinitely.
Apple sells iPad Pros which they call “computers”. Those are powerful devices, but there is not much productivity software available, and UI is very different from traditional desktop interfaces.
Apple could offer an Air-class machine powered by ARM with a familiar macOS, and most developers would be able to support it in a matter of weeks, if not days.
I did say “mainstream”. Despite the HN bubble, development is not in the “mainstream”. Besides, personally, I can’t stand developing on just a laptop screen. I need at least one external monitor and preferably two.
If I had to do a lot of development on the go, I would have to invest in a portable USB monitor or use my iPad as a second display.
> ...with emulation being faster than the previous architecture (PPC to x86).
This is not the history that I remember, at least with contemporary hardware. In my recollection, PPC emulation was generally slower, but it didn't necessarily really matter.
> There's no ARM that's fast enough to emulate an x86 in the same power envelope, so right now any transition would either require all performance sensitive apps to be ported or would result in machines that were slower for many tasks.
I don't think that's true, at least at the lower end.
Whether it is running x86 emulation or native, the Snapdragon 835 comes pretty close to the Celeron N3450, at least in terms of single-core performance. Both chips have a comparable power envelope.
>Emulation adds overhead of course, but it's worth noting that most ISAs are effectively emulated through micro-operations
That's not even remotely the same thing because that micro-code is optimized to the ISA of the processor and it's obviously specific to the microarchitecture of the processor which is again optimized for a specific ISA. It's like translating a Wikipedia article from normal English to simple English. You didn't cross a complicated language barrier.
If I may add my own comment: It's not worth noting at all because your ARM CPU still only implements ARM optimizations and your x86 CPU only implements x86 optimizations.
If you had proposed adding hardware acceleration to make emulation of a specific architecture faster then maybe one could have squinted and said it's emulation instead of secretly implementing a x86 CPU in your ARM CPU.
I don't think it's that different. Microcode can work efficiently with multiple ISAs. In practice, x86 and amd64 are actually fairly different targets. With amd64 you have far more registers and no x87 weirdness, for instance. ARM chips also generally support multiple ISAs.
Of course you have trade offs regarding optimizations, but that's true at multiple levels. For example, many exotic x86 instructions (like BCD) aren't as optimized as they could be.
> If you had proposed adding hardware acceleration to make emulation of a specific architecture faster then maybe one could have squinted and said it's emulation instead of secretly implementing a x86 CPU in your ARM CPU.
Of course I'm talking about hardware-level support. I'm talking about the things that already exist.
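To make the overhead point concrete: a pure software emulator (no hardware assist, no cached translation) has to fetch, decode and dispatch every guest instruction in code, roughly like the toy loop below, whereas a CPU's micro-op decode does the equivalent step in dedicated hardware every cycle. This is a deliberately naive sketch with an invented two-instruction guest ISA, not how Rosetta or any real translator works:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy guest ISA: 0x01 = reg[a] += reg[b], 0x02 = halt. */
    enum { OP_ADD = 0x01, OP_HALT = 0x02 };

    static void run(const uint8_t *code, uint64_t *reg) {
        for (size_t pc = 0;;) {
            uint8_t op = code[pc++];      /* fetch */
            switch (op) {                 /* decode + dispatch: this bookkeeping is
                                             the per-instruction overhead of emulation */
            case OP_ADD: {
                uint8_t a = code[pc++], b = code[pc++];
                reg[a] += reg[b];         /* execute */
                break;
            }
            case OP_HALT:
                return;
            }
        }
    }

    int main(void) {
        uint64_t reg[4] = {0, 5, 7, 0};
        const uint8_t prog[] = { OP_ADD, 1, 2, OP_HALT };    /* reg1 += reg2 */
        run(prog, reg);
        printf("reg1 = %llu\n", (unsigned long long)reg[1]); /* prints 12 */
        return 0;
    }

Real translators amortise that cost by translating whole blocks ahead of time and caching them, which is why the hardware-assist question matters so much for the remaining gap.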
Rosetta apps were not faster in emulation. This gets repeated a lot, and while the Intel Macs were clearly faster at native apps, Power Macs were still faster clock for clock at their own PPC code. Barefeats did some head-to-heads; at least one benchmark has a Quad G5 beating a 3.0GHz Mac Pro (then Apple's top of the line) on PPC software, and the 2.66GHz was clearly slower.
Yes, the Mac Pro was way faster on its own turf, but Rosetta's overhead wasn't overcome by that. Not until the wide availability of Universal apps was the speed advantage apparent.
From personal experience: the slowest PPC Mac, the 6100/60, was about the speed of the LC II (68030 at 16MHz with a 16-bit bus) under emulation. It was much slower than my LC II with a 68030-40MHz card. It wasn’t until Connectix came out with a better emulator that emulation approached the speed of my old accelerated LC II. It also didn’t help that parts of the operating system were still emulated.
Apple also obsoletes hardware without making any transition or major software or hardware changes. I can develop for the latest Android on my 2009 MacBook, but not for the latest macOS or iOS.
I can probably install the latest macOS in VirtualBox on a 2009 MacBook, but not on the actual hardware...
The key argument is that transitioning low- to mid-end Macs isn't enough. They would also need to transition the high-end pro machines. That would require engineering super high performance, massively multi-core ARM CPUs with huge IO throughput, which would only sell in the tens of thousands of units range.
There is just no way on earth they could ever be economically viable. The economies of scale are just dire. Intel can only do it because they sell many hundreds of thousands, or even millions of these high end CPUs, not just a few tens of thousands.
So the real point of contention is, does it make sense to only transition part of the Mac lineup to ARM if this is correct?
I know this isn’t your argument, but that doesn’t make sense either.
Why hold back the entire Mac line for a model that might sell a few 10K units? Especially since Apple doesn’t know if the new Pro is really viable at all.
We’re also the only people who know what a T2 is. I doubt that had big marketing implications the way they are trying to spin ‘Axx’, absent some geopolitical/supply chain risk.
I see a lot of Surface Pro in the iPad[1]. That isn't a bad thing, it is great to have two design studios trying to outdo each other. If you agree that these devices should work better on ARM chips rather than x86 chips, then the iPad has an advantage with an already ARM-based ecosystem. Of course the Surface RT tried that too, and stumbled. But I wonder if Apple saw that as a hint of where Microsoft might go.
I find it particularly interesting that at both companies these are the products that have a license to kill sacred cows[2]. The Surface RT was "Windows on NOT Intel", the iPad has "stylus-improved UI". Both of these were antithetical to Gates's and Jobs's way of thinking.
Both products (and I've got several different generations of both) feel to me like the "post PC" product. An application focused, battery operated, network aware device. I am a bit surprised that Surface hasn't embraced cellular connectivity as strongly as the iPad has, that is a key feature of "on the go" computing.[3]
[1] And chuckled when the new ones had the pen attach with magnets.
[2] The colloquialism, "that is a sacred cow." meaning a feature or rule that cannot be broken.
[3] Yeah, I know the 'tether it to your phone' mantra, I get that a lot, but it's simpler (lower friction) to have it built in.
I strongly prefer tethering to network awareness. Every computer I've had with a WWAN card has thought it would be terribly clever if all the sockets get reset every time the air interface flaps. It's network aware! Tethering is far superior because your laptop never learns that the mobile link was down for a moment.
Re. [3], there’s often a significant cost to maintaining a separate data plan for each device. I was recently quoted $15/mo to connect an $18/mo iPad. That’s a lot to pay for the privilege of not turning your wifi hotspot on.
Yes it is a lot - T-Mobile charges $20 a month for an iPad data plan. I do it anyway.
It’s convenient not to have to burn battery on your primary device and while T-mobile offers unlimited 512kbps tethering, for $20 a month getting unlimited high speed tethering for us (my wife has a cellular iPad also) even though rarely used, is worth it.
Also, the iPad is an excellent hot spot. They’ve done benchmarks showing the iPad can work as a hotspot for up to 36 hours.
He made fun of the idea of putting a color screen on the iPod and a year or so later presented the iPod color.
Of course apple was gonna put a stylus on the iPad, with or without Jobs.
Apple might be content with asking users to buy third party hardware to draw on Mac, but they aren’t stupid enough to say you need to buy a Citrix to draw on your iPad.
It needs to die for a more important reason: Jobs was specifically referring to devices which required a stylus because fingers wouldn’t work as well on their resistive touchscreens.
I’m not sure you actually mean Citrix for that example, but I actually have a mouse somewhere for iPad from when I was working for Citrix and they built an iPad mouse that works with virtualised apps. I think Apple are even adding official mouse support to iPad OS now too.
Perhaps like the "Math co-processor" in the 386 days?
Imagine a future generation T-series chip with the neural cores that the A-series chips have. Accessible using the existing frameworks that you are already using. Or adding GPU cores to the MacBook Pro lineup as a replacement for the AMD GPUs that they have now.
And while they are doing that, they can release a Smart Keyboard for a future iPad Pro that has a trackpad. That's got to be one of the least risky ways to try out the iPad as a laptop replacement. If it's a flop, you're out the $150 or whatever it would cost but still have a perfectly good iPad that you keep using as an iPad.
Apple is in the position to take a "lets see where this takes us" approach. As long as they keep the OS and Frameworks from drifting apart, there are a lot of nice things that could happen.
Seems to me if you write Mac software, supporting a new main CPU would be at the top of your backlog - whereas supporting an optional accelerator card would be much less urgent.
For some context: the author of this article, Jean-Louis Gassée, was a former director at Apple, founded Be, Inc. and was chairman at PalmSource for a while.
His background may help or hurt his viewpoint, depending on how you see things.
I will say that I disagree with his points. And I'll also point out that the Mac ARM transition is happening right before our eyes with the T2 chip, which replaces a bunch of system components (memory, i/o, disk controllers, etc) with an ARM-based platform.
Practically, the only thing that hasn't been replaced are the main CPU and the GPU, but it's only a matter of time.
Sorry, what do you mean? On a current MacBook Pro the memory is controlled by the CPU and the disk is attached to a PCIe root port also on the CPU. The "disk controller" as such has moved into the disk.
The only way to access the soldered-in NAND storage is through the T2 chip, which encrypts everything on-the-fly with a unique burned-in AES key. The T2 also acts as the SSD controller, controlling the raw NAND chips. You can see this in an iMac Pro teardown, as the removable "SSDs" in it only have flash chips on them, no controller.
The T2 is not involved with memory access AFAIK though. As for I/O, it does handle the audio and camera access.
Seems like a distinction without a difference. On any PC with an SSD there is "a controller" and "some NAND" the difference with the recent Apple gear is they've changed which part you can remove and replace, and which you cannot. My computer also has NAND you can only address through a dedicated controller that handles the encryption, but I don't think of it as indicative of a revolution.
I don’t think anyone buying a MacBook even knows about this or is calling it a revolution.
The OP is the first I’ve heard make a big deal about this and the follow up comment is basically saying that he’s overstating the relevance to some ARM migration. I wouldn’t take random Internet comments too seriously.
OP just doesn't know what they are talking about. Welcome to HN.
You are correct, the T2 chip is just a storage/memory controller.
A lot of apple fans think it's a magical coprocessor. It's only there so they can use cheaper soldered ram and ssds and prevent repairs and upgrades. Good ol apple marketing at work.
Oh I'm not claiming like OP that this is some revolution that proves that a Mac is basically an ARM computer already. I'm just correcting your claim that "the disk controller has moved into the disk". There is no "disk". Rather the disk controller has moved into the security controller.
That's consolidating some controllers. It probably has more to do having fewer parts, fewer vendors, and better repair lock-in. It's not really an architectural step towards replacing the CPU--it's a southbridge with better marketing.
It doesn't lock out Linux from using it. They deviated from the NVMe spec, and it took time (and somebody to actually investigate) for the Linux guys to figure it out:
Is the struggle for microprocessor supremacy still even relevant? I'd ask, "who's going to build the best quantum processor?" rather than, "who's going to dethrone x86?"
Aren't quantum processors essentially special purpose accelerators, where most of the computation is still done on a classical computer? Shor's algorithm at least looks that way.
x86 is a free/open ISA. The patents have expired up to the i486, and the x86_64 patents should expire soon. A guy from Oracle not too long ago implemented the i186 in Verilog as a hobby project. At least we know that if Intel ever gets too authoritarian, we can do the same for the i386.
I can't help but feel T2 chip is just another cynical ploy to gradually kill off third party repairs by baking software functions into proprietary hardware that only the manufacturer have access to. It may be a sound security decision, but I have doubts whether it's worth it for the end user. It also goes against a wider industrial trend of moving from ARM to RISCV and other architectures.
On the other hand, do all these things matter to users and developers as long as external components are hidden behind buses and abstractions?
> It also goes against a wider industrial trend of moving from ARM to RISCV and other architectures.
Is this really already an industry trend? It feels more like some parties experimenting with the platform, not necessarily with intentions to move to it wholesale at any point in the future. But I'm probably missing stuff, since I'm not following that space well.
The short answer is that since RISC-V is open source, it's a bypass around all the IP restrictions coming out of the US-China trade war; all the Chinese silicon companies want everything to be RISC-V yesterday, and it's a strategic priority.
Western Digital has committed to going 100% RISC-V in the near future, with Seagate and Samsung not far behind. As far as storage devices are concerned, there is definitely a trend.
> It feels more like some parties experimenting with the platform, not necessarily with intentions to move to it wholesale at any point in the future.
Some big players are moving / have moved to it already. Western Digital is already replacing its ARM HDD controllers with RISC-V chips, nvidia is replacing some chips in their GPUs with RISC-V chips, etc. These are all shipping products.
If you need a chip to do something relatively simple and low cost, you are probably already better off with RISC-V today than you are with ARM, mainly because it's cheaper and has no licensing issues.
I don't know of any big player doing anything complex with RISC-V yet. There is no bit manipulation ISA, no vector ISA, no encryption ISA, ... and many other things that you would need for a general purpose SoC that's not only cheap but also competitive in performance and TDP with ARM.
> I can't help but feel T2 chip is just another cynical ploy to gradually kill off third party repairs by baking software functions into proprietary hardware that only the manufacturer have access to.
But this is only a feeling. If you look at it rationally, you would have to point out at least one way in which Apple is suffering from third party repairs. Then you would have to make the case how said point is a cost heavy enough to justify these deep architectural changes. Go ahead.
> you would have to point out at least one way in which Apple is suffering from third party repairs
When a third party messes up their repairs, it's going to give Apple bad PR because people just see broken Macs.
This is not a problem for Windows PCs because enough vendors are involved and they can blame each other when something breaks. With a Mac, it's Apple all the way down, from software to hardware, and 99% of the blame will hit them, no matter how justified that assignment is.
>I can't help but feel T2 chip is just another cynical ploy to gradually kill off third party repairs by baking software functions into proprietary hardware that only the manufacturer have access to.
Not very cynical: Apple is against "third party repairs" period, and wants to sell you the whole thing as a black box.
They also believe that's their whole value proposition: that software and hardware comes as much as possible as a black box, and works well together (not perfect, but better than if it wasn't a black box for the aspects that they focus on - simplicity, integration, security, sandboxing, etc).
That's their stance, and they got to where they are from near bankruptcy, sales wise, in pursuit of that.
>It also goes against a wider industrial trend of moving from ARM to RISCV and other architectures.
The T2 chip is a drive/RAM controller. They are just using it so they can solder in RAM and SSDs. It doesn't do any processing in terms of the OS. Yes, it is ARM-based, but no, it has nothing to do with them switching to ARM, which just seems like the natural evolution of things now.
My take on the iPad form factor hasn't really changed since the introduction. It's really great for certain casual use cases, but as soon as you sit down at a desk to get some work done, oh god does this fucking suck. And the keyboard cases actually make it worse because you delude yourself into thinking it makes it better, when it does not.
So an iOS/iPadOS laptop with full KVM support always seemed like a no-brainer in my mind. Or at least a much more obvious move than macOS/ARM (which would split a small ecosystem, with the only logic being "they did it before and it worked out"). Plus 'the future' is obviously iOS.
> And the keyboard cases actually make it worse because you delude yourself into thinking it makes it better, when it does not.
When I was given my iPad I didn't want to buy a keyboard case. But sometimes I need a keyboard. I happened to have an extra Apple Wireless Keyboard, and it works fantastically. Wireless. Full-sized. And it's only slightly larger than the 9" iPad. I just throw them both in my bag.
(1) Nah, it certainly doesn't work "fantastically", but might work decently enough for you if you have bought an additional ipad stand. (2) If this was some really great solution and not just your personal workaround, Apple would have been promoting it years ago.
My point is the only way iPadOS can displace laptops is to be a laptop. Not this tablet-stand bullshit.
I love my iPad with the keyboard case. When I hook it up to an external monitor and my Kinesis keyboard, it’s even better. I’m looking forward to mouse support in iOS 13.
One of my favorite apps when it comes to the keyboard is Blink. It’s an SSH/Mosh client with a limited local shell. I use it to build a Jekyll site on an EC2 instance. Blink supports multi-monitor, so I can have different shells on different screens.
Multi-window in iOS 13 is going to be another game changer. This is the best device I’ve owned and I’m excited about what’s to come.
You literally just described a really shitty laptop from like 1995. Woah guys my PC is so cool and I love it! I can plug in another monitor and soon it will support a mouse . Have you ever tried a real computer, saying an iPad is more useful than anything else is just wrong.
You continue to miss the point. The cool thing about the iPad is not that it's a laptop. It's that it's a great device in its own category that also, more and more, can replace a laptop.
I'll take all the downvotes in the world from insane people who must have some mental block that makes them think they can do "more" on an iPad than a laptop. If your idea of doing more is being so weak you would give up mouse support and a real OS to shave off 1 lb of something to carry, then yeah an iPad does way more. They make laptops with touchscreens and great battery life too, bro. Where is your God now? I bet you had no idea.
Between touch screen and the Apple Pencil I would say the mouse is the least important input device, unless you really want to play a first person shooter. If you are a keyboard centric developer then ideally everything is accessible by a keyboard shortcut.
Several years back I used my iPad remoting in to my desktop exclusively for about half of the year. There are terrible ways, mediocre ways, and good ways to simulate mouse input with a touch screen. The critical difference between touch and mouse is that a mouse can precisely hit much smaller UI parts (which, a lot of people are going to have trouble seeing anyways.) The Apple Pencil effectively does this — though not with the keyboard mouse pairing most are used to.
I’ve been using Clip Studio Paint on my iPad Pro for quite a while. Their iOS app is a 1:1 UI port from their desktop app. The app is basically Photoshop for illustrators and painters. It works way better than I would have predicted. Without the Apple Pencil, I don’t think the UI would be usable, but the Apple Pencil is the primary input device for the artist.
I’m expecting the iPad to behave very much like a desktop/laptop soon. My bigger concern is all of the software charging by subscription, which is a lot easier to implement with iOS than MacOS/Windows/Linux (kind of ironic that the unsustainable 99 cent price points makes subscription the only viable revenue model.)
Does the display matter for a terminal application? As for battery and weight, you are talking about plugging in various other devices that you presumably don't carry around, so it wouldn't make any difference whether you plug it into power or not.
If you often do carry it (without the other gear), then I can maybe see it, though you can buy a used small thinkpad that can easily run the terminal programs.
The new Safari in iPadOS makes it possible to use Code Server (Visual Studio Code on browser) with keyboard. If you already spin an EC2 instance, it is definitely worth a try. This also gives hope about the upcoming Visual Studio Online and its compatibility with iPadOS.
That's not really necessity unless you can show that the current values for those 3 are insufficient and that an Axx transition will produce big gains in at least 2 of those 3 areas, is it?
Even if native apps compiled and written for the Axx architecture have better energy and thermal performance, what about all the x86 apps that have to be emulated? There'll be a tax for that. It could be worse.
Intel-based macs are probably wasting some power that Axx would theoretically not waste, and Axx might theoretically have better thermals, but there's not some massive demand for Apple to upend the whole ecosystem like that. Does Apple have competition threatening to steal their lunch because Macbooks' battery life isn't long enough? Not really, no?
> Even if native apps compiled and written for the Axx architecture have better energy and thermal performance, what about all the x86 apps that have to be emulated? There'll be a tax for that.
Initially, probably. Think of the long game. This is something they have already done, twice. They have the experience and expertise to make this less impactful.
> Even if native apps compiled and written for the Axx architecture have better energy and thermal performance, what about all the x86 apps that have to be emulated? There'll be a tax for that.
Longer battery life is always preferable in portable devices, no? Alternatively, what about the reduction in the need for the materials, therefore the environmental impact, used in making batteries for laptops? Apple want to be the greenest manufacturer.
In those cases, the switched-to architecture could emulate the previous one with acceptable performance. For the hypothetical amd64 -> aarch64 transition, I'm unaware of the existence of software, silicon or the combination of the two which could emulate the former on the latter while staying in a reasonable power envelope.
Emulation is non-optional if one wants to avoid splitting the ecosystem.
I agree that Apple would prefer to use the ipad pro as a way to migrate users from their intel machines. I bet there's more margin in an ipad pro than a macbook pro.
I can't see Apple using any ARM chip other than their own. If they commit to converting OS X to ARM then they'll either produce it in-house or, less likely, buy AMD (or maybe just buy AMD's Zen tech only).
One of the main reasons for Apple to move to ARM is to control more of their technology stack. It's the direction they've been moving in for a while now and this would be part of that strategy.
The article claims Apple would need to keep x86 around indefinitely for the Mac Pro - I'm just saying that makes no sense.
Apple would definitely prefer to only use its own chips, but if they're unable to, using ARM Zen from AMD makes a million times more sense than x86 Intel.
I think you may be underestimating Apple's willingness to let the Mac Pro be an ugly duckling. This is a platform that they left stagnant for years after all.
>but if they're unable to, using ARM Zen from AMD makes a million times more sense than x86 Intel.
How do you figure? Transitioning from x86 to ARM would be a major, expensive undertaking. If it's justified at all it would mainly be by Apple gaining control over that part of their stack. I doubt they would make such an effort and investment only to switch from the current third party, Intel, to a different third party, AMD.
But otherwise yes they're really not likely to make the transition at all. Given how small a proportion of their overall sales Mac sales are, they're most likely to just leave it on x86 indefinitely.
I'm speaking specifically about the Mac Pro, which Apple doesn't really care about. I get the sense you think I'm talking about using Zen in their laptops etc.
They're not going to build a chip that competes with high end Xeons just to field the new Mac Pro.
Given that, in a world where MacBook Pros are powered by A15s, Apple can either cut the Mac Pro, keep it as their only x86 product or put an ARM Zen in it.
I'm saying keeping it as x86 will straight up never happen. ARM Zen or cancellation are the plausible options.
Macs don't have enough sale volume to have multiple SoCs. Heck even iPhones and iPads share the same SoC. They can't justify having a new SoC for each of their Macbook, Macbook Pro, iMac and the Mac Pro with dozens of cores. If they transition to ARM then half of their Mac lineup will still stay on x86.
Extremely unlikely. The beauty about RISC-V is that it's open source and royalty free. Both arguments aren't that relevant for Apple, as they got an architectural license from ARM which gives them full access to everything and allows them to design their own chips, and because the royalty charges are vanishingly small compared to the prices Apple charges for its products.
Apple has also already invested so much effort in its own ARM chips that I believe it wouldn't make much sense for them to throw that away in favor of RISC-V, just for saving a few million dollars per year in royalties.
> Extremely unlikely. The beauty about RISC-V is that it's open source and royalty free
The other beauty is that you can extend the ISA with whatever proprietary instructions you want. Others develop 99% of the ISA for you, and you focus on the 1% that differentiates your product.
I'm not sure if the ARM licenses allow them to extend the ISA with "Apple-only" instructions, but RISC-V does, and it gives Apple more room for vendor lock-in. E.g. you couldn't use a Hackintosh anymore without buying a CPU directly from Apple.
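For illustration, this is roughly how a vendor-specific RISC-V instruction can surface in ordinary C via the assembler's .insn directive. Everything about the instruction here is hypothetical (custom-0 opcode, encoding, semantics); it would trap as illegal on a stock core and needs a riscv64 GCC/Clang toolchain to build:

    #include <stdio.h>

    /* Hypothetical vendor instruction in the RISC-V "custom-0" opcode space (0x0b).
       GNU as's .insn directive emits the raw R-type encoding, so no custom
       assembler support is needed. */
    static inline unsigned long vendor_blend(unsigned long a, unsigned long b) {
        unsigned long r;
        __asm__ volatile (".insn r 0x0b, 0x0, 0x0, %0, %1, %2"
                          : "=r"(r)
                          : "r"(a), "r"(b));
        return r;
    }

    int main(void) {
        /* Only meaningful on silicon that implements the instruction; shown to
           illustrate how private ISA extensions become a software lock-in lever. */
        printf("%lu\n", vendor_blend(1, 2));
        return 0;
    }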
I think Apple could stop Hackintoshing on Intel (in a practical sense) if they really cared to do so. There will always be the extreme fringe who would get it to work, but it's so easy at the moment to get a fully functioning Hackintosh (including iCloud, iMessage, App Store, etc) that I have to believe Apple is both aware and doesn't care to try to stop it.
I don’t believe Apple will transition away from x86 anytime soon, but for different reasons. They would abandon the pro market if it moved to ARM. Most x86 pro applications would have to be rewritten. What would be the incentive for software makers?
Apple might very well be able to bring out an ARM stack for the consumer market, but there is no indication that they can compete in the x86 pro or server market.
Somewhere in between. First patched, possibly significantly, then recompiled. Unless you have a lot of low level processing in ASM or intrinsics for simd. Then there will be a bit of rewriting.
PPC was niche compared to Intel. So they moved from a dying ecosystem to a thriving one.
If they moved from AMD64 to ARM, they would be doing the opposite. AMD64 is not going anywhere on the desktop and is by far the undisputed performance king. Not to mention how many shops and pro end-users they would instantly lose. The number of developers that run macOS and depend on AMD64 virtual machines must be enormous. Given how much Microsoft is embracing Linux, it would be commercial suicide for Apple to abandon the AMD64 platform since they would be driving all these devs to their competition.
ARM on the Macbook Pro would put a lot of software shops (I'm guessing tech and design firms are the main purchasers of Macs) using Docker in an awkward spot with their containers not running on dev machines.
Do these shops have to use the exact same container on their dev machine as elsewhere? Can they not build an ARM flavor of their container? Same Dockerfile but with an ARM base image instead of an x86_64 base image?
Generally no, but it's been convenient to have the architectures match up. It minimizes surprises. Different architectures also make people ask if they should switch to PCs.
Those shops are deploying onto Linux servers anyway, so they would be better off supporting a Linux OEM.
We use our Macs for design (Sketch, Zeplin, Adobe), web (via Java) and native iOS/macOS development; I have yet to hear anyone caring about Docker beyond conference talks.
But developers using Macs for x86 Linux development is a common thing. Besides actually needing the Mac part (Office, other software requirements), it would certainly be a hit to Apple if those developers dropped the Mac.
I am one developer using a MacBook Pro at work, but depend on the ability to run x86-Linux VMs. I could not use an ARM-Based Laptop for work.
I can repeat what I wrote: people like me are dependant on MacOS and on Linux. Currently the Mac offers this ability. I would like to use a native Linux machine, but the only alternative for me would be Windows + VM.
And in any case, if the developers are happy with a non-Mac machine, Apple shouldn't be :). Unless they really want to continue their course of driving anyone who does not depend on a Mac for living and who doesn't have limitless funds away.
For private usage, I am about to switch to native Linux for my next laptop, unless Apple comes out with laptops with a reliable keyboard and realistic upgrade pricing again. Ironically, I would love an ARM-based laptop for private usage, if I can get one, I might just get the new Pinebook Pro.
All signs are pointing to Apple releasing a laptop with a new scissor-switch keyboard design Soon™.
I'm not sure what you mean about "realistic upgrade pricing"...it's not like you've ever been able to "trade in" an old computer to any manufacturer for a steep discount on a newer model (analogous to upgrade pricing of software)...
I mean the prices you need to pay to upgrade the machines beyond the basis configuation. Currently Apple charges 240€ for 8g of memory, and 240€ for 128g of ssd upgrade. The latter is 1920€/tb, while market prices are between 100€ and 300€/tb. A MB Air with 16g/512g is at 2000€, not counting Apple care, which is basically mandatory.
The containers don't run on macOS as such, they run on linux inside a VM. You could spin up that linux inside an x86_64 emulator such as qemu instead. The emulation would be slower, but it would be a serious problem for most use cases.
Apple would like to be less reliant on both Qualcomm (radio/modem) and Intel (CPU). Qualcomm is ahead on 5G and Intel was behind on 5G and 10nm. Apple needs TSMC's 5nm performance-per-watt for AR glasses and moving MacOS from x86 to Arm, and it carries keyboard debt into the post-Ive era.
- Qualcomm settlement bridges the 5G modem gap
- Intel modem acquihire rekindles dream of radio autonomy
- Did Intel modem deal delay "x86-Arm" mobile Mac transition?
- Can 16" Macbook + 2015 keyboard buy time for new designs?
> Today, I’ll contend that moving the macOs to an Axx processor is a fantasy, it’s too complicated and will never happen, at least no time soon. Mac users are wedded to x86 processors for the foreseeable future.
No wonder Be Inc went belly up. Apple had switched before from architecture.
To imagine that the transition from ppc to x86 is remotely comparable to a present-day x86-to-ARM transition is utterly absurd. Basically everything has changed: Hardware architectures, software architectures, OS design, etc. The amount of software out there for x86 macs is massive and end users are going to expect all that stuff to work (even if they'll grumble and accept it when Apple tells them that only 50% of it will work). You're also having to maintain compatibility with a massive set of third party hardware used by creatives, developers and enthusiasts - are you gonna run those drivers in an emulator? In kernel mode?
This would be an undertaking beyond any past one, because in the past computers were much simpler and Apple controlled a bigger part of the picture. Now they're shipping elaborate GPUs and hardware stacks that mash together chips from various vendors and there's a ton of third party hardware and software all working on top of it. If they don't keep most of that working users won't stand for it. Something like "we need to get AMD or NVIDIA to provide us a top-tier video driver for the Axx instruction set and convince them not to demand a king's ransom for it" would not have been an obstacle in the PPC days but it's unavoidable now unless they want to ship their own GPU too (which they could do, but again, compatibility)
He is right for right now, but the long game is the axx chip in most apple products. The MacPro won’t have one, if it’s still around then. ipad pros will get a dock much like today’s dell usb type c docks. you will connect up to your screen while at your desk. do what you need to do with the keyboard and mouse. Even have a vdi app for access to some corp windows only apps. Then unplug it for meetings with note taking etc. It sounds like an interesting future.
Every analysis I've read, including this one, assumes that Apple would start at the bottom rather than the top, as they have with every other CPU transition in the history of the company.
Is there something especially difficult about making a high-core-count CPU? The A-series is already crushing single-threaded performance and they're competitive with Intel desktop CPUs for some applications. Once you have a high-performance CPU with multi-core capabilities, is there any reason you can't just copy/paste cores?
It isn't especially difficult, but is is expensive. The big challenge with CPU design is to make up for the costs and for that you need numbers. In the 90ies, there were plenty of high-performance CPU designs, Alpha, Sparc, Mips. They were alle killed by two mechanisms, that Intel had a lead in fab technology and that Intel had much higher numbers, consequentely could gain back much higher investments as the competitors could afford.
The big question wouldn't be whether Apple could make a high-performance ARM, but if the financials play out for it. The Mac Pro numbers certainly wouldn't pay for such a CPU development. This could change of course, if they would bring back the Xserve in a modern incarnation or find some other uses for a high-performance ARM cpu.
But Apple does not need to be a good or efficient fab because TSMC and Global Foundries already exist. One of the cited reasons for ARM transition is specifically that Intel fab sucks and their CPUs have been lagging in performance per watt for years.
Performance per watt isn’t too important on the Mac Pro and the low volumes mean that developing a purpose-built chip would be extremely cost-prohibitive.
> The Mac Pro numbers certainly wouldn't pay for such a CPU development.
Again, though, what exactly does "development" mean here? Apple is already fabless. TSMC makes 32-core chips for AMD, so that scale is not new to them.
Well, the process of designing the chip to fabricate. This means paying a lot of electrical engineers to design the chip, probably paying for several iterations of lithography masks. Yes, TSMC certainly can make any chip of the desired size - they are already making enough of those - but the development is very expensive. The Mac Pro by itself is certainly not selling in high enough numbers to pay for such a development and considering how long Apple neglected the pro, I am somewhat dubious they would be willing to spend a huge amount of effort on it.
You will need to buy different set of tools to work and design for TSMC 's node. And while it is still from TSMC, the node for Desktop ( High Performance ) will be different to Phone / Tablet ( Low Power ). The cost of designing and testing a 5nm chips is roughly a billion. For iPhone that is spread out over 200M unit over its life time, or roughly $5 per chip excluding the actual cost of wafer.
For Mac you are looking at 25M unit per year, with TDP going from 15W to 250W. Apple could follow the same path as AMD and make one die to fit all. But in reality it is waste of engineering recourse and focus for little benefits.
> the node for Desktop ( High Performance ) will be different to Phone / Tablet ( Low Power )
The iPhone CPUs are already competitive with Intel chips. They've got incredible single-core performance, great memory bandwidth, and solid 64-bit and SIMD support. What specific feature of a "high performance" node would they require that they aren't using for iPhones now? They've been pushing the limits on allegedly "mobile"-oriented architectures for years.
Every time this comes up, I hear from the naysayers "Yes, Apple is great at mobile, but desktop is a different game", but then also "Apple's mobile chips are faster than many Intel desktop chips". The skeptics sound awfully similar to those who said "Apple is great at iPods but smartphones are a completely different game".
> The cost of designing and testing a 5nm chips is roughly a billion.
AFAICT, nobody in the world has a production 5nm chip yet. How can you put a price on something that nobody has successfully produced yet?
If I were Apple and I were confident that just throwing $1B at it would solve all the engineering problems, I would absolutely do it. $40 per Mac for the best CPU (and independence from Intel) sounds like a steal.
>The iPhone CPUs are already competitive with Intel chips.
In selected benchmarks. That is like saying the Qualcomm ARM Sever Chips are already competitive with Intel Chips. But it is pretty close, at least to the point for most consumer usage it doesn't matter.
>The iPhone CPUs are already competitive with Intel chips.
>What specific feature of a "high performance" node
At sub 10W ( The TDP for iPhone and iPad Pro ), not at 15W+ or 250W on the Mac Pro. You will require different tools, nodes, design guidelines operating at those TDP. No one is saying Apple cant do it, the question is if it make any financial sense doing it.
>AFAICT, nobody in the world has a production 5nm chip yet. How can you put a price on something that nobody has successfully produced yet?
The lead time for designing a chip / SoC with new node is roughly 2-3 years. Apple are already working ( finalising ) on 5nm SoC for next year, and they already knew the basic costing, ( excluding yield issues ). 3nm Costing Projection are also already known and are being worked on as we speak.
>If I were Apple and I were confident that just throwing $1B at it would solve all the engineering problems, I would absolutely do it. $40 per Mac for the best CPU (and independence from Intel) sounds like a steal.
Quoting myself excluding the actual cost of wafer. and yield. Apple would have made the jump if it really only cost them $40.
Every feature on the macOS sales page today is something that macOS shipped without for almost 20 years (and Mac OS for almost 20 years before that). That's what sales pages are for: they show off the new features.
Apple didn't withhold copy/paste out of spite. They were shipping a completely new platform, and you can't do everything in 1.0.
Frankly I’m surprised no one hasn’t built an external core/ram expansion docking station yet.
With OS support for Numa and a broker to schedule tasks on Big/Little cores all the pieces are there.
My guess is the halo effect around the cloud that put these devices back into the drawers; when you can sync files across two conventional independent devices, risk aversion wins.
But people might become aware and weary of routing everything through Facebook/Google so the value of cloudless mobility might tip the scale.
Plus, if AR/VR ever takes off, the hardware for high bandwidth wired or not PAN will become available
> Frankly I’m surprised no one hasn’t built an external core/ram expansion docking station yet
I imagine that the distance and all the connectivity required between the CPU and RAM sitting in an external docking station would destroy performance.
Suddenly disconnecting a dock today is a minor annoyance; worst case, you lose some file descriptors/mountpoints, and network connections. If you plug in processors, accidentally unplugging outright drops processes, and likely panics the kernel.
I have realised a long ago that Apple has for a long time now been working towards steering the Macintosh way from its open, Unix-based foundation.
I used to work with the old MacOS, long before OSX, and it was just the way they wanted it: a closed platform with a nice (for the time) user interface and giving Apple full control over what is being developed on it and how.
Dropping this platform for NeXTSTEP-based OS X was the right choice at the time it was made, when they needed a more advanced platform, and they needed it immediately. But the side effect was that it opened the Mac as a development platform, and Apple has since been trying to close it back, without affecting the existing users too hard.
The success of mobile/tablet market allowed them to build a new, closed platform, with precisely the features described above. After MacBook Air was first released I expected them to soon release an OSX-based tablet, but I have soon realised that it is not going to happen: instead of pushing Mac to the tablets, they will more and more push iOS to the Mac, until they have one uniform platform that will make it nearly impossible to develop on except if sanctioned by Apple.
Anyone could develop software for the Macintosh prior to the release of OS X. There were third-party tools to do so. You could extend the system software. You could distribute your software as you pleased. Depending upon how you look at it, it was actually more open than OS X development since modern releases of OS X lock certain things down in the name of security. Both are a far cry from iOS development.
Apple full control over what is being developed on it and how.
What “control” did Apple have on what was developed on classic MacOS? Famously, the only decent compiler during the early PPC years was made by a third party.
The UNIX foundation of NeXTSTEP was there only to help bring software into the platform and having a foot on workstation market.
NeXTSTEP application development in Objective-C, or for that matter Mac OS X in Objective-C/Swift, has anything to do with its UNIX based foundation.
The fact that many have bought Apple devices just as convenience to not have to bother setting up X properly or dealing with power configuration on Linux laptops is no longer relevant for Apple sales department.
They care about the developers that buy Apple devices to actually target Apple devices with their software, none of which uses frameworks that depend on the OS being UNIX.
- Apple is about to ship a new mac os that comes with improved support for running ios/ipados applications as well as using an ipad as an extension of your screen.
- The reason for the Mac Pro and the Mac Book Pro to exist is to enable professional artists, developers, and other people that need this kind of hardware to do their thing. Mostly they use tools not developed by Apple. E.g. Adobe is king for anything related to graphics. There are a bunch of third party 3D tools out there. Same for video. Same for Audio. Porting all of that to a new processor architecture will take a lot of time. It also has a severe risk of alienating developers and users during the transition. E.g. Adobe took their sweet time launching optimized versions of their tools for Intel ten years ago and meanwhile some users moved to Windows.
- Mac developers targeting intel macs (i.e. all existing macs), would probably want to develop and test on the machines they are actually targeting. Building cpu/gpu intensive x86 software on an ARM architecture with an X86 emulator makes no sense. You'd want the real thing. Xcode on arm would be a hard sell (except for IOS developers).
- A key money maker on IOS/IpadOS is the app store and the whole point of an ipad is that all the software comes from there. The whole point of Mac OS is that pro users get a lot of their software outside the app store.
- VR/AR is slowly starting to happen and Apple has mostly ignored this on mac os and only done a little bit on IOS (AR mostly). Any hardware that Apple is going to launch here is extremely unlikely to involve Intel or AMD hardware. In fact I suspect this is the primary reason Apple has not pushed hard on this on mac OS: they are looking to disrupt this space with an Apple product that consists 100% of Apple hardware and software and are not interested in filling the gap short term by depending on third party hardware.
So, a processor architecture change on mac os would be short term disruptive and risky. They have a coherent strategy for getting mac users to buy an ipad and double dip in revenue and it seems they are pushing a lot of users toward that platform. Also the pro market is comparatively small but extremely lucrative in its current form.
Apple product ecosystem has fast become a solution looking for problem combined with touchy feely hokus pokus aided by synthetic exclusivity and nonsensical differentiation.
Why the hell do we need the iPad to take over the PCs? Why the hell do we need ARM CPUs in Macs? Nobody is solving any real problems with those things - it's only purpose is to make Apple's margins and control even greater. And we as consumers are supposed to pay attention to the news outlets droning on about it for no benefit!
I think Apple will do well going back to the basics - simplifying the product lines, selling what matters and focusing on increasing the reach of its products to price conscious buyers. I don't think the market has a big enough appetite for more BS however well marketed it may be.
The trash can Mac Pro and butterfly keyboards are perfect examples of solutions looking for problems.
But I don't think making the iPad's operating system usable is an attempt to somehow position the devices as "PC killers." For use cases where a tablet is sufficient to produce a particular work product, a more usable OS only enhances its effectiveness. This has no bearing on the huge number of use cases where a tablet is insufficient.
I agree with the author that Apple would be barking up the wrong tree trying to switch the Macs to ARM processors. Macs are already comparatively a low-volume market and segmenting that market would not do them any favors in the face of increasing competition from Microsoft and Linux implementations.
> But I don't think making the iPad's operating system usable is an attempt to somehow position the devices as "PC killers.
I agree with that one - meaningful improvements to make the iPad a better choice for the users that can get away with using only the iPad as their primary device is a good thing. There's also a significant number of those users. But in practice, deciding what those capabilities are and how best to implement those are far more complicated questions - adding a mouse pointer is maybe a perfect illustration of the conundrum - should we, how should it look, what about the click targets, what about touch first, pencil etc.
I really hope there is a hybrid MBP with an Apple chip running mac OS and first party apps along with an intel chip that runs on demand for third party apps that need x86.
reply