
Not really; PPC was a reasonable choice for a high-performance architecture back then, and arguably a better fit for former 68k coders than x86. And a move was necessary, because the 68k was becoming a dead platform by then.



Gosh, I had to think back a moment to remember 68k to PPC. I wonder if that transition could be considered “botched” in that it happened at all, instead of going directly to x86. Hindsight aside, I recall it was considered a questionable choice at the time.

Not just the PPC -> Intel switch; before that they switched from 680x0/m68k/68k/68K to PPC as well, which certainly wasn't just a simple recompile.

There were 68k/PPC fat binaries too, though this was as much about compatibility with older systems as it was for performance (the emulated 68k system on PPC was quicker than any 68k hardware).

They've done it twice, not once. They moved from 68k to PPC in the early 90s, again with full emulation. In that case it was even more extreme, as much of the OS ran emulated in early PPC releases (yet it was still quicker than running on actual 68k hardware!).

> The 68k to ppc transition came with ppc being able to emulate 68k code faster than any 68k you could buy (in an interpreter, no JIT!!!)

This simply isn't true. It wasn't until the G3/300 was released that the 68k emulator could run 68k code at speeds comparable to a 40MHz 68040, and that came after years of PPC 601s and 603s running 68k system binaries at slower-than-Quadra speeds on 180 and 200MHz processors.
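For context on what "interpreter, no JIT" implies: every guest instruction pays for a full fetch/decode/execute trip through a software loop. Below is a minimal toy sketch in C of such a dispatch core, handling one real 68k opcode; it is purely illustrative, not Apple's actual emulator:

    #include <stdint.h>

    /* Toy 68k-style interpreter core: fetch, decode, execute, repeat.
       Every guest instruction costs a full trip through this loop,
       which is why raw interpretation is so much slower than native
       code or a JIT. */
    typedef struct {
        uint32_t d[8];   /* data registers D0-D7 */
        uint32_t a[8];   /* address registers A0-A7 */
        uint32_t pc;     /* program counter */
        uint8_t *mem;    /* guest memory */
    } Cpu;

    static uint16_t fetch16(Cpu *c) {
        /* 68k instruction words are big-endian */
        uint16_t op = (uint16_t)((c->mem[c->pc] << 8) | c->mem[c->pc + 1]);
        c->pc += 2;
        return op;
    }

    static void run(Cpu *c) {
        for (;;) {
            uint16_t op = fetch16(c);
            switch (op & 0xF000) {   /* crude top-level decode on bits 15-12 */
            case 0x7000:             /* MOVEQ #imm8,Dn: sign-extend to 32 bits */
                c->d[(op >> 9) & 7] = (uint32_t)(int32_t)(int8_t)(op & 0xFF);
                break;
            default:                 /* anything else halts this toy */
                return;
            }
        }
    }

    int main(void) {
        uint8_t prog[] = { 0x70, 0x05, 0x4E, 0x71 };  /* MOVEQ #5,D0 ; NOP (halts the toy) */
        Cpu c = { .mem = prog };
        run(&c);
        return c.d[0] == 5 ? 0 : 1;
    }

Later emulators, Apple's DR emulator and Connectix's Speed Doubler among them, got much of their speedup by replacing exactly this loop with dynamic recompilation.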


My memory is that the PPC -> x86 jump was due to PPC supply issues and the fact that the PowerPC 970 / G5 was too power hungry for laptops. I could be wrong, but I administered labs of mixed x86 / PPC Macs during the transition and the performance jump seemed just like the normal difference between successive generations.

Keep in mind that while you don’t see much 68k outside embedded these days, you still see POWER in supercomputer rankings, and it also appeared in game consoles.


Failure to try hard enough seems like the likely answer. x86 sold more and thus got more money poured into making it faster, and it had more companies competing for longer (three, then two after VIA effectively vanished).

I think this applied to PowerPC but not 68k, but one notably distinct characteristic of PPC was that it had inverted page tables. I wouldn't expect that to be a make-or-break architectural difference on its own; different ISAs seem not that consequential, but a totally different memory architecture makes the whole optimization effort very different and deeply changes how all the caches work.

There were some very neat properties of how inverted page tables worked, sympathetic to object-oriented designs, that seem mechanistically natural to express at the hardware level. But there's also a real possibility that various caches were harder to make run fast with inverted page tables. PPC's memory architecture was fairly unusual.
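For anyone unfamiliar with the term: a conventional page table is indexed by virtual page (one tree per process), while an inverted page table has one entry per physical frame and is searched by hashing the (address-space ID, virtual page) pair into one global table. A minimal textbook-style sketch in C follows; this is the generic structure, not the actual PPC hashed-page-table format:

    #include <stdint.h>
    #include <stddef.h>

    #define NFRAMES 4096   /* one table entry per physical page frame */

    /* Inverted page table: indexed by physical frame, searched by
       hashing the (address-space id, virtual page) pair. */
    typedef struct {
        uint32_t asid;    /* address-space (process) id */
        uint32_t vpn;     /* virtual page number mapped into this frame */
        int      valid;
        uint32_t next;    /* next frame in this hash chain, or UINT32_MAX */
    } IptEntry;

    static IptEntry ipt[NFRAMES];
    static uint32_t hash_anchor[NFRAMES];  /* hash bucket -> first frame in chain */

    static uint32_t hash_vpn(uint32_t asid, uint32_t vpn) {
        return (asid ^ (vpn * 2654435761u)) % NFRAMES;  /* any decent mix works */
    }

    static void ipt_init(void) {
        for (size_t i = 0; i < NFRAMES; i++) {
            ipt[i].valid = 0;
            ipt[i].next = UINT32_MAX;
            hash_anchor[i] = UINT32_MAX;   /* all buckets start empty */
        }
    }

    /* Translate (asid, vpn) -> physical frame. The frame number is simply
       the index of the matching entry -- that's the "inverted" part. */
    static long translate(uint32_t asid, uint32_t vpn) {
        uint32_t f = hash_anchor[hash_vpn(asid, vpn)];
        while (f != UINT32_MAX) {
            if (ipt[f].valid && ipt[f].asid == asid && ipt[f].vpn == vpn)
                return (long)f;            /* hit: f is the physical frame */
            f = ipt[f].next;               /* collision: walk the chain */
        }
        return -1;                         /* miss: the OS must handle the fault */
    }

    int main(void) {
        ipt_init();
        /* map (asid=1, vpn=42) into physical frame 7 */
        ipt[7] = (IptEntry){ .asid = 1, .vpn = 42, .valid = 1, .next = UINT32_MAX };
        hash_anchor[hash_vpn(1, 42)] = 7;
        return translate(1, 42) == 7 ? 0 : 1;
    }

The frame number falls out of where the entry sits in the table, which is the elegant part; the cost is that every TLB miss becomes a hash-and-chain walk, which is plausibly where the "harder to make the caches run fast" worry comes from.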


There is just the opposite precedent from the 68K to PPC transition. My 6100/60 (PPC 601 at 60MHz) was about the speed of a 25MHz 68030 Mac when running emulated software. It was actually slower than my upgraded 40MHz 68030 Mac when running emulated code. My Mac was less than half the speed of top-end 68K Macs.

The PPC Macs couldn’t emulate a 68K floating point unit at all.


The 68k to ppc transition came with ppc being able to emulate 68k code faster than any 68k you could buy (in an interpreter, no JIT!!!), and the native ppc code was just ungodly fast comparatively.

Neither part of that statement is true. I had an LC II with a 40MHz 68030 card before upgrading to a 6100/60. The 6100/60 (a 60MHz 601 with a half-speed bus) was much slower running 68K software than my LC II. It was about the same speed once I bought Speed Doubler.

Also, between the slower bus, the slow shared graphics memory and System 7 on the PPC being emulated, it felt much slower than a decent 486-DX2/66 or a Pentium-60.


I doubt there are benchmarks from ye olde days, but are you sure? I thought the first PPC machines managed to outrun 68040 machines, even in emulation. I fully admit my recollection could be wrong there.

But regardless, that entire pre-G3 era was the nadir of the platform. Those were dark days.


I was sort of surprised that after the PPC took off, nobody tried to do something meaningful with the 68k architecture in the immediate aftermath: license it from Motorola and make pin-compatible, clock-doubled 68060s, or eventually apply modern-x86-style "it's RISC everywhere after the decoder" designs.

There were probably a few years when there would have been commercial appeal: getting a few more years of life out of all those workstation platforms (pre-SPARC Suns, pre-PA-RISC HPs, etc.) that ran on 68k-family chips, plus anyone with a Mac whose software performance was constrained by the PPC migration, as well as the Amiga enthusiasts.

When Transmeta showed itself to the world, I figured that's where their business case lived. (68k as one of many possible small-run high-margin markets)
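Transmeta's "code morphing" was essentially dynamic binary translation: decode a block of guest code once, emit host code for it, cache the result, and reuse the cached version on every later visit. A toy sketch in C of that translate-once-and-cache idea (all names are hypothetical; the real thing also handled block chaining, invalidation, and self-modifying code):

    #include <stdint.h>

    #define TCACHE_SLOTS 1024

    /* Map guest PC -> already-translated native code. The win over a pure
       interpreter: the decode/translate cost is paid once per block, not
       once per execution of the block. */
    typedef void (*NativeBlock)(void);

    static struct {
        uint32_t    guest_pc;
        NativeBlock code;
    } tcache[TCACHE_SLOTS];

    static void stub_block(void) {
        /* a real translator would have emitted host instructions here */
    }

    static NativeBlock translate_block(uint32_t guest_pc) {
        (void)guest_pc;   /* stand-in: decode the guest block at guest_pc
                             and emit equivalent host code */
        return stub_block;
    }

    static NativeBlock lookup_or_translate(uint32_t guest_pc) {
        unsigned slot = (guest_pc >> 2) % TCACHE_SLOTS;  /* direct-mapped */
        if (tcache[slot].code && tcache[slot].guest_pc == guest_pc)
            return tcache[slot].code;                /* hot path: reuse cached code */
        NativeBlock nb = translate_block(guest_pc);  /* cold path: translate once */
        tcache[slot].guest_pc = guest_pc;
        tcache[slot].code = nb;
        return nb;
    }

    int main(void) {
        lookup_or_translate(0x1000)();   /* first visit: translate, then run */
        lookup_or_translate(0x1000)();   /* second visit: straight from cache */
        return 0;
    }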


Ahh, I miss the 68K architecture. It made sense, unlike x86, and it was fun to use, unlike PowerPC.

People thought Apple might do the same thing when converting from 68K to PPC, and from PPC to x86. Apple would never do it.

However, there were PPC upgrade cards for 68K Macs, though you had to reboot to switch between 68K and PPC.


Horrendous mismanagement at Commodore and Atari respectively took the Amiga and the ST down; the writing was on the wall for those platforms long before the 68k architecture ever started to plateau.

Even Apple's transition to PPC arguably had more to do with everyone (Motorola included) hopping on the RISC hype train than with some kind of fundamental performance limitations inherent to the 68k architecture.


The way I remember it, they couldn't get a PPC of the then-current generation suitable for laptops: too power hungry, too hot. IBM weren't interested in supplying such a part, so Apple were really left with no choice. It was a similar story with the move from 68k to PPC: the 68060 wasn't what they needed.

Maybe Motorola could have pulled off what Intel did with the Pentium: paper over the aging CISC with RISC internals.

They did; that's what the MC68060 was. Too little, too late, and (as you say) corporate attention was directed at PPC.


I've heard later versions of CodeWarrior had crappier 68k support, not because it was strictly deprecated, but just from lack of interest and maintenance as the OS developed and PPC happened.

> This is coming from a guy that got burnt really bad on the 68-PPC conversion a while back (had basically a brick within 6mo of buying a new 68).

I assume you mean the 68000-series to PowerPC transition? I lived through that, and thought it was handled even better than the recent PPC -> Intel transition. Programs were compiled as fat binaries for years and years because of the huge installed base of 68k machines. I don't see how you could have had a "brick" after just 6 months.
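For anyone who never saw one: a classic 68k/PPC fat binary shipped the 68k code as CODE resources in the resource fork and the PPC code as a PEF fragment in the data fork, and the system ran whichever suited the machine. Conceptually the launch decision looks like the sketch below, written with hypothetical stand-in names rather than the real Mac OS Process Manager API:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the real Mac OS loader internals. */
    typedef enum { CPU_68K, CPU_PPC } CpuKind;

    static CpuKind host_cpu(void) { return CPU_PPC; }   /* pretend: a PPC Mac */

    static bool has_pef_fragment(const char *app) {     /* PPC code present? */
        (void)app;
        return true;                                    /* pretend: fat binary */
    }

    static void run_pef(const char *app) {
        printf("%s: running native PPC code\n", app);
    }

    static void run_code_resources(const char *app) {
        printf("%s: running 68k CODE resources (emulated on PPC)\n", app);
    }

    /* A fat binary carries both code types; prefer native when available. */
    static void launch(const char *app) {
        if (host_cpu() == CPU_PPC && has_pef_fragment(app))
            run_pef(app);
        else
            run_code_resources(app);   /* old 68k Macs just ignore the PPC part */
    }

    int main(void) {
        launch("SimpleText");
        return 0;
    }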


I'm not an expert, but as far as I understand, what killed 68k (and many '80s CISCs, like the VAX) was the difficulty of producing an OoO implementation given the very complex instruction semantics, in particular the memory-indirect addressing modes (see the sketch below).

It wouldn't be a problem today, as designers have transistors to spare, but it was in the early '90s, when the high-performance market was taken over by the simpler OoO RISCs and by x86 [1], for which Intel managed, against expectations, to build a competitive OoO implementation in the form of the Pentium Pro.

[1] which compared to other CISCs is much simpler.
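To make the addressing-mode point concrete: a 68020 memory-indirect operand such as ([d16,An],Xn,d8) needs a load just to finish computing the effective address, so a single instruction cracks into a chain of dependent micro-ops that an OoO scheduler has to rename and track. A rough illustration in C; the micro-op encoding here is invented for the example:

    #include <stdint.h>
    #include <stdio.h>

    /* One 68020 instruction with a memory-indirect operand, e.g.
           MOVE.L ([d16,A0],D1.L,d8),D2
       cracks into a chain of dependent micro-ops. Note the second load
       cannot even begin until the first load returns: the effective
       address itself depends on memory.
       Register numbering: D0-D7 = r0-r7, A0-A7 = r8-r15, temps = r100+. */
    typedef enum { UOP_ADD, UOP_LOAD, UOP_MOVE } UopKind;

    typedef struct {
        UopKind kind;
        int     dst;          /* destination register */
        int     src1, src2;   /* source registers, -1 if unused */
        int32_t imm;
    } Uop;

    static const Uop cracked[] = {
        { UOP_ADD,  100, /*A0*/8, -1, 0x20 },  /* t100 = A0 + d16            */
        { UOP_LOAD, 101, 100,  -1, 0 },        /* t101 = mem[t100]  (load 1) */
        { UOP_ADD,  102, 101, /*D1*/1, 4 },    /* t102 = t101 + D1 + d8      */
        { UOP_LOAD, 103, 102,  -1, 0 },        /* t103 = mem[t102]  (load 2,
                                                  depends on load 1)         */
        { UOP_MOVE, /*D2*/2, 103, -1, 0 },     /* D2 = t103                  */
    };

    int main(void) {
        static const char *names[] = { "ADD ", "LOAD", "MOVE" };
        for (size_t i = 0; i < sizeof cracked / sizeof cracked[0]; i++)
            printf("uop%zu: %s r%d <- r%d, r%d, imm=%d\n", i,
                   names[cracked[i].kind], cracked[i].dst,
                   cracked[i].src1, cracked[i].src2, cracked[i].imm);
        return 0;
    }

A fixed-length RISC load/store ISA never generates that second, address-forming load inside a single instruction, which is part of why OoO RISCs were so much easier to build at early-'90s transistor budgets.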

