
The other issue is that the older nodes are more reliable. They've already matured to the point where manufacturers understand how to make everything work right. On top of that, each process shrink introduces new challenges that can cause components to fail. The most modern nodes seem to ship with things somewhat broken out of the gate, with designers building in mitigations for that. They don't last as long.

The older nodes don't seem obsolete if component reliability is a concern. All of my design concepts take their potential into account. They're quite limited in performance, storage size, and energy, though. There's a tradeoff. Lots of companies want a cheap, reliable, simple CPU/MCU. That's where the oldest nodes shine. That said, the newest nodes are tiny enough that one might build a 2-out-of-3 setup with extra error correction, like Rockwell-Collins' AAMP7G CPU. Might still be pretty cheap... per unit (not development cost)... on a 28nm CMOS or SOI process. Haven't seen an attempt.
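A 2-out-of-3 setup is essentially triple modular redundancy: run three copies of the logic and majority-vote the outputs, so any single faulty copy gets outvoted. A minimal sketch of the voting step in C (hypothetical function names, not taken from any real design):

  #include <stdint.h>

  /* Bitwise majority vote over three redundant results: any single
     faulty copy is outvoted by the other two. */
  static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
  {
      return (a & b) | (a & c) | (b & c);
  }

  /* Flag any disagreement so the faulty lane can be logged or repaired. */
  static int tmr_disagree(uint32_t a, uint32_t b, uint32_t c)
  {
      return a != b || a != c;
  }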




Under most circumstances I can't imagine a reason to bother powering on a 10 year old system, short of nostalgia. The costs of running it will quickly eclipse the costs of buying something newer and more efficient.

Of course, there are plenty of edge cases, like needing something bare metal that has a particular sort of software compatibility or IO requirements. Some industrial computers still run 486 chips with ISA buses for this reason. These sorts of systems will have been engineered with longevity in mind from the outset though.

Other edge case, just for fun: embedded-style systems like the Raspberry Pi. These are tiny, low power, and can be used for specialty purposes for ages. They are also engineered on nodes and set up in a manner that will likely leave plenty of them running successfully in 10-20 years' time as it is.

It is really only since we've entered the era below TSMC's 7nm node that longevity has become much of a concern at all. It would take a whole essay even to TL;DR why that only becomes very relevant in the period when those nodes start to become known as "mature", and this is already enough of a tangent, so I'll just leave this breadcrumb of a presentation on the lifecycle of silicon process nodes:

  https://www.youtube.com/watch?v=YJrOuBkYCMQ

Would it not make sense to upgrade to the newest node they can afford, so as to get a longer lifespan out of their next node? It feels like picking the next-oldest node leaves them having to solve this issue again in the near future...

Exactly. Safety-critical applications by nature have to be risk-averse, and that means anything new, anything that hasn't been thoroughly tested and backed by years of experience, is an unacceptable risk.

Older processors constructed on older large-size processes and often operating at higher voltages and slower clocks are more robust because they have a smaller number of transistors, which means a simpler more predictable model of error propagation; larger features mean lower current densities, increasing resistance to electromigration and decreasing the chances of defects from natural process variation; higher supply voltages reduce the effects of noise; slower clock rates allow more time for noise-induced glitches to settle instead of propagating.
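On the electromigration point specifically, the usual rule-of-thumb model is Black's equation, where median lifetime falls off steeply with current density J and temperature T (A is an empirical process constant, E_a the activation energy, k Boltzmann's constant, and n is typically around 1-2):

  \mathrm{MTTF} = A \, J^{-n} \exp\!\left(\frac{E_a}{k T}\right)

Larger features carrying the same current mean a lower J, hence a longer expected lifetime.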

One of my favourite examples of this is the CDP1802 - an 8-bit CPU from the mid 70s, which is still in production and use today in aerospace applications.


This clearly spells the beginning of the end for transistor scaling. If every node makes the reliability worse, eventually the lifetime gets too short and you're through.

The interesting question is going to be how much faster the unreliable parts are than the more reliable nodes once we're at the limit. If they're only 30% faster then in many cases that's a small price to pay for reliability. If they're 30 times faster then what you want is an increase in modularity and standardization, so that the chip that wears out every year can be a cheap fungible commodity part that you can replace without having to buy a new chassis, screen or battery.


It could be for reliability reasons as well. Something that has been running for decades in a low-maintenance environment can be expected to continue doing so, unlike modern semiconductors, which are iterated on and manufactured at such high volume that longevity is questionable.

I've been burned by semiconductors before due to bad design/manufacturing, e.g. Pentium-D, SD 810, etc.

Also, with the current semiconductor shortage/supply chain issues/inflation, anything that re-uses or re-purposes old compute hardware is valuable to many.


From what I understand (having heard an EE at work talking about this), at bleeding-edge nodes, electromigration is no longer the biggest concern; the reliability of the individual transistors is. They apparently stop transisting eventually. I’ve heard redundancy is being built into chips to accommodate this.

> There's essentially nothing that can "wear out" in a CPU

There's electromigration, which is harder and harder to mitigate at smaller process nodes.


The automotive industry is still using 90nm chips, which isn't even the generation before the current one. They have long production runs and require stability, and they also prefer to use standardized parts across many production runs.

Defense also uses older chips. Don't ask what kind of chips are in Amraam missiles or the F-35 -- although the F-35 is getting a technology refresh now (after a 14 year production run).

Only in bleeding-edge consumer devices does it make sense to keep changing chips. In other systems, production runs can last 20 years and the life of the asset can be 40 years or more, and you want the same spare parts available throughout the entire expected life of all assets produced. And when you look at fixed assets, such as thermal power stations, you are looking at even longer time horizons.


The newer ones could be worse as they move to smaller process nodes and store more bits per cell. That said, the controller logic is improving, and larger capacities mean more room for wear leveling, so it shouldn't be too bad.
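The "more room for wear leveling" point is roughly this: with more spare blocks, the controller can always steer writes toward the least-worn free block. A minimal sketch of that idea in C (purely illustrative, not any real controller's algorithm):

  #include <stddef.h>
  #include <stdint.h>

  /* Pick the free block with the lowest erase count so that writes
     spread wear evenly; a larger pool means more slack per block. */
  size_t pick_least_worn(const uint32_t erase_count[], const uint8_t is_free[],
                         size_t n_blocks)
  {
      size_t best = n_blocks;               /* n_blocks means "none found" */
      for (size_t i = 0; i < n_blocks; i++) {
          if (is_free[i] && (best == n_blocks || erase_count[i] < erase_count[best]))
              best = i;
      }
      return best;
  }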

I think it is more fair to say that, being old and well-used, the engineering behind old-fashioned hardware is more mature and its failure modes are better understood. Embedded electronic systems of various types will reach that point as they are more widely deployed and used for longer periods.

BTW, I have been through that exact failure scenario before myself (in an '81 Monte Carlo).


Well, I agree. Note that I didn't criticize that fact here. However, I very much dislike the disposability of current products, because this is a harmful way of ensuring repeat purchases through perpetuating waste.

> don't be surprised that the components that go into them aren't designed to last very long

It's not surprising, but it's also not obvious that discrete electrical components like ICs can degrade with use over a span of years. That's why I mentioned it.

> If the customer is willing to pay for a CPU that will run reliably for 50 years, someone will produce it.

It is a chicken-and-egg problem. No one will pay for such a CPU because it costs a lot, and it costs a lot because not enough people will pay for it for competition and economies of scale to kick in...


There are many ways to do planned obsolescence. I'm sure there is a way to manufacture the chips such that they simply degrade in a few years.

Planned obsolescence through major security bugs doesn't sound very smart to me.

I don't get the vibe that Intel is enjoying this publicity or the presumed replacement of those chips. My understanding is that AMD is quite competitive today in the data center.


"to my knowledge"

CPU technology is quite arcane and very high-level; there are so many patents, so much IP money, and a lot of secrecy involved, since CPU tech is strategically important for geopolitical power. Do you work as an engineer at Intel, ARM, or AMD? On chip design?

> How do you tell if the CPU is on the margin of failing

It's not about failing, it's about error detection. Redundancy is a form of error detection: if several gates disagree on a result, they have to redo what they were working on. That's one simple form of error detection.
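A minimal sketch of that compare-and-retry idea in C (purely illustrative; real lockstep hardware does this in silicon, not software):

  #include <stdint.h>

  /* Run the same computation on two redundant units and retry until they
     agree: disagreement is a detected error, agreement is accepted. */
  uint32_t checked_compute(uint32_t (*unit_a)(uint32_t),
                           uint32_t (*unit_b)(uint32_t),
                           uint32_t input)
  {
      for (;;) {
          uint32_t ra = unit_a(input);
          uint32_t rb = unit_b(input);
          if (ra == rb)
              return ra;       /* results match: no error detected */
          /* results differ: an error occurred somewhere, start again */
      }
  }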

CPUs never really fail; they just slow down, because gates generate more and more errors, requiring recalculation until the detected error is finally corrected. An aged chip will simply produce more and more errors, and that will slow it down. Which is the reason why old chips are slower, independently of software.

A CPU that is very old will be very slow, or will just crash the computer again and again, so hardware people toss the whole thing, since they're not really trained or taught to diagnose whether it's the CPU, the RAM, the capacitors, the GPU, the motherboard, etc. In general they will tell their customers "it's not compatible with new software anymore". In the end, most CPUs get tossed out anyway.

It's also a matter of planned obsolescence. Maintaining sales is vital, so having a product with a limited lifespan is important if manufacturers want to hold the market.


Yeah, there's a difference in available public resources between a chip nobody knew anything about just 6 months ago, which is only now getting upstreamed to Linux in the current release cycle, and the BCM2835 from the Rpi Zero, which has been around for 11 years.

Are you serious? :) At least compare with something like Allwinner H3/A10/... which had similar lifetime.


Bigger and more crudely built components probably play a large part. A microprocessor from the 70s or 80s is made on much larger nodes than today's chips; traces and solder points are bigger, PCBs are thicker, and almost all components, save resistors and wires, were either bigger or built differently.

They have more material to play with while they decay compared to modern devices.


More compact chips will have a higher failure rate. Not a great idea for servers.

Given that 10+ year old boards are still perfectly serviceable for many applications and are still in widespread use all over the globe they may be End Of Life for Intel but that does not mean they are magically and suddenly all gone. The degree to which Intel stuff has proven to be insecure over the last couple of years would warrant at least some consideration for people who are still running that older hardware, especially if it does not cost Intel much to keep that hardware supported.

Also, the chip obviously doesn't need to wear out to become less performant over time. Reliability is perhaps the wrong word; I think quality would suit better.

But in any case, if you have one chip that decreases in performance 10% per year due to security mitigations and another chip that remains consistently performant, it is something to consider when shopping.
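For a sense of scale, a hypothetical 10% annual loss compounds, so after five years such a chip would be at roughly 59% of its original performance:

  0.9^{5} \approx 0.59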


I think electromigration will still kill the chip earlier than individual device failures will.

Intel is said to have switched to cobalt wiring in its latest node, and seems to be paying dearly for that. TSMC and the others seem to have gone the conventional road and continued to perfect the salicide for smaller nodes without any issues.

