
This was a very interesting point. It sounds like there are some serious architectural limitations in Windows, which makes me suspect the same might be true of the NT kernel, and that MS might not be interested in doing heavy refactoring of it.

I'm not a frequent Windows user, and not a Windows dev at all. Does anyone know what consequences MS's decision might have, if this hypothesis is true?



Why would MSFT throw away decades of development that have gone into the NT kernel? NT is an advanced and, to my eye, very elegant kernel. Win32 is much less so, but NT isn't Win32.
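
To make that distinction concrete, here's a minimal sketch (plain C, assuming a Windows build environment) of how Win32 sits on top of the NT native API: the familiar kernel32.dll calls ultimately funnel into Nt* functions exported by ntdll.dll, which are the user-mode stubs for NT kernel system calls.

    /* Sketch: resolve an NT native API entry point directly from ntdll.dll.
       Win32's CloseHandle() in kernel32.dll ends up in NtClose() here. */
    #include <windows.h>
    #include <stdio.h>

    typedef LONG (NTAPI *NtClose_t)(HANDLE); /* NTSTATUS is LONG-sized */

    int main(void)
    {
        HMODULE ntdll = GetModuleHandleW(L"ntdll.dll"); /* always loaded */
        NtClose_t pNtClose =
            ntdll ? (NtClose_t)GetProcAddress(ntdll, "NtClose") : NULL;
        printf("NtClose (NT native API) lives at %p, beneath Win32's CloseHandle\n",
               (void *)pNtClose);
        return 0;
    }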

Their explanation was that Windows could not be enhanced but rather should be slimmed down - the APIs that the NT kernel provides can't be efficiently worked around without breaking all kinds of existing Windows software.

They changed to a better kernel - a really old and battle-tested one. Switching kernels is an exceptional reason to break compatibility, but it's hard to argue that they shouldn't have made the decision. They certainly won't switch kernels again soon, because there is no better Windows kernel than NT.

The problems with the NT kernel are nothing new. When I suggested for Windows to use a BSD kernel to overcome its current limitations, I got lots of disagreement. Microsoft developers seem to be happy with the current state and have no vision of a totally new architecture.

https://news.ycombinator.com/item?id=2841934


The problem is its scale. Windows likely has ~100M lines of code, which would need a full legal audit before open-sourcing. Maybe open-sourcing just the NT kernel is a more realistic and useful goal.

Eh, not really. Windows NT has always been designed for multiple architectures. Well-written applications (and to some extent drivers) just need recompiling for the new architecture.

Changing to a completely different kernel would be something else.
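
For what it's worth, a rough sketch of what "just recompile" looks like in practice (assuming MSVC and its predefined architecture macros): well-written Win32 code stays architecture-neutral, and the rare CPU-specific path is isolated behind a compile-time check.

    /* Architecture-neutral C; only this one spot cares which CPU it targets. */
    #include <stdio.h>

    int main(void)
    {
    #if defined(_M_ARM64)
        puts("built for ARM64");
    #elif defined(_M_X64)
        puts("built for x64");
    #elif defined(_M_IX86)
        puts("built for x86");
    #else
        puts("built for some other architecture");
    #endif
        return 0;
    }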


This is the first thing I thought when I saw that comment. The Windows NT kernel is one of the best and most stable ever written. Why should Microsoft throw away all that investment?

Funny thing is that NT is a microkernel (to some degree at least), but Microsoft keeps moving things in and out of kernel space with each release as they weigh performance against stability.

I would be shocked if Microsoft abandoned the NT kernel, but I wouldn't be too surprised if they open sourced a lot of the Win32 runtime and turned it into some community-maintained thing. They've already done this with .NET.

I've been given the impression that NT's superior kernel architecture is why they are doing this. But they also want other people to care, so they try to get Windows apps working on it too.

The NT kernel is surprisingly small and well-factored to begin with - it is a lot closer to a 'pure' philosophy (e.g. a microkernel) than something like Linux.

If you have a problem with Windows being overcomplicated or in need of refactoring, it is almost certainly something to do with not-the-kernel.

If you look at something like the Linux kernel, it's actually much larger than Windows. It needs to carry every device driver known to man (except that one WiFi/GPU/Ethernet/Bluetooth driver you need) because the internal architecture is not cleanly defined, and kernel changes also involve fixing all the drivers they break.
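
To illustrate the coupling (a hedged sketch, not a claim about any particular driver): even the smallest out-of-tree Linux module is built against one specific kernel's private headers, because there is no stable in-kernel driver interface - which is why driver fixes ride along with kernel changes.

    /* hello.c - hypothetical minimal module, built with a Kbuild makefile
       containing "obj-m += hello.o" against /lib/modules/$(uname -r)/build.
       Any internal API it touches is free to change between kernel releases. */
    #include <linux/module.h>
    #include <linux/init.h>

    static int __init hello_init(void)
    {
        pr_info("hello: loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");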


There is a huge assertion in there - that Windows isn't as adaptable to multiple different environments. I know that empirically this looks to be the case - we don't see the NT kernel running on gadgets - but is there actually any true technical reason stopping this? By that, I mean is there any barrier that a team of Microsoft engineers couldn't overcome in say a year?

It would be interesting to know what those limitations are if that is the case - there would no doubt be lessons to learn for all of us.


Definitely not. I heard the NT kernel codebase is actually beautiful to work with.

There's definitely code rot in Windows, but that's probably due to compatibility constraints with third-party code rather than actual bad design.


I think they would keep the NT kernel. The engineering that went into that was pretty top-notch. The rest of the Windows legacy, yeah, that gets dumped.

That discussion and others have led me to the conclusion that the NT kernel has an excellent design and a subpar implementation (since only Microsoft's team can work on it), whereas Linux has a crappy design and an excellent implementation (being constantly refined and iterated on by anyone). Kind of makes you wonder what could be possible if Microsoft ever open-sourced it.

Windows started on the DOS kernel, so switching to a new kernel wouldn't dismantle the whole product.

Although I don't see a reason why NT should be inferior to Linux from Microsoft's perspective.


While the NT kernel was designed with support for multiple architectures in mind (MS calls this layer the HAL), it doesn't really matter in the end. And I'm not saying that MS engineers couldn't release the NT kernel for other architectures in a reasonable time. The real problem here is the userspace. For political/business/culture reasons, almost nobody takes other architectures into account when developing their applications. Most vendors struggle even when porting programs to a 64-bit version of the same thing. So why would anybody sane port the NT kernel to other architectures when all the userspace programs are practically unportable? Compare that with the world of Unix clones and its culture: there, portability is not just theoretically possible, it's part of the culture itself.
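
A concrete example of the 64-bit porting pain mentioned above (a minimal sketch in plain C; the names are made up): code that stuffs a pointer into a 32-bit integer works on x86 but silently truncates on x64/ARM64, and the portable fix is trivial yet rarely applied until a port forces the issue.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int value = 42;

        /* Non-portable habit: on a 64-bit MSVC build this truncates,
           because unsigned long is still 32 bits there.
        unsigned long packed_bad = (unsigned long)&value;
        */

        /* Portable: uintptr_t (or Win32's DWORD_PTR) is guaranteed to hold a pointer. */
        uintptr_t packed = (uintptr_t)&value;
        printf("pointer round-trips intact: %d\n", *(int *)packed);
        return 0;
    }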

Why would they do that? NT is a very good kernel design.
