I installed Windows 10 the other week - it kinda blew my mind how poor the install experience was.
The ISO contained files larger than 4GB, which breaks FAT32 - which I'm sure plenty of people are still using on flash drives. So I had to use an MS command-line tool to split the WIM files manually and edit the install files. Why doesn't the installer just use smaller archive files?
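For anyone wanting to check this up front, here's a rough Python sketch that just walks a directory tree (say, the extracted ISO contents) and flags anything over FAT32's 4 GiB minus 1 byte per-file ceiling. The path at the bottom is a made-up example, and this doesn't replace the WIM-splitting tool itself:

    import os

    FAT32_MAX = 2**32 - 1  # FAT32 stores file size in 32 bits: 4 GiB - 1 byte max

    def files_too_big_for_fat32(root):
        """Yield (path, size) for every file that can't be copied to FAT32."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                size = os.path.getsize(path)
                if size > FAT32_MAX:
                    yield path, size

    # Hypothetical location of extracted ISO contents:
    for path, size in files_too_big_for_fat32(r"D:\win10-iso"):
        print(f"{size / 2**30:.2f} GiB  {path}")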
On my Win10 system, that directory is ~600MB, ~300 files and 17 directories. I'm not advocating for this system, I just don't think it's that much of a problem.
"All I can say is, this article is the tip of the ice berg on Windows I/O weirdness."
Well, then, is there a more detailed summary than this one that's accessible?
This one looks very useful and I'll use it, but on the point of wanting more info: it'd be nice to know how these limits differ across Windows versions.
For example, I've never been sure about the path-length limit of 260 and the filename-length limit of 255. I seem to recall these were a little different in earlier versions, the filename limit being 254 for instance. Can anyone clear that up for me?
Incidentally, I hit the 255/260 limit regularly; it's a damn nuisance when copying stops because the path is, say, 296 or 320 characters, or more.
... wow. Instead of hashing the file contents, it just checked file size? Who came up with this algorithm?! There are plenty of applications that use large files of fixed size, initialized at install time - for performance reasons, or simply so that if you're going to run out of space you do so during interactive installation rather than silently breaking later on.
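For contrast, a minimal sketch of the usual approach: use file size only as a cheap pre-filter, then hash the contents before calling two files duplicates. This is a generic illustration, not whatever Windows actually does:

    import hashlib
    import os
    from collections import defaultdict

    def find_duplicates(paths):
        """Group files by size first (cheap), then confirm with a content hash."""
        by_size = defaultdict(list)
        for path in paths:
            by_size[os.path.getsize(path)].append(path)

        dupes = defaultdict(list)
        for size, group in by_size.items():
            if len(group) < 2:
                continue  # a unique size can't have a duplicate
            for path in group:
                h = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                dupes[h.hexdigest()].append(path)
        return {digest: files for digest, files in dupes.items() if len(files) > 1}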
Just naively translating the registry into an NTFS directory structure would require 1KB per value, simply because that's the size of a file record (NTFS already has an optimization to store small files directly in the file record if the data fits alongside all the attributes and ACLs).
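Back-of-envelope numbers to make that concrete; the value count here is a made-up illustration, not a measurement of any real registry:

    FILE_RECORD_BYTES = 1024       # NTFS file record size cited above
    registry_values = 2_000_000    # hypothetical number of registry values
    overhead = registry_values * FILE_RECORD_BYTES
    print(f"{overhead / 2**30:.1f} GiB")  # ~1.9 GiB in file records alone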
Also, the Windows filesystem driver stack is not very efficient at accessing many small files. It's built for flexibility and security, not speed.
It's probably not really 20GB. Most of what's in WinSxS is links to files in other directories, often multiple links to the same file, and the Windows shell "Properties" dialog will include every "copy" when calculating total size.
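One way to sanity-check that is to sum sizes while counting each underlying file only once, no matter how many directory entries point at it. A rough sketch, assuming Python 3.5+ on Windows so os.stat() reports a real file ID in st_ino; scanning WinSxS will likely need admin rights:

    import os

    def unique_size(root):
        """Sum file sizes, counting hard-linked files only once."""
        seen = set()
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    st = os.stat(os.path.join(dirpath, name))
                except OSError:
                    continue  # skip locked or permission-denied files
                key = (st.st_dev, st.st_ino)  # volume + file ID identifies the data
                if key not in seen:
                    seen.add(key)
                    total += st.st_size
        return total

    print(unique_size(r"C:\Windows\WinSxS") / 2**30, "GiB")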
If you're really that sensitive to size, you may want to try 7z. I can usually get archives a few percent smaller than xz, with faster decompression to boot. Of course, then you might need to install a 7z library, which could be an issue.
I edited (more than once) an assembly file for an AI that was almost 100MB in size, in Notepad. It took a few minutes to load, but then worked just fine. That was almost ten years ago. I can't comprehend how something today can't open a 2MB file.
This has very little to do with Windows itself. The issue is Explorer, which tries to estimate sizes and move the files to the Recycle Bin. It also likely checks all the permissions while counting.
You can generally get around Windows' 255-character absolute path length limit by using UNC paths. We had that problem at my last company, where the system would let you write files into a path but not delete them.
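For reference, a minimal sketch of that workaround in Python; strictly speaking it's the \\?\ extended-length prefix rather than a true UNC path, and the path in the example is hypothetical:

    import os

    def remove_long(path):
        """Delete a file whose absolute path exceeds the 260-char MAX_PATH limit."""
        abs_path = os.path.abspath(path)      # \\?\ requires an absolute path
        if not abs_path.startswith("\\\\?\\"):
            abs_path = "\\\\?\\" + abs_path   # tells Win32 to skip MAX_PATH parsing
        os.remove(abs_path)

    # Hypothetical path; the same call works once the full path passes 260 chars.
    remove_long(r"C:\build\output\deeply\nested\generated_file.txt")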
There isn't a modern reason for the limit either. NTFS supports longer path names, as does FAT if I remember correctly. Rather irritating.
Having lots of small files (particularly in a single directory) is a known failure mode of many OS filesystems. I remember putting a million files into MogileFS and finding that filesystem operations basically did not complete in any reasonable length of time.
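One common mitigation (a generic sketch, not how MogileFS actually lays out its storage) is to fan files out into a couple of levels of subdirectories keyed on a hash of the name, so no single directory ever holds millions of entries:

    import hashlib
    import os

    def sharded_path(root, filename, levels=2, width=2):
        """Map a filename to root/ab/cd/filename using its hash prefix."""
        digest = hashlib.sha1(filename.encode("utf-8")).hexdigest()
        parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
        return os.path.join(root, *parts, filename)

    path = sharded_path("/var/store", "photo_123456.jpg")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    # e.g. /var/store/5f/3a/photo_123456.jpg (the prefix shown is illustrative)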